Merge branch 'main' into filesFuseMain2

This commit is contained in:
Gauri Prasad 2022-08-09 12:52:31 -07:00
Parents 1dd942ddfa 26030f2754
Commit b2b30f53ec
122 changed files with 5048 additions and 1845 deletions

2
.gitignore vendored
View File

@@ -18,3 +18,5 @@ build/
tools/
test/manual_scripts/create1000.go
test/manual_scripts/cachetest.go
lint.log
azure-storage-fuse

View File

@@ -1,3 +1,18 @@
## 2.0.0-preview.3 (WIP)
**Features**
- Added support for directory level SAS while mounting a subdirectory
- Added support for displaying mount space utilization based on file cache consumption (for example when doing `df`)
**Bug Fixes**
- Fixed a bug in parsing output of disk utilization summary
- Fixed a bug in parsing a SAS token that does not have '?' as the first character
- Fixed a bug in the append file flow resolving data corruption
- Fixed a bug in MSI auth to send the correct resource string
- Fixed a bug in OAuth token parsing when expires_on denotes the number of seconds
- Fixed a bug in the rmdir flow: do not allow directory deletion if the local cache says it's empty, as the container might still have files
- Fixed a bug in background mode where auth validation would be run twice
- Fixed a bug in content type parsing for a 7z compressed file
## 2.0.0-preview.2 (2022-05-31)
**Performance Improvements**
- fio: Outperforms blobfuse by 10% in sequential reads

203
NOTICE
View File

@@ -19523,4 +19523,207 @@ third-party archives.
limitations under the License.
****************************************************************************
============================================================================
>>> github.com/gapra-msft/cobra
==============================================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
****************************************************************************
============================================================================
>>> github.com/golang-jwt/jwt/v4
==============================================================================
Copyright (c) 2012 Dave Grijalva
Copyright (c) 2021 golang-jwt maintainers
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------- END OF THIRD PARTY NOTICE --------------------------------

View File

@@ -78,7 +78,7 @@ To learn about a specific command, just include the name of the command (For exa
- List all mount instances of blobfuse2
* blobfuse2 mount list
- Unmount blobfuse2
* sudo fusermount -u <mount path>
* sudo fusermount3 -u <mount path>
- Unmount all blobfuse2 instances
* blobfuse2 unmount all

214
TSG.md Normal file
View File

@@ -0,0 +1,214 @@
# Common Mount Problems
**Logging**
Please ensure logging is turned on in DEBUG mode when trying to reproduce an issue.
This can help in many instances to understand what the underlying issue is.
A useful setting to utilize in your configuration file when debugging is `sdk-trace: true` under the azstorage component. This will log all outgoing REST calls.
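For reference, a minimal config sketch with both settings enabled (a sketch only; the `file-path` value is an assumption, so adjust it and the rest of the config to your setup):
```yaml
logging:
  level: log_debug                  # DEBUG mode, as recommended above
  file-path: /var/log/blobfuse2.log # assumption: pick your own log location
azstorage:
  sdk-trace: true                   # logs all outgoing REST calls
```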
**1. Error: fusermount: failed to open /etc/fuse.conf: Permission denied**
Only users that are part of the fuse group and the root user can run the fusermount command. To mitigate this, add your user to the fuse group:
```sudo addgroup <user> fuse```
**2. failed to mount : failed to authenticate credentials for azstorage. errno = 1**
There might be something wrong with the storage config; please double check the storage account name, account key, and container/filesystem name.
Possible causes are:
- Invalid account name or access key
- Non-existing container (the container must be created prior to the Blobfuse2 mount)
- Windows line endings (CRLF) - fix them by running dos2unix
- Use of HTTP while 'Secure Transfer (HTTPS)' is enabled on the Storage account
- A VNET security rule that blocks the VM from connecting to the Storage account; ensure you can connect to your Storage account using AzCopy or the Azure CLI
- DNS issues/timeouts - add the Storage account resolution to /etc/hosts to bypass the DNS lookup, as sketched below
- If using a proxy endpoint, ensure that you use the correct transfer protocol, HTTP vs HTTPS
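For the DNS bypass mentioned above, a hedged sketch of adding the /etc/hosts entry (`<account>` and `<storage-ip>` are placeholders for your environment):
```sh
# Check what the account currently resolves to.
nslookup <account>.blob.core.windows.net
# Pin the account to a known-good IP to bypass the DNS lookup.
echo "<storage-ip> <account>.blob.core.windows.net" | sudo tee -a /etc/hosts
```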
**3. For MSI or SPN auth, Http Status Code = 403 in the response. Authorization error**
- Verify your storage account access roles. Make sure the MSI or SPN identity has both the Contributor and Storage Blob Data Contributor roles (a sketch of the role assignment follows below).
- In the case of a private AAD endpoint (private MSI endpoints), ensure that your env variables are configured correctly.
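As a sketch of that role assignment via the Azure CLI (placeholders throughout; verify the exact scope and identity for your setup):
```sh
# Grant the MSI/SPN identity data-plane access on the storage account.
az role assignment create \
  --assignee <principal-or-app-id> \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
```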
**4. fusermount: mount failed: Operation not permitted (CentOS)**
fusermount is a privileged operation on CentOS by default. You may work around this by changing the permissions of the fusermount binary:
chown root /usr/bin/fusermount
chmod u+s /usr/bin/fusermount
**5. Cannot access mounted directory**
FUSE allows mounting a filesystem in user space, and the mount is accessible only by the user who mounted it. For instance, if you mounted as root but are trying to access the mount as another user, you will fail to do so. To work around this, you can use the non-secure fuse option '--allow-other':
sudo blobfuse2 mount /home/myuser/mount_dir/ --config-file=config.yaml --allow-other
**6. fusermount: command not found**
You tried to unmount the blob storage, but the recommended command is not found. Whilst `umount` may work instead, fusermount is the recommended method, so install the fuse package, for example on Ubuntu 20+:
sudo apt install fuse3
Please note that the fuse version (2 or 3) depends on the Linux distribution you are using. Refer to the fuse version for your distro.
**7. Hangs while mounting to private link storage account**
The Blobfuse2 config file should specify the accountName as the original Storage account name, not the private link storage account name. For example, myblobstorageaccount.blob.core.windows.net is correct, while privatelink.myblobstorageaccount.blob.core.windows.net is wrong.
If the config file is correct, please verify name resolution:
`dig +short myblobstorageaccount.blob.core.windows.net` should return a private IP, for example 10.0.0.5.
If for some reason the translation/name resolution fails, confirm the VNet settings to ensure that DNS translation requests are forwarded to the Azure-provided DNS at 168.63.129.16. If the Blobfuse2 hosting VM is set up to forward to a custom DNS server, the custom DNS settings should be verified; the custom DNS server should forward DNS requests to the Azure-provided DNS at 168.63.129.16.
Here are a few steps to resolve DNS issues when integrating a private endpoint with Azure Private DNS:
1. Validate that the private endpoint has a proper DNS record in the Private DNS zone. If the private endpoint was deleted and recreated, a new IP or duplicate records may exist, which causes clients to round-robin and makes connectivity unstable.
2. Validate that the DNS settings of the Azure VM point to the correct DNS servers.
   a) DNS settings can be defined at the VNET level and at the NIC level.
   b) DNS settings cannot be set inside the guest OS VM NIC.
3. If a custom DNS server is defined, check whether it forwards all requests to 168.63.129.16:
   - Yes: you should be able to consume Azure Private DNS zones correctly.
   - No: you may need to create a conditional forwarder, either to the privatelink zone or to the original PaaS service zone (see check 4).
4. Check how the custom DNS server is configured:
   a) DNS has root hints only: in this case it is best to have a forwarder configured to 168.63.129.16, which improves performance and does not require any extra conditional forwarding setting.
   b) DNS forwards to another DNS server (not the Azure-provided DNS): in this case you need to create a conditional forwarder to the original PaaS domain zone (i.e. for Storage, configure a blob.core.windows.net conditional forwarder to 168.63.129.16). Keep in mind that with this approach all DNS requests to storage accounts, with or without a private endpoint, will be resolved by the Azure-provided DNS. Having multiple custom DNS servers in Azure helps provide better high availability for requests coming from on-prem.
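A quick sketch of these resolution checks, run from the Blobfuse2 hosting VM:
```sh
# Should print a private IP (e.g. 10.0.0.5) when the private endpoint is healthy.
dig +short myblobstorageaccount.blob.core.windows.net
# Compare against the Azure-provided DNS directly.
dig +short @168.63.129.16 myblobstorageaccount.blob.core.windows.net
```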
**8. Blobfuse2 killed by OOM**
The "OOM Killer" or "Out of Memory Killer" is a process that the Linux kernel employs when the system is critically low on memory. Based on its algorithm it kills one or more process to free up some memory space. Blobfuse2 could be one such process. To investigate Blobfuse2 was killed by OOM or not run following command:
``` dmesg -T | egrep -i 'killed process'```
If Blobfuse2 pid is listed in the output then OOM has sent a SIGKILL to Blobfuse2. If Blobfuse2 was not running as a service it will not restart automatically and user has to manually mount again. If this keeps happening then user need to monitor the system and investigate why system is getting low on memory. VM might need an upgrade here if the such high usage is expected.
# Common Problems after a Successful Mount
**1. Errno 24: Failed to open file /mnt/tmp/root/filex in file cache. errno = 24 OR Too many files open error**
Errno 24 in Linux corresponds to the 'Too many files open' error, which can occur when an application opens more files than it is allowed on the system. Blobfuse2 typically allows 20 files fewer than the ulimit value set in Linux. Usually the Linux limit is 1024 per process (e.g. Blobfuse2 in this case will allow 1004 open file descriptors at a time). The recommended approach is to edit /etc/security/limits.conf in Ubuntu and add these two lines:
* soft nofile 16384
* hard nofile 16384
16384 here refers to the number of allowed open files.
You must reboot after editing this file for Blobfuse2 to pick up the new limits. You may also increase the limit via the command `ulimit -n 16384`; however, this does not appear to work in Ubuntu.
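To confirm which limit the running process actually received, a small sketch:
```sh
# Limit of the current shell.
ulimit -n
# Open-file limit applied to the running blobfuse2 process.
cat /proc/$(pidof blobfuse2)/limits | grep "open files"
```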
**2. Input/output error**
If you mounted a Blob container successfully but failed to create a directory or upload a file, it may be that you mounted a Blob container from a Premium (Page) Blob account, which does not support block blobs. Blobfuse2 uses block blobs as files and hence requires an account that supports block blobs.
`mkdir: cannot create directory 'directoryname': Input/output error`
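One hedged way to check the account kind before mounting, assuming the Azure CLI is available (block blobs are supported by kinds such as StorageV2 and BlockBlobStorage):
```sh
# Print the storage account kind; Premium page blob accounts will not work.
az storage account show --name <account> --resource-group <rg> --query kind --output tsv
```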
**3. Unexplainably high Storage Account list usage. Costs $$**
The most likely reason is scanning triggered automatically via updatedb by the built-in mlocate service that is deployed with Linux VMs. "mlocate" is a built-in service that acts as a search tool. It is added under /etc/cron.daily to run on a daily basis, and it triggers the "updatedb" service to scan every directory on the server and rebuild the file index database so that search results stay up to date.
Solution: Run 'ls -l /etc/cron.daily/mlocate' at the shell prompt. If "mlocate" is present in /etc/cron.daily, then Blobfuse2 must be whitelisted so that the Blobfuse2 mount directory is not scanned by updatedb. This is done by updating the updatedb.conf file:
cat /etc/updatedb.conf
It should look like this.
PRUNE_BIND_MOUNTS="yes"
PRUNENAMES=".git .bzr .hg .svn"
PRUNEPATHS="/tmp /var/spool /media /var/lib/os-prober /var/lib/ceph /home/.ecryptfs /var/lib/schroot"
PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs devtmpfs fuse.mfs shfs sysfs cifs lustre tmpfs usbfs udf fuse.glusterfs fuse.sshfs curlftpfs ceph fuse.ceph fuse.rozofs ecryptfs fusesmb"
1) Add the Blobfuse2 mount path, e.g. /mnt, to PRUNEPATHS, or
2) Add "Blobfuse2" and "fuse" to PRUNEFS.
It won't harm to do both.
Below are the steps to automate this at pod creation:
1. Create a new configmap in the cluster which contains the new configuration.
2. Create a DaemonSet with the new configmap which applies the configuration change to every node in the cluster.
```
Example:

configmap file: (testcm.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: testcm
data:
  updatedb.conf: |
    PRUNE_BIND_MOUNTS="yes"
    PRUNEPATHS="/tmp /var/spool /media /var/lib/os-prober /var/lib/ceph /home/.ecryptfs /var/lib/schroot /mnt /var/lib/kubelet"
    PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs devtmpfs fuse.mfs shfs sysfs cifs lustre tmpfs usbfs udf fuse.glusterfs fuse.sshfs curlftpfs ceph fuse.ceph fuse.rozofs ecryptfs fusesmb fuse Blobfuse2"

DaemonSet file: (testcmds.yaml)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: testcmds
  labels:
    test: testcmds
spec:
  selector:
    matchLabels:
      name: testcmds
  template:
    metadata:
      labels:
        name: testcmds
    spec:
      tolerations:
      - key: "kubernetes.azure.com/scalesetpriority"
        operator: "Equal"
        value: "spot"
        effect: "NoSchedule"
      containers:
      - name: mypod
        image: debian
        volumeMounts:
        - name: updatedbconf
          mountPath: "/tmp"
        - name: source
          mountPath: "/etc"
        command: ["/bin/bash","-c","cp /tmp/updatedb.conf /etc/updatedb.conf; while true; do sleep 30; done;"]
      restartPolicy: Always
      volumes:
      - name: updatedbconf
        configMap:
          name: testcm
          items:
          - key: "updatedb.conf"
            path: "updatedb.conf"
      - name: source
        hostPath:
          path: /etc
          type: Directory
```
**4. File contents are not in sync with storage**
Please refer to the file cache component setting `timeout-sec`.
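A minimal sketch of that setting, assuming the standard `file_cache` component layout (the path is a placeholder):
```yaml
file_cache:
  path: /mnt/blobfuse2tmp   # assumption: your file cache temp path
  timeout-sec: 30           # seconds a cached file is retained before it is dropped and re-fetched
```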
**5. failed to unmount /path/<mount dir>**
Unmount fails when a file is open, or when a user or process is cd'd into the mount directory or its subdirectories. Please ensure no files are in use and try the unmount command again; even `umount -f` will not work while the mounted files/directories are in use.
`umount -l` does a lazy unmount, meaning the filesystem is unmounted automatically once the mounted files are no longer in use.
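Before retrying the unmount, a sketch for finding what keeps the mount busy:
```sh
# List processes holding files (or a working directory) under the mount.
sudo fuser -vm /path/<mount dir>
# Retry once nothing is using it.
sudo fusermount3 -u /path/<mount dir>
```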
**6. Blobfuse2 mounts but is not functioning at all**
https://github.com/Azure/azure-storage-fuse/issues/803
There are cases where anti-malware / anti-virus software blocks the fuse functionality; in such cases, even though the mount command succeeds and the Blobfuse2 binary is running, the fuse functionality will not work. One way to identify that you are hitting this issue is to turn on the debug logs and mount Blobfuse2: if you do not see any logs coming from Blobfuse2, you have potentially run into this issue. Stop the anti-virus software and try again.
In such cases we have seen that mounting through /etc/fstab works, because that executes the mount command before the anti-malware software kicks in.
**7. File cache temp directory not empty**
To ensure that you don't have leftover files in your file cache temp directory, unmount rather than killing Blobfuse2. If Blobfuse2 is killed without unmounting, you can also set `cleanup-on-start` in your config file on the next mount to clear the temp directory.
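A sketch of that option, in the same `file_cache` component as above (path is a placeholder):
```yaml
file_cache:
  path: /mnt/blobfuse2tmp   # assumption: your file cache temp path
  cleanup-on-start: true    # clear leftover temp files when blobfuse2 starts
```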
# Problems with build
Make sure you have correctly set up your Go dev environment, and ensure you have installed fuse3/2, for example:
sudo apt-get install fuse3 libfuse3-dev -y
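The repo's pipelines then build with flags along these lines (the `fuse3` tag here is an assumption for fuse3-based distros; use `fuse2` for libfuse2-based ones):
```sh
# Build the blobfuse2 binary with the fuse3 tag, as the CI does.
go build -tags fuse3 -o blobfuse2
```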

View File

@@ -9,6 +9,8 @@ parameters:
- name: tags
type: string
default: "null"
- name: container
type: string
steps:
# Installing Go tool
@@ -30,7 +32,7 @@ steps:
- task: Go@0
inputs:
command: 'build'
arguments: "-tags ${{ parameters.tags }}"
arguments: "-tags ${{ parameters.tags }} -o blobfuse2"
workingDirectory: ${{ parameters.work_dir }}
displayName: 'Building Blobfuse2'
@@ -44,6 +46,33 @@ steps:
# Run Unit tests if parameters is true
- ${{ if eq(parameters.unit_test, true) }}:
- script: |
cnfFile=$HOME/azuretest.json
echo $cnfFile
touch $cnfFile
echo "{" > $cnfFile
echo "\"block-acct\"": "\"$(AZTEST_BLOCK_ACC_NAME)\"", >> $cnfFile
echo "\"adls-acct\"": "\"$(AZTEST_ADLS_ACC_NAME)\"", >> $cnfFile
echo "\"block-cont\"": "\"${{ parameters.container }}\"", >> $cnfFile
echo "\"adls-cont\"": "\"${{ parameters.container }}\"", >> $cnfFile
echo "\"block-key\"": "\"$(AZTEST_BLOCK_KEY)\"", >> $cnfFile
echo "\"adls-key\"": "\"$(AZTEST_ADLS_KEY)\"", >> $cnfFile
echo "\"block-sas\"": "\"$(AZTEST_BLOCK_SAS)\"", >> $cnfFile
echo "\"adls-sas\"": "\"$(AZTEST_ADLS_SAS)\"", >> $cnfFile
echo "\"msi-appid\"": "\"$(AZTEST_APP_ID)\"", >> $cnfFile
echo "\"msi-resid\"": "\"$(AZTEST_RES_ID)\"", >> $cnfFile
echo "\"spn-client\"": "\"$(AZTEST_CLIENT)\"", >> $cnfFile
echo "\"spn-tenant\"": "\"$(AZTEST_TENANT)\"", >> $cnfFile
echo "\"spn-secret\"": "\"$(AZTEST_SECRET)\"", >> $cnfFile
echo "\"skip-msi\"": "true", >> $cnfFile
echo "\"proxy-address\"": "\"\"" >> $cnfFile
echo "}" >> $cnfFile
cat $cnfFile
displayName: "Create AzureTest Config"
continueOnError: false
workingDirectory: ${{ parameters.work_dir }}
- task: Go@0
inputs:
command: 'test'

View File

@@ -28,7 +28,7 @@ parameters:
default: "null"
- name: fuselib
type: string
default: "libfuse3-dev"
default: "fuse3 libfuse3-dev"
steps:
# Package manager installs for libfuse
@@ -65,7 +65,7 @@ steps:
- task: Go@0
inputs:
command: 'build'
arguments: "-tags ${{ parameters.tags }}"
arguments: "-tags ${{ parameters.tags }} -o blobfuse2"
workingDirectory: ${{ parameters.working_directory }}
displayName: "Go Build"
@@ -80,6 +80,7 @@ steps:
# Creating necessary directories
- script: |
sudo fusermount -u ${mount_dir}
sudo fusermount3 -u ${mount_dir}
rm -rf ${mount_dir}
mkdir -p ${mount_dir}
echo "Creating mount dir " ${mount_dir}
@@ -116,6 +117,8 @@ steps:
echo "\"adls-key\"": "\"$(AZTEST_ADLS_KEY)\"", >> $cnfFile
echo "\"file-key\"": "\"$(AZTEST_FILE_KEY)\"", >> $cnfFile
echo "\"block-sas\"": "\"$(AZTEST_BLOCK_SAS)\"", >> $cnfFile
echo "\"block-cont-sas-ubn-18\"": "\"$(AZTEST_BLOCK_CONT_SAS_UBN_18)\"", >> $cnfFile
echo "\"block-cont-sas-ubn-20\"": "\"$(AZTEST_BLOCK_CONT_SAS_UBN_20)\"", >> $cnfFile
echo "\"adls-sas\"": "\"$(AZTEST_ADLS_SAS)\"", >> $cnfFile
echo "\"file-sas\"": "\"$(AZTEST_FILE_SAS)\"", >> $cnfFile
echo "\"msi-appid\"": "\"$(AZTEST_APP_ID)\"", >> $cnfFile

View File

@@ -9,6 +9,7 @@ parameters:
steps:
- script: |
sudo fusermount -u ${mount_dir}
sudo fusermount3 -u ${mount_dir}
sudo kill -9 `pidof blobfuse2` || true
rm -rf ${mount_dir}/*
rm -rf ${temp_dir}/*

View File

@@ -30,7 +30,7 @@ parameters:
default: "null"
- name: fuselib
type: string
default: "libfuse3-dev"
default: "fuse3 libfuse3-dev"
- name: quick_test
type: boolean
default: "true"
@@ -190,13 +190,3 @@ steps:
continueOnError: true
condition: always()
# Cleanup Agent dir
- script: |
sudo rm -rf ${root_dir}
pwd
cd /`pwd | cut -d '/' -f 2,3,4,5`
sudo rm -rf [0-9]
displayName: 'Clean Agent Directories'
env:
root_dir: ${{ parameters.root_dir }}
condition: always()

View File

@@ -13,6 +13,9 @@ parameters:
type: step
- name: adls
type: boolean
- name: sas
type: boolean
default: false
- name: clone
type: boolean
default: "false"
@@ -47,7 +50,7 @@ steps:
- task: Go@0
inputs:
command: 'test'
arguments: '-v -timeout=2h ./... -args -mnt-path=${{ parameters.mount_dir }} -adls=${{parameters.adls}} -clone=${{parameters.clone}} -tmp-path=${{parameters.temp_dir}} -quick-test=${{parameters.quick_test}} -distro-name="${{parameters.distro_name}}"'
arguments: '-v -timeout=2h ./... -args -mnt-path=${{ parameters.mount_dir }} -adls=${{parameters.adls}} -sas=${{parameters.sas}} -clone=${{parameters.clone}} -tmp-path=${{parameters.temp_dir}} -quick-test=${{parameters.quick_test}} -distro-name="${{parameters.distro_name}}"'
workingDirectory: ${{ parameters.working_dir }}/test/e2e_tests
displayName: 'E2E Test: ${{ parameters.idstring }}'
timeoutInMinutes: 120

View File

@@ -15,6 +15,7 @@ parameters:
steps:
- script: |
sudo fusermount -u ${mount_dir}
sudo fusermount3 -u ${mount_dir}
sudo kill -9 `pidof blobfuse2` || true
timeoutInMinutes: 20
env:
@@ -67,6 +68,7 @@ steps:
# Never cleanup here on container otherwise we lose the huge data, just unmount and go
- script: |
sudo fusermount -u ${mount_dir}
sudo fusermount3 -u ${mount_dir}
sudo kill -9 `pidof blobfuse2` || true
timeoutInMinutes: 5
env:

View File

@@ -0,0 +1,139 @@
parameters:
- name: root_dir
type: string
- name: work_dir
type: string
- name: mount_dir
type: string
- name: temp_dir
type: string
- name: container
type: string
steps:
- script: |
blobfuse2 version
displayName: 'Check Version'
- script: |
blobfuse2 --help
displayName: 'Check Help'
- script: |
sudo rm -rf ${{ parameters.mount_dir }}
sudo rm -rf ${{ parameters.temp_dir }}
mkdir -p ${{ parameters.mount_dir }}
mkdir -p ${{ parameters.temp_dir }}
displayName: 'Prepare Blobfuse Directories'
- script: |
blobfuse2 gen-test-config --config-file=${{ parameters.root_dir }}/azure-storage-fuse/testdata/config/azure_key.yaml --container-name=${{ parameters.container }} --temp-path=${{ parameters.temp_dir }} --output-file=${{ parameters.root_dir }}/block_blob_config.yaml
displayName: 'Create Blob Config File'
env:
NIGHTLY_STO_ACC_NAME: $(NIGHTLY_STO_BLOB_ACC_NAME)
NIGHTLY_STO_ACC_KEY: $(NIGHTLY_STO_BLOB_ACC_KEY)
ACCOUNT_TYPE: 'block'
ACCOUNT_ENDPOINT: 'https://$(NIGHTLY_STO_BLOB_ACC_NAME).blob.core.windows.net'
VERBOSE_LOG: false
continueOnError: false
- script: |
cat block_blob_config.yaml
displayName: 'Print Block Blob Config File'
- script: |
blobfuse2 unmount all
sudo fusermount -u ${{ parameters.mount_dir }}
blobfuse2 mount ${{ parameters.mount_dir }} --config-file=${{ parameters.root_dir }}/block_blob_config.yaml
displayName: 'Mount Block Blob'
# Wait for some time to let the container come up
- script: |
sleep 10s
displayName: 'Waiting for Mount'
- script: |
df
echo "-------------------------------------------------------------------"
df | grep blobfuse2
exit $?
displayName: 'Verify Mount'
- task: Go@0
inputs:
command: 'test'
arguments: '-v -timeout=2h -run Test.i.* -args -mnt-path=${{ parameters.mount_dir }} -adls=false -clone=true -tmp-path=${{ parameters.temp_dir }} -quick-test=false'
workingDirectory: ${{ parameters.work_dir }}/test/e2e_tests
displayName: 'E2E Test: Block Blob'
timeoutInMinutes: 120
continueOnError: false
- script: |
blobfuse2 unmount ${{ parameters.mount_dir }}
displayName: 'Unmount Blob'
- script: |
cat blobfuse2-logs.txt
displayName: 'View Logs'
condition: always()
- script: |
> blobfuse2-logs.txt
displayName: 'Clear Logs'
condition: always()
- script: |
blobfuse2 gen-test-config --config-file=${{ parameters.root_dir }}/azure-storage-fuse/testdata/config/azure_key.yaml --container-name=${{ parameters.container }} --temp-path=${{ parameters.temp_dir }} --output-file=${{ parameters.root_dir }}/adls_config.yaml
displayName: 'Create ADLS Config File'
env:
NIGHTLY_STO_ACC_NAME: $(AZTEST_ADLS_ACC_NAME)
NIGHTLY_STO_ACC_KEY: $(AZTEST_ADLS_KEY)
ACCOUNT_TYPE: 'adls'
ACCOUNT_ENDPOINT: 'https://$(AZTEST_ADLS_ACC_NAME).dfs.core.windows.net'
VERBOSE_LOG: false
continueOnError: false
- script: |
cat ${{ parameters.root_dir }}/adls_config.yaml
displayName: 'Print ADLS Config File'
- script: |
blobfuse2 unmount all
sudo fusermount -u ${{ parameters.mount_dir }}
blobfuse2 mount ${{ parameters.mount_dir }} --config-file=${{ parameters.root_dir }}/adls_config.yaml
displayName: 'Mount ADLS'
# Wait for some time to let the container come up
- script: |
sleep 10s
displayName: 'Waiting for Mount'
- script: |
df
echo "-------------------------------------------------------------------"
df | grep blobfuse2
exit $?
displayName: 'Verify Mount'
- task: Go@0
inputs:
command: 'test'
arguments: '-v -timeout=2h -run Test.i.* -args -mnt-path=${{ parameters.mount_dir }} -adls=true -clone=true -tmp-path=${{ parameters.temp_dir }} -quick-test=false'
workingDirectory: ${{ parameters.work_dir }}/test/e2e_tests
displayName: 'E2E Test: ADLS'
timeoutInMinutes: 120
continueOnError: false
- script: |
blobfuse2 unmount ${{ parameters.mount_dir }}
displayName: 'Unmount ADLS'
- script: |
cat blobfuse2-logs.txt
displayName: 'View Logs'
condition: always()
- script: |
> blobfuse2-logs.txt
displayName: 'Clear Logs'
condition: always()

View File

@@ -191,7 +191,7 @@ steps:
account_endpoint: ${{ parameters.account_endpoint }}
idstring: "${{ parameters.service }} LFU policy"
distro_name: ${{ parameters.distro_name }}
quick_test: ${{ parameters.quick_test }}
quick_test: "true"
verbose_log: ${{ parameters.verbose_log }}
- template: e2e-tests-spcl.yml

View File

@@ -17,11 +17,19 @@ jobs:
containerName: 'test-cnt-ubn-18'
fuselib: 'libfuse-dev'
tags: 'fuse2'
adlsSas: $(AZTEST_ADLS_CONT_SAS_UBN_18)
Ubuntu-20:
imageName: 'ubuntu-20.04'
containerName: 'test-cnt-ubn-20'
fuselib: 'libfuse3-dev'
tags: 'fuse3'
adlsSas: $(AZTEST_ADLS_CONT_SAS_UBN_20)
Ubuntu-22:
imageName: 'ubuntu-22.04'
containerName: 'test-cnt-ubn-22'
fuselib: 'libfuse3-dev'
tags: 'fuse3'
adlsSas: $(AZTEST_ADLS_CONT_SAS_UBN_22)
pool:
vmImage: $(imageName)
@@ -53,7 +61,7 @@ jobs:
inputs:
command: 'build'
workingDirectory: ./
arguments: "-tags $(tags)"
arguments: "-tags $(tags) -o blobfuse2"
displayName: "Build"
- script: |
@@ -71,8 +79,10 @@ jobs:
echo "\"adls-key\"": "\"$(AZTEST_ADLS_KEY)\"", >> $cnfFile
echo "\"file-key\"": "\"$(AZTEST_FILE_KEY)\"", >> $cnfFile
echo "\"block-sas\"": "\"$(AZTEST_BLOCK_SAS)\"", >> $cnfFile
echo "\"adls-sas\"": "\"$(AZTEST_ADLS_SAS)\"", >> $cnfFile
echo "\"adls-sas\"": "\"$(adlsSas)\"", >> $cnfFile
echo "\"file-sas\"": "\"$(AZTEST_FILE_SAS)\"", >> $cnfFile
echo "\"block-cont-sas-ubn-18\"": "\"$(AZTEST_BLOCK_CONT_SAS_UBN_18)\"", >> $cnfFile
echo "\"block-cont-sas-ubn-20\"": "\"$(AZTEST_BLOCK_CONT_SAS_UBN_20)\"", >> $cnfFile
echo "\"msi-appid\"": "\"$(AZTEST_APP_ID)\"", >> $cnfFile
echo "\"msi-resid\"": "\"$(AZTEST_RES_ID)\"", >> $cnfFile
echo "\"spn-client\"": "\"$(AZTEST_CLIENT)\"", >> $cnfFile
@@ -86,6 +96,63 @@ jobs:
continueOnError: false
workingDirectory: ./
# Code lint checks (Static-analysis)
- script: |
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin
$(go env GOPATH)/bin/golangci-lint --version
$(go env GOPATH)/bin/golangci-lint run --tests=false --build-tags $(tags) --skip-dirs test,common/stats_collector,common/stats_monitor --max-issues-per-linter=0 -files component/libfuse/libfuse2_handler_test_wrapper.go > lint.log
result=$(cat lint.log | wc -l)
if [ $result -ne 0 ]; then
echo "-----------------------------------"
echo "Below issues are found in SA"
cat lint.log
echo "-----------------------------------"
exit 1
else
echo "-----------------------------------"
echo "No issues are found in SA"
echo "-----------------------------------"
fi
displayName: 'Static Analysis (Lint)'
condition: always()
workingDirectory: ./
# Copyright checks
- script: |
result=$(grep -L -r --include \*.go "`date +%Y` Microsoft Corporation" ./ | wc -l)
if [ $result -ne 0 ]; then
exit 1
else
echo "Copyright statements are up to date"
fi
displayName: 'Copyright check'
condition: always()
failOnStderr: true
workingDirectory: ./
# Go code formatting checks
- script: |
gofmt -s -l -d . | tee >&2
displayName: 'Go Format Check'
condition: always()
failOnStderr: true
workingDirectory: ./
# Notices files check
- script: |
./notices_fix.sh
result=$(git diff NOTICE | wc -l)
if [ $result -ne 0 ]; then
echo "Notices needs a fix. Run ./notices_fix.sh and commit NOTICE file."
exit 1
else
echo "Notices are up to date."
fi
displayName: 'Notice file check'
condition: always()
failOnStderr: true
workingDirectory: ./
# Running unit tests for fuse3 on ubn-20
- task: Go@0
inputs:

View File

@@ -12,21 +12,24 @@ jobs:
strategy:
matrix:
Ubuntu-18:
AgentName: 'blobfuse-ubuntu18'
imageName: 'ubuntu-18.04'
containerName: 'test-cnt-ubn-18'
fuselib: 'libfuse-dev'
fuselib2: 'fuse'
tags: 'fuse2'
hostedAgent: true
stressParallel: 3
Ubuntu-20:
AgentName: 'blobfuse-ubuntu20'
imageName: 'ubuntu-20.04'
containerName: 'test-cnt-coverage'
containerName: 'test-cnt-ubn-20'
fuselib: 'libfuse3-dev'
fuselib2: 'fuse3'
tags: 'fuse3'
hostedAgent: true
stressParallel: 1
pool:
vmImage: $(imageName)
name: "blobfuse-ubuntu-pool"
demands:
- ImageOverride -equals $(AgentName)
variables:
- group: NightlyBlobFuse
@@ -63,10 +66,18 @@ jobs:
workingDirectory: $(WORK_DIR)
- script: |
sudo apt-get update --fix-missing
sudo apt-get install $(fuselib) -y
sudo apt-get update --fix-missing -o Dpkg::Options::="--force-confnew"
sudo apt-get install make cmake gcc g++ parallel $(fuselib) $(fuselib2) -y -o Dpkg::Options::="--force-confnew"
displayName: 'Install libfuse'
# Create directory structure
- script: |
sudo mkdir -p $(ROOT_DIR)
sudo chown -R `whoami` $(ROOT_DIR)
chmod 777 $(ROOT_DIR)
displayName: 'Create Directory Structure'
# -------------------------------------------------------
# Pull and build the code
- template: 'azure-pipeline-templates/build.yml'
@@ -79,6 +90,7 @@ jobs:
container: $(containerName)
tags: $(tags)
fuselib: $(fuselib)
skip_msi: "false"
# -------------------------------------------------------
# UT based code coverage test
@@ -98,7 +110,7 @@ jobs:
# Config Generation (Block Blob)
- script: |
cd $(WORK_DIR)
$(WORK_DIR)/blobfuse2 gen-test-config --config-file=azure_key.yaml --container-name=$(containerName) --temp-path=$(TEMP_DIR) --output-file=$(BLOBFUSE2_CFG)
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/blobfuse2_gentest1.cov gen-test-config --config-file=azure_key.yaml --container-name=$(containerName) --temp-path=$(TEMP_DIR) --output-file=$(BLOBFUSE2_CFG)
env:
NIGHTLY_STO_ACC_NAME: $(NIGHTLY_STO_BLOB_ACC_NAME)
NIGHTLY_STO_ACC_KEY: $(NIGHTLY_STO_BLOB_ACC_KEY)
@@ -113,11 +125,13 @@ jobs:
- script: |
rm -rf $(MOUNT_DIR)/*
rm -rf $(TEMP_DIR)/*
./blobfuse2.test -test.v -test.coverprofile=blobfuse2_block.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --foreground=true &
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/blobfuse2_block.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --foreground=true &
sleep 10
ps -aux | grep blobfuse2
rm -rf $(MOUNT_DIR)/*
go test -v -timeout=7200s test/e2e_tests -args -mnt-path=$(MOUNT_DIR) -tmp-path=$(TEMP_DIR)
cd test/e2e_tests
go test -v -timeout=7200s ./... -args -mnt-path=$(MOUNT_DIR) -tmp-path=$(TEMP_DIR)
cd -
sudo fusermount -u $(MOUNT_DIR)
sleep 5
workingDirectory: $(WORK_DIR)
@@ -139,11 +153,13 @@ jobs:
- script: |
rm -rf $(MOUNT_DIR)/*
rm -rf $(TEMP_DIR)/*
./blobfuse2.test -test.v -test.coverprofile=blobfuse2_adls.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_ADLS_CFG) --foreground=true &
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/blobfuse2_adls.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_ADLS_CFG) --foreground=true &
sleep 10
ps -aux | grep blobfuse2
rm -rf $(MOUNT_DIR)/*
go test -v -timeout=7200s test/e2e_tests -args -mnt-path=$(MOUNT_DIR) -adls=true -tmp-path=$(TEMP_DIR)
cd test/e2e_tests
go test -v -timeout=7200s ./... -args -mnt-path=$(MOUNT_DIR) -adls=true -tmp-path=$(TEMP_DIR)
cd -
./blobfuse2 unmount all
sleep 5
workingDirectory: $(WORK_DIR)
@@ -154,7 +170,7 @@ jobs:
# Config Generation (Block Blob - LFU policy)
- script: |
cd $(WORK_DIR)
$(WORK_DIR)/blobfuse2 gen-test-config --config-file=azure_key_lfu.yaml --container-name=$(containerName) --temp-path=$(TEMP_DIR) --output-file=$(BLOBFUSE2_CFG)
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/blobfuse2_gentest2.cov gen-test-config --config-file=azure_key_lfu.yaml --container-name=$(containerName) --temp-path=$(TEMP_DIR) --output-file=$(BLOBFUSE2_CFG)
env:
NIGHTLY_STO_ACC_NAME: $(NIGHTLY_STO_BLOB_ACC_NAME)
NIGHTLY_STO_ACC_KEY: $(NIGHTLY_STO_BLOB_ACC_KEY)
@@ -163,16 +179,19 @@ jobs:
VERBOSE_LOG: false
displayName: 'Create Config File - LFU'
continueOnError: false
workingDirectory: $(WORK_DIR)
# Code Coverage with e2e-tests for block blob with lfu policy
- script: |
rm -rf $(MOUNT_DIR)/*
rm -rf $(TEMP_DIR)/*
./blobfuse2.test -test.v -test.coverprofile=blobfuse2_block_lfu.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --foreground=true &
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/blobfuse2_block_lfu.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --foreground=true &
sleep 10
ps -aux | grep blobfuse2
rm -rf $(MOUNT_DIR)/*
go test -v -timeout=7200s test/e2e_tests -args -mnt-path=$(MOUNT_DIR) -tmp-path=$(TEMP_DIR)
cd test/e2e_tests
go test -v -timeout=7200s ./... -args -mnt-path=$(MOUNT_DIR) -tmp-path=$(TEMP_DIR)
cd -
./blobfuse2 unmount $(MOUNT_DIR)
sleep 5
workingDirectory: $(WORK_DIR)
@@ -183,7 +202,7 @@ jobs:
# Config Generation (Block Blob - Stream)
- script: |
cd $(WORK_DIR)
$(WORK_DIR)/blobfuse2 gen-test-config --config-file=azure_stream.yaml --container-name=$(containerName) --temp-path=$(TEMP_DIR) --output-file=$(BLOBFUSE2_STREAM_CFG)
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/blobfuse2_gentest3.cov gen-test-config --config-file=azure_stream.yaml --container-name=$(containerName) --temp-path=$(TEMP_DIR) --output-file=$(BLOBFUSE2_STREAM_CFG)
displayName: 'Create Config File - Stream'
env:
NIGHTLY_STO_ACC_NAME: $(NIGHTLY_STO_BLOB_ACC_NAME)
@@ -192,12 +211,13 @@ jobs:
ACCOUNT_ENDPOINT: 'https://$(NIGHTLY_STO_BLOB_ACC_NAME).blob.core.windows.net'
VERBOSE_LOG: false
continueOnError: false
workingDirectory: $(WORK_DIR)
# Streaming test preparation
- script: |
rm -rf $(MOUNT_DIR)/*
rm -rf $(TEMP_DIR)/*
./blobfuse2.test -test.v -test.coverprofile=blobfuse2_stream_prep.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --foreground=true &
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/blobfuse2_stream_prep.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --foreground=true &
sleep 10
ps -aux | grep blobfuse2
for i in {10,50,100,200,500,1024}; do echo $i; done | parallel --will-cite -j 5 'head -c {}M < /dev/urandom > $(WORK_DIR)/myfile_{}'
@@ -212,7 +232,7 @@ jobs:
- script: |
rm -rf $(MOUNT_DIR)/*
rm -rf $(TEMP_DIR)/*
./blobfuse2.test -test.v -test.coverprofile=blobfuse2_stream.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_STREAM_CFG) --foreground=true &
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/blobfuse2_stream.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_STREAM_CFG) --foreground=true &
sleep 10
ps -aux | grep blobfuse2
./blobfuse2 mount list
@@ -238,7 +258,7 @@ jobs:
# Component generation code coverage
- script: |
./blobfuse2.test -test.v -test.coverprofile=generate_cmd.cov generate test_component
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/generate_cmd.cov generate test_component
if [ $? -ne 0 ]; then
exit 1
fi
@@ -250,7 +270,7 @@ jobs:
rm -rf $(MOUNT_DIR)/*
rm -rf $(TEMP_DIR)/*
./blobfuse2.test -test.v -test.coverprofile=list_empty_cmd.cov mount list
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/list_empty_cmd.cov mount list
if [ $? -ne 0 ]; then
exit 1
fi
@@ -258,14 +278,14 @@ jobs:
displayName: "CLI : Mount List"
- script: |
./blobfuse2.test -test.v -test.coverprofile=mount_cmd.cov mount all $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/mount_cmd.cov mount all $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug
if [ $? -ne 0 ]; then
exit 1
fi
sleep 20
./blobfuse2.test -test.v -test.coverprofile=list_cmd_all.cov mount list
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/list_cmd_all.cov mount list
if [ $? -ne 0 ]; then
exit 1
fi
@@ -274,35 +294,35 @@ jobs:
displayName: "CLI : Mount all and List"
- script: |
./blobfuse2.test -test.v -test.coverprofile=mount_cmd_all.cov mount all $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/mount_cmd_all.cov mount all $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug
if [ $? -ne 0 ]; then
exit 1
fi
sleep 20
./blobfuse2.test -test.v -test.coverprofile=umnt_cmd_cont.cov unmount $(MOUNT_DIR)/$(containerName)
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/umnt_cmd_cont.cov unmount $(MOUNT_DIR)/$(containerName)
if [ $? -ne 0 ]; then
exit 1
fi
./blobfuse2.test -test.v -test.coverprofile=umnt_wild_cmd.cov unmount testmut*
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/umnt_wild_cmd.cov unmount testmut*
if [ $? -ne 0 ]; then
exit 1
fi
./blobfuse2.test -test.v -test.coverprofile=umnt_negative_cmd.cov unmount abcdef
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/umnt_negative_cmd.cov unmount abcdef
if [ $? -ne 0 ]; then
exit 1
fi
for i in {1..5}; do ./blobfuse2.test -test.v -test.coverprofile=umnt_all_cmd.cov unmount all; done
for i in {1..5}; do ./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/umnt_all_cmd.cov unmount all; done
workingDirectory: $(WORK_DIR)
displayName: "CLI : Unmount options"
# Mount / Unmount Negative tests
- script: |
./blobfuse2.test -test.v -test.coverprofile=mount_neg.cov mount all /abc --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/mount_neg.cov mount all /abc --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug
if [ $? -eq 0 ]; then
exit 1
fi
@@ -312,14 +332,14 @@ jobs:
- script: |
./blobfuse2 unmount all
./blobfuse2.test -test.v -test.coverprofile=mount_foreg.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug --foreground=true &
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/mount_foreg.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug --foreground=true &
if [ $? -ne 0 ]; then
exit 1
fi
sleep 5
./blobfuse2.test -test.v -test.coverprofile=mount_remount.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/mount_remount.cov mount $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug
if [ $? -eq 0 ]; then
exit 1
fi
@@ -330,13 +350,33 @@ jobs:
displayName: "CLI : Remount test"
timeoutInMinutes: 2
# Doc generation tests
- script: |
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/doc1.cov doc
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/doc2.cov doc --output-location /notexists
touch ~/a.txt
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/doc2.cov doc --output-location ~/a.txt
rm -rf ~/a.txt
workingDirectory: $(WORK_DIR)
displayName: "CLI : doc generation"
timeoutInMinutes: 2
# Version check
- script: |
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/version1.cov --version
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/version2.cov version
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/version2.cov version --check
workingDirectory: $(WORK_DIR)
displayName: "CLI : doc generation"
timeoutInMinutes: 2
# Simulate config change
- script: |
rm -rf $(MOUNT_DIR)/*
rm -rf $(TEMP_DIR)/*
./blobfuse2 unmount all
./blobfuse2.test -test.v -test.coverprofile=mount_foreg_2.cov mount all $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug --foreground=true &
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/mount_foreg_2.cov mount all $(MOUNT_DIR) --config-file=$(BLOBFUSE2_CFG) --log-level=log_debug --foreground=true &
if [ $? -ne 0 ]; then
exit 1
fi
@@ -350,12 +390,43 @@ jobs:
workingDirectory: $(WORK_DIR)
displayName: "CLI : Config change simulator"
# Secure Config, fine to use insecure passphrase as this is just for testing
- script: |
rm -rf $(MOUNT_DIR)/*
rm -rf $(TEMP_DIR)/*
./blobfuse2 unmount all
./blobfuse2 gen-test-config --config-file=azure_key.yaml --container-name=$(containerName) --temp-path=$(TEMP_DIR) --output-file=$(BLOBFUSE2_CFG)
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/secure_encrypt.cov secure encrypt --config-file=$(BLOBFUSE2_CFG) --output-file=$(Pipeline.Workspace)/blobfuse2.azsec --passphrase=123123123123123123123123
if [ $? -ne 0 ]; then
exit 1
fi
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/mount_secure.cov mount $(MOUNT_DIR) --config-file=$(Pipeline.Workspace)/blobfuse2.azsec --passphrase=123123123123123123123123 &
sleep 10
ps -aux | grep blobfuse2
rm -rf $(MOUNT_DIR)/*
cd test/e2e_tests
go test -v -timeout=7200s ./... -args -mnt-path=$(MOUNT_DIR) -adls=false -tmp-path=$(TEMP_DIR)
cd -
./blobfuse2.test -test.v -test.coverprofile=$(WORK_DIR)/secure_set.cov secure set --config-file=$(Pipeline.Workspace)/blobfuse2.azsec --passphrase=123123123123123123123123 --key=logging.level --value=log_debug
./blobfuse2 unmount all
sleep 5
workingDirectory: $(WORK_DIR)
displayName: "CLI : Secure Config"
env:
NIGHTLY_STO_ACC_NAME: $(NIGHTLY_STO_BLOB_ACC_NAME)
NIGHTLY_STO_ACC_KEY: $(NIGHTLY_STO_BLOB_ACC_KEY)
ACCOUNT_TYPE: 'block'
ACCOUNT_ENDPOINT: 'https://$(NIGHTLY_STO_BLOB_ACC_NAME).blob.core.windows.net'
VERBOSE_LOG: false
# -------------------------------------------------------
# Coverage report consolidation
- script: |
echo 'mode: count' > ./blobfuse2_coverage_raw.rpt
tail -q -n +2 ./*.cov >> ./blobfuse2_coverage_raw.rpt
cat ./blobfuse2_coverage_raw.rpt | grep -v mock_component | grep -v base_component | grep -v loopback | grep -v "common/log" > ./blobfuse2_coverage.rpt
cat ./blobfuse2_coverage_raw.rpt | grep -v mock_component | grep -v base_component | grep -v loopback | grep -v "common/log" | grep -v "common/exectime" > ./blobfuse2_coverage.rpt
go tool cover -func blobfuse2_coverage.rpt > ./blobfuse2_func_cover.rpt
go tool cover -html=./blobfuse2_coverage.rpt -o ./blobfuse2_coverage.html
go tool cover -html=./blobfuse2_ut.cov -o ./blobfuse2_ut.html
@@ -371,4 +442,18 @@ jobs:
artifactName: 'Blobfuse2 Coverage $(tags)'
displayName: 'Publish Artifacts for blobfuse2 code coverage'
condition: succeeded()
# Overall code coverage check
- script: |
chmod 777 ./test/scripts/coveragecheck.sh
./test/scripts/coveragecheck.sh
workingDirectory: $(WORK_DIR)
displayName: "Overall coverage check"
# File level code coverage check
- script: |
./test/scripts/coveragecheck.sh file
workingDirectory: $(WORK_DIR)
displayName: "File level coverage check"
condition: always()

View File

@@ -1,14 +1,8 @@
# Blobfuse2 Nightly Build-Sanity Pipeline
# In case of failure on a Self-Hosted Agent perform the following steps to get the vm back online:
# 1. Check which vm is offline by going to agent-pools in Azure pipelines portal
# 2. Log into the VM that is offline
# 3. Clear the _work or work directory which must be in myagent or $(HOME) directory
# 4. Verify whether system is online from the Azure pipelines portal
# Blobfuse2 Nightly Build Pipeline
schedules:
# Cron string < minute hour day-of-month month day-of-week>
# * means all like '*' in day of month means everyday
# * means all, for example '*' in day of month means everyday
# Run only on main branch
# 'always' controls whether to run only if there is a change or not
# Run this pipeline every 15:00 time
@@ -48,7 +42,7 @@ parameters:
default: false
jobs:
# Ubuntu based test suite
# Ubuntu Tests
- job: Set_1
timeoutInMinutes: 300
@@ -57,28 +51,21 @@ jobs:
Ubuntu-18:
imageName: 'ubuntu-18.04'
containerName: 'test-cnt-ubn-18'
adlsSas: $(UBUNTU-18-ADLS-SAS)
hostedAgent: true
stressParallel: 3
adlsSas: $(AZTEST_ADLS_CONT_SAS_UBN_18)
fuselib: 'libfuse-dev'
tags: 'fuse2'
Ubuntu-20:
imageName: 'ubuntu-20.04'
containerName: 'test-cnt-ubn-20'
adlsSas: $(UBUNTU-20-ADLS-SAS)
hostedAgent: true
stressParallel: 1
adlsSas: $(AZTEST_ADLS_CONT_SAS_UBN_20)
fuselib: 'libfuse3-dev'
tags: 'fuse3'
# Ubn-22 is not supported by devops as of now
#Ubuntu-22:
# imageName: 'ubuntu-22.04'
# containerName: 'test-cnt-ubn-22'
# adlsSas: $(UBUNTU-20-ADLS-SAS)
# hostedAgent: true
# stressParallel: 1
# fuselib: 'libfuse3-dev'
# tags: 'fuse3'
Ubuntu-22:
imageName: 'ubuntu-22.04'
containerName: 'test-cnt-ubn-22'
adlsSas: $(AZTEST_ADLS_CONT_SAS_UBN_22)
fuselib: 'libfuse3-dev'
tags: 'fuse3'
pool:
vmImage: $(imageName)
@@ -225,11 +212,9 @@ jobs:
timeoutInMinutes: 300
strategy:
matrix:
ubuntu-20-proxy:
Ubuntu-20-Proxy:
imageName: 'ubuntu-20.04'
containerName: 'test-cnt-ubn-18-proxy'
hostedAgent: true
stressParallel: 3
pool:
vmImage: $(imageName)
@@ -445,322 +430,525 @@ jobs:
- script: |
kill -9 $(pgrep mitmdump)
displayName: 'Kill Proxy'
# RHEL Tests
- job: Set_3
timeoutInMinutes: 60
strategy:
matrix:
RHEL-7.5:
DistroVer: "RHEL-7.5"
Description: "Red Hat Enterprise Linux 7.5"
AgentName: "blobfuse-rhel7_5"
ContainerName: "test-cnt-rhel-75"
tags: 'fuse3'
RHEL-8.1:
DistroVer: "RHEL-8.1"
Description: "Red Hat Enterprise Linux 8.1"
AgentName: "blobfuse-rhel8_1"
containerName: "test-cnt-rhel-81"
tags: 'fuse3'
RHEL-8.2:
DistroVer: "RHEL-8.2"
Description: "Red Hat Enterprise Linux 8.2"
AgentName: "blobfuse-rhel8_2"
containerName: "test-cnt-rhel-82"
tags: 'fuse3'
# End of Ubuntu tests
# ----------------------------------------------------------------------------------------
pool:
name: "blobfuse-rhel-pool"
demands:
- ImageOverride -equals $(AgentName)
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/usr/pipeline/workv2"
- name: WORK_DIR
value: "/usr/pipeline/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/usr/pipeline/workv2/blob_mnt"
- name: TEMP_DIR
value: "/usr/pipeline/workv2/temp"
- name: BLOBFUSE2_CFG
value: "/usr/pipeline/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/usr/pipeline/workv2/go"
- ${{ if eq(parameters.exhaustive_test, true) }}:
# ---------------------------------------------------
# RHEL, Cent OS, Oracle Tests
- job: Set_3
timeoutInMinutes: 30
strategy:
matrix:
RHEL-7.5:
DistroVer: "RHEL-7.5"
AgentName: "RHEL 7.5"
Description: "Red Hat Enterprise Linux 7.5"
containerName: 'test-cnt-rhel-75'
hostedAgent: false
steps:
# Go tool installer
- task: GoTool@0
inputs:
version: '1.16.2'
displayName: "Install Go Version"
RHEL-8.1:
DistroVer: "RHEL-8.1"
AgentName: "RHEL 8.1"
Description: "Red Hat Enterprise Linux 8.1"
containerName: 'test-cnt-rhel-81'
hostedAgent: false
- script: |
sudo touch /etc/yum.repos.d/centos.repo
sudo sh -c 'echo -e "[centos-extras]\nname=Centos extras - $basearch\nbaseurl=http://mirror.centos.org/centos/7/extras/x86_64\nenabled=1\ngpgcheck=1\ngpgkey=http://centos.org/keys/RPM-GPG-KEY-CentOS-7" > /etc/yum.repos.d/centos.repo'
condition: eq(variables['AgentName'], 'blobfuse-rhel7_5')
displayName: "Update OS mirrors"
RHEL-8.2:
DistroVer: "RHEL-8.2"
AgentName: "RHEL 8.2"
Description: "Red Hat Enterprise Linux 8.2"
containerName: 'test-cnt-rhel-82'
hostedAgent: false
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(ContainerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
gopath: $(GOPATH)
tags: $(tags)
installStep:
script: |
sudo sed -i '/^failovermethod=/d' /etc/yum.repos.d/*.repo
sudo yum update -y
sudo yum groupinstall "Development Tools" -y
if [ $(AgentName) == "blobfuse-rhel7_5" ]; then
sudo yum install git fuse fuse3-libs fuse3-devel fuse3 rh-python36 -y
else
sudo yum install git fuse fuse3-libs fuse3-devel fuse3 python36 -y --nobest --allowerasing
fi
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
CentOS-7.0:
DistroVer: "CentOS-7.0"
AgentName: "COS 7.0"
Description: "CentOS Linux 7.0"
containerName: 'test-cnt-cent-7'
hostedAgent: false
# Centos Tests
- job: Set_4
timeoutInMinutes: 60
strategy:
matrix:
CentOS-7.9:
DistroVer: "CentOS-7.9"
Description: "CentOS 7.9"
AgentName: "blobfuse-centos7"
ContainerName: "test-cnt-cent-7"
CentOS-8.5:
DistroVer: "CentOS-8.5"
Description: "CentOS 8.5"
AgentName: "blobfuse-centos8"
ContainerName: "test-cnt-cent-8"
CentOS-8.0:
DistroVer: "CentOS-8.0"
AgentName: "COS 8.0"
Description: "CentOS Linux 8.0"
containerName: 'test-cnt-cent-8'
hostedAgent: false
pool:
name: "blobfuse-centos-pool"
demands:
- ImageOverride -equals $(AgentName)
Oracle-8.1:
DistroVer: "Oracle-8.1"
AgentName: "ORA 8.1"
Description: "Oracle Linux 8.1 Gen 2"
containerName: 'test-cnt-ora-81'
hostedAgent: false
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/usr/pipeline/workv2"
- name: WORK_DIR
value: "/usr/pipeline/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/usr/pipeline/workv2/blob_mnt"
- name: TEMP_DIR
value: "/usr/pipeline/workv2/temp"
- name: BLOBFUSE2_CFG
value: "/usr/pipeline/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/usr/pipeline/workv2/go"
pool:
name: "BlobFuse pool"
demands:
- Agent.Name -equals $(AgentName)
steps:
# Go tool installer
- task: GoTool@0
inputs:
version: '1.16.2'
displayName: "Install Go Version"
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/usr/pipeline/workv2"
- name: WORK_DIR
value: "/usr/pipeline/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- script: |
sudo sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS-*
sudo sed -i 's|baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /etc/yum.repos.d/CentOS-*
condition: eq(variables['AgentName'], 'blobfuse-centos8')
displayName: "Update OS mirrors"
- name: MOUNT_DIR
value: "/usr/pipeline/workv2/blob_mnt"
- name: TEMP_DIR
value: "/usr/pipeline/workv2/temp"
- name: BLOBFUSE2_CFG
value: "/usr/pipeline/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/usr/pipeline/workv2/go"
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(ContainerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
gopath: $(GOPATH)
installStep:
script: |
sudo yum update -y --skip-broken
if [ $(AgentName) == "blobfuse-centos8" ]; then
sudo yum install gcc gcc-c++ make git fuse fuse3 fuse3-devel python36 -y --nobest --allowerasing
else
sudo yum install gcc gcc-c++ make git fuse3 fuse3-devel python36 -y
fi
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
steps:
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(containerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
gopath: $(GOPATH)
installStep:
script: |
sudo yum update -y
sudo yum install git fuse3 fuse3-devel python36 -y
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
# Oracle Tests
- job: Set_5
timeoutInMinutes: 60
strategy:
matrix:
Oracle-8.1:
DistroVer: "Oracle-8.1"
Description: "Oracle Linux 8.1"
AgentName: "blobfuse-oracle81"
ContainerName: "test-cnt-ora-81"
# ------------------------------------------------------------
# Debian tests
- job: Set_4
timeoutInMinutes: 30
strategy:
matrix:
Debian-9.0:
DistroVer: "Debian9.0"
AgentName: "DEB 9.0"
Description: "Debian Linux 9.0 Gen 1"
containerName: 'test-cnt-deb-9'
hostedAgent: false
fuselib: 'libfuse-dev'
tags: 'fuse2'
Debian-10.0:
DistroVer: "Debian10.0"
AgentName: "DEB 10.0"
Description: "Debian Linux 10.0 Gen 1"
containerName: 'test-cnt-deb-10'
hostedAgent: false
fuselib: 'libfuse-dev'
tags: 'fuse2'
Debian-11.0:
DistroVer: "Debian11.0"
AgentName: "DEB 11.0"
Description: "Debian Linux 11.0 Gen 2"
containerName: 'test-cnt-deb-11'
hostedAgent: false
fuselib: 'libfuse3-dev'
tags: 'fuse3'
pool:
name: 'Blobfuse Pool'
demands:
- Agent.Name -equals $(AgentName)
pool:
name: "blobfuse-oracle-pool"
demands:
- ImageOverride -equals $(AgentName)
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/home/vsts/workv2"
- name: WORK_DIR
value: "/home/vsts/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/home/vsts/workv2/blob_mnt"
- name: TEMP_DIR
value: "/home/vsts/workv2/blobfuse2tmp"
- name: BLOBFUSE2_CFG
value: "/home/vsts/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/home/vsts/workv2/go"
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/usr/pipeline/workv2"
- name: WORK_DIR
value: "/usr/pipeline/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/usr/pipeline/workv2/blob_mnt"
- name: TEMP_DIR
value: "/usr/pipeline/workv2/temp"
- name: BLOBFUSE2_CFG
value: "/usr/pipeline/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/usr/pipeline/workv2/go"
# Distro Tests
steps:
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
gopath: $(GOPATH)
config_path: $(BLOBFUSE2_CFG)
container: $(containerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
tags: $(tags)
fuselib: $(fuselib)
installStep:
script: |
sudo apt-get update --fix-missing
sudo apt-get install $(fuselib) -y
displayName: 'Install libfuse'
verbose_log: ${{ parameters.verbose_log }}
steps:
# Go tool installer
- task: GoTool@0
inputs:
version: '1.16.2'
displayName: "Install Go Version"
# ------------------------------------------------------------
# SUSE tests
- job: Set_5
timeoutInMinutes: 30
strategy:
matrix:
SUSE-15G2:
DistroVer: "Suse-15Gen2"
AgentName: "SUSE 15G2"
Description: "SUSE Ent Linux 15-SP1-Gen2"
containerName: 'test-cnt-suse-15'
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(ContainerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
gopath: $(GOPATH)
installStep:
script: |
sudo yum update -y
sudo yum install gcc gcc-c++ make git fuse fuse3 fuse3-devel python36 -y --nobest --allowerasing
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
pool:
name: 'Blobfuse Pool'
demands:
- Agent.Name -equals $(AgentName)
- job: Set_6
timeoutInMinutes: 60
strategy:
matrix:
Debian-9.0:
DistroVer: "Debian9.0"
AgentName: "DEB 9.0"
Description: "Debian Linux 9.0 Gen 1"
containerName: 'test-cnt-deb-9'
fuselib: 'libfuse-dev'
tags: 'fuse2'
pool:
name: 'Blobfuse Pool'
demands:
- Agent.Name -equals $(AgentName)
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/home/vsts/workv2"
- name: WORK_DIR
value: "/home/vsts/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/home/vsts/workv2/blob_mnt"
- name: TEMP_DIR
value: "/home/vsts/workv2/blobfuse2tmp"
- name: BLOBFUSE2_CFG
value: "/home/vsts/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/home/vsts/workv2/go"
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/home/vsts/workv2"
- name: WORK_DIR
value: "/home/vsts/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/home/vsts/workv2/blob_mnt"
- name: TEMP_DIR
value: "/home/vsts/workv2/blobfuse2tmp"
- name: BLOBFUSE2_CFG
value: "/home/vsts/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/home/vsts/workv2/go"
# Distro Tests
steps:
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(containerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
gopath: $(GOPATH)
installStep:
script: |
sudo zypper -n install fuse3 fuse3-devel
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
# ------------------------------------------------------------
# Mariner tests
- job: Set_6
timeoutInMinutes: 30
strategy:
matrix:
Mariner:
DistroVer: "Mari-1"
AgentName: "MARI 1"
Description: "CBL-Mariner Linux"
containerName: 'test-cnt-mari-1'
fuselib: 'libfuse-dev'
tags: 'fuse2'
# Distro Tests
steps:
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
gopath: $(GOPATH)
config_path: $(BLOBFUSE2_CFG)
container: $(containerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
tags: $(tags)
fuselib: $(fuselib)
installStep:
script: |
sudo apt-get update --fix-missing
sudo apt-get install $(fuselib) -y
displayName: 'Install libfuse'
verbose_log: ${{ parameters.verbose_log }}
pool:
name: 'Blobfuse Pool'
demands:
- Agent.Name -equals $(AgentName)
# Debian Tests
- job: Set_7
timeoutInMinutes: 60
strategy:
matrix:
# Debian-9.0:
# DistroVer: "Debian9.0"
# Description: "Debian 9"
# AgentName: "blobfuse-debian9"
# ContainerName: "test-cnt-deb-9"
# fuselib: 'fuse libfuse-dev'
# tags: 'fuse2'
Debian-10.0:
DistroVer: "Debian10.0"
Description: "Debian 10"
AgentName: "blobfuse-debian10"
ContainerName: "test-cnt-deb-10"
fuselib: 'fuse libfuse-dev'
tags: 'fuse2'
Debian-11.0:
DistroVer: "Debian11.0"
Description: "Debian 11"
AgentName: "blobfuse-debian11"
ContainerName: "test-cnt-deb-11"
fuselib: 'fuse3 libfuse3-dev'
tags: 'fuse3'
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/home/vsts/workv2"
- name: WORK_DIR
value: "/home/vsts/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/home/vsts/workv2/blob_mnt"
- name: TEMP_DIR
value: "/home/vsts/workv2/blobfuse2tmp"
- name: BLOBFUSE2_CFG
value: "/home/vsts/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/home/vsts/workv2/go"
pool:
name: "blobfuse-debian-pool"
demands:
- ImageOverride -equals $(AgentName)
# Distro Tests
steps:
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(containerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
gopath: $(GOPATH)
tags: $(tags)
fuselib: $(fuselib)
installStep:
script: |
sudo tdnf install fuse fuse-devel
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/usr/pipeline/workv2"
- name: WORK_DIR
value: "/usr/pipeline/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/usr/pipeline/workv2/blob_mnt"
- name: TEMP_DIR
value: "/usr/pipeline/workv2/temp"
- name: BLOBFUSE2_CFG
value: "/usr/pipeline/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/usr/pipeline/workv2/go"
steps:
# Go tool installer
- task: GoTool@0
inputs:
version: '1.16.2'
displayName: "Install Go Version"
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(ContainerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
tags: $(tags)
fuselib: $(fuselib)
gopath: $(GOPATH)
installStep:
script: |
sudo rm /etc/apt/sources.list.d/azure.list
sudo apt-get update --fix-missing -y
sudo apt-get install $(fuselib) -y
sudo apt-get install build-essential git python3 -y
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
# SUSE Tests
- job: Set_8
timeoutInMinutes: 60
strategy:
matrix:
SUSE-15:
DistroVer: "SUSE-15"
Description: "SUSE Enterprise Linux 15"
AgentName: "blobfuse-suse15"
ContainerName: "test-cnt-suse-15"
pool:
name: "blobfuse-suse-pool"
demands:
- ImageOverride -equals $(AgentName)
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/usr/pipeline/workv2"
- name: WORK_DIR
value: "/usr/pipeline/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/usr/pipeline/workv2/blob_mnt"
- name: TEMP_DIR
value: "/usr/pipeline/workv2/temp"
- name: BLOBFUSE2_CFG
value: "/usr/pipeline/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/usr/pipeline/workv2/go"
steps:
# Go tool installer
- task: GoTool@0
inputs:
version: '1.16.2'
displayName: "Install Go Version"
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(ContainerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
gopath: $(GOPATH)
installStep:
script: |
sudo zypper -n install git golang make cmake gcc gcc-c++ glibc-devel fuse3
wget https://rpmfind.net/linux/opensuse/distribution/leap/15.2/repo/oss/x86_64/fuse3-devel-3.6.1-lp152.1.19.x86_64.rpm
sudo zypper -n --no-gpg-checks install fuse3-devel-3.6.1-lp152.1.19.x86_64.rpm
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
# Mariner Tests
- job: Set_9
timeoutInMinutes: 60
strategy:
matrix:
Mariner:
DistroVer: "Mariner"
Description: "CBL-Mariner Linux"
AgentName: "blobfuse-mariner"
ContainerName: "test-cnt-mari-1"
fuselib: 'libfuse-dev'
tags: 'fuse2'
pool:
name: "blobfuse-mariner-pool"
demands:
- ImageOverride -equals $(AgentName)
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/usr/pipeline/workv2"
- name: WORK_DIR
value: "/usr/pipeline/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/usr/pipeline/workv2/blob_mnt"
- name: TEMP_DIR
value: "/usr/pipeline/workv2/temp"
- name: BLOBFUSE2_CFG
value: "/usr/pipeline/workv2/blobfuse2.yaml"
- name: BLOBFUSE2_ADLS_CFG
value: "/home/vsts/workv2/blobfuse2.adls.yaml"
- name: GOPATH
value: "/usr/pipeline/workv2/go"
steps:
# Go tool installer
- task: GoTool@0
inputs:
version: '1.16.2'
displayName: "Install Go Version"
- template: 'azure-pipeline-templates/distro-tests.yml'
parameters:
working_dir: $(WORK_DIR)
root_dir: $(ROOT_DIR)
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
config_path: $(BLOBFUSE2_CFG)
container: $(ContainerName)
blob_account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
blob_account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
adls_account_name: $(AZTEST_ADLS_ACC_NAME)
adls_account_key: $(AZTEST_ADLS_KEY)
distro_name: $(AgentName)
tags: $(tags)
fuselib: $(fuselib)
gopath: $(GOPATH)
installStep:
script: |
sudo tdnf install build-essential git fuse fuse-devel python36 -y
displayName: 'Install fuse'
verbose_log: ${{ parameters.verbose_log }}
- ${{ if eq(parameters.msi_test, true) }}:
# -----------------------------------------------------------
# Ubuntu-20.04 MSI tests
- job: Set_7
timeoutInMinutes: 30
- job: Set_10
timeoutInMinutes: 60
strategy:
matrix:
MSI_TEST:
Ubuntu-20-MSI:
DistroVer: "Ubn20_MSI"
AgentName: "MSITestUBN20"
Description: "MSITEST - 2"
AgentName: "blobfuse-ubuntu20"
Description: "Ubuntu 20 MSI Test"
pool:
name: "Blobfuse Pool"
name: "blobfuse-ubuntu-pool"
demands:
- Agent.Name -equals $(AgentName)
- ImageOverride -equals $(AgentName)
variables:
- group: NightlyBlobFuse
@ -770,15 +958,14 @@ jobs:
value: "/home/vsts/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/home/vsts/workv2/blob_mnt"
- name: TEMP_DIR
value: "/home/vsts/workv2/blobfuse2tmp"
- name: BLOBFUSE2_CFG
value: "/home/vibhansa/myblobv2.msi.yaml"
value: "/home/vsts/workv2//myblobv2.msi.yaml"
- name: BLOBFUSE2_CFG_ADLS
value: "/home/vibhansa/myblobv2.msi.adls.yaml"
value: "/home/vsts/workv2/myblobv2.msi.adls.yaml"
- name: GOPATH
value: "/home/vsts/workv2/go"
- name: containerName
@ -795,13 +982,13 @@ jobs:
# Install libfuse
- script: |
sudo apt-get install libfuse3-dev fuse3 -y -o Dpkg::Options::="--force-confnew"
sudo apt-get install make cmake gcc g++ libfuse3-dev fuse3 -y -o Dpkg::Options::="--force-confnew"
sudo apt-get update --fix-missing -o Dpkg::Options::="--force-confnew"
displayName: 'Install Fuse'
# Prestart cleanup
- script: |
sudo fusermount -u $(MOUNT_DIR)
sudo fusermount3 -u $(MOUNT_DIR)
sudo kill -9 `pidof blobfuse2`
sudo rm -rf $(ROOT_DIR)
displayName: 'PreBuild Cleanup'
@ -834,7 +1021,6 @@ jobs:
root_dir: $(ROOT_DIR)
mount_dir: $(MOUNT_DIR)
temp_dir: $(TEMP_DIR)
hostedAgent: false
gopath: $(GOPATH)
container: $(containerName)
skip_msi: "false"
@ -908,10 +1094,3 @@ jobs:
mount_dir: $(MOUNT_DIR)
temp_dir: $(TEMP_DIR)
- script: |
sudo rm -rf ${ROOT_DIR}
pwd
cd /`pwd | cut -d '/' -f 2,3,4,5`
sudo rm -rf [0-9]
displayName: 'Clean Agent Directories'
condition: always()


@ -45,7 +45,7 @@ jobs:
# Prestart cleanup
- script: |
sudo fusermount -u $(MOUNT_DIR)
sudo fusermount3 -u $(MOUNT_DIR)
sudo kill -9 `pidof blobfuse2`
sudo rm -rf $(ROOT_DIR)
displayName: 'PreBuild Cleanup'
@ -102,7 +102,7 @@ jobs:
continueOnError: false
- script: |
sudo fusermount -u ${MOUNT_DIR}
sudo fusermount3 -u ${MOUNT_DIR}
sudo kill -9 `pidof blobfuse2` || true
displayName: "Unmount Blobfuse2 Binary Run"
@ -131,7 +131,7 @@ jobs:
displayName: Publish Performance Report
- script: |
sudo fusermount -u ${MOUNT_DIR}
sudo fusermount3 -u ${MOUNT_DIR}
sudo kill -9 `pidof blobfuse2` || true
displayName: "Unmount Blobfuse2 Main Branch Run"
@ -141,11 +141,4 @@ jobs:
working_dir: $(WORK_DIR)
mount_dir: $(MOUNT_DIR)
temp_dir: $(TEMP_DIR)
- script: |
sudo rm -rf ${ROOT_DIR}
pwd
cd /`pwd | cut -d '/' -f 2,3,4,5`
sudo rm -rf [0-9]
displayName: 'Clean Agent Directories'
condition: always()

Diff for this file is not shown because of its size.

build.sh Executable file

@ -0,0 +1,18 @@
#!/bin/bash
if [ "$1" == "fuse2" ]
then
# Build blobfuse2 with fuse2
rm -rf blobfuse2
rm -rf azure-storage-fuse
go build -tags fuse2 -o blobfuse2
elif [ "$1" == "health" ]
then
# Build Health Monitor binary
go build -tags healthmon -o healthmon
else
# Build blobfuse2 with fuse3
rm -rf blobfuse2
rm -rf azure-storage-fuse
go build -o blobfuse2
fi
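
A quick usage sketch for the script above, assuming it is run from the repository root: ./build.sh with no argument builds the default fuse3 binary, ./build.sh fuse2 builds against libfuse2, and ./build.sh health builds the health monitor binary.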


@ -38,6 +38,7 @@ import (
"log"
"os"
"regexp"
"strings"
"github.com/spf13/cobra"
)
@ -61,7 +62,13 @@ var generateTestConfig = &cobra.Command{
Args: cobra.ExactArgs(0),
FlagErrorHandling: cobra.ExitOnError,
RunE: func(cmd *cobra.Command, args []string) error {
templateConfig, err := ioutil.ReadFile(templatesDir + opts.configFilePath)
var templateConfig []byte
var err error
if strings.Contains(opts.configFilePath, templatesDir) {
templateConfig, err = ioutil.ReadFile(opts.configFilePath)
} else {
templateConfig, err = ioutil.ReadFile(templatesDir + opts.configFilePath)
}
if err != nil {
log.Fatal(err)
return err


@ -9,6 +9,6 @@ echo "" >> $loader_file
echo "import (" >> $loader_file
for i in $(find . -type d | grep "component/" | cut -c 3- | sort -u); do # Not recommended, will break on whitespace
echo " _ \"blobfuse2/$i\"" >> $loader_file
echo " _ \"github.com/Azure/azure-storage-fuse/v2/$i\"" >> $loader_file
done
echo ")" >> $loader_file


@ -34,10 +34,10 @@
package cmd
import (
_ "blobfuse2/component/attr_cache"
_ "blobfuse2/component/azstorage"
_ "blobfuse2/component/file_cache"
_ "blobfuse2/component/libfuse"
_ "blobfuse2/component/loopback"
_ "blobfuse2/component/stream"
_ "github.com/Azure/azure-storage-fuse/v2/component/attr_cache"
_ "github.com/Azure/azure-storage-fuse/v2/component/azstorage"
_ "github.com/Azure/azure-storage-fuse/v2/component/file_cache"
_ "github.com/Azure/azure-storage-fuse/v2/component/libfuse"
_ "github.com/Azure/azure-storage-fuse/v2/component/loopback"
_ "github.com/Azure/azure-storage-fuse/v2/component/stream"
)


@ -34,11 +34,6 @@
package cmd
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/exectime"
"blobfuse2/common/log"
"blobfuse2/internal"
"context"
"fmt"
"io/ioutil"
@ -52,6 +47,12 @@ import (
"strings"
"syscall"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/exectime"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/sevlyar/go-daemon"
"github.com/spf13/cobra"
)
@ -85,7 +86,7 @@ type mountOptions struct {
}
var options mountOptions
var pipelineStarted bool
var pipelineStarted bool //nolint
func (opt *mountOptions) validate(skipEmptyMount bool) error {
if opt.MountPath == "" {
@ -293,9 +294,11 @@ var mountCmd = &cobra.Command{
config.Set("mount-path", options.MountPath)
var pipeline *internal.Pipeline
log.Crit("Starting Blobfuse2 Mount : %s on (%s)", common.Blobfuse2Version, common.GetCurrentDistro())
log.Crit("Logging level set to : %s", logLevel.String())
pipeline, err := internal.NewPipeline(options.Components)
pipeline, err = internal.NewPipeline(options.Components, !daemon.WasReborn())
if err != nil {
log.Err("Mount: error initializing new pipeline [%v]", err)
fmt.Println("failed to mount :", err)
@ -310,7 +313,7 @@ var mountCmd = &cobra.Command{
Umask: 027,
}
ctx, _ := context.WithCancel(context.Background())
ctx, _ := context.WithCancel(context.Background()) //nolint
daemon.SetSigHandler(sigusrHandler(pipeline, ctx), syscall.SIGUSR1, syscall.SIGUSR2)
child, err := dmnCtx.Reborn()
if err != nil {
@ -318,7 +321,7 @@ var mountCmd = &cobra.Command{
Destroy(1)
}
if child == nil {
defer dmnCtx.Release()
defer dmnCtx.Release() // nolint
setGOConfig()
go startDynamicProfiler()
runPipeline(pipeline, ctx)
@ -378,7 +381,7 @@ func runPipeline(pipeline *internal.Pipeline, ctx context.Context) {
Destroy(1)
}
log.Destroy()
_ = log.Destroy()
}
func sigusrHandler(pipeline *internal.Pipeline, ctx context.Context) daemon.SignalHandlerFunc {
@ -436,8 +439,6 @@ func startDynamicProfiler() {
if err != nil {
log.Err("startDynamicProfiler : Failed to start dynamic profiler [%s]", err.Error())
}
return
}
func init() {
@ -451,7 +452,7 @@ func init() {
mountCmd.PersistentFlags().StringVar(&options.ConfigFile, "config-file", "",
"Configures the path for the file where the account credentials are provided. Default is config.yaml in current directory.")
mountCmd.MarkPersistentFlagFilename("config-file", "yaml")
_ = mountCmd.MarkPersistentFlagFilename("config-file", "yaml")
mountCmd.PersistentFlags().BoolVar(&options.SecureConfig, "secure-config", false,
"Encrypt auto generated config file for each container")
@ -462,14 +463,14 @@ func init() {
mountCmd.PersistentFlags().String("log-level", "LOG_WARNING",
"Enables logs written to syslog. Set to LOG_WARNING by default. Allowed values are LOG_OFF|LOG_CRIT|LOG_ERR|LOG_WARNING|LOG_INFO|LOG_DEBUG")
config.BindPFlag("logging.level", mountCmd.PersistentFlags().Lookup("log-level"))
mountCmd.RegisterFlagCompletionFunc("log-level", func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
_ = mountCmd.RegisterFlagCompletionFunc("log-level", func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return []string{"LOG_OFF", "LOG_CRIT", "LOG_ERR", "LOG_WARNING", "LOG_INFO", "LOG_TRACE", "LOG_DEBUG"}, cobra.ShellCompDirectiveNoFileComp
})
mountCmd.PersistentFlags().String("log-file-path",
common.DefaultLogFilePath, "Configures the path for log files. Default is "+common.DefaultLogFilePath)
config.BindPFlag("logging.file-path", mountCmd.PersistentFlags().Lookup("log-file-path"))
mountCmd.MarkPersistentFlagDirname("log-file-path")
_ = mountCmd.MarkPersistentFlagDirname("log-file-path")
mountCmd.PersistentFlags().Bool("foreground", false, "Mount the system in foreground mode. Default value false.")
config.BindPFlag("foreground", mountCmd.PersistentFlags().Lookup("foreground"))
@ -480,7 +481,7 @@ func init() {
mountCmd.PersistentFlags().String("default-working-dir", "", "Default working directory for storing log files and other blobfuse2 information")
mountCmd.PersistentFlags().Lookup("default-working-dir").Hidden = true
config.BindPFlag("default-working-dir", mountCmd.PersistentFlags().Lookup("default-working-dir"))
mountCmd.MarkPersistentFlagDirname("default-working-dir")
_ = mountCmd.MarkPersistentFlagDirname("default-working-dir")
config.AttachToFlagSet(mountCmd.PersistentFlags())
config.AttachFlagCompletions(mountCmd)
@ -488,6 +489,6 @@ func init() {
}
func Destroy(code int) {
log.Destroy()
_ = log.Destroy()
os.Exit(code)
}


@ -34,9 +34,6 @@
package cmd
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"context"
"fmt"
"io/ioutil"
@ -45,7 +42,11 @@ import (
"path/filepath"
"strings"
"blobfuse2/component/azstorage"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/component/azstorage"
"github.com/spf13/cobra"
"github.com/spf13/viper"
@ -68,7 +69,7 @@ var mountAllCmd = &cobra.Command{
Args: cobra.ExactArgs(1),
FlagErrorHandling: cobra.ExitOnError,
Run: func(cmd *cobra.Command, args []string) {
VersionCheck()
_ = VersionCheck()
mountAllOpts.blobfuse2BinPath = os.Args[0]
options.MountPath = args[0]
@ -154,15 +155,11 @@ func getContainerList() []string {
// Create AzStorage component to get container list
azComponent := &azstorage.AzStorage{}
if azComponent == nil {
fmt.Printf("MountAll : Failed to create AzureStorage object")
os.Exit(1)
}
azComponent.SetName("azstorage")
azComponent.SetNextComponent(nil)
// Configure AzStorage component
err := azComponent.Configure()
err := azComponent.Configure(true)
if err != nil {
fmt.Printf("MountAll : Failed to configure AzureStorage object (%s)", err.Error())
os.Exit(1)
@ -183,7 +180,7 @@ func getContainerList() []string {
}
// Stop the azStorage component as it's no longer needed
azComponent.Stop()
_ = azComponent.Stop()
return containerList
}
@ -256,7 +253,10 @@ func mountAllContainers(containerList []string, configFile string, mountPath str
}
if _, err := os.Stat(contMountPath); os.IsNotExist(err) {
os.MkdirAll(contMountPath, 0777)
err = os.MkdirAll(contMountPath, 0777)
if err != nil {
fmt.Printf("failed to create directory %s : %s\n", contMountPath, err.Error())
}
}
// NOTE : Add all the configs that need replacement based on container here
@ -306,7 +306,11 @@ func writeConfigFile(contConfigFile string) {
}
} else {
// Write modified config as per container to a new config file
viper.WriteConfigAs(contConfigFile)
err := viper.WriteConfigAs(contConfigFile)
if err != nil {
fmt.Println("Failed to write config file : ", err.Error())
os.Exit(1)
}
}
}
@ -327,9 +331,5 @@ func buildCliParamForMount() []string {
}
func ignoreCliParam(opt string) bool {
if strings.HasPrefix(opt, "--config-file") {
return true
}
return false
return strings.HasPrefix(opt, "--config-file")
}


@ -34,9 +34,10 @@
package cmd
import (
"blobfuse2/common"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/spf13/cobra"
)


@ -34,12 +34,6 @@
package cmd
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/component/azstorage"
"blobfuse2/component/file_cache"
"blobfuse2/component/libfuse"
"encoding/json"
"fmt"
"io/ioutil"
@ -47,6 +41,13 @@ import (
"os/exec"
"strings"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/component/azstorage"
"github.com/Azure/azure-storage-fuse/v2/component/file_cache"
"github.com/Azure/azure-storage-fuse/v2/component/libfuse"
"github.com/spf13/cobra"
)


@ -34,14 +34,15 @@
package cmd
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"fmt"
"io/ioutil"
"os"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"


@ -34,11 +34,6 @@
package cmd
import (
"blobfuse2/component/attr_cache"
"blobfuse2/component/azstorage"
"blobfuse2/component/file_cache"
"blobfuse2/component/libfuse"
"blobfuse2/component/stream"
"bufio"
"bytes"
"errors"
@ -50,6 +45,12 @@ import (
"strconv"
"strings"
"github.com/Azure/azure-storage-fuse/v2/component/attr_cache"
"github.com/Azure/azure-storage-fuse/v2/component/azstorage"
"github.com/Azure/azure-storage-fuse/v2/component/file_cache"
"github.com/Azure/azure-storage-fuse/v2/component/libfuse"
"github.com/Azure/azure-storage-fuse/v2/component/stream"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"gopkg.in/yaml.v3"
@ -145,7 +146,7 @@ var generateConfigCmd = &cobra.Command{
Args: cobra.MaximumNArgs(1),
FlagErrorHandling: cobra.ExitOnError,
RunE: func(cmd *cobra.Command, args []string) error {
VersionCheck()
_ = VersionCheck()
resetOptions()
// If we are only converting the config without mounting then we do not need the mount path and therefore the args length would be 0
if len(args) == 1 {
@ -210,7 +211,8 @@ var generateConfigCmd = &cobra.Command{
if bfv2StorageConfigOptions.UseHTTP {
http = "http"
}
var accountType = ""
accountType := ""
if bfv2StorageConfigOptions.AccountType == "" || bfv2StorageConfigOptions.AccountType == "blob" {
accountType = "blob"
} else if bfv2StorageConfigOptions.AccountType == "adls" {
@ -361,6 +363,11 @@ func convertBfConfigParameter(flags *pflag.FlagSet, configParameterKey string, c
bfv2StorageConfigOptions.ClientSecret = configParameterValue
case "servicePrincipalTenantId":
bfv2StorageConfigOptions.TenantID = configParameterValue
case "msiEndpoint":
// msiEndpoint is not a supported config in V2; it must be supplied via the MSI_ENDPOINT env variable
return nil
default:
return fmt.Errorf("failed to parse configuration file. the configuration parameter `%s` is not supported in Blobfuse2", configParameterKey)
}
@ -372,7 +379,7 @@ func convertBfCliParameters(flags *pflag.FlagSet) error {
if flags.Lookup("set-content-type").Changed || flags.Lookup("ca-cert-file").Changed || flags.Lookup("basic-remount-check").Changed || flags.Lookup(
"background-download").Changed || flags.Lookup("cache-poll-timeout-msec").Changed || flags.Lookup("upload-modified-only").Changed {
logWriter, _ := syslog.New(syslog.LOG_WARNING, "")
logWriter.Warning("one or more unsupported v1 parameters [set-content-type, ca-cert-file, basic-remount-check, background-download, cache-poll-timeout-msec, upload-modified-only] have been passed, ignoring and proceeding to mount")
_ = logWriter.Warning("one or more unsupported v1 parameters [set-content-type, ca-cert-file, basic-remount-check, background-download, cache-poll-timeout-msec, upload-modified-only] have been passed, ignoring and proceeding to mount")
}
bfv2LoggingConfigOptions.Type = "syslog"


@ -34,13 +34,6 @@
package cmd
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/component/attr_cache"
"blobfuse2/component/azstorage"
"blobfuse2/component/file_cache"
"blobfuse2/component/stream"
"bytes"
"fmt"
"io/ioutil"
@ -49,6 +42,14 @@ import (
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/component/attr_cache"
"github.com/Azure/azure-storage-fuse/v2/component/azstorage"
"github.com/Azure/azure-storage-fuse/v2/component/file_cache"
"github.com/Azure/azure-storage-fuse/v2/component/stream"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/spf13/viper"


@ -34,8 +34,6 @@
package cmd
import (
"blobfuse2/common"
"blobfuse2/common/log"
"encoding/xml"
"fmt"
"io/ioutil"
@ -44,6 +42,9 @@ import (
"strings"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/spf13/cobra"
)
@ -67,7 +68,7 @@ var rootCmd = &cobra.Command{
FlagErrorHandling: cobra.ExitOnError,
Run: func(cmd *cobra.Command, args []string) {
if !disableVersionCheck {
VersionCheck()
_ = VersionCheck()
}
},
}


@ -34,13 +34,14 @@
package cmd
import (
"blobfuse2/common"
"errors"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/spf13/cobra"
)
@ -117,15 +118,15 @@ func validateOptions() error {
}
if secOpts.ConfigFile == "" {
errors.New("config file not provided, check usage")
return errors.New("config file not provided, check usage")
}
if _, err := os.Stat(secOpts.ConfigFile); os.IsNotExist(err) {
errors.New("config file does not exists")
return errors.New("config file does not exists")
}
if secOpts.PassPhrase == "" {
errors.New("provide passphrase as cli parameter or configure BLOBFUSE2_SECURE_CONFIG_PASSPHRASE environment variable")
return errors.New("provide passphrase as cli parameter or configure BLOBFUSE2_SECURE_CONFIG_PASSPHRASE environment variable")
}
return nil


@ -51,7 +51,10 @@ var getKeyCmd = &cobra.Command{
Example: "blobfuse2 secure get --config-file=config.yaml --passphrase=PASSPHRASE --key=logging.log_level",
FlagErrorHandling: cobra.ExitOnError,
RunE: func(cmd *cobra.Command, args []string) error {
validateOptions()
err := validateOptions()
if err != nil {
return err
}
plainText, err := decryptConfigFile(false)
if err != nil {


@ -34,12 +34,13 @@
package cmd
import (
"blobfuse2/common"
"errors"
"fmt"
"reflect"
"strings"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/spf13/cobra"
"github.com/spf13/viper"
"gopkg.in/yaml.v2"
@ -53,7 +54,10 @@ var setKeyCmd = &cobra.Command{
Example: "blobfuse2 secure set --config-file=config.yaml --passphrase=PASSPHRASE --key=logging.log_level --value=log_debug",
FlagErrorHandling: cobra.ExitOnError,
RunE: func(cmd *cobra.Command, args []string) error {
validateOptions()
err := validateOptions()
if err != nil {
return err
}
plainText, err := decryptConfigFile(false)
if err != nil {
@ -93,7 +97,10 @@ var setKeyCmd = &cobra.Command{
return err
}
saveToFile(secOpts.ConfigFile, cipherText, false)
if err = saveToFile(secOpts.ConfigFile, cipherText, false); err != nil {
return err
}
return nil
},
}


@ -34,14 +34,15 @@
package cmd
import (
"blobfuse2/common"
"blobfuse2/common/log"
"bytes"
"fmt"
"io/ioutil"
"os"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/stretchr/testify/assert"
@ -137,6 +138,26 @@ func (suite *secureConfigTestSuite) TestSecureConfigEncryptNotExistent() {
suite.assert.NotNil(err)
}
func (suite *secureConfigTestSuite) TestSecureConfigEncryptNoConfig() {
defer suite.cleanupTest()
_, err := executeCommandSecure(rootCmd, "secure", "encrypt")
suite.assert.NotNil(err)
}
func (suite *secureConfigTestSuite) TestSecureConfigEncryptNoKey() {
defer suite.cleanupTest()
confFile, _ := ioutil.TempFile("", "conf*.yaml")
defer os.Remove(confFile.Name())
_, err := confFile.WriteString(testPlainTextConfig)
suite.assert.Nil(err)
_, err = executeCommandSecure(rootCmd, "secure", "encrypt", fmt.Sprintf("--config-file=%s", confFile.Name()))
suite.assert.NotNil(err)
}
func (suite *secureConfigTestSuite) TestSecureConfigEncryptInvalidKey() {
defer suite.cleanupTest()
confFile, _ := ioutil.TempFile("", "conf*.yaml")
@ -178,6 +199,26 @@ func (suite *secureConfigTestSuite) TestSecureConfigDecrypt() {
os.Remove(confFile.Name() + "." + SecureConfigExtension)
}
func (suite *secureConfigTestSuite) TestSecureConfigDecryptNoConfig() {
defer suite.cleanupTest()
_, err := executeCommandSecure(rootCmd, "secure", "decrypt")
suite.assert.NotNil(err)
}
func (suite *secureConfigTestSuite) TestSecureConfigDecryptNoKey() {
defer suite.cleanupTest()
confFile, _ := ioutil.TempFile("", "conf*.yaml")
defer os.Remove(confFile.Name())
_, err := confFile.WriteString(testPlainTextConfig)
suite.assert.Nil(err)
_, err = executeCommandSecure(rootCmd, "secure", "decrypt", fmt.Sprintf("--config-file=%s", confFile.Name()))
suite.assert.NotNil(err)
}
func (suite *secureConfigTestSuite) TestSecureConfigGet() {
defer suite.cleanupTest()
confFile, _ := ioutil.TempFile("", "conf*.yaml")


@ -34,12 +34,13 @@
package cmd
import (
"blobfuse2/common"
"fmt"
"os/exec"
"regexp"
"strings"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/spf13/cobra"
)


@ -34,9 +34,10 @@
package cmd
import (
"blobfuse2/common"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/spf13/cobra"
)


@ -34,9 +34,10 @@
package cmd
import (
"blobfuse2/common"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/spf13/cobra"
)


@ -34,10 +34,11 @@
package cache_policy
import (
"blobfuse2/common"
"blobfuse2/common/log"
"container/list"
"sync"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
)
//KeyPair: the list node containing both block key and cache block values


@ -34,10 +34,11 @@
package cache_policy
import (
"blobfuse2/common"
"container/list"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)


@ -34,8 +34,6 @@
package config
import (
"blobfuse2/common"
"blobfuse2/common/log"
"fmt"
"io"
"io/ioutil"
@ -43,6 +41,9 @@ import (
"strings"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/spf13/cobra"
"github.com/fsnotify/fsnotify"
@ -294,7 +295,7 @@ func AttachToFlagSet(flagset *pflag.FlagSet) {
func AttachFlagCompletions(cmd *cobra.Command) {
for key, fn := range userOptions.completionFuncMap {
cmd.RegisterFlagCompletionFunc(key, fn)
_ = cmd.RegisterFlagCompletionFunc(key, fn)
}
}


@ -225,8 +225,7 @@ func (tree *Tree) MergeWithKey(key string, obj interface{}, getValue func(val in
if subTree == nil {
return
}
var elem reflect.Value
elem = reflect.Indirect(reflect.ValueOf(obj))
var elem = reflect.Indirect(reflect.ValueOf(obj))
if obj == nil {
return
}
@ -264,8 +263,7 @@ func (tree *Tree) Merge(obj interface{}, getValue func(val interface{}) (res int
if subTree == nil {
return
}
var elem reflect.Value
elem = reflect.Indirect(reflect.ValueOf(obj))
var elem = reflect.Indirect(reflect.ValueOf(obj))
if obj == nil {
return
}


@ -0,0 +1,158 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2022 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package config
import (
"reflect"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)
type keysTreeTestSuite struct {
suite.Suite
assert *assert.Assertions
}
func (suite *keysTreeTestSuite) SetupTest() {
suite.assert = assert.New(suite.T())
}
func TestKeysTree(t *testing.T) {
suite.Run(t, new(keysTreeTestSuite))
}
type parseVal struct {
val string
toType reflect.Kind
result interface{}
}
func (suite *keysTreeTestSuite) TestParseValue() {
var inputs = []parseVal{
{val: "true", toType: reflect.Bool, result: true},
{val: "87", toType: reflect.Int, result: 87},
{val: "127", toType: reflect.Int8, result: 127},
{val: "32767", toType: reflect.Int16, result: 32767},
{val: "2147483647", toType: reflect.Int32, result: 2147483647},
{val: "9223372036854775807", toType: reflect.Int64, result: 9223372036854775807},
{val: "1374", toType: reflect.Uint, result: 1374},
{val: "255", toType: reflect.Uint8, result: 255},
{val: "65535", toType: reflect.Uint16, result: 65535},
{val: "4294967295", toType: reflect.Uint32, result: 4294967295},
{val: "18446744073709551615", toType: reflect.Uint64, result: uint64(18446744073709551615)},
{val: "6.24321908234", toType: reflect.Float32, result: 6.24321908234},
{val: "31247921747687123.123871293791263", toType: reflect.Float64, result: 31247921747687123.123871293791263},
{val: "6-8i", toType: reflect.Complex64, result: 6 - 8i},
{val: "2341241-910284i", toType: reflect.Complex128, result: 2341241 - 910284i},
{val: "Hello World", toType: reflect.String, result: "Hello World"},
}
for _, i := range inputs {
suite.Run(i.val, func() {
output := parseValue(i.val, i.toType)
suite.assert.EqualValues(i.result, output)
})
}
}
func (suite *keysTreeTestSuite) TestParseValueErr() {
var inputs = []parseVal{
{val: "Hello World", toType: reflect.Bool},
{val: "Hello World", toType: reflect.Int},
{val: "Hello World", toType: reflect.Int8},
{val: "Hello World", toType: reflect.Int16},
{val: "Hello World", toType: reflect.Int32},
{val: "Hello World", toType: reflect.Int64},
{val: "Hello World", toType: reflect.Uint},
{val: "Hello World", toType: reflect.Uint8},
{val: "Hello World", toType: reflect.Uint16},
{val: "Hello World", toType: reflect.Uint32},
{val: "Hello World", toType: reflect.Uint64},
{val: "Hello World", toType: reflect.Float32},
{val: "Hello World", toType: reflect.Float64},
{val: "Hello World", toType: reflect.Complex64},
{val: "Hello World", toType: reflect.Complex128},
}
for _, i := range inputs {
suite.Run(i.val, func() {
output := parseValue(i.val, i.toType)
suite.assert.Nil(i.result, output)
})
}
}
func (suite *keysTreeTestSuite) TestIsPrimitiveType() {
var inputs = []reflect.Kind{
reflect.Bool,
reflect.Int,
reflect.Int8,
reflect.Int16,
reflect.Int32,
reflect.Int64,
reflect.Uint,
reflect.Uint8,
reflect.Uint16,
reflect.Uint32,
reflect.Uint64,
reflect.Float32,
reflect.Float64,
reflect.Complex64,
reflect.Complex128,
reflect.String,
}
for _, i := range inputs {
suite.Run(i.String(), func() {
output := isPrimitiveType(i)
suite.assert.True(output)
})
}
}
func (suite *keysTreeTestSuite) TestIsNotPrimitiveType() {
var inputs = []reflect.Kind{
reflect.Array,
reflect.Func,
reflect.Map,
reflect.Ptr,
reflect.Slice,
reflect.Struct,
}
for _, i := range inputs {
suite.Run(i.String(), func() {
output := isPrimitiveType(i)
suite.assert.False(output)
})
}
}


@ -34,7 +34,6 @@
package log
import (
"blobfuse2/common"
"fmt"
"io"
"log"
@ -43,6 +42,8 @@ import (
"runtime"
"sync"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
)
// LogConfig : Configuration to be provided to logging infra
@ -237,7 +238,7 @@ func (l *BaseLogger) logDumper(id int, channel <-chan string) {
l.fileConfig.currentLogSize += (uint64)(len(j))
if l.fileConfig.currentLogSize > l.fileConfig.LogSize {
//fmt.Println("Calling logrotate : ", l.fileConfig.currentLogSize, " : ", l.fileConfig.logSize)
l.LogRotate()
_ = l.LogRotate()
}
}
}
@ -265,11 +266,11 @@ func (l *BaseLogger) LogRotate() error {
// Move each file to next number 8 -> 9, 7 -> 8, 6 -> 7 ...
//fmt.Println("Renaming : ", fname, " : ", fnameNew)
os.Rename(fname, fnameNew)
_ = os.Rename(fname, fnameNew)
}
//fmt.Println("Renaming : ", l.fileConfig.logFile, l.fileConfig.logFile+".1")
os.Rename(l.fileConfig.LogFile, l.fileConfig.LogFile+".1")
_ = os.Rename(l.fileConfig.LogFile, l.fileConfig.LogFile+".1")
var err error
l.logFileHandle, err = os.OpenFile(l.fileConfig.LogFile, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)


@ -34,11 +34,12 @@
package log
import (
"blobfuse2/common"
"errors"
"log"
"os"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
)
// Logger : Interface to define a generic Logger. Implement this to create your new logging lib


@ -34,9 +34,10 @@
package log
import (
"blobfuse2/common"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)


@ -34,8 +34,9 @@
package log
import (
"blobfuse2/common"
"log"
"github.com/Azure/azure-storage-fuse/v2/common"
)
type SilentLogger struct {


@ -34,13 +34,14 @@
package log
import (
"blobfuse2/common"
"errors"
"fmt"
"log"
"log/syslog"
"path/filepath"
"runtime"
"github.com/Azure/azure-storage-fuse/v2/common"
)
type SysLogger struct {
@ -160,11 +161,9 @@ func (l *SysLogger) SetLogFile(name string) error {
}
func (l *SysLogger) SetMaxLogSize(size int) {
return
}
func (l *SysLogger) SetLogFileCount(count int) {
return
}
func (l *SysLogger) Destroy() error {

Просмотреть файл

@ -34,14 +34,11 @@
package stats
import (
"sync"
"time"
)
// FuseStats : Stats for the fuse wrapper
type FuseStats struct {
lck sync.RWMutex
fileOpen uint64
fileClose uint64
fileRead uint64
@ -55,15 +52,11 @@ type FuseStats struct {
// AttrCacheStats : Stats for attribute cache layer
type AttrCacheStats struct {
lck sync.RWMutex
numFiles uint64
}
// FileCacheStats : Stats for file cache layer
type FileCacheStats struct {
lck sync.RWMutex
numFiles uint64
cacheUsage uint64
lastCacheEviction uint64
@ -71,8 +64,6 @@ type FileCacheStats struct {
// StorageStats : Stats for storage layer
type StorageStats struct {
lck sync.RWMutex
fileOpen uint64
fileClose uint64
fileRead uint64
@ -89,8 +80,6 @@ type StorageStats struct {
// GlobalStats : Stats for global monitoring
type GlobalStats struct {
lck sync.RWMutex
mountTime time.Time
}


@ -39,7 +39,6 @@ import (
"log"
"net/http"
"strings"
"sync"
"gopkg.in/yaml.v3"
)
@ -62,7 +61,11 @@ func GetFuseStats(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, string(d))
return
}
json.NewEncoder(w).Encode(&Blobfuse2Stats.fuse)
err := json.NewEncoder(w).Encode(Blobfuse2Stats.fuse)
if err != nil {
log.Fatalf("error: %v", err)
}
}
@ -81,7 +84,11 @@ func GetAttrCacheStats(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, string(d))
return
}
json.NewEncoder(w).Encode(&Blobfuse2Stats.attrCache)
err := json.NewEncoder(w).Encode(&Blobfuse2Stats.attrCache)
if err != nil {
log.Fatalf("error: %v", err)
}
}
@ -100,7 +107,11 @@ func GetFileCacheStats(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, string(d))
return
}
json.NewEncoder(w).Encode(&Blobfuse2Stats.fileCache)
err := json.NewEncoder(w).Encode(&Blobfuse2Stats.fileCache)
if err != nil {
log.Fatalf("error: %v", err)
}
}
@ -119,7 +130,11 @@ func GetStorageStats(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, string(d))
return
}
json.NewEncoder(w).Encode(&Blobfuse2Stats.storage)
err := json.NewEncoder(w).Encode(&Blobfuse2Stats.storage)
if err != nil {
log.Fatalf("error: %v", err)
}
}
func GetCommonStats(w http.ResponseWriter, r *http.Request) {
@ -137,7 +152,11 @@ func GetCommonStats(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, string(d))
return
}
json.NewEncoder(w).Encode(&Blobfuse2Stats.common)
err := json.NewEncoder(w).Encode(&Blobfuse2Stats.common)
if err != nil {
log.Fatalf("error: %v", err)
}
}
func GetStats(w http.ResponseWriter, r *http.Request) {
@ -155,19 +174,17 @@ func GetStats(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, string(d))
return
}
json.NewEncoder(w).Encode(&Blobfuse2Stats)
err := json.NewEncoder(w).Encode(&Blobfuse2Stats)
if err != nil {
log.Fatalf("error: %v", err)
}
}
func allocate() *Stats {
var stats *Stats
stats = &Stats{}
stats.fuse.lck = sync.RWMutex{}
stats.attrCache.lck = sync.RWMutex{}
stats.fileCache.lck = sync.RWMutex{}
stats.storage.lck = sync.RWMutex{}
stats.common.lck = sync.RWMutex{}
return stats
}
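
For reference, a minimal self-contained sketch of the error-checked Encode pattern these handlers now follow; writeJSON is a hypothetical helper used only for illustration, while the patch itself checks the error inline and calls log.Fatalf:

package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// writeJSON encodes v as JSON onto w and logs any encoding error
// instead of silently dropping it (hypothetical helper, not in the patch).
func writeJSON(w http.ResponseWriter, v interface{}) {
	if err := json.NewEncoder(w).Encode(v); err != nil {
		log.Printf("error: %v", err)
	}
}

func main() {
	http.HandleFunc("/stats", func(w http.ResponseWriter, r *http.Request) {
		writeJSON(w, map[string]uint64{"fileOpen": 0})
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}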


@ -116,56 +116,6 @@ func (l *LogLevel) Parse(s string) error {
return err
}
type FileType int
var EFileType = FileType(0).File()
func (FileType) File() FileType {
return FileType(0)
}
func (FileType) Dir() FileType {
return FileType(1)
}
func (FileType) Symlink() FileType {
return FileType(2)
}
func (f FileType) String() string {
return enum.StringInt(f, reflect.TypeOf(f))
}
func (f *FileType) Parse(s string) error {
enumVal, err := enum.ParseInt(reflect.TypeOf(f), s, true, false)
if enumVal != nil {
*f = enumVal.(FileType)
}
return err
}
type EvictionPolicy int
var EPolicy = EvictionPolicy(0).LRU()
func (EvictionPolicy) LRU() EvictionPolicy {
return EvictionPolicy(0)
}
func (EvictionPolicy) LFU() EvictionPolicy {
return EvictionPolicy(1)
}
func (EvictionPolicy) ARC() EvictionPolicy {
return EvictionPolicy(2)
}
func (ep *EvictionPolicy) Parse(s string) error {
enumVal, err := enum.ParseInt(reflect.TypeOf(ep), s, true, false)
if enumVal != nil {
*ep = enumVal.(EvictionPolicy)
}
return err
}
type LogConfig struct {
Level LogLevel
MaxFileSize uint64
@ -300,10 +250,12 @@ func (u uuid) Bytes() []byte {
func NewUUIDWithLength(length int64) []byte {
u := make([]byte, length)
// Set all bits to randomly (or pseudo-randomly) chosen values.
rand.Read(u[:])
u[8] = (u[8] | 0x40) & 0x7F // u.setVariant(ReservedRFC4122)
var version byte = 4
u[6] = (u[6] & 0xF) | (version << 4) // u.setVersion(4)
_, err := rand.Read(u[:])
if err == nil {
u[8] = (u[8] | 0x40) & 0x7F // u.setVariant(ReservedRFC4122)
var version byte = 4
u[6] = (u[6] & 0xF) | (version << 4) // u.setVersion(4)
}
return u[:]
}
@ -311,11 +263,12 @@ func NewUUIDWithLength(length int64) []byte {
func NewUUID() (u uuid) {
u = uuid{}
// Set all bits to randomly (or pseudo-randomly) chosen values.
rand.Read(u[:])
u[8] = (u[8] | reservedRFC4122) & 0x7F // u.setVariant(ReservedRFC4122)
var version byte = 4
u[6] = (u[6] & 0xF) | (version << 4) // u.setVersion(4)
_, err := rand.Read(u[:])
if err == nil {
u[8] = (u[8] | reservedRFC4122) & 0x7F // u.setVariant(ReservedRFC4122)
var version byte = 4
u[6] = (u[6] & 0xF) | (version << 4) // u.setVersion(4)
}
return
}
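
As a standalone illustration of the bit manipulation above (crypto/rand is assumed here; the surrounding file's actual rand import is not shown in this hunk):

package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	u := make([]byte, 16)
	if _, err := rand.Read(u); err == nil {
		u[8] = (u[8] | 0x40) & 0x7F    // variant bits, mirroring the patch
		u[6] = (u[6] & 0xF) | (4 << 4) // version 4 in the high nibble of byte 6
	}
	fmt.Printf("uuid=%x version=%x\n", u, u[6]>>4) // version prints 4 on success
}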


@ -9,7 +9,7 @@
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2021 Microsoft Corporation. All rights reserved.
Copyright © 2020-2022 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy


@ -45,6 +45,7 @@ import (
"os/user"
"strconv"
"strings"
"sync"
"gopkg.in/ini.v1"
)
@ -234,3 +235,13 @@ func (bm *BitMap16) Set(bit uint16) { *bm |= (1 << bit) }
// Clear : Clear the given bit from bitmap
func (bm *BitMap16) Clear(bit uint16) { *bm &= ^(1 << bit) }
type KeyedMutex struct {
mutexes sync.Map // Zero value is empty and ready for use
}
func (m *KeyedMutex) GetLock(key string) *sync.Mutex {
value, _ := m.mutexes.LoadOrStore(key, &sync.Mutex{})
mtx := value.(*sync.Mutex)
return mtx
}
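
A minimal runnable sketch of how the new KeyedMutex is meant to be used: callers that pass the same key serialize with each other, while different keys proceed independently. The type is reproduced here so the example is self-contained:

package main

import (
	"fmt"
	"sync"
)

// KeyedMutex mirrors the helper above: one lazily created mutex per key.
type KeyedMutex struct {
	mutexes sync.Map
}

func (m *KeyedMutex) GetLock(key string) *sync.Mutex {
	value, _ := m.mutexes.LoadOrStore(key, &sync.Mutex{})
	return value.(*sync.Mutex)
}

func main() {
	var km KeyedMutex
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			lock := km.GetLock("file.txt") // same key -> same mutex
			lock.Lock()
			defer lock.Unlock()
			fmt.Println("goroutine", i, "holds the lock for file.txt")
		}(i)
	}
	wg.Wait()
}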

common/util_test.go Normal file

@ -0,0 +1,126 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2022 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package common
import (
"fmt"
"math/rand"
"os"
"path/filepath"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)
var home_dir, _ = os.UserHomeDir()
func randomString(length int) string {
rand.Seed(time.Now().UnixNano())
b := make([]byte, length)
rand.Read(b)
return fmt.Sprintf("%x", b)[:length]
}
type utilTestSuite struct {
suite.Suite
assert *assert.Assertions
}
func (suite *utilTestSuite) SetupTest() {
suite.assert = assert.New(suite.T())
}
func TestUtil(t *testing.T) {
suite.Run(t, new(utilTestSuite))
}
func (suite *typesTestSuite) TestDirectoryExists() {
rand := randomString(8)
dir := filepath.Join(home_dir, "dir"+rand)
os.MkdirAll(dir, 0777)
defer os.RemoveAll(dir)
exists := DirectoryExists(dir)
suite.assert.True(exists)
}
func (suite *typesTestSuite) TestDirectoryDoesNotExist() {
rand := randomString(8)
dir := filepath.Join(home_dir, "dir"+rand)
exists := DirectoryExists(dir)
suite.assert.False(exists)
}
func (suite *typesTestSuite) TestEncryptBadKey() {
// Generate a random key
key := make([]byte, 20)
rand.Read(key)
data := make([]byte, 1024)
rand.Read(data)
_, err := EncryptData(data, key)
suite.assert.NotNil(err)
}
func (suite *typesTestSuite) TestDecryptBadKey() {
// Generate a random key
key := make([]byte, 20)
rand.Read(key)
data := make([]byte, 1024)
rand.Read(data)
_, err := DecryptData(data, key)
suite.assert.NotNil(err)
}
func (suite *typesTestSuite) TestEncryptDecrypt() {
// Generate a random key
key := make([]byte, 16)
rand.Read(key)
data := make([]byte, 1024)
rand.Read(data)
cipher, err := EncryptData(data, key)
suite.assert.Nil(err)
d, err := DecryptData(cipher, key)
suite.assert.Nil(err)
suite.assert.EqualValues(data, d)
}
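
Context for the bad-key cases above: Go's AES cipher accepts only 16-, 24-, or 32-byte keys, so the 20-byte keys presumably make EncryptData and DecryptData fail with an invalid key size error, which is what these tests assert.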


@ -34,10 +34,6 @@
package attr_cache
import (
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"context"
"fmt"
"os"
@ -45,6 +41,11 @@ import (
"sync"
"syscall"
"time"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
)
// By default attr cache is valid for 120 seconds
@ -113,7 +114,7 @@ func (ac *AttrCache) Stop() error {
// Configure : Pipeline will call this method after constructor so that you can read config and initialize yourself
// Return failure if any config is not valid to exit the process
func (ac *AttrCache) Configure() error {
func (ac *AttrCache) Configure(_ bool) error {
log.Trace("AttrCache::Configure : %s", ac.Name())
// >> If you do not need any config parameters remove below code and return nil
@ -143,7 +144,7 @@ func (ac *AttrCache) Configure() error {
// OnConfigChange : If component has registered, on config file change this method is called
func (ac *AttrCache) OnConfigChange() {
log.Trace("AttrCache::OnConfigChange : %s", ac.Name())
ac.Configure()
_ = ac.Configure(true)
}
// Helper Methods

View file

@ -34,11 +34,6 @@
package attr_cache
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"container/list"
"context"
"errors"
@ -51,6 +46,12 @@ import (
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
@ -69,10 +70,10 @@ var defaultSize = int64(0)
var defaultMode = 0777
func newTestAttrCache(next internal.Component, configuration string) *AttrCache {
config.ReadConfigFromReader(strings.NewReader(configuration))
_ = config.ReadConfigFromReader(strings.NewReader(configuration))
attrCache := NewAttrCacheComponent()
attrCache.SetNextComponent(next)
attrCache.Configure()
_ = attrCache.Configure(true)
return attrCache.(*AttrCache)
}
@ -192,11 +193,11 @@ func (suite *attrCacheTestSuite) setupTestHelper(config string) {
suite.mockCtrl = gomock.NewController(suite.T())
suite.mock = internal.NewMockComponent(suite.mockCtrl)
suite.attrCache = newTestAttrCache(suite.mock, config)
suite.attrCache.Start(context.Background())
_ = suite.attrCache.Start(context.Background())
}
func (suite *attrCacheTestSuite) cleanupTest() {
suite.attrCache.Stop()
_ = suite.attrCache.Stop()
suite.mockCtrl.Finish()
}
@ -850,8 +851,8 @@ func (suite *attrCacheTestSuite) TestGetAttrExistsDeleted() {
// delete directory a and file ac
suite.mock.EXPECT().DeleteDir(gomock.Any()).Return(nil)
suite.mock.EXPECT().DeleteFile(gomock.Any()).Return(nil)
suite.attrCache.DeleteDir(internal.DeleteDirOptions{Name: "a"})
suite.attrCache.DeleteFile(internal.DeleteFileOptions{Name: "ac"})
_ = suite.attrCache.DeleteDir(internal.DeleteDirOptions{Name: "a"})
_ = suite.attrCache.DeleteFile(internal.DeleteFileOptions{Name: "ac"})
options := internal.GetAttrOptions{Name: path}
// no call to mock component since attributes are accessible

View file

@ -34,10 +34,11 @@
package attr_cache
import (
"blobfuse2/common"
"blobfuse2/internal"
"os"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/internal"
)
// Flags represented in BitMap for various flags in the attr cache item

View file

@ -34,7 +34,7 @@
package azstorage
import (
"blobfuse2/common/log"
"github.com/Azure/azure-storage-fuse/v2/common/log"
)
// AzAuthConfig : Config to authenticate to storage
@ -61,7 +61,8 @@ type azAuthConfig struct {
ClientSecret string
ActiveDirectoryEndpoint string
Endpoint string
Endpoint string
AuthResource string
}
// azAuth : Interface to define a generic authentication type

View file

@ -1,4 +1,5 @@
// +build !authtest
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
@ -35,26 +36,28 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/log"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)
type storageTestConfiguration struct {
// Get the mount path from command line argument
BlockAccount string `json:"block-acct"`
AdlsAccount string `json:"adls-acct"`
FileAccount string `json:"file-acct"`
BlockContainer string `json:"block-cont"`
AdlsContainer string `json:"adls-cont"`
FileContainer string `json:"file-cont"`
BlockAccount string `json:"block-acct"`
AdlsAccount string `json:"adls-acct"`
FileAccount string `json:"file-acct"`
BlockContainer string `json:"block-cont"`
AdlsContainer string `json:"adls-cont"`
FileContainer string `json:"file-cont"`
// AdlsDirectory string `json:"adls-dir"`
BlockContainerHuge string `json:"block-cont-huge"`
AdlsContainerHuge string `json:"adls-cont-huge"`
FileContainerHuge string `json:"file-cont-huge"`
@ -62,15 +65,19 @@ type storageTestConfiguration struct {
AdlsKey string `json:"adls-key"`
FileKey string `json:"file-key"`
BlockSas string `json:"block-sas"`
BlockContSasUbn18 string `json:"block-cont-sas-ubn-18"`
BlockContSasUbn20 string `json:"block-cont-sas-ubn-20"`
AdlsSas string `json:"adls-sas"`
FileSas string `json:"file-sas"`
MsiAppId string `json:"msi-appid"`
MsiResId string `json:"msi-resid"`
SpnClientId string `json:"spn-client"`
SpnTenantId string `json:"spn-tenant"`
SpnClientSecret string `json:"spn-secret"`
SkipMsi bool `json:"skip-msi"`
ProxyAddress string `json:"proxy-address"`
// AdlsDirSasUbn18 string `json:"adls-dir-sas-ubn-18"`
// AdlsDirSasUbn20 string `json:"adls-dir-sas-ubn-20"`
MsiAppId string `json:"msi-appid"`
MsiResId string `json:"msi-resid"`
SpnClientId string `json:"spn-client"`
SpnTenantId string `json:"spn-tenant"`
SpnClientSecret string `json:"spn-secret"`
SkipMsi bool `json:"skip-msi"`
ProxyAddress string `json:"proxy-address"`
}
var storageTestConfigurationParameters storageTestConfiguration
@ -86,7 +93,11 @@ func (suite *authTestSuite) SetupTest() {
FileCount: 10,
Level: common.ELogLevel.LOG_DEBUG(),
}
log.SetDefaultLogger("base", cfg)
err := log.SetDefaultLogger("base", cfg)
if err != nil {
fmt.Println("Unable to set default logger")
os.Exit(1)
}
homeDir, err := os.UserHomeDir()
if err != nil {
@ -143,6 +154,113 @@ func generateEndpoint(useHttp bool, accountName string, accountType AccountType)
return endpoint
}
func (suite *authTestSuite) TestBlockInvalidAuth() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.INVALID_AUTH(),
AccountType: EAccountType.BLOCK(),
AccountName: storageTestConfigurationParameters.BlockAccount,
AccountKey: storageTestConfigurationParameters.BlockKey,
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestInvalidAuth : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestInvalidAuth : Setup pipeline even though auth is invalid")
}
}
func (suite *authTestSuite) TestAdlsInvalidAuth() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.AdlsContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.INVALID_AUTH(),
AccountType: EAccountType.ADLS(),
AccountName: storageTestConfigurationParameters.AdlsAccount,
AccountKey: storageTestConfigurationParameters.AdlsKey,
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.AdlsAccount, EAccountType.ADLS()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestInvalidAuth : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestInvalidAuth : Setup pipeline even though auth is invalid")
}
}
func (suite *authTestSuite) TestInvalidAccountType() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.KEY(),
AccountType: EAccountType.INVALID_ACC(),
AccountName: storageTestConfigurationParameters.BlockAccount,
AccountKey: storageTestConfigurationParameters.BlockKey,
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg != nil {
assert.Fail("TestInvalidAuth : Created Storage object even though account type is invalid")
}
}
func (suite *authTestSuite) TestBlockInvalidSharedKey() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.KEY(),
AccountType: EAccountType.BLOCK(),
AccountName: storageTestConfigurationParameters.BlockAccount,
AccountKey: "",
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestBlockInvalidSharedKey : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestBlockInvalidSharedKey : Setup pipeline even though shared key is invalid")
}
}
func (suite *authTestSuite) TestBlockInvalidSharedKey2() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.KEY(),
AccountType: EAccountType.BLOCK(),
AccountName: storageTestConfigurationParameters.BlockAccount,
AccountKey: "abcd>=", // string that will fail to base64 decode
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestBlockInvalidSharedKey : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestBlockInvalidSharedKey : Setup pipeline even though shared key is invalid")
}
}
func (suite *authTestSuite) TestBlockSharedKey() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
@ -172,6 +290,29 @@ func (suite *authTestSuite) TestHttpBlockSharedKey() {
}
suite.validateStorageTest("TestHttpBlockSharedKey", stgConfig)
}
func (suite *authTestSuite) TestAdlsInvalidSharedKey() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.AdlsContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.KEY(),
AccountType: EAccountType.ADLS(),
AccountName: storageTestConfigurationParameters.AdlsAccount,
AccountKey: "",
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.AdlsAccount, EAccountType.ADLS()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestAdlsInvalidSharedKey : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestAdlsInvalidSharedKey : Setup pipeline even though shared key is invalid")
}
}
func (suite *authTestSuite) TestAdlsSharedKey() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
@ -234,6 +375,28 @@ func (suite *authTestSuite) TestHttpFileSharedKey() {
suite.validateStorageTest("TestHttpFileSharedKey", stgConfig)
}
func (suite *authTestSuite) TestBlockInvalidSasKey() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.SAS(),
AccountType: EAccountType.BLOCK(),
AccountName: storageTestConfigurationParameters.BlockAccount,
SASKey: "",
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestBlockInvalidSasKey : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestBlockInvalidSasKey : Setup pipeline even though sas key is invalid")
}
}
func (suite *authTestSuite) TestBlockSasKey() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
@ -265,6 +428,105 @@ func (suite *authTestSuite) TestHttpBlockSasKey() {
suite.validateStorageTest("TestHttpBlockSasKey", stgConfig)
}
func (suite *authTestSuite) TestBlockContSasKey() {
defer suite.cleanupTest()
sas := ""
if storageTestConfigurationParameters.BlockContainer == "test-cnt-ubn-18" {
sas = storageTestConfigurationParameters.BlockContSasUbn18
} else if storageTestConfigurationParameters.BlockContainer == "test-cnt-ubn-20" {
sas = storageTestConfigurationParameters.BlockContSasUbn20
} else {
return
}
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.SAS(),
AccountType: EAccountType.BLOCK(),
AccountName: storageTestConfigurationParameters.BlockAccount,
SASKey: sas,
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
suite.validateStorageTest("TestBlockContSasKey", stgConfig)
}
func (suite *authTestSuite) TestHttpBlockContSasKey() {
defer suite.cleanupTest()
sas := ""
if storageTestConfigurationParameters.BlockContainer == "test-cnt-ubn-18" {
sas = storageTestConfigurationParameters.BlockContSasUbn18
} else if storageTestConfigurationParameters.BlockContainer == "test-cnt-ubn-20" {
sas = storageTestConfigurationParameters.BlockContSasUbn20
} else {
return
}
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.SAS(),
AccountType: EAccountType.BLOCK(),
AccountName: storageTestConfigurationParameters.BlockAccount,
SASKey: sas,
UseHTTP: true,
Endpoint: generateEndpoint(true, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
suite.validateStorageTest("TestHttpBlockContSasKey", stgConfig)
}
func (suite *authTestSuite) TestBlockSasKeySetOption() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.SAS(),
AccountType: EAccountType.BLOCK(),
AccountName: storageTestConfigurationParameters.BlockAccount,
SASKey: storageTestConfigurationParameters.BlockSas,
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestBlockSasKeySetOption : Failed to create Storage object")
}
_ = stg.SetupPipeline()
_ = stg.NewCredentialKey("saskey", storageTestConfigurationParameters.BlockSas)
if err := stg.SetupPipeline(); err != nil {
assert.Fail("TestBlockSasKeySetOption : Failed to setup pipeline")
}
err := stg.TestPipeline()
if err != nil {
assert.Fail("TestBlockSasKeySetOption : Failed to TestPipeline")
}
}
func (suite *authTestSuite) TestAdlsInvalidSasKey() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.AdlsContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.SAS(),
AccountType: EAccountType.ADLS(),
AccountName: storageTestConfigurationParameters.AdlsAccount,
SASKey: "",
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.AdlsAccount, EAccountType.ADLS()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestAdlsInvalidSasKey : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestAdlsInvalidSasKey : Setup pipeline even though sas key is invalid")
}
}
// ADLS tests container SAS by default since ADLS account SAS does not support permissions.
func (suite *authTestSuite) TestAdlsSasKey() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
@ -327,6 +589,85 @@ func (suite *authTestSuite) TestHttpFileSasKey() {
suite.validateStorageTest("TestHttpFileSasKey", stgConfig)
}
// func (suite *authTestSuite) TestAdlsDirSasKey() {
// defer suite.cleanupTest()
// assert := assert.New(suite.T())
// sas := ""
// if storageTestConfigurationParameters.AdlsDirectory == "test-dir-ubn-18" {
// sas = storageTestConfigurationParameters.AdlsDirSasUbn18
// } else if storageTestConfigurationParameters.AdlsDirectory == "test-dir-ubn-20" {
// sas = storageTestConfigurationParameters.AdlsDirSasUbn20
// } else {
// assert.Fail("TestAdlsDirSasKey : Unknown Directory for Sas Test")
// }
// stgConfig := AzStorageConfig{
// container: storageTestConfigurationParameters.AdlsContainer,
// prefixPath: storageTestConfigurationParameters.AdlsDirectory,
// authConfig: azAuthConfig{
// AuthMode: EAuthType.SAS(),
// AccountType: EAccountType.ADLS(),
// AccountName: storageTestConfigurationParameters.AdlsAccount,
// SASKey: sas,
// Endpoint: generateEndpoint(false, storageTestConfigurationParameters.AdlsAccount, EAccountType.ADLS()),
// },
// }
// suite.validateStorageTest("TestAdlsDirSasKey", stgConfig)
// }
// func (suite *authTestSuite) TestHttpAdlsDirSasKey() {
// defer suite.cleanupTest()
// assert := assert.New(suite.T())
// sas := ""
// if storageTestConfigurationParameters.AdlsDirectory == "test-dir-ubn-18" {
// sas = storageTestConfigurationParameters.AdlsDirSasUbn18
// } else if storageTestConfigurationParameters.AdlsDirectory == "test-dir-ubn-20" {
// sas = storageTestConfigurationParameters.AdlsDirSasUbn20
// } else {
// assert.Fail("TestHttpAdlsDirSasKey : Unknown Directory for Sas Test")
// }
// stgConfig := AzStorageConfig{
// container: storageTestConfigurationParameters.AdlsContainer,
// prefixPath: storageTestConfigurationParameters.AdlsDirectory,
// authConfig: azAuthConfig{
// AuthMode: EAuthType.SAS(),
// AccountType: EAccountType.ADLS(),
// AccountName: storageTestConfigurationParameters.AdlsAccount,
// SASKey: sas,
// UseHTTP: true,
// Endpoint: generateEndpoint(true, storageTestConfigurationParameters.AdlsAccount, EAccountType.ADLS()),
// },
// }
// suite.validateStorageTest("TestHttpAdlsDirSasKey", stgConfig)
// }
func (suite *authTestSuite) TestAdlsSasKeySetOption() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.AdlsContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.SAS(),
AccountType: EAccountType.ADLS(),
AccountName: storageTestConfigurationParameters.AdlsAccount,
SASKey: storageTestConfigurationParameters.AdlsSas,
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.AdlsAccount, EAccountType.ADLS()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestBlockSasKeySetOption : Failed to create Storage object")
}
_ = stg.SetupPipeline()
_ = stg.NewCredentialKey("saskey", storageTestConfigurationParameters.AdlsSas)
if err := stg.SetupPipeline(); err != nil {
assert.Fail("TestBlockSasKeySetOption : Failed to setup pipeline")
}
err := stg.TestPipeline()
if err != nil {
assert.Fail("TestBlockSasKeySetOption : Failed to TestPipeline")
}
}
func (suite *authTestSuite) TestBlockMsiAppId() {
defer suite.cleanupTest()
if !storageTestConfigurationParameters.SkipMsi {
@ -361,7 +702,7 @@ func (suite *authTestSuite) TestBlockMsiResId() {
}
// Can't use HTTP requests with MSI/SPN credentials
func (suite *authTestSuite) TestAdlskMsiAppId() {
func (suite *authTestSuite) TestAdlsMsiAppId() {
defer suite.cleanupTest()
if !storageTestConfigurationParameters.SkipMsi {
stgConfig := AzStorageConfig{
@ -374,7 +715,7 @@ func (suite *authTestSuite) TestAdlskMsiAppId() {
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.AdlsAccount, EAccountType.ADLS()),
},
}
suite.validateStorageTest("TestAdlskMsiAppId", stgConfig)
suite.validateStorageTest("TestAdlsMsiAppId", stgConfig)
}
}
@ -394,6 +735,31 @@ func (suite *authTestSuite) TestAdlskMsiResId() {
suite.validateStorageTest("TestAdlskMsiResId", stgConfig)
}
}
func (suite *authTestSuite) TestBlockInvalidSpn() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.BlockContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.SPN(),
AccountType: EAccountType.BLOCK(),
AccountName: storageTestConfigurationParameters.BlockAccount,
ClientID: storageTestConfigurationParameters.SpnClientId,
TenantID: storageTestConfigurationParameters.SpnTenantId,
ClientSecret: "",
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.BlockAccount, EAccountType.BLOCK()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestBlockInvalidSpn : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestBlockInvalidSpn : Setup pipeline even though spn is invalid")
}
}
func (suite *authTestSuite) TestBlockSpn() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
@ -411,6 +777,30 @@ func (suite *authTestSuite) TestBlockSpn() {
suite.validateStorageTest("TestBlockSpn", stgConfig)
}
func (suite *authTestSuite) TestAdlsInvalidSpn() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
container: storageTestConfigurationParameters.AdlsContainer,
authConfig: azAuthConfig{
AuthMode: EAuthType.SPN(),
AccountType: EAccountType.ADLS(),
AccountName: storageTestConfigurationParameters.AdlsAccount,
ClientID: storageTestConfigurationParameters.SpnClientId,
TenantID: storageTestConfigurationParameters.SpnTenantId,
ClientSecret: "",
Endpoint: generateEndpoint(false, storageTestConfigurationParameters.AdlsAccount, EAccountType.ADLS()),
},
}
assert := assert.New(suite.T())
stg := NewAzStorageConnection(stgConfig)
if stg == nil {
assert.Fail("TestAdlsInvalidSpn : Failed to create Storage object")
}
if err := stg.SetupPipeline(); err == nil {
assert.Fail("TestAdlsInvalidSpn : Setup pipeline even though spn is invalid")
}
}
func (suite *authTestSuite) TestAdlsSpn() {
defer suite.cleanupTest()
stgConfig := AzStorageConfig{
@ -429,7 +819,7 @@ func (suite *authTestSuite) TestAdlsSpn() {
}
func (suite *authTestSuite) cleanupTest() {
log.Destroy()
_ = log.Destroy()
}
func TestAuthTestSuite(t *testing.T) {

View file

@ -34,7 +34,7 @@
package azstorage
import (
"blobfuse2/common/log"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-azcopy/v10/azbfs"
"github.com/Azure/azure-storage-blob-go/azblob"

View file

@ -34,12 +34,14 @@
package azstorage
import (
"blobfuse2/common/log"
"time"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-azcopy/v10/azbfs"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/Azure/go-autorest/autorest/adal"
"github.com/Azure/go-autorest/autorest/azure"
)
// Verify that the Auth implement the correct AzAuth interfaces
@ -52,8 +54,18 @@ type azAuthMSI struct {
// fetchToken : Generates a token based on the config
func (azmsi *azAuthMSI) fetchToken() (*adal.ServicePrincipalToken, error) {
resourceURL := azmsi.getEndpoint()
spt, err := adal.NewServicePrincipalTokenFromManagedIdentity(resourceURL, &adal.ManagedIdentityOptions{
// The resource string is fixed and has no relation to any user input.
// It is not the resource URL; rather, it identifies the resource type and tenant.
// The structure offers two options, Datalake and Storage, but Datalake is not
// populated and does not work in all cloud types (US Government, German, China, etc.).
// resource := azure.PublicCloud.ResourceIdentifiers.Datalake
resource := azure.PublicCloud.ResourceIdentifiers.Storage
if azmsi.config.AuthResource != "" {
resource = azmsi.config.AuthResource
}
log.Info("AzAuthMSI::fetchToken : Resource : %s", resource)
spt, err := adal.NewServicePrincipalTokenFromManagedIdentity(resource, &adal.ManagedIdentityOptions{
ClientID: azmsi.config.ApplicationID,
IdentityResourceID: azmsi.config.ResourceID,
}, func(token adal.Token) error { return nil })
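This hunk is behind the changelog entry about MSI sending the correct resource string: when auth-resource is not set, the token request falls back to azure.PublicCloud.ResourceIdentifiers.Storage (https://storage.azure.com/ in the public cloud). A hedged config sketch with placeholder account values; the identity selection keys (app id / resource id) are omitted here:

    azstorage:
      type: block
      account-name: myaccount    # placeholder
      container: mycontainer     # placeholder
      mode: msi
      auth-resource: https://storage.azure.com/   # override the token resource, e.g. for sovereign clouds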

View file

@ -34,9 +34,10 @@
package azstorage
import (
"blobfuse2/common/log"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-azcopy/v10/azbfs"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/Azure/azure-storage-file-go/azfile"

View file

@ -34,9 +34,10 @@
package azstorage
import (
"blobfuse2/common/log"
"time"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-azcopy/v10/azbfs"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/Azure/go-autorest/autorest/adal"

View file

@ -34,17 +34,18 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"context"
"fmt"
"sync/atomic"
"syscall"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/spf13/cobra"
)
@ -75,7 +76,7 @@ func (az *AzStorage) SetNextComponent(c internal.Component) {
}
// Configure : Pipeline will call this method after constructor so that you can read config and initialize yourself
func (az *AzStorage) Configure() error {
func (az *AzStorage) Configure(isParent bool) error {
log.Trace("AzStorage::Configure : %s", az.Name())
conf := AzStorageOptions{}
@ -91,7 +92,7 @@ func (az *AzStorage) Configure() error {
return fmt.Errorf("config error in %s [%s]", az.Name(), err.Error())
}
err = az.configureAndTest()
err = az.configureAndTest(isParent)
if err != nil {
log.Err("AzStorage::Configure : Failed to validate storage account (%s)", err.Error())
return err
@ -121,10 +122,14 @@ func (az *AzStorage) OnConfigChange() {
return
}
az.storage.UpdateConfig(az.stConfig)
err = az.storage.UpdateConfig(az.stConfig)
if err != nil {
log.Err("AzStorage::OnConfigChange : failed to UpdateConfig", err.Error())
return
}
}
func (az *AzStorage) configureAndTest() error {
func (az *AzStorage) configureAndTest(isParent bool) error {
az.storage = NewAzStorageConnection(az.stConfig)
err := az.storage.SetupPipeline()
@ -133,12 +138,19 @@ func (az *AzStorage) configureAndTest() error {
return err
}
az.storage.SetPrefixPath(az.stConfig.prefixPath)
err = az.storage.TestPipeline()
err = az.storage.SetPrefixPath(az.stConfig.prefixPath)
if err != nil {
log.Err("AzStorage::configureAndTest : Failed to validate credentials (%s)", err.Error())
return fmt.Errorf("failed to authenticate credentials for %s", az.Name())
log.Err("AzStorage::configureAndTest : Failed to set prefix path (%s)", err.Error())
return err
}
// The daemon runs all pipeline Configure code twice. isParent allows us to only validate credentials in parent mode, preventing a second unnecessary REST call.
if isParent {
err = az.storage.TestPipeline()
if err != nil {
log.Err("AzStorage::configureAndTest : Failed to validate credentials (%s)", err.Error())
return fmt.Errorf("failed to authenticate credentials for %s", az.Name())
}
}
return nil
@ -243,6 +255,17 @@ func (az *AzStorage) ReadDir(options internal.ReadDirOptions) ([]*internal.ObjAt
func (az *AzStorage) StreamDir(options internal.StreamDirOptions) ([]*internal.ObjAttr, string, error) {
log.Trace("AzStorage::StreamDir : Path %s, offset %d, count %d", options.Name, options.Offset, options.Count)
if az.listBlocked {
diff := time.Since(az.startTime)
if diff.Seconds() > float64(az.stConfig.cancelListForSeconds) {
az.listBlocked = false
log.Info("AzStorage::StreamDir : Unblocked List API")
} else {
log.Info("AzStorage::StreamDir : Blocked List API for %d more seconds", int(az.stConfig.cancelListForSeconds)-int(diff.Seconds()))
return make([]*internal.ObjAttr, 0), "", nil
}
}
path := formatListDirName(options.Name)
new_list, new_marker, err := az.storage.List(path, &options.Token, options.Count)

View file

@ -34,9 +34,6 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"bytes"
"context"
"encoding/base64"
@ -49,6 +46,10 @@ import (
"syscall"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-blob-go/azblob"
)
@ -66,6 +67,7 @@ type BlockBlob struct {
blobCPKOpt azblob.ClientProvidedKeyOptions
downloadOptions azblob.DownloadFromBlobOptions
listDetails azblob.BlobListingDetails
blockLocks common.KeyedMutex
}
// Verify that BlockBlob implements AzConnection interface
@ -193,7 +195,9 @@ func (bb *BlockBlob) TestPipeline() error {
marker := (azblob.Marker{})
listBlob, err := bb.Container.ListBlobsHierarchySegment(context.Background(), marker, "/",
azblob.ListBlobsSegmentOptions{MaxResults: 2})
azblob.ListBlobsSegmentOptions{MaxResults: 2,
Prefix: bb.Config.prefixPath,
})
if err != nil {
log.Err("BlockBlob::TestPipeline : Failed to validate account with given auth %s", err.Error)
@ -234,15 +238,6 @@ func (bb *BlockBlob) SetPrefixPath(path string) error {
return nil
}
// Exists : Check whether or not a given blob exists
func (bb *BlockBlob) Exists(name string) bool {
log.Trace("BlockBlob::Exists : name %s", name)
if _, err := bb.GetAttr(name); err == syscall.ENOENT {
return false
}
return true
}
// CreateFile : Create a new file in the container/virtual directory
func (bb *BlockBlob) CreateFile(name string, mode os.FileMode) error {
log.Trace("BlockBlob::CreateFile : name %s", name)
@ -311,7 +306,10 @@ func (bb *BlockBlob) DeleteDirectory(name string) (err error) {
// Process the blobs returned in this result segment (if the segment is empty, the loop body won't execute)
for _, blobInfo := range listBlob.Segment.BlobItems {
bb.DeleteFile(split(bb.Config.prefixPath, blobInfo.Name))
err = bb.DeleteFile(split(bb.Config.prefixPath, blobInfo.Name))
if err != nil {
log.Err("BlockBlob::DeleteDirectory : Failed to delete file %s (%s)", blobInfo.Name, err.Error)
}
}
}
return bb.DeleteFile(name)
@ -378,7 +376,10 @@ func (bb *BlockBlob) RenameDirectory(source string, target string) error {
// Process the blobs returned in this result segment (if the segment is empty, the loop body won't execute)
for _, blobInfo := range listBlob.Segment.BlobItems {
srcPath := split(bb.Config.prefixPath, blobInfo.Name)
bb.RenameFile(srcPath, strings.Replace(srcPath, source, target, 1))
err = bb.RenameFile(srcPath, strings.Replace(srcPath, source, target, 1))
if err != nil {
log.Err("BlockBlob::RenameDirectory : Failed to rename file %s (%s)", srcPath, err.Error)
}
}
}
@ -732,7 +733,7 @@ func (bb *BlockBlob) GetFileBlockOffsets(name string) (*common.BlockOffsetList,
return &common.BlockOffsetList{}, err
}
// if block list empty its a small file
if len(blockList.BlockList) == 0 {
if len(storageBlockList.CommittedBlocks) == 0 {
blockList.Flags.Set(common.SmallFile)
return &blockList, nil
}
@ -745,6 +746,7 @@ func (bb *BlockBlob) GetFileBlockOffsets(name string) (*common.BlockOffsetList,
blockOffset += block.Size
blockList.BlockList = append(blockList.BlockList, blk)
}
// blockList.Etag = storageBlockList.ETag()
blockList.BlockIdLength = common.GetIdLength(blockList.BlockList[0].Id)
return &blockList, nil
}
@ -793,7 +795,12 @@ func (bb *BlockBlob) removeBlocks(blockList *common.BlockOffsetList, size int64,
blk.EndIndex = size
blk.Data = make([]byte, blk.EndIndex-blk.StartIndex)
blk.Flags.Set(common.DirtyBlock)
bb.ReadInBuffer(name, blk.StartIndex, blk.EndIndex-blk.StartIndex, blk.Data)
err := bb.ReadInBuffer(name, blk.StartIndex, blk.EndIndex-blk.StartIndex, blk.Data)
if err != nil {
log.Err("BlockBlob::removeBlocks : Failed to remove blocks %s (%s)", name, err.Error())
}
}
blockList.BlockList = blockList.BlockList[:index+1]
@ -932,7 +939,10 @@ func (bb *BlockBlob) Write(options internal.WriteFileOptions) error {
oldDataBuffer := make([]byte, oldDataSize+newBufferSize)
if !appendOnly {
// fetch the blocks that will be impacted by the new changes so we can overwrite them
bb.ReadInBuffer(name, fileOffsets.BlockList[index].StartIndex, oldDataSize, oldDataBuffer)
err = bb.ReadInBuffer(name, fileOffsets.BlockList[index].StartIndex, oldDataSize, oldDataBuffer)
if err != nil {
log.Err("BlockBlob::Write : Failed to read data in buffer %s (%s)", name, err.Error())
}
}
// this gives us the offset with respect to the buffer that holds our old data - so we can start writing the new data
blockOffset := offset - fileOffsets.BlockList[index].StartIndex
@ -980,6 +990,10 @@ func (bb *BlockBlob) stageAndCommitModifiedBlocks(name string, data []byte, offs
}
func (bb *BlockBlob) StageAndCommit(name string, bol *common.BlockOffsetList) error {
// lock on the blob name so that no stage and commit race condition occur causing failure
blobMtx := bb.blockLocks.GetLock(name)
blobMtx.Lock()
defer blobMtx.Unlock()
blobURL := bb.Container.NewBlockBlobURL(filepath.Join(bb.Config.prefixPath, name))
var blockIDList []string
var data []byte
@ -1013,6 +1027,7 @@ func (bb *BlockBlob) StageAndCommit(name string, bol *common.BlockOffsetList) er
azblob.BlobHTTPHeaders{ContentType: getContentType(name)},
nil,
bb.blobAccCond,
// azblob.BlobAccessConditions{ModifiedAccessConditions: azblob.ModifiedAccessConditions{IfMatch: bol.Etag}},
bb.Config.defaultTier,
nil, // datalake doesn't support tags here
bb.downloadOptions.ClientProvidedKeyOptions)
@ -1020,6 +1035,8 @@ func (bb *BlockBlob) StageAndCommit(name string, bol *common.BlockOffsetList) er
log.Err("BlockBlob::StageAndCommit : Failed to commit block list to blob %s (%s)", name, err.Error())
return err
}
// update the etag
// bol.Etag = resp.ETag()
}
return nil
}
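StageAndCommit now serializes concurrent stage/commit cycles per blob via common.KeyedMutex. Its definition is outside this diff; a plausible sketch consistent with the GetLock usage above, assuming a sync.Map-backed implementation:

    import "sync"

    // KeyedMutex hands out one mutex per key, so operations on
    // independent blobs never contend with each other.
    type KeyedMutex struct {
        mutexes sync.Map // blob name -> *sync.Mutex
    }

    // GetLock returns the mutex for key, creating it on first use.
    // LoadOrStore guarantees two racing goroutines asking for the
    // same blob name always receive the same *sync.Mutex instance.
    func (km *KeyedMutex) GetLock(key string) *sync.Mutex {
        m, _ := km.mutexes.LoadOrStore(key, &sync.Mutex{})
        return m.(*sync.Mutex)
    }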

View file

@ -1,4 +1,5 @@
// +build !authtest
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
@ -35,11 +36,6 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"bytes"
"container/list"
"context"
@ -58,6 +54,12 @@ import (
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/Azure/azure-pipeline-go/pipeline"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/stretchr/testify/assert"
@ -171,9 +173,9 @@ type blockBlobTestSuite struct {
}
func newTestAzStorage(configuration string) (*AzStorage, error) {
config.ReadConfigFromReader(strings.NewReader(configuration))
_ = config.ReadConfigFromReader(strings.NewReader(configuration))
az := NewazstorageComponent()
err := az.Configure()
err := az.Configure(true)
return az.(*AzStorage), err
}
@ -186,7 +188,7 @@ func (s *blockBlobTestSuite) SetupTest() {
FileCount: 10,
Level: common.ELogLevel.LOG_DEBUG(),
}
log.SetDefaultLogger("base", cfg)
_ = log.SetDefaultLogger("base", cfg)
homeDir, err := os.UserHomeDir()
if err != nil {
@ -224,26 +226,26 @@ func (s *blockBlobTestSuite) setupTestHelper(configuration string, container str
s.assert = assert.New(s.T())
s.az, _ = newTestAzStorage(configuration)
s.az.Start(ctx) // Note: Start->TestValidation will fail but it doesn't matter. We are creating the container a few lines below anyway.
_ = s.az.Start(ctx) // Note: Start->TestValidation will fail but it doesn't matter. We are creating the container a few lines below anyway.
// We could create the container before but that requires rewriting the code to new up a service client.
s.serviceUrl = s.az.storage.(*BlockBlob).Service // Grab the service client to do some validation
s.containerUrl = s.serviceUrl.NewContainerURL(s.container)
if create {
s.containerUrl.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
_, _ = s.containerUrl.Create(ctx, azblob.Metadata{}, azblob.PublicAccessNone)
}
}
func (s *blockBlobTestSuite) tearDownTestHelper(delete bool) {
s.az.Stop()
_ = s.az.Stop()
if delete {
s.containerUrl.Delete(ctx, azblob.ContainerAccessConditions{})
_, _ = s.containerUrl.Delete(ctx, azblob.ContainerAccessConditions{})
}
}
func (s *blockBlobTestSuite) cleanupTest() {
s.tearDownTestHelper(true)
log.Destroy()
_ = log.Destroy()
}
func (s *blockBlobTestSuite) TestInvalidBlockSize() {
@ -1121,7 +1123,7 @@ func (s *blockBlobTestSuite) TestWriteFile() {
s.assert.EqualValues(testData, output)
}
func (s *blockBlobTestSuite) TestTruncateFileSmaller() {
func (s *blockBlobTestSuite) TestTruncateSmallFileSmaller() {
defer s.cleanupTest()
// Setup
name := generateFileName()
@ -1143,7 +1145,33 @@ func (s *blockBlobTestSuite) TestTruncateFileSmaller() {
s.assert.EqualValues(testData[:truncatedLength], output)
}
func (s *blockBlobTestSuite) TestTruncateFileEqual() {
func (s *blockBlobTestSuite) TestTruncateChunkedFileSmaller() {
defer s.cleanupTest()
// Setup
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
testData := "test data"
data := []byte(testData)
truncatedLength := 5
// use our method to make the max upload size (size before a blob is broken down to blocks) to 4 Bytes
_, err := uploadReaderAtToBlockBlob(ctx, bytes.NewReader(data), int64(len(data)), 4, s.containerUrl.NewBlockBlobURL(name), azblob.UploadToBlockBlobOptions{
BlockSize: 4,
})
s.assert.Nil(err)
err = s.az.TruncateFile(internal.TruncateFileOptions{Name: name, Size: int64(truncatedLength)})
s.assert.Nil(err)
// Blob should have updated data
file := s.containerUrl.NewBlobURL(name)
resp, err := file.Download(ctx, 0, int64(truncatedLength), azblob.BlobAccessConditions{}, false, azblob.ClientProvidedKeyOptions{})
s.assert.Nil(err)
s.assert.EqualValues(truncatedLength, resp.ContentLength())
output, _ := ioutil.ReadAll(resp.Body(azblob.RetryReaderOptions{}))
s.assert.EqualValues(testData[:truncatedLength], output)
}
func (s *blockBlobTestSuite) TestTruncateSmallFileEqual() {
defer s.cleanupTest()
// Setup
name := generateFileName()
@ -1165,7 +1193,33 @@ func (s *blockBlobTestSuite) TestTruncateFileEqual() {
s.assert.EqualValues(testData, output)
}
func (s *blockBlobTestSuite) TestTruncateFileBigger() {
func (s *blockBlobTestSuite) TestTruncateChunkedFileEqual() {
defer s.cleanupTest()
// Setup
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
testData := "test data"
data := []byte(testData)
truncatedLength := 9
// use our method to make the max upload size (size before a blob is broken down to blocks) to 4 Bytes
_, err := uploadReaderAtToBlockBlob(ctx, bytes.NewReader(data), int64(len(data)), 4, s.containerUrl.NewBlockBlobURL(name), azblob.UploadToBlockBlobOptions{
BlockSize: 4,
})
s.assert.Nil(err)
err = s.az.TruncateFile(internal.TruncateFileOptions{Name: name, Size: int64(truncatedLength)})
s.assert.Nil(err)
// Blob should have updated data
file := s.containerUrl.NewBlobURL(name)
resp, err := file.Download(ctx, 0, int64(truncatedLength), azblob.BlobAccessConditions{}, false, azblob.ClientProvidedKeyOptions{})
s.assert.Nil(err)
s.assert.EqualValues(truncatedLength, resp.ContentLength())
output, _ := ioutil.ReadAll(resp.Body(azblob.RetryReaderOptions{}))
s.assert.EqualValues(testData, output)
}
func (s *blockBlobTestSuite) TestTruncateSmallFileBigger() {
defer s.cleanupTest()
// Setup
name := generateFileName()
@ -1187,6 +1241,32 @@ func (s *blockBlobTestSuite) TestTruncateFileBigger() {
s.assert.EqualValues(testData, output[:len(data)])
}
func (s *blockBlobTestSuite) TestTruncateChunkedFileBigger() {
defer s.cleanupTest()
// Setup
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
testData := "test data"
data := []byte(testData)
truncatedLength := 15
// use our method to make the max upload size (size before a blob is broken down to blocks) to 4 Bytes
_, err := uploadReaderAtToBlockBlob(ctx, bytes.NewReader(data), int64(len(data)), 4, s.containerUrl.NewBlockBlobURL(name), azblob.UploadToBlockBlobOptions{
BlockSize: 4,
})
s.assert.Nil(err)
err = s.az.TruncateFile(internal.TruncateFileOptions{Name: name, Size: int64(truncatedLength)})
s.assert.Nil(err)
// Blob should have updated data
file := s.containerUrl.NewBlobURL(name)
resp, err := file.Download(ctx, 0, int64(truncatedLength), azblob.BlobAccessConditions{}, false, azblob.ClientProvidedKeyOptions{})
s.assert.Nil(err)
s.assert.EqualValues(truncatedLength, resp.ContentLength())
output, _ := ioutil.ReadAll(resp.Body(azblob.RetryReaderOptions{}))
s.assert.EqualValues(testData, output[:len(data)])
}
func (s *blockBlobTestSuite) TestTruncateFileError() {
defer s.cleanupTest()
// Setup
@ -1441,7 +1521,7 @@ func (s *blockBlobTestSuite) TestOverwriteAndAppendBlocks() {
s.assert.Nil(err)
f, _ = os.Open(f.Name())
len, err := f.Read(output)
len, _ := f.Read(output)
s.assert.EqualValues(dataLen, len)
s.assert.EqualValues(currentData, output)
f.Close()
@ -1474,7 +1554,7 @@ func (s *blockBlobTestSuite) TestAppendBlocks() {
s.assert.Nil(err)
f, _ = os.Open(f.Name())
len, err := f.Read(output)
len, _ := f.Read(output)
s.assert.EqualValues(dataLen, len)
s.assert.EqualValues(currentData, output)
f.Close()
@ -1507,7 +1587,7 @@ func (s *blockBlobTestSuite) TestAppendOffsetLargerThanSize() {
s.assert.Nil(err)
f, _ = os.Open(f.Name())
len, err := f.Read(output)
len, _ := f.Read(output)
s.assert.EqualValues(dataLen, len)
s.assert.EqualValues(currentData, output)
f.Close()
@ -1703,6 +1783,21 @@ func (s *blockBlobTestSuite) TestChmod() {
s.assert.EqualValues(syscall.ENOTSUP, err)
}
func (s *blockBlobTestSuite) TestChmodIgnore() {
defer s.cleanupTest()
// Setup
s.tearDownTestHelper(false) // Don't delete the generated container.
config := fmt.Sprintf("azstorage:\n account-name: %s\n endpoint: https://%s.blob.core.windows.net/\n type: block\n account-key: %s\n mode: key\n container: %s\n fail-unsupported-op: false\n",
storageTestConfigurationParameters.BlockAccount, storageTestConfigurationParameters.BlockAccount, storageTestConfigurationParameters.BlockKey, s.container)
s.setupTestHelper(config, s.container, true)
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
err := s.az.Chmod(internal.ChmodOptions{Name: name, Mode: 0666})
s.assert.Nil(err)
}
func (s *blockBlobTestSuite) TestChown() {
defer s.cleanupTest()
// Setup
@ -1714,7 +1809,22 @@ func (s *blockBlobTestSuite) TestChown() {
s.assert.EqualValues(syscall.ENOTSUP, err)
}
func (s *blockBlobTestSuite) TestXBlockSize() {
func (s *blockBlobTestSuite) TestChownIgnore() {
defer s.cleanupTest()
// Setup
s.tearDownTestHelper(false) // Don't delete the generated container.
config := fmt.Sprintf("azstorage:\n account-name: %s\n endpoint: https://%s.blob.core.windows.net/\n type: block\n account-key: %s\n mode: key\n container: %s\n fail-unsupported-op: false\n",
storageTestConfigurationParameters.BlockAccount, storageTestConfigurationParameters.BlockAccount, storageTestConfigurationParameters.BlockKey, s.container)
s.setupTestHelper(config, s.container, true)
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
err := s.az.Chown(internal.ChownOptions{Name: name, Owner: 6, Group: 5})
s.assert.Nil(err)
}
func (s *blockBlobTestSuite) TestBlockSize() {
defer s.cleanupTest()
// Setup
name := generateFileName()
@ -1802,6 +1912,46 @@ func (s *blockBlobTestSuite) TestXBlockSize() {
s.assert.EqualValues(block, 0)
}
func (s *blockBlobTestSuite) TestGetFileBlockOffsetsSmallFile() {
defer s.cleanupTest()
// Setup
name := generateFileName()
h, _ := s.az.CreateFile(internal.CreateFileOptions{Name: name})
testData := "testdatates1dat1tes2dat2tes3dat3tes4dat4"
data := []byte(testData)
s.az.WriteFile(internal.WriteFileOptions{Handle: h, Offset: 0, Data: data})
// GetFileBlockOffsets
offsetList, err := s.az.GetFileBlockOffsets(internal.GetFileBlockOffsetsOptions{Name: name})
s.assert.Nil(err)
s.assert.Len(offsetList.BlockList, 0)
s.assert.True(offsetList.SmallFile())
s.assert.EqualValues(0, offsetList.BlockIdLength)
}
func (s *blockBlobTestSuite) TestGetFileBlockOffsetsChunkedFile() {
defer s.cleanupTest()
// Setup
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
testData := "testdatates1dat1tes2dat2tes3dat3tes4dat4"
data := []byte(testData)
// use our method to make the max upload size (size before a blob is broken down to blocks) to 4 Bytes
_, err := uploadReaderAtToBlockBlob(ctx, bytes.NewReader(data), int64(len(data)), 4, s.containerUrl.NewBlockBlobURL(name), azblob.UploadToBlockBlobOptions{
BlockSize: 4,
})
s.assert.Nil(err)
// GetFileBlockOffsets
offsetList, err := s.az.GetFileBlockOffsets(internal.GetFileBlockOffsetsOptions{Name: name})
s.assert.Nil(err)
s.assert.Len(offsetList.BlockList, 10)
s.assert.Zero(offsetList.Flags)
s.assert.EqualValues(16, offsetList.BlockIdLength)
}
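The expected values in these two tests follow from simple ceiling arithmetic: the 40-byte payload staged with BlockSize 4 commits 40/4 = 10 blocks, while anything uploaded in a single shot keeps an empty committed-block list and is treated as a small file. A hypothetical helper making the rule explicit:

    // expectedBlocks returns how many fixed-size blocks a payload of
    // payloadSize bytes occupies when staged blockSize bytes at a time.
    func expectedBlocks(payloadSize, blockSize int64) int64 {
        return (payloadSize + blockSize - 1) / blockSize // ceiling division
    }

    // expectedBlocks(40, 4) == 10, matching the chunked-file assertion above.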
// func (s *blockBlobTestSuite) TestRAGRS() {
// defer s.cleanupTest()
// // Setup

View file

@ -34,12 +34,13 @@
package azstorage
import (
"blobfuse2/common/config"
"blobfuse2/common/log"
"errors"
"reflect"
"strings"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/JeffreyRichter/enum/enum"
)
@ -166,6 +167,7 @@ type AzStorageOptions struct {
HttpsProxyAddress string `config:"https-proxy" yaml:"https-proxy,omitempty"`
SdkTrace bool `config:"sdk-trace" yaml:"sdk-trace,omitempty"`
FailUnsupportedOp bool `config:"fail-unsupported-op" yaml:"fail-unsupported-op,omitempty"`
AuthResourceString string `config:"auth-resource" yaml:"auth-resource,omitempty"`
}
// RegisterEnvVariables : Register environment varilables
@ -246,7 +248,12 @@ func ParseAndValidateConfig(az *AzStorage, opt AzStorageOptions) error {
}
var accountType AccountType
accountType.Parse(opt.AccountType)
err := accountType.Parse(opt.AccountType)
if err != nil {
log.Err("ParseAndValidateConfig : Failed to parse account type %s", opt.AccountType)
return errors.New("invalid account type")
}
az.stConfig.authConfig.AccountType = accountType
if accountType == EAccountType.INVALID_ACC() {
log.Err("ParseAndValidateConfig : Invalid account type %s", opt.AccountType)
@ -254,7 +261,10 @@ func ParseAndValidateConfig(az *AzStorage, opt AzStorageOptions) error {
}
// Validate container name is present or not
config.UnmarshalKey("mount-all-containers", &az.stConfig.mountAllContainers)
err = config.UnmarshalKey("mount-all-containers", &az.stConfig.mountAllContainers)
if err != nil {
log.Err("ParseAndValidateConfig : Failed to detect mount-all-container")
}
if !az.stConfig.mountAllContainers && opt.Container == "" {
return errors.New("container name not provided")
@ -294,18 +304,18 @@ func ParseAndValidateConfig(az *AzStorage, opt AzStorageOptions) error {
az.stConfig.proxyAddress = opt.HttpsProxyAddress
} else {
if httpProxyProvided {
log.Err("BlockBlob::ParseAndValidateConfig : `http-proxy` Invalid : must set `use-http: true` in your config file")
log.Err("ParseAndValidateConfig : `http-proxy` Invalid : must set `use-http: true` in your config file")
return errors.New("`http-proxy` Invalid : must set `use-http: true` in your config file")
}
}
}
log.Info("BlockBlob::ParseAndValidateConfig : using the following proxy address from the config file: %s", az.stConfig.proxyAddress)
log.Info("ParseAndValidateConfig : using the following proxy address from the config file: %s", az.stConfig.proxyAddress)
az.stConfig.sdkTrace = opt.SdkTrace
log.Info("BlockBlob::ParseAndValidateConfig : sdk logging from the config file: %t", az.stConfig.sdkTrace)
log.Info("ParseAndValidateConfig : sdk logging from the config file: %t", az.stConfig.sdkTrace)
err := ParseAndReadDynamicConfig(az, opt, false)
err = ParseAndReadDynamicConfig(az, opt, false)
if err != nil {
return err
}
@ -315,7 +325,12 @@ func ParseAndValidateConfig(az *AzStorage, opt AzStorageOptions) error {
opt.AuthMode = "key"
}
authType.Parse(opt.AuthMode)
err = authType.Parse(opt.AuthMode)
if err != nil {
log.Err("ParseAndValidateConfig : Invalid auth type %s", opt.AccountType)
return errors.New("invalid auth type")
}
switch authType {
case EAuthType.KEY():
az.stConfig.authConfig.AuthMode = EAuthType.KEY()
@ -328,7 +343,7 @@ func ParseAndValidateConfig(az *AzStorage, opt AzStorageOptions) error {
if opt.SaSKey == "" {
return errors.New("SAS key not provided")
}
az.stConfig.authConfig.SASKey = opt.SaSKey
az.stConfig.authConfig.SASKey = sanitizeSASKey(opt.SaSKey)
case EAuthType.MSI():
az.stConfig.authConfig.AuthMode = EAuthType.MSI()
if opt.ApplicationID == "" && opt.ResourceID == "" {
@ -336,6 +351,7 @@ func ParseAndValidateConfig(az *AzStorage, opt AzStorageOptions) error {
}
az.stConfig.authConfig.ApplicationID = opt.ApplicationID
az.stConfig.authConfig.ResourceID = opt.ResourceID
case EAuthType.SPN():
az.stConfig.authConfig.AuthMode = EAuthType.SPN()
if opt.ClientID == "" || opt.ClientSecret == "" || opt.TenantID == "" {
@ -348,6 +364,7 @@ func ParseAndValidateConfig(az *AzStorage, opt AzStorageOptions) error {
log.Err("ParseAndValidateConfig : Invalid auth mode %s", opt.AuthMode)
return errors.New("invalid auth mode")
}
az.stConfig.authConfig.AuthResource = opt.AuthResourceString
// Retry policy configuration
// A user provided value of 0 doesn't make sense for MaxRetries, MaxTimeout, BackoffTime, or MaxRetryDelay.
@ -408,13 +425,14 @@ func ParseAndReadDynamicConfig(az *AzStorage, opt AzStorageOptions, reload bool)
}
oldSas := az.stConfig.authConfig.SASKey
az.stConfig.authConfig.SASKey = opt.SaSKey
az.stConfig.authConfig.SASKey = sanitizeSASKey(opt.SaSKey)
if reload {
log.Info("ParseAndReadDynamicConfig : SAS Key updated")
if err := az.storage.NewCredentialKey("saskey", az.stConfig.authConfig.SASKey); err != nil {
az.stConfig.authConfig.SASKey = oldSas
az.storage.NewCredentialKey("saskey", az.stConfig.authConfig.SASKey)
_ = az.storage.NewCredentialKey("saskey", az.stConfig.authConfig.SASKey)
return errors.New("SAS key update failure")
}
}
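Both SAS code paths now run the key through sanitizeSASKey (defined in utils.go below), so a token copied without its leading '?' no longer breaks the mount. A hedged config sketch with placeholder values:

    azstorage:
      type: block
      account-name: myaccount        # placeholder
      container: mycontainer         # placeholder
      mode: sas
      sas: sv=2021-08-06&sig=abc123  # placeholder token; the leading '?' is now optional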

View file

@ -34,12 +34,13 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"net/url"
"os"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-pipeline-go/pipeline"
"github.com/Azure/azure-storage-blob-go/azblob"
)
@ -90,7 +91,6 @@ type AzConnection interface {
// This is just for test, shall not be used otherwise
SetPrefixPath(string) error
Exists(name string) bool
CreateFile(name string, mode os.FileMode) error
CreateDirectory(name string) error
CreateLink(source string, target string) error

View file

@ -34,9 +34,6 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"context"
"errors"
"net/url"
@ -46,6 +43,10 @@ import (
"syscall"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-azcopy/v10/azbfs"
)
@ -130,6 +131,10 @@ func (dl *Datalake) getCredential() azbfs.Credential {
}
cred := dl.Auth.getCredential()
if cred == nil {
log.Err("Datalake::getCredential : Failed to get credential")
return nil
}
return cred.(azbfs.Credential)
}
@ -182,10 +187,10 @@ func (dl *Datalake) TestPipeline() error {
return nil
}
var maxResults int32
maxResults = 2
maxResults := int32(2)
listPath, err := dl.Filesystem.ListPaths(context.Background(),
azbfs.ListPathsFilesystemOptions{
Path: &dl.Config.prefixPath,
Recursive: false,
MaxResults: &maxResults,
})
@ -212,12 +217,6 @@ func (dl *Datalake) SetPrefixPath(path string) error {
return dl.BlockBlob.SetPrefixPath(path)
}
// Exists : Check whether or not a given path exists
func (dl *Datalake) Exists(name string) bool {
log.Trace("Datalake::Exists : name %s", name)
return dl.BlockBlob.Exists(name)
}
// CreateFile : Create a new file in the filesystem/directory
func (dl *Datalake) CreateFile(name string, mode os.FileMode) error {
log.Trace("Datalake::CreateFile : name %s", name)

View file

@ -1,4 +1,5 @@
// +build !authtest
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
@ -35,10 +36,6 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"bytes"
"container/list"
"encoding/json"
@ -50,6 +47,11 @@ import (
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/Azure/azure-storage-azcopy/v10/azbfs"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/stretchr/testify/assert"
@ -1303,7 +1305,7 @@ func (s *datalakeTestSuite) TestWriteFile() {
s.assert.EqualValues(testData, output)
}
func (s *datalakeTestSuite) TestTruncateFileSmaller() {
func (s *datalakeTestSuite) TestTruncateSmallFileSmaller() {
defer s.cleanupTest()
// Setup
name := generateFileName()
@ -1325,7 +1327,34 @@ func (s *datalakeTestSuite) TestTruncateFileSmaller() {
s.assert.EqualValues(testData[:truncatedLength], output)
}
func (s *datalakeTestSuite) TestTruncateFileEqual() {
func (s *datalakeTestSuite) TestTruncateChunkedFileSmaller() {
defer s.cleanupTest()
// Setup
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
testData := "test data"
data := []byte(testData)
truncatedLength := 5
// use our method to make the max upload size (size before a blob is broken down to blocks) to 4 Bytes
_, err := uploadReaderAtToBlockBlob(ctx, bytes.NewReader(data), int64(len(data)), 4,
s.az.storage.(*Datalake).BlockBlob.Container.NewBlockBlobURL(name), azblob.UploadToBlockBlobOptions{
BlockSize: 4,
})
s.assert.Nil(err)
err = s.az.TruncateFile(internal.TruncateFileOptions{Name: name, Size: int64(truncatedLength)})
s.assert.Nil(err)
// Blob should have updated data
file := s.containerUrl.NewRootDirectoryURL().NewFileURL(name)
resp, err := file.Download(ctx, 0, int64(truncatedLength))
s.assert.Nil(err)
s.assert.EqualValues(truncatedLength, resp.ContentLength())
output, _ := ioutil.ReadAll(resp.Body(azbfs.RetryReaderOptions{}))
s.assert.EqualValues(testData[:truncatedLength], output)
}
func (s *datalakeTestSuite) TestTruncateSmallFileEqual() {
defer s.cleanupTest()
// Setup
name := generateFileName()
@ -1347,7 +1376,34 @@ func (s *datalakeTestSuite) TestTruncateFileEqual() {
s.assert.EqualValues(testData, output)
}
func (s *datalakeTestSuite) TestTruncateFileBigger() {
func (s *datalakeTestSuite) TestTruncateChunkedFileEqual() {
defer s.cleanupTest()
// Setup
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
testData := "test data"
data := []byte(testData)
truncatedLength := 9
// use our method to make the max upload size (size before a blob is broken down to blocks) to 4 Bytes
_, err := uploadReaderAtToBlockBlob(ctx, bytes.NewReader(data), int64(len(data)), 4,
s.az.storage.(*Datalake).BlockBlob.Container.NewBlockBlobURL(name), azblob.UploadToBlockBlobOptions{
BlockSize: 4,
})
s.assert.Nil(err)
err = s.az.TruncateFile(internal.TruncateFileOptions{Name: name, Size: int64(truncatedLength)})
s.assert.Nil(err)
// Blob should have updated data
file := s.containerUrl.NewRootDirectoryURL().NewFileURL(name)
resp, err := file.Download(ctx, 0, int64(truncatedLength))
s.assert.Nil(err)
s.assert.EqualValues(truncatedLength, resp.ContentLength())
output, _ := ioutil.ReadAll(resp.Body(azbfs.RetryReaderOptions{}))
s.assert.EqualValues(testData, output)
}
func (s *datalakeTestSuite) TestTruncateSmallFileBigger() {
defer s.cleanupTest()
// Setup
name := generateFileName()
@ -1369,6 +1425,33 @@ func (s *datalakeTestSuite) TestTruncateFileBigger() {
s.assert.EqualValues(testData, output[:len(data)])
}
func (s *datalakeTestSuite) TestTruncateChunkedFileBigger() {
defer s.cleanupTest()
// Setup
name := generateFileName()
s.az.CreateFile(internal.CreateFileOptions{Name: name})
testData := "test data"
data := []byte(testData)
truncatedLength := 15
// use our method to make the max upload size (size before a blob is broken down to blocks) to 4 Bytes
_, err := uploadReaderAtToBlockBlob(ctx, bytes.NewReader(data), int64(len(data)), 4,
s.az.storage.(*Datalake).BlockBlob.Container.NewBlockBlobURL(name), azblob.UploadToBlockBlobOptions{
BlockSize: 4,
})
s.assert.Nil(err)
err = s.az.TruncateFile(internal.TruncateFileOptions{Name: name, Size: int64(truncatedLength)})
s.assert.Nil(err)
// Blob should have updated data
file := s.containerUrl.NewRootDirectoryURL().NewFileURL(name)
resp, err := file.Download(ctx, 0, int64(truncatedLength))
s.assert.Nil(err)
s.assert.EqualValues(truncatedLength, resp.ContentLength())
output, _ := ioutil.ReadAll(resp.Body(azbfs.RetryReaderOptions{}))
s.assert.EqualValues(testData, output[:len(data)])
}
func (s *datalakeTestSuite) TestTruncateFileError() {
defer s.cleanupTest()
// Setup

View file

@ -34,9 +34,6 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"context"
"errors"
"net/url"
@ -46,6 +43,10 @@ import (
"syscall"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-file-go/azfile"
)

View file

@ -35,15 +35,16 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-file-go/azfile"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"

View file

@ -34,9 +34,6 @@
package azstorage
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"context"
"encoding/base64"
"encoding/json"
@ -53,6 +50,9 @@ import (
"github.com/Azure/azure-storage-azcopy/v10/azbfs"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/Azure/azure-storage-file-go/azfile"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
)
// ----------- Helper to create pipeline options ---------------
@ -463,7 +463,7 @@ var ContentTypes = map[string]string{
".rar": "application/vnd.rar",
".tar": "application/x-tar",
".zip": "application/x-zip-compressed",
"7z": "application/x-7z-compressed",
".7z": "application/x-7z-compressed",
".3g2": "video/3gpp2",
".sh": "application/x-sh",
@ -480,7 +480,7 @@ func getContentType(key string) string {
return "application/octet-stream"
}
func populateContentType(newSet string) error {
func populateContentType(newSet string) error { //nolint
var data map[string]string
if err := json.Unmarshal([]byte(newSet), &data); err != nil {
log.Err("Failed to parse config file : %s (%s)", newSet, err.Error())
@ -547,30 +547,6 @@ func getACLPermissions(mode os.FileMode) string {
return sb.String()
}
// Builds the x-ms-acl header value from a file mode
func getAccessControlList(mode os.FileMode) string {
// The format for the value x-ms-acl is user::rwx,group::rwx,mask::rwx,other::rwx
// Since fuse has no way to expose mask to the user, we only are concerned about
// user, group and other.
var sb strings.Builder
sb.WriteString("user::")
writePermission(&sb, mode&(1<<8) != 0, 'r')
writePermission(&sb, mode&(1<<7) != 0, 'w')
writePermission(&sb, mode&(1<<6) != 0, 'x')
sb.WriteString(",group::")
writePermission(&sb, mode&(1<<5) != 0, 'r')
writePermission(&sb, mode&(1<<4) != 0, 'w')
writePermission(&sb, mode&(1<<3) != 0, 'x')
sb.WriteString(",other::")
writePermission(&sb, mode&(1<<2) != 0, 'r')
writePermission(&sb, mode&(1<<1) != 0, 'w')
writePermission(&sb, mode&(1<<0) != 0, 'x')
return sb.String()
}
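The removed getAccessControlList above maps POSIX mode bits onto the x-ms-acl format; for reference, a worked example of that mapping (a sketch derived purely from the bit checks shown, not part of this change):
// mode 0750: user rwx, group r-x, other ---
acl := getAccessControlList(os.FileMode(0750))
// acl == "user::rwx,group::r-x,other::---"
_ = acl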
func writePermission(sb *strings.Builder, permitted bool, permission rune) {
if permitted {
sb.WriteRune(permission)
@ -617,3 +593,15 @@ func split(prefixPath string, path string) string {
}
return filepath.Join(paths...)
}
func sanitizeSASKey(key string) string {
if key == "" {
return key
}
if key[0] != '?' {
return ("?" + key)
}
return key
}
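A quick sketch of the resulting behavior (mirrored by the unit tests further down):
key := sanitizeSASKey("abcd") // "?abcd" - a leading '?' is prepended
key = sanitizeSASKey("?abcd") // "?abcd" - already prefixed, returned unchanged
key = sanitizeSASKey("")      // ""     - empty keys pass through
_ = key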

View file

@ -1,8 +1,42 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2022 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package azstorage
import (
"testing"
"github.com/Azure/azure-storage-blob-go/azblob"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)
@ -32,6 +66,117 @@ func (s *utilsTestSuite) TestContentType() {
assert.EqualValues(val, "video/mp4")
}
type contentTypeVal struct {
val string
result string
}
func (s *utilsTestSuite) TestGetContentType() {
assert := assert.New(s.T())
var inputs = []contentTypeVal{
{val: "a.css", result: "text/css"},
{val: "a.pdf", result: "application/pdf"},
{val: "a.xml", result: "text/xml"},
{val: "a.csv", result: "text/csv"},
{val: "a.json", result: "application/json"},
{val: "a.rtf", result: "application/rtf"},
{val: "a.txt", result: "text/plain"},
{val: "a.java", result: "text/plain"},
{val: "a.dat", result: "text/plain"},
{val: "a.htm", result: "text/html"},
{val: "a.html", result: "text/html"},
{val: "a.gif", result: "image/gif"},
{val: "a.jpeg", result: "image/jpeg"},
{val: "a.jpg", result: "image/jpeg"},
{val: "a.png", result: "image/png"},
{val: "a.bmp", result: "image/bmp"},
{val: "a.js", result: "application/javascript"},
{val: "a.mjs", result: "application/javascript"},
{val: "a.svg", result: "image/svg+xml"},
{val: "a.wasm", result: "application/wasm"},
{val: "a.webp", result: "image/webp"},
{val: "a.wav", result: "audio/wav"},
{val: "a.mp3", result: "audio/mpeg"},
{val: "a.mpeg", result: "video/mpeg"},
{val: "a.aac", result: "audio/aac"},
{val: "a.avi", result: "video/x-msvideo"},
{val: "a.m3u8", result: "application/x-mpegURL"},
{val: "a.ts", result: "video/MP2T"},
{val: "a.mid", result: "audio/midiaudio/x-midi"},
{val: "a.3gp", result: "video/3gpp"},
{val: "a.mp4", result: "video/mp4"},
{val: "a.doc", result: "application/msword"},
{val: "a.docx", result: "application/vnd.openxmlformats-officedocument.wordprocessingml.document"},
{val: "a.ppt", result: "application/vnd.ms-powerpoint"},
{val: "a.pptx", result: "application/vnd.openxmlformats-officedocument.presentationml.presentation"},
{val: "a.xls", result: "application/vnd.ms-excel"},
{val: "a.xlsx", result: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"},
{val: "a.gz", result: "application/x-gzip"},
{val: "a.jar", result: "application/java-archive"},
{val: "a.rar", result: "application/vnd.rar"},
{val: "a.tar", result: "application/x-tar"},
{val: "a.zip", result: "application/x-zip-compressed"},
{val: "a.7z", result: "application/x-7z-compressed"},
{val: "a.3g2", result: "video/3gpp2"},
{val: "a.sh", result: "application/x-sh"},
{val: "a.exe", result: "application/x-msdownload"},
{val: "a.dll", result: "application/x-msdownload"},
}
for _, i := range inputs {
s.Run(i.val, func() {
output := getContentType(i.val)
assert.EqualValues(i.result, output)
})
}
}
type accessTierVal struct {
val string
result azblob.AccessTierType
}
func (s *utilsTestSuite) TestGetAccessTierType() {
assert := assert.New(s.T())
var inputs = []accessTierVal{
{val: "", result: azblob.AccessTierNone},
{val: "none", result: azblob.AccessTierNone},
{val: "hot", result: azblob.AccessTierHot},
{val: "cool", result: azblob.AccessTierCool},
{val: "archive", result: azblob.AccessTierArchive},
{val: "p4", result: azblob.AccessTierP4},
{val: "p6", result: azblob.AccessTierP6},
{val: "p10", result: azblob.AccessTierP10},
{val: "p15", result: azblob.AccessTierP15},
{val: "p20", result: azblob.AccessTierP20},
{val: "p30", result: azblob.AccessTierP30},
{val: "p40", result: azblob.AccessTierP40},
{val: "p50", result: azblob.AccessTierP50},
{val: "p60", result: azblob.AccessTierP60},
{val: "p70", result: azblob.AccessTierP70},
{val: "p80", result: azblob.AccessTierP80},
{val: "random", result: azblob.AccessTierNone},
}
for _, i := range inputs {
s.Run(i.val, func() {
output := getAccessTierType(i.val)
assert.EqualValues(i.result, output)
})
}
}
func (s *utilsTestSuite) TestSanitizeSASKey() {
assert := assert.New(s.T())
key := sanitizeSASKey("")
assert.EqualValues("", key)
key = sanitizeSASKey("?abcd")
assert.EqualValues("?abcd", key)
key = sanitizeSASKey("abcd")
assert.EqualValues("?abcd", key)
}
func TestUtilsTestSuite(t *testing.T) {
suite.Run(t, new(utilsTestSuite))
}

View file

@ -34,13 +34,14 @@
package file_cache
import (
"blobfuse2/common"
"blobfuse2/common/log"
"bytes"
"os"
"os/exec"
"strconv"
"strings"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
)
const DefaultEvictTime = 10
@ -65,9 +66,9 @@ type cachePolicy interface {
UpdateConfig(cachePolicyConfig) error
CacheValid(name string) error // Mark the file as hit
CacheInvalidate(name string) error // Invalidate the file
CachePurge(name string) error // Schedule the file for deletion
CacheValid(name string) // Mark the file as hit
CacheInvalidate(name string) // Invalidate the file
CachePurge(name string) // Schedule the file for deletion
IsCached(name string) bool // Whether or not the cache policy considers this file cached
@ -98,7 +99,8 @@ func getUsage(path string) float64 {
if size == "0" {
return 0
}
// some OSes use "," instead of "." which will not work for float parsing - replace it
size = strings.Replace(size, ",", ".", 1)
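// the trailing character is presumably a du-style unit suffix (e.g. "1.5M"); strip it before parsing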
parsed, err := strconv.ParseFloat(size[:len(size)-1], 64)
if err != nil {
log.Err("cachePolicy::getCacheUsage : error parsing folder size [%s]", err.Error())

View file

@ -0,0 +1,96 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2022 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package file_cache
import (
"io/fs"
"math"
"os"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)
type cachePolicyTestSuite struct {
suite.Suite
assert *assert.Assertions
}
func (suite *cachePolicyTestSuite) SetupTest() {
err := log.SetDefaultLogger("silent", common.LogConfig{Level: common.ELogLevel.LOG_DEBUG()})
if err != nil {
panic("Unable to set silent logger as default.")
}
suite.assert = assert.New(suite.T())
os.Mkdir(cache_path, fs.FileMode(0777))
}
func (suite *cachePolicyTestSuite) cleanupTest() {
os.RemoveAll(cache_path)
}
func (suite *cachePolicyTestSuite) TestGetUsage() {
defer suite.cleanupTest()
f, _ := os.Create(cache_path + "/test")
data := make([]byte, 1024*1024)
f.Write(data)
result := getUsage(cache_path)
suite.assert.Equal(float64(1), math.Floor(result))
}
func (suite *cachePolicyTestSuite) TestGetUsagePercentage() {
defer suite.cleanupTest()
f, _ := os.Create(cache_path + "/test")
data := make([]byte, 1024*1024)
f.Write(data)
result := getUsagePercentage(cache_path, 4)
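// 1MB written against a 4MB limit should be roughly 25%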
// since the value might differ a little from distro to distro
suite.assert.GreaterOrEqual(result, float64(25))
suite.assert.LessOrEqual(result, float64(30))
}
func (suite *cachePolicyTestSuite) TestDeleteFile() {
defer suite.cleanupTest()
f, _ := os.Create(cache_path + "/test")
result := deleteFile(f.Name() + "not_exist")
suite.assert.Equal(nil, result)
}
func TestCachePolicyTestSuite(t *testing.T) {
suite.Run(t, new(cachePolicyTestSuite))
}

View file

@ -1,255 +0,0 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2022 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package cacheheap
import "sync"
type CacheFileAttr struct {
Times int64
Path string
}
type Heap struct {
sync.RWMutex
fileData []*CacheFileAttr
filePathToIdx map[string]int
}
func (h *Heap) parent(i int) int {
return (i - 1) / 2
}
func (h *Heap) left(i int) int {
return (2 * i) + 1
}
func (h *Heap) right(i int) int {
return (2 * i) + 2
}
func (h *Heap) heapify(i int) {
if len(h.fileData) <= 1 {
return
}
h.RLock()
least := i
leastInfo := h.fileData[least]
l := h.left(i)
r := h.right(i)
var lInfo, rInfo *CacheFileAttr
if l < len(h.fileData) {
lInfo = h.fileData[l]
}
if r < len(h.fileData) {
rInfo = h.fileData[r]
}
if l < len(h.fileData) && lInfo.Times < leastInfo.Times {
least = l
leastInfo = lInfo
}
if r < len(h.fileData) && rInfo.Times < leastInfo.Times {
least = r
leastInfo = rInfo
}
h.RUnlock()
if least != i {
h.Lock()
ithInfo := h.fileData[i]
//change indices of the Path to new indices
h.filePathToIdx[ithInfo.Path] = least
h.filePathToIdx[leastInfo.Path] = i
//swap the CacheFileAttr directly in Heap
h.fileData[i] = leastInfo
h.fileData[least] = ithInfo
h.Unlock()
//reset the Heap structure after change
h.heapify(least)
}
}
//Requires Lock()
func (h *Heap) Increment(path string) {
go func() {
h.RLock()
idx, ok := h.filePathToIdx[path]
h.RUnlock()
if ok {
h.Lock()
h.fileData[idx].Times += 1
h.Unlock()
h.heapify(idx)
}
}()
}
//Requires Lock()
func (h *Heap) Insert(path string) {
info := &CacheFileAttr{
Times: 1,
Path: path,
}
h.Lock()
defer h.Unlock()
h.fileData = append(h.fileData, info)
h.filePathToIdx[path] = len(h.fileData) - 1
i := len(h.fileData) - 1
ithInfo := h.fileData[i]
parent := h.parent(i)
parentInfo := h.fileData[parent]
for i != 0 && ithInfo.Times < parentInfo.Times {
h.filePathToIdx[parentInfo.Path] = i
h.filePathToIdx[ithInfo.Path] = parent
h.fileData[i] = parentInfo
h.fileData[parent] = ithInfo
i = parent
ithInfo = h.fileData[i]
parent = h.parent(i)
parentInfo = h.fileData[parent]
}
}
func (h *Heap) HasValue(name string) bool {
_, ok := h.filePathToIdx[name]
return ok
}
func (h *Heap) InsertFromAttr(path string, info *CacheFileAttr) {
h.Lock()
defer h.Unlock()
h.fileData = append(h.fileData, info)
h.filePathToIdx[path] = len(h.fileData) - 1
i := len(h.fileData) - 1
ithInfo := h.fileData[i]
parent := h.parent(i)
parentInfo := h.fileData[parent]
for i != 0 && ithInfo.Times < parentInfo.Times {
h.filePathToIdx[parentInfo.Path] = i
h.filePathToIdx[ithInfo.Path] = parent
h.fileData[i] = parentInfo
h.fileData[parent] = ithInfo
i = parent
ithInfo = h.fileData[i]
parent = h.parent(i)
parentInfo = h.fileData[parent]
}
}
func (h *Heap) Delete(path string) {
if len(h.fileData) == 0 {
return
}
h.RLock()
intf, ok := h.filePathToIdx[path]
h.RUnlock()
if !ok {
return
}
toDeleteIdx := intf
h.Lock()
delete(h.filePathToIdx, path)
h.Unlock()
if toDeleteIdx == len(h.fileData)-1 {
h.Lock()
h.fileData = h.fileData[:len(h.fileData)-1]
h.Unlock()
} else if len(h.fileData) > 1 {
h.Lock()
lastInfo := h.fileData[len(h.fileData)-1]
h.fileData[toDeleteIdx] = lastInfo
h.filePathToIdx[lastInfo.Path] = toDeleteIdx
h.fileData = h.fileData[:len(h.fileData)-1]
h.Unlock()
h.heapify(toDeleteIdx)
} else {
h.fileData = make([]*CacheFileAttr, 0)
h.filePathToIdx = make(map[string]int)
}
}
func (h *Heap) ExtractMin() string {
if len(h.fileData) <= 0 {
return ""
}
if len(h.fileData) == 1 {
info := h.fileData[0]
h.fileData = make([]*CacheFileAttr, 0)
h.filePathToIdx = make(map[string]int)
return info.Path
}
h.Lock()
minInfo := h.fileData[0]
h.fileData[0] = h.fileData[len(h.fileData)-1]
zeroth := h.fileData[0]
delete(h.filePathToIdx, minInfo.Path)
h.filePathToIdx[zeroth.Path] = 0
h.fileData = h.fileData[:len(h.fileData)-1]
h.Unlock()
h.heapify(0)
return minInfo.Path
}
func (h *Heap) GetMin() *CacheFileAttr {
h.RLock()
defer h.RUnlock()
if len(h.fileData) <= 0 {
return nil
}
info := h.fileData[0]
return info
}
func New() *Heap {
return &Heap{
fileData: make([]*CacheFileAttr, 0),
filePathToIdx: make(map[string]int),
}
}
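This LFU heap is deleted in this change; for context, a minimal sketch of how the removed API fit together (all names from the file above; note Increment runs in a background goroutine, so orderings may race):
h := New()
h.Insert("/tmp/a") // first access, Times starts at 1
h.Insert("/tmp/b")
h.Increment("/tmp/b")    // bump the access count asynchronously
victim := h.ExtractMin() // pops the least-frequently-accessed path
_ = victim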

View file

@ -34,11 +34,6 @@
package file_cache
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"context"
"fmt"
"io"
@ -51,6 +46,12 @@ import (
"syscall"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/spf13/cobra"
)
@ -132,14 +133,21 @@ func (c *FileCache) Start(ctx context.Context) error {
log.Trace("Starting component : %s", c.Name())
if c.cleanupOnStart {
c.TempCacheCleanup()
err := c.TempCacheCleanup()
if err != nil {
return fmt.Errorf("error in %s error [fail to cleanup temp cache]", c.Name())
}
}
if c.policy == nil {
return fmt.Errorf("config error in %s error [cache policy missing]", c.Name())
}
c.policy.StartPolicy()
err := c.policy.StartPolicy()
if err != nil {
return fmt.Errorf("config error in %s error [fail to start policy]", c.Name())
}
return nil
}
@ -147,8 +155,8 @@ func (c *FileCache) Start(ctx context.Context) error {
func (c *FileCache) Stop() error {
log.Trace("Stopping component : %s", c.Name())
c.policy.ShutdownPolicy()
c.TempCacheCleanup()
_ = c.policy.ShutdownPolicy()
_ = c.TempCacheCleanup()
return nil
}
@ -173,7 +181,7 @@ func (c *FileCache) TempCacheCleanup() error {
// Configure : Pipeline will call this method after constructor so that you can read config and initialize yourself
// Return failure if any config is not valid to exit the process
func (c *FileCache) Configure() error {
func (c *FileCache) Configure(_ bool) error {
log.Trace("FileCache::Configure : %s", c.Name())
conf := FileCacheOptions{}
@ -271,7 +279,7 @@ func (c *FileCache) OnConfigChange() {
c.policyTrace = conf.EnablePolicyTrace
c.offloadIO = conf.OffloadIO
c.maxCacheSize = conf.MaxSizeMB
c.policy.UpdateConfig(c.GetPolicyConfig(conf))
_ = c.policy.UpdateConfig(c.GetPolicyConfig(conf))
}
func (c *FileCache) StatFs() (*syscall.Statfs_t, bool, error) {
@ -334,33 +342,37 @@ func isLocalDirEmpty(path string) bool {
}
// invalidateDirectory: Recursively invalidates a directory in the file cache.
func (fc *FileCache) invalidateDirectory(name string) error {
func (fc *FileCache) invalidateDirectory(name string) {
log.Trace("FileCache::invalidateDirectory : %s", name)
localPath := filepath.Join(fc.tmpPath, name)
_, err := os.Stat(localPath)
if os.IsNotExist(err) {
log.Info("FileCache::invalidateDirectory : %s does not exist in local cache.", name)
return nil
return
} else if err != nil {
log.Debug("FileCache::invalidateDirectory : %s stat err [%s].", name, err.Error())
return err
return
}
// TODO : wouldn't this cause a race condition? a thread might get the lock before we purge - and the file would be non-existent
filepath.WalkDir(localPath, func(path string, d fs.DirEntry, err error) error {
err = filepath.WalkDir(localPath, func(path string, d fs.DirEntry, err error) error {
if err == nil && d != nil {
log.Debug("FileCache::invalidateDirectory : %s (%d) getting removed from cache", path, d.IsDir())
if !d.IsDir() {
fc.policy.CachePurge(path)
} else {
deleteFile(path)
_ = deleteFile(path)
}
}
return nil
})
deleteFile(localPath)
return nil
if err != nil {
log.Debug("FileCache::invalidateDirectory : Failed to iterate directory %s [%s].", localPath, err.Error())
return
}
_ = deleteFile(localPath)
}
// Note: The primary purpose of the file cache is to keep track of files that are opened by the user.
@ -512,29 +524,33 @@ func (fc *FileCache) IsDirEmpty(options internal.IsDirEmptyOptions) bool {
// If the directory does not exist locally then call the next component
localPath := filepath.Join(fc.tmpPath, options.Name)
f, err := os.Open(localPath)
if os.IsNotExist(err) {
if err == nil {
log.Debug("FileCache::IsDirEmpty : %s found in local cache", options.Name)
// Check local cache directory is empty or not
path, err := f.Readdirnames(1)
// If the local directory has an entry in it, it is likely due to !createEmptyFile.
if err == nil && !fc.createEmptyFile && len(path) > 0 {
log.Debug("FileCache::IsDirEmpty : %s had a subpath in the local cache", options.Name)
return false
}
// If there are files in the local cache then don't allow deletion of the directory
if err != io.EOF {
// Local directory is not empty fail the call
log.Debug("FileCache::IsDirEmpty : %s was not empty in local cache", options.Name)
return false
}
} else if os.IsNotExist(err) {
// Not found in local cache so check with container
log.Debug("FileCache::IsDirEmpty : %s not found in local cache", options.Name)
return fc.NextComponent().IsDirEmpty(options)
}
if err != nil {
log.Err("FileCache::IsDirEmpty : error opening directory %s [%s]", options.Name, err.Error())
return false
}
// The file cache policy handles deleting locally empty directories in the cache
// If the directory exists locally and is empty, it was probably recently emptied and we can trust this result.
path, err := f.Readdirnames(1)
if err == io.EOF {
log.Debug("FileCache::IsDirEmpty : %s was empty in local cache", options.Name)
return true
}
// If the local directory has an entry in it, it is likely due to !createEmptyFile.
if err == nil && !fc.createEmptyFile && len(path) > 0 {
log.Debug("FileCache::IsDirEmpty : %s had a subpath in the local cache", options.Name)
return false
} else {
// Unknown error, check with container
log.Err("FileCache::IsDirEmpty : %s failed while checking local cache (%s)", options.Name, err.Error())
}
log.Debug("FileCache::IsDirEmpty : %s checking with container", options.Name)
return fc.NextComponent().IsDirEmpty(options)
}
@ -669,7 +685,11 @@ func (fc *FileCache) DeleteFile(options internal.DeleteFileOptions) error {
}
localPath := filepath.Join(fc.tmpPath, options.Name)
deleteFile(localPath)
err = deleteFile(localPath)
if err != nil && !os.IsNotExist(err) {
log.Err("FileCache::DeleteFile : failed to delete local file %s [%s]", localPath, err.Error())
}
fc.policy.CachePurge(localPath)
return nil
}
@ -747,7 +767,7 @@ func (fc *FileCache) OpenFile(options internal.OpenFileOptions) (*handlemap.Hand
log.Debug("FileCache::OpenFile : Delete cached file %s", options.Name)
err := deleteFile(localPath)
if err != nil {
if err != nil && !os.IsNotExist(err) {
log.Err("FileCache::OpenFile : Failed to delete old file %s", options.Name)
}
} else {
@ -852,7 +872,7 @@ func (fc *FileCache) CloseFile(options internal.CloseFileOptions) error {
if options.Handle.Dirty() {
log.Info("FileCache::CloseFile : name=%s, handle=%d dirty. Flushing the file.", options.Handle.Path, options.Handle.ID)
err := fc.FlushFile(internal.FlushFileOptions{Handle: options.Handle})
err := fc.FlushFile(internal.FlushFileOptions{Handle: options.Handle}) //nolint
if err != nil {
log.Err("FileCache::CloseFile : failed to flush file %s", options.Handle.Path)
return err
@ -881,7 +901,12 @@ func (fc *FileCache) CloseFile(options internal.CloseFileOptions) error {
if options.Handle.Fsynced() {
log.Trace("FileCache::CloseFile : fsync/sync op, purging %s", options.Handle.Path)
localPath := filepath.Join(fc.tmpPath, options.Handle.Path)
deleteFile(localPath)
err = deleteFile(localPath)
if err != nil && !os.IsNotExist(err) {
log.Err("FileCache::CloseFile : failed to delete local file %s [%s]", localPath, err.Error())
}
fc.policy.CachePurge(localPath)
return nil
}
@ -939,8 +964,7 @@ func (fc *FileCache) ReadInBuffer(options internal.ReadInBufferOptions) (int, er
// Removing f.ReadAt as it involves a lot of housekeeping and then calls syscall.Pread
// Instead we will call syscall directly for better perf
return f.ReadAt(options.Data, options.Offset)
//return syscall.Pread(options.Handle.FD(), options.Data, options.Offset)
return syscall.Pread(options.Handle.FD(), options.Data, options.Offset)
}
// WriteFile: Write to the local file
@ -963,12 +987,13 @@ func (fc *FileCache) WriteFile(options internal.WriteFileOptions) (int, error) {
// Removing f.WriteAt as it involves a lot of housekeeping and then calls syscall.Pwrite
// Instead we will call syscall directly for better perf
bytesWritten, err := f.WriteAt(options.Data, options.Offset)
//bytesWritten, err := syscall.Pwrite(options.Handle.FD(), options.Data, options.Offset)
bytesWritten, err := syscall.Pwrite(options.Handle.FD(), options.Data, options.Offset)
if err == nil {
// Mark the handle dirty so the file is written back to storage on FlushFile.
options.Handle.Flags.Set(handlemap.HandleFlagDirty)
} else {
log.Err("FileCache::WriteFile : failed to write %s (%s)", options.Handle.Path, err.Error())
}
return bytesWritten, err
@ -1179,6 +1204,7 @@ func (fc *FileCache) RenameFile(options internal.RenameFileOptions) error {
if err != nil && !os.IsNotExist(err) {
log.Err("FileCache::RenameFile : %s failed to delete local file %s [%s]", localDstPath, err.Error())
}
fc.policy.CachePurge(localDstPath)
}

View file

@ -34,12 +34,6 @@
package file_cache
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/component/loopback"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"context"
"fmt"
"math/rand"
@ -50,6 +44,13 @@ import (
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/component/loopback"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)
@ -67,7 +68,7 @@ type fileCacheTestSuite struct {
func newLoopbackFS() internal.Component {
loopback := loopback.NewLoopbackFSComponent()
loopback.Configure()
loopback.Configure(true)
return loopback
}
@ -76,7 +77,7 @@ func newTestFileCache(next internal.Component) *FileCache {
fileCache := NewFileCacheComponent()
fileCache.SetNextComponent(next)
err := fileCache.Configure()
err := fileCache.Configure(true)
if err != nil {
panic("Unable to configure file cache.")
}
@ -391,6 +392,65 @@ func (suite *fileCacheTestSuite) TestReadDirError() {
suite.assert.Empty(dir)
}
func (suite *fileCacheTestSuite) TestStreamDirCase1() {
defer suite.cleanupTest()
// Setup
name := "dir"
subdir := filepath.Join(name, "subdir")
file1 := filepath.Join(name, "file1")
file2 := filepath.Join(name, "file2")
file3 := filepath.Join(name, "file3")
// Create files directly in "fake_storage"
suite.loopback.CreateDir(internal.CreateDirOptions{Name: name, Mode: 0777})
suite.loopback.CreateDir(internal.CreateDirOptions{Name: subdir, Mode: 0777})
suite.loopback.CreateFile(internal.CreateFileOptions{Name: file1})
suite.loopback.CreateFile(internal.CreateFileOptions{Name: file2})
suite.loopback.CreateFile(internal.CreateFileOptions{Name: file3})
// Read the Directory
dir, _, err := suite.fileCache.StreamDir(internal.StreamDirOptions{Name: name})
suite.assert.Nil(err)
suite.assert.NotEmpty(dir)
suite.assert.EqualValues(4, len(dir))
suite.assert.EqualValues(file1, dir[0].Path)
suite.assert.EqualValues(file2, dir[1].Path)
suite.assert.EqualValues(file3, dir[2].Path)
suite.assert.EqualValues(subdir, dir[3].Path)
}
//TODO: case3 requires more thought due to the way loopback fs is designed, specifically getAttr and streamDir
func (suite *fileCacheTestSuite) TestStreamDirCase2() {
defer suite.cleanupTest()
// Setup
name := "dir"
subdir := filepath.Join(name, "subdir")
file1 := filepath.Join(name, "file1")
file2 := filepath.Join(name, "file2")
file3 := filepath.Join(name, "file3")
suite.fileCache.CreateDir(internal.CreateDirOptions{Name: name, Mode: 0777})
suite.fileCache.CreateDir(internal.CreateDirOptions{Name: subdir, Mode: 0777})
// By default createEmptyFile is false, so we will not create these files in storage until they are closed.
suite.fileCache.CreateFile(internal.CreateFileOptions{Name: file1, Mode: 0777})
suite.fileCache.CreateFile(internal.CreateFileOptions{Name: file2, Mode: 0777})
suite.fileCache.CreateFile(internal.CreateFileOptions{Name: file3, Mode: 0777})
// Read the Directory
dir, _, err := suite.fileCache.StreamDir(internal.StreamDirOptions{Name: name})
suite.assert.Nil(err)
suite.assert.NotEmpty(dir)
suite.assert.EqualValues(4, len(dir))
suite.assert.EqualValues(subdir, dir[0].Path)
suite.assert.EqualValues(file1, dir[1].Path)
suite.assert.EqualValues(file2, dir[2].Path)
suite.assert.EqualValues(file3, dir[3].Path)
}
func (suite *fileCacheTestSuite) TestFileUsed() {
defer suite.cleanupTest()
suite.fileCache.FileUsed("temp")
suite.fileCache.policy.IsCached("temp")
}
// File cache does not implement the CreateDir method, hence results are undefined here
func (suite *fileCacheTestSuite) TestIsDirEmpty() {
defer suite.cleanupTest()
@ -689,7 +749,7 @@ func (suite *fileCacheTestSuite) TestCloseFileTimeout() {
defer suite.cleanupTest()
suite.cleanupTest() // teardown the default file cache generated
cacheTimeout := 5
config := fmt.Sprintf("file_cache:\n path: %s\n offload-io: true\n timeout: %d\n\nloopbackfs:\n path: %s",
config := fmt.Sprintf("file_cache:\n path: %s\n offload-io: true\n timeout-sec: %d\n\nloopbackfs:\n path: %s",
suite.cache_path, cacheTimeout, suite.fake_storage_path)
suite.setupTestHelper(config) // setup a new file cache with a custom config (teardown will occur after the test as usual)
@ -1378,13 +1438,13 @@ func (suite *fileCacheTestSuite) TestChownCase2() {
func (suite *fileCacheTestSuite) TestZZMountPathConflict() {
defer suite.cleanupTest()
cacheTimeout := 1
configuration := fmt.Sprintf("file_cache:\n path: %s\n offload-io: true\n timeout: %d\n\nloopbackfs:\n path: %s",
configuration := fmt.Sprintf("file_cache:\n path: %s\n offload-io: true\n timeout-sec: %d\n\nloopbackfs:\n path: %s",
suite.cache_path, cacheTimeout, suite.fake_storage_path)
fileCache := NewFileCacheComponent()
config.ReadConfigFromReader(strings.NewReader(configuration))
config.Set("mount-path", suite.cache_path)
err := fileCache.Configure()
err := fileCache.Configure(true)
suite.assert.NotNil(err)
suite.assert.Contains(err.Error(), "[tmp-path is same as mount path]")
}
@ -1420,7 +1480,7 @@ func (suite *fileCacheTestSuite) TestCachePathSymlink() {
func (suite *fileCacheTestSuite) TestZZOffloadIO() {
defer suite.cleanupTest()
configuration := fmt.Sprintf("file_cache:\n path: %s\n timeout: 0\n\nloopbackfs:\n path: %s",
configuration := fmt.Sprintf("file_cache:\n path: %s\n timeout-sec: 0\n\nloopbackfs:\n path: %s",
suite.cache_path, suite.fake_storage_path)
suite.setupTestHelper(configuration)
@ -1434,6 +1494,26 @@ func (suite *fileCacheTestSuite) TestZZOffloadIO() {
suite.fileCache.CloseFile(internal.CloseFileOptions{Handle: handle})
}
func (suite *fileCacheTestSuite) TestStatFS() {
defer suite.cleanupTest()
cacheTimeout := 5
maxSizeMb := 2
config := fmt.Sprintf("file_cache:\n path: %s\n max-size-mb: %d\n offload-io: true\n timeout-sec: %d\n\nloopbackfs:\n path: %s",
suite.cache_path, maxSizeMb, cacheTimeout, suite.fake_storage_path)
os.Mkdir(suite.cache_path, 0777)
suite.setupTestHelper(config) // setup a new file cache with a custom config (teardown will occur after the test as usual)
file := "file"
handle, _ := suite.fileCache.CreateFile(internal.CreateFileOptions{Name: file, Mode: 0777})
data := make([]byte, 1024*1024)
suite.fileCache.WriteFile(internal.WriteFileOptions{Handle: handle, Offset: 0, Data: data})
suite.fileCache.FlushFile(internal.FlushFileOptions{Handle: handle})
stat, ret, err := suite.fileCache.StatFs()
suite.assert.Equal(ret, true)
suite.assert.Equal(err, nil)
suite.assert.NotEqual(stat, &syscall.Statfs_t{})
}
// In order for 'go test' to run this suite, we need to create
// a normal test function and pass our suite to suite.Run
func TestFileCacheTestSuite(t *testing.T) {

View file

@ -34,11 +34,12 @@
package file_cache
import (
"blobfuse2/common/log"
"os"
"strings"
"sync"
"time"
"github.com/Azure/azure-storage-fuse/v2/common/log"
)
type lfuPolicy struct {
@ -82,27 +83,24 @@ func (l *lfuPolicy) UpdateConfig(config cachePolicyConfig) error {
return nil
}
func (l *lfuPolicy) CacheValid(name string) error {
func (l *lfuPolicy) CacheValid(name string) {
log.Trace("lfuPolicy::CacheValid : %s", name)
l.list.Lock()
defer l.list.Unlock()
l.list.put(name)
return nil
}
func (l *lfuPolicy) CacheInvalidate(name string) error {
func (l *lfuPolicy) CacheInvalidate(name string) {
log.Trace("lfuPolicy::CacheInvalidate : %s", name)
if l.cacheTimeout == 0 {
return l.CachePurge(name)
l.CachePurge(name)
}
return nil
}
func (l *lfuPolicy) CachePurge(name string) error {
func (l *lfuPolicy) CachePurge(name string) {
log.Trace("lfuPolicy::CachePurge : %s", name)
l.list.Lock()
@ -110,8 +108,6 @@ func (l *lfuPolicy) CachePurge(name string) error {
l.list.delete(name)
l.removeFiles <- name
return nil
}
func (l *lfuPolicy) IsCached(name string) bool {
@ -158,7 +154,10 @@ func (l *lfuPolicy) clearItemFromCache(path string) {
}
// There are no open handles for this file so it's safe to remove it
deleteFile(path)
err := deleteFile(path)
if err != nil && !os.IsNotExist(err) {
log.Err("lfuPolicy::DeleteItem : failed to delete local file %s [%s]", path, err.Error())
}
// File was deleted so try clearing its parent directory
// TODO: Delete directories up the path recursively that are "safe to delete". Ensure there is no race between this code and code that creates directories (like OpenFile)
@ -181,13 +180,6 @@ func (l *lfuPolicy) clearCache() {
}
func rethrowOnUnblock(f *os.File, path string, throwChan chan string) {
log.Trace("lfuPolicy::rethrowOnUnblock : %s", path)
log.Debug("lfuPolicy::rethrowOnUnblock : ex lock acquired [%s]", path)
throwChan <- path
}
func NewLFUPolicy(cfg cachePolicyConfig) cachePolicy {
pol := &lfuPolicy{
cachePolicyConfig: cfg,

View file

@ -34,8 +34,6 @@
package file_cache
import (
"blobfuse2/common"
"blobfuse2/common/log"
"fmt"
"io/fs"
"os"
@ -43,6 +41,9 @@ import (
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)
@ -129,6 +130,14 @@ func (suite *lfuPolicyTestSuite) TestCacheValidNew() {
suite.assert.EqualValues(2, node.frequency) // the get will promote the node
}
func (suite *lfuPolicyTestSuite) TestClearItemFromCache() {
defer suite.cleanupTest()
f, _ := os.Create(cache_path + "/test")
suite.policy.clearItemFromCache(f.Name())
_, err := os.Stat(f.Name())
suite.assert.NotEqual(nil, err)
}
func (suite *lfuPolicyTestSuite) TestCacheValidExisting() {
defer suite.cleanupTest()
suite.policy.CacheValid("temp")

View file

@ -34,10 +34,12 @@
package file_cache
import (
"blobfuse2/common/log"
"os"
"strings"
"sync"
"time"
"github.com/Azure/azure-storage-fuse/v2/common/log"
)
type lruNode struct {
@ -148,17 +150,16 @@ func (p *lruPolicy) UpdateConfig(c cachePolicyConfig) error {
return nil
}
func (p *lruPolicy) CacheValid(name string) error {
func (p *lruPolicy) CacheValid(name string) {
_, found := p.nodeMap.Load(name)
if !found {
p.cacheValidate(name)
} else {
p.validateChan <- name
}
return nil
}
func (p *lruPolicy) CacheInvalidate(name string) error {
func (p *lruPolicy) CacheInvalidate(name string) {
log.Trace("lruPolicy::CacheInvalidate : %s", name)
// We check if the file is not in the nodeMap to deal with the case
@ -171,17 +172,13 @@ func (p *lruPolicy) CacheInvalidate(name string) error {
if p.cacheTimeout == 0 || !found {
p.CachePurge(name)
}
return nil
}
func (p *lruPolicy) CachePurge(name string) error {
func (p *lruPolicy) CachePurge(name string) {
log.Trace("lruPolicy::CachePurge : %s", name)
p.removeNode(name)
p.deleteEvent <- name
return nil
}
func (p *lruPolicy) IsCached(name string) bool {
@ -305,14 +302,14 @@ func (p *lruPolicy) clearCache() {
}
}
func (p *lruPolicy) removeNode(name string) error {
func (p *lruPolicy) removeNode(name string) {
log.Trace("lruPolicy::removeNode : %s", name)
var node *lruNode = nil
val, found := p.nodeMap.Load(name)
if !found || val == nil {
return nil
return
}
p.nodeMap.Delete(name)
@ -327,7 +324,7 @@ func (p *lruPolicy) removeNode(name string) error {
p.head = node.next
p.head.prev = nil
node.next = nil
return nil
return
}
if node.next != nil {
@ -339,8 +336,6 @@ func (p *lruPolicy) removeNode(name string) error {
}
node.prev = nil
node.next = nil
return nil
}
func (p *lruPolicy) updateMarker() {
@ -412,7 +407,7 @@ func (p *lruPolicy) deleteExpiredNodes() {
log.Debug("lruPolicy::deleteExpiredNodes : Ends")
}
func (p *lruPolicy) deleteItem(name string) error {
func (p *lruPolicy) deleteItem(name string) {
log.Trace("lruPolicy::deleteItem : Deleting %s", name)
azPath := strings.TrimPrefix(name, p.tmpPath)
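// map the local cache path back to the storage-relative path used by the file lock map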
@ -424,7 +419,7 @@ func (p *lruPolicy) deleteItem(name string) error {
if p.fileLocks.Locked(azPath) {
log.Warn("lruPolicy::DeleteItem : File in under download %s", azPath)
p.CacheValid(name)
return nil
return
}
flock.Lock()
@ -434,16 +429,18 @@ func (p *lruPolicy) deleteItem(name string) error {
if flock.Count() > 0 {
log.Warn("lruPolicy::DeleteItem : File in use %s", name)
p.CacheValid(name)
return nil
return
}
// There are no open handles for this file so it's safe to remove it
deleteFile(name)
err := deleteFile(name)
if err != nil && !os.IsNotExist(err) {
log.Err("lruPolicy::DeleteItem : failed to delete local file %s [%s]", name, err.Error())
}
// File was deleted so try clearing its parent directory
// TODO: Delete directories up the path recursively that are "safe to delete". Ensure there is no race between this code and code that creates directories (like OpenFile)
// This might require something like hierarchical locking.
return nil
}
func (p *lruPolicy) printNodes() {

Просмотреть файл

@ -34,13 +34,14 @@
package file_cache
import (
"blobfuse2/common"
"fmt"
"io/fs"
"os"
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)

View file

@ -34,12 +34,13 @@
package libfuse
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"context"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
)
/* NOTES:
@ -132,7 +133,11 @@ func (lf *Libfuse) Start(ctx context.Context) error {
fuseFS = lf
// This starts the libfuse process and hence shall always be the last statement
lf.initFuse()
err := lf.initFuse()
if err != nil {
log.Err("Libfuse::Start : Failed to init fuse [%s]", err.Error())
return err
}
return nil
}
@ -140,7 +145,7 @@ func (lf *Libfuse) Start(ctx context.Context) error {
// Stop : Stop the component functionality and kill all threads started
func (lf *Libfuse) Stop() error {
log.Trace("Libfuse::Stop : Stopping component %s", lf.Name())
lf.destroyFuse()
_ = lf.destroyFuse()
return nil
}
@ -195,7 +200,7 @@ func (lf *Libfuse) Validate(opt *LibfuseOptions) error {
// Configure : Pipeline will call this method after constructor so that you can read config and initialize yourself
// Return failure if any config is not valid to exit the process
func (lf *Libfuse) Configure() error {
func (lf *Libfuse) Configure(_ bool) error {
log.Trace("Libfuse::Configure : %s", lf.Name())
// >> If you do not need any config parameters remove below code and return nil

View file

@ -44,12 +44,12 @@ package libfuse
// #include "extension_handler.h"
import "C"
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"errors"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"io"
"io/fs"
"os"
@ -88,7 +88,7 @@ func trimFusePath(path *C.char) string {
return str
}
var fuse_opts C.fuse_options_t
var fuse_opts C.fuse_options_t // nolint
// convertConfig converts the config options from Go to C
func (lf *Libfuse) convertConfig() *C.fuse_options_t {
@ -937,6 +937,6 @@ func libfuse2_utimens(path *C.char, tv *C.timespec_t) C.int {
func blobfuse_cache_update(path *C.char) C.int {
name := trimFusePath(path)
name = common.NormalizeObjectName(name)
go fuseFS.NextComponent().FileUsed(name)
go fuseFS.NextComponent().FileUsed(name) //nolint
return 0
}

View file

@ -40,12 +40,12 @@ package libfuse
// #include "libfuse_wrapper.h"
import "C"
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"errors"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"io/fs"
"strings"
"syscall"
@ -78,7 +78,7 @@ func newTestLibfuse(next internal.Component, configuration string) *Libfuse {
config.ReadConfigFromReader(strings.NewReader(configuration))
libfuse := NewLibfuseComponent()
libfuse.SetNextComponent(next)
libfuse.Configure()
libfuse.Configure(true)
return libfuse.(*Libfuse)
}

View file

@ -42,12 +42,9 @@ package libfuse
// #cgo LDFLAGS: -lfuse3 -ldl
// #include "libfuse_wrapper.h"
// #include "extension_handler.h"
import "C"
import "C" //nolint
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"errors"
"fmt"
"io"
@ -55,6 +52,11 @@ import (
"os"
"syscall"
"unsafe"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
)
/* --- IMPORTANT NOTE ---
@ -272,11 +274,12 @@ func libfuse_init(conn *C.fuse_conn_info_t, cfg *C.fuse_config_t) (res unsafe.Po
conn.want |= C.FUSE_CAP_SPLICE_WRITE
}
if (conn.capable & C.FUSE_CAP_WRITEBACK_CACHE) != 0 {
// Buffer write requests in libfuse and then hand them off to the application
log.Info("Libfuse::libfuse_init : Enable Capability : FUSE_CAP_WRITEBACK_CACHE")
conn.want |= C.FUSE_CAP_WRITEBACK_CACHE
}
/*
FUSE_CAP_WRITEBACK_CACHE is not suitable for network filesystems. If a partial page is
written, the page first needs to be read back from userspace. This means that even for
files opened for O_WRONLY the kernel may generate READ requests, which results in errors
in the file cache component.
*/
// Max background thread on the fuse layer for high parallelism
conn.max_background = 128

View file

@ -34,10 +34,11 @@
package libfuse
import (
"blobfuse2/common"
"io/fs"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/stretchr/testify/suite"
)
@ -59,7 +60,7 @@ func (suite *libfuseTestSuite) TestDefault() {
func (suite *libfuseTestSuite) TestConfig() {
defer suite.cleanupTest()
suite.cleanupTest() // clean up the default libfuse generated
config := "allow-other: true\nread-only: true\nlibfuse:\n default-permission: 0777\n attribute-expiration-sec: 60\n entry-expiration-sec: 60\n negative-entry-expiration-sec: 60\n fuse-trace: true\n"
config := "allow-other: true\nread-only: true\nlibfuse:\n attribute-expiration-sec: 60\n entry-expiration-sec: 60\n negative-entry-expiration-sec: 60\n fuse-trace: true\n"
suite.setupTestHelper(config) // setup a new libfuse with a custom config (clean up will occur after the test as usual)
suite.assert.Equal(suite.libfuse.Name(), "libfuse")
@ -77,16 +78,34 @@ func (suite *libfuseTestSuite) TestConfig() {
func (suite *libfuseTestSuite) TestConfigZero() {
defer suite.cleanupTest()
suite.cleanupTest() // clean up the default libfuse generated
config := "allow-other: true\nread-only: true\nlibfuse:\n default-permission: 0777\n attribute-expiration-sec: 0\n entry-expiration-sec: 0\n negative-entry-expiration-sec: 0\n fuse-trace: true\n"
config := "read-only: true\nlibfuse:\n attribute-expiration-sec: 0\n entry-expiration-sec: 0\n negative-entry-expiration-sec: 0\n fuse-trace: true\n"
suite.setupTestHelper(config) // setup a new libfuse with a custom config (clean up will occur after the test as usual)
suite.assert.Equal(suite.libfuse.Name(), "libfuse")
suite.assert.Empty(suite.libfuse.mountPath)
suite.assert.True(suite.libfuse.readOnly)
suite.assert.True(suite.libfuse.traceEnable)
suite.assert.True(suite.libfuse.allowOther)
suite.assert.Equal(suite.libfuse.dirPermission, uint(fs.FileMode(0777)))
suite.assert.Equal(suite.libfuse.filePermission, uint(fs.FileMode(0777)))
suite.assert.False(suite.libfuse.allowOther)
suite.assert.Equal(suite.libfuse.dirPermission, uint(fs.FileMode(0775)))
suite.assert.Equal(suite.libfuse.filePermission, uint(fs.FileMode(0755)))
suite.assert.Equal(suite.libfuse.entryExpiration, uint32(0))
suite.assert.Equal(suite.libfuse.attributeExpiration, uint32(0))
suite.assert.Equal(suite.libfuse.negativeTimeout, uint32(0))
}
func (suite *libfuseTestSuite) TestConfigDefaultPermission() {
defer suite.cleanupTest()
suite.cleanupTest() // clean up the default libfuse generated
config := "read-only: true\nlibfuse:\n default-permission: 0555\n attribute-expiration-sec: 0\n entry-expiration-sec: 0\n negative-entry-expiration-sec: 0\n fuse-trace: true\n"
suite.setupTestHelper(config) // setup a new libfuse with a custom config (clean up will occur after the test as usual)
suite.assert.Equal(suite.libfuse.Name(), "libfuse")
suite.assert.Empty(suite.libfuse.mountPath)
suite.assert.True(suite.libfuse.readOnly)
suite.assert.True(suite.libfuse.traceEnable)
suite.assert.False(suite.libfuse.allowOther)
suite.assert.Equal(suite.libfuse.dirPermission, uint(fs.FileMode(0555)))
suite.assert.Equal(suite.libfuse.filePermission, uint(fs.FileMode(0555)))
suite.assert.Equal(suite.libfuse.entryExpiration, uint32(0))
suite.assert.Equal(suite.libfuse.attributeExpiration, uint32(0))
suite.assert.Equal(suite.libfuse.negativeTimeout, uint32(0))

View file

@ -40,17 +40,18 @@ package libfuse
// #include "libfuse_wrapper.h"
import "C"
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"errors"
"io/fs"
"strings"
"syscall"
"unsafe"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
@ -77,7 +78,7 @@ func newTestLibfuse(next internal.Component, configuration string) *Libfuse {
config.ReadConfigFromReader(strings.NewReader(configuration))
libfuse := NewLibfuseComponent()
libfuse.SetNextComponent(next)
libfuse.Configure()
libfuse.Configure(true)
return libfuse.(*Libfuse)
}

View file

@ -34,10 +34,6 @@
package loopback
import (
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"context"
"fmt"
"io"
@ -46,6 +42,11 @@ import (
"path/filepath"
"strings"
"syscall"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
)
//LoopbackFS component Config specifications:
@ -68,7 +69,7 @@ type LoopbackFSOptions struct {
Path string `config:"path"`
}
func (lfs *LoopbackFS) Configure() error {
func (lfs *LoopbackFS) Configure(_ bool) error {
conf := LoopbackFSOptions{}
err := config.UnmarshalKey(compName, &conf)
if err != nil {
@ -158,6 +159,43 @@ func (lfs *LoopbackFS) ReadDir(options internal.ReadDirOptions) ([]*internal.Obj
return attrList, nil
}
// TODO: we can make this more realistic by generating a continuation token and splitting the streamed directory to mimic storage
func (lfs *LoopbackFS) StreamDir(options internal.StreamDirOptions) ([]*internal.ObjAttr, string, error) {
if options.Token == "na" {
return nil, "", nil
}
log.Trace("LoopbackFS::StreamDir : name=%s", options.Name)
attrList := make([]*internal.ObjAttr, 0)
path := filepath.Join(lfs.path, options.Name)
log.Debug("LoopbackFS: StreamDir requested for %s", path)
files, err := ioutil.ReadDir(path)
if err != nil {
log.Err("LoopbackFS: StreamDir error[%s]", err)
return nil, "", err
}
log.Debug("LoopbackFS: StreamDir on %s returned %d items", path, len(files))
for _, file := range files {
attr := &internal.ObjAttr{
Path: filepath.Join(options.Name, file.Name()),
Name: file.Name(),
Size: file.Size(),
Mode: file.Mode(),
Mtime: file.ModTime(),
}
attr.Flags.Set(internal.PropFlagMetadataRetrieved)
attr.Flags.Set(internal.PropFlagModeDefault)
if file.IsDir() {
attr.Flags.Set(internal.PropFlagIsDir)
}
attrList = append(attrList, attr)
}
return attrList, "", nil
}
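A hedged sketch of how a caller pages through StreamDir (field names from the code above; given a configured *LoopbackFS lfs, this implementation always returns an empty continuation token, so the loop runs once):
token := ""
for {
	attrs, next, err := lfs.StreamDir(internal.StreamDirOptions{Name: "dir", Token: token})
	if err != nil {
		break
	}
	_ = attrs // consume this batch
	if next == "" {
		break // no more pages
	}
	token = next
}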
func (lfs *LoopbackFS) RenameDir(options internal.RenameDirOptions) error {
log.Trace("LoopbackFS::RenameDir : %s -> %s", options.Src, options.Dst)
oldPath := filepath.Join(lfs.path, options.Src)
@ -409,7 +447,6 @@ func (lfs *LoopbackFS) Chown(options internal.ChownOptions) error {
}
func (lfs *LoopbackFS) InvalidateObject(_ string) {
return
}
func NewLoopbackFSComponent() internal.Component {

View file

@ -34,13 +34,14 @@
package loopback
import (
"blobfuse2/internal"
"context"
"fmt"
"os"
"path/filepath"
"testing"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)

View file

@ -34,8 +34,8 @@
package stream
import (
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
)
type StreamConnection interface {
@ -58,11 +58,11 @@ func NewStreamConnection(cfg StreamOptions, stream *Stream) StreamConnection {
if cfg.readOnly {
r := ReadCache{}
r.Stream = stream
r.Configure(cfg)
_ = r.Configure(cfg)
return &r
}
rw := ReadWriteCache{}
rw.Stream = stream
rw.Configure(cfg)
_ = rw.Configure(cfg)
return &rw
}

View file

@ -34,13 +34,14 @@
package stream
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"io"
"sync/atomic"
"syscall"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
)
type ReadCache struct {
@ -185,7 +186,7 @@ func (r *ReadCache) ReadInBuffer(options internal.ReadInBufferOptions) (int, err
func (r *ReadCache) CloseFile(options internal.CloseFileOptions) error {
log.Trace("Stream::CloseFile : name=%s, handle=%d", options.Handle.Path, options.Handle.ID)
r.NextComponent().CloseFile(options)
_ = r.NextComponent().CloseFile(options)
if !r.StreamOnly && !options.Handle.CacheObj.StreamOnly {
options.Handle.CacheObj.Lock()
defer options.Handle.CacheObj.Unlock()

View file

@ -33,11 +33,6 @@
package stream
import (
"blobfuse2/common"
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"context"
"crypto/rand"
"os"
@ -47,6 +42,12 @@ import (
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
@ -70,12 +71,12 @@ const MB = 1024 * 1024
// Helper methods for setup and getting options/data ========================================
func newTestStream(next internal.Component, configuration string, ro bool) (*Stream, error) {
config.ReadConfigFromReader(strings.NewReader(configuration))
_ = config.ReadConfigFromReader(strings.NewReader(configuration))
// we must be in read-only mode for read stream
config.SetBool("read-only", ro)
stream := NewStreamComponent()
stream.SetNextComponent(next)
err := stream.Configure()
err := stream.Configure(true)
return stream.(*Stream), err
}
@ -86,7 +87,7 @@ func (suite *streamTestSuite) setupTestHelper(config string, ro bool) {
suite.mock = internal.NewMockComponent(suite.mockCtrl)
suite.stream, err = newTestStream(suite.mock, config, ro)
suite.assert.Equal(err, nil)
suite.stream.Start(context.Background())
_ = suite.stream.Start(context.Background())
}
func (suite *streamTestSuite) SetupTest() {
@ -98,7 +99,7 @@ func (suite *streamTestSuite) SetupTest() {
}
func (suite *streamTestSuite) cleanupTest() {
suite.stream.Stop()
_ = suite.stream.Stop()
suite.mockCtrl.Finish()
}
@ -118,7 +119,7 @@ func (suite *streamTestSuite) getRequestOptions(fileIndex int, handle *handlemap
// return data buffer populated with data of the given size
func getBlockData(suite *streamTestSuite, size int) *[]byte {
dataBuffer := make([]byte, size)
rand.Read(dataBuffer)
_, _ = rand.Read(dataBuffer)
return &dataBuffer
}
@ -131,17 +132,17 @@ func getCachedBlock(suite *streamTestSuite, offset int64, handle *handlemap.Hand
// Concurrency helpers with wait group terminations ========================================
func asyncReadInBuffer(suite *streamTestSuite, readInBufferOptions internal.ReadInBufferOptions) {
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
wg.Done()
}
func asyncOpenFile(suite *streamTestSuite, openFileOptions internal.OpenFileOptions) {
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
wg.Done()
}
func asyncCloseFile(suite *streamTestSuite, closeFileOptions internal.CloseFileOptions) {
suite.stream.CloseFile(closeFileOptions)
_ = suite.stream.CloseFile(closeFileOptions)
wg.Done()
}
@ -223,7 +224,7 @@ func (suite *streamTestSuite) TestCacheOnOpenFile() {
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
@ -258,7 +259,7 @@ func (suite *streamTestSuite) TestFileKeyEviction() {
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(i, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
}
@ -278,12 +279,12 @@ func (suite *streamTestSuite) TestBlockEviction() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
_, readInBufferOptions, _ = suite.getRequestOptions(0, handle, false, int64(100*MB), 16*MB, 0)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
// we expect our first block to have been evicted
assertBlockNotCached(suite, 0, handle)
@ -303,15 +304,15 @@ func (suite *streamTestSuite) TestHandles() {
closeFileOptions := internal.CloseFileOptions{Handle: handle}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
suite.stream.CloseFile(closeFileOptions)
_ = suite.stream.CloseFile(closeFileOptions)
// we expect to call read in buffer again since we cleaned the cache after the file was closed
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
}
func (suite *streamTestSuite) TestStreamOnlyHandleLimit() {
@ -327,21 +328,21 @@ func (suite *streamTestSuite) TestStreamOnlyHandleLimit() {
closeFileOptions := internal.CloseFileOptions{Handle: handle1}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleNotStreamOnly(suite, handle1)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle2, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleStreamOnly(suite, handle2)
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
suite.stream.CloseFile(closeFileOptions)
_ = suite.stream.CloseFile(closeFileOptions)
// we expect to call read in buffer again since we cleaned the cache after the file was closed
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle3, nil)
readInBufferOptions.Handle = handle3
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleNotStreamOnly(suite, handle3)
}
@ -357,7 +358,7 @@ func (suite *streamTestSuite) TestBlockDataOverlap() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
// options of our request from the stream component
@ -365,7 +366,7 @@ func (suite *streamTestSuite) TestBlockDataOverlap() {
// options the stream component should request for the second block
_, streamMissingBlockReadInBufferOptions, _ := suite.getRequestOptions(0, handle, false, int64(100*MB), 16*MB, 0)
suite.mock.EXPECT().ReadInBuffer(streamMissingBlockReadInBufferOptions).Return(int(16*MB), nil)
suite.stream.ReadInBuffer(userReadInBufferOptions)
_, _ = suite.stream.ReadInBuffer(userReadInBufferOptions)
// we expect 0-16MB and 16MB-32MB to be cached since our second request is at offset 1MB
@ -387,7 +388,7 @@ func (suite *streamTestSuite) TestFileSmallerThanBlockSize() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
// we expect our request to be 15MB
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(15*MB), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
blk := getCachedBlock(suite, 0, handle)
@ -409,7 +410,7 @@ func (suite *streamTestSuite) TestEmptyFile() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
// we expect our request to be 0
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(0), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
blk := getCachedBlock(suite, 0, handle)
@ -430,11 +431,11 @@ func (suite *streamTestSuite) TestCachePurge() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
}
suite.stream.Stop()
_ = suite.stream.Stop()
assertBlockCached(suite, 0, handle_1)
assertBlockCached(suite, 0, handle_2)
}
@ -457,10 +458,10 @@ func (suite *streamTestSuite) TestCachedData() {
if off == 0 {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle_1, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
} else {
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
}
assertBlockCached(suite, off*MB, handle_1)
@ -471,12 +472,12 @@ func (suite *streamTestSuite) TestCachedData() {
// now let's assert that it doesn't call next component and that the data retrieved is accurate
// case1: data within a cached block
_, readInBufferOptions, dataBuffer = suite.getRequestOptions(0, handle_1, true, int64(32*MB), int64(2*MB), int64(3*MB))
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
suite.assert.Equal(data[2*MB:3*MB], *dataBuffer)
// case2: data cached within two blocks
_, readInBufferOptions, dataBuffer = suite.getRequestOptions(0, handle_1, true, int64(32*MB), int64(14*MB), int64(20*MB))
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
suite.assert.Equal(data[14*MB:20*MB], *dataBuffer)
}
@ -499,10 +500,10 @@ func (suite *streamTestSuite) TestAsyncReadAndEviction() {
if off == 0 {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle_1, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
} else {
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
}
assertBlockCached(suite, off*MB, handle_1)
@ -512,7 +513,7 @@ func (suite *streamTestSuite) TestAsyncReadAndEviction() {
// test concurrent data access to the same file
// call 1: data within a cached block
_, readInBufferOptions, blockOneDataBuffer = suite.getRequestOptions(0, handle_1, true, int64(16*MB), int64(2*MB), int64(3*MB))
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
wg.Add(2)
// call 2: data cached within two blocks

View file

@ -34,16 +34,16 @@
package stream
import (
"blobfuse2/common"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"encoding/base64"
"errors"
"io"
"strings"
"sync/atomic"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/pbnjay/memory"
)
@ -98,8 +98,6 @@ func (rw *ReadWriteCache) OpenFile(options internal.OpenFileOptions) (*handlemap
func (rw *ReadWriteCache) ReadInBuffer(options internal.ReadInBufferOptions) (int, error) {
// log.Trace("Stream::ReadInBuffer : name=%s, handle=%d, offset=%d", options.Handle.Path, options.Handle.ID, options.Offset)
options.Handle.CacheObj.Lock()
defer options.Handle.CacheObj.Unlock()
if !rw.StreamOnly && options.Handle.CacheObj.StreamOnly {
err := rw.createHandleCache(options.Handle)
if err != nil {
@ -114,6 +112,8 @@ func (rw *ReadWriteCache) ReadInBuffer(options internal.ReadInBufferOptions) (in
}
return data, err
}
options.Handle.CacheObj.Lock()
defer options.Handle.CacheObj.Unlock()
if atomic.LoadInt64(&options.Handle.Size) == 0 {
return 0, nil
}
@ -126,8 +126,6 @@ func (rw *ReadWriteCache) ReadInBuffer(options internal.ReadInBufferOptions) (in
func (rw *ReadWriteCache) WriteFile(options internal.WriteFileOptions) (int, error) {
// log.Trace("Stream::WriteFile : name=%s, handle=%d, offset=%d", options.Handle.Path, options.Handle.ID, options.Offset)
options.Handle.CacheObj.Lock()
defer options.Handle.CacheObj.Unlock()
if !rw.StreamOnly && options.Handle.CacheObj.StreamOnly {
err := rw.createHandleCache(options.Handle)
if err != nil {
@ -142,6 +140,8 @@ func (rw *ReadWriteCache) WriteFile(options internal.WriteFileOptions) (int, err
}
return data, err
}
options.Handle.CacheObj.Lock()
defer options.Handle.CacheObj.Unlock()
written, err := rw.readWriteBlocks(options.Handle, options.Offset, options.Data, true)
if err != nil {
log.Err("Stream::WriteFile : error failed to write data to %s: [%s]", options.Handle.Path, err.Error())
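In both ReadInBuffer and WriteFile the per-handle lock now moves below the stream-only handling, so requests that are proxied straight to the next component never contend on CacheObj. A sketch of the resulting shape, assuming (as the surrounding code suggests) that the stream-only branch returns the proxied data before any block-cache state is touched; the helper name is hypothetical:

// resolve stream-only handling first, lock only for real block-cache work
func (rw *ReadWriteCache) readInBufferSketch(options internal.ReadInBufferOptions) (int, error) {
	if !rw.StreamOnly && options.Handle.CacheObj.StreamOnly {
		// promote the handle to a cached handle before serving from blocks
		if err := rw.createHandleCache(options.Handle); err != nil {
			return 0, err
		}
	}
	if rw.StreamOnly {
		// pass-through path: no cached blocks, hence no lock needed
		return rw.NextComponent().ReadInBuffer(options)
	}
	options.Handle.CacheObj.Lock()
	defer options.Handle.CacheObj.Unlock()
	return rw.readWriteBlocks(options.Handle, options.Offset, options.Data, false)
}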
@ -151,27 +151,26 @@ func (rw *ReadWriteCache) WriteFile(options internal.WriteFileOptions) (int, err
func (rw *ReadWriteCache) TruncateFile(options internal.TruncateFileOptions) error {
log.Trace("Stream::TruncateFile : name=%s, size=%d", options.Name, options.Size)
var err error
if !rw.StreamOnly {
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
if handle.Path == options.Name {
err := rw.purge(handle, options.Size, true)
if err != nil {
log.Err("Stream::TruncateFile : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
return false
}
}
}
return true
})
if err != nil {
return err
}
}
err = rw.NextComponent().TruncateFile(options)
// if !rw.StreamOnly {
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if handle.Path == options.Name {
// err := rw.purge(handle, options.Size, true)
// if err != nil {
// log.Err("Stream::TruncateFile : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// if err != nil {
// return err
// }
// }
err := rw.NextComponent().TruncateFile(options)
if err != nil {
log.Err("Stream::TruncateFile : error truncating file %s [%s]", options.Name, err.Error())
return err
@ -181,25 +180,27 @@ func (rw *ReadWriteCache) TruncateFile(options internal.TruncateFileOptions) err
func (rw *ReadWriteCache) RenameFile(options internal.RenameFileOptions) error {
log.Trace("Stream::RenameFile : name=%s", options.Src)
var err error
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
if handle.Path == options.Src {
err := rw.purge(handle, -1, true)
if err != nil {
log.Err("Stream::RenameFile : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
return false
}
}
}
return true
})
if err != nil {
return err
}
err = rw.NextComponent().RenameFile(options)
// if !rw.StreamOnly {
// var err error
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if handle.Path == options.Src {
// err := rw.purge(handle, -1, true)
// if err != nil {
// log.Err("Stream::RenameFile : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// if err != nil {
// return err
// }
// }
err := rw.NextComponent().RenameFile(options)
if err != nil {
log.Err("Stream::RenameFile : error renaming file %s [%s]", options.Src, err.Error())
return err
@ -225,20 +226,22 @@ func (rw *ReadWriteCache) CloseFile(options internal.CloseFileOptions) error {
func (rw *ReadWriteCache) DeleteFile(options internal.DeleteFileOptions) error {
log.Trace("Stream::DeleteFile : name=%s", options.Name)
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
if handle.Path == options.Name {
err := rw.purge(handle, -1, false)
if err != nil {
log.Err("Stream::DeleteFile : failed to purge handle cache %s [%s]", handle.Path, err.Error())
return false
}
}
}
return true
})
// if !rw.StreamOnly {
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if handle.Path == options.Name {
// err := rw.purge(handle, -1, false)
// if err != nil {
// log.Err("Stream::DeleteFile : failed to purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// }
err := rw.NextComponent().DeleteFile(options)
if err != nil {
log.Err("Stream::DeleteFile : error deleting file %s [%s]", options.Name, err.Error())
@ -249,20 +252,22 @@ func (rw *ReadWriteCache) DeleteFile(options internal.DeleteFileOptions) error {
func (rw *ReadWriteCache) DeleteDirectory(options internal.DeleteDirOptions) error {
log.Trace("Stream::DeleteDirectory : name=%s", options.Name)
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
if strings.HasPrefix(handle.Path, options.Name) {
err := rw.purge(handle, -1, false)
if err != nil {
log.Err("Stream::DeleteDirectory : failed to purge handle cache %s [%s]", handle.Path, err.Error())
return false
}
}
}
return true
})
// if !rw.StreamOnly {
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if strings.HasPrefix(handle.Path, options.Name) {
// err := rw.purge(handle, -1, false)
// if err != nil {
// log.Err("Stream::DeleteDirectory : failed to purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// }
err := rw.NextComponent().DeleteDir(options)
if err != nil {
log.Err("Stream::DeleteDirectory : error deleting directory %s [%s]", options.Name, err.Error())
@ -273,25 +278,27 @@ func (rw *ReadWriteCache) DeleteDirectory(options internal.DeleteDirOptions) err
func (rw *ReadWriteCache) RenameDirectory(options internal.RenameDirOptions) error {
log.Trace("Stream::RenameDirectory : name=%s", options.Src)
var err error
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
if strings.HasPrefix(handle.Path, options.Src) {
err := rw.purge(handle, -1, true)
if err != nil {
log.Err("Stream::RenameDirectory : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
return false
}
}
}
return true
})
if err != nil {
return err
}
err = rw.NextComponent().RenameDir(options)
// if !rw.StreamOnly {
// var err error
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if strings.HasPrefix(handle.Path, options.Src) {
// err := rw.purge(handle, -1, true)
// if err != nil {
// log.Err("Stream::RenameDirectory : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// if err != nil {
// return err
// }
// }
err := rw.NextComponent().RenameDir(options)
if err != nil {
log.Err("Stream::RenameDirectory : error renaming directory %s [%s]", options.Src, err.Error())
return err
@ -302,18 +309,20 @@ func (rw *ReadWriteCache) RenameDirectory(options internal.RenameDirOptions) err
// Stop : Stop the component functionality and kill all threads started
func (rw *ReadWriteCache) Stop() error {
log.Trace("Stopping component : %s", rw.Name())
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
err := rw.purge(handle, -1, false)
if err != nil {
log.Err("Stream::Stop : failed to purge handle cache %s [%s]", handle.Path, err.Error())
return false
if !rw.StreamOnly {
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
err := rw.purge(handle, -1, false)
if err != nil {
log.Err("Stream::Stop : failed to purge handle cache %s [%s]", handle.Path, err.Error())
return false
}
}
}
return true
})
return true
})
}
return nil
}
@ -444,7 +453,7 @@ func (rw *ReadWriteCache) readWriteBlocks(handle *handlemap.Handle, offset int64
} else if write {
emptyByteLength := offset - lastBlock.EndIndex
// if the data to append plus the last block's existing data does not exceed the block size, just append to the last block
if (lastBlock.EndIndex-lastBlock.StartIndex)+(emptyByteLength+dataLeft) <= rw.BlockSize {
if (lastBlock.EndIndex-lastBlock.StartIndex)+(emptyByteLength+dataLeft) <= rw.BlockSize || lastBlock.EndIndex == 0 {
_, _, err := rw.getBlock(handle, lastBlock)
if err != nil {
return dataRead, err
@ -460,10 +469,6 @@ func (rw *ReadWriteCache) readWriteBlocks(handle *handlemap.Handle, offset int64
lastBlock.Flags.Set(common.DirtyBlock)
atomic.StoreInt64(&handle.Size, lastBlock.EndIndex)
dataRead += int(dataLeft)
err = rw.NextComponent().FlushFile(internal.FlushFileOptions{Handle: handle})
if err != nil {
return dataRead, err
}
return dataRead, nil
}
blk := &common.Block{
@ -481,8 +486,7 @@ func (rw *ReadWriteCache) readWriteBlocks(handle *handlemap.Handle, offset int64
}
atomic.StoreInt64(&handle.Size, blk.EndIndex)
dataRead += int(dataCopied)
err = rw.NextComponent().FlushFile(internal.FlushFileOptions{Handle: handle})
return dataRead, err
return dataRead, nil
} else {
return dataRead, nil
}
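Two behavioural changes land in readWriteBlocks: the eager FlushFile after every write is gone (blocks are only flagged common.DirtyBlock and flushed later, e.g. on close), and the append-in-place check now also accepts an empty last block. A sketch of just that widened predicate, restated from the condition above (the helper name is hypothetical):

// append in place when existing data + gap up to the offset + new bytes
// still fit in one block, or when the last block is empty (EndIndex == 0)
func canAppendToLastBlock(lastBlock *common.Block, offset, dataLeft, blockSize int64) bool {
	emptyByteLength := offset - lastBlock.EndIndex
	fits := (lastBlock.EndIndex-lastBlock.StartIndex)+(emptyByteLength+dataLeft) <= blockSize
	return fits || lastBlock.EndIndex == 0
}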

View file

@ -34,12 +34,13 @@
package stream
import (
"blobfuse2/common"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"os"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/stretchr/testify/suite"
)
@ -62,6 +63,126 @@ func (suite *streamTestSuite) TestWriteConfig() {
suite.assert.EqualValues(true, suite.stream.StreamOnly)
}
// ============================================== stream only tests ========================================
func (suite *streamTestSuite) TestStreamOnlyOpenFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set the handle limit to 0 so the component falls back to stream-only mode
config := "stream:\n block-size-mb: 4\n handle-buffer-size-mb: 32\n handle-limit: 0\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyCloseFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set the handle buffer size to 0 so the component falls back to stream-only mode
config := "stream:\n block-size-mb: 4\n handle-buffer-size-mb: 0\n handle-limit: 10\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
closeFileOptions := internal.CloseFileOptions{Handle: handle1}
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
_ = suite.stream.CloseFile(closeFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyCreateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set the block size to 0 so the component falls back to stream-only mode
config := "stream:\n block-size-mb: 0\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
createFileoptions := internal.CreateFileOptions{Name: handle1.Path, Mode: 0777}
suite.mock.EXPECT().CreateFile(createFileoptions).Return(handle1, nil)
_, _ = suite.stream.CreateFile(createFileoptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyDeleteFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set the block size to 0 so the component falls back to stream-only mode
config := "stream:\n block-size-mb: 0\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
deleteFileOptions := internal.DeleteFileOptions{Name: handle1.Path}
suite.mock.EXPECT().DeleteFile(deleteFileOptions).Return(nil)
_ = suite.stream.DeleteFile(deleteFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyRenameFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set the block size to 0 so the component falls back to stream-only mode
config := "stream:\n block-size-mb: 0\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
renameFileOptions := internal.RenameFileOptions{Src: handle1.Path, Dst: handle1.Path + "new"}
suite.mock.EXPECT().RenameFile(renameFileOptions).Return(nil)
_ = suite.stream.RenameFile(renameFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyRenameDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set the block size to 0 so the component falls back to stream-only mode
config := "stream:\n block-size-mb: 0\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
renameDirOptions := internal.RenameDirOptions{Src: "/test/path", Dst: "/test/path_new"}
suite.mock.EXPECT().RenameDir(renameDirOptions).Return(nil)
_ = suite.stream.RenameDir(renameDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyDeleteDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set the block size to 0 so the component falls back to stream-only mode
config := "stream:\n block-size-mb: 0\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
deleteDirOptions := internal.DeleteDirOptions{Name: "/test/path"}
suite.mock.EXPECT().DeleteDir(deleteDirOptions).Return(nil)
_ = suite.stream.DeleteDir(deleteDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyTruncateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set the block size to 0 so the component falls back to stream-only mode
config := "stream:\n block-size-mb: 0\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
truncateFileOptions := internal.TruncateFileOptions{Name: handle1.Path}
suite.mock.EXPECT().TruncateFile(truncateFileOptions).Return(nil)
_ = suite.stream.TruncateFile(truncateFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
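Each stream-only test above zeroes exactly one of the three stream limits, and the assertions show that any zero limit flips the component into stream-only mode. A sketch of the inline YAML the tests hand to newTestStream; reading a zero as "force stream-only" is inferred from these assertions rather than from documented behaviour:

// mirrors the inline config above with the block size zeroed; the other
// variants put the zero on handle-buffer-size-mb or handle-limit instead
func streamOnlyConfig() string {
	return "stream:\n" +
		" block-size-mb: 0\n" +
		" handle-buffer-size-mb: 32\n" +
		" handle-limit: 1\n"
}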
// ============================================================================ read tests ====================================================
// test small file caching
func (suite *streamTestSuite) TestCacheSmallFileOnOpen() {
defer suite.cleanupTest()
@ -80,7 +201,7 @@ func (suite *streamTestSuite) TestCacheSmallFileOnOpen() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
@ -99,7 +220,7 @@ func (suite *streamTestSuite) TestCacheSmallFileOnOpen() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
@ -123,7 +244,7 @@ func (suite *streamTestSuite) TestOpenLargeFile() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
@ -147,7 +268,7 @@ func (suite *streamTestSuite) TestStreamOnly() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleNotStreamOnly(suite, handle)
// create new handle
@ -159,7 +280,7 @@ func (suite *streamTestSuite) TestStreamOnly() {
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
@ -183,7 +304,7 @@ func (suite *streamTestSuite) TestReadLargeFileBlocks() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle1)
assertNumberOfCachedFileBlocks(suite, 0, handle1)
@ -206,7 +327,7 @@ func (suite *streamTestSuite) TestReadLargeFileBlocks() {
Offset: 1 * MB,
Data: make([]byte, 1*MB)}).Return(len(readInBufferOptions.Data), nil)
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle1)
assertBlockCached(suite, 1*MB, handle1)
@ -235,7 +356,7 @@ func (suite *streamTestSuite) TestPurgeOnClose() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
@ -243,7 +364,7 @@ func (suite *streamTestSuite) TestPurgeOnClose() {
suite.mock.EXPECT().FlushFile(internal.FlushFileOptions{Handle: handle}).Return(nil)
suite.mock.EXPECT().CloseFile(internal.CloseFileOptions{Handle: handle}).Return(nil)
suite.stream.CloseFile(internal.CloseFileOptions{Handle: handle})
_ = suite.stream.CloseFile(internal.CloseFileOptions{Handle: handle})
assertBlockNotCached(suite, 0, handle)
}
@ -273,7 +394,7 @@ func (suite *streamTestSuite) TestWriteToSmallFileEviction() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
@ -283,8 +404,7 @@ func (suite *streamTestSuite) TestWriteToSmallFileEviction() {
Offset: 1 * MB,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().FlushFile(internal.FlushFileOptions{Handle: handle}).Return(nil)
suite.stream.WriteFile(writeFileOptions)
_, _ = suite.stream.WriteFile(writeFileOptions)
assertBlockNotCached(suite, 0, handle)
assertBlockCached(suite, 1*MB, handle)
@ -318,10 +438,10 @@ func (suite *streamTestSuite) TestLargeFileEviction() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
@ -334,7 +454,7 @@ func (suite *streamTestSuite) TestLargeFileEviction() {
}
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 1*MB, handle)
assertNumberOfCachedFileBlocks(suite, 2, handle)
@ -345,11 +465,11 @@ func (suite *streamTestSuite) TestLargeFileEviction() {
Offset: 1*MB + 2,
Data: make([]byte, 2),
}
suite.stream.WriteFile(writeFileOptions)
_, _ = suite.stream.WriteFile(writeFileOptions)
// write to first block
writeFileOptions.Offset = 2
suite.stream.WriteFile(writeFileOptions)
_, _ = suite.stream.WriteFile(writeFileOptions)
// append to file
writeFileOptions.Offset = 2*MB + 4
@ -360,9 +480,8 @@ func (suite *streamTestSuite) TestLargeFileEviction() {
block2.Flags.Clear(common.DirtyBlock)
}
suite.mock.EXPECT().FlushFile(internal.FlushFileOptions{Handle: handle}).Do(callbackFunc).Return(nil)
suite.mock.EXPECT().FlushFile(internal.FlushFileOptions{Handle: handle}).Return(nil)
suite.stream.WriteFile(writeFileOptions)
_, _ = suite.stream.WriteFile(writeFileOptions)
assertBlockCached(suite, 0, handle)
assertBlockCached(suite, 2*MB, handle)
@ -388,7 +507,7 @@ func (suite *streamTestSuite) TestStreamOnlyHandle() {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle1)
assertNumberOfCachedFileBlocks(suite, 0, handle1)
@ -397,7 +516,7 @@ func (suite *streamTestSuite) TestStreamOnlyHandle() {
handle2 := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[0]}
openFileOptions = internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle2, nil)
suite.stream.OpenFile(openFileOptions)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle2)
assertNumberOfCachedFileBlocks(suite, 0, handle2)
@ -408,7 +527,7 @@ func (suite *streamTestSuite) TestStreamOnlyHandle() {
closeFileOptions := internal.CloseFileOptions{Handle: handle1}
suite.mock.EXPECT().FlushFile(internal.FlushFileOptions{Handle: handle1}).Return(nil)
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
suite.stream.CloseFile(closeFileOptions)
_ = suite.stream.CloseFile(closeFileOptions)
// get block for second handle and confirm it gets cached
readInBufferOptions := internal.ReadInBufferOptions{
@ -422,7 +541,7 @@ func (suite *streamTestSuite) TestStreamOnlyHandle() {
Handle: handle2,
Offset: 0,
Data: make([]byte, 1*MB)}).Return(len(readInBufferOptions.Data), nil)
suite.stream.ReadInBuffer(readInBufferOptions)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle2)
assertNumberOfCachedFileBlocks(suite, 1, handle2)
@ -446,10 +565,68 @@ func (suite *streamTestSuite) TestCreateFile() {
suite.mock.EXPECT().CreateFile(createFileoptions).Return(handle1, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.stream.CreateFile(createFileoptions)
_, _ = suite.stream.CreateFile(createFileoptions)
assertHandleNotStreamOnly(suite, handle1)
}
func (suite *streamTestSuite) TestTruncateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 1, Path: fileNames[0]}
truncateFileOptions := internal.TruncateFileOptions{Name: handle1.Path}
suite.mock.EXPECT().TruncateFile(truncateFileOptions).Return(nil)
_ = suite.stream.TruncateFile(truncateFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
}
func (suite *streamTestSuite) TestRenameFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
renameFileOptions := internal.RenameFileOptions{Src: handle1.Path, Dst: handle1.Path + "new"}
suite.mock.EXPECT().RenameFile(renameFileOptions).Return(nil)
_ = suite.stream.RenameFile(renameFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
}
func (suite *streamTestSuite) TestRenameDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
renameDirOptions := internal.RenameDirOptions{Src: "/test/path", Dst: "/test/path_new"}
suite.mock.EXPECT().RenameDir(renameDirOptions).Return(nil)
_ = suite.stream.RenameDir(renameDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
}
func (suite *streamTestSuite) TestDeleteDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n handle-buffer-size-mb: 32\n handle-limit: 1\n"
suite.setupTestHelper(config, false)
deleteDirOptions := internal.DeleteDirOptions{Name: "/test/path"}
suite.mock.EXPECT().DeleteDir(deleteDirOptions).Return(nil)
_ = suite.stream.DeleteDir(deleteDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
}
// func (suite *streamTestSuite) TestFlushFile() {
// }

View file

@ -34,14 +34,15 @@
package stream
import (
"blobfuse2/common/config"
"blobfuse2/common/log"
"blobfuse2/internal"
"blobfuse2/internal/handlemap"
"context"
"errors"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/pbnjay/memory"
)
@ -63,9 +64,8 @@ type StreamOptions struct {
}
const (
compName = "stream"
mb = 1024 * 1024
defaultDiskTimeoutSec = (30 * 60)
compName = "stream"
mb = 1024 * 1024
)
var _ internal.Component = &Stream{}
@ -91,7 +91,7 @@ func (st *Stream) Start(ctx context.Context) error {
return nil
}
func (st *Stream) Configure() error {
func (st *Stream) Configure(_ bool) error {
log.Trace("Stream::Configure : %s", st.Name())
conf := StreamOptions{}
err := config.UnmarshalKey(compName, &conf)
@ -154,6 +154,10 @@ func (st *Stream) RenameDir(options internal.RenameDirOptions) error {
return st.cache.RenameDirectory(options)
}
func (st *Stream) TruncateFile(options internal.TruncateFileOptions) error {
return st.cache.TruncateFile(options)
}
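Configure now takes a boolean that Stream discards, and TruncateFile is forwarded to the cache like the other file operations. A minimal sketch of the updated signature in isolation; what the flag means is an assumption here, since this diff never reads it:

// configureSketch shows the new signature; the real Configure above
// likewise ignores the flag (hence the underscore parameter)
func (st *Stream) configureSketch(_ bool) error {
	conf := StreamOptions{}
	if err := config.UnmarshalKey(compName, &conf); err != nil {
		return err
	}
	return nil
}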
// ------------------------- Factory -------------------------------------------
// Pipeline will call this method to create your object, initialize your variables here

View file

@ -6,12 +6,12 @@ copyLine=`grep -h $searchStr LICENSE`
if [[ "$1" == "replace" ]]
then
for i in $(find -name \*.go | grep -v ./test/ | grep -v main_test.go); do
for i in $(find -name \*.go); do
result=$(grep "$searchStr" $i)
if [ $? -ne 1 ]
then
echo "Replacing in $i"
result=$(grep "+build !authtest" $i)
result=$(grep "+build" $i)
if [ $? -ne 1 ]
then
sed -i -e '3,32{R LICENSE' -e 'd}' $i
@ -22,25 +22,32 @@ then
done
else
for i in $(find -name \*.go); do
if [[ $i == *"_test.go"* ]]; then
echo "Ignoring Test Script : $i"
else
result=$(grep "$searchStr" $i)
if [ $? -eq 1 ]
result=$(grep "$searchStr" $i)
if [ $? -eq 1 ]
then
echo "Adding Copyright to $i"
result=$(grep "+build" $i)
if [ $? -ne 1 ]
then
echo "Adding Copyright to $i"
echo $result > __temp__
echo -n >> __temp__
echo "/*" >> __temp__
cat LICENSE >> __temp__
echo -e "*/" >> __temp__
tail -n+2 $i >> __temp__
else
echo "/*" > __temp__
cat LICENSE >> __temp__
echo -e "*/\n\n" >> __temp__
cat $i >> __temp__
mv __temp__ $i
else
currYear_found=$(echo $result | grep $currYear)
if [ $? -eq 1 ]
then
echo "Updating Copyright in $i"
sed -i "/$searchStr/c\\$copyLine" $i
fi
fi
mv __temp__ $i
else
currYear_found=$(echo $result | grep $currYear)
if [ $? -eq 1 ]
then
echo "Updating Copyright in $i"
sed -i "/$searchStr/c\\$copyLine" $i
fi
fi
done

7
go.mod
View file

@ -1,4 +1,4 @@
module blobfuse2
module github.com/Azure/azure-storage-fuse/v2
go 1.16
@ -7,10 +7,9 @@ require (
github.com/Azure/azure-storage-azcopy/v10 v10.13.1-0.20211218014522-24209b81028e
github.com/Azure/azure-storage-blob-go v0.13.1-0.20210823171415-e7932f52ad61
github.com/Azure/azure-storage-file-go v0.6.1-0.20201111053559-3c1754dc00a5
github.com/Azure/go-autorest/autorest v0.11.18
github.com/Azure/go-autorest/autorest/adal v0.9.14
github.com/Azure/go-autorest/autorest v0.11.27
github.com/Azure/go-autorest/autorest/adal v0.9.20
github.com/JeffreyRichter/enum v0.0.0-20180725232043-2567042f9cda
github.com/bluele/gcache v0.0.2
github.com/fsnotify/fsnotify v1.4.9
github.com/golang/mock v1.6.0
github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0 // indirect

24
go.sum
View file

@ -54,15 +54,18 @@ github.com/Azure/azure-storage-file-go v0.6.1-0.20201111053559-3c1754dc00a5 h1:a
github.com/Azure/azure-storage-file-go v0.6.1-0.20201111053559-3c1754dc00a5/go.mod h1:++L7GP2pRyUNuastZ7m02vYV69JHmqlWXfCaGoL0v4s=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.18 h1:90Y4srNYrwOtAgVo3ndrQkTYn6kf1Eg/AjTFJ8Is2aM=
github.com/Azure/go-autorest/autorest v0.11.18/go.mod h1:dSiJPy22c3u0OtOKDNttNgqpNFY/GeWa7GH/Pz56QRA=
github.com/Azure/go-autorest/autorest v0.11.27 h1:F3R3q42aWytozkV8ihzcgMO4OA4cuqr3bNlsEuF6//A=
github.com/Azure/go-autorest/autorest v0.11.27/go.mod h1:7l8ybrIdUmGqZMTD0sRtAr8NvbHjfofbf8RSP2q7w7U=
github.com/Azure/go-autorest/autorest/adal v0.9.13/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
github.com/Azure/go-autorest/autorest/adal v0.9.14 h1:G8hexQdV5D4khOXrWG2YuLCFKhWYmWD8bHYaXN5ophk=
github.com/Azure/go-autorest/autorest/adal v0.9.14/go.mod h1:W/MM4U6nLxnIskrw4UwWzlHfGjwUS50aOsc/I3yuU8M=
github.com/Azure/go-autorest/autorest/adal v0.9.18/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
github.com/Azure/go-autorest/autorest/adal v0.9.20 h1:gJ3E98kMpFB1MFqQCvA1yFab8vthOeD4VlFRQULxahg=
github.com/Azure/go-autorest/autorest/adal v0.9.20/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.1 h1:K0laFcLE6VLTOwNgSxaGbUcLPuGXlNkbVvq4cW4nIHk=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
github.com/Azure/go-autorest/autorest/mocks v0.4.2 h1:PGN4EDXnuQbojHbU0UWoNvmu9AGVwYHG9/fkDYhtAfw=
github.com/Azure/go-autorest/autorest/mocks v0.4.2/go.mod h1:Vy7OitM9Kei0i1Oj+LvyAWMXJHeKH1MVlzFugfVrmyU=
github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+ZtXWSmf4Tg=
github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
@ -80,8 +83,6 @@ github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmV
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/bketelsen/crypt v0.0.4/go.mod h1:aI6NrJ0pMGgvZKL1iVgXLnfIFJtfV+bKCoqOes/6LfM=
github.com/bluele/gcache v0.0.2 h1:WcbfdXICg7G/DGBh1PFfcirkWOQV+v077yF1pSy3DGw=
github.com/bluele/gcache v0.0.2/go.mod h1:m15KV+ECjptwSPxKhOhQoAFQVtUFjTVkc3H8o0t/fp0=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
@ -111,7 +112,6 @@ github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.m
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/form3tech-oss/jwt-go v3.2.5+incompatible h1:/l4kBbb4/vGSsdtB5nUe8L7B9mImVMaBPw9L/0TBHU8=
github.com/form3tech-oss/jwt-go v3.2.5+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
@ -124,6 +124,9 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2
github.com/go-ini/ini v1.62.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang-jwt/jwt/v4 v4.2.0 h1:besgBTC8w8HjP6NzQdxwKH9Z5oQMZ24ThTrHp3cZ8eU=
github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@ -349,8 +352,10 @@ golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201002170205-7f63de1d35b0/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210812204632-0ba0e8f03122 h1:AOT7vJYHE32m61R8d1WlcqhOO1AocesDsKpcMq+UOaA=
golang.org/x/crypto v0.0.0-20210812204632-0ba0e8f03122/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3 h1:0es+/5331RGQPcXlMfP+WrnIIS6dNnNRe0WB02W0F4M=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@ -425,8 +430,9 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d h1:20cMwl2fHAzkJMEA+8J4JgqBQcQGzbisXo31MIeenXI=
golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=

View file

@ -34,9 +34,10 @@
package internal
import (
"blobfuse2/common"
"os"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
)
func NewDirBitMap() common.BitMap16 {

Some files were not shown because too many files have changed