[C#] Improve wait handling when tail insertion triggers a flush (#446)

* WIP on UpsertAsync and AllocateAsync
* Fixed async code for upsert
* Updated FasterLog to use new allocation API
* removed NeedToWait
* Added stress test based on Parallel.For
* speed up stress test
* remove concurrentdict for tasks
* edit text
* edit output text
* add option to enable OS file buffering in MLSD
* Add scripts for benchmark run/comparison; fix kUseSmallData Uniform subset loading; modify AsyncStress to call MLSD with buffering and take cmdline args for single-threading
* Ensure last epoch suspender drains all pending actions.
* Revert incidental change to LightEpoch, for clarity.
* ProtectDrain in ephemeral loop to ensure trigger of flush action.
* Add commit thread to testcase
* Remove lock around filestream ops
* Revert "Remove lock around filestream ops"
* Added test to playground
* Alternate commit strategy using LIFO work queue for FasterLog
* FlushAsync
* Fixing commit issue
* Rename playground project to avoid clash on Test name.
* Clean up AsyncStress:
- Add commandline args for threading mode, task count, number of ops, option to enable OS read buffering, usage message
- Use ValueTask instead of Task
- Add chunked operations
- Add option for sync vs. async lambdas for Parallel.For
- Report TailAddress and pending upsert count
Update .gitignore to ignore LaunchSettings.json anywhere
Fix size setting in AsyncPool ctor
* Add fast path to BlockAllocate.
* Use deleteOnClose by default for hlog
* Ensure FASTER.benchmark -k checkpoints are snapshot; use session pool in AsyncStress Chunk methods; make BlockAllocate fast path inlined
* Add session.CompletePending overload that returns a CompletedOutputs; use this in FasterWrapper; add UTs for it and modify TestTypes to add AdvancedFunctions and variations of this and Functions that take a context
* Added try/catch around work invocation.
* Change to CompletedOutputEnumerator for speed and move it outside of FasterKV to its own file
* Add RecordInfo and address to CompletedOutput; add CloneAndBuild param comment in run_benchmark.ps1; update docs; add docs folder to .sln
* add session.CompletePendingWithOutputsAsync();
* Replace SpinWait with semaphore wait in InternalCompletePending
* Cleanup, make CompletePendingAsync similar to CompletePending.
* Add CompletePendingWithOutputs() as separate call
* FasterWrapper made generic

Co-authored-by: TedHartMS <15467143+TedHartMS@users.noreply.github.com>
Badrish Chandramouli 2021-04-12 17:20:44 -07:00 committed by GitHub
Parent 0a92c0f697
Commit ebb5e783df
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
45 changed files: 2988 additions and 452 deletions
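Several bullets above add a "completed outputs" API (CompletePendingWithOutputs and an async variant). A minimal sketch of the iteration pattern, mirroring the FasterWrapper.Read code later in this diff (session setup assumed):

// Sketch only: complete pending operations and iterate their outputs.
// 'session' is an existing ClientSession; wait: true blocks until all pending ops complete.
session.CompletePendingWithOutputs(out var completedOutputs, wait: true);
while (completedOutputs.Next())
{
    // Each completed operation exposes Key and Output (and, per this PR, RecordInfo and address).
    Console.WriteLine($"{completedOutputs.Current.Key} -> {completedOutputs.Current.Output}");
}
completedOutputs.Dispose();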

.gitignore

@ -192,4 +192,4 @@ packages/
.vs/
*.lib
nativebin/
/cs/benchmark/Properties/launchSettings.json
/cs/**/launchSettings.json


@ -59,6 +59,29 @@ Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "VersionedRead", "samples\Re
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "MemOnlyCache", "samples\MemOnlyCache\MemOnlyCache.csproj", "{998D4C78-B0C5-40FF-9BDC-716BAC8CF864}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "AsyncStress", "playground\AsyncStress\AsyncStress.csproj", "{9EFCF8C5-320B-473C-83DE-3815981D465B}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "FasterLogStress", "playground\FasterLogMLSDTest\FasterLogStress.csproj", "{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "docs", "docs", "{C60F148B-2C8D-4E7C-8620-3CCD68CE0868}"
ProjectSection(SolutionItems) = preProject
..\docs\_docs\01-quick-start-guide.md = ..\docs\_docs\01-quick-start-guide.md
..\docs\_docs\02-faqs.md = ..\docs\_docs\02-faqs.md
..\docs\_docs\20-fasterkv-basics.md = ..\docs\_docs\20-fasterkv-basics.md
..\docs\_docs\23-fasterkv-tuning.md = ..\docs\_docs\23-fasterkv-tuning.md
..\docs\_docs\26-fasterkv-samples.md = ..\docs\_docs\26-fasterkv-samples.md
..\docs\_docs\29-fasterkv-cpp.md = ..\docs\_docs\29-fasterkv-cpp.md
..\docs\_docs\40-fasterlog-basics.md = ..\docs\_docs\40-fasterlog-basics.md
..\docs\_docs\43-fasterlog-tuning.md = ..\docs\_docs\43-fasterlog-tuning.md
..\docs\_docs\46-fasterlog-samples.md = ..\docs\_docs\46-fasterlog-samples.md
..\docs\_docs\80-build-and-test.md = ..\docs\_docs\80-build-and-test.md
..\docs\_docs\82-code-structure.md = ..\docs\_docs\82-code-structure.md
..\docs\_docs\84-roadmap.md = ..\docs\_docs\84-roadmap.md
..\docs\_docs\90-td-introduction.md = ..\docs\_docs\90-td-introduction.md
..\docs\_docs\95-research-papers.md = ..\docs\_docs\95-research-papers.md
..\docs\_docs\96-slides-videos.md = ..\docs\_docs\96-slides-videos.md
EndProjectSection
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
@ -219,14 +242,14 @@ Global
{EBE313E5-22D2-4C74-BA1F-16B60404B335}.Release|Any CPU.Build.0 = Release|x64
{EBE313E5-22D2-4C74-BA1F-16B60404B335}.Release|x64.ActiveCfg = Release|x64
{EBE313E5-22D2-4C74-BA1F-16B60404B335}.Release|x64.Build.0 = Release|x64
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Debug|Any CPU.Build.0 = Debug|Any CPU
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Debug|x64.ActiveCfg = Debug|Any CPU
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Debug|x64.Build.0 = Debug|Any CPU
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Release|Any CPU.ActiveCfg = Release|Any CPU
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Release|Any CPU.Build.0 = Release|Any CPU
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Release|x64.ActiveCfg = Release|Any CPU
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Release|x64.Build.0 = Release|Any CPU
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Debug|Any CPU.ActiveCfg = Debug|x64
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Debug|Any CPU.Build.0 = Debug|x64
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Debug|x64.ActiveCfg = Debug|x64
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Debug|x64.Build.0 = Debug|x64
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Release|Any CPU.ActiveCfg = Release|x64
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Release|Any CPU.Build.0 = Release|x64
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Release|x64.ActiveCfg = Release|x64
{33ED9E1B-1EF0-4984-A07A-7A26C205A446}.Release|x64.Build.0 = Release|x64
{998D4C78-B0C5-40FF-9BDC-716BAC8CF864}.Debug|Any CPU.ActiveCfg = Debug|x64
{998D4C78-B0C5-40FF-9BDC-716BAC8CF864}.Debug|Any CPU.Build.0 = Debug|x64
{998D4C78-B0C5-40FF-9BDC-716BAC8CF864}.Debug|x64.ActiveCfg = Debug|x64
@ -235,6 +258,22 @@ Global
{998D4C78-B0C5-40FF-9BDC-716BAC8CF864}.Release|Any CPU.Build.0 = Release|x64
{998D4C78-B0C5-40FF-9BDC-716BAC8CF864}.Release|x64.ActiveCfg = Release|x64
{998D4C78-B0C5-40FF-9BDC-716BAC8CF864}.Release|x64.Build.0 = Release|x64
{9EFCF8C5-320B-473C-83DE-3815981D465B}.Debug|Any CPU.ActiveCfg = Debug|x64
{9EFCF8C5-320B-473C-83DE-3815981D465B}.Debug|Any CPU.Build.0 = Debug|x64
{9EFCF8C5-320B-473C-83DE-3815981D465B}.Debug|x64.ActiveCfg = Debug|x64
{9EFCF8C5-320B-473C-83DE-3815981D465B}.Debug|x64.Build.0 = Debug|x64
{9EFCF8C5-320B-473C-83DE-3815981D465B}.Release|Any CPU.ActiveCfg = Release|x64
{9EFCF8C5-320B-473C-83DE-3815981D465B}.Release|Any CPU.Build.0 = Release|x64
{9EFCF8C5-320B-473C-83DE-3815981D465B}.Release|x64.ActiveCfg = Release|x64
{9EFCF8C5-320B-473C-83DE-3815981D465B}.Release|x64.Build.0 = Release|x64
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}.Debug|Any CPU.ActiveCfg = Debug|x64
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}.Debug|Any CPU.Build.0 = Debug|x64
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}.Debug|x64.ActiveCfg = Debug|x64
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}.Debug|x64.Build.0 = Debug|x64
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}.Release|Any CPU.ActiveCfg = Release|x64
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}.Release|Any CPU.Build.0 = Release|x64
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}.Release|x64.ActiveCfg = Release|x64
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693}.Release|x64.Build.0 = Release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
@ -263,6 +302,8 @@ Global
{EBE313E5-22D2-4C74-BA1F-16B60404B335} = {62BC1134-B6E1-476A-B894-7CA278A8B6DE}
{33ED9E1B-1EF0-4984-A07A-7A26C205A446} = {62BC1134-B6E1-476A-B894-7CA278A8B6DE}
{998D4C78-B0C5-40FF-9BDC-716BAC8CF864} = {62BC1134-B6E1-476A-B894-7CA278A8B6DE}
{9EFCF8C5-320B-473C-83DE-3815981D465B} = {E6026D6A-01C5-4582-B2C1-64751490DABE}
{E8C7FB0F-38B8-468A-B1CA-8793DF8F2693} = {E6026D6A-01C5-4582-B2C1-64751490DABE}
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {A0750637-2CCB-4139-B25E-F2CE740DCFAC}


@ -69,7 +69,7 @@ namespace FASTER.benchmark
for (int i = 0; i < 8; i++)
input_[i].value = i;
device = Devices.CreateLogDevice(TestLoader.DevicePath, preallocateFile: true);
device = Devices.CreateLogDevice(TestLoader.DevicePath, preallocateFile: true, deleteOnClose: true);
if (YcsbConstants.kSmallMemoryLog)
store = new FasterKV<SpanByte, SpanByte>


@ -66,7 +66,7 @@ namespace FASTER.benchmark
for (int i = 0; i < 8; i++)
input_[i].value = i;
device = Devices.CreateLogDevice(TestLoader.DevicePath, preallocateFile: true);
device = Devices.CreateLogDevice(TestLoader.DevicePath, preallocateFile: true, deleteOnClose: true);
if (YcsbConstants.kSmallMemoryLog)
store = new FasterKV<Key, Value>


@ -170,11 +170,19 @@ namespace FASTER.benchmark
if (!initValueSet.Contains(value))
{
if (init_count >= init_keys.Length)
continue;
initValueSet.Add(value);
keySetter.Set(init_keys, init_count, value);
++init_count;
if (init_count >= init_keys.Length)
{
if (distribution == YcsbConstants.ZipfDist)
continue;
// Uniform distribution at current small-data counts is about a 1% hit rate, which is too slow here, so just modulo.
value %= init_keys.Length;
}
else
{
initValueSet.Add(value);
keySetter.Set(init_keys, init_count, value);
++init_count;
}
}
keySetter.Set(txn_keys, txn_count, value);
++txn_count;
@ -353,7 +361,7 @@ namespace FASTER.benchmark
{
Console.WriteLine($"Checkpointing FasterKV to {this.BackupPath} for fast restart");
Stopwatch sw = Stopwatch.StartNew();
store.TakeFullCheckpoint(out _);
store.TakeFullCheckpoint(out _, CheckpointType.Snapshot);
store.CompleteCheckpointAsync().GetAwaiter().GetResult();
sw.Stop();
Console.WriteLine($" Completed checkpoint in {(double)sw.ElapsedMilliseconds / 1000:N3} seconds");


@ -0,0 +1,260 @@
<#
.SYNOPSIS
Compares two directories of output from FASTER.benchmark.exe, usually run by run_benchmark.ps1.
.DESCRIPTION
See run_benchmark.ps1 for instructions on setting up a perf directory and running the benchmark parameter permutations.
Once run_benchmark.ps1 has completed, you can either run this script from the perf directory if it was copied there, or from
another machine that has access to the perf directory on the perf machine.
This script will:
1. Compare result files of the same name in the two directories (the names are set by run_benchmark.ps1 to identify the parameter permutation used in that file).
2. List any result files that were not matched.
3. Display two Grids, one showing the results of the comparison for Loading times, and one for Experiment Run times. These differences are:
a. The difference in Mean throughput in inserts or transactions (operations) per second.
b. The percentage difference in throughput. The initial ordering of the grid is by this column, descending; thus the files for which the best performance
improvement was made are shown first.
c. The difference in Standard Deviation.
d. The difference in Standard Deviation as a percentage of Mean.
e. All other parameters of the run (these are the same between the two files).
.PARAMETER OldDir
The directory containing the results of the baseline run; the result comparison is "NewDir throughput minus OldDir throughput".
.PARAMETER NewDir
The directory containing the results of the new run, with the changes to be tested for impact; the result comparison is "NewDir throughput minus OldDir throughput".
.EXAMPLE
./compare_runs.ps1 './baseline' './refactor_FASTERImpl'
#>
param (
[Parameter(Mandatory)] [String]$OldDir,
[Parameter(Mandatory)] [String]$NewDir
)
class Result : System.IComparable, System.IEquatable[Object] {
# To make things work in one class, the properties hold the Baseline, Current, and Diff values--they aren't displayed until the Diff is calculated.
[double]$BaselineMean
[double]$BaselineStdDev
[double]$CurrentMean
[double]$CurrentStdDev
[double]$MeanDiff
[double]$MeanDiffPercent
[double]$StdDevDiff
[double]$StdDevDiffPercent
[double]$Overlap
[double]$OverlapPercent
[double]$Separation
[uint]$Numa
[string]$Distribution
[int]$ReadPercent
[uint]$ThreadCount
[uint]$LockMode
[uint]$Iterations
[bool]$SmallData
[bool]$SmallMemory
[bool]$SyntheticData
Result([string]$line) {
$fields = $line.Split(';')
foreach($field in ($fields | Select-Object -skip 1)) {
$arg, $value = $field.Split(':')
$value = $value.Trim()
switch ($arg.Trim()) {
"ins/sec" { $this.MeanDiff = $this.BaselineMean = $value }
"ops/sec" { $this.MeanDiff = $this.BaselineMean = $value }
"stdev" { $this.StdDevDiff = $this.BaselineStdDev = $value }
"stdev%" { $this.StdDevDiffPercent = $value }
"n" { $this.Numa = $value }
"d" { $this.Distribution = $value }
"r" { $this.ReadPercent = $value }
"t" { $this.ThreadCount = $value }
"z" { $this.LockMode = $value }
"i" { $this.Iterations = $value }
"sd" { $this.SmallData = $value -eq "y" }
"sm" { $this.SmallMemory = $value -eq "y" }
"sy" { $this.SyntheticData = $value -eq "y" }
}
}
}
Result([Result]$other) {
$this.Numa = $other.Numa
$this.Distribution = $other.Distribution
$this.ReadPercent = $other.ReadPercent
$this.ThreadCount = $other.ThreadCount
$this.LockMode = $other.LockMode
$this.Iterations = $other.Iterations
$this.SmallData = $other.SmallData
$this.SmallMemory = $other.SmallMemory
$this.SyntheticData = $other.SyntheticData
}
[Result] CalculateDifference([Result]$newResult) {
$result = [Result]::new($newResult)
$result.MeanDiff = $newResult.MeanDiff - $this.MeanDiff
$result.MeanDiffPercent = [System.Math]::Round(($result.MeanDiff / $this.MeanDiff) * 100, 1)
$result.StdDevDiff = $newResult.StdDevDiff - $this.StdDevDiff
$result.StdDevDiffPercent = $newResult.StdDevDiffPercent - $this.StdDevDiffPercent
$result.BaselineMean = $this.BaselineMean
$result.BaselineStdDev = $this.BaselineStdDev
$result.CurrentMean = $newResult.BaselineMean
$result.CurrentStdDev = $newResult.BaselineStdDev
$oldMin = $result.BaselineMean - $result.BaselineStdDev
$oldMax = $result.BaselineMean + $result.BaselineStdDev
$newMin = $result.CurrentMean - $result.CurrentStdDev
$newMax = $result.CurrentMean + $result.CurrentStdDev
$lowestMax = [System.Math]::Min($oldMax, $newMax)
$highestMin = [System.Math]::Max($oldMin, $newMin)
$result.Overlap = $lowestMax - $highestMin
# Overlap % is the percentage of the new stddev range covered by the overlap, or 0 if there is no overlap.
# Separation is how many new stddevs separates the two stddev spaces (positive for perf gain, negative for
# perf loss), or 0 if they overlap. TODO: Calculate significance.
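# Worked example (hypothetical numbers): baseline 100 +/- 10 and current 105 +/- 10 give
# lowestMax = 110, highestMin = 95, Overlap = 15, and Overlap% = 15 / (2 * 10) * 100 = 75%.
# If current were instead 130 +/- 10, Overlap = 110 - 120 = -10, so Overlap% = 0 and
# Separation = 10 / 10 = 1.0 new stddevs (positive, i.e. a perf gain).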
if ($result.Overlap -le 0) {
$result.OverlapPercent = 0
$adjustedOverlap = ($oldMax -lt $newMin) ? -$result.Overlap : $result.Overlap
$result.Separation = [System.Math]::Round($adjustedOverlap / $result.CurrentStdDev, 1)
} else {
$result.OverlapPercent = [System.Math]::Round(($result.Overlap / ($result.CurrentStdDev * 2)) * 100, 1)
$result.Separation = 0
}
return $result
}
[int] CompareTo($other)
{
If (-Not($other -is [Result])) {
Throw "'other' is not Result"
}
# Sort in descending order
$cmp = $other.MeanDiffPercent.CompareTo($this.MeanDiffPercent)
return ($cmp -eq 0) ? $other.MeanDiff.CompareTo($this.MeanDiff) : $cmp
}
[bool] Equals($other)
{
Write-Host "in Equals"
If (-Not($other -is [Result])) {
Throw "'other' is not Result"
}
return $this.Numa -eq $other.Numa
-and $this.Distribution -eq $other.Distribution
-and $this.ReadPercent -eq $other.ReadPercent
-and $this.ThreadCount -eq $other.ThreadCount
-and $this.LockMode -eq $other.LockMode
-and $this.Iterations -eq $other.Iterations
-and $this.SmallData -eq $other.SmallData
-and $this.SmallMemory -eq $other.SmallMemory
-and $this.SyntheticData -eq $other.SyntheticData
}
[int] GetHashCode() {
return ($this.Numa, $this.Distribution, $this.ReadPercent, $this.ThreadCount, $this.LockMode,
$this.Iterations, $this.SmallData, $this.SmallMemory, $this.SyntheticData).GetHashCode();
}
}
# These have the same name format in each directory, qualified by parameters.
$oldOnlyFileNames = New-Object Collections.Generic.List[String]
$newOnlyFileNames = New-Object Collections.Generic.List[String]
$LoadResults = New-Object Collections.Generic.List[Result]
$RunResults = New-Object Collections.Generic.List[Result]
function ParseResultFile([String]$fileName) {
$loadResult = $null
$runResult = $null
foreach($line in Get-Content($fileName)) {
if ($line.StartsWith("##20;")) {
$loadResult = [Result]::new($line)
continue
}
if ($line.StartsWith("##21;")) {
$runResult = [Result]::new($line)
continue
}
}
if ($null -eq $loadResult) {
Throw "$fileName has no Load Result"
}
if ($null -eq $runResult) {
Throw "$fileName has no Run Result"
}
return ($loadResult, $runResult)
}
foreach($oldFile in Get-ChildItem "$OldDir/results_*") {
$newName = "$NewDir/$($oldFile.Name)";
if (!(Test-Path $newName)) {
$oldOnlyFileNames.Add($oldFile)
continue
}
$newFile = Get-ChildItem $newName
$oldLoadResult, $oldRunResult = ParseResultFile $oldFile.FullName
$newLoadResult, $newRunResult = ParseResultFile $newFile.FullName
$LoadResults.Add($oldLoadResult.CalculateDifference($newLoadResult))
$RunResults.Add($oldRunResult.CalculateDifference($newRunResult))
}
foreach($newFile in Get-ChildItem "$NewDir/results_*") {
$oldName = "$OldDir/$($newFile.Name)";
if (!(Test-Path $oldName)) {
$newOnlyFileNames.Add($newFile)
continue
}
}
if ($oldOnlyFileNames.Count -gt 0) {
Write-Host "The following files were found only in $OldDir"
foreach ($fileName in $oldOnlyFileNames) {
Write-Host " $fileName"
}
}
if ($newOnlyFileNames.Count -gt 0) {
Write-Host "The following files were found only in $NewDir"
foreach ($fileName in $newOnlyFileNames) {
Write-Host " $fileName"
}
}
if ($oldOnlyFileNames.Count -gt 0 -or $newOnlyFileNames.Count -gt 0) {
Start-Sleep -Seconds 3
}
$LoadResults.Sort()
$RunResults.Sort()
function RenameProperties([System.Object[]]$results) {
# Use this to rename "Percent" suffix to "%"
$results | Select-Object `
BaselineMean,
BaselineStdDev,
CurrentMean,
CurrentStdDev,
MeanDiff,
@{N='MeanDiff %';E={$_.MeanDiffPercent}},
StdDevDiff,
@{N='StdDevDiff %';E={$_.StdDevDiffPercent}},
Overlap,
@{N='Overlap %';E={$_.OverlapPercent}},
Separation,
Numa,
Distribution,
ReadPercent,
ThreadCount,
LockMode,
Iterations,
SmallData,
SmallMemory,
SyntheticData
}
RenameProperties $LoadResults | Out-GridView -Title "Loading Comparison (Inserts Per Second): $OldDir -vs- $NewDir"
RenameProperties $RunResults | Out-GridView -Title "Experiment Run Comparison(Operations Per Second): $OldDir -vs- $NewDir"
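For reference, ParseResultFile above keys off lines whose first field is ##20 (load phase) or ##21 (run phase); the remaining semicolon-separated fields match the switch in the Result constructor. A hypothetical pair of such lines (numbers made up) would look like:

##20; ins/sec: 18231450.1; stdev: 212314.5; stdev%: 1.2; n: 0; d: uniform; r: 0; t: 20; z: 0; i: 7; sd: n; sm: n; sy: n
##21; ops/sec: 60514370.8; stdev: 605143.7; stdev%: 1.0; n: 0; d: uniform; r: 0; t: 20; z: 0; i: 7; sd: n; sm: n; sy: n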


@ -0,0 +1,173 @@
<#
.SYNOPSIS
Runs one or more builds of FASTER.benchmark.exe with multiple parameter permutations and generates corresponding directories of result files named for those permutations.
.DESCRIPTION
This is intended to run performance-testing parameter permutations on one or more builds of FASTER.benchmark.exe, to be compared by compare_runs.ps1.
The default execution of this script does a performance run on all FASTER.benchmark.exes identified in ExeDirs, and places their output into correspondingly-named
result directories, to be evaluated with compare_runs.ps1.
This script functions best if you have a dedicated performance-testing machine that is not your build machine. Use the following steps:
1. Create a directory on the perf machine for your test
2. You may either copy already-built binaries (e.g. containing changes you don't want to push) to the performance directory, or supply branch names to be git-cloned and built:
A. Copy existing build: Xcopy the baseline build's Release directory to your perf folder, as well as all comparison builds. This script will start at the netcoreapp3.1 directory to traverse to FASTER.benchmark.exe. Name these folders something that indicates their role, such as 'baseline', 'master' / 'branch', etc.
-or-
B. Supply branch names to be built: In the ExeDirs argument, pass the names of all branches you want to run. For each branch name, this script will clone that branch into a directory named as that branch, build FASTER.sln for Release, and run the FASTER.benchmark.exe from its built location.
3. Copy this script and, if you will want to compare runs on the perf machine, compare_runs.ps1 to the perf folder.
4. In a remote desktop on the perf machine, change to your folder, and run this file with those directory names. See .EXAMPLE for details.
.PARAMETER ExeDirs
One or more directories from which to run FASTER.benchmark.exe builds. This is a Powershell array of strings; thus from the windows command line
the directory names should be joined by , (comma) with no spaces:
pwsh -c ./run_benchmark.ps1 './baseline','./refactor_FASTERImpl'
Single (or double) quotes are optional and may be omitted if the directory paths do not contain spaces.
.PARAMETER RunSeconds
Number of seconds to run the experiment.
Used primarily to debug changes to this script or do a quick one-off run; the default is 30 seconds.
.PARAMETER ThreadCount
Number of threads to use.
Used primarily to debug changes to this script or do a quick one-off run; the default is multiple counts as defined in the script.
.PARAMETER LockMode
Locking mode to use: 0 = No locking, 1 = RecordInfo locking
Used primarily to debug changes to this script or do a quick one-off run; the default is multiple modes as defined in the script.
.PARAMETER UseRecover
Recover the FasterKV from a checkpoint of a previous run rather than loading it from data.
Used primarily to debug changes to this script or do a quick one-off run; the default is false.
.PARAMETER CloneAndBuild
Clone the repo and switch to the branches in ExeDirs, then build these.
.EXAMPLE
pwsh -c "./run_benchmark.ps1 './baseline','./refactor_FASTERImpl'"
If run from your perf directory using the setup from .DESCRIPTION, this will create and populate the following folders:
./results/baseline
./results/refactor_FASTERImpl
You can then run compare_runs.ps1 on those two directories.
.EXAMPLE
pwsh -c "./run_benchmark.ps1 './baseline','./refactor_FASTERImpl' -RunSeconds 3 -NumThreads 8 -UseRecover"
Does a quick run (e.g. test changes to this file).
.EXAMPLE
pwsh -c "./run_benchmark.ps1 './baseline','./one_local_change','./another_local_change' <other args>"
Runs 3 directories.
.EXAMPLE
pwsh -c "./run_benchmark.ps1 master,branch_with_my_changes -CloneAndBuild <other args>"
Clones the master branch to the .\master folder, the branch_with_my_changes to the branch_with_my_changes folder, and runs those with any <other args> specified.
#>
param (
[Parameter(Mandatory=$true)] [string[]]$ExeDirs,
[Parameter(Mandatory=$false)] [int]$RunSeconds = 30,
[Parameter(Mandatory=$false)] [int]$ThreadCount = -1,
[Parameter(Mandatory=$false)] [int]$LockMode = -1,
[Parameter(Mandatory=$false)] [switch]$UseRecover,
[Parameter(Mandatory=$false)] [switch]$CloneAndBuild
)
if (-not(Test-Path d:/data)) {
throw "Cannot find d:/data"
}
$benchmarkExe = "netcoreapp3.1/win7-x64/FASTER.benchmark.exe"
if ($CloneAndBuild) {
$exeNames = [String[]]($ExeDirs | ForEach-Object{"$_/cs/benchmark/bin/x64/Release/$benchmarkExe"})
Foreach ($branch in $exeDirs) {
git clone https://github.com/microsoft/FASTER.git $branch
cd $branch
git checkout $branch
dotnet build cs/FASTER.sln -c Release
cd ..
}
} else {
$exeNames = [String[]]($ExeDirs | ForEach-Object{"$_/$benchmarkExe"})
}
Foreach ($exeName in $exeNames) {
if (Test-Path "$exeName") {
Write-Host "Found: $exeName"
continue
}
throw "Cannot find: $exeName"
}
$resultDirs = [String[]]($ExeDirs | ForEach-Object{"./results/" + (Get-Item $_).Name})
Foreach ($resultDir in $resultDirs) {
Write-Host $resultDir
if (Test-Path $resultDir) {
throw "$resultDir already exists (or possible duplication of leaf name in ExeDirs)"
}
New-Item "$resultDir" -ItemType Directory
}
$iterations = 7
$distributions = ("uniform", "zipf")
$readPercents = (0, 100)
$threadCounts = (1, 20, 40, 60, 80)
$lockModes = (0, 1)
$smallDatas = (0) #, 1)
$smallMemories = (0) #, 1)
$syntheticDatas = (0) #, 1)
$k = ""
if ($ThreadCount -ge 0) {
$threadCounts = ($ThreadCount)
}
if ($LockMode -ge 0) {
$lockModes = ($LockMode)
}
if ($UseRecover) {
$k = "-k"
}
# Numa will always be set in the internal loop body to either 0 or 1, so "Numas.Count" is effectively 1
$permutations = $distributions.Count *
$readPercents.Count *
$threadCounts.Count *
$lockModes.Count *
$smallDatas.Count *
$smallMemories.Count *
$syntheticDatas.Count
$permutation = 1
foreach ($d in $distributions) {
foreach ($r in $readPercents) {
foreach ($t in $threadCounts) {
foreach ($z in $lockModes) {
foreach ($sd in $smallDatas) {
foreach ($sm in $smallMemories) {
foreach ($sy in $syntheticDatas) {
Write-Host
Write-Host "Permutation $permutation of $permutations"
# Only certain combinations of Numa/Threads are supported
$n = ($t -lt 48) ? 0 : 1;
for($ii = 0; $ii -lt $exeNames.Count; ++$ii) {
$exeName = $exeNames[$ii]
$resultDir = $resultDirs[$ii]
Write-Host
Write-Host "Permutation $permutation/$permutations generating results $($ii + 1)/$($exeNames.Count) to $resultDir for: -n $n -d $d -r $r -t $t -z $z -i $iterations --runsec $RunSeconds $k"
# RunSec and Recover are for one-off operations and are not recorded in the filenames.
& "$exeName" -b 0 -n $n -d $d -r $r -t $t -z $z -i $iterations --runsec $RunSeconds $k | Tee-Object "$resultDir/results_n-$($n)_d-$($d)_r-$($r)_t-$($t)_z-$($z).txt"
}
++$permutation
}
}
}
}
}
}
}


@ -0,0 +1,18 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.1</TargetFramework>
<Platforms>x64</Platforms>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="xunit.assert" Version="2.4.1" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\..\src\core\FASTER.core.csproj" />
</ItemGroup>
</Project>


@ -0,0 +1,151 @@
using FASTER.core;
using Xunit;
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
namespace AsyncStress
{
public class FasterWrapper<Key, Value>
{
readonly FasterKV<Key, Value> _store;
readonly AsyncPool<ClientSession<Key, Value, Value, Value, Empty, SimpleFunctions<Key, Value, Empty>>> _sessionPool;
// OS Buffering is safe to use in this app because Reads are done after all updates
internal static bool useOsReadBuffering = false;
public FasterWrapper()
{
var logDirectory = "d:/FasterLogs";
var logFileName = Guid.NewGuid().ToString();
var logSettings = new LogSettings
{
LogDevice = new ManagedLocalStorageDevice(Path.Combine(logDirectory, $"{logFileName}.log"), deleteOnClose: true, osReadBuffering: useOsReadBuffering),
ObjectLogDevice = new ManagedLocalStorageDevice(Path.Combine(logDirectory, $"{logFileName}.log"), deleteOnClose: true, osReadBuffering: useOsReadBuffering),
PageSizeBits = 12,
MemorySizeBits = 13
};
Console.WriteLine($" Using {logSettings.LogDevice.GetType()}");
_store = new FasterKV<Key, Value>(1L << 20, logSettings);
_sessionPool = new AsyncPool<ClientSession<Key, Value, Value, Value, Empty, SimpleFunctions<Key, Value, Empty>>>(
logSettings.LogDevice.ThrottleLimit,
() => _store.For(new SimpleFunctions<Key, Value, Empty>()).NewSession<SimpleFunctions<Key, Value, Empty>>());
}
// This can be used to verify that the same amount of data is loaded.
public long TailAddress => _store.Log.TailAddress;
// Indicates how many operations went pending
public int UpsertPendingCount = 0;
public int ReadPendingCount = 0;
public async ValueTask UpsertAsync(Key key, Value value)
{
if (!_sessionPool.TryGet(out var session))
session = await _sessionPool.GetAsync();
var r = await session.UpsertAsync(key, value);
while (r.Status == Status.PENDING)
{
Interlocked.Increment(ref UpsertPendingCount);
r = await r.CompleteAsync();
}
_sessionPool.Return(session);
}
public void Upsert(Key key, Value value)
{
if (!_sessionPool.TryGet(out var session))
session = _sessionPool.GetAsync().GetAwaiter().GetResult();
var status = session.Upsert(key, value);
if (status == Status.PENDING)
{
// This should not happen for sync Upsert().
Interlocked.Increment(ref UpsertPendingCount);
session.CompletePending();
}
_sessionPool.Return(session);
}
public async ValueTask UpsertChunkAsync((Key, Value)[] chunk)
{
if (!_sessionPool.TryGet(out var session))
session = _sessionPool.GetAsync().GetAwaiter().GetResult();
for (var ii = 0; ii < chunk.Length; ++ii)
{
var r = await session.UpsertAsync(chunk[ii].Item1, chunk[ii].Item2);
while (r.Status == Status.PENDING)
{
Interlocked.Increment(ref UpsertPendingCount);
r = await r.CompleteAsync();
}
}
_sessionPool.Return(session);
}
public async ValueTask<(Status, Value)> ReadAsync(Key key)
{
if (!_sessionPool.TryGet(out var session))
session = await _sessionPool.GetAsync();
var result = (await session.ReadAsync(key).ConfigureAwait(false)).Complete();
_sessionPool.Return(session);
return result;
}
public ValueTask<(Status, Value)> Read(Key key)
{
if (!_sessionPool.TryGet(out var session))
session = _sessionPool.GetAsync().GetAwaiter().GetResult();
var result = session.Read(key);
if (result.status == Status.PENDING)
{
Interlocked.Increment(ref ReadPendingCount);
session.CompletePendingWithOutputs(out var completedOutputs, wait: true);
int count = 0;
for (; completedOutputs.Next(); ++count)
{
Assert.Equal(key, completedOutputs.Current.Key);
result = (Status.OK, completedOutputs.Current.Output);
}
completedOutputs.Dispose();
Assert.Equal(1, count);
}
_sessionPool.Return(session);
return new ValueTask<(Status, Value)>(result);
}
public async ValueTask ReadChunkAsync(Key[] chunk, ValueTask<(Status, Value)>[] results, int offset)
{
if (!_sessionPool.TryGet(out var session))
session = _sessionPool.GetAsync().GetAwaiter().GetResult();
// Reads in chunk are performed serially
for (var ii = 0; ii < chunk.Length; ++ii)
results[offset + ii] = new ValueTask<(Status, Value)>((await session.ReadAsync(chunk[ii])).Complete());
_sessionPool.Return(session);
}
public async ValueTask<(Status, Value)[]> ReadChunkAsync(Key[] chunk)
{
if (!_sessionPool.TryGet(out var session))
session = _sessionPool.GetAsync().GetAwaiter().GetResult();
// Reads in chunk are performed serially
(Status, Value)[] result = new (Status, Value)[chunk.Length];
for (var ii = 0; ii < chunk.Length; ++ii)
result[ii] = (await session.ReadAsync(chunk[ii]).ConfigureAwait(false)).Complete();
_sessionPool.Return(session);
return result;
}
public void Dispose()
{
_sessionPool.Dispose();
_store.Dispose();
}
}
}
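A minimal usage sketch of the wrapper above, using only this file's API (values arbitrary):

// Sketch: exercise FasterWrapper end to end.
var store = new FasterWrapper<int, int>();
await store.UpsertAsync(42, 4200);               // loops internally while the op is PENDING
var (status, value) = await store.ReadAsync(42); // expect Status.OK and 4200
store.Dispose();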


@ -0,0 +1,206 @@
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Xunit;
using FASTER.core;
using System.Linq;
namespace AsyncStress
{
public class Program
{
// ThreadingMode descriptions are listed in Usage()
enum ThreadingMode
{
None,
Single,
ParallelAsync,
ParallelSync,
Chunks
}
static ThreadingMode upsertThreadingMode = ThreadingMode.ParallelAsync;
static ThreadingMode readThreadingMode = ThreadingMode.ParallelAsync;
static int numTasks = 4;
static int numOperations = 1_000_000;
static void Usage()
{
Console.WriteLine($"Options:");
Console.WriteLine($" -u <mode>: Upsert threading mode (listed below); default is {ThreadingMode.ParallelAsync}");
Console.WriteLine($" -r <mode>: Read threading mode (listed below); default is {ThreadingMode.ParallelAsync}");
Console.WriteLine($" -t #: Number of tasks for {ThreadingMode.ParallelSync} and {ThreadingMode.Chunks}; default is {numTasks}");
Console.WriteLine($" -n #: Number of operations; default is {numOperations}");
Console.WriteLine($" -b #: Use OS buffering for reads; default is {FasterWrapper<int, int>.useOsReadBuffering}");
Console.WriteLine($" -?, /?, --help: Show this screen");
Console.WriteLine();
Console.WriteLine($"Threading Modes:");
Console.WriteLine($" None: Do not run this operation");
Console.WriteLine($" Single: Run this operation single-threaded");
Console.WriteLine($" ParallelAsync: Run this operation using Parallel.For with an Async lambda");
Console.WriteLine($" ParallelSync: Run this operation using Parallel.For with a Sync lambda and parallelism limited to numTasks");
Console.WriteLine($" Chunks: Run this operation using a set number of async tasks to operate on partitioned chunks");
}
public static async Task Main(string[] args)
{
if (args.Length > 0)
{
for (var ii = 0; ii < args.Length; ++ii)
{
var arg = args[ii];
string nextArg()
{
var next = ii < args.Length - 1 && args[ii + 1][0] != '-' ? args[++ii] : string.Empty;
if (next.Length == 0)
throw new ApplicationException($"Need arg value for {arg}");
return next;
}
if (arg == "-u")
upsertThreadingMode = Enum.Parse<ThreadingMode>(nextArg(), ignoreCase: true);
else if (arg == "-r")
readThreadingMode = Enum.Parse<ThreadingMode>(nextArg(), ignoreCase: true);
else if (arg == "-t")
numTasks = int.Parse(nextArg());
else if (arg == "-n")
numOperations = int.Parse(nextArg());
else if (arg == "-b")
FasterWrapper<int, int>.useOsReadBuffering = true;
else if (arg == "-?" || arg == "/?" || arg == "--help")
{
Usage();
return;
}
else
throw new ApplicationException($"Unknown switch: {arg}");
}
}
await ProfileStore(new FasterWrapper<int, int>());
}
private static async Task ProfileStore(FasterWrapper<int, int> store)
{
static string threadingModeString(ThreadingMode threadingMode)
=> threadingMode switch
{
ThreadingMode.Single => "Single threading",
ThreadingMode.ParallelAsync => "Parallel.For using async lambda",
ThreadingMode.ParallelSync => $"Parallel.For using sync lambda and {numTasks} tasks",
ThreadingMode.Chunks => $"Chunks partitioned across {numTasks} tasks",
_ => throw new ApplicationException("Unknown threading mode")
};
int chunkSize = numOperations / numTasks;
// Insert
if (upsertThreadingMode == ThreadingMode.None)
{
throw new ApplicationException("Cannot Skip Upserts");
}
else
{
Console.WriteLine($" Inserting {numOperations} records with {threadingModeString(upsertThreadingMode)} ...");
var sw = Stopwatch.StartNew();
if (upsertThreadingMode == ThreadingMode.Single)
{
for (int i = 0; i < numOperations; i++)
await store.UpsertAsync(i, i);
}
else if (upsertThreadingMode == ThreadingMode.ParallelAsync)
{
var writeTasks = new ValueTask[numOperations];
Parallel.For(0, numOperations, key => writeTasks[key] = store.UpsertAsync(key, key));
foreach (var task in writeTasks)
await task;
}
else if (upsertThreadingMode == ThreadingMode.ParallelSync)
{
// Without throttling parallelism, this ends up very slow with many threads waiting on FlushTask.
var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = numTasks };
Parallel.For(0, numOperations, parallelOptions, key => store.Upsert(key, key));
}
else
{
Debug.Assert(upsertThreadingMode == ThreadingMode.Chunks);
var chunkTasks = new ValueTask[numTasks];
for (int ii = 0; ii < numTasks; ii++)
{
var chunk = new (int, int)[chunkSize];
for (int i = 0; i < chunkSize; i++) chunk[i] = (ii * chunkSize + i, ii * chunkSize + i);
chunkTasks[ii] = store.UpsertChunkAsync(chunk);
}
foreach (var chunkTask in chunkTasks)
await chunkTask;
}
sw.Stop();
Console.WriteLine($" Insertion complete in {sw.ElapsedMilliseconds} ms; TailAddress = {store.TailAddress}, Pending = {store.UpsertPendingCount}");
}
// Read
Console.WriteLine();
if (readThreadingMode == ThreadingMode.None)
{
Console.WriteLine(" Skipping Reads");
}
else
{
Console.WriteLine($" Reading {numOperations} records with {threadingModeString(readThreadingMode)} (OS buffering: {FasterWrapper<int, int>.useOsReadBuffering}) ...");
var readTasks = new ValueTask<(Status, int)>[numOperations];
var readPendingString = string.Empty;
var sw = Stopwatch.StartNew();
if (readThreadingMode == ThreadingMode.Single)
{
for (int ii = 0; ii < numOperations; ii++)
{
readTasks[ii] = store.ReadAsync(ii);
await readTasks[ii];
}
}
else if (readThreadingMode == ThreadingMode.ParallelAsync)
{
Parallel.For(0, numOperations, key => readTasks[key] = store.ReadAsync(key));
foreach (var task in readTasks)
await task;
}
else if (readThreadingMode == ThreadingMode.ParallelSync)
{
// Without throttling parallelism, this ends up very slow with many threads waiting on completion.
var parallelOptions = new ParallelOptions { MaxDegreeOfParallelism = numTasks };
Parallel.For(0, numOperations, parallelOptions, key => readTasks[key] = store.Read(key));
foreach (var task in readTasks)
await task;
readPendingString = $"; Pending = {store.ReadPendingCount}";
}
else
{
var chunkTasks = Enumerable.Range(0, numTasks).Select(ii =>
{
var chunk = new int[chunkSize];
for (int i = 0; i < chunkSize; i++) chunk[i] = ii * chunkSize + i;
return store.ReadChunkAsync(chunk, readTasks, ii * chunkSize);
}).ToArray();
foreach (var chunkTask in chunkTasks)
await chunkTask;
}
sw.Stop();
Console.WriteLine($" Reads complete in {sw.ElapsedMilliseconds} ms{readPendingString}");
// Verify
Console.WriteLine(" Verifying read results ...");
Parallel.For(0, numOperations, key =>
{
(Status status, int? result) = readTasks[key].Result;
Assert.Equal(Status.OK, status);
Assert.Equal(key, result);
});
Console.WriteLine(" Results verified");
}
store.Dispose();
}
}
}


@ -0,0 +1,14 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.1</TargetFramework>
<Platforms>x64</Platforms>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
</PropertyGroup>
<ItemGroup>
<ProjectReference Include="..\..\src\core\FASTER.core.csproj" />
</ItemGroup>
</Project>


@ -0,0 +1,151 @@
using System;
using System.Diagnostics;
using System.IO;
using System.Threading;
using FASTER.core;
namespace FasterLogStress
{
public class Program
{
private static FasterLog log;
private static IDevice device;
static readonly byte[] entry = new byte[100];
private static string commitPath;
public static void Main()
{
commitPath = "FasterLogStress/";
// Clean up log files from previous test runs in case they weren't cleaned up
// We loop to ensure clean-up as deleteOnClose does not always work for MLSD
while (Directory.Exists(commitPath))
Directory.Delete(commitPath, true);
// Create device / log for test
device = new ManagedLocalStorageDevice(commitPath + "ManagedLocalStore.log", deleteOnClose: true);
log = new FasterLog(new FasterLogSettings { LogDevice = device, PageSizeBits = 12, MemorySizeBits = 14 });
ManagedLocalStoreBasicTest();
log.Dispose();
device.Dispose();
// Clean up log files
if (Directory.Exists(commitPath))
Directory.Delete(commitPath, true);
}
public static void ManagedLocalStoreBasicTest()
{
int entryLength = 20;
int numEntries = 500_000;
int numEnqueueThreads = 1;
int numIterThreads = 1;
bool commitThread = false;
// Set Default entry data
for (int i = 0; i < entryLength; i++)
{
entry[i] = (byte)i;
}
bool disposeCommitThread = false;
var commit =
new Thread(() =>
{
while (!disposeCommitThread)
{
Thread.Sleep(10);
log.Commit(true);
}
});
if (commitThread)
commit.Start();
Thread[] th = new Thread[numEnqueueThreads];
for (int t = 0; t < numEnqueueThreads; t++)
{
th[t] =
new Thread(() =>
{
// Enqueue, setting each entry so that entries can be differentiated
for (int i = 0; i < numEntries; i++)
{
// Flag one part of entry data that corresponds to index
entry[0] = (byte)i;
// Default is add bytes so no need to do anything with it
log.Enqueue(entry);
}
});
}
Console.WriteLine("Populating log...");
var sw = Stopwatch.StartNew();
for (int t = 0; t < numEnqueueThreads; t++)
th[t].Start();
for (int t = 0; t < numEnqueueThreads; t++)
th[t].Join();
sw.Stop();
Console.WriteLine($"{numEntries} items enqueued to the log by {numEnqueueThreads} threads in {sw.ElapsedMilliseconds} ms");
if (commitThread)
{
disposeCommitThread = true;
commit.Join();
}
// Final commit to the log
log.Commit(true);
// flag to make sure data has been checked
bool datacheckrun = false;
Thread[] th2 = new Thread[numIterThreads];
for (int t = 0; t < numIterThreads; t++)
{
th2[t] =
new Thread(() =>
{
// Read the log - look for the flag so we know each entry is unique
int currentEntry = 0;
using (var iter = log.Scan(0, long.MaxValue))
{
while (iter.GetNext(out byte[] result, out _, out _))
{
// set check flag to show got in here
datacheckrun = true;
if (numEnqueueThreads == 1)
if (result[0] != (byte)currentEntry)
throw new Exception("Fail - Result[" + currentEntry.ToString() + "]:" + result[0].ToString());
currentEntry++;
}
}
if (currentEntry != numEntries * numEnqueueThreads)
throw new Exception("Error");
});
}
sw.Restart();
for (int t = 0; t < numIterThreads; t++)
th2[t].Start();
for (int t = 0; t < numIterThreads; t++)
th2[t].Join();
sw.Stop();
Console.WriteLine($"{numEntries} items iterated in the log by {numIterThreads} threads in {sw.ElapsedMilliseconds} ms");
// if data verification was skipped, then pop a fail
if (datacheckrun == false)
throw new Exception("Failure -- data loop after log.Scan never entered so wasn't verified. ");
}
}
}


@ -3,6 +3,7 @@
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.1</TargetFramework>
<Platforms>x64</Platforms>
</PropertyGroup>
<ItemGroup>


@ -159,8 +159,8 @@ namespace ReadAddress
var status = session.Read(ref key, ref input, ref output, ref recordInfo, userContext: context, serialNo: maxLap + 1);
if (status == Status.PENDING)
{
// This will spin CPU for each retrieved record; not recommended for performance-critical code or when retrieving chains for multiple records.
session.CompletePending(spinWait: true);
// This will wait for each retrieved record; not recommended for performance-critical code or when retrieving chains for multiple records.
session.CompletePending(wait: true);
recordInfo = context.recordInfo;
status = context.status;
}


@ -31,8 +31,7 @@ namespace StoreAsyncApi
var checkpointSettings = new CheckpointSettings { CheckpointDir = path, CheckPointType = CheckpointType.FoldOver };
var serializerSettings = new SerializerSettings<CacheKey, CacheValue> { keySerializer = () => new CacheKeySerializer(), valueSerializer = () => new CacheValueSerializer() };
faster = new FasterKV<CacheKey, CacheValue>
(1L << 20, logSettings, checkpointSettings, serializerSettings);
faster = new FasterKV<CacheKey, CacheValue>(1L << 20, logSettings, checkpointSettings, serializerSettings);
const int NumParallelTasks = 1;
ThreadPool.SetMinThreads(2 * Environment.ProcessorCount, 2 * Environment.ProcessorCount);
@ -64,27 +63,40 @@ namespace StoreAsyncApi
using var session = faster.For(new CacheFunctions()).NewSession<CacheFunctions>(id.ToString());
Random rand = new Random(id);
bool batched = true;
bool batched = true; // whether we batch upserts on session
bool asyncUpsert = false; // whether we use sync or async upsert calls
bool waitForCommit = false; // whether we wait for commit after each operation (or batch) on this session
int batchSize = 100; // batch size
await Task.Yield();
var context = new CacheContext();
var taskBatch = new ValueTask<FasterKV<CacheKey, CacheValue>.UpsertAsyncResult<CacheInput, CacheOutput, CacheContext>>[batchSize];
long seqNo = 0;
if (!batched)
{
// Single commit version - upsert each item and wait for commit
// Single upsert at a time, optionally waiting for commit
// Needs high parallelism (NumParallelTasks) for perf
// Needs separate commit thread to perform regular checkpoints
// Separate commit thread performs regular checkpoints
while (true)
{
try
{
var key = new CacheKey(rand.Next());
var value = new CacheValue(rand.Next());
session.Upsert(ref key, ref value, context);
await session.WaitForCommitAsync();
if (asyncUpsert)
{
var r = await session.UpsertAsync(ref key, ref value, context, seqNo++);
while (r.Status == Status.PENDING)
r = await r.CompleteAsync();
}
else
{
session.Upsert(ref key, ref value, context, seqNo++);
}
if (waitForCommit)
await session.WaitForCommitAsync();
Interlocked.Increment(ref numOps);
}
catch (Exception ex)
@ -104,12 +116,28 @@ namespace StoreAsyncApi
var key = new CacheKey(rand.Next());
var value = new CacheValue(rand.Next());
session.Upsert(ref key, ref value, context);
if (count++ % 100 == 0)
if (asyncUpsert)
{
await session.WaitForCommitAsync();
Interlocked.Add(ref numOps, 100);
taskBatch[count % batchSize] = session.UpsertAsync(ref key, ref value, context, seqNo++);
}
else
{
session.Upsert(ref key, ref value, context, seqNo++);
}
if (count++ % batchSize == 0)
{
if (asyncUpsert)
{
for (int i = 0; i < batchSize; i++)
{
var r = await taskBatch[i];
while (r.Status == Status.PENDING)
r = await r.CompleteAsync();
}
}
if (waitForCommit)
await session.WaitForCommitAsync();
Interlocked.Add(ref numOps, batchSize);
}
}
}
@ -125,13 +153,12 @@ namespace StoreAsyncApi
while (true)
{
Thread.Sleep(5000);
Thread.Sleep(1000);
var nowTime = sw.ElapsedMilliseconds;
var nowValue = numOps;
Console.WriteLine("Operation Throughput: {0} ops/sec, Tail: {1}",
(nowValue - lastValue) / (1000 * (nowTime - lastTime)), faster.Log.TailAddress);
1000.0*(nowValue - lastValue) / (nowTime - lastTime), faster.Log.TailAddress);
lastValue = nowValue;
lastTime = nowTime;
}
@ -141,8 +168,8 @@ namespace StoreAsyncApi
{
while (true)
{
Thread.Sleep(5000);
faster.TakeFullCheckpointAsync(CheckpointType.FoldOver).GetAwaiter().GetResult();
Thread.Sleep(100);
faster.TakeHybridLogCheckpointAsync(CheckpointType.FoldOver).GetAwaiter().GetResult();
}
}
}


@ -94,7 +94,7 @@ namespace StoreAsyncApi
public override void CheckpointCompletionCallback(string sessionId, CommitPoint commitPoint)
{
Console.WriteLine("Session {0} reports persistence until {1}", sessionId, commitPoint.UntilSerialNo);
// Console.WriteLine("Session {0} reports persistence until {1}", sessionId, commitPoint.UntilSerialNo);
}
public override void ReadCompletionCallback(ref CacheKey key, ref CacheInput input, ref CacheOutput output, CacheContext ctx, Status status)


@ -6,6 +6,7 @@ using System.Diagnostics;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
using System.Threading;
using System.Threading.Tasks;
namespace FASTER.core
{
@ -38,7 +39,7 @@ namespace FASTER.core
/// </summary>
/// <typeparam name="Key"></typeparam>
/// <typeparam name="Value"></typeparam>
public unsafe abstract partial class AllocatorBase<Key, Value> : IDisposable
public abstract partial class AllocatorBase<Key, Value> : IDisposable
{
/// <summary>
/// Epoch information
@ -233,6 +234,16 @@ namespace FASTER.core
/// </summary>
internal IObserver<IFasterScanIterator<Key, Value>> OnEvictionObserver;
/// <summary>
/// The TaskCompletionSource for flush completion
/// </summary>
private TaskCompletionSource<long> flushTcs = new TaskCompletionSource<long>(TaskCreationOptions.RunContinuationsAsynchronously);
/// <summary>
/// The task to be waited on for flush completion by the initiator of an operation
/// </summary>
internal Task<long> FlushTask => flushTcs.Task;
#region Abstract methods
/// <summary>
/// Initialize
@ -268,7 +279,7 @@ namespace FASTER.core
/// </summary>
/// <param name="ptr"></param>
/// <returns></returns>
public abstract ref RecordInfo GetInfoFromBytePointer(byte* ptr);
public unsafe abstract ref RecordInfo GetInfoFromBytePointer(byte* ptr);
/// <summary>
/// Get key
@ -295,13 +306,13 @@ namespace FASTER.core
/// </summary>
/// <param name="physicalAddress"></param>
/// <returns></returns>
public abstract AddressInfo* GetKeyAddressInfo(long physicalAddress);
public abstract unsafe AddressInfo* GetKeyAddressInfo(long physicalAddress);
/// <summary>
/// Get address info for value
/// </summary>
/// <param name="physicalAddress"></param>
/// <returns></returns>
public abstract AddressInfo* GetValueAddressInfo(long physicalAddress);
public abstract unsafe AddressInfo* GetValueAddressInfo(long physicalAddress);
/// <summary>
/// Get record size
@ -375,7 +386,7 @@ namespace FASTER.core
/// <param name="src"></param>
/// <param name="required_bytes"></param>
/// <param name="destinationPage"></param>
internal abstract void PopulatePage(byte* src, int required_bytes, long destinationPage);
internal abstract unsafe void PopulatePage(byte* src, int required_bytes, long destinationPage);
/// <summary>
/// Write async to device
/// </summary>
@ -398,7 +409,7 @@ namespace FASTER.core
/// <param name="callback"></param>
/// <param name="context"></param>
/// <param name="result"></param>
protected abstract void AsyncReadRecordObjectsToMemory(long fromLogical, int numBytes, DeviceIOCompletionCallback callback, AsyncIOContext<Key, Value> context, SectorAlignedMemory result = default);
protected abstract unsafe void AsyncReadRecordObjectsToMemory(long fromLogical, int numBytes, DeviceIOCompletionCallback callback, AsyncIOContext<Key, Value> context, SectorAlignedMemory result = default);
/// <summary>
/// Read page (async)
/// </summary>
@ -431,7 +442,7 @@ namespace FASTER.core
/// <param name="record"></param>
/// <param name="ctx"></param>
/// <returns></returns>
protected abstract bool RetrievedFullRecord(byte* record, ref AsyncIOContext<Key, Value> ctx);
protected abstract unsafe bool RetrievedFullRecord(byte* record, ref AsyncIOContext<Key, Value> ctx);
/// <summary>
/// Retrieve value from context
@ -749,10 +760,9 @@ namespace FASTER.core
/// <summary>
/// Try allocate, no thread spinning allowed
/// May return 0 in case of inability to allocate
/// </summary>
/// <param name="numSlots"></param>
/// <returns></returns>
/// <param name="numSlots">Number of slots to allocate</param>
/// <returns>The allocated logical address; 0 if the caller must wait for a flush (RETRY_LATER), or -1 if the caller should drain the epoch and retry (RETRY_NOW)</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public long TryAllocate(int numSlots = 1)
{
@ -760,11 +770,16 @@ namespace FASTER.core
throw new FasterException("Entry does not fit on page");
PageOffset localTailPageOffset = default;
localTailPageOffset.PageAndOffset = TailPageOffset.PageAndOffset;
// Necessary to check because threads keep retrying and we do not
// want to overflow offset more than once per thread
if (TailPageOffset.Offset > PageSize)
return 0;
if (localTailPageOffset.Offset > PageSize)
{
if (NeedToWait(localTailPageOffset.Page + 1))
return 0; // RETRY_LATER
return -1; // RETRY_NOW
}
// Determine insertion index.
localTailPageOffset.PageAndOffset = Interlocked.Add(ref TailPageOffset.PageAndOffset, numSlots);
@ -775,26 +790,33 @@ namespace FASTER.core
#region HANDLE PAGE OVERFLOW
if (localTailPageOffset.Offset > PageSize)
{
if (offset > PageSize)
{
return 0;
}
// The thread that "makes" the offset incorrect
// is the one that is elected to fix it and
// shift read-only/head.
// All overflow threads try to shift addresses
long shiftAddress = ((long)(localTailPageOffset.Page + 1)) << LogPageSizeBits;
PageAlignedShiftReadOnlyAddress(shiftAddress);
PageAlignedShiftHeadAddress(shiftAddress);
if (CannotAllocate(localTailPageOffset.Page + 1))
if (offset > PageSize)
{
// We should not allocate the next page; reset to end of page
// so that next attempt can retry
if (NeedToWait(localTailPageOffset.Page + 1))
return 0; // RETRY_LATER
return -1; // RETRY_NOW
}
if (NeedToWait(localTailPageOffset.Page + 1))
{
// Reset to end of page so that next attempt can retry
localTailPageOffset.Offset = PageSize;
Interlocked.Exchange(ref TailPageOffset.PageAndOffset, localTailPageOffset.PageAndOffset);
return 0;
return 0; // RETRY_LATER
}
// The thread that "makes" the offset incorrect should allocate next page and set new tail
if (CannotAllocate(localTailPageOffset.Page + 1))
{
// Reset to end of page so that next attempt can retry
localTailPageOffset.Offset = PageSize;
Interlocked.Exchange(ref TailPageOffset.PageAndOffset, localTailPageOffset.PageAndOffset);
return -1; // RETRY_NOW
}
// Allocate next page in advance, if needed
@ -805,22 +827,74 @@ namespace FASTER.core
}
localTailPageOffset.Page++;
localTailPageOffset.Offset = 0;
localTailPageOffset.Offset = numSlots;
TailPageOffset = localTailPageOffset;
return 0;
page++;
offset = 0;
}
#endregion
return (((long)page) << LogPageSizeBits) | ((long)offset);
}
private bool CannotAllocate(int page)
/// <summary>
/// Async wrapper for TryAllocate
/// </summary>
/// <param name="numSlots">Number of slots to allocate</param>
/// <param name="token">Cancellation token</param>
/// <returns>The allocated logical address</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public async ValueTask<long> AllocateAsync(int numSlots = 1, CancellationToken token = default)
{
return
(page >= BufferSize + (ClosedUntilAddress >> LogPageSizeBits));
var spins = 0;
while (true)
{
var flushTask = this.FlushTask;
var logicalAddress = this.TryAllocate(numSlots);
if (logicalAddress > 0)
return logicalAddress;
if (logicalAddress == 0)
{
if (spins++ < Constants.kFlushSpinCount)
{
Thread.Yield();
continue;
}
try
{
epoch.Suspend();
await flushTask.WithCancellationAsync(token);
}
finally
{
epoch.Resume();
}
}
this.TryComplete();
epoch.ProtectAndDrain();
Thread.Yield();
}
}
/// <summary>
/// Try allocate, spin for RETRY_NOW case
/// </summary>
/// <param name="numSlots">Number of slots to allocate</param>
/// <returns>The allocated logical address, or 0 in case of inability to allocate</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public long TryAllocateRetryNow(int numSlots = 1)
{
long logicalAddress;
while ((logicalAddress = TryAllocate(numSlots)) < 0)
epoch.ProtectAndDrain();
return logicalAddress;
}
private bool CannotAllocate(int page) => page >= BufferSize + (ClosedUntilAddress >> LogPageSizeBits);
private bool NeedToWait(int page) => page >= BufferSize + (FlushedUntilAddress >> LogPageSizeBits);
/// <summary>
/// Used by applications to make the current state of the database immutable quickly
/// </summary>
@ -870,6 +944,7 @@ namespace FASTER.core
var b = oldBeginAddress >> LogSegmentSizeBits != newBeginAddress >> LogSegmentSizeBits;
// Shift read-only address
var flushTask = FlushTask;
try
{
epoch.Resume();
@ -881,7 +956,19 @@ namespace FASTER.core
}
// Wait for flush to complete
while (FlushedUntilAddress < newBeginAddress) Thread.Yield();
var spins = 0;
while (true)
{
if (FlushedUntilAddress >= newBeginAddress)
break;
if (++spins < Constants.kFlushSpinCount)
{
Thread.Yield();
continue;
}
flushTask.Wait();
flushTask = FlushTask;
}
// Then shift head address
var h = Utility.MonotonicUpdate(ref HeadAddress, newBeginAddress, out long old);
@ -1108,12 +1195,22 @@ namespace FASTER.core
FlushCallback?.Invoke(
new CommitInfo
{
BeginAddress = BeginAddress,
FromAddress = oldFlushedUntilAddress,
UntilAddress = currentFlushedUntilAddress,
ErrorCode = errorCode
});
var newFlushTcs = new TaskCompletionSource<long>(TaskCreationOptions.RunContinuationsAsynchronously);
while (true)
{
var _flushTcs = flushTcs;
if (Interlocked.CompareExchange(ref flushTcs, newFlushTcs, _flushTcs) == _flushTcs)
{
_flushTcs.TrySetResult(errorCode);
break;
}
}
if (errorList.Count > 0)
{
errorList.RemoveUntil(currentFlushedUntilAddress);
@ -1218,7 +1315,7 @@ namespace FASTER.core
/// <param name="callback"></param>
/// <param name="context"></param>
///
internal void AsyncReadRecordToMemory(long fromLogical, int numBytes, DeviceIOCompletionCallback callback, AsyncIOContext<Key, Value> context)
internal unsafe void AsyncReadRecordToMemory(long fromLogical, int numBytes, DeviceIOCompletionCallback callback, AsyncIOContext<Key, Value> context)
{
ulong fileOffset = (ulong)(AlignedPageSizeBytes * (fromLogical >> LogPageSizeBits) + (fromLogical & PageSizeMask));
ulong alignedFileOffset = (ulong)(((long)fileOffset / sectorSize) * sectorSize);
@ -1248,7 +1345,7 @@ namespace FASTER.core
/// <param name="numBytes"></param>
/// <param name="callback"></param>
/// <param name="context"></param>
internal void AsyncReadRecordToMemory(long fromLogical, int numBytes, DeviceIOCompletionCallback callback, ref SimpleReadContext context)
internal unsafe void AsyncReadRecordToMemory(long fromLogical, int numBytes, DeviceIOCompletionCallback callback, ref SimpleReadContext context)
{
ulong fileOffset = (ulong)(AlignedPageSizeBytes * (fromLogical >> LogPageSizeBits) + (fromLogical & PageSizeMask));
ulong alignedFileOffset = (ulong)(((long)fileOffset / sectorSize) * sectorSize);
@ -1549,7 +1646,7 @@ namespace FASTER.core
AsyncReadRecordObjectsToMemory(fromLogical, numBytes, AsyncGetFromDiskCallback, context, result);
}
private void AsyncGetFromDiskCallback(uint errorCode, uint numBytes, object context)
private unsafe void AsyncGetFromDiskCallback(uint errorCode, uint numBytes, object context)
{
if (errorCode != 0)
{

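The heart of this PR is the new allocation contract above: TryAllocate returns 0 for RETRY_LATER (the target page is not yet flushed, so wait on FlushTask) and -1 for RETRY_NOW (drain pending epoch actions, which include the flush trigger, then retry). A synchronous caller sketch under that contract, mirroring AllocateAsync, with the hlog and epoch wiring assumed:

// Sketch only: synchronous analogue of AllocateAsync under the 0 / -1 contract.
long BlockAllocateSketch(int numSlots)
{
    while (true)
    {
        long logicalAddress = hlog.TryAllocate(numSlots);
        if (logicalAddress > 0)
            return logicalAddress;          // success
        if (logicalAddress == 0)
        {
            var flushTask = hlog.FlushTask; // capture before leaving the epoch
            epoch.Suspend();                // leave the epoch so the flush can complete
            flushTask.Wait();
            epoch.Resume();
        }
        else
        {
            epoch.ProtectAndDrain();        // RETRY_NOW: run queued actions, then retry
        }
    }
}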

@ -0,0 +1,77 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT license.
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
namespace FASTER.core
{
/// <summary>
/// Shared work queue that ensures at most one worker at any given time. Uses LIFO ordering of work.
/// </summary>
/// <typeparam name="T"></typeparam>
class WorkQueueLIFO<T>
{
const int kMaxQueueSize = 1 << 30;
readonly ConcurrentStack<T> _queue;
readonly Action<T> _work;
int _count;
public WorkQueueLIFO(Action<T> work)
{
_queue = new ConcurrentStack<T>();
_work = work;
_count = 0;
}
/// <summary>
/// Enqueue a work item and, if needed, take ownership of draining the work queue
/// </summary>
/// <param name="work">Work to enqueue</param>
/// <param name="asTask">Process work as separate task</param>
public void EnqueueAndTryWork(T work, bool asTask)
{
Interlocked.Increment(ref _count);
_queue.Push(work);
// Try to take over work queue processing if needed
while (true)
{
int count = _count;
if (count >= kMaxQueueSize) return;
if (Interlocked.CompareExchange(ref _count, count + kMaxQueueSize, count) == count)
break;
}
if (asTask)
_ = Task.Run(() => ProcessQueue());
else
ProcessQueue();
}
private void ProcessQueue()
{
// Process items in the work queue
while (true)
{
while (_queue.TryPop(out var workItem))
{
Interlocked.Decrement(ref _count);
try
{
_work(workItem);
}
catch { } // swallow exceptions: one failing work item must not stall queue processing
}
int count = _count;
if (count != kMaxQueueSize) continue;
if (Interlocked.CompareExchange(ref _count, 0, count) == count)
break;
}
}
}
}
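A usage sketch for the queue above, in the style of FasterLog's commit path (DoCommit is a hypothetical worker): any thread may enqueue, exactly one caller at a time wins ownership of ProcessQueue(), and LIFO order serves the newest request first, which suits commits because committing a later address subsumes earlier ones.

    var commitQueue = new WorkQueueLIFO<long>(until => DoCommit(until));

    // Any producer thread; asTask: true drains the queue on a ThreadPool task
    // instead of the caller's thread.
    commitQueue.EnqueueAndTryWork(tailAddress, asTask: true);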

View file

@ -35,6 +35,8 @@ namespace FASTER.core
internal readonly IVariableLengthStruct<Value, Input> variableLengthStruct;
internal readonly IVariableLengthStruct<Input> inputVariableLengthStruct;
internal CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs;
internal readonly InternalFasterSession FasterSession;
internal const string NotAsyncSessionErr = ClientSession<int, int, int, int, Empty, SimpleFunctions<int, int>>.NotAsyncSessionErr;
@ -143,6 +145,7 @@ namespace FASTER.core
/// </summary>
public void Dispose()
{
this.completedOutputs?.Dispose();
CompletePending(true);
fht.DisposeClientSession(ID);
@ -229,7 +232,7 @@ namespace FASTER.core
/// <param name="serialNo"></param>
/// <returns></returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public (Status, Output) Read(Key key, Context userContext = default, long serialNo = 0)
public (Status status, Output output) Read(Key key, Context userContext = default, long serialNo = 0)
{
Input input = default;
Output output = default;
@ -444,6 +447,37 @@ namespace FASTER.core
public Status Upsert(Key key, Value desiredValue, Context userContext = default, long serialNo = 0)
=> Upsert(ref key, ref desiredValue, userContext, serialNo);
/// <summary>
/// Async Upsert operation
/// Await operation in session before issuing next one
/// </summary>
/// <param name="key"></param>
/// <param name="desiredValue"></param>
/// <param name="userContext"></param>
/// <param name="serialNo"></param>
/// <param name="token"></param>
/// <returns>ValueTask for Upsert result, user needs to await and then call Complete() on the result</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ValueTask<FasterKV<Key, Value>.UpsertAsyncResult<Input, Output, Context>> UpsertAsync(ref Key key, ref Value desiredValue, Context userContext = default, long serialNo = 0, CancellationToken token = default)
{
Debug.Assert(SupportAsync, NotAsyncSessionErr);
return fht.UpsertAsync(this.FasterSession, this.ctx, ref key, ref desiredValue, userContext, serialNo, token);
}
/// <summary>
/// Async Upsert operation
/// Await operation in session before issuing next one
/// </summary>
/// <param name="key"></param>
/// <param name="desiredValue"></param>
/// <param name="userContext"></param>
/// <param name="serialNo"></param>
/// <param name="token"></param>
/// <returns>ValueTask for Upsert result, user needs to await and then call Complete() on the result</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ValueTask<FasterKV<Key, Value>.UpsertAsyncResult<Input, Output, Context>> UpsertAsync(Key key, Value desiredValue, Context userContext = default, long serialNo = 0, CancellationToken token = default)
=> UpsertAsync(ref key, ref desiredValue, userContext, serialNo, token);
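A usage sketch for the new API: the ValueTask completes synchronously unless the tail insertion triggers a flush, and the returned result must be completed, re-awaiting while it reports PENDING.

    var r = await session.UpsertAsync(ref key, ref value);
    while (r.Status == Status.PENDING)
        r = await r.CompleteAsync();   // resumes once the flush completes

    // DeleteAsync (below) follows the same pattern:
    // var d = await session.DeleteAsync(ref key);
    // while (d.Status == Status.PENDING) d = await d.CompleteAsync();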
/// <summary>
/// RMW operation
/// </summary>
@ -541,6 +575,35 @@ namespace FASTER.core
public Status Delete(Key key, Context userContext = default, long serialNo = 0)
=> Delete(ref key, userContext, serialNo);
/// <summary>
/// Async Delete operation
/// Await operation in session before issuing next one
/// </summary>
/// <param name="key"></param>
/// <param name="userContext"></param>
/// <param name="serialNo"></param>
/// <param name="token"></param>
/// <returns>ValueTask for Delete result, user needs to await and then call Complete() on the result</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ValueTask<FasterKV<Key, Value>.DeleteAsyncResult<Input, Output, Context>> DeleteAsync(ref Key key, Context userContext = default, long serialNo = 0, CancellationToken token = default)
{
Debug.Assert(SupportAsync, NotAsyncSessionErr);
return fht.DeleteAsync(this.FasterSession, this.ctx, ref key, userContext, serialNo, token);
}
/// <summary>
/// Async Delete operation
/// Await operation in session before issuing next one
/// </summary>
/// <param name="key"></param>
/// <param name="userContext"></param>
/// <param name="serialNo"></param>
/// <param name="token"></param>
/// <returns>ValueTask for Delete result, user needs to await and then call Complete() on the result</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ValueTask<FasterKV<Key, Value>.DeleteAsyncResult<Input, Output, Context>> DeleteAsync(Key key, Context userContext = default, long serialNo = 0, CancellationToken token = default)
=> DeleteAsync(ref key, userContext, serialNo, token);
/// <summary>
/// Experimental feature
/// Checks whether specified record is present in memory
@ -595,33 +658,61 @@ namespace FASTER.core
}
/// <summary>
/// Sync complete all outstanding pending operations
/// Async operations (ReadAsync) must be completed individually
/// Synchronously complete outstanding pending synchronous operations.
/// Async operations must be completed individually.
/// </summary>
/// <param name="spinWait">Spin-wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Extend spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns></returns>
public bool CompletePending(bool spinWait = false, bool spinWaitForCommit = false)
/// <param name="wait">Wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns>True if all pending operations have completed, false otherwise</returns>
public bool CompletePending(bool wait = false, bool spinWaitForCommit = false)
=> CompletePending(false, wait, spinWaitForCommit);
/// <summary>
/// Synchronously complete outstanding pending synchronous operations, returning outputs for the completed operations.
/// Async operations must be completed individually.
/// </summary>
/// <param name="completedOutputs">Outputs completed by this operation</param>
/// <param name="wait">Wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns>True if all pending operations have completed, false otherwise</returns>
public bool CompletePendingWithOutputs(out CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs, bool wait = false, bool spinWaitForCommit = false)
{
InitializeCompletedOutputs();
var result = CompletePending(true, wait, spinWaitForCommit);
completedOutputs = this.completedOutputs;
return result;
}
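A sketch of consuming the outputs, assuming the CompletedOutputIterator added in this PR exposes a Next()/Current iteration shape; the async variant further below returns the same iterator. Process is a hypothetical consumer:

    if (session.CompletePendingWithOutputs(out var outputs, wait: true))
    {
        while (outputs.Next())
        {
            ref var rec = ref outputs.Current;
            Process(rec.Key, rec.Output);
        }
    }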
void InitializeCompletedOutputs()
{
if (this.completedOutputs is null)
this.completedOutputs = new CompletedOutputIterator<Key, Value, Input, Output, Context>();
else
this.completedOutputs.Dispose();
}
private bool CompletePending(bool getOutputs, bool wait, bool spinWaitForCommit)
{
if (SupportAsync) UnsafeResumeThread();
try
{
var result = fht.InternalCompletePending(ctx, FasterSession, spinWait);
var requestedOutputs = getOutputs ? this.completedOutputs : default;
var result = fht.InternalCompletePending(ctx, FasterSession, wait, requestedOutputs);
if (spinWaitForCommit)
{
if (spinWait != true)
if (wait != true)
{
throw new FasterException("Can spin-wait for checkpoint completion only if spinWait is true");
throw new FasterException("Can spin-wait for commit (checkpoint completion) only if wait is true");
}
do
{
fht.InternalCompletePending(ctx, FasterSession, spinWait);
fht.InternalCompletePending(ctx, FasterSession, wait, requestedOutputs);
if (fht.InRestPhase())
{
fht.InternalCompletePending(ctx, FasterSession, spinWait);
fht.InternalCompletePending(ctx, FasterSession, wait, requestedOutputs);
return true;
}
} while (spinWait);
} while (wait);
}
return result;
}
@ -632,11 +723,26 @@ namespace FASTER.core
}
/// <summary>
/// Complete all outstanding pending operations asynchronously
/// Async operations (ReadAsync) must be completed individually
/// Complete all pending synchronous FASTER operations.
/// Async operations must be completed individually.
/// </summary>
/// <returns></returns>
public async ValueTask CompletePendingAsync(bool waitForCommit = false, CancellationToken token = default)
public ValueTask CompletePendingAsync(bool waitForCommit = false, CancellationToken token = default)
=> CompletePendingAsync(false, waitForCommit, token);
/// <summary>
/// Complete all pending synchronous FASTER operations, returning outputs for the completed operations.
/// Async operations must be completed individually.
/// </summary>
/// <returns>Outputs completed by this operation</returns>
public async ValueTask<CompletedOutputIterator<Key, Value, Input, Output, Context>> CompletePendingWithOutputsAsync(bool waitForCommit = false, CancellationToken token = default)
{
InitializeCompletedOutputs();
await CompletePendingAsync(true, waitForCommit, token);
return this.completedOutputs;
}
private async ValueTask CompletePendingAsync(bool getOutputs, bool waitForCommit = false, CancellationToken token = default)
{
token.ThrowIfCancellationRequested();
@ -644,7 +750,7 @@ namespace FASTER.core
throw new NotSupportedException("Async operations not supported over protected epoch");
// Complete all pending operations on session
await fht.CompletePendingAsync(this.FasterSession, this.ctx, token);
await fht.CompletePendingAsync(this.FasterSession, this.ctx, token, getOutputs ? this.completedOutputs : null);
// Wait for commit if necessary
if (waitForCommit)
@ -652,8 +758,8 @@ namespace FASTER.core
}
/// <summary>
/// Check if at least one request is ready for CompletePending to be called on
/// Returns completed immediately if there are no outstanding requests
/// Check if at least one synchronous request is ready for CompletePending to be called on
/// Returns completed immediately if there are no outstanding synchronous requests
/// </summary>
/// <param name="token"></param>
/// <returns></returns>
@ -676,7 +782,10 @@ namespace FASTER.core
{
token.ThrowIfCancellationRequested();
// Complete all pending operations on session
if (!ctx.prevCtx.pendingReads.IsEmpty || !ctx.pendingReads.IsEmpty)
throw new FasterException("Make sure all async operations issued on this session are awaited and completed first");
// Complete all pending sync operations on session
await CompletePendingAsync(token: token);
var task = fht.CheckpointTask;

View file

@ -35,6 +35,8 @@ namespace FASTER.core
internal readonly IVariableLengthStruct<Value, Input> variableLengthStruct;
internal readonly IVariableLengthStruct<Input> inputVariableLengthStruct;
internal CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs;
internal readonly InternalFasterSession FasterSession;
internal const string NotAsyncSessionErr = "Session does not support async operations";
@ -143,6 +145,7 @@ namespace FASTER.core
/// </summary>
public void Dispose()
{
this.completedOutputs?.Dispose();
CompletePending(true);
fht.DisposeClientSession(ID);
@ -229,7 +232,7 @@ namespace FASTER.core
/// <param name="serialNo"></param>
/// <returns></returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public (Status, Output) Read(Key key, Context userContext = default, long serialNo = 0)
public (Status status, Output output) Read(Key key, Context userContext = default, long serialNo = 0)
{
Input input = default;
Output output = default;
@ -406,6 +409,37 @@ namespace FASTER.core
public Status Upsert(Key key, Value desiredValue, Context userContext = default, long serialNo = 0)
=> Upsert(ref key, ref desiredValue, userContext, serialNo);
/// <summary>
/// Async Upsert operation
/// Await operation in session before issuing next one
/// </summary>
/// <param name="key"></param>
/// <param name="desiredValue"></param>
/// <param name="userContext"></param>
/// <param name="serialNo"></param>
/// <param name="token"></param>
/// <returns>ValueTask for Upsert result, user needs to await and then call Complete() on the result</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ValueTask<FasterKV<Key, Value>.UpsertAsyncResult<Input, Output, Context>> UpsertAsync(ref Key key, ref Value desiredValue, Context userContext = default, long serialNo = 0, CancellationToken token = default)
{
Debug.Assert(SupportAsync, NotAsyncSessionErr);
return fht.UpsertAsync(this.FasterSession, this.ctx, ref key, ref desiredValue, userContext, serialNo, token);
}
/// <summary>
/// Async Upsert operation
/// Await operation in session before issuing next one
/// </summary>
/// <param name="key"></param>
/// <param name="desiredValue"></param>
/// <param name="userContext"></param>
/// <param name="serialNo"></param>
/// <param name="token"></param>
/// <returns>ValueTask for Upsert result, user needs to await and then call Complete() on the result</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ValueTask<FasterKV<Key, Value>.UpsertAsyncResult<Input, Output, Context>> UpsertAsync(Key key, Value desiredValue, Context userContext = default, long serialNo = 0, CancellationToken token = default)
=> UpsertAsync(ref key, ref desiredValue, userContext, serialNo, token);
/// <summary>
/// RMW operation
/// </summary>
@ -503,6 +537,35 @@ namespace FASTER.core
public Status Delete(Key key, Context userContext = default, long serialNo = 0)
=> Delete(ref key, userContext, serialNo);
/// <summary>
/// Async Delete operation
/// Await operation in session before issuing next one
/// </summary>
/// <param name="key"></param>
/// <param name="userContext"></param>
/// <param name="serialNo"></param>
/// <param name="token"></param>
/// <returns>ValueTask for Delete result, user needs to await and then call Complete() on the result</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ValueTask<FasterKV<Key, Value>.DeleteAsyncResult<Input, Output, Context>> DeleteAsync(ref Key key, Context userContext = default, long serialNo = 0, CancellationToken token = default)
{
Debug.Assert(SupportAsync, NotAsyncSessionErr);
return fht.DeleteAsync(this.FasterSession, this.ctx, ref key, userContext, serialNo, token);
}
/// <summary>
/// Async Delete operation
/// Await operation in session before issuing next one
/// </summary>
/// <param name="key"></param>
/// <param name="userContext"></param>
/// <param name="serialNo"></param>
/// <param name="token"></param>
/// <returns>ValueTask for Delete result, user needs to await and then call Complete() on the result</returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public ValueTask<FasterKV<Key, Value>.DeleteAsyncResult<Input, Output, Context>> DeleteAsync(Key key, Context userContext = default, long serialNo = 0, CancellationToken token = default)
=> DeleteAsync(ref key, userContext, serialNo, token);
/// <summary>
/// Experimental feature
/// Checks whether specified record is present in memory
@ -557,33 +620,61 @@ namespace FASTER.core
}
/// <summary>
/// Sync complete all outstanding pending operations
/// Async operations (ReadAsync) must be completed individually
/// Synchronously complete outstanding pending synchronous operations.
/// Async operations must be completed individually.
/// </summary>
/// <param name="spinWait">Spin-wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Extend spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns></returns>
public bool CompletePending(bool spinWait = false, bool spinWaitForCommit = false)
/// <param name="wait">Wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns>True if all pending operations have completed, false otherwise</returns>
public bool CompletePending(bool wait = false, bool spinWaitForCommit = false)
=> CompletePending(false, wait, spinWaitForCommit);
/// <summary>
/// Synchronously complete outstanding pending synchronous operations, returning outputs for the completed operations.
/// Async operations must be completed individually.
/// </summary>
/// <param name="completedOutputs">Outputs completed by this operation</param>
/// <param name="wait">Wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns>True if all pending operations have completed, false otherwise</returns>
public bool CompletePendingWithOutputs(out CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs, bool wait = false, bool spinWaitForCommit = false)
{
InitializeCompletedOutputs();
var result = CompletePending(true, wait, spinWaitForCommit);
completedOutputs = this.completedOutputs;
return result;
}
void InitializeCompletedOutputs()
{
if (this.completedOutputs is null)
this.completedOutputs = new CompletedOutputIterator<Key, Value, Input, Output, Context>();
else
this.completedOutputs.Dispose();
}
private bool CompletePending(bool getOutputs, bool wait, bool spinWaitForCommit)
{
if (SupportAsync) UnsafeResumeThread();
try
{
var result = fht.InternalCompletePending(ctx, FasterSession, spinWait);
var requestedOutputs = getOutputs ? this.completedOutputs : default;
var result = fht.InternalCompletePending(ctx, FasterSession, wait, requestedOutputs);
if (spinWaitForCommit)
{
if (spinWait != true)
if (wait != true)
{
throw new FasterException("Can spin-wait for checkpoint completion only if spinWait is true");
throw new FasterException("Can spin-wait for commit only if wait is true");
}
do
{
fht.InternalCompletePending(ctx, FasterSession, spinWait);
fht.InternalCompletePending(ctx, FasterSession, wait, requestedOutputs);
if (fht.InRestPhase())
{
fht.InternalCompletePending(ctx, FasterSession, spinWait);
fht.InternalCompletePending(ctx, FasterSession, wait, requestedOutputs);
return true;
}
} while (spinWait);
} while (wait);
}
return result;
}
@ -594,11 +685,26 @@ namespace FASTER.core
}
/// <summary>
/// Complete all outstanding pending operations asynchronously
/// Async operations (ReadAsync) must be completed individually
/// Complete all pending synchronous FASTER operations.
/// Async operations must be completed individually.
/// </summary>
/// <returns></returns>
public async ValueTask CompletePendingAsync(bool waitForCommit = false, CancellationToken token = default)
public ValueTask CompletePendingAsync(bool waitForCommit = false, CancellationToken token = default)
=> CompletePendingAsync(false, waitForCommit, token);
/// <summary>
/// Complete all pending synchronous FASTER operations, returning outputs for the completed operations.
/// Async operations must be completed individually.
/// </summary>
/// <returns>Outputs completed by this operation</returns>
public async ValueTask<CompletedOutputIterator<Key, Value, Input, Output, Context>> CompletePendingWithOutputsAsync(bool waitForCommit = false, CancellationToken token = default)
{
InitializeCompletedOutputs();
await CompletePendingAsync(true, waitForCommit, token);
return this.completedOutputs;
}
private async ValueTask CompletePendingAsync(bool getOutputs, bool waitForCommit = false, CancellationToken token = default)
{
token.ThrowIfCancellationRequested();
@ -606,7 +712,7 @@ namespace FASTER.core
throw new NotSupportedException("Async operations not supported over protected epoch");
// Complete all pending operations on session
await fht.CompletePendingAsync(this.FasterSession, this.ctx, token);
await fht.CompletePendingAsync(this.FasterSession, this.ctx, token, getOutputs ? this.completedOutputs : null);
// Wait for commit if necessary
if (waitForCommit)
@ -614,8 +720,8 @@ namespace FASTER.core
}
/// <summary>
/// Check if at least one request is ready for CompletePending to be called on
/// Returns completed immediately if there are no outstanding requests
/// Check if at least one synchronous request is ready for CompletePending to be called on
/// Returns completed immediately if there are no outstanding synchronous requests
/// </summary>
/// <param name="token"></param>
/// <returns></returns>
@ -638,7 +744,10 @@ namespace FASTER.core
{
token.ThrowIfCancellationRequested();
// Complete all pending operations on session
if (!ctx.prevCtx.pendingReads.IsEmpty || !ctx.pendingReads.IsEmpty)
throw new FasterException("Make sure all async operations issued on this session are awaited and completed first");
// Complete all pending sync operations on session
await CompletePendingAsync(token: token);
var task = fht.CheckpointTask;

View file

@ -1,8 +1,6 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT license.
#pragma warning disable 0162
using System;
using System.Diagnostics;
using System.Runtime.CompilerServices;
@ -19,7 +17,7 @@ namespace FASTER.core
/// <typeparam name="Value">Value</typeparam>
public partial class FasterKV<Key, Value> : FasterBase, IFasterKV<Key, Value>
{
#region CompletePendingAsync
/// <summary>
/// Check if at least one (sync) request is ready for CompletePending to operate on
/// </summary>
@ -49,50 +47,58 @@ namespace FASTER.core
/// </summary>
/// <returns></returns>
internal async ValueTask CompletePendingAsync<Input, Output, Context>(IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, CancellationToken token = default)
FasterExecutionContext<Input, Output, Context> currentCtx, CancellationToken token,
CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs)
{
bool done = true;
#region Previous pending requests
if (!RelaxedCPR)
while (true)
{
if (currentCtx.phase == Phase.IN_PROGRESS
||
currentCtx.phase == Phase.WAIT_PENDING)
bool done = true;
#region Previous pending requests
if (!RelaxedCPR)
{
await currentCtx.prevCtx.pendingReads.WaitUntilEmptyAsync(token);
await InternalCompletePendingRequestsAsync(currentCtx.prevCtx, currentCtx, fasterSession, token);
Debug.Assert(currentCtx.prevCtx.SyncIoPendingCount == 0);
if (currentCtx.prevCtx.retryRequests.Count > 0)
if (currentCtx.phase == Phase.IN_PROGRESS || currentCtx.phase == Phase.WAIT_PENDING)
{
fasterSession.UnsafeResumeThread();
InternalCompleteRetryRequests(currentCtx.prevCtx, currentCtx, fasterSession);
fasterSession.UnsafeSuspendThread();
try
{
InternalCompletePendingRequests(currentCtx.prevCtx, currentCtx, fasterSession, completedOutputs);
InternalCompleteRetryRequests(currentCtx.prevCtx, currentCtx, fasterSession);
}
finally
{
fasterSession.UnsafeSuspendThread();
}
await currentCtx.prevCtx.WaitPendingAsync(token);
done &= currentCtx.prevCtx.HasNoPendingRequests;
}
done &= (currentCtx.prevCtx.HasNoPendingRequests);
}
}
#endregion
#endregion
await InternalCompletePendingRequestsAsync(currentCtx, currentCtx, fasterSession, token);
fasterSession.UnsafeResumeThread();
try
{
InternalCompletePendingRequests(currentCtx, currentCtx, fasterSession, completedOutputs);
InternalCompleteRetryRequests(currentCtx, currentCtx, fasterSession);
}
finally
{
fasterSession.UnsafeSuspendThread();
}
fasterSession.UnsafeResumeThread();
InternalCompleteRetryRequests(currentCtx, currentCtx, fasterSession);
fasterSession.UnsafeSuspendThread();
await currentCtx.WaitPendingAsync(token);
done &= currentCtx.HasNoPendingRequests;
Debug.Assert(currentCtx.HasNoPendingRequests);
if (done) return;
done &= (currentCtx.HasNoPendingRequests);
InternalRefresh(currentCtx, fasterSession);
if (!done)
{
throw new Exception("CompletePendingAsync did not complete");
Thread.Yield();
}
}
#endregion CompletePendingAsync
#region ReadAsync
internal sealed class ReadAsyncInternal<Input, Output, Context>
{
const int Completed = 1;
@ -207,7 +213,7 @@ namespace FASTER.core
/// Complete the read operation, after any I/O is completed.
/// </summary>
/// <returns>The read result, or throws an exception if error encountered.</returns>
public (Status, Output) Complete()
public (Status status, Output output) Complete()
{
if (status != Status.PENDING)
return (status, output);
@ -219,7 +225,7 @@ namespace FASTER.core
/// Complete the read operation, after any I/O is completed.
/// </summary>
/// <returns>The read result and the previous address in the Read key's hash chain, or throws an exception if error encountered.</returns>
public (Status, Output) Complete(out RecordInfo recordInfo)
public (Status status, Output output) Complete(out RecordInfo recordInfo)
{
if (status != Status.PENDING)
{
@ -298,8 +304,60 @@ namespace FASTER.core
return new ReadAsyncResult<Input, Output, Context>(@this, fasterSession, currentCtx, pendingContext, diskRequest, exceptionDispatchInfo);
}
#endregion ReadAsync
internal sealed class RmwAsyncInternal<Input, Output, Context>
#region UpdelAsync
// UpsertAsync and DeleteAsync can only go pending when BlockAllocate triggers a flush while inserting new records at the tail.
// Define a pair of interfaces so a single shared UpdelAsyncInternal class can serve both operations rather than duplicating code.
internal interface IUpdelAsyncOperation<Input, Output, Context, TAsyncResult>
{
TAsyncResult CreateResult(OperationStatus internalStatus);
OperationStatus DoFastOperation(FasterKV<Key, Value> fasterKV, PendingContext<Input, Output, Context> pendingContext, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx);
ValueTask<TAsyncResult> DoSlowOperation(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext,
Task flushTask, CancellationToken token);
}
internal interface IUpdelAsyncResult<Input, Output, Context, TAsyncResult>
{
ValueTask<TAsyncResult> CompleteAsync(CancellationToken token = default);
Status Status { get; }
}
internal struct UpsertAsyncOperation<Input, Output, Context> : IUpdelAsyncOperation<Input, Output, Context, UpsertAsyncResult<Input, Output, Context>>
{
public UpsertAsyncResult<Input, Output, Context> CreateResult(OperationStatus internalStatus) => new UpsertAsyncResult<Input, Output, Context>(internalStatus);
public OperationStatus DoFastOperation(FasterKV<Key, Value> fasterKV, PendingContext<Input, Output, Context> pendingContext, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx)
=> fasterKV.InternalUpsert(ref pendingContext.key.Get(), ref pendingContext.value.Get(), ref pendingContext.userContext, ref pendingContext, fasterSession, currentCtx, pendingContext.serialNum);
public ValueTask<UpsertAsyncResult<Input, Output, Context>> DoSlowOperation(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext, Task flushTask, CancellationToken token)
=> SlowUpsertAsync(fasterKV, fasterSession, currentCtx, pendingContext, flushTask, token);
}
internal struct DeleteAsyncOperation<Input, Output, Context> : IUpdelAsyncOperation<Input, Output, Context, DeleteAsyncResult<Input, Output, Context>>
{
public DeleteAsyncResult<Input, Output, Context> CreateResult(OperationStatus internalStatus) => new DeleteAsyncResult<Input, Output, Context>(internalStatus);
public OperationStatus DoFastOperation(FasterKV<Key, Value> fasterKV, PendingContext<Input, Output, Context> pendingContext, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx)
=> fasterKV.InternalDelete(ref pendingContext.key.Get(), ref pendingContext.userContext, ref pendingContext, fasterSession, currentCtx, pendingContext.serialNum);
public ValueTask<DeleteAsyncResult<Input, Output, Context>> DoSlowOperation(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext, Task flushTask, CancellationToken token)
=> SlowDeleteAsync(fasterKV, fasterSession, currentCtx, pendingContext, flushTask, token);
}
internal sealed class UpdelAsyncInternal<Input, Output, Context, TAsyncOperation, TAsyncResult>
where TAsyncOperation : IUpdelAsyncOperation<Input, Output, Context, TAsyncResult>, new()
where TAsyncResult : IUpdelAsyncResult<Input, Output, Context, TAsyncResult>
{
const int Completed = 1;
const int Pending = 0;
@ -307,44 +365,51 @@ namespace FASTER.core
readonly FasterKV<Key, Value> _fasterKV;
readonly IFasterSession<Key, Value, Input, Output, Context> _fasterSession;
readonly FasterExecutionContext<Input, Output, Context> _currentCtx;
internal readonly TAsyncOperation asyncOperation;
PendingContext<Input, Output, Context> _pendingContext;
AsyncIOContext<Key, Value> _diskRequest;
Task _flushTask;
int CompletionComputeStatus;
internal RmwAsyncInternal(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext,
AsyncIOContext<Key, Value> diskRequest, ExceptionDispatchInfo exceptionDispatchInfo)
internal UpdelAsyncInternal(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext,
Task flushTask, ExceptionDispatchInfo exceptionDispatchInfo)
{
_exception = exceptionDispatchInfo;
_fasterKV = fasterKV;
_fasterSession = fasterSession;
_currentCtx = currentCtx;
_pendingContext = pendingContext;
_diskRequest = diskRequest;
_flushTask = flushTask;
asyncOperation = new TAsyncOperation();
CompletionComputeStatus = Pending;
}
internal ValueTask<RmwAsyncResult<Input, Output, Context>> CompleteAsync(CancellationToken token = default)
internal async ValueTask<TAsyncResult> CompleteAsync(CancellationToken token = default)
{
Debug.Assert(_fasterKV.RelaxedCPR);
AsyncIOContext<Key, Value> newDiskRequest = default;
if (_diskRequest.asyncOperation != null
&& CompletionComputeStatus != Completed
if (CompletionComputeStatus != Completed
&& Interlocked.CompareExchange(ref CompletionComputeStatus, Completed, Pending) == Pending)
{
try
{
if (_exception == default)
{
await _flushTask.WithCancellationAsync(token);
_fasterSession.UnsafeResumeThread();
try
{
var status = _fasterKV.InternalCompletePendingRequestFromContext(_currentCtx, _currentCtx, _fasterSession, _diskRequest, ref _pendingContext, true, out newDiskRequest);
OperationStatus internalStatus;
do
{
_flushTask = _fasterKV.hlog.FlushTask;
internalStatus = asyncOperation.DoFastOperation(_fasterKV, _pendingContext, _fasterSession, _currentCtx);
} while (internalStatus == OperationStatus.RETRY_NOW);
_pendingContext.Dispose();
if (status != Status.PENDING)
return new ValueTask<RmwAsyncResult<Input, Output, Context>>(new RmwAsyncResult<Input, Output, Context>(status, default));
if (internalStatus == OperationStatus.SUCCESS || internalStatus == OperationStatus.NOTFOUND)
return asyncOperation.CreateResult(internalStatus);
Debug.Assert(internalStatus == OperationStatus.ALLOCATE_FAILED);
}
finally
{
@ -358,7 +423,308 @@ namespace FASTER.core
}
finally
{
_currentCtx.ioPendingRequests.Remove(_pendingContext.id);
_currentCtx.asyncPendingCount--;
}
}
if (_exception != default)
_exception.Throw();
return await asyncOperation.DoSlowOperation(_fasterKV, _fasterSession, _currentCtx, _pendingContext, _flushTask, token);
}
internal Status Complete()
{
var t = this.CompleteAsync();
if (t.IsCompleted)
return t.Result.Status;
// Handle rare case
var r = t.GetAwaiter().GetResult();
while (r.Status == Status.PENDING)
r = r.CompleteAsync().GetAwaiter().GetResult();
return r.Status;
}
}
private static Status TranslateStatus(OperationStatus internalStatus)
{
if (internalStatus == OperationStatus.SUCCESS || internalStatus == OperationStatus.NOTFOUND)
return (Status)internalStatus;
Debug.Assert(internalStatus == OperationStatus.ALLOCATE_FAILED);
return Status.PENDING;
}
private static ExceptionDispatchInfo GetSlowUpdelAsyncExceptionDispatchInfo<Input, Output, Context>(FasterKV<Key, Value> @this, FasterExecutionContext<Input, Output, Context> currentCtx, CancellationToken token)
{
currentCtx.asyncPendingCount++;
ExceptionDispatchInfo exceptionDispatchInfo = default;
try
{
token.ThrowIfCancellationRequested();
if (@this.epoch.ThisInstanceProtected())
throw new NotSupportedException("Async operations not supported over protected epoch");
}
catch (Exception e)
{
exceptionDispatchInfo = ExceptionDispatchInfo.Capture(e);
}
return exceptionDispatchInfo;
}
#endregion
#region Upsert
/// <summary>
/// State storage for the completion of an async Upsert, or the result if the Upsert was completed synchronously
/// </summary>
public struct UpsertAsyncResult<Input, Output, Context> : IUpdelAsyncResult<Input, Output, Context, UpsertAsyncResult<Input, Output, Context>>
{
private readonly OperationStatus internalStatus;
internal readonly UpdelAsyncInternal<Input, Output, Context, UpsertAsyncOperation<Input, Output, Context>, UpsertAsyncResult<Input, Output, Context>> updelAsyncInternal;
/// <summary>Current status of the Upsert operation</summary>
public Status Status => TranslateStatus(internalStatus);
internal UpsertAsyncResult(OperationStatus internalStatus)
{
Debug.Assert(internalStatus == OperationStatus.SUCCESS || internalStatus == OperationStatus.NOTFOUND);
this.internalStatus = internalStatus;
this.updelAsyncInternal = default;
}
internal UpsertAsyncResult(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext, Task flushTask, ExceptionDispatchInfo exceptionDispatchInfo)
{
internalStatus = OperationStatus.ALLOCATE_FAILED;
updelAsyncInternal = new UpdelAsyncInternal<Input, Output, Context, UpsertAsyncOperation<Input, Output, Context>, UpsertAsyncResult<Input, Output, Context>>(
fasterKV, fasterSession, currentCtx, pendingContext, flushTask, exceptionDispatchInfo);
}
/// <summary>Complete the Upsert operation, issuing additional allocation asynchronously if needed. It is usually preferable to use Complete() instead of this.</summary>
/// <returns>ValueTask for Upsert result. User needs to await again if result status is Status.PENDING.</returns>
public ValueTask<UpsertAsyncResult<Input, Output, Context>> CompleteAsync(CancellationToken token = default)
{
if (internalStatus != OperationStatus.ALLOCATE_FAILED)
return new ValueTask<UpsertAsyncResult<Input, Output, Context>>(new UpsertAsyncResult<Input, Output, Context>(internalStatus));
return updelAsyncInternal.CompleteAsync(token);
}
/// <summary>Complete the Upsert operation, issuing additional (rare) I/O synchronously if needed.</summary>
/// <returns>Status of Upsert operation</returns>
public Status Complete()
{
if (internalStatus != OperationStatus.ALLOCATE_FAILED)
return this.Status;
return updelAsyncInternal.Complete();
}
}
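Because UpsertAsync pends only on a flush, the ValueTask usually completes synchronously; callers issuing operations in bulk can exploit this by collecting only the rare incomplete tasks. A sketch assuming a long/long store with Empty context, so the result type is UpsertAsyncResult<long, long, Empty>:

    var pending = new List<ValueTask<FasterKV<long, long>.UpsertAsyncResult<long, long, Empty>>>();
    for (long k = 0; k < 1024; ++k)
    {
        long v = k * 2;
        var vt = session.UpsertAsync(ref k, ref v);
        if (vt.IsCompletedSuccessfully)
            vt.Result.Complete();     // common case: already done
        else
            pending.Add(vt);          // rare case: a flush intervened
    }
    foreach (var vt in pending)
        (await vt).Complete();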
[MethodImpl(MethodImplOptions.AggressiveInlining)]
internal ValueTask<UpsertAsyncResult<Input, Output, Context>> UpsertAsync<Input, Output, Context>(IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, ref Key key, ref Value value, Context userContext, long serialNo, CancellationToken token = default)
{
var pcontext = default(PendingContext<Input, Output, Context>);
pcontext.IsAsync = true;
Task flushTask;
fasterSession.UnsafeResumeThread();
try
{
OperationStatus internalStatus;
do
{
flushTask = hlog.FlushTask;
internalStatus = InternalUpsert(ref key, ref value, ref userContext, ref pcontext, fasterSession, currentCtx, serialNo);
} while (internalStatus == OperationStatus.RETRY_NOW);
if (internalStatus == OperationStatus.SUCCESS || internalStatus == OperationStatus.NOTFOUND)
return new ValueTask<UpsertAsyncResult<Input, Output, Context>>(new UpsertAsyncResult<Input, Output, Context>(internalStatus));
Debug.Assert(internalStatus == OperationStatus.ALLOCATE_FAILED);
}
finally
{
Debug.Assert(serialNo >= currentCtx.serialNum, "Operation serial numbers must be non-decreasing");
currentCtx.serialNum = serialNo;
fasterSession.UnsafeSuspendThread();
}
return SlowUpsertAsync(this, fasterSession, currentCtx, pcontext, flushTask, token);
}
private static ValueTask<UpsertAsyncResult<Input, Output, Context>> SlowUpsertAsync<Input, Output, Context>(
FasterKV<Key, Value> @this,
IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx,
PendingContext<Input, Output, Context> pendingContext, Task flushTask, CancellationToken token = default)
{
ExceptionDispatchInfo exceptionDispatchInfo = GetSlowUpdelAsyncExceptionDispatchInfo(@this, currentCtx, token);
return new ValueTask<UpsertAsyncResult<Input, Output, Context>>(new UpsertAsyncResult<Input, Output, Context>(@this, fasterSession, currentCtx, pendingContext, flushTask, exceptionDispatchInfo));
}
#region Delete
/// <summary>
/// State storage for the completion of an async Delete, or the result if the Delete was completed synchronously
/// </summary>
public struct DeleteAsyncResult<Input, Output, Context> : IUpdelAsyncResult<Input, Output, Context, DeleteAsyncResult<Input, Output, Context>>
{
private readonly OperationStatus internalStatus;
internal readonly UpdelAsyncInternal<Input, Output, Context, DeleteAsyncOperation<Input, Output, Context>, DeleteAsyncResult<Input, Output, Context>> updelAsyncInternal;
/// <summary>Current status of the Delete operation</summary>
public Status Status => TranslateStatus(internalStatus);
internal DeleteAsyncResult(OperationStatus internalStatus)
{
Debug.Assert(internalStatus == OperationStatus.SUCCESS || internalStatus == OperationStatus.NOTFOUND);
this.internalStatus = internalStatus;
this.updelAsyncInternal = default;
}
internal DeleteAsyncResult(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext, Task flushTask, ExceptionDispatchInfo exceptionDispatchInfo)
{
internalStatus = OperationStatus.ALLOCATE_FAILED;
updelAsyncInternal = new UpdelAsyncInternal<Input, Output, Context, DeleteAsyncOperation<Input, Output, Context>, DeleteAsyncResult<Input, Output, Context>>(
fasterKV, fasterSession, currentCtx, pendingContext, flushTask, exceptionDispatchInfo);
}
/// <summary>Complete the Delete operation, issuing additional allocation asynchronously if needed. It is usually preferable to use Complete() instead of this.</summary>
/// <returns>ValueTask for Delete result. User needs to await again if result status is Status.PENDING.</returns>
public ValueTask<DeleteAsyncResult<Input, Output, Context>> CompleteAsync(CancellationToken token = default)
{
if (internalStatus != OperationStatus.ALLOCATE_FAILED)
return new ValueTask<DeleteAsyncResult<Input, Output, Context>>(new DeleteAsyncResult<Input, Output, Context>(internalStatus));
return updelAsyncInternal.CompleteAsync(token);
}
/// <summary>Complete the Delete operation, issuing additional (rare) I/O synchronously if needed.</summary>
/// <returns>Status of Delete operation</returns>
public Status Complete()
{
if (internalStatus != OperationStatus.ALLOCATE_FAILED)
return this.Status;
return updelAsyncInternal.Complete();
}
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
internal ValueTask<DeleteAsyncResult<Input, Output, Context>> DeleteAsync<Input, Output, Context>(IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, ref Key key, Context userContext, long serialNo, CancellationToken token = default)
{
var pcontext = default(PendingContext<Input, Output, Context>);
pcontext.IsAsync = true;
Task flushTask;
fasterSession.UnsafeResumeThread();
try
{
OperationStatus internalStatus;
do
{
flushTask = hlog.FlushTask;
internalStatus = InternalDelete(ref key, ref userContext, ref pcontext, fasterSession, currentCtx, serialNo);
} while (internalStatus == OperationStatus.RETRY_NOW);
if (internalStatus == OperationStatus.SUCCESS || internalStatus == OperationStatus.NOTFOUND)
return new ValueTask<DeleteAsyncResult<Input, Output, Context>>(new DeleteAsyncResult<Input, Output, Context>(internalStatus));
Debug.Assert(internalStatus == OperationStatus.ALLOCATE_FAILED);
}
finally
{
Debug.Assert(serialNo >= currentCtx.serialNum, "Operation serial numbers must be non-decreasing");
currentCtx.serialNum = serialNo;
fasterSession.UnsafeSuspendThread();
}
return SlowDeleteAsync(this, fasterSession, currentCtx, pcontext, flushTask, token);
}
private static ValueTask<DeleteAsyncResult<Input, Output, Context>> SlowDeleteAsync<Input, Output, Context>(
FasterKV<Key, Value> @this,
IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx,
PendingContext<Input, Output, Context> pendingContext, Task flushTask, CancellationToken token = default)
{
ExceptionDispatchInfo exceptionDispatchInfo = GetSlowUpdelAsyncExceptionDispatchInfo(@this, currentCtx, token);
return new ValueTask<DeleteAsyncResult<Input, Output, Context>>(new DeleteAsyncResult<Input, Output, Context>(@this, fasterSession, currentCtx, pendingContext, flushTask, exceptionDispatchInfo));
}
#endregion Delete
#endregion UpdelAsync
#region RMWAsync
internal sealed class RmwAsyncInternal<Input, Output, Context>
{
const int Completed = 1;
const int Pending = 0;
ExceptionDispatchInfo _exception;
readonly FasterKV<Key, Value> _fasterKV;
readonly IFasterSession<Key, Value, Input, Output, Context> _fasterSession;
readonly FasterExecutionContext<Input, Output, Context> _currentCtx;
readonly Task _flushTask;
PendingContext<Input, Output, Context> _pendingContext;
AsyncIOContext<Key, Value> _diskRequest;
int CompletionComputeStatus;
internal RmwAsyncInternal(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext,
AsyncIOContext<Key, Value> diskRequest, Task flushTask, ExceptionDispatchInfo exceptionDispatchInfo)
{
_exception = exceptionDispatchInfo;
_fasterKV = fasterKV;
_fasterSession = fasterSession;
_currentCtx = currentCtx;
_pendingContext = pendingContext;
_diskRequest = diskRequest;
_flushTask = flushTask;
CompletionComputeStatus = Pending;
}
internal async ValueTask<RmwAsyncResult<Input, Output, Context>> CompleteAsync(CancellationToken token = default)
{
Debug.Assert(_fasterKV.RelaxedCPR);
AsyncIOContext<Key, Value> newDiskRequest = default;
if ((_diskRequest.asyncOperation != null || _flushTask is { })
&& CompletionComputeStatus != Completed
&& Interlocked.CompareExchange(ref CompletionComputeStatus, Completed, Pending) == Pending)
{
try
{
if (_exception == default)
{
// If we are here because of _flushTask, then _diskRequest is default--there is no pending disk operation. Await _flushTask, then go back
// to the top of the RmwAsync call, reissuing the normal sync call. Do this *before* UnsafeResumeThread.
if (_flushTask is { })
{
await _flushTask.WithCancellationAsync(token);
return await _fasterKV.RmwAsync(_fasterSession, _currentCtx, ref _pendingContext.key.Get(), ref _pendingContext.input.Get(), _pendingContext.userContext, _pendingContext.serialNum, token);
}
_fasterSession.UnsafeResumeThread();
try
{
var status = _fasterKV.InternalCompletePendingRequestFromContext(_currentCtx, _currentCtx, _fasterSession, _diskRequest, ref _pendingContext, true, out newDiskRequest);
_pendingContext.Dispose();
if (status != Status.PENDING)
return new RmwAsyncResult<Input, Output, Context>(status, default);
}
finally
{
_fasterSession.UnsafeSuspendThread();
}
}
}
catch (Exception e)
{
_exception = ExceptionDispatchInfo.Capture(e);
}
finally
{
if (_flushTask is null)
_currentCtx.ioPendingRequests.Remove(_pendingContext.id);
_currentCtx.asyncPendingCount--;
_currentCtx.pendingReads.Remove();
}
@ -367,36 +733,36 @@ namespace FASTER.core
if (_exception != default)
_exception.Throw();
return SlowRmwAsync(_fasterKV, _fasterSession, _currentCtx, _pendingContext, newDiskRequest, token);
return await SlowRmwAsync(_fasterKV, _fasterSession, _currentCtx, _pendingContext, newDiskRequest, _flushTask, token);
}
}
/// <summary>
/// State storage for the completion of an async Read, or the result if the read was completed synchronously
/// State storage for the completion of an async RMW, or the result if the RMW was completed synchronously
/// </summary>
public struct RmwAsyncResult<Input, Output, Context>
{
internal readonly Status status;
/// <summary>Current status of the RMW operation</summary>
public Status Status { get; }
internal readonly Output output;
internal readonly RmwAsyncInternal<Input, Output, Context> rmwAsyncInternal;
private readonly RmwAsyncInternal<Input, Output, Context> rmwAsyncInternal;
internal RmwAsyncResult(Status status, Output output)
{
this.status = status;
this.Status = status;
this.output = output;
this.rmwAsyncInternal = default;
}
internal RmwAsyncResult(
FasterKV<Key, Value> fasterKV,
IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx,
PendingContext<Input, Output, Context> pendingContext, AsyncIOContext<Key, Value> diskRequest, ExceptionDispatchInfo exceptionDispatchInfo)
internal RmwAsyncResult(FasterKV<Key, Value> fasterKV, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext,
AsyncIOContext<Key, Value> diskRequest, Task flushTask, ExceptionDispatchInfo exceptionDispatchInfo)
{
status = Status.PENDING;
Status = Status.PENDING;
output = default;
rmwAsyncInternal = new RmwAsyncInternal<Input, Output, Context>(fasterKV, fasterSession, currentCtx, pendingContext, diskRequest, exceptionDispatchInfo);
rmwAsyncInternal = new RmwAsyncInternal<Input, Output, Context>(fasterKV, fasterSession, currentCtx, pendingContext, diskRequest, flushTask, exceptionDispatchInfo);
}
/// <summary>
@ -406,9 +772,8 @@ namespace FASTER.core
/// <returns>ValueTask for RMW result. User needs to await again if result status is Status.PENDING.</returns>
public ValueTask<RmwAsyncResult<Input, Output, Context>> CompleteAsync(CancellationToken token = default)
{
if (status != Status.PENDING)
return new ValueTask<RmwAsyncResult<Input, Output, Context>>(new RmwAsyncResult<Input, Output, Context>(status, default));
if (Status != Status.PENDING)
return new ValueTask<RmwAsyncResult<Input, Output, Context>>(new RmwAsyncResult<Input, Output, Context>(Status, default));
return rmwAsyncInternal.CompleteAsync(token);
}
@ -418,20 +783,19 @@ namespace FASTER.core
/// <returns>Status of RMW operation</returns>
public Status Complete()
{
if (status != Status.PENDING)
return status;
if (Status != Status.PENDING)
return Status;
var t = rmwAsyncInternal.CompleteAsync();
if (t.IsCompleted)
return t.Result.status;
return t.Result.Status;
// Handle rare case
var r = t.GetAwaiter().GetResult();
while (r.status == Status.PENDING)
while (r.Status == Status.PENDING)
r = r.CompleteAsync().GetAwaiter().GetResult();
return r.status;
return r.Status;
}
}
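RmwAsyncResult now exposes Status as a property, and RMW can pend for either disk I/O or a tail flush; both resolve through the same loop:

    var r = await session.RMWAsync(ref key, ref input);
    while (r.Status == Status.PENDING)
        r = await r.CompleteAsync();   // retries after the I/O or flush completes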
[MethodImpl(MethodImplOptions.AggressiveInlining)]
@ -439,26 +803,28 @@ namespace FASTER.core
FasterExecutionContext<Input, Output, Context> currentCtx, ref Key key, ref Input input, Context context, long serialNo, CancellationToken token = default)
{
var pcontext = default(PendingContext<Input, Output, Context>);
pcontext.IsAsync = true;
var diskRequest = default(AsyncIOContext<Key, Value>);
Task flushTask;
fasterSession.UnsafeResumeThread();
try
{
OperationStatus internalStatus;
do
{
flushTask = hlog.FlushTask;
internalStatus = InternalRMW(ref key, ref input, ref context, ref pcontext, fasterSession, currentCtx, serialNo);
while (internalStatus == OperationStatus.RETRY_NOW || internalStatus == OperationStatus.RETRY_LATER);
} while (internalStatus == OperationStatus.RETRY_NOW || internalStatus == OperationStatus.RETRY_LATER);
if (internalStatus == OperationStatus.SUCCESS || internalStatus == OperationStatus.NOTFOUND)
{
return new ValueTask<RmwAsyncResult<Input, Output, Context>>(new RmwAsyncResult<Input, Output, Context>((Status)internalStatus, default));
}
else
else if (internalStatus != OperationStatus.ALLOCATE_FAILED)
{
flushTask = null;
var status = HandleOperationStatus(currentCtx, currentCtx, ref pcontext, fasterSession, internalStatus, true, out diskRequest);
if (status != Status.PENDING)
return new ValueTask<RmwAsyncResult<Input, Output, Context>>(new RmwAsyncResult<Input, Output, Context>(status, default));
}
@ -470,14 +836,13 @@ namespace FASTER.core
fasterSession.UnsafeSuspendThread();
}
return SlowRmwAsync(this, fasterSession, currentCtx, pcontext, diskRequest, token);
return SlowRmwAsync(this, fasterSession, currentCtx, pcontext, diskRequest, flushTask, token);
}
private static async ValueTask<RmwAsyncResult<Input, Output, Context>> SlowRmwAsync<Input, Output, Context>(
FasterKV<Key, Value> @this,
IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx,
PendingContext<Input, Output, Context> pendingContext, AsyncIOContext<Key, Value> diskRequest, CancellationToken token = default)
FasterKV<Key, Value> @this, IFasterSession<Key, Value, Input, Output, Context> fasterSession,
FasterExecutionContext<Input, Output, Context> currentCtx, PendingContext<Input, Output, Context> pendingContext,
AsyncIOContext<Key, Value> diskRequest, Task flushTask, CancellationToken token = default)
{
currentCtx.asyncPendingCount++;
currentCtx.pendingReads.Add();
@ -490,15 +855,19 @@ namespace FASTER.core
if (@this.epoch.ThisInstanceProtected())
throw new NotSupportedException("Async operations not supported over protected epoch");
using (token.Register(() => diskRequest.asyncOperation.TrySetCanceled()))
diskRequest = await diskRequest.asyncOperation.Task;
if (flushTask is null)
{
using (token.Register(() => diskRequest.asyncOperation.TrySetCanceled()))
diskRequest = await diskRequest.asyncOperation.Task;
}
}
catch (Exception e)
{
exceptionDispatchInfo = ExceptionDispatchInfo.Capture(e);
}
return new RmwAsyncResult<Input, Output, Context>(@this, fasterSession, currentCtx, pendingContext, diskRequest, exceptionDispatchInfo);
return new RmwAsyncResult<Input, Output, Context>(@this, fasterSession, currentCtx, pendingContext, diskRequest, flushTask, exceptionDispatchInfo);
}
#endregion RMWAsync
}
}

View file

@ -81,7 +81,7 @@ namespace FASTER.core
/// <param name="userContext">User application context passed in case the read goes pending due to IO</param>
/// <param name="serialNo">The serial number of the operation (used in recovery)</param>
/// <returns>A tuple of (<see cref="Status"/>, <typeparamref name="Output"/>)</returns>
public (Status, Output) Read(Key key, Context userContext = default, long serialNo = 0);
public (Status status, Output output) Read(Key key, Context userContext = default, long serialNo = 0);
/// <summary>
/// Read operation that accepts a <paramref name="recordInfo"/> ref argument to start the lookup at instead of starting at the hash table entry for <paramref name="key"/>,
@ -287,21 +287,38 @@ namespace FASTER.core
public void Refresh();
/// <summary>
/// Sync complete all outstanding pending operations
/// Async operations (ReadAsync) must be completed individually
/// Synchronously complete outstanding pending synchronous operations.
/// Async operations must be completed individually.
/// </summary>
/// <param name="spinWait">Spin-wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Extend spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns></returns>
public bool CompletePending(bool spinWait = false, bool spinWaitForCommit = false);
/// <param name="wait">Wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns>True if all pending operations have completed, false otherwise</returns>
public bool CompletePending(bool wait = false, bool spinWaitForCommit = false);
/// <summary>
/// Complete all outstanding pending operations asynchronously
/// Async operations (ReadAsync) must be completed individually
/// Synchronously complete outstanding pending synchronous operations, returning outputs for the completed operations.
/// Async operations must be completed individually.
/// </summary>
/// <param name="completedOutputs">Outputs completed by this operation</param>
/// <param name="wait">Wait for all pending operations on session to complete</param>
/// <param name="spinWaitForCommit">Spin-wait until ongoing commit/checkpoint, if any, completes</param>
/// <returns>True if all pending operations have completed, false otherwise</returns>
public bool CompletePendingWithOutputs(out CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs, bool wait = false, bool spinWaitForCommit = false);
/// <summary>
/// Complete all pending synchronous FASTER operations.
/// Async operations must be completed individually.
/// </summary>
/// <returns></returns>
public ValueTask CompletePendingAsync(bool waitForCommit = false, CancellationToken token = default);
/// <summary>
/// Complete all pending synchronous FASTER operations, returning outputs for the completed operations.
/// Async operations must be completed individually.
/// </summary>
/// <returns>Outputs completed by this operation</returns>
public ValueTask<CompletedOutputIterator<Key, Value, Input, Output, Context>> CompletePendingWithOutputsAsync(bool waitForCommit = false, CancellationToken token = default);
/// <summary>
/// Check if at least one request is ready for CompletePending to be called on
/// Returns completed immediately if there are no outstanding requests

View file

@ -13,24 +13,34 @@ namespace FASTER.core
/// Supports sync get (TryGet) for fast path
/// </summary>
/// <typeparam name="T"></typeparam>
class AsyncPool<T> : IDisposable where T : IDisposable
public class AsyncPool<T> : IDisposable where T : IDisposable
{
readonly int size;
readonly SemaphoreSlim handleAvailable;
readonly ConcurrentQueue<T> itemQueue;
bool disposed = false;
SemaphoreSlim handleAvailable;
ConcurrentQueue<T> itemQueue;
int disposedCount = 0;
/// <summary>
/// Constructor
/// </summary>
/// <param name="size"></param>
/// <param name="creator"></param>
public AsyncPool(int size, Func<T> creator)
{
this.size = 1;
this.size = size;
this.handleAvailable = new SemaphoreSlim(size);
this.itemQueue = new ConcurrentQueue<T>();
for (int i = 0; i < size; i++)
itemQueue.Enqueue(creator());
}
public async Task<T> GetAsync(CancellationToken token = default)
/// <summary>
/// Get item
/// </summary>
/// <param name="token"></param>
/// <returns></returns>
public async ValueTask<T> GetAsync(CancellationToken token = default)
{
for (; ; )
{
@ -43,6 +53,11 @@ namespace FASTER.core
}
}
/// <summary>
/// Try get item
/// </summary>
/// <param name="item"></param>
/// <returns></returns>
public bool TryGet(out T item)
{
if (disposed)
@ -53,12 +68,20 @@ namespace FASTER.core
return itemQueue.TryDequeue(out item);
}
/// <summary>
/// Return item to pool
/// </summary>
/// <param name="item"></param>
public void Return(T item)
{
itemQueue.Enqueue(item);
handleAvailable.Release();
if (handleAvailable.CurrentCount < itemQueue.Count)
handleAvailable.Release();
}
/// <summary>
/// Dispose
/// </summary>
public void Dispose()
{
disposed = true;
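A usage sketch for the now-public pool (hypothetical file-handle pool; note the guard added to Return above, which avoids over-releasing the semaphore when more items than permits come back):

using System.IO;
using System.Threading.Tasks;
using FASTER.core;

static async Task UsePooledHandleAsync(string path)
{
    // Four pre-created handles; GetAsync parks callers when all are checked out.
    using var pool = new AsyncPool<FileStream>(4,
        () => new FileStream(path, FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite));

    if (!pool.TryGet(out var stream))   // sync fast path
        stream = await pool.GetAsync(); // slow path: wait for a free handle

    try { await stream.FlushAsync(); }
    finally { pool.Return(stream); }    // hand the handle back
}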


@ -18,6 +18,7 @@ namespace FASTER.core
{
private readonly bool preallocateFile;
private readonly bool deleteOnClose;
private readonly bool osReadBuffering;
private readonly SafeConcurrentDictionary<int, (AsyncPool<Stream>, AsyncPool<Stream>)> logHandles;
private readonly SectorAlignedBufferPool pool;
@ -36,7 +37,8 @@ namespace FASTER.core
/// <param name="deleteOnClose"></param>
/// <param name="capacity">The maximal number of bytes this storage device can accommondate, or CAPACITY_UNSPECIFIED if there is no such limit</param>
/// <param name="recoverDevice">Whether to recover device metadata from existing files</param>
public ManagedLocalStorageDevice(string filename, bool preallocateFile = false, bool deleteOnClose = false, long capacity = Devices.CAPACITY_UNSPECIFIED, bool recoverDevice = false)
/// <param name="osReadBuffering">Enable OS read buffering</param>
public ManagedLocalStorageDevice(string filename, bool preallocateFile = false, bool deleteOnClose = false, long capacity = Devices.CAPACITY_UNSPECIFIED, bool recoverDevice = false, bool osReadBuffering = false)
: base(filename, GetSectorSize(filename), capacity)
{
pool = new SectorAlignedBufferPool(1, 1);
@ -49,6 +51,7 @@ namespace FASTER.core
this._disposed = false;
this.preallocateFile = preallocateFile;
this.deleteOnClose = deleteOnClose;
this.osReadBuffering = osReadBuffering;
logHandles = new SafeConcurrentDictionary<int, (AsyncPool<Stream>, AsyncPool<Stream>)>();
if (recoverDevice)
RecoverFiles();
@ -395,7 +398,7 @@ namespace FASTER.core
memory?.Return();
#endif
// Sequentialize all writes to same handle
((FileStream)logWriteHandle).Flush(true);
await ((FileStream)logWriteHandle).FlushAsync();
streampool?.Return(logWriteHandle);
// Issue user callback
@ -477,10 +480,11 @@ namespace FASTER.core
{
const int FILE_FLAG_NO_BUFFERING = 0x20000000;
FileOptions fo =
(FileOptions)FILE_FLAG_NO_BUFFERING |
FileOptions.WriteThrough |
FileOptions.Asynchronous |
FileOptions.None;
if (!osReadBuffering)
fo |= (FileOptions)FILE_FLAG_NO_BUFFERING;
var logReadHandle = new FileStream(
GetSegmentName(segmentId), FileMode.OpenOrCreate,
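Constructing the device with the new flag might look like this (path is hypothetical):

using FASTER.core;

// Keep OS read caching (FILE_FLAG_NO_BUFFERING is skipped on the read handle);
// writes remain write-through as before.
var device = new ManagedLocalStorageDevice(
    "logs/hlog.log",
    preallocateFile: false,
    deleteOnClose: true,
    osReadBuffering: true);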


@ -42,9 +42,10 @@ namespace FASTER.core
/// <summary>
/// List of action, epoch pairs containing actions to be performed
/// when an epoch becomes safe to reclaim.
/// when an epoch becomes safe to reclaim. Marked volatile to
/// ensure latest value is seen by the last suspended thread.
/// </summary>
private int drainCount = 0;
private volatile int drainCount = 0;
private readonly EpochActionPair[] drainList = new EpochActionPair[kDrainListSize];
/// <summary>
@ -182,6 +183,9 @@ namespace FASTER.core
{
while (drainCount > 0)
{
// Barrier ensures we see the latest epoch table entries. Ensures
// that the last suspended thread drains all pending actions.
Thread.MemoryBarrier();
for (int index = 1; index <= kTableSize; ++index)
{
int entry_epoch = (*(tableAligned + index)).localCurrentEpoch;
@ -285,7 +289,7 @@ namespace FASTER.core
/// Increment global current epoch
/// </summary>
/// <returns></returns>
public int BumpCurrentEpoch()
private int BumpCurrentEpoch()
{
int nextEpoch = Interlocked.Add(ref CurrentEpoch, 1);
@ -301,7 +305,7 @@ namespace FASTER.core
/// </summary>
/// <param name="onDrain">Trigger action</param>
/// <returns></returns>
public int BumpCurrentEpoch(Action onDrain)
public void BumpCurrentEpoch(Action onDrain)
{
int PriorEpoch = BumpCurrentEpoch() - 1;
@ -348,8 +352,6 @@ namespace FASTER.core
}
ProtectAndDrain();
return PriorEpoch + 1;
}
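A sketch of the deferred-action pattern these changes harden; LightEpoch is engine plumbing, so this is illustrative rather than recommended application code:

using System;
using FASTER.core;

var epoch = new LightEpoch();
epoch.Resume();                  // enter epoch protection
epoch.BumpCurrentEpoch(() =>
    Console.WriteLine("prior epoch drained; safe to reclaim resources"));
epoch.Suspend();                 // the last suspender now drains all pending actions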
/// <summary>


@ -0,0 +1,142 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT license.
using System;
using System.Collections.Generic;
namespace FASTER.core
{
/// <summary>
/// A list of <see cref="CompletedOutputIterator{TKey, TValue, TInput, TOutput, TContext}"/> for completed outputs from a pending operation.
/// </summary>
/// <typeparam name="TKey">The Key type of the <see cref="FasterKV{Key, Value}"/></typeparam>
/// <typeparam name="TValue">The Value type of the <see cref="FasterKV{Key, Value}"/></typeparam>
/// <typeparam name="TInput">The session input type</typeparam>
/// <typeparam name="TOutput">The session output type</typeparam>
/// <typeparam name="TContext">The session context type</typeparam>
/// <remarks>The session holds this list and returns an enumeration to the caller of an appropriate CompletePending overload. The session will handle
/// disposing and clearing this list, but it is best if the caller calls Dispose() after processing the results, so the key and input heap containers
/// are released as soon as possible.</remarks>
public class CompletedOutputIterator<TKey, TValue, TInput, TOutput, TContext> : IDisposable
{
internal const int kInitialAlloc = 32;
internal const int kReallocMultiple = 2;
internal CompletedOutput<TKey, TValue, TInput, TOutput, TContext>[] vector = new CompletedOutput<TKey, TValue, TInput, TOutput, TContext>[kInitialAlloc];
internal int maxIndex = -1;
internal int currentIndex = -1;
internal void Add(ref FasterKV<TKey, TValue>.PendingContext<TInput, TOutput, TContext> pendingContext)
{
// Note: vector is never null
if (this.maxIndex >= vector.Length - 1)
Array.Resize(ref this.vector, this.vector.Length * kReallocMultiple);
++maxIndex;
this.vector[maxIndex].Set(ref pendingContext);
}
/// <summary>
/// Advance the iterator to the next element.
/// </summary>
/// <returns>False if this advances past the last element of the array, else true</returns>
public bool Next()
{
if (this.currentIndex < this.maxIndex)
{
++this.currentIndex;
return true;
}
this.currentIndex = vector.Length;
return false;
}
/// <summary>
/// Returns a reference to the current element of the enumeration.
/// </summary>
/// <returns>A reference to the current element of the enumeration</returns>
/// <exception cref="IndexOutOfRangeException"> if there is no current element, either because Next() has not been called or it has advanced
/// past the last element of the array
/// </exception>
public ref CompletedOutput<TKey, TValue, TInput, TOutput, TContext> Current => ref this.vector[this.currentIndex];
/// <inheritdoc/>
public void Dispose()
{
for (; this.maxIndex >= 0; --this.maxIndex)
this.vector[maxIndex].Dispose();
this.currentIndex = -1;
}
}
/// <summary>
/// Structure to hold a key and its output for a pending operation.
/// </summary>
/// <typeparam name="TKey">The Key type of the <see cref="FasterKV{Key, Value}"/></typeparam>
/// <typeparam name="TValue">The Value type of the <see cref="FasterKV{Key, Value}"/></typeparam>
/// <typeparam name="TInput">The session input type</typeparam>
/// <typeparam name="TOutput">The session output type</typeparam>
/// <typeparam name="TContext">The session context type</typeparam>
/// <remarks>The session holds a list of these that it returns to the caller of an appropriate CompletePending overload. The session handles disposing
/// and clearing the list, but it is best if the caller calls Dispose() after processing the results, so the key and input heap containers
/// are released as soon as possible.</remarks>
public struct CompletedOutput<TKey, TValue, TInput, TOutput, TContext>
{
private IHeapContainer<TKey> keyContainer;
private IHeapContainer<TInput> inputContainer;
/// <summary>
/// The key for this pending operation.
/// </summary>
public ref TKey Key => ref keyContainer.Get();
/// <summary>
/// The input for this pending operation.
/// </summary>
public ref TInput Input => ref inputContainer.Get();
/// <summary>
/// The output for this pending operation.
/// </summary>
public TOutput Output;
/// <summary>
/// The context for this pending operation.
/// </summary>
public TContext Context;
/// <summary>
/// The header of the record for this operation
/// </summary>
public RecordInfo RecordInfo;
/// <summary>
/// The logical address of the record for this operation
/// </summary>
public long Address;
internal void Set(ref FasterKV<TKey, TValue>.PendingContext<TInput, TOutput, TContext> pendingContext)
{
this.keyContainer = pendingContext.key;
this.inputContainer = pendingContext.input;
this.Output = pendingContext.output;
this.Context = pendingContext.userContext;
this.RecordInfo = pendingContext.recordInfo;
this.Address = pendingContext.logicalAddress;
}
internal void Dispose()
{
var tempKeyContainer = keyContainer;
keyContainer = default;
if (tempKeyContainer is { })
tempKeyContainer.Dispose();
var tempInputContainer = inputContainer;
inputContainer = default;
if (tempInputContainer is { })
tempInputContainer.Dispose();
Output = default;
Context = default;
}
}
}
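The async variant exposed as session.CompletePendingWithOutputsAsync() follows the same shape (sketch; the struct is copied out of Current because ref locals are not allowed in async methods):

using System;
using System.Threading.Tasks;
using FASTER.core;

static async ValueTask DrainPendingWithOutputsAsync(
    ClientSession<long, long, long, long, Empty, SimpleFunctions<long, long>> session)
{
    var completedOutputs = await session.CompletePendingWithOutputsAsync();
    while (completedOutputs.Next())
    {
        var result = completedOutputs.Current; // copy; RecordInfo/Address were added to CompletedOutput in this PR
        Console.WriteLine($"{result.Key} -> {result.Output} @ {result.Address}");
    }
    completedOutputs.Dispose();
}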


@ -9,6 +9,7 @@ using System.IO;
using System.Linq;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;
namespace FASTER.core
{
@ -30,7 +31,8 @@ namespace FASTER.core
RECORD_ON_DISK,
SUCCESS_UNMARK,
CPR_SHIFT_DETECTED,
CPR_PENDING_DETECTED
CPR_PENDING_DETECTED,
ALLOCATE_FAILED
}
internal class SerializedFasterExecutionContext
@ -89,7 +91,25 @@ namespace FASTER.core
internal const byte kSkipReadCache = 0x01;
internal const byte kNoKey = 0x02;
internal const byte kSkipCopyReadsToTail = 0x04;
internal const byte kIsAsync = 0x08;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
internal IHeapContainer<Key> DetachKey()
{
var tempKeyContainer = this.key;
this.key = default; // transfer ownership
return tempKeyContainer;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
internal IHeapContainer<Input> DetachInput()
{
var tempInputContainer = this.input;
this.input = default; // transfer ownership
return tempInputContainer;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
internal static byte GetOperationFlags(ReadFlags readFlags, bool noKey = false)
{
Debug.Assert((byte)ReadFlags.SkipReadCache == kSkipReadCache);
@ -119,6 +139,12 @@ namespace FASTER.core
set => operationFlags = value ? (byte)(operationFlags | kSkipCopyReadsToTail) : (byte)(operationFlags & ~kSkipCopyReadsToTail);
}
internal bool IsAsync
{
get => (operationFlags & kIsAsync) != 0;
set => operationFlags = value ? (byte)(operationFlags | kIsAsync) : (byte)(operationFlags & ~kIsAsync);
}
public void Dispose()
{
key?.Dispose();
@ -151,6 +177,28 @@ namespace FASTER.core
}
}
public void WaitPending(LightEpoch epoch)
{
if (SyncIoPendingCount > 0)
{
try
{
epoch.Suspend();
readyResponses.WaitForEntry();
}
finally
{
epoch.Resume();
}
}
}
public async ValueTask WaitPendingAsync(CancellationToken token = default)
{
if (SyncIoPendingCount > 0)
await readyResponses.WaitForEntryAsync(token);
}
public FasterExecutionContext<Input, Output, Context> prevCtx;
}
}


@ -59,7 +59,10 @@ namespace FASTER.core
public const long kInvalidEntry = 0;
/// Number of times to retry a compare-and-swap before failure
public const long kRetryThreshold = 1000000;
public const long kRetryThreshold = 1000000; // TODO unused
/// Number of times to spin before awaiting or blocking on a flush task.
public const long kFlushSpinCount = 10; // TODO verify this number
/// Number of merge/split chunks.
public const int kNumMergeChunkBits = 8;


@ -398,7 +398,9 @@ namespace FASTER.core
{
// Immutable region or new record
status = CreateNewRecordUpsert(ref key, ref value, ref pendingContext, fasterSession, sessionCtx, bucket, slot, tag, entry, latestLogicalAddress);
goto LatchRelease;
if (status != OperationStatus.ALLOCATE_FAILED)
goto LatchRelease;
latchDestination = LatchDestination.CreatePendingContext;
}
#endregion
@ -514,7 +516,9 @@ namespace FASTER.core
where FasterSession : IFasterSession<Key, Value, Input, Output, Context>
{
var (actualSize, allocateSize) = hlog.GetRecordSize(ref key, ref value);
BlockAllocate(allocateSize, out long newLogicalAddress, sessionCtx, fasterSession);
BlockAllocate(allocateSize, out long newLogicalAddress, sessionCtx, fasterSession, pendingContext.IsAsync);
if (newLogicalAddress == 0)
return OperationStatus.ALLOCATE_FAILED;
var newPhysicalAddress = hlog.GetPhysicalAddress(newLogicalAddress);
RecordInfo.WriteInfo(ref hlog.GetInfo(newPhysicalAddress),
sessionCtx.version,
@ -729,11 +733,13 @@ namespace FASTER.core
if (latchDestination != LatchDestination.CreatePendingContext)
{
status = CreateNewRecordRMW(ref key, ref input, ref pendingContext, fasterSession, sessionCtx, bucket, slot, logicalAddress, physicalAddress, tag, entry, latestLogicalAddress);
goto LatchRelease;
if (status != OperationStatus.ALLOCATE_FAILED)
goto LatchRelease;
latchDestination = LatchDestination.CreatePendingContext;
}
#endregion
#endregion
#region Create failure context
#region Create failure context
Debug.Assert(latchDestination == LatchDestination.CreatePendingContext, $"RMW CreatePendingContext encountered latchDest == {latchDestination}");
{
pendingContext.type = OperationType.RMW;
@ -859,7 +865,9 @@ namespace FASTER.core
var (actualSize, allocatedSize) = (logicalAddress < hlog.BeginAddress) ?
hlog.GetInitialRecordSize(ref key, ref input, fasterSession) :
hlog.GetRecordSize(physicalAddress, ref input, fasterSession);
BlockAllocate(allocatedSize, out long newLogicalAddress, sessionCtx, fasterSession);
BlockAllocate(allocatedSize, out long newLogicalAddress, sessionCtx, fasterSession, pendingContext.IsAsync);
if (newLogicalAddress == 0)
return OperationStatus.ALLOCATE_FAILED;
var newPhysicalAddress = hlog.GetPhysicalAddress(newLogicalAddress);
RecordInfo.WriteInfo(ref hlog.GetInfo(newPhysicalAddress), sessionCtx.version,
tombstone: false, invalidBit: false,
@ -1115,7 +1123,12 @@ namespace FASTER.core
// Immutable region or new record
// Allocate default record size for tombstone
var (actualSize, allocateSize) = hlog.GetRecordSize(ref key, ref value);
BlockAllocate(allocateSize, out long newLogicalAddress, sessionCtx, fasterSession);
BlockAllocate(allocateSize, out long newLogicalAddress, sessionCtx, fasterSession, pendingContext.IsAsync);
if (newLogicalAddress == 0)
{
status = OperationStatus.ALLOCATE_FAILED;
goto CreatePendingContext;
}
var newPhysicalAddress = hlog.GetPhysicalAddress(newLogicalAddress);
RecordInfo.WriteInfo(ref hlog.GetInfo(newPhysicalAddress),
sessionCtx.version, tombstone:true, invalidBit:false,
@ -1292,9 +1305,11 @@ namespace FASTER.core
return OperationStatus.NOTFOUND;
// If NoKey, we do not have the key in the initial call and must use the key from the satisfied request.
ref Key key = ref pendingContext.NoKey ? ref hlog.GetContextRecordKey(ref request) : ref pendingContext.key.Get();
// With the new overload of CompletePending that returns CompletedOutputs, pendingContext must have the key.
if (pendingContext.NoKey)
pendingContext.key = hlog.GetKeyContainer(ref hlog.GetContextRecordKey(ref request));
fasterSession.SingleReader(ref key, ref pendingContext.input.Get(),
fasterSession.SingleReader(ref pendingContext.key.Get(), ref pendingContext.input.Get(),
ref hlog.GetContextRecordValue(ref request), ref pendingContext.output, request.logicalAddress);
if ((CopyReadsToTail != CopyReadsToTail.None && !pendingContext.SkipCopyReadsToTail) || (UseReadCache && !pendingContext.SkipReadCache))
@ -1519,6 +1534,8 @@ namespace FASTER.core
(actualSize, allocatedSize) = hlog.GetRecordSize(physicalAddress, ref pendingContext.input.Get(), fasterSession);
}
BlockAllocate(allocatedSize, out long newLogicalAddress, sessionCtx, fasterSession);
if (newLogicalAddress == 0)
return OperationStatus.ALLOCATE_FAILED;
var newPhysicalAddress = hlog.GetPhysicalAddress(newLogicalAddress);
RecordInfo.WriteInfo(ref hlog.GetInfo(newPhysicalAddress), opCtx.version,
tombstone:false, invalidBit:false,
@ -1752,31 +1769,69 @@ namespace FASTER.core
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private void BlockAllocate<Input, Output, Context, FasterSession>(
int recordSize,
out long logicalAddress,
FasterExecutionContext<Input, Output, Context> ctx,
FasterSession fasterSession)
where FasterSession : IFasterSession
int recordSize,
out long logicalAddress,
FasterExecutionContext<Input, Output, Context> ctx,
FasterSession fasterSession, bool isAsync = false)
where FasterSession : IFasterSession
{
while ((logicalAddress = hlog.TryAllocate(recordSize)) == 0)
{
hlog.TryComplete();
InternalRefresh(ctx, fasterSession);
Thread.Yield();
}
logicalAddress = hlog.TryAllocate(recordSize);
if (logicalAddress > 0)
return;
SpinBlockAllocate(hlog, recordSize, out logicalAddress, ctx, fasterSession, isAsync);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private void BlockAllocateReadCache<Input, Output, Context, FasterSession>(
int recordSize,
out long logicalAddress,
FasterExecutionContext<Input, Output, Context> currentCtx,
FasterSession fasterSession)
where FasterSession : IFasterSession
int recordSize,
out long logicalAddress,
FasterExecutionContext<Input, Output, Context> currentCtx,
FasterSession fasterSession)
where FasterSession : IFasterSession
{
while ((logicalAddress = readcache.TryAllocate(recordSize)) == 0)
logicalAddress = readcache.TryAllocate(recordSize);
if (logicalAddress > 0)
return;
SpinBlockAllocate(readcache, recordSize, out logicalAddress, currentCtx, fasterSession, isAsync: false);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private void SpinBlockAllocate<Input, Output, Context, FasterSession>(
AllocatorBase<Key, Value> allocator,
int recordSize,
out long logicalAddress,
FasterExecutionContext<Input, Output, Context> ctx,
FasterSession fasterSession, bool isAsync)
where FasterSession : IFasterSession
{
var spins = 0;
while (true)
{
InternalRefresh(currentCtx, fasterSession);
var flushTask = allocator.FlushTask;
logicalAddress = allocator.TryAllocate(recordSize);
if (logicalAddress > 0)
return;
if (logicalAddress == 0)
{
if (spins++ < Constants.kFlushSpinCount)
{
Thread.Yield();
continue;
}
if (isAsync) return;
try
{
epoch.Suspend();
flushTask.GetAwaiter().GetResult();
}
finally
{
epoch.Resume();
}
}
allocator.TryComplete();
InternalRefresh(ctx, fasterSession);
Thread.Yield();
}
}
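The discipline above in isolation: capture the flush task before the allocate attempt (avoiding a lost wakeup), spin briefly, then either go pending (async callers surface ALLOCATE_FAILED) or block on the flush. A standalone sketch with illustrative names; epoch suspend/resume around the blocking wait and the negative-address TryComplete path are elided:

using System;
using System.Threading;
using System.Threading.Tasks;

static long AllocateOrWait(Func<long> tryAllocate, Func<Task> currentFlushTask, bool isAsync)
{
    const int kSpinCount = 10;              // mirrors Constants.kFlushSpinCount
    var spins = 0;
    while (true)
    {
        var flushTask = currentFlushTask(); // capture *before* the attempt
        var address = tryAllocate();
        if (address > 0) return address;
        if (spins++ < kSpinCount) { Thread.Yield(); continue; }
        if (isAsync) return 0;              // caller goes pending and awaits the flush
        flushTask.GetAwaiter().GetResult(); // sync caller blocks until some flush completes
    }
}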


@ -143,10 +143,10 @@ namespace FASTER.core
internal bool InternalCompletePending<Input, Output, Context, FasterSession>(
FasterExecutionContext<Input, Output, Context> ctx,
FasterSession fasterSession,
bool wait = false)
bool wait = false, CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs = null)
where FasterSession : IFasterSession<Key, Value, Input, Output, Context>
{
do
while (true)
{
bool done = true;
@ -155,35 +155,26 @@ namespace FASTER.core
{
if (ctx.phase == Phase.IN_PROGRESS || ctx.phase == Phase.WAIT_PENDING)
{
InternalCompletePendingRequests(ctx.prevCtx, ctx, fasterSession);
InternalCompletePendingRequests(ctx.prevCtx, ctx, fasterSession, completedOutputs);
InternalCompleteRetryRequests(ctx.prevCtx, ctx, fasterSession);
InternalRefresh(ctx, fasterSession);
done &= (ctx.prevCtx.HasNoPendingRequests);
if (wait) ctx.prevCtx.WaitPending(epoch);
done &= ctx.prevCtx.HasNoPendingRequests;
}
}
#endregion
InternalCompletePendingRequests(ctx, ctx, fasterSession);
InternalCompletePendingRequests(ctx, ctx, fasterSession, completedOutputs);
InternalCompleteRetryRequests(ctx, ctx, fasterSession);
if (wait) ctx.WaitPending(epoch);
done &= ctx.HasNoPendingRequests;
done &= (ctx.HasNoPendingRequests);
if (done)
{
return true;
}
if (done) return true;
InternalRefresh(ctx, fasterSession);
if (wait)
{
// Yield before checking again
Thread.Yield();
}
} while (wait);
return false;
if (!wait) return false;
Thread.Yield();
}
}
internal bool InRestPhase() => systemState.phase == Phase.REST;
@ -280,7 +271,7 @@ namespace FASTER.core
internal void InternalCompletePendingRequests<Input, Output, Context, FasterSession>(
FasterExecutionContext<Input, Output, Context> opCtx,
FasterExecutionContext<Input, Output, Context> currentCtx,
FasterSession fasterSession)
FasterSession fasterSession, CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs)
where FasterSession : IFasterSession<Key, Value, Input, Output, Context>
{
hlog.TryComplete();
@ -289,7 +280,7 @@ namespace FASTER.core
while (opCtx.readyResponses.TryDequeue(out AsyncIOContext<Key, Value> request))
{
InternalCompletePendingRequest(opCtx, currentCtx, fasterSession, request);
InternalCompletePendingRequest(opCtx, currentCtx, fasterSession, request, completedOutputs);
}
}
@ -297,7 +288,8 @@ namespace FASTER.core
FasterExecutionContext<Input, Output, Context> opCtx,
FasterExecutionContext<Input, Output, Context> currentCtx,
FasterSession fasterSession,
CancellationToken token = default)
CancellationToken token,
CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs)
where FasterSession : IFasterSession<Key, Value, Input, Output, Context>
{
while (opCtx.SyncIoPendingCount > 0)
@ -310,7 +302,7 @@ namespace FASTER.core
while (opCtx.readyResponses.Count > 0)
{
opCtx.readyResponses.TryDequeue(out request);
InternalCompletePendingRequest(opCtx, currentCtx, fasterSession, request);
InternalCompletePendingRequest(opCtx, currentCtx, fasterSession, request, completedOutputs);
}
fasterSession.UnsafeSuspendThread();
}
@ -319,7 +311,7 @@ namespace FASTER.core
request = await opCtx.readyResponses.DequeueAsync(token);
fasterSession.UnsafeResumeThread();
InternalCompletePendingRequest(opCtx, currentCtx, fasterSession, request);
InternalCompletePendingRequest(opCtx, currentCtx, fasterSession, request, completedOutputs);
fasterSession.UnsafeSuspendThread();
}
}
@ -329,7 +321,7 @@ namespace FASTER.core
FasterExecutionContext<Input, Output, Context> opCtx,
FasterExecutionContext<Input, Output, Context> currentCtx,
FasterSession fasterSession,
AsyncIOContext<Key, Value> request)
AsyncIOContext<Key, Value> request, CompletedOutputIterator<Key, Value, Input, Output, Context> completedOutputs)
where FasterSession : IFasterSession<Key, Value, Input, Output, Context>
{
if (opCtx.ioPendingRequests.TryGetValue(request.id, out var pendingContext))
@ -337,6 +329,8 @@ namespace FASTER.core
// Remove from pending dictionary
opCtx.ioPendingRequests.Remove(request.id);
InternalCompletePendingRequestFromContext(opCtx, currentCtx, fasterSession, request, ref pendingContext, false, out _);
if (completedOutputs is { })
completedOutputs.Add(ref pendingContext);
pendingContext.Dispose();
}
}
@ -367,6 +361,7 @@ namespace FASTER.core
{
internalStatus = InternalContinuePendingRMW(opCtx, request, ref pendingContext, fasterSession, currentCtx);
}
unsafe { pendingContext.recordInfo = hlog.GetInfoFromBytePointer(request.record.GetValidPointer()); }
request.Dispose();
@ -391,15 +386,12 @@ namespace FASTER.core
if (pendingContext.type == OperationType.READ)
{
RecordInfo recordInfo;
unsafe { recordInfo = hlog.GetInfoFromBytePointer(request.record.GetValidPointer()); }
fasterSession.ReadCompletionCallback(ref key,
ref pendingContext.input.Get(),
ref pendingContext.output,
pendingContext.userContext,
status,
recordInfo);
pendingContext.recordInfo);
}
else
{


@ -12,11 +12,6 @@ namespace FASTER.core
/// </summary>
public struct CommitInfo
{
/// <summary>
/// Begin address
/// </summary>
public long BeginAddress;
/// <summary>
/// From address of commit range
/// </summary>


@ -27,6 +27,8 @@ namespace FASTER.core
private readonly GetMemory getMemory;
private readonly int headerSize;
private readonly LogChecksumType logChecksum;
private readonly WorkQueueLIFO<CommitInfo> commitQueue;
internal readonly bool readOnlyMode;
private TaskCompletionSource<LinkedCommitInfo> commitTcs
@ -74,6 +76,11 @@ namespace FASTER.core
/// </summary>
internal Task<LinkedCommitInfo> CommitTask => commitTcs.Task;
/// <summary>
/// Task notifying log flush completions
/// </summary>
internal Task<long> FlushTask => allocator.FlushTask;
/// <summary>
/// Task notifying refresh uncommitted
/// </summary>
@ -85,6 +92,18 @@ namespace FASTER.core
internal readonly ConcurrentDictionary<string, FasterLogScanIterator> PersistedIterators
= new ConcurrentDictionary<string, FasterLogScanIterator>();
/// <summary>
/// Version number to track changes to commit metadata (begin address and persisted iterators)
/// </summary>
private long commitMetadataVersion;
/// <summary>
/// Committed view of commitMetadataVersion
/// </summary>
private long persistedCommitMetadataVersion;
internal Dictionary<string, long> LastPersistedIterators;
/// <summary>
/// Number of references to log, including itself
/// Used to determine disposability of log
@ -141,7 +160,7 @@ namespace FASTER.core
CommittedUntilAddress = Constants.kFirstValidAddress;
CommittedBeginAddress = Constants.kFirstValidAddress;
SafeTailAddress = Constants.kFirstValidAddress;
commitQueue = new WorkQueueLIFO<CommitInfo>(ci => SerialCommitCallbackWorker(ci));
allocator = new BlittableAllocator<Empty, byte>(
logSettings.GetLogSettings(), null,
null, epoch, CommitCallback);
@ -230,7 +249,7 @@ namespace FASTER.core
epoch.Resume();
var length = entry.Length;
logicalAddress = allocator.TryAllocate(headerSize + Align(length));
logicalAddress = allocator.TryAllocateRetryNow(headerSize + Align(length));
if (logicalAddress == 0)
{
epoch.Suspend();
@ -259,7 +278,7 @@ namespace FASTER.core
epoch.Resume();
var length = entry.Length;
logicalAddress = allocator.TryAllocate(headerSize + Align(length));
logicalAddress = allocator.TryAllocateRetryNow(headerSize + Align(length));
if (logicalAddress == 0)
{
epoch.Suspend();
@ -309,18 +328,15 @@ namespace FASTER.core
long logicalAddress;
while (true)
{
var task = @this.CommitTask;
var task = @this.FlushTask;
if (@this.TryEnqueue(entry, out logicalAddress))
break;
if (@this.NeedToWait(@this.CommittedUntilAddress, @this.TailAddress))
// Wait for *some* flush - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
// Wait for *some* commit - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
}
return logicalAddress;
@ -347,18 +363,15 @@ namespace FASTER.core
long logicalAddress;
while (true)
{
var task = @this.CommitTask;
var task = @this.FlushTask;
if (@this.TryEnqueue(entry.Span, out logicalAddress))
break;
if (@this.NeedToWait(@this.CommittedUntilAddress, @this.TailAddress))
// Wait for *some* flush - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
// Wait for *some* commit - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
}
return logicalAddress;
@ -385,18 +398,15 @@ namespace FASTER.core
long logicalAddress;
while (true)
{
var task = @this.CommitTask;
var task = @this.FlushTask;
if (@this.TryEnqueue(readOnlySpanBatch, out logicalAddress))
break;
if (@this.NeedToWait(@this.CommittedUntilAddress, @this.TailAddress))
// Wait for *some* flush - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
// Wait for *some* commit - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
}
return logicalAddress;
@ -417,7 +427,7 @@ namespace FASTER.core
var tailAddress = untilAddress;
if (tailAddress == 0) tailAddress = allocator.GetTailAddress();
while (CommittedUntilAddress < tailAddress) Thread.Yield();
while (CommittedUntilAddress < tailAddress || persistedCommitMetadataVersion < commitMetadataVersion) Thread.Yield();
}
/// <summary>
@ -435,13 +445,10 @@ namespace FASTER.core
var tailAddress = untilAddress;
if (tailAddress == 0) tailAddress = allocator.GetTailAddress();
if (CommittedUntilAddress >= tailAddress)
return;
while (true)
while (CommittedUntilAddress < tailAddress || persistedCommitMetadataVersion < commitMetadataVersion)
{
var linkedCommitInfo = await task.WithCancellationAsync(token);
if (linkedCommitInfo.CommitInfo.UntilAddress < tailAddress)
if (linkedCommitInfo.CommitInfo.UntilAddress < tailAddress || persistedCommitMetadataVersion < commitMetadataVersion)
task = linkedCommitInfo.NextTask;
else
break;
@ -473,10 +480,10 @@ namespace FASTER.core
var task = CommitTask;
var tailAddress = CommitInternal();
while (CommittedUntilAddress < tailAddress)
while (CommittedUntilAddress < tailAddress || persistedCommitMetadataVersion < commitMetadataVersion)
{
var linkedCommitInfo = await task.WithCancellationAsync(token);
if (linkedCommitInfo.CommitInfo.UntilAddress < tailAddress)
if (linkedCommitInfo.CommitInfo.UntilAddress < tailAddress || persistedCommitMetadataVersion < commitMetadataVersion)
task = linkedCommitInfo.NextTask;
else
break;
@ -495,10 +502,10 @@ namespace FASTER.core
if (prevCommitTask == null) prevCommitTask = CommitTask;
var tailAddress = CommitInternal();
while (CommittedUntilAddress < tailAddress)
while (CommittedUntilAddress < tailAddress || persistedCommitMetadataVersion < commitMetadataVersion)
{
var linkedCommitInfo = await prevCommitTask.WithCancellationAsync(token);
if (linkedCommitInfo.CommitInfo.UntilAddress < tailAddress)
if (linkedCommitInfo.CommitInfo.UntilAddress < tailAddress || persistedCommitMetadataVersion < commitMetadataVersion)
prevCommitTask = linkedCommitInfo.NextTask;
else
return linkedCommitInfo.NextTask;
@ -599,33 +606,31 @@ namespace FASTER.core
{
token.ThrowIfCancellationRequested();
long logicalAddress;
Task<LinkedCommitInfo> task;
Task<long> flushTask;
Task<LinkedCommitInfo> commitTask;
// Phase 1: wait for commit to memory
while (true)
{
task = CommitTask;
flushTask = FlushTask;
commitTask = CommitTask;
if (TryEnqueue(entry, out logicalAddress))
break;
if (NeedToWait(CommittedUntilAddress, TailAddress))
try
{
// Wait for *some* commit - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
await flushTask.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
}
// since the task object was read before enqueueing, there is no need for the CommittedUntilAddress >= logicalAddress check like in WaitForCommit
// Phase 2: wait for commit/flush to storage
// Since the task object was read before enqueueing, there is no need for the CommittedUntilAddress >= logicalAddress check like in WaitForCommit
while (true)
{
LinkedCommitInfo linkedCommitInfo;
try
{
linkedCommitInfo = await task.WithCancellationAsync(token);
linkedCommitInfo = await commitTask.WithCancellationAsync(token);
}
catch (CommitFailureException e)
{
@ -634,7 +639,7 @@ namespace FASTER.core
throw;
}
if (linkedCommitInfo.CommitInfo.UntilAddress < logicalAddress + 1)
task = linkedCommitInfo.NextTask;
commitTask = linkedCommitInfo.NextTask;
else
break;
}
@ -653,33 +658,31 @@ namespace FASTER.core
{
token.ThrowIfCancellationRequested();
long logicalAddress;
Task<LinkedCommitInfo> task;
Task<long> flushTask;
Task<LinkedCommitInfo> commitTask;
// Phase 1: wait for commit to memory
while (true)
{
task = CommitTask;
flushTask = FlushTask;
commitTask = CommitTask;
if (TryEnqueue(entry.Span, out logicalAddress))
break;
if (NeedToWait(CommittedUntilAddress, TailAddress))
try
{
// Wait for *some* commit - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
await flushTask.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
}
// since the task object was read before enqueueing, there is no need for the CommittedUntilAddress >= logicalAddress check like in WaitForCommit
// Phase 2: wait for commit/flush to storage
// Since the task object was read before enqueueing, there is no need for the CommittedUntilAddress >= logicalAddress check like in WaitForCommit
while (true)
{
LinkedCommitInfo linkedCommitInfo;
try
{
linkedCommitInfo = await task.WithCancellationAsync(token);
linkedCommitInfo = await commitTask.WithCancellationAsync(token);
}
catch (CommitFailureException e)
{
@ -688,7 +691,7 @@ namespace FASTER.core
throw;
}
if (linkedCommitInfo.CommitInfo.UntilAddress < logicalAddress + 1)
task = linkedCommitInfo.NextTask;
commitTask = linkedCommitInfo.NextTask;
else
break;
}
@ -707,33 +710,31 @@ namespace FASTER.core
{
token.ThrowIfCancellationRequested();
long logicalAddress;
Task<LinkedCommitInfo> task;
Task<long> flushTask;
Task<LinkedCommitInfo> commitTask;
// Phase 1: wait for commit to memory
while (true)
{
task = CommitTask;
flushTask = FlushTask;
commitTask = CommitTask;
if (TryEnqueue(readOnlySpanBatch, out logicalAddress))
break;
if (NeedToWait(CommittedUntilAddress, TailAddress))
try
{
// Wait for *some* commit - failure can be ignored except if the token was signaled (which the caller should handle correctly)
try
{
await task.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
await flushTask.WithCancellationAsync(token);
}
catch when (!token.IsCancellationRequested) { }
}
// since the task object was read before enqueueing, there is no need for the CommittedUntilAddress >= logicalAddress check like in WaitForCommit
// Phase 2: wait for commit/flush to storage
// Since the task object was read before enqueueing, there is no need for the CommittedUntilAddress >= logicalAddress check like in WaitForCommit
while (true)
{
LinkedCommitInfo linkedCommitInfo;
try
{
linkedCommitInfo = await task.WithCancellationAsync(token);
linkedCommitInfo = await commitTask.WithCancellationAsync(token);
}
catch (CommitFailureException e)
{
@ -742,7 +743,7 @@ namespace FASTER.core
throw;
}
if (linkedCommitInfo.CommitInfo.UntilAddress < logicalAddress + 1)
task = linkedCommitInfo.NextTask;
commitTask = linkedCommitInfo.NextTask;
else
break;
}
@ -856,13 +857,23 @@ namespace FASTER.core
/// </summary>
private void CommitCallback(CommitInfo commitInfo)
{
TaskCompletionSource<LinkedCommitInfo> _commitTcs = default;
commitQueue.EnqueueAndTryWork(commitInfo, asTask: true);
}
// We can only allow serial monotonic synchronous commit
lock (this)
private void SerialCommitCallbackWorker(CommitInfo commitInfo)
{
// Check if commit is already covered
if (CommittedBeginAddress >= BeginAddress &&
CommittedUntilAddress >= commitInfo.UntilAddress &&
persistedCommitMetadataVersion >= commitMetadataVersion &&
commitInfo.ErrorCode == 0)
return;
if (commitInfo.ErrorCode == 0)
{
if (CommittedBeginAddress > commitInfo.BeginAddress)
commitInfo.BeginAddress = CommittedBeginAddress;
// Capture CMV first, so metadata prior to CMV update is visible to commit
long _localCMV = commitMetadataVersion;
if (CommittedUntilAddress > commitInfo.FromAddress)
commitInfo.FromAddress = CommittedUntilAddress;
if (CommittedUntilAddress > commitInfo.UntilAddress)
@ -870,7 +881,7 @@ namespace FASTER.core
FasterLogRecoveryInfo info = new FasterLogRecoveryInfo
{
BeginAddress = commitInfo.BeginAddress,
BeginAddress = BeginAddress,
FlushedUntilAddress = commitInfo.UntilAddress
};
@ -878,20 +889,19 @@ namespace FASTER.core
info.SnapshotIterators(PersistedIterators);
logCommitManager.Commit(info.BeginAddress, info.FlushedUntilAddress, info.ToByteArray());
LastPersistedIterators = info.Iterators;
CommittedBeginAddress = info.BeginAddress;
CommittedUntilAddress = info.FlushedUntilAddress;
if (_localCMV > persistedCommitMetadataVersion)
persistedCommitMetadataVersion = _localCMV;
// Update completed address for persisted iterators
info.CommitIterators(PersistedIterators);
_commitTcs = commitTcs;
// If task is not faulted, create new task
// If task is faulted due to commit exception, create new task
if (commitTcs.Task.Status != TaskStatus.Faulted || commitTcs.Task.Exception.InnerException as CommitFailureException != null)
{
commitTcs = new TaskCompletionSource<LinkedCommitInfo>(TaskCreationOptions.RunContinuationsAsynchronously);
}
}
var _commitTcs = commitTcs;
commitTcs = new TaskCompletionSource<LinkedCommitInfo>(TaskCreationOptions.RunContinuationsAsynchronously);
var lci = new LinkedCommitInfo
{
CommitInfo = commitInfo,
@ -904,6 +914,29 @@ namespace FASTER.core
_commitTcs.TrySetException(new CommitFailureException(lci, $"Commit of address range [{commitInfo.FromAddress}-{commitInfo.UntilAddress}] failed with error code {commitInfo.ErrorCode}"));
}
private bool IteratorsChanged()
{
var _lastPersistedIterators = LastPersistedIterators;
if (_lastPersistedIterators == null)
{
if (PersistedIterators.Count == 0)
return false;
return true;
}
if (_lastPersistedIterators.Count != PersistedIterators.Count)
return true;
foreach (var item in _lastPersistedIterators)
{
if (PersistedIterators.TryGetValue(item.Key, out var other))
{
if (item.Value != other.requestedCompletedUntilAddress) return true;
}
else
return true;
}
return false;
}
/// <summary>
/// Read-only callback
/// </summary>
@ -960,7 +993,7 @@ namespace FASTER.core
// Update commit to release pending iterators.
var lci = new LinkedCommitInfo
{
CommitInfo = new CommitInfo { BeginAddress = BeginAddress, FromAddress = BeginAddress, UntilAddress = FlushedUntilAddress },
CommitInfo = new CommitInfo { FromAddress = BeginAddress, UntilAddress = FlushedUntilAddress },
NextTask = commitTcs.Task
};
_commitTcs?.TrySetResult(lci);
@ -1080,7 +1113,7 @@ namespace FASTER.core
epoch.Resume();
logicalAddress = allocator.TryAllocate(allocatedLength);
logicalAddress = allocator.TryAllocateRetryNow(allocatedLength);
if (logicalAddress == 0)
{
epoch.Suspend();
@ -1189,14 +1222,16 @@ namespace FASTER.core
// May need to commit begin address and/or iterators
epoch.Suspend();
var beginAddress = allocator.BeginAddress;
if (beginAddress > CommittedBeginAddress || PersistedIterators.Count > 0)
if (beginAddress > CommittedBeginAddress || IteratorsChanged())
{
Interlocked.Increment(ref commitMetadataVersion);
CommitCallback(new CommitInfo
{
BeginAddress = beginAddress,
FromAddress = CommittedUntilAddress > beginAddress ? CommittedUntilAddress : beginAddress,
UntilAddress = CommittedUntilAddress > beginAddress ? CommittedUntilAddress : beginAddress,
ErrorCode = 0
});
}
}
return tailAddress;
@ -1268,19 +1303,5 @@ namespace FASTER.core
*(ulong*)dest = Utility.XorBytes(dest + 8, length + 4);
}
}
/// <summary>
/// Do we need to await a commit to make forward progress?
/// </summary>
/// <param name="committedUntilAddress"></param>
/// <param name="tailAddress"></param>
/// <returns></returns>
private bool NeedToWait(long committedUntilAddress, long tailAddress)
{
Thread.Yield();
return
allocator.GetPage(committedUntilAddress) <=
(allocator.GetPage(tailAddress) - allocator.BufferSize);
}
}
}
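Caller-side effect of the CommitTask-to-FlushTask switch, sketched (log construction elided): enqueue unblocks as soon as a page flush frees buffer space, and durability remains a separate, explicit commit:

using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using FASTER.core;

static async Task ProduceAsync(FasterLog log, CancellationToken token)
{
    var entry = Encoding.UTF8.GetBytes("hello");
    // Parks on FlushTask (not CommitTask) if the in-memory buffer is full.
    long address = await log.EnqueueAsync(entry, token);
    await log.CommitAsync(token);
    Console.WriteLine($"enqueued at {address} and committed");
}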


@ -38,6 +38,12 @@ namespace FASTER.core
TryCompleteAwaitingTask();
}
/// <summary>
/// Check if countdown is empty
/// </summary>
public bool IsEmpty => counter == 0;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private void TryCompleteAwaitingTask()
{


@ -58,6 +58,12 @@ namespace FASTER.core
}
}
/// <summary>
/// Wait for queue to have at least one entry
/// </summary>
/// <returns></returns>
public void WaitForEntry() => semaphore.Wait();
/// <summary>
/// Wait for queue to have at least one entry
/// </summary>


@ -325,5 +325,38 @@ namespace FASTER.core
// make sure any exceptions in the task get unwrapped and exposed to the caller.
return await task.ConfigureAwait(continueOnCapturedContext);
}
/// <summary>
/// Throws OperationCanceledException if token cancels before the real task completes.
/// Doesn't abort the inner task, but allows the calling code to get "unblocked" and react to stuck tasks.
/// </summary>
internal static Task WithCancellationAsync(this Task task, CancellationToken token, bool useSynchronizationContext = false, bool continueOnCapturedContext = false)
{
if (!token.CanBeCanceled || task.IsCompleted)
{
return task;
}
else if (token.IsCancellationRequested)
{
return Task.FromCanceled(token);
}
return SlowWithCancellationAsync(task, token, useSynchronizationContext, continueOnCapturedContext);
}
private static async Task SlowWithCancellationAsync(Task task, CancellationToken token, bool useSynchronizationContext, bool continueOnCapturedContext)
{
var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
using (token.Register(s => ((TaskCompletionSource<bool>)s).TrySetResult(true), tcs, useSynchronizationContext))
{
if (task != await Task.WhenAny(task, tcs.Task))
{
token.ThrowIfCancellationRequested();
}
}
// make sure any exceptions in the task get unwrapped and exposed to the caller.
await task.ConfigureAwait(continueOnCapturedContext);
}
}
}
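A sketch of the new non-generic overload's contract (the helper is internal, so this assumes code compiled into FASTER.core): the waiter is unblocked on cancellation while the inner task keeps running:

using System;
using System.Threading;
using System.Threading.Tasks;
using FASTER.core;

static async Task WaitBoundedAsync(Task inner, TimeSpan timeout)
{
    using var cts = new CancellationTokenSource(timeout);
    try
    {
        await inner.WithCancellationAsync(cts.Token);
    }
    catch (OperationCanceledException)
    {
        // Gave up waiting; 'inner' was not aborted and may still complete later.
    }
}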


@ -2,19 +2,13 @@
// Licensed under the MIT license.
using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Collections.Generic;
using System.Linq;
using FASTER.core;
using System.IO;
using NUnit.Framework;
namespace FASTER.test
{
//** NOTE - more detailed / in depth Read tests in ReadAddressTests.cs
//** These tests ensure the basics are fully covered
@ -43,8 +37,6 @@ namespace FASTER.test
log.Dispose();
}
[Test]
[Category("FasterKV")]
public void NativeInMemWriteRead()
@ -607,9 +599,6 @@ namespace FASTER.test
{
InputStruct input = default;
OutputStruct output = default;
long invalidAddress = Constants.kInvalidAddress;
var key1 = new KeyStruct { kfield1 = 13, kfield2 = 14 };
var value = new ValueStruct { vfield1 = 23, vfield2 = 24 };
var readAtAddress = fht.Log.BeginAddress;
@ -617,7 +606,7 @@ namespace FASTER.test
session.Upsert(ref key1, ref value, Empty.Default, 0);
//**** When Bug Fixed ... use the invalidAddress line
//**** TODO: When Bug Fixed ... use the invalidAddress line
// Bug #136259
// Ah—slight bug here. I took a quick look to verify that the logicalAddress passed to SingleReader was kInvalidAddress (0),
// and while I got that right for the SingleWriter call, I missed it on the SingleReader.
@ -767,7 +756,5 @@ namespace FASTER.test
s.Read(ref key, ref output);
Assert.IsTrue(output == 10);
}
}
}


@ -0,0 +1,218 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT license.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using FASTER.core;
using NUnit.Framework;
namespace FASTER.test
{
[TestFixture]
class CompletePendingTests
{
private FasterKV<KeyStruct, ValueStruct> fht;
private IDevice log;
[SetUp]
public void Setup()
{
log = Devices.CreateLogDevice(TestContext.CurrentContext.TestDirectory + "/CompletePendingTests.log", preallocateFile: true, deleteOnClose: true);
fht = new FasterKV<KeyStruct, ValueStruct>(128, new LogSettings { LogDevice = log, MemorySizeBits = 29 });
}
[TearDown]
public void TearDown()
{
fht.Dispose();
fht = null;
log.Dispose();
}
const int numRecords = 1000;
static KeyStruct NewKeyStruct(int key) => new KeyStruct { kfield1 = key, kfield2 = key + numRecords * 10 };
static ValueStruct NewValueStruct(int key) => new ValueStruct { vfield1 = key, vfield2 = key + numRecords * 10 };
static InputStruct NewInputStruct(int key) => new InputStruct { ifield1 = key + numRecords * 30, ifield2 = key + numRecords * 40 };
static ContextStruct NewContextStruct(int key) => new ContextStruct { cfield1 = key + numRecords * 50, cfield2 = key + numRecords * 60 };
static void VerifyStructs(int key, ref KeyStruct keyStruct, ref InputStruct inputStruct, ref OutputStruct outputStruct, ref ContextStruct contextStruct)
{
Assert.AreEqual(key, keyStruct.kfield1);
Assert.AreEqual(key + numRecords * 10, keyStruct.kfield2);
Assert.AreEqual(key + numRecords * 30, inputStruct.ifield1);
Assert.AreEqual(key + numRecords * 40, inputStruct.ifield2);
Assert.AreEqual(key, outputStruct.value.vfield1);
Assert.AreEqual(key + numRecords * 10, outputStruct.value.vfield2);
Assert.AreEqual(key + numRecords * 50, contextStruct.cfield1);
Assert.AreEqual(key + numRecords * 60, contextStruct.cfield2);
}
// This class reduces code duplication due to ClientSession vs. AdvancedClientSession
class ProcessPending
{
// Get the first chunk of outputs as a group, testing realloc.
private int deferredPendingMax = CompletedOutputIterator<KeyStruct, ValueStruct, InputStruct, OutputStruct, ContextStruct>.kInitialAlloc + 1;
private int deferredPending = 0;
internal Dictionary<int, long> keyAddressDict = new Dictionary<int, long>();
private bool isFirst = true;
internal bool IsFirst()
{
var temp = this.isFirst;
this.isFirst = false;
return temp;
}
internal bool DeferPending()
{
if (deferredPending < deferredPendingMax)
{
++deferredPending;
return true;
}
return false;
}
internal void Process(CompletedOutputIterator<KeyStruct, ValueStruct, InputStruct, OutputStruct, ContextStruct> completedOutputs)
{
Assert.AreEqual(CompletedOutputIterator<KeyStruct, ValueStruct, InputStruct, OutputStruct, ContextStruct>.kInitialAlloc *
CompletedOutputIterator<KeyStruct, ValueStruct, InputStruct, OutputStruct, ContextStruct>.kReallocMultiple, completedOutputs.vector.Length);
Assert.AreEqual(deferredPending, completedOutputs.maxIndex);
Assert.AreEqual(-1, completedOutputs.currentIndex);
var count = 0;
for (; completedOutputs.Next(); ++count)
{
ref var result = ref completedOutputs.Current;
VerifyStructs((int)result.Key.kfield1, ref result.Key, ref result.Input, ref result.Output, ref result.Context);
Assert.AreEqual(keyAddressDict[(int)result.Key.kfield1], result.Address);
}
completedOutputs.Dispose();
Assert.AreEqual(deferredPending + 1, count);
Assert.AreEqual(-1, completedOutputs.maxIndex);
Assert.AreEqual(-1, completedOutputs.currentIndex);
deferredPending = 0;
deferredPendingMax /= 2;
}
internal void VerifyNoDeferredPending()
{
Assert.AreEqual(0, this.deferredPendingMax); // This implicitly does a null check as well as ensures processing actually happened
Assert.AreEqual(0, this.deferredPending);
}
}
[Test]
[Category("FasterKV")]
public async ValueTask ReadAndCompleteWithPendingOutput([Values]bool isAsync)
{
using var session = fht.For(new FunctionsWithContext<ContextStruct>()).NewSession<FunctionsWithContext<ContextStruct>>();
Assert.IsNull(session.completedOutputs); // Do not instantiate until we need it
ProcessPending processPending = new ProcessPending();
for (var key = 0; key < numRecords; ++key)
{
var keyStruct = NewKeyStruct(key);
var valueStruct = NewValueStruct(key);
processPending.keyAddressDict[key] = fht.Log.TailAddress;
session.Upsert(ref keyStruct, ref valueStruct);
}
// Flush to make reads go pending.
fht.Log.FlushAndEvict(wait: true);
for (var key = 0; key < numRecords; ++key)
{
var keyStruct = NewKeyStruct(key);
var inputStruct = NewInputStruct(key);
var contextStruct = NewContextStruct(key);
OutputStruct outputStruct = default;
// We don't use input or context, but we test that they were carried through correctly.
var status = session.Read(ref keyStruct, ref inputStruct, ref outputStruct, contextStruct);
if (status == Status.PENDING)
{
if (processPending.IsFirst())
{
session.CompletePending(wait: true); // Test that this does not instantiate CompletedOutputIterator
Assert.IsNull(session.completedOutputs); // Do not instantiate until we need it
continue;
}
CompletedOutputIterator<KeyStruct, ValueStruct, InputStruct, OutputStruct, ContextStruct> completedOutputs;
if (!processPending.DeferPending())
{
if (isAsync)
completedOutputs = await session.CompletePendingWithOutputsAsync();
else
session.CompletePendingWithOutputs(out completedOutputs, wait: true);
processPending.Process(completedOutputs);
}
continue;
}
Assert.IsTrue(status == Status.OK);
}
processPending.VerifyNoDeferredPending();
}
[Test]
[Category("FasterKV")]
public async ValueTask AdvReadAndCompleteWithPendingOutput([Values]bool isAsync)
{
using var session = fht.For(new AdvancedFunctionsWithContext<ContextStruct>()).NewSession<AdvancedFunctionsWithContext<ContextStruct>>();
Assert.IsNull(session.completedOutputs); // Do not instantiate until we need it
ProcessPending processPending = new ProcessPending();
for (var key = 0; key < numRecords; ++key)
{
var keyStruct = NewKeyStruct(key);
var valueStruct = NewValueStruct(key);
processPending.keyAddressDict[key] = fht.Log.TailAddress;
session.Upsert(ref keyStruct, ref valueStruct);
}
// Flush to make reads go pending.
fht.Log.FlushAndEvict(wait: true);
for (var key = 0; key < numRecords; ++key)
{
var keyStruct = NewKeyStruct(key);
var inputStruct = NewInputStruct(key);
var contextStruct = NewContextStruct(key);
OutputStruct outputStruct = default;
// We don't use input or context, but we test that they were carried through correctly.
var status = session.Read(ref keyStruct, ref inputStruct, ref outputStruct, contextStruct);
if (status == Status.PENDING)
{
if (processPending.IsFirst())
{
session.CompletePending(wait: true); // Test that this does not instantiate CompletedOutputIterator
Assert.IsNull(session.completedOutputs); // Do not instantiate until we need it
continue;
}
if (!processPending.DeferPending())
{
CompletedOutputIterator<KeyStruct, ValueStruct, InputStruct, OutputStruct, ContextStruct> completedOutputs;
if (isAsync)
completedOutputs = await session.CompletePendingWithOutputsAsync();
else
session.CompletePendingWithOutputs(out completedOutputs, wait: true);
processPending.Process(completedOutputs);
}
continue;
}
Assert.IsTrue(status == Status.OK);
}
processPending.VerifyNoDeferredPending();
}
}
}


@ -28,7 +28,6 @@ namespace FASTER.test
static int entryLength = 100;
static int numEntries = 1000;
static int entryFlag = 9999;
private GetMemory getMemoryData;
// Create and populate the log file so can do various scans
[SetUp]


@ -284,37 +284,21 @@ namespace FASTER.test
await AssertGetNext(asyncByteVectorIter, asyncMemoryOwnerIter, iter, data1);
// This will fail due to page overflow, leaving a "hole"
// This no longer fails in latest TryAllocate improvement
appendResult = log.TryEnqueue(data1, out _);
Assert.IsFalse(appendResult);
Assert.IsTrue(appendResult);
await log.CommitAsync();
await iter.WaitAsync();
async Task retryAppend(bool waitTaskIsCompleted)
{
Assert.IsFalse(waitTaskIsCompleted);
Assert.IsTrue(log.TryEnqueue(data1, out _));
await log.CommitAsync();
}
switch (iteratorType)
{
case IteratorType.Sync:
// Should read the "hole" and return false
Assert.IsFalse(iter.GetNext(out _, out _, out _));
// Should wait for next item
var task = iter.WaitAsync();
await retryAppend(task.IsCompleted);
await task;
// Now the data is available.
Assert.IsTrue(iter.GetNext(out _, out _, out _));
break;
case IteratorType.AsyncByteVector:
{
// Because we have a hole, awaiting MoveNextAsync would hang; instead, hold onto the task that results from WaitAsync() inside MoveNextAsync().
// No more hole
var moveNextTask = asyncByteVectorIter.MoveNextAsync();
await retryAppend(moveNextTask.IsCompleted);
// Now the data is available.
Assert.IsTrue(await moveNextTask);
@ -322,9 +306,8 @@ namespace FASTER.test
break;
case IteratorType.AsyncMemoryOwner:
{
// Because we have a hole, awaiting MoveNextAsync would hang; instead, hold onto the task that results from WaitAsync() inside MoveNextAsync().
// No more hole
var moveNextTask = asyncMemoryOwnerIter.MoveNextAsync();
await retryAppend(moveNextTask.IsCompleted);
// Now the data is available, and must be disposed.
Assert.IsTrue(await moveNextTask);


@ -66,6 +66,7 @@ namespace FASTER.test
int numEntries = 1000;
int numEnqueueThreads = 1;
int numIterThreads = 1;
bool commitThread = false;
// Set Default entry data
for (int i = 0; i < entryLength; i++)
@ -73,6 +74,20 @@ namespace FASTER.test
entry[i] = (byte)i;
}
bool disposeCommitThread = false;
var commit =
new Thread(() =>
{
while (!disposeCommitThread)
{
Thread.Sleep(10);
log.Commit(true);
}
});
if (commitThread)
commit.Start();
Thread[] th = new Thread[numEnqueueThreads];
for (int t = 0; t < numEnqueueThreads; t++)
{
@ -95,7 +110,13 @@ namespace FASTER.test
for (int t = 0; t < numEnqueueThreads; t++)
th[t].Join();
// Commit to the log
if (commitThread)
{
disposeCommitThread = true;
commit.Join();
}
// Final commit to the log
log.Commit(true);
// flag to make sure data has been checked


@ -174,7 +174,10 @@ namespace FASTER.test
{
var key = new MyKey { key = i };
var value = new MyValue { value = i };
session.Upsert(ref key, ref value);
var r = await session.UpsertAsync(ref key, ref value);
while (r.Status == Status.PENDING)
r = await r.CompleteAsync(); // test async version of Upsert completion
}
var key1 = new MyKey { key = 1989 };
@ -203,7 +206,7 @@ namespace FASTER.test
var key = new MyKey { key = i };
input = new MyInput { value = 1 };
var r = await session.RMWAsync(ref key, ref input, Empty.Default);
while (r.status == Status.PENDING)
while (r.Status == Status.PENDING)
{
r = await r.CompleteAsync(); // test async version of RMW completion
}


@ -12,7 +12,7 @@ using System.Diagnostics;
namespace FASTER.test.readaddress
{
#if false // TODO temporarily deactivated due to removal of addresses from single-writer callbacks
#if false // TODO temporarily deactivated due to removal of addresses from single-writer callbacks (also add UpsertAsync where we do RMWAsync/Upsert)
[TestFixture]
public class ReadAddressTests
{
@ -273,7 +273,7 @@ namespace FASTER.test.readaddress
if (status == Status.PENDING)
{
// This will spin CPU for each retrieved record; not recommended for performance-critical code or when retrieving chains for multiple records.
session.CompletePending(spinWait: true);
session.CompletePending(wait: true);
output = context.output;
recordInfo = context.recordInfo;
status = context.status;
@ -343,7 +343,7 @@ namespace FASTER.test.readaddress
if (status == Status.PENDING)
{
// This will spin CPU for each retrieved record; not recommended for performance-critical code or when retrieving chains for multiple records.
session.CompletePending(spinWait: true);
session.CompletePending(wait: true);
output = context.output;
recordInfo = context.recordInfo;
status = context.status;
@ -361,7 +361,7 @@ namespace FASTER.test.readaddress
if (status == Status.PENDING)
{
// This will spin CPU for each retrieved record; not recommended for performance-critical code or when retrieving chains for multiple records.
session.CompletePending(spinWait: true);
session.CompletePending(wait: true);
output = context.output;
recordInfo = context.recordInfo;
status = context.status;
@ -532,7 +532,7 @@ namespace FASTER.test.readaddress
if (status == Status.PENDING)
{
// This will spin CPU for each retrieved record; not recommended for performance-critical code or when retrieving chains for multiple records.
session.CompletePending(spinWait: true);
session.CompletePending(wait: true);
output = context.output;
status = context.status;
context.Reset();


@ -47,7 +47,6 @@ namespace FASTER.test.async
new DirectoryInfo(path).Delete(true);
}
// Test that does .ReadAsync with minimum parameters (ref key)
[Test]
[Category("FasterKV")]
@ -56,7 +55,9 @@ namespace FASTER.test.async
using var s1 = fht1.NewSession(new SimpleFunctions<long, long>());
for (long key = 0; key < numOps; key++)
{
s1.Upsert(ref key, ref key);
var r = await s1.UpsertAsync(ref key, ref key);
while (r.Status == Status.PENDING)
r = await r.CompleteAsync(); // test async version of Upsert completion
}
for (long key = 0; key < numOps; key++)
@ -76,7 +77,8 @@ namespace FASTER.test.async
using var s1 = fht1.NewSession(new SimpleFunctions<long, long>());
for (long key = 0; key < numOps; key++)
{
s1.Upsert(ref key, ref key);
var r = await s1.UpsertAsync(ref key, ref key);
r.Complete(); // test sync version of Upsert completion
}
for (long key = 0; key < numOps; key++)
@ -94,7 +96,8 @@ namespace FASTER.test.async
using var s1 = fht1.NewSession(new SimpleFunctions<long, long>());
for (long key = 0; key < numOps; key++)
{
s1.Upsert(ref key, ref key);
var r = await s1.UpsertAsync(ref key, ref key);
r.Complete(); // test sync version of Upsert completion
}
for (long key = 0; key < numOps; key++)
@ -169,6 +172,68 @@ namespace FASTER.test.async
Assert.IsTrue(status == Status.OK && output == key + input + input);
}
// Test that does .UpsertAsync, .ReadAsync, .DeleteAsync, .ReadAsync with minimum parameters passed by reference (ref key)
[Test]
[Category("FasterKV")]
public async Task UpsertReadDeleteReadAsyncMinParamByRefTest()
{
using var s1 = fht1.NewSession(new SimpleFunctions<long, long>());
for (long key = 0; key < numOps; key++)
{
var r = await s1.UpsertAsync(ref key, ref key);
while (r.Status == Status.PENDING)
r = await r.CompleteAsync(); // test async version of Upsert completion
}
Assert.IsTrue(numOps > 100);
for (long key = 0; key < numOps; key++)
{
var (status, output) = (await s1.ReadAsync(ref key)).Complete();
Assert.IsTrue(status == Status.OK && output == key);
}
{ // Scope for variables
long deleteKey = 99;
var r = await s1.DeleteAsync(ref deleteKey);
while (r.Status == Status.PENDING)
r = await r.CompleteAsync(); // test async version of Delete completion
var (status, _) = (await s1.ReadAsync(ref deleteKey)).Complete();
Assert.IsTrue(status == Status.NOTFOUND);
}
}
// Test that does .UpsertAsync, .ReadAsync, .DeleteAsync, .ReadAsync with minimum parameters passed by value (key)
[Test]
[Category("FasterKV")]
public async Task UpsertReadDeleteReadAsyncMinParamByValueTest()
{
using var s1 = fht1.NewSession(new SimpleFunctions<long, long>());
for (long key = 0; key < numOps; key++)
{
var status = (await s1.UpsertAsync(key, key)).Complete(); // test sync version of Upsert completion
Assert.AreNotEqual(Status.PENDING, status);
}
Assert.IsTrue(numOps > 100);
for (long key = 0; key < numOps; key++)
{
var (status, output) = (await s1.ReadAsync(key)).Complete();
Assert.IsTrue(status == Status.OK && output == key);
}
{ // Scope for variables
long deleteKey = 99;
var status = (await s1.DeleteAsync(deleteKey)).Complete(); // test sync version of Delete completion
Assert.AreNotEqual(Status.PENDING, status);
(status, _) = (await s1.ReadAsync(deleteKey)).Complete();
Assert.IsTrue(status == Status.NOTFOUND);
}
}
/* ** TODO: Using StartAddress in ReadAsync is now obsolete - this may be a design change; until it is resolved, this test is commented out **
*
// Test that uses StartAddress parameter
@@ -218,7 +283,8 @@ namespace FASTER.test.async
using var s1 = fht1.NewSession(new SimpleFunctions<long, long>((a, b) => a + b));
for (key = 0; key < numOps; key++)
{
status = (await s1.RMWAsync(key, key)).Complete();
Assert.AreNotEqual(Status.PENDING, status);
}
for (key = 0; key < numOps; key++)

View file

@@ -52,14 +52,24 @@ namespace FASTER.test
public ValueStruct value;
}
public struct ContextStruct
{
public long cfield1;
public long cfield2;
}
public class Functions : FunctionsWithContext<Empty>
{
}
public class FunctionsWithContext<TContext> : FunctionsBase<KeyStruct, ValueStruct, InputStruct, OutputStruct, TContext>
{
public override void RMWCompletionCallback(ref KeyStruct key, ref InputStruct input, TContext ctx, Status status)
{
Assert.IsTrue(status == Status.OK);
}
public override void ReadCompletionCallback(ref KeyStruct key, ref InputStruct input, ref OutputStruct output, TContext ctx, Status status)
{
Assert.IsTrue(status == Status.OK);
Assert.IsTrue(output.value.vfield1 == key.kfield1);
@@ -94,6 +104,52 @@ namespace FASTER.test
}
}
public class AdvancedFunctions : AdvancedFunctionsWithContext<Empty>
{
}
public class AdvancedFunctionsWithContext<TContext> : AdvancedFunctionsBase<KeyStruct, ValueStruct, InputStruct, OutputStruct, TContext>
{
public override void RMWCompletionCallback(ref KeyStruct key, ref InputStruct input, TContext ctx, Status status)
{
Assert.IsTrue(status == Status.OK);
}
public override void ReadCompletionCallback(ref KeyStruct key, ref InputStruct input, ref OutputStruct output, TContext ctx, Status status, RecordInfo recordInfo)
{
Assert.IsTrue(status == Status.OK);
Assert.IsTrue(output.value.vfield1 == key.kfield1);
Assert.IsTrue(output.value.vfield2 == key.kfield2);
}
// Read functions
public override void SingleReader(ref KeyStruct key, ref InputStruct input, ref ValueStruct value, ref OutputStruct dst, long address) => dst.value = value;
public override void ConcurrentReader(ref KeyStruct key, ref InputStruct input, ref ValueStruct value, ref OutputStruct dst, ref RecordInfo recordInfo, long address) => dst.value = value;
// RMW functions
public override void InitialUpdater(ref KeyStruct key, ref InputStruct input, ref ValueStruct value)
{
value.vfield1 = input.ifield1;
value.vfield2 = input.ifield2;
}
public override bool InPlaceUpdater(ref KeyStruct key, ref InputStruct input, ref ValueStruct value, ref RecordInfo recordInfo, long address)
{
value.vfield1 += input.ifield1;
value.vfield2 += input.ifield2;
return true;
}
public override bool NeedCopyUpdate(ref KeyStruct key, ref InputStruct input, ref ValueStruct oldValue) => true;
public override void CopyUpdater(ref KeyStruct key, ref InputStruct input, ref ValueStruct oldValue, ref ValueStruct newValue)
{
newValue.vfield1 = oldValue.vfield1 + input.ifield1;
newValue.vfield2 = oldValue.vfield2 + input.ifield2;
}
}
public class FunctionsCompaction : FunctionsBase<KeyStruct, ValueStruct, InputStruct, OutputStruct, int>
{
public override void RMWCompletionCallback(ref KeyStruct key, ref InputStruct input, int ctx, Status status)

View file

@@ -14,6 +14,7 @@ an API that allows one to perform a mix of Reads, Blind Updates (Upserts), and
operations. It supports data larger than memory, and accepts an `IDevice` implementation for storing logs on
storage. We have provided `IDevice` implementations for local file system and Azure Page Blobs, but one may create
new devices as well. We also offer meta-devices that can group device instances into sharded and tiered configurations.
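For example, a local log device can be created and handed to the store like this (a minimal sketch; the file path and the `deleteOnClose` setting are illustrative):

```cs
// Create a local-file device for the hybrid log; deleteOnClose removes the file when the device is disposed.
var log = Devices.CreateLogDevice("hlog.log", deleteOnClose: true);
var store = new FasterKV<long, long>(1L << 20, new LogSettings { LogDevice = log });
```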
FASTER may be used as a high-performance replacement for traditional concurrent data structures such as the
.NET ConcurrentDictionary, and additionally supports larger-than-memory data. It also supports checkpointing of the
data structure - both incremental and non-incremental. Operations on FASTER can be issued synchronously or
@@ -58,7 +59,7 @@ var store = new FasterKV<long, string>(1L << 20, new LogSettings { LogDevice = l
1. Hash Table Size: This is the number of buckets allocated to FASTER, where each bucket is 64 bytes (the size of a cache line).
2. Log Settings: These are settings related to the size of the log and devices used by the log.
3. Checkpoint Settings: These are settings related to checkpoints, such as checkpoint type and folder. Covered in the
section on checkpointing [below](#checkpointing-and-recovery).
4. Serialization Settings: Used to provide custom serializers for key and value types. Serializers implement
`IObjectSerializer<Key>` for keys and `IObjectSerializer<Value>` for values. *These are only needed for
@@ -67,8 +68,8 @@ non-blittable types such as C# class objects.*
The total in-memory footprint of FASTER is controlled by the following parameters:
1. Hash table size: This parameter (the first constructor argument) times 64 is the size of the in-memory hash table in bytes.
2. Log size: The logSettings.MemorySizeBits denotes the size of the in-memory part of the hybrid log, in bits. In other
words, the size of the log is 2^B bytes, for a parameter setting of B. Note that if the log points to class key or value
objects, this size only includes the 8-byte reference to the object. The older part of the log is spilled to storage.
Read more about managing memory in FASTER in the [tuning](/FASTER/docs/fasterkv-tuning) guide.
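As a rough worked sizing example (a sketch; the bit settings below are illustrative, not recommendations):

```cs
// 2^20 hash buckets * 64 bytes per bucket = 64 MB hash table.
// MemorySizeBits = 25 -> 2^25 bytes = 32 MB of in-memory log (PageSizeBits = 20 gives 1 MB pages).
var store = new FasterKV<long, long>(
    1L << 20,
    new LogSettings { LogDevice = log, MemorySizeBits = 25, PageSizeBits = 20 });
// Approximate in-memory footprint: 64 MB + 32 MB = 96 MB, plus heap space for class key/value objects.
```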
@@ -98,8 +99,8 @@ Apart from Key and Value, the IFunctions interface is defined on three additiona
#### IAdvancedFunctions
`IAdvancedFunctions` is a superset of `IFunctions` and provides the same methods with some additional parameters:
- Callbacks for in-place updates receive the logical address of the record, which can be useful for applications such as indexing, and a reference to the `RecordInfo` header of the record, for use with the new locking calls.
- ReadCompletionCallback receives the `RecordInfo` of the record that was read.
- Other callbacks receive the logical address of the record, which can be useful for applications such as indexing.
`IAdvancedFunctions` also contains a new method, `ConcurrentDeleter`, which may be used to implement user-defined post-deletion logic, such as calling `Dispose()` on the deleted object.
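A minimal sketch of an advanced functions class (the class name and method bodies are illustrative; the override signatures follow the `AdvancedFunctionsBase` overrides shown in the test changes above):

```cs
public class IndexingFunctions : AdvancedFunctionsBase<KeyStruct, ValueStruct, InputStruct, OutputStruct, Empty>
{
    // The in-place read callback receives the record's RecordInfo header and its logical address.
    public override void ConcurrentReader(ref KeyStruct key, ref InputStruct input, ref ValueStruct value,
                                          ref OutputStruct dst, ref RecordInfo recordInfo, long address)
        => dst.value = value; // 'address' could feed a secondary index, for example

    // The read completion callback receives the RecordInfo of the record that was read.
    public override void ReadCompletionCallback(ref KeyStruct key, ref InputStruct input, ref OutputStruct output,
                                                Empty ctx, Status status, RecordInfo recordInfo)
    {
    }
}
```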
@@ -121,36 +122,80 @@ var session = store.For(new Functions()).NewSession<Functions>();
As with the `IFunctions` and `IAdvancedFunctions` interfaces, there are separate, non-inheriting session classes that provide identical methods: `ClientSession` is returned by `NewSession` for a `Functions` class that implements `IFunctions`, and `AdvancedClientSession` is returned by `NewSession` for a `Functions` class that implements `IAdvancedFunctions`.
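For instance (a sketch; it assumes `Functions` and `AdvancedFunctions` classes implementing the respective interfaces, like the test types earlier in this change):

```cs
using var session = store.For(new Functions()).NewSession<Functions>();                    // ClientSession
using var advSession = store.For(new AdvancedFunctions()).NewSession<AdvancedFunctions>(); // AdvancedClientSession
```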
You can then perform a sequence of read, upsert, and RMW operations on the session. FASTER supports synchronous versions of all operations, as well as async versions. While all methods exist in an async form, only read and RMW are generally expected to go async; upserts and deletes will only go async when it is necessary to wait on flush operations when appending records to the log. The basic forms of these operations are described below; additional overloads are available.
#### Read
```cs
// Sync
var status = session.Read(ref key, ref output);
var status = session.Read(ref key, ref input, ref output, context, serialNo);
// Async
var (status, output) = (await session.ReadAsync(key, input)).Complete();
```
#### Upsert
```cs
// Sync
var status = session.Upsert(ref key, ref value);
var status = session.Upsert(ref key, ref value, context, serialNo);
// Async with sync operation completion
var status = (await session.UpsertAsync(ref key, ref value)).Complete();
// Fully async (completions may themselves need to go async)
var r = await session.UpsertAsync(ref key, ref value);
while (r.Status == Status.PENDING)
r = await r.CompleteAsync();
```
#### RMW
```cs
// Sync
var status = session.RMW(ref key, ref input);
var status = session.RMW(ref key, ref input, context, serialNo);
// Async with sync operation completion (completion may rarely go async)
var status = (await session.RMWAsync(ref key, ref input)).Complete();
// Fully async (completion may rarely go async)
var r = await session.RMWAsync(ref key, ref input);
while (r.Status == Status.PENDING)
r = await r.CompleteAsync();
```
#### Delete
```cs
// Sync
var status = session.Delete(ref key);
var status = session.Delete(ref key, context, serialNo);
// Async
var status = (await session.DeleteAsync(ref key)).Complete();
// Fully async
var r = await session.DeleteAsync(ref key);
while (r.Status == Status.PENDING)
r = await r.CompleteAsync();
```
### Pending Operations
The sync form of `Read`, `Upsert`, `RMW`, and `Delete` may go pending due to IO operations. When `Status.PENDING` is returned, you can call `CompletePending()` to wait for the results to arrive. It is generally most performant to issue many of these operations and then call `CompletePending()` periodically or at the end of a batch. An optional `wait` parameter blocks until all pending operations issued on the session up to that point have completed before the call returns. A second optional parameter, `spinWaitForCommit`, further waits until all operations up to that point have been committed by a parallel checkpointing thread.
Pending operations call the appropriate completion callback on the functions object: any or all of `ReadCompletionCallback`, `UpsertCompletionCallback`, `RMWCompletionCallback`, and `DeleteCompletionCallback` may be called, depending on the completed operation(s).
For ease of retrieving outputs from the calling code, there are also `CompletePendingWithOutputs()` and `CompletePendingWithOutputsAsync()` methods that return an iterator over the `Output`s that were completed.
```cs
session.CompletePending(wait: true);
session.CompletePendingWithOutputs(out var completedOutputs, wait: true);
await session.CompletePendingAsync();
var completedOutputs = await session.CompletePendingWithOutputsAsync();
```
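For example, completed outputs can be drained like this (a sketch; the `Next()`/`Current` iterator shape and the `Output` member follow the `CompletedOutput` additions in this change):

```cs
session.CompletePendingWithOutputs(out var completedOutputs, wait: true);
while (completedOutputs.Next())
{
    var output = completedOutputs.Current.Output; // Key, RecordInfo, and address are also exposed
    // process output...
}
```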
### Disposing
@@ -284,7 +329,7 @@ FASTER also supports true "log compaction", where the log is scanned and live rec
This call performs synchronous compaction on the provided session up to the specified `compactUntil` address, scanning and copying the live records to the tail. It returns the actual log address that the call compacted until (the next nearest record boundary). You can only compact until the log's `SafeReadOnlyAddress`, as the rest of the log is still mutable in-place. If you wish, you can move the read-only address to the tail by calling `store.Log.ShiftReadOnlyToTail(store.Log.TailAddress, true)` or by simply taking a fold-over checkpoint (`await store.TakeHybridLogCheckpointAsync(CheckpointType.FoldOver)`).
Typically, you may compact around 20% (up to 100%) of the log, e.g., you could set the `compactUntil` address to `store.Log.BeginAddress + 0.2 * (store.Log.SafeReadOnlyAddress - store.Log.BeginAddress)`. The parameter `shiftBeginAddress`, when true, causes log compaction to also automatically shift the log's begin address when the compaction is complete. However, since live records are written to the tail, directly shifting the begin address may result in data loss if the store fails immediately after the call. If you do not want to lose data, you need to trigger compaction with `shiftBeginAddress` set to false, then complete a checkpoint (either fold-over or snapshot is fine), and then shift the begin address. Finally, you can take another checkpoint to save the new begin address. This is shown below:
```cs
long compactUntil = store.Log.BeginAddress + 0.2 * (store.Log.SafeReadOnlyAddress - store.Log.BeginAddress);