Initial state for moving over to github.com

This commit is contained in:
Ryan Newton 2018-12-05 16:15:09 -08:00
Commit 397c22bb83
460 changed files with 42,606 additions and 0 deletions

10
._run_ci.sh Normal file
View file

@ -0,0 +1,10 @@
#!/bin/bash
set -xeu
# A simple script to build and test under Linux CI.
uname -a
pwd -P
cat /etc/issue || echo ok
./build_docker_images.sh run

258
.dockerignore Normal file
View file

@ -0,0 +1,258 @@
## Ignore Visual Studio temporary files, build results, and
## files generated by popular Visual Studio add-ons.
# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates
# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs
# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
[Xx]64/
[Xx]86/
[Bb]uild/
bld/
[Oo]bj/
# Visual Studio 2015 cache/options directory
.vs/
# Uncomment if you have tasks that create the project's static files in wwwroot
#wwwroot/
# MSTest test Results
[Tt]est[Rr]esult*/
[Bb]uild[Ll]og.*
# NUNIT
*.VisualState.xml
TestResult.xml
# Build Results of an ATL Project
[Dd]ebugPS/
[Rr]eleasePS/
dlldata.c
# DNX
project.lock.json
artifacts/
*_i.c
*_p.c
*_i.h
*.ilk
*.meta
*.obj
*.pch
*.pgc
*.pgd
*.rsp
*.sbr
*.tlb
*.tli
*.tlh
*.tmp
*.tmp_proj
*.log
*.vspscc
*.vssscc
.builds
*.pidb
*.svclog
*.scc
# Chutzpah Test files
_Chutzpah*
# Visual C++ cache files
ipch/
*.aps
*.ncb
*.opendb
*.opensdf
*.sdf
*.cachefile
*.VC.db
# Visual Studio profiler
*.psess
*.vsp
*.vspx
*.sap
# TFS 2012 Local Workspace
$tf/
# Guidance Automation Toolkit
*.gpState
# ReSharper is a .NET coding add-in
_ReSharper*/
*.[Rr]e[Ss]harper
*.DotSettings.user
# JustCode is a .NET coding add-in
.JustCode
# TeamCity is a build add-in
_TeamCity*
# DotCover is a Code Coverage Tool
*.dotCover
# NCrunch
_NCrunch_*
.*crunch*.local.xml
nCrunchTemp_*
# MightyMoose
*.mm.*
AutoTest.Net/
# Web workbench (sass)
.sass-cache/
# Installshield output folder
[Ee]xpress/
# DocProject is a documentation generator add-in
DocProject/buildhelp/
DocProject/Help/*.HxT
DocProject/Help/*.HxC
DocProject/Help/*.hhc
DocProject/Help/*.hhk
DocProject/Help/*.hhp
DocProject/Help/Html2
DocProject/Help/html
# Click-Once directory
# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
# TODO: Un-comment the next line if you do not want to check in
# your web deploy settings because they may include unencrypted
# passwords
#*.pubxml
*.publishproj
# NuGet Packages
*.nupkg
# The packages folder can be ignored because of Package Restore
**/packages/*
# except build/, which is used as an MSBuild target.
!**/packages/build/
# Uncomment if necessary however generally it will be regenerated when needed
#!**/packages/repositories.config
# NuGet v3's project.json files produce more ignorable files
*.nuget.props
*.nuget.targets
# Microsoft Azure Build Output
csx/
*.build.csdef
# Microsoft Azure Emulator
ecf/
rcf/
# Microsoft Azure ApplicationInsights config file
ApplicationInsights.config
# Windows Store app package directory
AppPackages/
BundleArtifacts/
# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!*.[Cc]ache/
# Others
ClientBin/
[Ss]tyle[Cc]op.*
~$*
*~
*.dbmdl
*.dbproj.schemaview
*.pfx
*.publishsettings
node_modules/
orleans.codegen.cs
# RIA/Silverlight projects
Generated_Code/
# Backup & report files from converting an old project file
# to a newer Visual Studio version. Backup files are not needed,
# because we have git ;-)
_UpgradeReport_Files/
Backup*/
UpgradeLog*.XML
UpgradeLog*.htm
# SQL Server files
*.mdf
*.ldf
# Business Intelligence projects
*.rdl.data
*.bim.layout
*.bim_*.settings
# Microsoft Fakes
FakesAssemblies/
# GhostDoc plugin setting file
*.GhostDoc.xml
# Node.js Tools for Visual Studio
.ntvs_analysis.dat
# Visual Studio 6 build log
*.plg
# Visual Studio 6 workspace options file
*.opt
# Visual Studio LightSwitch build output
**/*.HTMLClient/GeneratedArtifacts
**/*.DesktopClient/GeneratedArtifacts
**/*.DesktopClient/ModelManifest.xml
**/*.Server/GeneratedArtifacts
**/*.Server/ModelManifest.xml
_Pvt_Extensions
# LightSwitch generated files
GeneratedArtifacts/
ModelManifest.xml
# Paket dependency manager
.paket/paket.exe
# FAKE - F# Make
.fake/
# Ignore InternalImmortals, because they typically build their own Docker containers:
InternalImmortals/
#Test run logs
/AmbrosiaTest/AmbrosiaTest/AmbrosiaLogs
/bin/
# Emacs temp files
.\#*
\#*\#
/FranklinTest_Local/FranklinTest_Local/cmp
.git
Dockerfile
build_docker_images.sh

67
.gitattributes vendored Normal file
View file

@ -0,0 +1,67 @@
###############################################################################
# Set default behavior to automatically normalize line endings.
###############################################################################
* text=auto
###############################################################################
# Set default behavior for command prompt diff.
#
# This is needed for earlier builds of msysgit that do not have it on by
# default for csharp files.
# Note: This is only used by the command line.
###############################################################################
#*.cs diff=csharp
*.c text
*.h text
*.sh text eol=lf
Dockerfile* text eol=lf
###############################################################################
# Set the merge driver for project and solution files
#
# Merging from the command prompt will add diff markers to the files if there
# are conflicts (Merging from VS is not affected by the settings below, in VS
# the diff markers are never inserted). Diff markers may cause the following
# file extensions to fail to load in VS. An alternative would be to treat
# these files as binary and thus will always conflict and require user
# intervention with every merge. To do so, just uncomment the entries below
###############################################################################
#*.sln merge=binary
#*.csproj merge=binary
#*.vbproj merge=binary
#*.vcxproj merge=binary
#*.vcproj merge=binary
#*.dbproj merge=binary
#*.fsproj merge=binary
#*.lsproj merge=binary
#*.wixproj merge=binary
#*.modelproj merge=binary
#*.sqlproj merge=binary
#*.wwaproj merge=binary
###############################################################################
# behavior for image files
#
# image files are treated as binary by default.
###############################################################################
#*.jpg binary
#*.png binary
#*.gif binary
###############################################################################
# diff behavior for common document formats
#
# Convert binary document formats to text before diffing them. This feature
# is only available from the command line. Turn it on by uncommenting the
# entries below.
###############################################################################
#*.doc diff=astextplain
#*.DOC diff=astextplain
#*.docx diff=astextplain
#*.DOCX diff=astextplain
#*.dot diff=astextplain
#*.DOT diff=astextplain
#*.pdf diff=astextplain
#*.PDF diff=astextplain
#*.rtf diff=astextplain
#*.RTF diff=astextplain

262
.gitignore vendored Normal file
View file

@ -0,0 +1,262 @@
## Ignore Visual Studio temporary files, build results, and
## files generated by popular Visual Studio add-ons.
# User-specific files
*.suo
*.user
*.userosscache
*.sln.docstates
# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs
# Build results
[Dd]ebug/
[Dd]ebugPublic/
[Rr]elease/
[Rr]eleases/
[Xx]64/
[Xx]86/
[Bb]uild/
bld/
[Oo]bj/
# Visual Studio 2015 cache/options directory
.vs/
# Uncomment if you have tasks that create the project's static files in wwwroot
#wwwroot/
# MSTest test Results
[Tt]est[Rr]esult*/
[Bb]uild[Ll]og.*
# NUNIT
*.VisualState.xml
TestResult.xml
# Build Results of an ATL Project
[Dd]ebugPS/
[Rr]eleasePS/
dlldata.c
# DNX
project.lock.json
artifacts/
*_i.c
*_p.c
*_i.h
*.ilk
*.meta
*.obj
*.pch
*.pgc
*.pgd
*.rsp
*.sbr
*.tlb
*.tli
*.tlh
*.tmp
*.tmp_proj
*.log
*.vspscc
*.vssscc
.builds
*.pidb
*.svclog
*.scc
# Chutzpah Test files
_Chutzpah*
# Visual C++ cache files
ipch/
*.aps
*.ncb
*.opendb
*.opensdf
*.sdf
*.cachefile
*.VC.db
# Visual Studio profiler
*.psess
*.vsp
*.vspx
*.sap
# TFS 2012 Local Workspace
$tf/
# Guidance Automation Toolkit
*.gpState
# ReSharper is a .NET coding add-in
_ReSharper*/
*.[Rr]e[Ss]harper
*.DotSettings.user
# JustCode is a .NET coding add-in
.JustCode
# TeamCity is a build add-in
_TeamCity*
# DotCover is a Code Coverage Tool
*.dotCover
# NCrunch
_NCrunch_*
.*crunch*.local.xml
nCrunchTemp_*
# MightyMoose
*.mm.*
AutoTest.Net/
# Web workbench (sass)
.sass-cache/
# Installshield output folder
[Ee]xpress/
# DocProject is a documentation generator add-in
DocProject/buildhelp/
DocProject/Help/*.HxT
DocProject/Help/*.HxC
DocProject/Help/*.hhc
DocProject/Help/*.hhk
DocProject/Help/*.hhp
DocProject/Help/Html2
DocProject/Help/html
# Click-Once directory
# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
# TODO: Un-comment the next line if you do not want to check in
# your web deploy settings because they may include unencrypted
# passwords
#*.pubxml
*.publishproj
# NuGet Packages
*.nupkg
# The packages folder can be ignored because of Package Restore
**/packages/*
# except build/, which is used as an MSBuild target.
!**/packages/build/
# Uncomment if necessary however generally it will be regenerated when needed
#!**/packages/repositories.config
# NuGet v3's project.json files produce more ignorable files
*.nuget.props
*.nuget.targets
# Microsoft Azure Build Output
csx/
*.build.csdef
# Microsoft Azure Emulator
ecf/
rcf/
# Microsoft Azure ApplicationInsights config file
ApplicationInsights.config
# Windows Store app package directory
AppPackages/
BundleArtifacts/
# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!*.[Cc]ache/
# Others
ClientBin/
[Ss]tyle[Cc]op.*
~$*
*~
*.dbmdl
*.dbproj.schemaview
*.pfx
*.publishsettings
node_modules/
orleans.codegen.cs
# RIA/Silverlight projects
Generated_Code/
# Backup & report files from converting an old project file
# to a newer Visual Studio version. Backup files are not needed,
# because we have git ;-)
_UpgradeReport_Files/
Backup*/
UpgradeLog*.XML
UpgradeLog*.htm
# SQL Server files
*.mdf
*.ldf
# Business Intelligence projects
*.rdl.data
*.bim.layout
*.bim_*.settings
# Microsoft Fakes
FakesAssemblies/
# GhostDoc plugin setting file
*.GhostDoc.xml
# Node.js Tools for Visual Studio
.ntvs_analysis.dat
# Visual Studio 6 build log
*.plg
# Visual Studio 6 workspace options file
*.opt
# Visual Studio LightSwitch build output
**/*.HTMLClient/GeneratedArtifacts
**/*.DesktopClient/GeneratedArtifacts
**/*.DesktopClient/ModelManifest.xml
**/*.Server/GeneratedArtifacts
**/*.Server/ModelManifest.xml
_Pvt_Extensions
# LightSwitch generated files
GeneratedArtifacts/
ModelManifest.xml
# Paket dependency manager
.paket/paket.exe
# FAKE - F# Make
.fake/
# CodeGen history files
**/GeneratedSourceFiles/*/*/
# Except latest generated code
!**/GeneratedSourceFiles/*/latest/
#Test run logs
/AmbrosiaTest/AmbrosiaTest/AmbrosiaLogs
/bin/
# Emacs temp files
.\#*
\#*\#
/FranklinTest_Local/FranklinTest_Local/cmp
/LocalFranklinRuntime/Franklin_TestApp/K8s
/Ambrosia/NuGet.Config
# Local launch settings
**/launchSettings.json

1
AKS-scripts/.gitignore vendored Normal file
View file

@ -0,0 +1 @@
generated*.yml

81
AKS-scripts/Build-AKS.sh Normal file
View file

@ -0,0 +1,81 @@
#!/bin/bash
set -euo pipefail
#############################################################################
## This script builds a service container. Note that a service container MAY
## have different entrypoints and underpin multiple service instances with
## different roles.
##
## Args:
## <service source path> -- Path to service src dir containing Dockerfile
##
#############################################################################
if [ $# -ne 1 ]
then
echo "usage: Build-KS.sh <service source path>"
exit 1
fi
SERVICE_SRC_PATH=$1
source `dirname $0`/Defs/Common-Defs.sh
echo "-----------Begin Build-and-Push-----------"
############################################
## Docker container registry login
############################################
## TODO: Docker container registry details could be stored in ENV
## variables and managed as secrets.
#
# For now, paste your Docker Credentials in this file:
if [ ! -e `dirname $0`/Defs/AmbrosiaAKSConf.sh ]; then
echo "ERROR: the file "`dirname $0`"/Defs/AmbrosiaAKSConf.sh does not exist!"
echo "Please follow the instructions in AmbrosiaAKSConf.sh.template to populate this file."
exit 1;
fi
source `dirname $0`/Defs/AmbrosiaAKSConf.sh
source `dirname $0`/Defs/Set-Docker-Vars.sh
echo "Connecting to the Private Docker Registry...."
set -x
$DOCKER login $DockerPrivateRegistry_URL -u $DockerPrivateRegistry_Login -p $DockerPrivateRegistry_Pwd
set +x
############################################
## Build the service container
############################################
## This script expects the Dockerfile for the service to exist in its directory.
# But first, we depend on the ambrosia-dev base image:
# Go and build the base images only if they are not found:
if [ "$($DOCKER images -q ambrosia-dev)" == "" ]; then
echo "Could not find 'ambrosia-dev' image, attempting to build it."
# Top of Ambrosia source working dir:
set -x
pushd `dirname $0`/../
./build_docker_images.sh
popd
set +x
fi
echo "Building the service Docker container..."
pushd $SERVICE_SRC_PATH
set -x
$DOCKER build -t $DockerPrivateRegistry_URL/$AMBROSIA_CONTAINER_NAME .
$DOCKER tag $DockerPrivateRegistry_URL/$AMBROSIA_CONTAINER_NAME $AMBROSIA_CONTAINER_NAME
set +x
popd
############################################
## Push the container to the Docker Registry
############################################
echo "Pushing the service Docker container..."
set -x
time $DOCKER push $DockerPrivateRegistry_URL/$AMBROSIA_CONTAINER_NAME
$AZ acr repository list --name $ACR_NAME
$AZ acr repository show-tags --name $ACR_NAME --repository $AMBROSIA_CONTAINER_NAME
set +x
echo "-----------Build-and-Push Finished-----------"
echo

92
AKS-scripts/Clean-AKS.sh Normal file
View file

@ -0,0 +1,92 @@
#!/bin/bash
# set -euo pipefail
# Optional helper script: delete resources from Azure/AKS to clean up
# when finished or to start fresh.
ECHO_CORE_DEFS=1
source `dirname $0`/Defs/Common-Defs.sh
function usage() {
echo
echo "Usage: $0 <all|most|auth>"
echo " * all: delete the entire resource group ($AZURE_RESOURCE_GROUP)"
echo " * most: delete all resources created inside the group EXCEPT the kubernetes cluster,"
echo " furthermore, delete the pods inside it."
echo " * auth: delete secrets and authentication info, forcing them to be recreated."
echo
exit 1
}
function clean_auth()
{
# Private Azure Container Registry access:
# ----------------------------------------
if EXISTING=$($AZ ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv) ;
then
echo "Deleting existing service principal: $EXISTING"
set -x
# To avoid problems as described here:
# https://github.com/Azure/azure-powershell/issues/4919
# namely this cryptic error:
# "Another object with the same value for property identifierUris already exists."
# we must delete the app as well:
APP_ID=$($AZ ad app list --identifier-uri http://$SERVICE_PRINCIPAL_NAME --query "[0].appId" --output tsv)
$AZ ad app delete --id $APP_ID
# $AZ ad sp delete --id http://$SERVICE_PRINCIPAL_NAME
set +x
else
echo "Service principal, $SERVICE_PRINCIPAL_NAME, not found. Not deleting."
fi
if $KUBE get secret $ACR_SECRET_NAME >- ;
then
echo "Secret $ACR_SECRET_NAME exists, deleting:"
set -x
$KUBE delete secret $ACR_SECRET_NAME
set +x
fi
# File share access secret (logs)
# ----------------------------------------
if $KUBE get secret $FILESHARE_SECRET_NAME 2>-;
then
echo "Secret already exists, deleting:"
set -x
$KUBE delete secret $FILESHARE_SECRET_NAME
set +x
fi
}
function clean_most()
{
echo "Deleting main Azure resources (except AKS cluster):"
set -x
# $AZ storage share delete --name $FILESHARE_NAME
$AZ storage account delete --name $AZURE_STORAGE_NAME -g $AZURE_RESOURCE_GROUP
$AZ acr delete --name "$ACR_NAME" --resource-group "$AZURE_RESOURCE_GROUP"
set +x
echo "Do not destroy the (slow to create) Kube cluster, but clear its pods:"
set -x
$AZ aks get-credentials --resource-group=$AZURE_RESOURCE_GROUP --name=$AZURE_KUBERNETES_CLUSTER
$KUBE config current-context
$KUBE delete pods,deployments --all
set +x
}
if [ $# -ne 1 ]; then usage; fi
MODE=$1
case $MODE in
all)
$AZ group delete --name "$AZURE_RESOURCE_GROUP" ;;
most)
clean_auth; clean_most ;;
auth)
clean_auth ;;
*)
echo "ERROR: unrecognized mode argument."
usage ;;
esac
# Blast away the secrets without deleting everything in the resource group.

View file

@ -0,0 +1,37 @@
#!/bin/bash
set -euo pipefail
# ASSUMES: that the storage account has already been provisioned.
# ASSUMES: that the caller sets AZURE_STORAGE_KEY
## Please see https://docs.microsoft.com/en-us/azure/aks/azure-files-volume for more documentation
# if [ $# -eq 0 ]
# then
# echo "usage: Create-AKS-Secrete-SMBFileShare.sh"
# exit 1
# fi
echo "-----------Begin Create-AKS-SMBFileShare-Secret-----------"
source `dirname $0`/Defs/Common-Defs.sh
if [[ ! -v AZURE_STORAGE_KEY ]]; then
echo "$0: AZURE_STORAGE_KEY not set, retrieving:"
source `dirname $0`/Defs/Set-Storage-Vars.sh
fi
if $KUBE get secret $FILESHARE_SECRET_NAME 2>-;
then
echo "File share secret exists, ASSUMING it's up-to-date."
echo " (Manually clean with 'Clean-AKS.sh auth'.)"
else
echo "Creating secret for Kubernetes file share access:"
set -x
$KUBE create secret generic $FILESHARE_SECRET_NAME \
--from-literal=azurestorageaccountname=$AZURE_STORAGE_NAME \
--from-literal=azurestorageaccountkey=$AZURE_STORAGE_KEY
set +x
fi
echo "-----------Create-AKS-SMBFileShare-Secret Finished-----------"
echo

View file

@ -0,0 +1,50 @@
#!/bin/bash
set -euo pipefail
################################################################################
# This script can be used to create a service principal which can then be used
# to create an image pull secret.
#
# Note: This is an alternative to granting AKS direct access to ACR using the
# "Grant-AKS-access-ACR.sh" script.
################################################################################
#Please see https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks for more documentation
echo "-----------Begin Create-AKS-ServicePrincipal-Secret-----------"
source `dirname $0`/Defs/Common-Defs.sh
# Create a 'Reader' role assignment with a scope of the ACR resource.
# Idempotence: retrieve the password if it already exists, otherwise create:
if ! $AZ ad sp show --id http://$SERVICE_PRINCIPAL_NAME >- ;
then
echo "Creating 'Reader' role and password."
ACR_REGISTRY_ID=$($AZ acr show --name $ACR_NAME --query id --output tsv)
set -x
$AZ ad sp create-for-rbac --name http://$SERVICE_PRINCIPAL_NAME --role Reader --scopes $ACR_REGISTRY_ID
set +x
else
echo "Service principal exists, ASSUMING it's up-to-date (manually clean w Clean-AKS.sh)"
fi
source `dirname $0`/Defs/Set-Docker-Vars.sh
# Get the service principal client id.
set -x
CLIENT_ID=$($AZ ad sp show --id http://$SERVICE_PRINCIPAL_NAME --query appId --output tsv)
set +x
echo "Service principal ID: $CLIENT_ID"
echo "Service principal password: $DockerPrivateRegistry_Pwd"
# Create Kubernetes secret
if $KUBE get secret $ACR_SECRET_NAME >- ;
then
echo "Secret $ACR_SECRET_NAME exists, ASSUMING up-to-date (manually clean w/ Clean-AKS.sh)."
else
echo "Creating secret for Kube to access the private container registry:"
set -x
time $KUBE create secret docker-registry $ACR_SECRET_NAME --docker-server $DockerPrivateRegistry_URL --docker-username $CLIENT_ID --docker-password $DockerPrivateRegistry_Pwd --docker-email $DOCKER_EMAIL
set +x
fi
echo "-----------Create-AKS-ServicePrincipal-Secret Finished-----------"
echo

1
AKS-scripts/Defs/.gitignore vendored Normal file
View file

@ -0,0 +1 @@
AmbrosiaAKSConf.sh

View file

@ -0,0 +1,45 @@
# Rename this file to AmbrosiaAKSConf.sh (in the same directory) but
# DO NOT check it in. Modify it to include your configuration info
# and secrets.
#
# This file can be used to deploy multiple application instances
# (pods) into a single Kubernetes cluster.
# ------------------------------------------------------------------------------
# The values in this file are just placeholders, but are meant to be
# somewhat suggestive ones. EVERY value is mandatory.
#
# SUGGESTION: you can get away with doing only TWO things here:
# (1) fill in AZURE_SUBSCRIPTION with your subscription ID.
# (2) find/replace "xyz" with a unique string, such as your github username
# ------------------------------------------------------------------------------
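# For example, a minimal way to do step (2) on a Linux dev machine (assuming
# your chosen unique string is "alice"; any short, globally-unique handle works):
#   sed -i 's/xyz/alice/g' Defs/AmbrosiaAKSConf.sh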
# The descriptive overall name of your AMBROSIA service (which may contain
# multiple instances, each of which contain multiple replicas).
AMBROSIA_SERVICE_NAME=perftest
# Find this in the Azure portal or with `az account list --output table`
AZURE_SUBSCRIPTION=********-****-****-****-************
# Where to put resources we create:
AZURE_LOCATION=eastus
DOCKER_EMAIL=xyz@xyz.com
# These must be GLOBALLY unique.
# They are not scoped per-subscription or per-resource group.
# ------------------------------------------------------------
AZURE_RESOURCE_GROUP=xyz-group
AZURE_KUBERNETES_CLUSTER=xyz-cluster
ACR_NAME=xyzcontainerreg
AZURE_STORAGE_NAME=xyzstorage
SERVICE_PRINCIPAL_NAME=xyz-aks-acr-service-principal
# ------------------------------------------------------------
# These variables don't need to be changed.
# You can use these defaults:
# ------------------------------------------------------------
ACR_SECRET_NAME=acr-auth-secret
FILESHARE_NAME=ambrosia-logs-smbshare
FILESHARE_SECRET_NAME=azure-fileshare-secret
# ------------------------------------------------------------
# See Defs/Set-*.sh for variables that are retrieved automatically after login.

View file

@ -0,0 +1,62 @@
# Sourced into other scripts
# Assumes a working directory of ../ relative to this script.
if [ `uname` == Linux ]; then
DOCKER=docker
AZ=az
KUBE=kubectl
# Also: KUBE=microk8s.kubectl
else
DOCKER=docker
AZ=az.cmd
KUBE=kubectl.exe
fi
if [ ! -e Defs/AmbrosiaAKSConf.sh ]; then
echo "ERROR: Defs/AmbrosiaAKSConf.sh does not exist."
echo " Use Defs/AmbrosiaAKSConf.sh.template and populate this file with your info."
exit 1
fi
if [[ -v ECHO_CORE_DEFS ]]; then
set -x
source Defs/AmbrosiaAKSConf.sh
set +x
else
source Defs/AmbrosiaAKSConf.sh
fi
# ^ must set AMBROSIA_SERVICE_NAME, used below.
# Let our subprocesses use these vars:
export AMBROSIA_SERVICE_NAME
export AZURE_SUBSCRIPTION
export AZURE_RESOURCE_GROUP
export AZURE_KUBERNETES_CLUSTER
export ACR_NAME
export AZURE_STORAGE_NAME
export FILESHARE_NAME
export DOCKER_EMAIL
export SERVICE_PRINCIPAL_NAME
export ACR_SECRET_NAME
export FILESHARE_SECRET_NAME
export AMBROSIA_CONTAINER_NAME
# These could be configurable, but we're setting boring defaults here instead:
PREFIX="ambrosia-"
POSTFIX="-container"
AMBROSIA_CONTAINER_NAME="${PREFIX}${AMBROSIA_SERVICE_NAME}${POSTFIX}"
export AMBROSIA_LOGDIR="/ambrosia_logs"
# Configure the ports used for localhost app<->coordinator communication,
# as well as coordinator<->coordinator networking.
# ------------------------------
# export LOCALPORT1=1000
# export LOCALPORT2=1001
# export AMBROSIA_IMMORTALCOORDINATOR_PORT=1500
export LOCALPORT1=50001
export LOCALPORT2=50002
export AMBROSIA_IMMORTALCOORDINATOR_PORT=50500

View file

@ -0,0 +1,16 @@
# Sourced into parent scripts or shell:
# Sets auth-related variables to access a docker registry.
if [[ ! -v AZ ]];
then
echo "Error, Set-Storage-Vars.sh: source Defs/Common-Defs.sh before this file."
else
DockerPrivateRegistry_Login=$ACR_NAME
DockerPrivateRegistry_URL=$($AZ acr show --name $ACR_NAME --query loginServer --output tsv)
# TSV output here avoids double quotes getting into the variable:
DockerPrivateRegistry_Pwd=$($AZ acr credential show --name $ACR_NAME --query "passwords[0].value" --output tsv)
export DockerPrivateRegistry_Login
export DockerPrivateRegistry_URL
export DockerPrivateRegistry_Pwd
fi

View file

@ -0,0 +1,20 @@
# Sourced into parent scripts or shell:
# Sets auth-related variables to access Azure Storage.
if [[ ! -v AZ ]];
then
echo "Error, Set-Storage-Vars.sh: source Defs/Common-Defs.sh before this file."
else
AZURE_STORAGE_CONNECTION_STRING=$($AZ storage account show-connection-string --name $AZURE_STORAGE_NAME --resource-group $AZURE_RESOURCE_GROUP --query connectionString --output tsv)
AZURE_STORAGE_KEY=$($AZ storage account keys list --account-name $AZURE_STORAGE_NAME --resource-group $AZURE_RESOURCE_GROUP --query "[0].value" --output tsv)
export AZURE_STORAGE_CONNECTION_STRING
export AZURE_STORAGE_KEY
# <FIXME>: Ambrosia should be corrected to not expect this!!!
AZURE_STORAGE_CONN_STRING="$AZURE_STORAGE_CONNECTION_STRING"
export AZURE_STORAGE_CONN_STRING
# </FIXME>: See VSTS bug 127
echo "AZURE_STORAGE_KEY=$AZURE_STORAGE_KEY"
echo "AZURE_STORAGE_CONNECTION_STRING=$AZURE_STORAGE_CONNECTION_STRING"
fi

117
AKS-scripts/Deploy-AKS.sh Normal file
View file

@ -0,0 +1,117 @@
#!/bin/bash
set -euo pipefail
####################################################################
## This script deploys a service instance to Kubernetes (AKS)
##
## Env vars: AZURE_STORAGE_CONNECTION_STRING must be set.
## Args:
## <instance name> -- Instance Name (serves as the ImmortalCoordinator instance name)
## <full command string> -- shell command to start the container
##
####################################################################
if [ $# -eq 0 ]
then
echo "usage: Deploy-AKS.sh <instance name> [shell command for container start]*"
exit 1
fi
AMBROSIA_INSTANCE_NAME=$1
shift
echo "-----------Begin Deploy-AKS ($AMBROSIA_INSTANCE_NAME)-----------"
source `dirname $0`/Defs/Common-Defs.sh
if [[ ! -v AZURE_STORAGE_CONNECTION_STRING ]]; then
echo "$0: AZURE_STORAGE_CONNECTION_STRING not set, retrieving:"
source `dirname $0`/Defs/Set-Storage-Vars.sh
fi
##########################################################################################
## Identifiers for the service and the docker container for this service
##########################################################################################
# FIXME: remove discrepancies:
YMLEXTENSION=".yml"
# Could append some randomness here to make this more unique:
UNIQUE_ID="generated-${AMBROSIA_INSTANCE_NAME}"
SERVICE_YML_FILE="${UNIQUE_ID}${YMLEXTENSION}" # Need yml file to have unique deployment using CRA name
##########################################################################################
## Registering the instance by calling LocalAmbrosiaRuntime
##########################################################################################
echo "Registering Instance..."
set -x
function DOCKRUN() {
# FIXME: this should work with ambrosia, instead of ambrosia-dev, but right now [2018.11.29] it is producing an error:
# Error trying to upload service. Exception: One or more errors occurred. (The type initializer for 'System.Net.Http.CurlHandler' threw an exception
$DOCKER run --rm \
--env AZURE_STORAGE_CONN_STRING="$AZURE_STORAGE_CONN_STRING" \
ambrosia-dev $*
# FIXME: bug 127 : --env AZURE_STORAGE_CONNECTION_STRING="$AZURE_STORAGE_CONNECTION_STRING" \
}
time DOCKRUN Ambrosia RegisterInstance -i $AMBROSIA_INSTANCE_NAME --rp $LOCALPORT1 --sp $LOCALPORT2 -l "$AMBROSIA_LOGDIR" --lts 1024
set +x
##########################################################################################
## Generate K8s YAML deployment file for this service from the template
##########################################################################################
source `dirname $0`/Defs/Set-Docker-Vars.sh
echo "Generating K8s Deployment YAML from Template...."
cp -f ScriptBits/lartemplate.yml $SERVICE_YML_FILE
sed -i "s/#CONTAINTERNAME#/${AMBROSIA_CONTAINER_NAME}/g" $SERVICE_YML_FILE
sed -i "s/#AMBROSIAINSTANCE#/${AMBROSIA_INSTANCE_NAME}/g" $SERVICE_YML_FILE
sed -i "s/#SERVICEEXEFILE#/${AMBROSIA_SERVICE_NAME}/g" $SERVICE_YML_FILE
sed -i "s/#DEPLOYMENTNAME#/${UNIQUE_ID}/g" $SERVICE_YML_FILE
sed -i "s/#REGISTRYURL#/${DockerPrivateRegistry_URL}/g" $SERVICE_YML_FILE
sed -i "s/#ACRSECRETNAME#/${ACR_SECRET_NAME}/g" $SERVICE_YML_FILE
sed -i "s/#FILESHARESECRETNAME#/${FILESHARE_SECRET_NAME}/g" $SERVICE_YML_FILE
sed -i "s/#FILESHARENAME#/${FILESHARE_NAME}/g" $SERVICE_YML_FILE
sed -i "s/#COORDPORT#/${AMBROSIA_IMMORTALCOORDINATOR_PORT}/g" $SERVICE_YML_FILE
sed -i "s/#LOCALPORT1#/${LOCALPORT1}/g" $SERVICE_YML_FILE
sed -i "s/#LOCALPORT2#/${LOCALPORT2}/g" $SERVICE_YML_FILE
# Use an alternate delimiter because the string contains forward slash:
sed -i "s|#AZURECONNSTRING#|${AZURE_STORAGE_CONNECTION_STRING}|" $SERVICE_YML_FILE
sed -i "s|#FULLCOMMANDSTRING#|$*|" $SERVICE_YML_FILE
sed -i "s|#LOGDIR#|${AMBROSIA_LOGDIR}|" $SERVICE_YML_FILE
##########################################################################################
## Deploy the template file to ACS/AKS
##########################################################################################
echo "Deploying to K8s..."
set -x
# # <IMPERATIVE METHOD>
# # This step is NOT idempotent, and thus we ask for the users intervention:
# if $KUBE get -f $SERVICE_YML_FILE ;
# then
# set +x
# echo
# echo "Kubernetes cluster already deployed. Bring it down with:"
# echo " $KUBE delete -f $SERVICE_YML_FILE"
# echo "And then try again."
# exit 1;
# fi
# $KUBE create -f $SERVICE_YML_FILE
# # </IMPERATIVE METHOD>
# <DECLARATIVE METHOD>
$KUBE apply -f $SERVICE_YML_FILE
# </DECLARATIVE METHOD>
$KUBE get pods
set +x
echo "-----------Finished Deploy-AKS ($AMBROSIA_INSTANCE_NAME)-----------"
echo

View file

@ -0,0 +1,46 @@
#!/bin/bash
set -euo pipefail
################################################################################
## This script might need to be executed to give the AKS cluster access to
## ACR. This access is needed to pull images from ACR.
##
## Please see
## https://docs.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks
## for more documentation
################################################################################
if [ $# -ne 0 ]
then
echo "usage: Grant-AKS-acess-ACR.sh"
echo "Expects no arguments. Uses Common-Defs.sh and AmbrosiaAKSConf.sh"
exit 1
fi
echo "-----------Begin Grant-AKS-access-ACR-----------"
source `dirname $0`/Defs/Common-Defs.sh
## Assumes that both AKS and ACR are in the same resource group
set -x
# Get the id of the service principal configured for AKS
CLIENT_ID=$($AZ aks show --resource-group $AZURE_RESOURCE_GROUP --name $AZURE_KUBERNETES_CLUSTER --query "servicePrincipalProfile.clientId" --output tsv)
# Get the ACR registry resource id
ACR_ID=$($AZ acr show --name $ACR_NAME --resource-group $AZURE_RESOURCE_GROUP --query "id" --output tsv)
# Set it to empty string if there's an error:
EXISTING_ROLES=""
EXISTING_ROLES=$($AZ role assignment list --role Reader --scope $ACR_ID --query '[0].id')
set +x
echo
# Create role assignment
if [ "$EXISTING_ROLES" == "" ]; then
echo "Creating new role assignment."
set -x
$AZ role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID
set +x
else echo "Role assignments exists, ASSUMING it is correct."
fi
echo "-----------End Grant-AKS-access-ACR-----------"
echo

View file

@ -0,0 +1,86 @@
#!/bin/bash
set -euo pipefail
cd `dirname $0`
source Defs/Common-Defs.sh
echo "Logging into az CLI *if needed*..."
set -x
$AZ account list --output table || $AZ login
# Mutate the (stateful) az CLI to point to this subscription:
$AZ account set --subscription $AZURE_SUBSCRIPTION
# TODO: ^ can be eliminated by making sure all subsequent commands pass --subscription
set +x
echo
echo "--------Provision the resource group if it does not exist--------"
if [ `$AZ group exists --name $AZURE_RESOURCE_GROUP` == "false" ]; then
set -x
time $AZ group create --name $AZURE_RESOURCE_GROUP -l $AZURE_LOCATION ;
set +x
else
echo "Resource group already exists, not creating. (az group exists)"
fi
echo
echo "--------Provision the storage account if it does not exist--------"
if [ "" == "$($AZ storage account list --output table --subscription $AZURE_SUBSCRIPTION -g $AZURE_RESOURCE_GROUP)" ];
then
echo
set -x
time $AZ storage account create --name $AZURE_STORAGE_NAME -g $AZURE_RESOURCE_GROUP -l $AZURE_LOCATION
set +x
else
echo "Storage account already exists, not creating. (az storage account list)"
fi
echo
echo "--------Now we're ready to retrieve the connection string--------"
source `dirname $0`/Defs/Set-Storage-Vars.sh
echo
echo "--------Provision the file share if it does not exist--------"
echo "This step is idempotent:"
set -x
# TODO: May want to delete it to make sure the logs start fresh:
# $AZ storage share delete --name $FILESHARE_NAME --account-name $AZURE_STORAGE_NAME --account-key="$AZURE_STORAGE_KEY"
$AZ storage share create --name $FILESHARE_NAME --quota "80" --account-name $AZURE_STORAGE_NAME --account-key="$AZURE_STORAGE_KEY"
set +x
echo
echo "---------Provision the Container registry if needed---------"
if [ "" == "$($AZ acr list --output table --subscription $AZURE_SUBSCRIPTION -g $AZURE_RESOURCE_GROUP)" ];
then
set -x
# TODO: remove need for admin access here:
time $AZ acr create --name "$ACR_NAME" --resource-group "$AZURE_RESOURCE_GROUP" --sku Standard --admin-enabled true -l $AZURE_LOCATION
set +x
else
echo "Container registry already exists, not creating. (az acr list)"
fi
echo "Now log into the Azure Container Registry:"
set -x
$AZ acr login --name "$ACR_NAME"
set +x
echo
echo "--------Provision the Kubernetes Cluster if it's not already there--------"
if ! $AZ aks get-credentials --resource-group=$AZURE_RESOURCE_GROUP --name=$AZURE_KUBERNETES_CLUSTER 2>- ;
then
echo "Kubernetes cluster not found, creating."
set -x
time $AZ aks create --resource-group $AZURE_RESOURCE_GROUP --name=$AZURE_KUBERNETES_CLUSTER --node-count 2 --generate-ssh-keys -l $AZURE_LOCATION
$AZ aks get-credentials --resource-group=$AZURE_RESOURCE_GROUP --name=$AZURE_KUBERNETES_CLUSTER
set +x
else
echo "Kubernetes cluster already exists, not creating. (az aks get-credentials)"
fi
set -x
$KUBE config use-context $AZURE_KUBERNETES_CLUSTER
$KUBE config current-context
$KUBE get nodes
set +x
echo "-----------AKS Provisioning Finished-----------"
echo

111
AKS-scripts/README.md Normal file
View file

@ -0,0 +1,111 @@
AMBROSIA on Kubernetes, README & Quick Start
============================================
This directory contains scripts for launching an AMBROSIA service on a
Kubernetes cluster, and in particular, an Azure Kubernetes (AKS) cluster.
These scripts are launched from a *development machine*, which is
currently assumed to be running Linux; in the future the same Bash
scripts may also run on Windows or Mac. Running the scripts builds
Docker containers locally before pushing them to the cloud, and thus
assumes some prerequisites:
* Azure CLI 2.0
* Kubernetes CLI (kubectl)
* Docker command line tools
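To sanity-check that these tools are installed (a rough check; exact
versions may vary):
az --version
kubectl version --client
docker --version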
The main entrypoint is "run-end-to-end-test.sh", which is designed to
be *modified* to suit your application. It is initially configured
to build and deploy the InternalImmortals/PerformanceTestInterruptible
application, which consists of two pods that communicate with
each other to test the performance of the RPC channel.
The other scripts in this directory automate various aspects of
deploying on AKS, including the authentication steps. They are designed
to be used with a fresh Azure resource group.
Step 1: configure your deployment
---------------------------------
Use the provided template:
cp Defs/AmbrosiaAKSConf.sh.template Defs/AmbrosiaAKSConf.sh
$EDITOR Defs/AmbrosiaAKSConf.sh
Fill in your Azure subscription identifier and select the name of a
new resource group which will be created. (It is best to isolate this
from other Azure resources you may have running.)
Step 2: provision and run
-------------------------
./run-end-to-end-test.sh
That's it! The first time this script runs it will take a long time
to provision the storage account, container registry, file share, and
Kubernetes cluster. Running it again is safe, and it will run faster
once these things have already been created.
At the end of all that `kubectl get pods` should show two running
pods. Note that while the steps taken by `./run-end-to-end-test.sh`
should be idempotent, it does *not* provide safe "incremental builds"
when things change. Therefore you should take it upon yourself to
clean up when modifying the Azure/AKS-related configuration (e.g., the
contents of AmbrosiaAKSConf.sh):
./Clean-AKS.sh <all|most|auth>
In contrast, changes to the *application* logic -- confined to the
application Dockerfile and resulting Docker container -- do not
require cleaning. The application can be redeployed on the previously
provisioned Azure resources simply by re-executing the run-end-to-end-test script.
Note that if you are RERUNNING PerformanceTestInterruptible or another
application, you typically want to *delete* the logs that are
stored in the Azure Files SMB share (mounted at `/ambrosia_logs/`
by default). You can do this manually, or `Clean-AKS.sh most` is
sufficient to do it.
Step 3: viewing the output
--------------------------
Use the name shown in `kubectl get pods` to print the logs of the client container:
kubectl logs -f generated-perftestclient-*******
Eventually this will show lines of output that contain performance measurements:
*X* 65536 0.0103833476182065
*X* 32768 0.00985864082367517
These show throughput in GiB/s for a given message size. When you're
done, you can use `Clean-AKS.sh all` to delete the entire resource
group (or do it yourself in the Azure web portal).
Step 4: testing virtual resiliency, aka IMMORTALITY
---------------------------------------------------
Ambrosia, being the nectar of the gods, confers immortality on the
processes running on it. What this means is that if you kill a
container/pod, it will be able to restart and use durable storage to
recover exactly the state it was in before.
In order to demonstrate this with the sample application, we will
emulate a crashed machine by killing the client application. Start
the application; then, while it is running, issue these commands to
get a shell inside the container and kill the client:
kubectl exec -it generated-perftestclient-******* bash
kill $(pidof Job)
If you look at `kubectl get pods --watch`, you will see Kubernetes
attempting to automatically restart the container, which will itself
go down as soon as the main executable (Job) is killed.
(Note, perftestclient may actually take several tries to start back
up, because the quick attempts to restart in the same pod hit a
`System.Net.Sockets.SocketException: Address already in use` error.
A real failure that brought down a machine, and thus one or more
pods, would not have this problem.)

View file

@ -0,0 +1,55 @@
kind : Deployment
apiVersion : extensions/v1beta1
metadata :
  name : #DEPLOYMENTNAME#
  namespace : default
  labels :
    app : #DEPLOYMENTNAME#
spec :
  replicas : 1
  template :
    metadata :
      name : #DEPLOYMENTNAME#
      labels:
        app: #DEPLOYMENTNAME#
    spec:
      dnsPolicy: Default
      imagePullSecrets :
      - name: #ACRSECRETNAME#
      containers:
      - name : #DEPLOYMENTNAME#
        image : #REGISTRYURL#/#CONTAINTERNAME#
        ports:
        - containerPort: #COORDPORT#
          name: craport
        - containerPort: #LOCALPORT1#
          name: sendport
        - containerPort: #LOCALPORT2#
          name: rcvport
        - containerPort: 4022
          name: dbgport
        env:
        - name: AMBROSIA_IMMORTALCOORDINATOR_PORT
          value: "#COORDPORT#"
        - name: AMBROSIA_INSTANCE_NAME
          value: #AMBROSIAINSTANCE#
        - name: AZURE_STORAGE_CONNECTION_STRING
          value: "#AZURECONNSTRING#"
        # FIXME: see VSTS bug 127:
        - name: AZURE_STORAGE_CONN_STRING
          value: "#AZURECONNSTRING#"
        - name: AMBROSIA_LOGDIR
          value: "#LOGDIR#"
        volumeMounts:
        - mountPath: "#LOGDIR#"
          name: ambrosia-logs
        command: ["bash"]
        args: ["-c", "#FULLCOMMANDSTRING#"]
      volumes:
      - name: ambrosia-logs
        azureFile:
          secretName: #FILESHARESECRETNAME#
          shareName: #FILESHARENAME#
          readOnly: false
---

View file

@ -0,0 +1,83 @@
#!/bin/bash
set -euo pipefail
################################################################################
# Script to launch a service instance (coordinator + app), often
# inside a container.
################################################################################
# Responds to ENV VARS:
# * AMBROSIA_INSTANCE_NAME (required)
#
# * AMBROSIA_IMMORTALCOORDINATOR_PORT (optional)
# - this port should be open on the container, and is used for
# coordinator-coordinator communication
#
# * AMBROSIA_SILENT_COORDINATOR (optional)
# - if set, this suppresses coordinator messages to stdout,
# but they still go to /var/log/ImmortalCoordinator.log
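#
# Example invocation (illustrative values, matching the PerformanceTest demo
# in run-end-to-end-test.sh; adjust to your own registered instance):
#   AMBROSIA_INSTANCE_NAME=perftestserver \
#   AMBROSIA_IMMORTALCOORDINATOR_PORT=1500 \
#     runAmbrosiaService.sh Server --sp 50001 --rp 50002 -j perftestclient -s perftestserver -n 1 -c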
if [[ ! -v AMBROSIA_INSTANCE_NAME ]]; then
echo "ERROR: unbound environment variable: AMBROSIA_INSTANCE_NAME"
echo "runAmbrosiaService.sh expects it to be bound to the service instance name."
echo "This is the same name that was registered with 'ambrosia RegisterInstance' "
exit 1
fi
if [[ -v AMBROSIA_IMMORTALCOORDINATOR_PORT ]];
then
echo "Using environment var AMBROSIA_IMMORTALCOORDINATOR_PORT=$AMBROSIA_IMMORTALCOORDINATOR_PORT"
else
AMBROSIA_IMMORTALCOORDINATOR_PORT=1500
echo "Using default AMBROSIA_IMMORTALCOORDINATOR_PORT of $AMBROSIA_IMMORTALCOORDINATOR_PORT"
fi
COORDLOG=/var/log/ImmortalCoordinator.log
# Arguments: all passed through to the coordinator.
# Returns: when the Coordinator is READY (it keeps running in the background).
# Side effect: sets "coord_pid" to the PID of the background coordinator pipeline.
#
# ASSUMES: ImmortalCoordinator in $PATH
#
# Side effect: uses a log file on disk in the same directory as this script.
# Side effect: runs a tail process in the background
function start_immortal_coordinator() {
echo "Launching coordingator with: ImmortalCoordinator" $*
echo " Redirecting output to: $COORDLOG"
# Bound the total amount of output used by the ImmortalCoordinator log:
ImmortalCoordinator $* 2>&1 | rotatelogs -f -t "$COORDLOG" 10M &
coord_pid=$!
while [ ! -e "$COORDLOG" ]; do
echo " -> Waiting for $COORDLOG to appear"
sleep 1
done
if [[ ! -v AMBROSIA_SILENT_COORDINATOR ]]; then
tail -F $COORDLOG | while read l; do echo " [ImmortalCoord] $l"; done &
fi
while ! grep -q "Ready" "$COORDLOG" && kill -0 $coord_pid 2>- ;
do sleep 2; done
if ! kill -0 $coord_pid 2>- ;
then echo
echo "ERROR: coordinator died while we were waiting. Final log ended with:"
tail $COORDLOG
exit 1;
fi
echo "Coordinator ready."
}
# Step 1:
start_immortal_coordinator -i $AMBROSIA_INSTANCE_NAME -p $AMBROSIA_IMMORTALCOORDINATOR_PORT
# Step 2:
echo "Launching app client process:"
set -x
$*
set +x
echo "Ambrosia: client exited, killing coordinator..."
kill $coord_pid || echo ok

View file

@ -0,0 +1,67 @@
#!/bin/bash
set -euo pipefail
if [ ! -e Defs/AmbrosiaAKSConf.sh ]; then
echo "You're not ready yet! (Defs/AmbrosiaAKSConf.sh does not exist)"
echo
echo "This script demonstrates the full process of provisioning and deploying AMBROSIA on K8s."
echo "The only configuration needed is to fill out Defs/AmbrosiaAKSConf.sh.template"
echo
echo "Please follow the instructions in README.md and in that template file."
echo
exit 1
fi
echo "$0: Provision and run an AMBROSIA app on Azure Kubernetes Service"
echo "Running with these user settings:"
( export ECHO_CORE_DEFS=1; source `dirname $0`/Defs/Common-Defs.sh)
echo
# This should perform IDEMPOTENT OPERATIONS
#------------------------------------------
# STEP 0: Create Azure resources.
./Provision-Resources.sh
# STEPs 1-3: Secrets and Authentication
./Grant-AKS-access-ACR.sh # TODO: Problems on Windows/Git-Bash [2018.12.03]
./Create-AKS-ServicePrincipal-Secret.sh
./Create-AKS-SMBFileShare-Secret.sh
# STEP 4: Building and pushing Docker.
./Build-AKS.sh "../InternalImmortals/PerformanceTestInterruptible/"
# STEP 5: Deploy two pods.
echo "-----------Pre-deploy cleanup-----------"
source Defs/Common-Defs.sh
echo "These are the secrets Kubernetes will use to access files/containers:"
$KUBE get secrets
echo
echo "Deleting all pods in this test Kubernetes instance before redeploying"
$KUBE get pods
time $KUBE delete pods,deployments -l app=generated-perftestclient
time $KUBE delete pods,deployments -l app=generated-perftestserver
$KUBE get pods
# Dummy version:
# ./Deploy-AKS.sh perftestserver 'while true; do sleep 1; echo server `date +%s`; done'
# ./Deploy-AKS.sh perftestclient 'while true; do sleep 1; echo client `date +%s`; done'
# [2018.12.03] If we run a DUMMY SERVICE here, the Coordinators do get to a "Ready" state.
./Deploy-AKS.sh perftestserver \
'runAmbrosiaService.sh Server --sp '$LOCALPORT1' --rp '$LOCALPORT2' -j perftestclient -s perftestserver -n 1 -c'
# 'runAmbrosiaService.sh sleep 99999999'
# 'runAmbrosiaService.sh yes'
./Deploy-AKS.sh perftestclient \
'runAmbrosiaService.sh Job --sp '$LOCALPORT1' --rp '$LOCALPORT2' -j perftestclient -s perftestserver --mms 65536 -n 13 -c'
# 'runAmbrosiaService.sh sleep 99999999'
set +x
echo "-----------------------------------------------------------------------"
echo " ** End-to-end AKS / Kubernetes test script completed successfully. ** "
echo
source `dirname $0`/Defs/Common-Defs.sh
echo "P.S. If you want to delete the ENTIRE resource group, and thus everything touched by this script, run:"
echo " az group delete --name $AZURE_RESOURCE_GROUP"
echo

34
Ambrosia.nuspec Normal file
View file

@ -0,0 +1,34 @@
<?xml version="1.0"?>
<package >
<metadata>
<id>AmbrosiaLibCS</id>
<version>0.0.5</version>
<title>AmbrosiaLibCS</title>
<authors>Ambrosia</authors>
<owners>Microsoft</owners>
<projectUrl>https://msrfranklin.visualstudio.com/_projects</projectUrl>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>The AmbrosiaLibCS Binary Distribution</description>
<releaseNotes>None yet</releaseNotes>
<copyright>Copyright (C) 2018 Microsoft Corporation</copyright>
<language>en-US</language>
<tags>"MS Internal Only"</tags>
<dependencies>
<dependency id="Microsoft.CodeAnalysis.CSharp.Scripting" version="2.7.0" />
<dependency id="System.Runtime.Serialization.Primitives" version="4.3.0" />
<dependency id="System.Threading.Tasks.Dataflow" version="4.9.0" />
<dependency id="Microsoft.VisualStudio.Threading" version="15.8.168" />
<dependency id="Microsoft.VisualStudio.Validation" version="15.3.58" />
</dependencies>
</metadata>
<files>
<file src="Clients\CSharp\AmbrosiaLibCS\bin\x64\$configuration$\net46\AmbrosiaLibCS.dll" target="lib\net46" />
<file src="Clients\CSharp\AmbrosiaLibCS\bin\x64\$configuration$\net46\AmbrosiaLibCS.pdb" target="lib\net46" />
<file src="Clients\CSharp\AmbrosiaLibCS\bin\x64\$configuration$\net46\Ambrosia.exe" target="lib\net46" />
<file src="Clients\CSharp\AmbrosiaLibCS\bin\x64\$configuration$\net46\Ambrosia.pdb" target="lib\net46" />
<file src="Clients\CSharp\AmbrosiaLibCS\bin\x64\$configuration$\netcoreapp2.0\AmbrosiaLibCS.dll" target="lib\netcoreapp2.0" />
<file src="Clients\CSharp\AmbrosiaLibCS\bin\x64\$configuration$\netcoreapp2.0\AmbrosiaLibCS.pdb" target="lib\netcoreapp2.0" />
<file src="Clients\CSharp\AmbrosiaLibCS\bin\x64\$configuration$\netcoreapp2.0\Ambrosia.dll" target="lib\netcoreapp2.0" />
<file src="Clients\CSharp\AmbrosiaLibCS\bin\x64\$configuration$\netcoreapp2.0\Ambrosia.pdb" target="lib\netcoreapp2.0" />
</files>
</package>

48
Ambrosia/Ambrosia.sln Normal file
View file

@ -0,0 +1,48 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.27004.2006
MinimumVisualStudioVersion = 10.0.40219.1
Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "adv-file-ops", "adv-file-ops\adv-file-ops.vcxproj", "{5852AC33-6B01-44F5-BAF3-2AAF796E8449}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Solution Items", "Solution Items", "{0BEADEF6-C937-465D-814B-726C3E2A22BA}"
ProjectSection(SolutionItems) = preProject
nuget.config = nuget.config
EndProjectSection
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "ImmortalCoordinator", "..\ImmortalCoordinator\ImmortalCoordinator.csproj", "{5C94C516-377C-4113-8C5F-DF4A016D1B3A}"
ProjectSection(ProjectDependencies) = postProject
{5852AC33-6B01-44F5-BAF3-2AAF796E8449} = {5852AC33-6B01-44F5-BAF3-2AAF796E8449}
EndProjectSection
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "Ambrosia", "Ambrosia\Ambrosia.csproj", "{F704AE0A-C37B-4D30-B9ED-0C76C62D66EC}"
ProjectSection(ProjectDependencies) = postProject
{5852AC33-6B01-44F5-BAF3-2AAF796E8449} = {5852AC33-6B01-44F5-BAF3-2AAF796E8449}
EndProjectSection
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|x64 = Debug|x64
Release|x64 = Release|x64
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{5852AC33-6B01-44F5-BAF3-2AAF796E8449}.Debug|x64.ActiveCfg = Release|x64
{5852AC33-6B01-44F5-BAF3-2AAF796E8449}.Debug|x64.Build.0 = Release|x64
{5852AC33-6B01-44F5-BAF3-2AAF796E8449}.Release|x64.ActiveCfg = Release|x64
{5852AC33-6B01-44F5-BAF3-2AAF796E8449}.Release|x64.Build.0 = Release|x64
{5C94C516-377C-4113-8C5F-DF4A016D1B3A}.Debug|x64.ActiveCfg = Debug|x64
{5C94C516-377C-4113-8C5F-DF4A016D1B3A}.Debug|x64.Build.0 = Debug|x64
{5C94C516-377C-4113-8C5F-DF4A016D1B3A}.Release|x64.ActiveCfg = Release|x64
{5C94C516-377C-4113-8C5F-DF4A016D1B3A}.Release|x64.Build.0 = Release|x64
{F704AE0A-C37B-4D30-B9ED-0C76C62D66EC}.Debug|x64.ActiveCfg = Debug|x64
{F704AE0A-C37B-4D30-B9ED-0C76C62D66EC}.Debug|x64.Build.0 = Debug|x64
{F704AE0A-C37B-4D30-B9ED-0C76C62D66EC}.Release|x64.ActiveCfg = Release|x64
{F704AE0A-C37B-4D30-B9ED-0C76C62D66EC}.Release|x64.Build.0 = Release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {1C6CEC2E-E856-4CBB-BD71-262FFB53BE62}
EndGlobalSection
EndGlobal

View file

@ -0,0 +1,56 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFrameworks>netcoreapp2.0;net46</TargetFrameworks>
<TargetLatestRuntimePatch>true</TargetLatestRuntimePatch>
<Platforms>x64</Platforms>
<RuntimeIdentifiers>win7-x64</RuntimeIdentifiers>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
<RootNamespace>Ambrosia</RootNamespace>
</PropertyGroup>
<PropertyGroup Condition="$([System.Text.RegularExpressions.Regex]::IsMatch('$(TargetFramework)', '^net\d'))">
<DefineConstants>$(DefineConstants);NETFRAMEWORK</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition="$([System.Text.RegularExpressions.Regex]::IsMatch('$(TargetFramework)', '^netcoreapp\d'))">
<DefineConstants>$(DefineConstants);NETCORE</DefineConstants>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.VisualStudio.Threading">
<Version>15.8.168</Version>
</PackageReference>
<PackageReference Include="Mono.Options.Core" Version="1.0.0" />
<PackageReference Include="Newtonsoft.Json">
<Version>11.0.2</Version>
</PackageReference>
<PackageReference Include="Remote.Linq">
<Version>5.8.1</Version>
</PackageReference>
<PackageReference Include="System.Runtime.Serialization.Primitives">
<Version>4.3.0</Version>
</PackageReference>
<PackageReference Include="WindowsAzure.Storage">
<Version>9.3.2</Version>
</PackageReference>
<PackageReference Include="CRA">
<Version>2018.11.5.1</Version>
</PackageReference>
</ItemGroup>
<ItemGroup Condition="'$(TargetFramework)' == 'net46'">
<Content Include="..\adv-file-ops\x64\Release\adv-file-ops.dll">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</Content>
</ItemGroup>
<ItemGroup Condition="'$(TargetFramework)' == 'netcoreapp2.0'">
<PackageReference Include="System.Configuration.ConfigurationManager">
<Version>4.5.0</Version>
</PackageReference>
</ItemGroup>
<ItemGroup>
<Reference Include="Mono.Options.Core">
<HintPath>..\..\..\..\Users\talzacc\.nuget\packages\mono.options.core\1.0.0\lib\netstandard1.3\Mono.Options.Core.dll</HintPath>
</Reference>
</ItemGroup>
</Project>

View file

@ -0,0 +1,51 @@
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<appSettings>
</appSettings>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6"/>
</startup>
<system.serviceModel>
<extensions>
<!-- In this extension section we are introducing all known service bus extensions. User can remove the ones they don't need. -->
<behaviorExtensions>
<add name="connectionStatusBehavior" type="Microsoft.ServiceBus.Configuration.ConnectionStatusElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="transportClientEndpointBehavior" type="Microsoft.ServiceBus.Configuration.TransportClientEndpointBehaviorElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="serviceRegistrySettings" type="Microsoft.ServiceBus.Configuration.ServiceRegistrySettingsElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</behaviorExtensions>
<bindingElementExtensions>
<add name="netMessagingTransport" type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingTransportExtensionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="tcpRelayTransport" type="Microsoft.ServiceBus.Configuration.TcpRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="httpRelayTransport" type="Microsoft.ServiceBus.Configuration.HttpRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="httpsRelayTransport" type="Microsoft.ServiceBus.Configuration.HttpsRelayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="onewayRelayTransport" type="Microsoft.ServiceBus.Configuration.RelayedOnewayTransportElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</bindingElementExtensions>
<bindingExtensions>
<add name="basicHttpRelayBinding" type="Microsoft.ServiceBus.Configuration.BasicHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="webHttpRelayBinding" type="Microsoft.ServiceBus.Configuration.WebHttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="ws2007HttpRelayBinding" type="Microsoft.ServiceBus.Configuration.WS2007HttpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netTcpRelayBinding" type="Microsoft.ServiceBus.Configuration.NetTcpRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netOnewayRelayBinding" type="Microsoft.ServiceBus.Configuration.NetOnewayRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netEventRelayBinding" type="Microsoft.ServiceBus.Configuration.NetEventRelayBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
<add name="netMessagingBinding" type="Microsoft.ServiceBus.Messaging.Configuration.NetMessagingBindingCollectionElement, Microsoft.ServiceBus, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
</bindingExtensions>
</extensions>
</system.serviceModel>
<runtime>
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<dependentAssembly>
<assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral"/>
<bindingRedirect oldVersion="0.0.0.0-11.0.0.0" newVersion="11.0.0.0"/>
</dependentAssembly>
<dependentAssembly>
<assemblyIdentity name="Microsoft.Azure.KeyVault.Core" publicKeyToken="31bf3856ad364e35" culture="neutral"/>
<bindingRedirect oldVersion="0.0.0.0-2.0.0.0" newVersion="2.0.0.0"/>
</dependentAssembly>
<dependentAssembly>
<assemblyIdentity name="Microsoft.VisualStudio.Validation" publicKeyToken="b03f5f7f11d50a3a" culture="neutral"/>
<bindingRedirect oldVersion="0.0.0.0-14.0.0.0" newVersion="14.0.0.0"/>
</dependentAssembly>
</assemblyBinding>
<gcServer enabled="true"/>
</runtime>
</configuration>

View file

@ -0,0 +1,214 @@
// *********************************************************************
// Copyright (C) Microsoft. All rights reserved.
// *********************************************************************
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Runtime.Serialization;
namespace Ambrosia
{
[DataContract]
sealed class CircularBuffer<T>
{
public const int DefaultCapacity = 0xfff;
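// Note: DefaultCapacity is 2^12 - 1, so "index & DefaultCapacity" wraps indices with a
// cheap bitmask; one slot is always left empty so that a full buffer
// ((tail + 1) & DefaultCapacity == head) can be distinguished from an empty one (head == tail).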
[DataMember]
public T[] Items = new T[DefaultCapacity+1];
[DataMember]
public int head = 0;
[DataMember]
public int tail = 0;
public CircularBuffer()
{
}
public T PeekFirst()
{
return Items[head];
}
public T PeekLast()
{
return Items[(tail-1) & DefaultCapacity];
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void Enqueue(ref T value)
{
int next = (tail + 1) & DefaultCapacity;
if (next == head)
{
throw new InvalidOperationException("The list is full!");
}
Items[tail] = value;
tail = next;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public T Dequeue()
{
if (head == tail)
{
throw new InvalidOperationException("The list is empty!");
}
int oldhead = head;
head = (head + 1) & DefaultCapacity;
var ret = Items[oldhead];
Items[oldhead] = default(T);
return ret;
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public bool IsFull() => (((tail + 1) & DefaultCapacity) == head);
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public bool IsEmpty() => (head == tail);
public IEnumerable<T> Iterate()
{
int i = head;
while (i != tail)
{
yield return Items[i];
i = (i + 1) & DefaultCapacity;
}
}
}
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
/// <typeparam name="T"></typeparam>
[DataContract]
public sealed class ElasticCircularBuffer<T> : IEnumerable<T>
{
private LinkedList<CircularBuffer<T>> buffers;
private LinkedListNode<CircularBuffer<T>> head;
private LinkedListNode<CircularBuffer<T>> tail;
private int count;
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
public ElasticCircularBuffer()
{
buffers = new LinkedList<CircularBuffer<T>>();
var node = new LinkedListNode<CircularBuffer<T>>(new CircularBuffer<T>());
buffers.AddFirst(node);
tail = head = node;
count = 0;
}
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
/// <param name="value"></param>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void Enqueue(ref T value)
{
if (tail.Value.IsFull())
{
var next = tail.Next;
if (next == null) next = buffers.First;
if (!next.Value.IsEmpty())
{
next = new LinkedListNode<CircularBuffer<T>>(new CircularBuffer<T>());
buffers.AddAfter(tail, next);
}
tail = next;
}
tail.Value.Enqueue(ref value);
count++;
}
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
/// <param name="value"></param>
public void Add(T value)
{
Enqueue(ref value);
}
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
/// <returns></returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public T Dequeue()
{
if (head.Value.IsEmpty())
{
if (head == tail)
throw new InvalidOperationException("The list is empty!");
head = head.Next;
if (head == null) head = buffers.First;
}
count--;
return head.Value.Dequeue();
}
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
/// <returns></returns>
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public T PeekFirst()
{
//if (head.Value.IsEmpty())
if (head.Value.head == head.Value.tail)
{
if (head == tail)
throw new InvalidOperationException("The list is empty!");
head = head.Next;
if (head == null) head = buffers.First;
}
//return head.Value.PeekFirst();
return head.Value.Items[head.Value.head];
}
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
/// <returns></returns>
public T PeekLast()
{
if (tail.Value.IsEmpty())
throw new InvalidOperationException("The list is empty!");
return tail.Value.PeekLast();
}
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
/// <returns></returns>
public bool IsEmpty() => (head.Value.IsEmpty() && (head == tail));
IEnumerable<T> Iterate()
{
foreach (CircularBuffer<T> buffer in buffers)
{
foreach (T item in buffer.Iterate())
{
yield return item;
}
}
}
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
public int Count => count;
/// <summary>
/// Currently for internal use only - do not use directly.
/// </summary>
/// <returns></returns>
public IEnumerator<T> GetEnumerator() => Iterate().GetEnumerator();
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() => GetEnumerator();
}
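// Usage sketch (illustrative only): the elastic buffer chains fixed-size CircularBuffer<T>
// segments, so enqueuing never shifts or copies existing elements.
//
//   var queue = new ElasticCircularBuffer<int>();
//   for (int i = 0; i < 10000; i++) queue.Add(i);   // spills across several segments
//   while (!queue.IsEmpty())
//   {
//       int next = queue.Dequeue();                 // drains in FIFO order
//   }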
}

View file

@ -0,0 +1,348 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using CRA.ClientLibrary;
#if WINDOWS_UWP
using Windows.Storage;
using Windows.Storage.Streams;
#endif
namespace Ambrosia
{
internal static class LogReaderUtils
{
internal static void ReadBig(this LogReader reader,
Stream writeToStream,
long checkpointSize)
{
var blockSize = 1024 * 1024;
var buffer = new byte[blockSize];
while (checkpointSize > 0)
{
int bytesRead;
if (checkpointSize >= blockSize)
{
bytesRead = reader.Read(buffer, 0, blockSize);
}
else
{
bytesRead = reader.Read(buffer, 0, (int)checkpointSize);
}
writeToStream.Write(buffer, 0, bytesRead);
checkpointSize -= bytesRead;
}
}
}
#if NETFRAMEWORK || NETCORE
public class LogReader : IDisposable
{
Stream stream;
public long Position
{
get { return stream.Position; }
set { stream.Position = value; }
}
public LogReader(string fileName)
{
stream = new FileStream(fileName, FileMode.Open, FileAccess.Read, FileShare.ReadWrite);
}
public async Task<Tuple<int, int>> ReadIntAsync(byte[] buffer)
{
return await stream.ReadIntAsync(buffer);
}
public async Task<Tuple<int, int>> ReadIntAsync(byte[] buffer, CancellationToken ct)
{
return await stream.ReadIntAsync(buffer, ct);
}
public Tuple<int, int> ReadInt(byte[] buffer)
{
return stream.ReadInt(buffer);
}
public int ReadInt()
{
return stream.ReadInt();
}
public async Task<int> ReadAllRequiredBytesAsync(byte[] buffer,
int offset,
int count,
CancellationToken ct)
{
return await stream.ReadAllRequiredBytesAsync(buffer, offset, count, ct);
}
public async Task<int> ReadAllRequiredBytesAsync(byte[] buffer,
int offset,
int count)
{
return await stream.ReadAllRequiredBytesAsync(buffer, offset, count);
}
public int ReadAllRequiredBytes(byte[] buffer,
int offset,
int count)
{
return stream.ReadAllRequiredBytes(buffer, offset, count);
}
public long ReadLongFixed()
{
return stream.ReadLongFixed();
}
public int ReadIntFixed()
{
return stream.ReadIntFixed();
}
public byte[] ReadByteArray()
{
return stream.ReadByteArray();
}
public int ReadByte()
{
return stream.ReadByte();
}
public int Read(byte[] buffer, int offset, int count)
{
return stream.Read(buffer, offset, count);
}
public void Dispose()
{
stream.Dispose();
}
}
#endif
#if WINDOWS_UWP
// TODO: Figure out if there is a better way to implement Read().
//
// TODO: Figure out if there's a way to avoid having so much duplicated code
// from the various versions of StreamCommunicator.
//
// TODO: Figure out proper way to handle async LogReader methods when the
// underlying UWP class only provides a synchronous implementation.
//
// TODO: Figure out if waiting on a Task in the Position setter will cause
// problems.
public class LogReader : IDisposable
{
StorageFile _file;
IRandomAccessStream _stream;
IInputStream _inputStream;
DataReader _dataReader;
public long Position
{
get { return (long)_stream.Position; }
set
{
_dataReader.Dispose();
_inputStream.Dispose();
_inputStream = _stream.GetInputStreamAt((ulong)value);
_dataReader = new DataReader(_inputStream);
_dataReader.LoadAsync((uint)_stream.Size).AsTask().Wait();
}
}
public LogReader(string fileName)
{
InitializeAsync(fileName).Wait();
}
public async Task InitializeAsync(string fileName)
{
_file = await StorageFile.GetFileFromPathAsync(fileName);
_stream = await _file.OpenAsync(FileAccessMode.Read);
_inputStream = _stream.GetInputStreamAt(0);
_dataReader = new DataReader(_inputStream);
await _dataReader.LoadAsync((uint)_stream.Size);
}
public async Task<Tuple<int, int>> ReadIntAsync(byte[] buffer,
CancellationToken ct)
{
return ReadInt(buffer);
}
public async Task<Tuple<int, int>> ReadIntAsync(byte[] buffer)
{
return ReadInt(buffer);
}
// Copied from StreamCommunicator
public Tuple<int, int> ReadInt(byte[] buffer)
{
buffer[0] = (byte)ReadByte();
var currentByte = (uint)buffer[0];
byte read = 1;
uint result = currentByte & 0x7FU;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
buffer[read] = (byte)ReadByte();
currentByte = (uint)buffer[read];
read++;
result |= (currentByte & 0x7FU) << shift;
shift += 7;
if (read > 5)
{
throw new Exception("Invalid integer value in the input stream.");
}
}
return new Tuple<int, int>((int)((-(result & 1)) ^ ((result >> 1) & 0x7FFFFFFFU)), read);
}
// Copied from StreamCommunicator
public int ReadInt()
{
var currentByte = (uint)ReadByte();
byte read = 1;
uint result = currentByte & 0x7FU;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
currentByte = (uint)ReadByte();
read++;
result |= (currentByte & 0x7FU) << shift;
shift += 7;
if (read > 5)
{
throw new Exception("Invalid integer value in the input stream.");
}
}
return (int)((-(result & 1)) ^ ((result >> 1) & 0x7FFFFFFFU));
}
public async Task<int> ReadAllRequiredBytesAsync(byte[] buffer,
int offset,
int count,
CancellationToken ct)
{
return ReadAllRequiredBytes(buffer, offset, count);
}
public async Task<int> ReadAllRequiredBytesAsync(byte[] buffer,
int offset,
int count)
{
return ReadAllRequiredBytes(buffer, offset, count);
}
// Copied from StreamCommunicator
public int ReadAllRequiredBytes(byte[] buffer,
int offset,
int count)
{
int toRead = count;
int currentOffset = offset;
int currentRead;
do
{
currentRead = Read(buffer, currentOffset, toRead);
currentOffset += currentRead;
toRead -= currentRead;
}
while (toRead > 0 && currentRead != 0);
return currentOffset - offset;
}
// Copied from StreamCommunicator
public long ReadLongFixed()
{
var value = new byte[8];
ReadAllRequiredBytes(value, 0, value.Length);
long intValue = value[0]
| (long)value[1] << 0x8
| (long)value[2] << 0x10
| (long)value[3] << 0x18
| (long)value[4] << 0x20
| (long)value[5] << 0x28
| (long)value[6] << 0x30
| (long)value[7] << 0x38;
return intValue;
}
// Copied from StreamCommunicator
public int ReadIntFixed()
{
var value = new byte[4];
ReadAllRequiredBytes(value, 0, value.Length);
int intValue = value[0]
| (int)value[1] << 0x8
| (int)value[2] << 0x10
| (int)value[3] << 0x18;
return intValue;
}
// Copied from CRA version of StreamCommunicator
public byte[] ReadByteArray()
{
int arraySize = ReadInt32();
var array = new byte[arraySize];
if (arraySize > 0)
{
ReadAllRequiredBytes(array, 0, array.Length);
}
return array;
}
// Copied from CRA version of StreamCommunicator
private int ReadInt32()
{
var currentByte = (uint)ReadByte();
byte read = 1;
uint result = currentByte & 0x7FU;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
currentByte = (uint)ReadByte();
read++;
result |= (currentByte & 0x7FU) << shift;
shift += 7;
if (read > 5)
{
throw new InvalidOperationException("Invalid integer value in the input stream.");
}
}
return (int)((-(result & 1)) ^ ((result >> 1) & 0x7FFFFFFFU));
}
public int ReadByte()
{
return _dataReader.ReadByte();
}
public int Read(byte[] buffer, int offset, int count)
{
int bytesRead = 0;
for (int i = 0; i < count; i++)
{
buffer[offset + i] = _dataReader.ReadByte();
bytesRead++;
}
return bytesRead;
}
public void Dispose()
{
_dataReader.Dispose();
_inputStream.Dispose();
_stream.Dispose();
}
}
#endif
}

The diff for this file is not shown because of its large size.

View file

@ -0,0 +1,338 @@

namespace mtcollections.persistent
{
using System;
using System.Runtime.InteropServices;
using System.Security;
using Microsoft.Win32.SafeHandles;
using System.Threading;
/// <summary>
/// Interop with WINAPI for file I/O, threading, and NUMA functions.
/// </summary>
public static unsafe class Native32
{
#region io constants and flags
public const uint INFINITE = unchecked((uint)-1);
public const int ERROR_IO_PENDING = 997;
public const uint ERROR_IO_INCOMPLETE = 996;
public const uint ERROR_NOACCESS = 998;
public const uint ERROR_HANDLE_EOF = 38;
public const int ERROR_FILE_NOT_FOUND = 0x2;
public const int ERROR_PATH_NOT_FOUND = 0x3;
public const int ERROR_INVALID_DRIVE = 0x15;
public const uint FILE_BEGIN = 0;
public const uint FILE_CURRENT = 1;
public const uint FILE_END = 2;
public const uint FORMAT_MESSAGE_ALLOCATE_BUFFER = 0x00000100;
public const uint FORMAT_MESSAGE_IGNORE_INSERTS = 0x00000200;
public const uint FORMAT_MESSAGE_FROM_SYSTEM = 0x00001000;
public const uint INVALID_HANDLE_VALUE = unchecked((uint)-1);
public const uint GENERIC_READ = 0x80000000;
public const uint GENERIC_WRITE = 0x40000000;
public const uint GENERIC_EXECUTE = 0x20000000;
public const uint GENERIC_ALL = 0x10000000;
public const uint READ_CONTROL = 0x00020000;
public const uint FILE_READ_ATTRIBUTES = 0x0080;
public const uint FILE_READ_DATA = 0x0001;
public const uint FILE_READ_EA = 0x0008;
public const uint STANDARD_RIGHTS_READ = READ_CONTROL;
public const uint FILE_APPEND_DATA = 0x0004;
public const uint FILE_WRITE_ATTRIBUTES = 0x0100;
public const uint FILE_WRITE_DATA = 0x0002;
public const uint FILE_WRITE_EA = 0x0010;
public const uint STANDARD_RIGHTS_WRITE = READ_CONTROL;
public const uint FILE_GENERIC_READ =
FILE_READ_ATTRIBUTES
| FILE_READ_DATA
| FILE_READ_EA
| STANDARD_RIGHTS_READ;
public const uint FILE_GENERIC_WRITE =
FILE_WRITE_ATTRIBUTES
| FILE_WRITE_DATA
| FILE_WRITE_EA
| STANDARD_RIGHTS_WRITE
| FILE_APPEND_DATA;
public const uint FILE_SHARE_DELETE = 0x00000004;
public const uint FILE_SHARE_READ = 0x00000001;
public const uint FILE_SHARE_WRITE = 0x00000002;
public const uint CREATE_ALWAYS = 2;
public const uint CREATE_NEW = 1;
public const uint OPEN_ALWAYS = 4;
public const uint OPEN_EXISTING = 3;
public const uint TRUNCATE_EXISTING = 5;
public const uint FILE_FLAG_DELETE_ON_CLOSE = 0x04000000;
public const uint FILE_FLAG_NO_BUFFERING = 0x20000000;
public const uint FILE_FLAG_OPEN_NO_RECALL = 0x00100000;
public const uint FILE_FLAG_OVERLAPPED = 0x40000000;
public const uint FILE_FLAG_RANDOM_ACCESS = 0x10000000;
public const uint FILE_FLAG_SEQUENTIAL_SCAN = 0x08000000;
public const uint FILE_FLAG_WRITE_THROUGH = 0x80000000;
public const uint FILE_ATTRIBUTE_ENCRYPTED = 0x4000;
/// <summary>
/// Represents additional options for creating an unbuffered, overlapped file stream.
/// </summary>
[Flags]
public enum UnbufferedFileOptions : uint
{
None = 0,
WriteThrough = 0x80000000,
DeleteOnClose = 0x04000000,
OpenReparsePoint = 0x00200000,
Overlapped = 0x40000000,
}
#endregion
#region io functions
[DllImport("Kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern SafeFileHandle CreateFileW(
[In] string lpFileName,
[In] UInt32 dwDesiredAccess,
[In] UInt32 dwShareMode,
[In] IntPtr lpSecurityAttributes,
[In] UInt32 dwCreationDisposition,
[In] UInt32 dwFlagsAndAttributes,
[In] IntPtr hTemplateFile);
[DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
public static extern void CloseHandle(
[In] SafeHandle handle);
[DllImport("Kernel32.dll", SetLastError = true)]
public static extern bool ReadFile(
[In] SafeFileHandle hFile,
[Out] IntPtr lpBuffer,
[In] UInt32 nNumberOfBytesToRead,
[Out] out UInt32 lpNumberOfBytesRead,
[In] NativeOverlapped* lpOverlapped);
[DllImport("Kernel32.dll", SetLastError = true)]
public static extern bool WriteFile(
[In] SafeFileHandle hFile,
[In] IntPtr lpBuffer,
[In] UInt32 nNumberOfBytesToWrite,
[Out] out UInt32 lpNumberOfBytesWritten,
[In] NativeOverlapped* lpOverlapped);
[DllImport("Kernel32.dll", SetLastError = true)]
public static extern bool GetOverlappedResult(
[In] SafeFileHandle hFile,
[In] NativeOverlapped* lpOverlapped,
[Out] out UInt32 lpNumberOfBytesTransferred,
[In] bool bWait);
[DllImport("adv-file-ops.dll", SetLastError = true)]
public static extern bool CreateAndSetFileSize(ref string filename, Int64 file_size);
[DllImport("adv-file-ops.dll", SetLastError = true)]
public static extern bool EnableProcessPrivileges();
[DllImport("adv-file-ops.dll", SetLastError = true)]
public static extern bool EnableVolumePrivileges(ref string filename, SafeFileHandle hFile);
[DllImport("adv-file-ops.dll", SetLastError = true)]
public static extern bool SetFileSize(SafeFileHandle hFile, Int64 file_size);
public enum EMoveMethod : uint
{
Begin = 0,
Current = 1,
End = 2
}
[DllImport("kernel32.dll", SetLastError = true)]
public static extern uint SetFilePointer(
[In] SafeFileHandle hFile,
[In] int lDistanceToMove,
[In, Out] ref int lpDistanceToMoveHigh,
[In] EMoveMethod dwMoveMethod);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern uint SetFilePointerEx(
[In] SafeFileHandle hFile,
[In] long lDistanceToMove,
[In, Out] IntPtr lpDistanceToMoveHigh,
[In] EMoveMethod dwMoveMethod);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool SetEndOfFile(
[In] SafeFileHandle hFile);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr CreateIoCompletionPort(
[In] SafeFileHandle fileHandle,
[In] IntPtr existingCompletionPort,
[In] UInt32 completionKey,
[In] UInt32 numberOfConcurrentThreads);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern UInt32 GetLastError();
[DllImport("kernel32.dll", SetLastError = true)]
public static unsafe extern bool GetQueuedCompletionStatus(
[In] IntPtr completionPort,
[Out] out UInt32 ptrBytesTransferred,
[Out] out UInt32 ptrCompletionKey,
[Out] NativeOverlapped** lpOverlapped,
[In] UInt32 dwMilliseconds);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern bool PostQueuedCompletionStatus(
[In] IntPtr completionPort,
[In] UInt32 bytesTransferred,
[In] UInt32 completionKey,
[In] IntPtr lpOverlapped);
[DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
public static extern bool GetDiskFreeSpace(string lpRootPathName,
out uint lpSectorsPerCluster,
out uint lpBytesPerSector,
out uint lpNumberOfFreeClusters,
out uint lpTotalNumberOfClusters);
#endregion
#region thread and numa functions
[DllImport("kernel32.dll")]
public static extern IntPtr GetCurrentThread();
[DllImport("kernel32")]
public static extern uint GetCurrentThreadId();
[DllImport("kernel32.dll", SetLastError = true)]
public static extern uint GetCurrentProcessorNumber();
[DllImport("kernel32.dll", SetLastError = true)]
public static extern uint GetActiveProcessorCount(uint count);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern ushort GetActiveProcessorGroupCount();
[DllImport("kernel32.dll", SetLastError = true)]
public static extern int SetThreadGroupAffinity(IntPtr hThread, ref GROUP_AFFINITY GroupAffinity, ref GROUP_AFFINITY PreviousGroupAffinity);
[DllImport("kernel32.dll", SetLastError = true)]
public static extern int GetThreadGroupAffinity(IntPtr hThread, ref GROUP_AFFINITY PreviousGroupAffinity);
public static uint ALL_PROCESSOR_GROUPS = 0xffff;
[System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential)]
public struct GROUP_AFFINITY
{
public ulong Mask;
public uint Group;
public uint Reserved1;
public uint Reserved2;
public uint Reserved3;
}
/// <summary>
/// Accepts thread id = 0, 1, 2, ... and sprays them round-robin
/// across all cores (viewed as a flat space). On NUMA machines,
/// this gives us [socket, core] ordering of affinitization. That is,
/// if there are N cores per socket, then thread indices of 0 to N-1 map
/// to the range [socket 0, core 0] to [socket 0, core N-1].
/// </summary>
/// <param name="threadIdx">Index of thread (from 0 onwards)</param>
public static void AffinitizeThreadRoundRobin(uint threadIdx)
{
uint nrOfProcessors = GetActiveProcessorCount(ALL_PROCESSOR_GROUPS);
ushort nrOfProcessorGroups = GetActiveProcessorGroupCount();
uint nrOfProcsPerGroup = nrOfProcessors / nrOfProcessorGroups;
GROUP_AFFINITY groupAffinityThread = default(GROUP_AFFINITY);
GROUP_AFFINITY oldAffinityThread = default(GROUP_AFFINITY);
IntPtr thread = GetCurrentThread();
GetThreadGroupAffinity(thread, ref groupAffinityThread);
threadIdx = threadIdx % nrOfProcessors;
groupAffinityThread.Mask = (ulong)1L << ((int)(threadIdx % (int)nrOfProcsPerGroup));
groupAffinityThread.Group = (uint)(threadIdx / nrOfProcsPerGroup);
if (SetThreadGroupAffinity(thread, ref groupAffinityThread, ref oldAffinityThread) == 0)
{
Console.WriteLine("Unable to set group affinity");
}
}
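// Worked example on a hypothetical machine with 2 processor groups of 32 logical
// processors each (nrOfProcessors = 64, nrOfProcsPerGroup = 32):
//   threadIdx = 3  -> Group 0, Mask = 1UL << 3
//   threadIdx = 35 -> Group 1, Mask = 1UL << 3
//   threadIdx = 64 -> wraps around to Group 0, Mask = 1UL << 0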
#endregion
}
/// <summary>
/// Methods to perform high-resolution low-overhead timing
/// </summary>
public static class HiResTimer
{
private const string lib = "kernel32.dll";
[DllImport(lib)]
[SuppressUnmanagedCodeSecurity]
public static extern int QueryPerformanceCounter(ref Int64 count);
[DllImport(lib)]
[SuppressUnmanagedCodeSecurity]
public static extern int QueryPerformanceFrequency(ref Int64 frequency);
[DllImport(lib)]
[SuppressUnmanagedCodeSecurity]
private static extern void GetSystemTimePreciseAsFileTime(out long filetime);
[DllImport(lib)]
[SuppressUnmanagedCodeSecurity]
private static extern void GetSystemTimeAsFileTime(out long filetime);
[DllImport("readtsc.dll")]
[SuppressUnmanagedCodeSecurity]
public static extern ulong rdtsc();
public static long Freq;
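// EstimateCPUFrequency below times rdtsc() against GetSystemTimeAsFileTime, whose values
// are 100-nanosecond ticks, so a delta of 10,000,000 ticks spans exactly one second and the
// rdtsc delta over that window is the cycles-per-second estimate. The outer loop repeats
// (up to five times) until two consecutive estimates agree within 'accuracy' cycles.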
public static long EstimateCPUFrequency()
{
long oldCps = 0, cps = 0, startT, endT;
ulong startC, endC;
long accuracy = 500; // wait for consecutive measurements to agree within 500 clock cycles
int i = 0;
while (i < 5)
{
GetSystemTimeAsFileTime(out startT);
startC = rdtsc();
while (true)
{
GetSystemTimeAsFileTime(out endT);
endC = rdtsc();
if (endT - startT >= 10000000)
{
cps = (long)(10000000 * (endC - startC) / (double)(endT - startT));
break;
}
}
if ((oldCps > (cps - accuracy)) && (oldCps < (cps + accuracy)))
{
Freq = cps;
return cps;
}
oldCps = cps;
i++;
}
Freq = cps;
return cps;
}
}
}

3797
Ambrosia/Ambrosia/Program.cs Normal file

The diff for this file is not shown because of its large size.

View file

@ -0,0 +1,36 @@
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;
// General Information about an assembly is controlled through the following
// set of attributes. Change these attribute values to modify the information
// associated with an assembly.
//[assembly: AssemblyTitle("LocalAmbrosiaRuntime")]
[assembly: AssemblyDescription("")]
//[assembly: AssemblyConfiguration("")]
//[assembly: AssemblyCompany("")]
//[assembly: AssemblyProduct("LocalAmbrosiaRuntime")]
[assembly: AssemblyCopyright("Copyright © 2017")]
[assembly: AssemblyTrademark("")]
[assembly: AssemblyCulture("")]
// Setting ComVisible to false makes the types in this assembly not visible
// to COM components. If you need to access a type in this assembly from
// COM, set the ComVisible attribute to true on that type.
[assembly: ComVisible(false)]
// The following GUID is for the ID of the typelib if this project is exposed to COM
[assembly: Guid("edcf146a-65fe-43dd-913d-283a96dbac47")]
// Version information for an assembly consists of the following four values:
//
// Major Version
// Minor Version
// Build Number
// Revision
//
// You can specify all the values or you can default the Build and Revision Numbers
// by using the '*' as shown below:
// [assembly: AssemblyVersion("1.0.*")]
//[assembly: AssemblyVersion("1.0.0.0")]
//[assembly: AssemblyFileVersion("1.0.0.0")]

View file

@ -0,0 +1,14 @@
using System;
using System.Collections.Generic;
using System.Text;
namespace LocalAmbrosiaRuntime
{
public enum ReturnValueTypes
{
None = 0,
ReturnValue = 1,
EmptyReturnValue = 2,
Exception = 3,
}
}

View file

@ -0,0 +1,17 @@
namespace Ambrosia
{
public static class RpcTypes
{
public enum RpcType : byte
{
ReturnValue = 0,
FireAndForget = 1,
Impulse = 2,
}
public static bool IsFireAndForget(this RpcType rpcType)
{
return rpcType == RpcType.FireAndForget || rpcType == RpcType.Impulse;
}
}
}

View file

@ -0,0 +1,847 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace Ambrosia
{
// Flexible-size byte array into which data can be deserialized. The buffer grows to match
// the largest deserialize request made so far.
// Needs to be public so it is accessible to generated code
public class FlexReadBuffer
{
static Queue<byte[]> _bufferPool = new Queue<byte[]>();
byte[] _buffer;
int _curSize;
int _curSizeSize;
byte[] _sizeBuf;
// Gets a buffer from the buffer pool. Replaces an existing buffer from the pool if the one in
// the pool isn't big enough. The buffer should be returned to the pool when not needed. This
// allows many tasks to own FlexReadBuffers without actually taking up significant buffer space.
private void GetBuffer()
{
lock (_bufferPool)
{
if (_bufferPool.Count > 0)
{
_buffer = _bufferPool.Dequeue();
}
}
if (_buffer == null || _buffer.Length < _curSize)
{
_buffer = new byte[_curSize];
}
System.Buffer.BlockCopy(_sizeBuf, 0, _buffer, 0, _curSizeSize);
}
public FlexReadBuffer()
{
_sizeBuf = new byte[5];
_buffer = null;
_curSize = 0;
_curSizeSize = 0;
}
// Returns the current contents of the buffer
public byte[] Buffer { get { return _buffer; } }
// Returns the number of bytes copied in the last deserialize call
public int Length { get { return _curSize; } }
// Returns the number of bytes used at the beginning of the buffer to represent length
public int LengthLength { get { return _curSizeSize; } }
public void ThrowAwayBuffer()
{
_buffer = null;
_curSize = 0;
_curSizeSize = 0;
}
public void ResetBuffer()
{
if (_buffer != null)
{
lock (_bufferPool)
{
_bufferPool.Enqueue(_buffer);
}
_buffer = null;
_curSize = 0;
_curSizeSize = 0;
}
}
public void StealBuffer()
{
if (_buffer != null)
{
_buffer = null;
_curSize = 0;
_curSizeSize = 0;
}
}
public static void ReturnBuffer(byte [] returnedBuffer)
{
lock (_bufferPool)
{
_bufferPool.Enqueue(returnedBuffer);
}
}
// Deserializes a byte array from a .NET stream, assuming the stream starts with a variable-length
// encoded int holding the length of the byte array that follows. If the specified number of bytes
// can't be read before hitting end of stream, an exception is thrown.
static public async Task<FlexReadBuffer> DeserializeAsync(Stream S,
FlexReadBuffer flexBuf,
CancellationToken ct)
{
var intReaderTask = S.ReadIntAsync(flexBuf._sizeBuf, ct);
var messageSize = await intReaderTask;
flexBuf._curSize = messageSize.Item1 + messageSize.Item2;
flexBuf._curSizeSize = messageSize.Item2;
if (flexBuf.Buffer != null)
{
throw new Exception("Flexbuffer should have been null in Deserialize");
}
flexBuf.GetBuffer();
var bytesReaderTask = S.ReadAllRequiredBytesAsync(flexBuf._buffer, messageSize.Item2, messageSize.Item1, ct);
var bytesRead = await bytesReaderTask;
if (bytesRead < messageSize.Item1)
throw new Exception("Error deserializing buffer in stream");
return flexBuf;
}
// Deserializes a byte array from a .NET stream, assuming the stream starts with a variable-length
// encoded int holding the length of the byte array that follows. If the specified number of bytes
// can't be read before hitting end of stream, an exception is thrown.
static public async Task<FlexReadBuffer> DeserializeAsync(Stream S,
FlexReadBuffer flexBuf)
{
var intReaderTask = S.ReadIntAsync(flexBuf._sizeBuf);
var messageSize = await intReaderTask;
flexBuf._curSize = messageSize.Item1 + messageSize.Item2;
flexBuf._curSizeSize = messageSize.Item2;
if (flexBuf.Buffer != null)
{
throw new Exception("Flexbuffer should have been null in Deserialize");
}
flexBuf.GetBuffer();
var bytesReaderTask = S.ReadAllRequiredBytesAsync(flexBuf._buffer, messageSize.Item2, messageSize.Item1);
var bytesRead = await bytesReaderTask;
if (bytesRead < messageSize.Item1)
throw new Exception("Error deserializing buffer in stream");
return flexBuf;
}
static public FlexReadBuffer Deserialize(Stream S,
FlexReadBuffer flexBuf)
{
var messageSize = S.ReadInt(flexBuf._sizeBuf);
flexBuf._curSize = messageSize.Item1 + messageSize.Item2;
flexBuf._curSizeSize = messageSize.Item2;
if (flexBuf.Buffer != null)
{
throw new Exception("Flexbuffer should have been null in Deserialize");
}
flexBuf.GetBuffer();
var bytesRead = S.ReadAllRequiredBytes(flexBuf._buffer, messageSize.Item2, messageSize.Item1);
if (bytesRead < messageSize.Item1)
throw new Exception("Error deserializing buffer in stream");
return flexBuf;
}
// Copies of the above three methods that take a LogReader instead of a Stream
static public async Task<FlexReadBuffer> DeserializeAsync(LogReader S,
FlexReadBuffer flexBuf,
CancellationToken ct)
{
var intReaderTask = S.ReadIntAsync(flexBuf._sizeBuf, ct);
var messageSize = await intReaderTask;
flexBuf._curSize = messageSize.Item1 + messageSize.Item2;
flexBuf._curSizeSize = messageSize.Item2;
if (flexBuf.Buffer != null)
{
throw new Exception("Flexbuffer should have been null in Deserialize");
}
flexBuf.GetBuffer();
var bytesReaderTask = S.ReadAllRequiredBytesAsync(flexBuf._buffer, messageSize.Item2, messageSize.Item1, ct);
var bytesRead = await bytesReaderTask;
if (bytesRead < messageSize.Item1)
throw new Exception("Error deserializing buffer in stream");
return flexBuf;
}
static public async Task<FlexReadBuffer> DeserializeAsync(LogReader S,
FlexReadBuffer flexBuf)
{
var intReaderTask = S.ReadIntAsync(flexBuf._sizeBuf);
var messageSize = await intReaderTask;
flexBuf._curSize = messageSize.Item1 + messageSize.Item2;
flexBuf._curSizeSize = messageSize.Item2;
if (flexBuf.Buffer != null)
{
throw new Exception("Flexbuffer should have been null in Deserialize");
}
flexBuf.GetBuffer();
var bytesReaderTask = S.ReadAllRequiredBytesAsync(flexBuf._buffer, messageSize.Item2, messageSize.Item1);
var bytesRead = await bytesReaderTask;
if (bytesRead < messageSize.Item1)
throw new Exception("Error deserializing buffer in stream");
return flexBuf;
}
static public FlexReadBuffer Deserialize(LogReader S,
FlexReadBuffer flexBuf)
{
var messageSize = S.ReadInt(flexBuf._sizeBuf);
flexBuf._curSize = messageSize.Item1 + messageSize.Item2;
flexBuf._curSizeSize = messageSize.Item2;
if (flexBuf.Buffer != null)
{
throw new Exception("Flexbuffer should have been null in Deserialize");
}
flexBuf.GetBuffer();
var bytesRead = S.ReadAllRequiredBytes(flexBuf._buffer, messageSize.Item2, messageSize.Item1);
if (bytesRead < messageSize.Item1)
throw new Exception("Error deserializing buffer in stream");
return flexBuf;
}
}
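// Usage sketch (illustrative only; 'inputStream' and 'Process' are placeholders):
// each framed message is a variable-length encoded length prefix followed by the payload,
// and the prefix bytes are kept at the front of Buffer.
//
//   var flexBuf = new FlexReadBuffer();
//   FlexReadBuffer.Deserialize(inputStream, flexBuf);
//   Process(flexBuf.Buffer, flexBuf.LengthLength, flexBuf.Length - flexBuf.LengthLength);
//   flexBuf.ResetBuffer();   // return the underlying byte[] to the shared pool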
public static class StreamCommunicator
{
public static int ReadIntFixed(this Stream stream)
{
var value = new byte[4];
stream.ReadAllRequiredBytes(value, 0, value.Length);
int intValue = value[0]
| (int)value[1] << 0x8
| (int)value[2] << 0x10
| (int)value[3] << 0x18;
return intValue;
}
public static long ReadLongFixed(this Stream stream)
{
var value = new byte[8];
stream.ReadAllRequiredBytes(value, 0, value.Length);
long intValue = value[0]
| (long)value[1] << 0x8
| (long)value[2] << 0x10
| (long)value[3] << 0x18
| (long)value[4] << 0x20
| (long)value[5] << 0x28
| (long)value[6] << 0x30
| (long)value[7] << 0x38;
return intValue;
}
public static int ReadBufferedIntFixed(this byte[] buf,
int offset)
{
int intValue = buf[0 + offset]
| (int)buf[1 + offset] << 0x8
| (int)buf[2 + offset] << 0x10
| (int)buf[3 + offset] << 0x18;
return intValue;
}
public static async Task<int> ReadIntFixedAsync(this Stream stream)
{
var value = new byte[4];
var readTask = stream.ReadAllRequiredBytesAsync(value, 0, value.Length);
await readTask;
int intValue = value[0]
| (int)value[1] << 0x8
| (int)value[2] << 0x10
| (int)value[3] << 0x18;
return intValue;
}
public static void WriteIntFixed(this Stream stream, int value)
{
stream.WriteByte((byte)(value & 0xFF));
stream.WriteByte((byte)((value >> 0x8) & 0xFF));
stream.WriteByte((byte)((value >> 0x10) & 0xFF));
stream.WriteByte((byte)((value >> 0x18) & 0xFF));
}
public static void WriteLongFixed(this Stream stream, long value)
{
stream.WriteByte((byte)(value & 0xFF));
stream.WriteByte((byte)((value >> 0x8) & 0xFF));
stream.WriteByte((byte)((value >> 0x10) & 0xFF));
stream.WriteByte((byte)((value >> 0x18) & 0xFF));
stream.WriteByte((byte)((value >> 0x20) & 0xFF));
stream.WriteByte((byte)((value >> 0x28) & 0xFF));
stream.WriteByte((byte)((value >> 0x30) & 0xFF));
stream.WriteByte((byte)((value >> 0x38) & 0xFF));
}
public static int ReadBufferedInt(this byte[] buf,
int offset)
{
var currentByte = (uint)buf[offset];
byte read = 1;
uint result = currentByte & 0x7FU;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
currentByte = (uint)buf[offset + read];
read++;
result |= (currentByte & 0x7FU) << shift;
shift += 7;
if (read > 5)
{
throw new Exception("Invalid integer value in the input stream.");
}
}
return (int)((-(result & 1)) ^ ((result >> 1) & 0x7FFFFFFFU));
}
public static bool EnoughBytesForReadBufferedInt(this byte[] buf,
int offset,
int bytes)
{
if (bytes >= 5)
{
return true;
}
for (int i = 0; i < bytes; i++)
{
if ((buf[offset + i] & 0x80) == 0)
{
return true;
}
}
return false;
}
public static int ReadInt(this Stream stream)
{
var currentByte = (uint)stream.ReadByte();
byte read = 1;
uint result = currentByte & 0x7FU;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
currentByte = (uint)stream.ReadByte();
read++;
result |= (currentByte & 0x7FU) << shift;
shift += 7;
if (read > 5)
{
throw new Exception("Invalid integer value in the input stream.");
}
}
return (int)((-(result & 1)) ^ ((result >> 1) & 0x7FFFFFFFU));
}
public static async Task<Tuple<int, int>> ReadIntAsync(this Stream stream,
byte[] buffer,
CancellationToken ct)
{
buffer[0] = await stream.ReadByteAsync(ct);
var currentByte = (uint)buffer[0];
byte read = 1;
uint result = currentByte & 0x7FU;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
buffer[read] = await stream.ReadByteAsync(ct);
currentByte = (uint)buffer[read];
read++;
result |= (currentByte & 0x7FU) << shift;
shift += 7;
if (read > 5)
{
throw new Exception("Invalid integer value in the input stream.");
}
}
return new Tuple<int, int>((int)((-(result & 1)) ^ ((result >> 1) & 0x7FFFFFFFU)), read);
}
public static async Task<Tuple<int, int>> ReadIntAsync(this Stream stream,
byte[] buffer)
{
buffer[0] = await stream.ReadByteAsync();
var currentByte = (uint)buffer[0];
byte read = 1;
uint result = currentByte & 0x7FU;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
buffer[read] = await stream.ReadByteAsync();
currentByte = (uint)buffer[read];
read++;
result |= (currentByte & 0x7FU) << shift;
shift += 7;
if (read > 5)
{
throw new Exception("Invalid integer value in the input stream.");
}
}
return new Tuple<int, int>((int)((-(result & 1)) ^ ((result >> 1) & 0x7FFFFFFFU)), read);
}
public static Tuple<int, int> ReadInt(this Stream stream,
byte[] buffer)
{
buffer[0] = (byte)stream.ReadByte();
var currentByte = (uint)buffer[0];
byte read = 1;
uint result = currentByte & 0x7FU;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
buffer[read] = (byte)stream.ReadByte();
currentByte = (uint)buffer[read];
read++;
result |= (currentByte & 0x7FU) << shift;
shift += 7;
if (read > 5)
{
throw new Exception("Invalid integer value in the input stream.");
}
}
return new Tuple<int, int>((int)((-(result & 1)) ^ ((result >> 1) & 0x7FFFFFFFU)), read);
}
public static async Task<byte> ReadByteAsync(this Stream stream)
{
byte[] buffer = new byte[1];
await stream.ReadAsync(buffer, 0, 1);
return buffer[0];
}
public static async Task<byte> ReadByteAsync(this Stream stream,
CancellationToken ct)
{
byte[] buffer = new byte[1];
await stream.ReadAsync(buffer, 0, 1, ct);
return buffer[0];
}
public static void WriteInt(this Stream stream, int value)
{
var zigZagEncoded = unchecked((uint)((value << 1) ^ (value >> 31)));
while ((zigZagEncoded & ~0x7F) != 0)
{
stream.WriteByte((byte)((zigZagEncoded | 0x80) & 0xFF));
zigZagEncoded >>= 7;
}
stream.WriteByte((byte)zigZagEncoded);
}
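// Worked example of the zig-zag varint encoding shared by WriteInt/ReadInt:
//   value = -2  -> zig-zag ((value << 1) ^ (value >> 31)) = 3   -> one byte:  0x03
//   value = 300 -> zig-zag 600                                  -> two bytes: 0xD8 0x04
// Each byte carries 7 payload bits; the high bit signals that another byte follows.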
public static int WriteInt(this byte [] buffer,
int offset,
int value)
{
int retVal = 0;
var zigZagEncoded = unchecked((uint)((value << 1) ^ (value >> 31)));
while ((zigZagEncoded & ~0x7F) != 0)
{
buffer[offset] = (byte)((zigZagEncoded | 0x80) & 0xFF);
offset++;
retVal++;
zigZagEncoded >>= 7;
}
buffer[offset] = (byte)zigZagEncoded;
retVal++;
return retVal;
}
public static int IntSize(int value)
{
int retVal = 0;
var zigZagEncoded = unchecked((uint)((value << 1) ^ (value >> 31)));
while ((zigZagEncoded & ~0x7F) != 0)
{
retVal++;
zigZagEncoded >>= 7;
}
retVal++;
return retVal;
}
public static long ReadBufferedLong(this byte[] buf,
int offset)
{
var currentByte = (uint)buf[offset];
byte read = 1;
ulong result = currentByte & 0x7FUL;
int shift = 7;
while ((currentByte & 0x80) != 0)
{
currentByte = (uint)buf[offset + read];
read++;
result |= (currentByte & 0x7FUL) << shift;
shift += 7;
if (read > 10)
{
throw new Exception("Invalid long value in the input stream.");
}
}
var tmp = unchecked((long)result);
return (-(tmp & 0x1L)) ^ ((tmp >> 1) & 0x7FFFFFFFFFFFFFFFL);
}
public static long ReadLong(this Stream stream)
{
var value = (uint)stream.ReadByte();
byte read = 1;
ulong result = value & 0x7FUL;
int shift = 7;
while ((value & 0x80) != 0)
{
value = (uint)stream.ReadByte();
read++;
result |= (value & 0x7FUL) << shift;
shift += 7;
if (read > 10)
{
throw new Exception("Invalid integer long in the input stream.");
}
}
var tmp = unchecked((long)result);
return (-(tmp & 0x1L)) ^ ((tmp >> 1) & 0x7FFFFFFFFFFFFFFFL);
}
public static void WriteLong(this Stream stream, long value)
{
var zigZagEncoded = unchecked((ulong)((value << 1) ^ (value >> 63)));
while ((zigZagEncoded & ~0x7FUL) != 0)
{
stream.WriteByte((byte)((zigZagEncoded | 0x80) & 0xFF));
zigZagEncoded >>= 7;
}
stream.WriteByte((byte)zigZagEncoded);
}
public static int WriteLong(this byte[] buffer, int offset, long value)
{
int retVal = 0;
var zigZagEncoded = unchecked((ulong)((value << 1) ^ (value >> 63)));
while ((zigZagEncoded & ~0x7FUL) != 0)
{
buffer[offset] = (byte)((zigZagEncoded | 0x80) & 0xFF);
offset++;
retVal++;
zigZagEncoded >>= 7;
}
buffer[offset] = (byte)zigZagEncoded;
retVal++;
return retVal;
}
public static int LongSize(long value)
{
int retVal = 0;
var zigZagEncoded = unchecked((ulong)((value << 1) ^ (value >> 63)));
while ((zigZagEncoded & ~0x7FUL) != 0)
{
retVal++;
zigZagEncoded >>= 7;
}
retVal++;
return retVal;
}
public static int ReadAllRequiredBytes(this Stream stream,
byte[] buffer,
int offset,
int count)
{
int toRead = count;
int currentOffset = offset;
int currentRead;
do
{
currentRead = stream.Read(buffer, currentOffset, toRead);
currentOffset += currentRead;
toRead -= currentRead;
}
while (toRead > 0 && currentRead != 0);
return currentOffset - offset;
}
public static async Task<int> ReadAllRequiredBytesAsync(this Stream stream,
byte[] buffer,
int offset,
int count,
CancellationToken ct)
{
int toRead = count;
int currentOffset = offset;
int currentRead;
do
{
var readTask = stream.ReadAsync(buffer, currentOffset, toRead, ct);
currentRead = await readTask;
currentOffset += currentRead;
toRead -= currentRead;
}
while (toRead > 0 && currentRead != 0);
return currentOffset - offset;
}
public static async Task<int> ReadAllRequiredBytesAsync(this Stream stream,
byte[] buffer,
int offset,
int count)
{
int toRead = count;
int currentOffset = offset;
int currentRead;
do
{
var readTask = stream.ReadAsync(buffer, currentOffset, toRead);
currentRead = await readTask;
currentOffset += currentRead;
toRead -= currentRead;
}
while (toRead > 0 && currentRead != 0);
return currentOffset - offset;
}
}
// This class discards everything written to it and just counts the number of bytes.
public class CountStream : Stream
{
long _count = 0;
public CountStream()
{
}
public override void Write(byte[] buffer, int offset, int count)
{
_count += count;
}
public override bool CanRead
{
get { return false; }
}
public override bool CanSeek
{
get { return false; }
}
public override bool CanWrite
{
get { return true; }
}
public override long Length
{
get { return _count; }
}
public override bool CanTimeout
{
get { return false; }
}
public override long Seek(long offset, SeekOrigin origin)
{
throw new NotImplementedException();
}
public override int Read(byte[] buffer, int offset, int count)
{
throw new NotImplementedException();
}
public override long Position
{
get { return _count; }
set { throw new NotImplementedException(); }
}
public override void SetLength(long value)
{
_count = value;
}
public override void Flush()
{
}
}
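// Usage sketch (illustrative only): measure how many bytes a writer would produce
// without storing them anywhere.
//
//   var counter = new CountStream();
//   counter.WriteIntFixed(42);      // any of the StreamCommunicator write extensions works
//   long size = counter.Length;     // 4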
// Pass through write stream.
// Note that all writes get converted into one type of write, which
// hides bugs in NetworkStream
public class PassThruWriteStream : Stream
{
long _count = 0;
Stream _writeToStream;
public PassThruWriteStream(Stream writeToStream)
{
_writeToStream = writeToStream;
}
public override void Write(byte[] buffer, int offset, int count)
{
_count += count;
_writeToStream.Write(buffer, offset, count);
}
public override bool CanRead
{
get { return false; }
}
public override bool CanSeek
{
get { return false; }
}
public override bool CanWrite
{
get { return true; }
}
public override long Length
{
get { return _count; }
}
public override bool CanTimeout
{
get { return false; }
}
public override long Seek(long offset, SeekOrigin origin)
{
throw new NotImplementedException();
}
public override int Read(byte[] buffer, int offset, int count)
{
throw new NotImplementedException();
}
public override long Position
{
get { return _count; }
set { throw new NotImplementedException(); }
}
public override void SetLength(long value)
{
_count = value;
_writeToStream.SetLength(value);
}
public override void Flush()
{
_writeToStream.Flush();
}
}
// Pass through read stream.
// Note that all reads get converted into one type of read, which
// may hide bugs in NetworkStream. Also, introduces an end of stream
// response to ReadByte based on the number of bytes passed into the
// constructor, which is necessary for correct behavior of XMLDictionarySerializer
public class PassThruReadStream : Stream
{
long _count = 0;
Stream _readFromStream;
long _maxRead = 0;
public PassThruReadStream(Stream readFromStream, long maxRead)
{
_readFromStream = readFromStream;
_maxRead = maxRead;
}
public override void Write(byte[] buffer, int offset, int count)
{
throw new NotImplementedException();
}
public override bool CanRead
{
get { return true; }
}
public override bool CanSeek
{
get { return false; }
}
public override bool CanWrite
{
get { return false; }
}
public override long Length
{
get { throw new NotImplementedException(); }
}
public override bool CanTimeout
{
get { return false; }
}
public override long Seek(long offset, SeekOrigin origin)
{
throw new NotImplementedException();
}
public override int Read(byte[] buffer, int offset, int count)
{
if (_count + count > _maxRead)
{
throw new Exception();
}
_readFromStream.ReadAllRequiredBytes(buffer, offset, count);
_count += count;
return count;
}
public override int ReadByte()
{
if (_count >= _maxRead)
{
return -1;
}
else
{
_count++;
return _readFromStream.ReadByte();
}
}
public override long Position
{
get { return _count; }
set { throw new NotImplementedException(); }
}
public override void SetLength(long value)
{
throw new NotImplementedException();
}
public override void Flush()
{
throw new NotImplementedException();
}
}
}

View file

@ -0,0 +1,134 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT license.
#include <windows.h>
#include <string>
#include <sstream>
#include <iostream>
#include <iomanip>
std::string FormatWin32AndHRESULT(DWORD win32_result) {
std::stringstream ss;
ss << "Win32(" << win32_result << ") HRESULT("
<< std::showbase << std::uppercase << std::setfill('0') << std::hex
<< HRESULT_FROM_WIN32(win32_result) << ")";
return ss.str();
}
extern "C"
__declspec(dllexport) bool EnableProcessPrivileges() {
HANDLE token;
TOKEN_PRIVILEGES token_privileges;
token_privileges.PrivilegeCount = 1;
token_privileges.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
if (!LookupPrivilegeValue(0, SE_MANAGE_VOLUME_NAME,
&token_privileges.Privileges[0].Luid)) return false;
if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token)) return false;
if (!AdjustTokenPrivileges(token, 0, (PTOKEN_PRIVILEGES)&token_privileges, 0, 0, 0)) return false;
if (GetLastError() != ERROR_SUCCESS) return false;
::CloseHandle(token);
return true;
}
extern "C"
__declspec(dllexport) bool EnableVolumePrivileges(std::string& filename, HANDLE file_handle)
{
std::string volume_string = "\\\\.\\" + filename.substr(0, 2);
HANDLE volume_handle = ::CreateFile(volume_string.c_str(), 0, 0, nullptr, OPEN_EXISTING,
FILE_ATTRIBUTE_NORMAL, nullptr);
if (INVALID_HANDLE_VALUE == volume_handle) {
// std::cerr << "Error retrieving volume handle: " << FormatWin32AndHRESULT(::GetLastError());
return false;
}
MARK_HANDLE_INFO mhi;
mhi.UsnSourceInfo = USN_SOURCE_DATA_MANAGEMENT;
mhi.VolumeHandle = volume_handle;
mhi.HandleInfo = MARK_HANDLE_PROTECT_CLUSTERS;
DWORD bytes_returned = 0;
BOOL result = DeviceIoControl(file_handle, FSCTL_MARK_HANDLE, &mhi, sizeof(MARK_HANDLE_INFO), nullptr,
0, &bytes_returned, nullptr);
if (!result) {
// std::cerr << "Error in DeviceIoControl: " << FormatWin32AndHRESULT(::GetLastError());
return false;
}
::CloseHandle(volume_handle);
return true;
}
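// Note: SetFileValidData (called from SetFileSize below) extends the file's valid data length
// without zero-filling it, which is why EnableProcessPrivileges must first enable the
// SE_MANAGE_VOLUME_NAME privilege for this process.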
extern "C"
__declspec(dllexport) bool SetFileSize(HANDLE file_handle, int64_t file_size)
{
LARGE_INTEGER li;
li.QuadPart = file_size;
BOOL result = ::SetFilePointerEx(file_handle, li, NULL, FILE_BEGIN);
if (!result) {
std::cerr << "SetFilePointer failed with error: " << FormatWin32AndHRESULT(::GetLastError()) << std::endl;
return false;
}
// Set a fixed file length
result = ::SetEndOfFile(file_handle);
if (!result) {
std::cerr << "SetEndOfFile failed with error: " << FormatWin32AndHRESULT(::GetLastError()) << std::endl;
return false;
}
result = ::SetFileValidData(file_handle, file_size);
if (!result) {
std::cerr << "SetFileValidData failed with error: " << FormatWin32AndHRESULT(::GetLastError()) << std::endl;
return false;
}
return true;
}
extern "C"
__declspec(dllexport) bool CreateAndSetFileSize(std::string& filename, int64_t file_size)
{
BOOL result = ::EnableProcessPrivileges();
if (!result) {
std::cerr << "EnableProcessPrivileges failed with error: "
<< FormatWin32AndHRESULT(::GetLastError()) << std::endl;
return false;
}
DWORD desired_access = GENERIC_READ | GENERIC_WRITE;
DWORD const flags = FILE_FLAG_RANDOM_ACCESS | FILE_FLAG_NO_BUFFERING;
DWORD create_disposition = CREATE_ALWAYS;
DWORD shared_mode = FILE_SHARE_READ;
// Create our test file
HANDLE file_handle = ::CreateFile(filename.c_str(), desired_access, shared_mode, NULL,
create_disposition, flags, NULL);
if (INVALID_HANDLE_VALUE == file_handle) {
std::cerr << "write file (" << filename << ") not created. Error: " <<
FormatWin32AndHRESULT(::GetLastError()) << std::endl;
return false;
}
result = ::EnableVolumePrivileges(filename, file_handle);
if (!result) {
std::cerr << "EnableVolumePrivileges failed with error: "
<< FormatWin32AndHRESULT(::GetLastError()) << std::endl;
return false;
}
result = ::SetFileSize(file_handle, file_size);
if (!result) {
std::cerr << "SetFileSize failed with error: " << FormatWin32AndHRESULT(::GetLastError()) << std::endl;
return false;
}
::CloseHandle(file_handle);
return true;
}

View file

@ -0,0 +1,82 @@
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|x64">
<Configuration>Debug</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
<ProjectConfiguration Include="Release|x64">
<Configuration>Release</Configuration>
<Platform>x64</Platform>
</ProjectConfiguration>
</ItemGroup>
<PropertyGroup Label="Globals">
<ProjectGuid>{5852AC33-6B01-44F5-BAF3-2AAF796E8449}</ProjectGuid>
<RootNamespace>directdrivereadwrite</RootNamespace>
<WindowsTargetPlatformVersion>10.0.17134.0</WindowsTargetPlatformVersion>
<ProjectName>adv-file-ops</ProjectName>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.Default.props" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'" Label="Configuration">
<ConfigurationType>DynamicLibrary</ConfigurationType>
<UseDebugLibraries>true</UseDebugLibraries>
<PlatformToolset>v141</PlatformToolset>
<CharacterSet>MultiByte</CharacterSet>
<SpectreMitigation>false</SpectreMitigation>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'" Label="Configuration">
<ConfigurationType>DynamicLibrary</ConfigurationType>
<UseDebugLibraries>false</UseDebugLibraries>
<PlatformToolset>v141</PlatformToolset>
<WholeProgramOptimization>true</WholeProgramOptimization>
<CharacterSet>MultiByte</CharacterSet>
<SpectreMitigation>false</SpectreMitigation>
</PropertyGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.props" />
<ImportGroup Label="ExtensionSettings">
</ImportGroup>
<ImportGroup Label="Shared">
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<ImportGroup Label="PropertySheets" Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<Import Project="$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props" Condition="exists('$(UserRootDir)\Microsoft.Cpp.$(Platform).user.props')" Label="LocalAppDataPlatform" />
</ImportGroup>
<PropertyGroup Label="UserMacros" />
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<OutDir>$(ProjectDir)$(Platform)\$(Configuration)\</OutDir>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<OutDir>$(ProjectDir)$(Platform)\$(Configuration)\</OutDir>
</PropertyGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<Optimization>Disabled</Optimization>
<SDLCheck>true</SDLCheck>
<RuntimeLibrary>MultiThreadedDebug</RuntimeLibrary>
</ClCompile>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<ClCompile>
<WarningLevel>Level3</WarningLevel>
<Optimization>MaxSpeed</Optimization>
<FunctionLevelLinking>true</FunctionLevelLinking>
<IntrinsicFunctions>true</IntrinsicFunctions>
<SDLCheck>true</SDLCheck>
<RuntimeLibrary>MultiThreaded</RuntimeLibrary>
<ControlFlowGuard>Guard</ControlFlowGuard>
</ClCompile>
<Link>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<OptimizeReferences>true</OptimizeReferences>
</Link>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="adv-file-ops.cpp" />
</ItemGroup>
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
<ImportGroup Label="ExtensionTargets">
</ImportGroup>
</Project>

View file

@ -0,0 +1,31 @@

Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio 15
VisualStudioVersion = 15.0.27130.2026
MinimumVisualStudioVersion = 10.0.40219.1
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "AmbrosiaTest", "AmbrosiaTest\AmbrosiaTest.csproj", "{F9AA4F89-945C-4118-99CF-FDC7AA142601}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Debug|x64 = Debug|x64
Release|Any CPU = Release|Any CPU
Release|x64 = Release|x64
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{F9AA4F89-945C-4118-99CF-FDC7AA142601}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{F9AA4F89-945C-4118-99CF-FDC7AA142601}.Debug|Any CPU.Build.0 = Debug|Any CPU
{F9AA4F89-945C-4118-99CF-FDC7AA142601}.Debug|x64.ActiveCfg = Debug|x64
{F9AA4F89-945C-4118-99CF-FDC7AA142601}.Debug|x64.Build.0 = Debug|x64
{F9AA4F89-945C-4118-99CF-FDC7AA142601}.Release|Any CPU.ActiveCfg = Release|Any CPU
{F9AA4F89-945C-4118-99CF-FDC7AA142601}.Release|Any CPU.Build.0 = Release|Any CPU
{F9AA4F89-945C-4118-99CF-FDC7AA142601}.Release|x64.ActiveCfg = Release|x64
{F9AA4F89-945C-4118-99CF-FDC7AA142601}.Release|x64.Build.0 = Release|x64
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {2F849F00-A84F-432D-A8F1-46A92603700F}
EndGlobalSection
EndGlobal

View file

@ -0,0 +1,453 @@
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.IO;
using System.Text;
using System.Threading;
using System.Windows.Forms; // need this to handle threading issue on sleeps
using System.Collections.Generic;
using System.Linq;
using System.Diagnostics;
using System.Configuration;
namespace AmbrosiaTest
{
/// <summary>
/// Summary description for AMB_UnitTest
/// </summary>
[TestClass]
public class AMB_UnitTest
{
//************* Init Code *****************
// NOTE: This bit of code is needed at the top of every "[TestClass]" (one per .cs test file) to get the context / details of the currently running test.
// NOTE: Make sure all names are "Azure safe": no capital letters and no underscores.
[TestInitialize()]
public void Initialize()
{
Utilities MyUtils = new Utilities();
MyUtils.TestInitialize();
}
//************* Init Code *****************
private TestContext testContextInstance;
/// <summary>
///Gets or sets the test context which provides
///information about and functionality for the current test run.
///</summary>
public TestContext TestContext
{
get
{
return testContextInstance;
}
set
{
testContextInstance = value;
}
}
#region Additional test attributes
//
// You can use the following additional attributes as you write your tests:
//
// Use ClassInitialize to run code before running the first test in the class
// [ClassInitialize()]
// public static void MyClassInitialize(TestContext testContext) { }
//
// Use ClassCleanup to run code after all tests in a class have run
// [ClassCleanup()]
// public static void MyClassCleanup() { }
//
// Use TestInitialize to run code before running each test
// [TestInitialize()]
// public void MyTestInitialize() { }
//
// Use TestCleanup to run code after each test has run
// [TestCleanup()]
// public void MyTestCleanup() { }
//
#endregion
//** Basic end-to-end test with minimal rounds and a 1 GB message size ... a smaller size would make it run faster.
[TestMethod]
public void UnitTest_BasicEndtoEnd_Test()
{
//NOTE - the Cleanup section has this name hard coded, so if it changes, update the Cleanup section too
string testName = "unitendtoendtest";
string clientJobName = testName + "clientjob";
string serverName = testName + "server";
string ambrosiaLogDir = ConfigurationManager.AppSettings["AmbrosiaLogDirectory"] + "\\";
string byteSize = "1073741824";
Utilities MyUtils = new Utilities();
//AMB1 - Job
string logOutputFileName_AMB1 = testName + "_AMB1.log";
AMB_Settings AMB1 = new AMB_Settings
{
AMB_ServiceName = clientJobName,
AMB_PortAppReceives = "1000",
AMB_PortAMBSends = "1001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "N",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB1, logOutputFileName_AMB1, AMB_ModeConsts.RegisterInstance);
//AMB2
string logOutputFileName_AMB2 = testName + "_AMB2.log";
AMB_Settings AMB2 = new AMB_Settings
{
AMB_ServiceName = serverName,
AMB_PortAppReceives = "2000",
AMB_PortAMBSends = "2001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "N",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB2, logOutputFileName_AMB2, AMB_ModeConsts.RegisterInstance);
//ImmCoord1
string logOutputFileName_ImmCoord1 = testName + "_ImmCoord1.log";
int ImmCoordProcessID1 = MyUtils.StartImmCoord(clientJobName, 1500, logOutputFileName_ImmCoord1);
//ImmCoord2
string logOutputFileName_ImmCoord2 = testName + "_ImmCoord2.log";
int ImmCoordProcessID2 = MyUtils.StartImmCoord(serverName, 2500, logOutputFileName_ImmCoord2);
//Client Job Call
string logOutputFileName_ClientJob = testName + "_ClientJob.log";
int clientJobProcessID = MyUtils.StartPerfClientJob("1001", "1000", clientJobName, serverName, "1024", "1", logOutputFileName_ClientJob);
// Give it a few seconds to start
Thread.Sleep(2000);
//Server Call
string logOutputFileName_Server = testName + "_Server.log";
int serverProcessID = MyUtils.StartPerfServer("2001", "2000", clientJobName, serverName, logOutputFileName_Server, 1, false);
//Delay until client is done - also check Server just to make sure
bool pass = MyUtils.WaitForProcessToFinish(logOutputFileName_ClientJob, byteSize, 5, false, testName, true); // number of bytes processed
pass = MyUtils.WaitForProcessToFinish(logOutputFileName_Server, byteSize, 5, false, testName, true);
// Stop things so file is freed up and can be opened in verify
MyUtils.KillProcess(clientJobProcessID);
MyUtils.KillProcess(serverProcessID);
MyUtils.KillProcess(ImmCoordProcessID1);
MyUtils.KillProcess(ImmCoordProcessID2);
//Verify AMB
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_AMB1);
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_AMB2);
// Verify Client
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_ClientJob);
// Verify Server
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_Server);
// Verify integrity of Ambrosia logs by replaying
MyUtils.VerifyAmbrosiaLogFile(testName, Convert.ToInt64(byteSize), true, true, AMB1.AMB_Version);
}
//** Basic end-to-end test where a process dies, is restarted, and the run still completes.
[TestMethod]
public void UnitTest_BasicRestartEndtoEnd_Test()
{
//NOTE - the Cleanup section hard codes this name, so if it changes, update the Cleanup section too
string testName = "unitendtoendrestarttest";
string clientJobName = testName + "clientjob";
string serverName = testName + "server";
string ambrosiaLogDir = ConfigurationManager.AppSettings["AmbrosiaLogDirectory"] + "\\";
string byteSize = "1073741824";
Utilities MyUtils = new Utilities();
//AMB1 - Job
string logOutputFileName_AMB1 = testName + "_AMB1.log";
AMB_Settings AMB1 = new AMB_Settings
{
AMB_ServiceName = clientJobName,
AMB_PortAppReceives = "1000",
AMB_PortAMBSends = "1001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "N",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB1, logOutputFileName_AMB1, AMB_ModeConsts.RegisterInstance);
//AMB2
string logOutputFileName_AMB2 = testName + "_AMB2.log";
AMB_Settings AMB2 = new AMB_Settings
{
AMB_ServiceName = serverName,
AMB_PortAppReceives = "2000",
AMB_PortAMBSends = "2001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "N",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB2, logOutputFileName_AMB2, AMB_ModeConsts.RegisterInstance);
//ImmCoord1
string logOutputFileName_ImmCoord1 = testName + "_ImmCoord1.log";
int ImmCoordProcessID1 = MyUtils.StartImmCoord(clientJobName, 1500, logOutputFileName_ImmCoord1);
//ImmCoord2
string logOutputFileName_ImmCoord2 = testName + "_ImmCoord2.log";
int ImmCoordProcessID2 = MyUtils.StartImmCoord(serverName, 2500, logOutputFileName_ImmCoord2);
//Client Job Call
string logOutputFileName_ClientJob = testName + "_ClientJob.log";
int clientJobProcessID = MyUtils.StartPerfClientJob("1001", "1000", clientJobName, serverName, "1024", "1", logOutputFileName_ClientJob);
//Server Call
string logOutputFileName_Server = testName + "_Server.log";
int serverProcessID = MyUtils.StartPerfServer("2001", "2000", clientJobName, serverName, logOutputFileName_Server, 1, false);
// Give it 2 seconds to do something before killing it
Thread.Sleep(2000);
Application.DoEvents(); // without this, the system sees the thread as blocked and raises a warning.
//Kill Server at this point as well as ImmCoord2
MyUtils.KillProcess(serverProcessID);
MyUtils.KillProcess(ImmCoordProcessID2);
//Restart ImmCoord2
string logOutputFileName_ImmCoord2_Restarted = testName + "_ImmCoord2_Restarted.log";
int ImmCoordProcessID2_Restarted = MyUtils.StartImmCoord(serverName, 2500, logOutputFileName_ImmCoord2_Restarted);
// Restart Server Process
string logOutputFileName_Server_Restarted = testName + "_Server_Restarted.log";
int serverProcessID_Restarted = MyUtils.StartPerfServer("2001", "2000", clientJobName, serverName, logOutputFileName_Server_Restarted, 1, false);
//Delay until client is done - also check Server just to make sure
bool pass = MyUtils.WaitForProcessToFinish(logOutputFileName_ClientJob, byteSize, 8, false, testName, true); // number of bytes processed
pass = MyUtils.WaitForProcessToFinish(logOutputFileName_Server_Restarted, byteSize, 8, false, testName, true);
// Stop the processes so the log files are released and can be opened during verification
MyUtils.KillProcess(clientJobProcessID);
MyUtils.KillProcess(serverProcessID_Restarted);
MyUtils.KillProcess(ImmCoordProcessID1);
MyUtils.KillProcess(ImmCoordProcessID2_Restarted);
//Verify AMB
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_AMB1);
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_AMB2);
// Verify Client
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_ClientJob);
// Verify Server
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_Server);
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_Server_Restarted);
// Verify integrity of Ambrosia logs by replaying
MyUtils.VerifyAmbrosiaLogFile(testName, Convert.ToInt64(byteSize), true, true, AMB1.AMB_Version);
}
//****************************
// Basic Active/Active test that kills the primary server
// 1 client
// 3 servers - primary, checkpointing secondary and active secondary (can become primary)
//
// Killing the first server (the primary) causes the active secondary to become the primary;
// restarting the first server then makes it the active secondary.
//
//****************************
[TestMethod]
public void UnitTest_BasicActiveActive_KillPrimary_Test()
{
string testName = "unittestactiveactivekillprimary";
string clientJobName = testName + "clientjob";
string serverName = testName + "server";
string ambrosiaLogDir = ConfigurationManager.AppSettings["AmbrosiaLogDirectory"] + "\\";
string byteSize = "2147481250";
string newPrimary = "NOW I'm Primary";
Utilities MyUtils = new Utilities();
//AMB1 - primary -- in actuality, this is replica #0
string logOutputFileName_AMB1 = testName + "_AMB1.log";
AMB_Settings AMB1 = new AMB_Settings
{
AMB_ServiceName = serverName,
AMB_PortAppReceives = "1000",
AMB_PortAMBSends = "1001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "Y",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB1, logOutputFileName_AMB1, AMB_ModeConsts.RegisterInstance);
//AMB2 - checkpointer
string logOutputFileName_AMB2 = testName + "_AMB2.log";
AMB_Settings AMB2 = new AMB_Settings
{
AMB_ReplicaNumber = "1",
AMB_ServiceName = serverName,
AMB_PortAppReceives = "2000",
AMB_PortAMBSends = "2001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "Y",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB2, logOutputFileName_AMB2, AMB_ModeConsts.AddReplica);
//AMB3 - active secondary
string logOutputFileName_AMB3 = testName + "_AMB3.log";
AMB_Settings AMB3 = new AMB_Settings
{
AMB_ReplicaNumber = "2",
AMB_ServiceName = serverName,
AMB_PortAppReceives = "3000",
AMB_PortAMBSends = "3001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "Y",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB3, logOutputFileName_AMB3, AMB_ModeConsts.AddReplica);
//AMB4 - Job
string logOutputFileName_AMB4 = testName + "_AMB4.log";
AMB_Settings AMB4 = new AMB_Settings
{
AMB_ServiceName = clientJobName,
AMB_PortAppReceives = "4000",
AMB_PortAMBSends = "4001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "N",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB4, logOutputFileName_AMB4, AMB_ModeConsts.RegisterInstance);
//ImmCoord1
string logOutputFileName_ImmCoord1 = testName + "_ImmCoord1.log";
int ImmCoordProcessID1 = MyUtils.StartImmCoord(serverName, 1500, logOutputFileName_ImmCoord1,true, 0);
//ImmCoord2
string logOutputFileName_ImmCoord2 = testName + "_ImmCoord2.log";
int ImmCoordProcessID2 = MyUtils.StartImmCoord(serverName, 2500, logOutputFileName_ImmCoord2,true,1);
//ImmCoord3
string logOutputFileName_ImmCoord3 = testName + "_ImmCoord3.log";
int ImmCoordProcessID3 = MyUtils.StartImmCoord(serverName, 3500, logOutputFileName_ImmCoord3,true,2);
//ImmCoord4
string logOutputFileName_ImmCoord4 = testName + "_ImmCoord4.log";
int ImmCoordProcessID4 = MyUtils.StartImmCoord(clientJobName, 4500, logOutputFileName_ImmCoord4);
//Server Call - primary
string logOutputFileName_Server1 = testName + "_Server1.log";
int serverProcessID1 = MyUtils.StartPerfServer("1001", "1000", clientJobName, serverName, logOutputFileName_Server1, 1, false);
Thread.Sleep(1000); // give a second to make it a primary
//Server Call - checkpointer
string logOutputFileName_Server2 = testName + "_Server2.log";
int serverProcessID2 = MyUtils.StartPerfServer("2001", "2000", clientJobName, serverName, logOutputFileName_Server2, 1, false);
Thread.Sleep(1000); // give a second
//Server Call - active secondary
string logOutputFileName_Server3 = testName + "_Server3.log";
int serverProcessID3 = MyUtils.StartPerfServer("3001", "3000", clientJobName, serverName, logOutputFileName_Server3, 1, false);
//Client Job Call
// Pass in a packet size that isn't a power of 2
string logOutputFileName_ClientJob = testName + "_ClientJob.log";
int clientJobProcessID = MyUtils.StartPerfClientJob("4001", "4000", clientJobName, serverName, "2500", "2", logOutputFileName_ClientJob);
// Give it 3 seconds to do something before killing it
Thread.Sleep(3000);
Application.DoEvents(); // without this, the system sees the thread as blocked and raises a warning.
//Kill Primary Server (server1) at this point as well as ImmCoord1
MyUtils.KillProcess(serverProcessID1);
MyUtils.KillProcess(ImmCoordProcessID1);
// at this point, server3 (active secondary) becomes primary
Thread.Sleep(1000);
//Restart server1 (ImmCoord1 and server) ... this will become active secondary now
string logOutputFileName_ImmCoord1_Restarted = testName + "_ImmCoord1_Restarted.log";
int ImmCoordProcessID1_Restarted = MyUtils.StartImmCoord(serverName, 1500, logOutputFileName_ImmCoord1_Restarted,true,0);
string logOutputFileName_Server1_Restarted = testName + "_Server1_Restarted.log";
int serverProcessID_Restarted1 = MyUtils.StartPerfServer("1001", "1000", clientJobName, serverName, logOutputFileName_Server1_Restarted, 1, false);
//Delay until finished ... check the most recent primary (server3), but also verify the others reach done
bool pass = MyUtils.WaitForProcessToFinish(logOutputFileName_Server3, byteSize, 5, false, testName, true); // Total Bytes received needs to be accurate
pass = MyUtils.WaitForProcessToFinish(logOutputFileName_ClientJob, byteSize, 5, false, testName, true);
pass = MyUtils.WaitForProcessToFinish(logOutputFileName_Server2, byteSize, 5, false, testName, true);
pass = MyUtils.WaitForProcessToFinish(logOutputFileName_Server1_Restarted, byteSize, 5, false, testName, true);
// Also verify ImmCoord has the string to show it is primary
pass = MyUtils.WaitForProcessToFinish(logOutputFileName_ImmCoord3, newPrimary, 5, false, testName, true);
// Stop the processes so the log files are released and can be opened during verification
MyUtils.KillProcess(serverProcessID2);
MyUtils.KillProcess(serverProcessID3);
MyUtils.KillProcess(serverProcessID_Restarted1);
MyUtils.KillProcess(clientJobProcessID);
MyUtils.KillProcess(ImmCoordProcessID2);
MyUtils.KillProcess(ImmCoordProcessID3);
MyUtils.KillProcess(ImmCoordProcessID1_Restarted);
MyUtils.KillProcess(ImmCoordProcessID4);
// Verify cmp files for client and 3 servers
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_ClientJob);
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_Server1);
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_Server1_Restarted);
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_Server2);
MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_Server3);
// Verify integrity of Ambrosia logs by replaying
MyUtils.VerifyAmbrosiaLogFile(testName, Convert.ToInt64(byteSize), true, true, AMB1.AMB_Version);
}
[TestCleanup()]
public void Cleanup()
{
// Kill all ImmortalCoordinator, Job and Server exes
Utilities MyUtils = new Utilities();
MyUtils.UnitTestCleanup();
}
}
}

The diff for this file is not shown because of its large size.


@ -0,0 +1,129 @@
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="15.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="..\packages\MSTest.TestAdapter.1.2.0\build\net45\MSTest.TestAdapter.props" Condition="Exists('..\packages\MSTest.TestAdapter.1.2.0\build\net45\MSTest.TestAdapter.props')" />
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Debug</Configuration>
<Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
<ProjectGuid>{F9AA4F89-945C-4118-99CF-FDC7AA142601}</ProjectGuid>
<OutputType>Library</OutputType>
<AppDesignerFolder>Properties</AppDesignerFolder>
<RootNamespace>AmbrosiaTest</RootNamespace>
<AssemblyName>AmbrosiaTest</AssemblyName>
<TargetFrameworkVersion>v4.6.1</TargetFrameworkVersion>
<FileAlignment>512</FileAlignment>
<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>
<VisualStudioVersion Condition="'$(VisualStudioVersion)' == ''">15.0</VisualStudioVersion>
<VSToolsPath Condition="'$(VSToolsPath)' == ''">$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)</VSToolsPath>
<ReferencePath>$(ProgramFiles)\Common Files\microsoft shared\VSTT\$(VisualStudioVersion)\UITestExtensionPackages</ReferencePath>
<IsCodedUITest>False</IsCodedUITest>
<TestProjectType>UnitTest</TestProjectType>
<NuGetPackageImportStamp>
</NuGetPackageImportStamp>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ">
<DebugSymbols>true</DebugSymbols>
<DebugType>full</DebugType>
<Optimize>false</Optimize>
<OutputPath>bin\Debug\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
<DebugType>pdbonly</DebugType>
<Optimize>true</Optimize>
<OutputPath>bin\Release\</OutputPath>
<DefineConstants>TRACE</DefineConstants>
<ErrorReport>prompt</ErrorReport>
<WarningLevel>4</WarningLevel>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Debug|x64'">
<DebugSymbols>true</DebugSymbols>
<OutputPath>bin\x64\Debug\</OutputPath>
<DefineConstants>DEBUG;TRACE</DefineConstants>
<DebugType>full</DebugType>
<PlatformTarget>x64</PlatformTarget>
<ErrorReport>prompt</ErrorReport>
<CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'">
<OutputPath>bin\x64\Release\</OutputPath>
<DefineConstants>TRACE</DefineConstants>
<Optimize>true</Optimize>
<DebugType>pdbonly</DebugType>
<PlatformTarget>x64</PlatformTarget>
<ErrorReport>prompt</ErrorReport>
<CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
</PropertyGroup>
<ItemGroup>
<Reference Include="System">
<Private>True</Private>
</Reference>
<Reference Include="System.Configuration">
<Private>True</Private>
</Reference>
<Reference Include="System.Core">
<Private>True</Private>
</Reference>
<Reference Include="System.Web.Extensions" />
<Reference Include="System.Windows.Forms">
<Private>True</Private>
</Reference>
</ItemGroup>
<ItemGroup>
<Compile Include="ActiveActive_Test.cs" />
<Compile Include="AsyncTests.cs" />
<Compile Include="BasicEXECalls_Test.cs" />
<Compile Include="EndToEndStressIntegration_Test.cs" />
<Compile Include="AMB_UnitTest.cs" />
<Compile Include="MTF_Test.cs" />
<Compile Include="Properties\AssemblyInfo.cs" />
<Compile Include="Utilities.cs" />
</ItemGroup>
<ItemGroup>
<None Include="app.config">
<SubType>Designer</SubType>
</None>
<None Include="CheckAmbrosiaStatus.ps1">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
</None>
<None Include="CleanUpAzure.ps1">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
</None>
<None Include="CodeCoverage.runsettings">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
<SubType>Designer</SubType>
</None>
<None Include="LaunchCodeCoverage.bat">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
<None Include="LaunchTests.bat">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
<None Include="LaunchUnitTests.bat">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
</None>
</ItemGroup>
<ItemGroup>
<PackageReference Include="Microsoft.CodeCoverage">
<Version>15.9.0</Version>
</PackageReference>
<PackageReference Include="Microsoft.NET.Test.Sdk">
<Version>15.9.0</Version>
</PackageReference>
<PackageReference Include="Microsoft.TestPlatform.TestHost">
<Version>15.9.0</Version>
</PackageReference>
<PackageReference Include="MSTest.TestAdapter">
<Version>1.4.0</Version>
</PackageReference>
<PackageReference Include="MSTest.TestFramework">
<Version>1.4.0</Version>
</PackageReference>
</ItemGroup>
<ItemGroup>
<Folder Include="Cmp\" />
</ItemGroup>
<Import Project="$(VSToolsPath)\TeamTest\Microsoft.TestTools.targets" Condition="Exists('$(VSToolsPath)\TeamTest\Microsoft.TestTools.targets')" />
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
</Project>


@ -0,0 +1,130 @@
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;
using System.Configuration;
using System.Threading;
using System.Windows.Forms; // need this to handle threading issue on sleeps
namespace AmbrosiaTest
{
/// <summary>
/// Async tests are the same as the other AMB tests, except that instead of calling PerformanceTestInterruptible they call
/// the PerformanceTest binaries, since those are the exes that run async.
/// </summary>
[TestClass]
public class AsyncTests
{
//************* Init Code *****************
// NOTE: This bit of code is needed at the top of every "[TestClass]" (one per .cs test file) to get the context / details of the currently running test
// NOTE: Make sure all names are "Azure safe": no capital letters and no underscores.
[TestInitialize()]
public void Initialize()
{
Utilities MyUtils = new Utilities();
MyUtils.TestInitialize();
}
//************* Init Code *****************
//** Basic end-to-end test that starts the job and server and runs a large number of bytes through
//** Only a few rounds and part of
[TestMethod]
public void AMB_Async_Basic_Test()
{
//NOTE - the Cleanup section hard codes this name, so if it changes, update the Cleanup section too
string testName = "asyncbasic";
string clientJobName = testName + "clientjob";
string serverName = testName + "server";
string ambrosiaLogDir = ConfigurationManager.AppSettings["AmbrosiaLogDirectory"] + "\\";
string byteSize = "3221225472";
Utilities MyUtils = new Utilities();
//#*#*# Remove ...
MyUtils.AsyncTestCleanup();
//#*#*#
//AMB1 - Job
string logOutputFileName_AMB1 = testName + "_AMB1.log";
AMB_Settings AMB1 = new AMB_Settings
{
AMB_ImmCoordName = testName + "immcoord1",
AMB_PortAppReceives = "1000",
AMB_PortAMBSends = "1001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "N",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB1, logOutputFileName_AMB1, AMB_ModeConsts.RegisterInstance);
//AMB2
string logOutputFileName_AMB2 = testName + "_AMB2.log";
AMB_Settings AMB2 = new AMB_Settings
{
AMB_ImmCoordName = testName + "immcoord2",
AMB_PortAppReceives = "2000",
AMB_PortAMBSends = "2001",
AMB_ServiceLogPath = ambrosiaLogDir,
AMB_CreateService = "A",
AMB_PauseAtStart = "N",
AMB_PersistLogs = "Y",
AMB_NewLogTriggerSize = "1000",
AMB_ActiveActive = "N",
AMB_Version = "0"
};
MyUtils.CallAMB(AMB2, logOutputFileName_AMB2, AMB_ModeConsts.RegisterInstance);
//ImmCoord1
string logOutputFileName_ImmCoord1 = testName + "_ImmCoord1.log";
int ImmCoordProcessID1 = MyUtils.StartImmCoord(testName + "immcoord1", 1500, logOutputFileName_ImmCoord1);
//ImmCoord2
string logOutputFileName_ImmCoord2 = testName + "_ImmCoord2.log";
int ImmCoordProcessID2 = MyUtils.StartImmCoord(testName + "immcoord2", 2500, logOutputFileName_ImmCoord2);
//Client Job Call
string logOutputFileName_ClientJob = testName + "_ClientJob.log";
int clientJobProcessID = MyUtils.StartAsyncPerfClientJob("1001", "1000", clientJobName, serverName, logOutputFileName_ClientJob);
//Server Call
string logOutputFileName_Server = testName + "_Server.log";
int serverProcessID = MyUtils.StartAsyncPerfServer("2001", "2000", serverName, logOutputFileName_Server);
//Delay until client is done - also check Server just to make sure
// bool pass = MyUtils.WaitForProcessToFinish(logOutputFileName_ClientJob, byteSize, 15, false, testName, true); // number of bytes processed
// pass = MyUtils.WaitForProcessToFinish(logOutputFileName_Server, byteSize, 15, false, testName, true);
// Stop the processes so the log files are released and can be opened during verification
MyUtils.KillProcess(clientJobProcessID);
MyUtils.KillProcess(serverProcessID);
MyUtils.KillProcess(ImmCoordProcessID1);
MyUtils.KillProcess(ImmCoordProcessID2);
//Verify AMB
// MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_AMB1);
// MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_AMB2);
// Verify Client
// MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_ClientJob);
// Verify Server
// MyUtils.VerifyTestOutputFileToCmpFile(logOutputFileName_Server);
// Verify integrity of Ambrosia logs by replaying
// MyUtils.VerifyAmbrosiaLogFile(testName, Convert.ToInt64(byteSize), true, true, AMB1.AMB_Version);
}
[TestCleanup()]
public void Cleanup()
{
// Kill all ImmortalCoordinators, Job and Server exes
Utilities MyUtils = new Utilities();
MyUtils.AsyncTestCleanup();
}
}
}


@ -0,0 +1,26 @@
using System;
using System.Configuration;
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Threading;
namespace AmbrosiaTest
{
[TestClass]
public class BasicEXECalls_Test
{
//************* Init Code *****************
// NOTE: This bit of code is needed at the top of every "[TestClass]" (one per .cs test file) to get the context / details of the currently running test
[TestInitialize()]
public void Initialize()
{
Utilities MyUtils = new Utilities();
MyUtils.TestInitialize();
}
//************* Init Code *****************
//**** Add tests to check EXE error handling??
}
}


@ -0,0 +1,97 @@
###########################################
#
# Script to check the status of Ambrosia Azure tables as well as running processes
#
# Call:
# .\CheckAmbrosiaStatus.ps1 laractiveactiveadd* > AmbrosiaStatus.log 2>&1
#
# Parameters:
# ObjectName - name of the objects in Azure you want to check - can use "*" as wild card ...
#
# Note - might need Microsoft Azure Powershell add in - http://go.microsoft.com/fwlink/p/?linkid=320376&clcid=0x409
# - also need to do this at powershell prompt:
# - Install-Module -Name AzureRM -AllowClobber
# - Install-Module AzureRmStorageTable
# - Get-Module -ListAvailable AzureRM -->> This should show 5.6 (just needs to be above 4.4)
# - This script requires environment variable
# - AZURE_STORAGE_CONN_STRING - Connection string used to connect to the Azure subscription
#
# Info - https://docs.microsoft.com/en-us/azure/cosmos-db/table-storage-how-to-use-powershell
#
###########################################
$ObjectName=$args[0]
# Verify parameter is passed
if ([string]::IsNullOrEmpty($ObjectName)) {
Write-output "ERROR! Missing parameter value. "
Write-output " Please specify the name of the objects that you want checked in the Ambrosia Azure tables."
Write-output ""
Write-output " Note: Wild cards (ie *ImmCoord1*) are supported."
Write-output ""
exit
}
# Verify the connection info is there
if ([string]::IsNullOrEmpty($env:AZURE_STORAGE_CONN_STRING)) {
Write-output "ERROR! Missing environment variable AZURE_STORAGE_CONN_STRING"
Write-output " That env variable containes the needed connection info"
Write-output ""
exit
}
Write-output "------------- Verify Running Ambrosia process -------------"
Write-output ""
Write-output "-- ImmCoord Worker -- "
Get-Process -Name ImmortalCoordinator
Write-output "-- Job.exe -- "
Get-Process -Name Job
Write-output "-- Server.exe -- "
Get-Process -Name Server
Write-output "-- Ambrosia.exe -- "
Get-Process -Name Ambrosia
Write-output "------------- Verify Azure tables -------------"
Write-output ""
Write-output "--- Connection Info ---"
# Get connection info from Env Var
$ConnectionString = $env:AZURE_STORAGE_CONN_STRING
$ConnectionString_Array = $ConnectionString.Split(";")
$ConnectionString_Array2 = $ConnectionString_Array.Split("=")
$storageAccountName = $ConnectionString_Array2[3]
$storageKey = $ConnectionString_Array2[5]+"==" #Split removes the == off the end so put them back
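# Note (added for illustration): the index-based parsing above assumes the standard Azure storage
# connection string layout, e.g. (placeholder values):
#   DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>==;EndpointSuffix=core.windows.net
# With that layout, element [3] of the flattened split is the account name and element [5] is the
# account key minus its trailing "==".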
Write-output " Storage Account:" $storageAccountName
Write-output " Storage Key:" $storageKey
Write-output "----------------"
Write-output ""
# had issues when reusing $ctx for each table call, so use a separate context variable per table
Write-output "------------- Get items from Azure table: craendpointtable filtered on $ObjectName -------------"
$tableName2 = "craendpointtable"
$ctx = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
$storageTable2 = Get-AzureStorageTable -Name $tableName2 -Context $ctx
Get-AzureStorageTableRowAll -table $storageTable2 | where PartitionKey -Like $ObjectName
Write-output ""
Write-output "------------- Get items from Azure table: craconnectiontable filtered on $ObjectName -------------"
$tableName1 = "craconnectiontable"
$ctx1 = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
$storageTable1 = Get-AzureStorageTable -Name $tableName1 -Context $ctx1
Get-AzureStorageTableRowAll -table $storageTable1 | where PartitionKey -Like $ObjectName
Write-output ""
Write-output "------------- Get items from Azure table: cravertextable filtered on $ObjectName -------------"
$tableName3 = "cravertextable"
$ctx2 = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
$storageTable3 = Get-AzureStorageTable -Name $tableName3 -Context $ctx2
Write-output "-- PartitionKey --"
Get-AzureStorageTableRowAll -table $storageTable3 | where PartitionKey -Like $ObjectName
Write-output "-- RowKey -- "
Get-AzureStorageTableRowAll -table $storageTable3 | where RowKey -Like $ObjectName
Write-output ""


@ -0,0 +1,91 @@
###########################################
#
# Script to clean up the Azure tables.
#
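# Call (example invocation, following the same pattern as CheckAmbrosiaStatus.ps1; the
# object name below is only an illustration):
#   .\CleanUpAzure.ps1 unitendtoendtest* > CleanUpAzure.log 2>&1
#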
# Parameters:
# ObjectName - name of the objects in Azure you want to delete - can use "*" as wild card ... so "process" will NOT delete "process1" but "process*" will.
#
# Note - might need Microsoft Azure Powershell add in - http://go.microsoft.com/fwlink/p/?linkid=320376&clcid=0x409
# - also need to do this at powershell prompt:
# - Install-Module -Name AzureRM -AllowClobber
# - Install-Module AzureRmStorageTable
# - Get-Module -ListAvailable AzureRM -->> This should show 5.6 (just needs to be above 4.4)
# - This script requires environment variable
# - AZURE_STORAGE_CONN_STRING - Connection string used to connect to the Azure subscription
#
# Info - https://docs.microsoft.com/en-us/azure/cosmos-db/table-storage-how-to-use-powershell
#
# WARNING - you are deleting items in Azure ... be careful when using this, as you don't want to delete other people's data
#
###########################################
$ObjectName=$args[0]
# Verify parameter is passed
if ([string]::IsNullOrEmpty($ObjectName)) {
Write-Host "ERROR! Missing parameter value. "
Write-Host " Please specify the name of the objects that you want deleted from the Ambrosia Azure tables."
Write-Host
Write-Host " Note: Wild cards (ie *ImmCoord1*) are supported."
Write-Host
exit
}
# Verify the connection info is there
if ([string]::IsNullOrEmpty($env:AZURE_STORAGE_CONN_STRING)) {
Write-Host "ERROR! Missing environment variable AZURE_STORAGE_CONN_STRING"
Write-Host " That env variable containes the needed connection info"
Write-Host
exit
}
Write-host "------------- Clean Up Azure tables and file share -------------"
Write-host
Write-host "--- Connection Info ---"
# Get connection info from Env Var
$ConnectionString = $env:AZURE_STORAGE_CONN_STRING
$ConnectionString_Array = $ConnectionString.Split(";")
$ConnectionString_Array2 = $ConnectionString_Array.Split("=")
$storageAccountName = $ConnectionString_Array2[3]
$storageKey = $ConnectionString_Array2[5]+"==" #Split removes the == off the end so put them back
Write-host " Storage Account:" $storageAccountName
Write-host " Storage Key:" $storageKey
Write-host "----------------"
Write-host
# Get a storage context
$ctx = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageKey
# Delete the tables created by Ambrosia
Write-host "------------- Delete Ambrosia created tables filtered on $ObjectName -------------"
Get-AzureStorageTable $ObjectName* -Context $ctx | Remove-AzureStorageTable -Context $ctx -Force
# Clean up the data in the CRA (Immortal Coordinator) tables
Write-host "------------- Delete items in Azure table: craconnectiontable filtered on $ObjectName -------------"
$tableName = "craconnectiontable"
$storageTable = Get-AzureStorageTable -Name $tableName -Context $ctx
Get-AzureStorageTableRowAll -table $storageTable | where PartitionKey -Like $ObjectName | Remove-AzureStorageTableRow -table $storageTable
Write-host
Write-host "------------- Delete items in Azure table: craendpointtable filtered on $ObjectName -------------"
$tableName = "craendpointtable"
$storageTable = Get-AzureStorageTable -Name $tableName -Context $ctx
Get-AzureStorageTableRowAll -table $storageTable | where PartitionKey -Like $ObjectName | Remove-AzureStorageTableRow -table $storageTable
Write-host
Write-host "------------- Delete items in Azure table: cravertextable filtered on $ObjectName -------------"
$tableName = "cravertextable"
$storageTable = Get-AzureStorageTable -Name $tableName -Context $ctx
Get-AzureStorageTableRowAll -table $storageTable | where PartitionKey -Like $ObjectName | Remove-AzureStorageTableRow -table $storageTable
Get-AzureStorageTableRowAll -table $storageTable | where RowKey -Like $ObjectName | Remove-AzureStorageTableRow -table $storageTable
Write-host
#Write-host "------------- Clean Up Azure File Share -------------"
#Write-host
## TO DO: Not sure what we do here for File Share ... need the proper name and if we even use it any more.
#Remove-AzureStorageShare -Context $ctx.Context -Name Ambrosia_logs


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0397151101704702
Service Received 1024 MB so far
*X* 32768 0.0340913362398561
Service Received 2048 MB so far
*X* 16384 0.0429140937217801
Service Received 3072 MB so far
*X* 8192 0.044828269266056
Service Received 4096 MB so far
*X* 4096 0.0437013958265158
Service Received 5120 MB so far
*X* 2048 0.0427848580228769
Service Received 6144 MB so far
*X* 1024 0.0435964516080739
Service Received 7168 MB so far
*X* 512 0.0421149453394968
Service Received 8192 MB so far
*X* 256 0.0418615391081382
Service Received 9216 MB so far
*X* 128 0.0422496839628578
Service Received 10240 MB so far
*X* 64 0.0381878578345484
Service Received 11264 MB so far
*X* 32 0.0365223712215256
Service Received 12288 MB so far
*X* 16 0.024471523201795
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.00262566224137973
Service Received 1024 MB so far
*X* 32768 0.00272593778374721
Service Received 2048 MB so far
*X* 16384 0.00283242871484678
Service Received 3072 MB so far
*X* 8192 0.00289367864545106
Service Received 4096 MB so far
*X* 4096 0.0029488215993435
Service Received 5120 MB so far
*X* 2048 0.0030094678351647
Service Received 6144 MB so far
*X* 1024 0.00312775833293168
Service Received 7168 MB so far
*X* 512 0.00325242981388185
Service Received 8192 MB so far
*X* 256 0.00336725044042163
Service Received 9216 MB so far
*X* 128 0.00348734963292932
Service Received 10240 MB so far
*X* 64 0.00363445622615781
Service Received 11264 MB so far
*X* 32 0.00375990560432125
Service Received 12288 MB so far
*X* 16 0.00369138825222562
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,31 @@
*X* Server in Entry Point
*X* At checkpoint, received 15345 messages
Received 1024 MB so far
*X* At checkpoint, received 25521 messages
*X* At checkpoint, received 46415 messages
Received 2048 MB so far
*X* At checkpoint, received 104889 messages
Received 3072 MB so far
*X* At checkpoint, received 217055 messages
Received 4096 MB so far
*X* At checkpoint, received 433432 messages
Received 5120 MB so far
*X* At checkpoint, received 846010 messages
Received 6144 MB so far
*X* At checkpoint, received 1631568 messages
Received 7168 MB so far
*X* At checkpoint, received 3112706 messages
Received 8192 MB so far
*X* At checkpoint, received 5875072 messages
Received 9216 MB so far
*X* At checkpoint, received 10884582 messages
Received 10240 MB so far
*X* At checkpoint, received 19590251 messages
*X* At checkpoint, received 33527041 messages
Received 11264 MB so far
*X* At checkpoint, received 59975631 messages
Received 12288 MB so far
*X* At checkpoint, received 100399965 messages
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0372342693059305
Service Received 1024 MB so far
*X* 32768 0.0385322604241199
Service Received 2048 MB so far
*X* 16384 0.0421944485422286
Service Received 3072 MB so far
*X* 8192 0.0448676446434267
Service Received 4096 MB so far
*X* 4096 0.0451691721682756
Service Received 5120 MB so far
*X* 2048 0.044631104418191
Service Received 6144 MB so far
*X* 1024 0.0419209925952016
Service Received 7168 MB so far
*X* 512 0.0446787974456828
Service Received 8192 MB so far
*X* 256 0.0412141830203171
Service Received 9216 MB so far
*X* 128 0.0411807597823824
Service Received 10240 MB so far
*X* 64 0.0379665717699799
Service Received 11264 MB so far
*X* 32 0.0352991449512828
Service Received 12288 MB so far
*X* 16 0.0189336790163664
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0372342693059305
Service Received 1024 MB so far
*X* 32768 0.0385322604241199
Service Received 2048 MB so far
*X* 16384 0.0421944485422286
Service Received 3072 MB so far
*X* 8192 0.0448676446434267
Service Received 4096 MB so far
*X* 4096 0.0451691721682756
Service Received 5120 MB so far
*X* 2048 0.044631104418191
Service Received 6144 MB so far
*X* 1024 0.0419209925952016
Service Received 7168 MB so far
*X* 512 0.0446787974456828
Service Received 8192 MB so far
*X* 256 0.0412141830203171
Service Received 9216 MB so far
*X* 128 0.0411807597823824
Service Received 10240 MB so far
*X* 64 0.0379665717699799
Service Received 11264 MB so far
*X* 32 0.0352991449512828
Service Received 12288 MB so far
*X* 16 0.0189336790163664
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0372342693059305
Service Received 1024 MB so far
*X* 32768 0.0385322604241199
Service Received 2048 MB so far
*X* 16384 0.0421944485422286
Service Received 3072 MB so far
*X* 8192 0.0448676446434267
Service Received 4096 MB so far
*X* 4096 0.0451691721682756
Service Received 5120 MB so far
*X* 2048 0.044631104418191
Service Received 6144 MB so far
*X* 1024 0.0419209925952016
Service Received 7168 MB so far
*X* 512 0.0446787974456828
Service Received 8192 MB so far
*X* 256 0.0412141830203171
Service Received 9216 MB so far
*X* 128 0.0411807597823824
Service Received 10240 MB so far
*X* 64 0.0379665717699799
Service Received 11264 MB so far
*X* 32 0.0352991449512828
Service Received 12288 MB so far
*X* 16 0.0189336790163664
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,18 @@
*X* At checkpoint, received 0 messages
*X* Becoming a primary now
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,31 @@
*X* Server in Entry Point
*X* At checkpoint, received 15274 messages
Received 1024 MB so far
*X* At checkpoint, received 44865 messages
Received 2048 MB so far
*X* At checkpoint, received 101719 messages
Received 3072 MB so far
*X* At checkpoint, received 210993 messages
Received 4096 MB so far
*X* At checkpoint, received 420427 messages
Received 5120 MB so far
*X* At checkpoint, received 820546 messages
Received 6144 MB so far
*X* At checkpoint, received 1581824 messages
Received 7168 MB so far
*X* At checkpoint, received 3014001 messages
Received 8192 MB so far
*X* At checkpoint, received 5697009 messages
Received 9216 MB so far
*X* At checkpoint, received 10556921 messages
Received 10240 MB so far
*X* At checkpoint, received 19006666 messages
*X* At checkpoint, received 32911747 messages
Received 11264 MB so far
*X* At checkpoint, received 58685297 messages
Received 12288 MB so far
*X* At checkpoint, received 98001605 messages
Received 13312 MB so far
Bytes received: 13958643712
DONE
*X* At checkpoint, received 134201344 messages


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,28 @@
*X* 65536 0.011938401677185
Service Received 1024 MB so far
*X* 32768 0.0256902439325655
Service Received 2048 MB so far
*X* 16384 0.0329773938118686
Service Received 3072 MB so far
*X* 8192 0.0341867754134318
Service Received 4096 MB so far
*X* 4096 0.0322751478560026
Service Received 5120 MB so far
*X* 2048 0.0323962145152924
Service Received 6144 MB so far
*X* 1024 0.0318758112274423
Service Received 7168 MB so far
*X* 512 0.0312745254828837
Service Received 8192 MB so far
*X* 256 0.0313614473132311
Service Received 9216 MB so far
*X* 128 0.0307281865159377
Service Received 10240 MB so far
*X* 64 0.0275001700541766
Service Received 11264 MB so far
*X* 32 0.026143227044497
Service Received 12288 MB so far
*X* 16 0.0199191787687513
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0194137376224003
Service Received 1024 MB so far
*X* 32768 0.0256513636667652
Service Received 2048 MB so far
*X* 16384 0.0329779382276125
Service Received 3072 MB so far
*X* 8192 0.0341867754134318
Service Received 4096 MB so far
*X* 4096 0.0322751478560026
Service Received 5120 MB so far
*X* 2048 0.0319429538708673
Service Received 6144 MB so far
*X* 1024 0.0318763184563304
Service Received 7168 MB so far
*X* 512 0.0312750138538147
Service Received 8192 MB so far
*X* 256 0.0313619391894884
Service Received 9216 MB so far
*X* 128 0.0307395208804251
Service Received 10240 MB so far
*X* 64 0.0275243905322372
Service Received 11264 MB so far
*X* 32 0.0261442516720125
Service Received 12288 MB so far
*X* 16 0.0199203695577558
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0197037096045998
Service Received 1024 MB so far
*X* 32768 0.0263670046046546
Service Received 2048 MB so far
*X* 16384 0.0330810451532779
Service Received 3072 MB so far
*X* 8192 0.0343458780977616
Service Received 4096 MB so far
*X* 4096 0.033278876002502
Service Received 5120 MB so far
*X* 2048 0.0340341687943524
Service Received 6144 MB so far
*X* 1024 0.0340761595350376
Service Received 7168 MB so far
*X* 512 0.032616119861005
Service Received 8192 MB so far
*X* 256 0.0326722095396873
Service Received 9216 MB so far
*X* 128 0.031965320746805
Service Received 10240 MB so far
*X* 64 0.0279506400640673
Service Received 11264 MB so far
*X* 32 0.0270926560573903
Service Received 12288 MB so far
*X* 16 0.0201497657534319
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.00193709012697161
Service Received 1024 MB so far
*X* 32768 0.0256902439325655
Service Received 2048 MB so far
*X* 16384 0.0329773938118686
Service Received 3072 MB so far
*X* 8192 0.0341867754134318
Service Received 4096 MB so far
*X* 4096 0.0322751478560026
Service Received 5120 MB so far
*X* 2048 0.0323962145152924
Service Received 6144 MB so far
*X* 1024 0.0318758112274423
Service Received 7168 MB so far
*X* 512 0.0312745254828837
Service Received 8192 MB so far
*X* 256 0.0313614473132311
Service Received 9216 MB so far
*X* 128 0.0307281865159377
Service Received 10240 MB so far
*X* 64 0.0275001700541766
Service Received 11264 MB so far
*X* 32 0.026143227044497
Service Received 12288 MB so far
*X* 16 0.0199191787687513
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,15 @@
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,30 @@
*X* Server in Entry Point
*X* At checkpoint, received 14171 messages
Received 1024 MB so far
*X* At checkpoint, received 42708 messages
Received 2048 MB so far
*X* At checkpoint, received 97718 messages
Received 3072 MB so far
*X* At checkpoint, received 202886 messages
Received 4096 MB so far
*X* At checkpoint, received 405745 messages
Received 5120 MB so far
*X* At checkpoint, received 792738 messages
Received 6144 MB so far
*X* At checkpoint, received 1531830 messages
Received 7168 MB so far
*X* At checkpoint, received 2916029 messages
Received 8192 MB so far
*X* At checkpoint, received 5507677 messages
Received 9216 MB so far
*X* At checkpoint, received 10166540 messages
Received 10240 MB so far
*X* At checkpoint, received 18207780 messages
*X* At checkpoint, received 32205920 messages
Received 11264 MB so far
*X* At checkpoint, received 57340862 messages
Received 12288 MB so far
*X* At checkpoint, received 95683519 messages
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.038306664763234
Service Received 1024 MB so far
*X* 32768 0.0365078823256032
Service Received 2048 MB so far
*X* 16384 0.044555531086648
Service Received 3072 MB so far
*X* 8192 0.0458841577810274
Service Received 4096 MB so far
*X* 4096 0.0413834748923375
Service Received 5120 MB so far
*X* 2048 0.0430707566433207
Service Received 6144 MB so far
*X* 1024 0.0420086607491503
Service Received 7168 MB so far
*X* 512 0.0427186942542399
Service Received 8192 MB so far
*X* 256 0.0416357841911551
Service Received 9216 MB so far
*X* 128 0.0394996562739911
Service Received 10240 MB so far
*X* 64 0.0280272216857018
Service Received 11264 MB so far
*X* 32 0.0160859517442359
Service Received 12288 MB so far
*X* 16 0.0168696966144314
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.00270337008440608
Service Received 1024 MB so far
*X* 32768 0.00285618474507381
Service Received 2048 MB so far
*X* 16384 0.00292668610851371
Service Received 3072 MB so far
*X* 8192 0.00299789015715655
Service Received 4096 MB so far
*X* 4096 0.00309008240105197
Service Received 5120 MB so far
*X* 2048 0.0031819725826987
Service Received 6144 MB so far
*X* 1024 0.00332639601026847
Service Received 7168 MB so far
*X* 512 0.00348096627776707
Service Received 8192 MB so far
*X* 256 0.00366028925952706
Service Received 9216 MB so far
*X* 128 0.00387815761245871
Service Received 10240 MB so far
*X* 64 0.00404752710100596
Service Received 11264 MB so far
*X* 32 0.00419196727818756
Service Received 12288 MB so far
*X* 16 0.00415791841844153
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,3 @@
*X* At checkpoint, received 0 messages
*X* Becoming a primary now
*X* Server in Entry Point


@ -0,0 +1,15 @@
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,30 @@
*X* Server in Entry Point
*X* At checkpoint, received 13320 messages
Received 1024 MB so far
*X* At checkpoint, received 40800 messages
Received 2048 MB so far
*X* At checkpoint, received 93885 messages
Received 3072 MB so far
*X* At checkpoint, received 195561 messages
Received 4096 MB so far
*X* At checkpoint, received 391042 messages
Received 5120 MB so far
*X* At checkpoint, received 762760 messages
Received 6144 MB so far
*X* At checkpoint, received 1469006 messages
Received 7168 MB so far
*X* At checkpoint, received 2800222 messages
Received 8192 MB so far
*X* At checkpoint, received 5272400 messages
Received 9216 MB so far
*X* At checkpoint, received 9744943 messages
Received 10240 MB so far
*X* At checkpoint, received 17448425 messages
*X* At checkpoint, received 31370653 messages
Received 11264 MB so far
*X* At checkpoint, received 55752198 messages
Received 12288 MB so far
*X* At checkpoint, received 93137720 messages
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0352988330728209
Service Received 1024 MB so far
*X* 32768 0.0372857837576519
Service Received 2048 MB so far
*X* 16384 0.0414205054656117
Service Received 3072 MB so far
*X* 8192 0.0432675700018011
Service Received 4096 MB so far
*X* 4096 0.0416941086866515
Service Received 5120 MB so far
*X* 2048 0.0438669371911439
Service Received 6144 MB so far
*X* 1024 0.0416419896236157
Service Received 7168 MB so far
*X* 512 0.0422990703742958
Service Received 8192 MB so far
*X* 256 0.0420296870558185
Service Received 9216 MB so far
*X* 128 0.0396254785217365
Service Received 10240 MB so far
*X* 64 0.0368080119970268
Service Received 11264 MB so far
*X* 32 0.0357323424154478
Service Received 12288 MB so far
*X* 16 0.020614544643097
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0352988330728209
Service Received 1024 MB so far
*X* 32768 0.0372857837576519
Service Received 2048 MB so far
*X* 16384 0.0414205054656117
Service Received 3072 MB so far
*X* 8192 0.0432675700018011
Service Received 4096 MB so far
*X* 4096 0.0416941086866515
Service Received 5120 MB so far
*X* 2048 0.0438669371911439
Service Received 6144 MB so far
*X* 1024 0.0416419896236157
Service Received 7168 MB so far
*X* 512 0.0422990703742958
Service Received 8192 MB so far
*X* 256 0.0420296870558185
Service Received 9216 MB so far
*X* 128 0.0396254785217365
Service Received 10240 MB so far
*X* 64 0.0368080119970268
Service Received 11264 MB so far
*X* 32 0.0357323424154478
Service Received 12288 MB so far
*X* 16 0.020614544643097
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,18 @@
*X* At checkpoint, received 0 messages
*X* Becoming a primary now
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,31 @@
*X* Server in Entry Point
*X* At checkpoint, received 15309 messages
Received 1024 MB so far
*X* At checkpoint, received 44946 messages
Received 2048 MB so far
*X* At checkpoint, received 101841 messages
Received 3072 MB so far
*X* At checkpoint, received 210971 messages
Received 4096 MB so far
*X* At checkpoint, received 420915 messages
Received 5120 MB so far
*X* At checkpoint, received 822076 messages
Received 6144 MB so far
*X* At checkpoint, received 1584903 messages
Received 7168 MB so far
*X* At checkpoint, received 3032207 messages
Received 8192 MB so far
*X* At checkpoint, received 5735455 messages
Received 9216 MB so far
*X* At checkpoint, received 10626311 messages
Received 10240 MB so far
*X* At checkpoint, received 19132276 messages
*X* At checkpoint, received 33094205 messages
Received 11264 MB so far
*X* At checkpoint, received 59042796 messages
Received 12288 MB so far
*X* At checkpoint, received 98813567 messages
Received 13312 MB so far
Bytes received: 13958643712
DONE
*X* At checkpoint, received 134201344 messages


@ -0,0 +1 @@
*X* Server in Entry Point


@ -0,0 +1,15 @@
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0340605672022467
Service Received 1024 MB so far
*X* 32768 0.0670605701732435
Service Received 2048 MB so far
*X* 16384 0.0453120050088253
Service Received 3072 MB so far
*X* 8192 0.0341972331853042
Service Received 4096 MB so far
*X* 4096 0.0448916262944674
Service Received 5120 MB so far
*X* 2048 0.0429219502192423
Service Received 6144 MB so far
*X* 1024 0.0432270526791751
Service Received 7168 MB so far
*X* 512 0.042365198355405
Service Received 8192 MB so far
*X* 256 0.042429440159236
Service Received 9216 MB so far
*X* 128 0.0420089861674365
Service Received 10240 MB so far
*X* 64 0.0385248990294179
Service Received 11264 MB so far
*X* 32 0.0362347157484927
Service Received 12288 MB so far
*X* 16 0.0217966144090439
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0372342693059305
Service Received 1024 MB so far
*X* 32768 0.0385322604241199
Service Received 2048 MB so far
*X* 16384 0.0421944485422286
Service Received 3072 MB so far
*X* 8192 0.0448676446434267
Service Received 4096 MB so far
*X* 4096 0.0451691721682756
Service Received 5120 MB so far
*X* 2048 0.044631104418191
Service Received 6144 MB so far
*X* 1024 0.0419209925952016
Service Received 7168 MB so far
*X* 512 0.0446787974456828
Service Received 8192 MB so far
*X* 256 0.0412141830203171
Service Received 9216 MB so far
*X* 128 0.0411807597823824
Service Received 10240 MB so far
*X* 64 0.0379665717699799
Service Received 11264 MB so far
*X* 32 0.0352991449512828
Service Received 12288 MB so far
*X* 16 0.0189336790163664
Service Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,18 @@
*X* At checkpoint, received 0 messages
*X* Becoming a primary now
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,31 @@
*X* Server in Entry Point
*X* At checkpoint, received 15347 messages
Received 1024 MB so far
*X* At checkpoint, received 44650 messages
Received 2048 MB so far
*X* At checkpoint, received 101171 messages
Received 3072 MB so far
*X* At checkpoint, received 210496 messages
Received 4096 MB so far
*X* At checkpoint, received 420893 messages
Received 5120 MB so far
*X* At checkpoint, received 822972 messages
Received 6144 MB so far
*X* At checkpoint, received 1585846 messages
Received 7168 MB so far
*X* At checkpoint, received 3032062 messages
Received 8192 MB so far
*X* At checkpoint, received 5735455 messages
Received 9216 MB so far
*X* At checkpoint, received 10638915 messages
Received 10240 MB so far
*X* At checkpoint, received 19134267 messages
*X* At checkpoint, received 33071940 messages
Received 11264 MB so far
*X* At checkpoint, received 59044091 messages
Received 12288 MB so far
*X* At checkpoint, received 98644869 messages
Received 13312 MB so far
Bytes received: 13958643712
DONE
*X* At checkpoint, received 134201344 messages


@ -0,0 +1,15 @@
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE


@ -0,0 +1,16 @@
*X* Server in Entry Point
Received 1024 MB so far
Received 2048 MB so far
Received 3072 MB so far
Received 4096 MB so far
Received 5120 MB so far
Received 6144 MB so far
Received 7168 MB so far
Received 8192 MB so far
Received 9216 MB so far
Received 10240 MB so far
Received 11264 MB so far
Received 12288 MB so far
Received 13312 MB so far
Bytes received: 13958643712
DONE

View file

@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.00248077924737343
Service Received 1024 MB so far
*X* 32768 0.0026014211917764
Service Received 2048 MB so far
*X* 16384 0.00266123144137562
Service Received 3072 MB so far
*X* 8192 0.00273922031779549
Service Received 4096 MB so far
*X* 4096 0.00281357534471601
Service Received 5120 MB so far
*X* 2048 0.00289779386959548
Service Received 6144 MB so far
*X* 1024 0.00302031632964737
Service Received 7168 MB so far
*X* 512 0.00315336099572177
Service Received 8192 MB so far
*X* 256 0.00334320241560175
Service Received 9216 MB so far
*X* 128 0.00350726654210998
Service Received 10240 MB so far
*X* 64 0.00367001332123085
Service Received 11264 MB so far
*X* 32 0.00379505896756931
Service Received 12288 MB so far
*X* 16 0.00368113570098266
Service Received 13312 MB so far
Bytes received: 13958643712
DONE

View file

@ -0,0 +1 @@
The CRA instance appears to be down. Restart it and this vertex will be instantiated automatically

View file

@ -0,0 +1 @@
The CRA instance appears to be down. Restart it and this vertex will be instantiated automatically

View file

@ -0,0 +1,9 @@
Bytes per RPC Throughput (GB/sec)
*X* 32768 0.0538327740000449
Service Received 1024 MB so far
*X* 16384 0.0709862409498754
Service Received 2048 MB so far
*X* 8192 0.0695878693925042
Service Received 3072 MB so far
Bytes received: 3221225472
DONE
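
A quick arithmetic check (an observation about these logs, not text from the committed files): the progress counters are binary megabytes, since 3072 * 2^20 = 3,221,225,472 matches the "Bytes received: 3221225472" reported here, and 13312 * 2^20 = 13,958,643,712 matches the totals in the longer runs above.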

View file

@ -0,0 +1,9 @@
Bytes per RPC Throughput (GB/sec)
*X* 32768 0.00943044129323776
Service Received 1024 MB so far
*X* 16384 0.00989352845861985
Service Received 2048 MB so far
*X* 8192 0.00993638850272688
Service Received 3072 MB so far
Bytes received: 3221225472
DONE

View file

@ -0,0 +1,18 @@
*X* At checkpoint, received 0 messages
*X* becoming primary
*X* Server in Entry Point
*X* I'm healthy after 3000 checks at time:10/24/2018 1:15:13 PM
*X* I'm healthy after 6000 checks at time:10/24/2018 1:15:19 PM
*X* At checkpoint, received 30564 messages
Received 1024 MB so far
*X* I'm healthy after 9000 checks at time:10/24/2018 1:15:25 PM
*X* I'm healthy after 12000 checks at time:10/24/2018 1:15:31 PM
*X* At checkpoint, received 89584 messages
Received 2048 MB so far
*X* I'm healthy after 15000 checks at time:10/24/2018 1:15:37 PM
*X* I'm healthy after 18000 checks at time:10/24/2018 1:15:43 PM
*X* At checkpoint, received 202934 messages
*X* I'm healthy after 21000 checks at time:10/24/2018 1:15:49 PM
Received 3072 MB so far
Bytes received: 3221225472
DONE

View file

@ -0,0 +1,18 @@
*X* Server in Entry Point
*X* I'm healthy after 3000 checks at time:10/24/2018 1:20:10 PM
*X* I'm healthy after 6000 checks at time:10/24/2018 1:20:16 PM
Received 1024 MB so far
*X* I'm healthy after 9000 checks at time:10/24/2018 1:20:22 PM
*X* I'm healthy after 12000 checks at time:10/24/2018 1:20:28 PM
Received 2048 MB so far
*X* I'm healthy after 15000 checks at time:10/24/2018 1:20:34 PM
*X* I'm healthy after 18000 checks at time:10/24/2018 1:20:40 PM
*X* I'm healthy after 21000 checks at time:10/24/2018 1:20:46 PM
*X* I'm healthy after 24000 checks at time:10/24/2018 1:20:52 PM
*X* I'm healthy after 27000 checks at time:10/24/2018 1:20:58 PM
*X* I'm healthy after 30000 checks at time:10/24/2018 1:21:04 PM
Received 3072 MB so far
Bytes received: 3221225472
DONE
*X* I'm healthy after 33000 checks at time:10/24/2018 1:21:10 PM
*X* I'm healthy after 36000 checks at time:10/24/2018 1:21:16 PM

View file

@ -0,0 +1 @@
The CRA instance appears to be down. Restart it and this vertex will be instantiated automatically

View file

@ -0,0 +1 @@
The CRA instance appears to be down. Restart it and this vertex will be instantiated automatically

View file

@ -0,0 +1 @@
Bytes per RPC Throughput (GB/sec)

View file

@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0206634241525057
Service Received 1024 MB so far
*X* 32768 0.073691388145091
Service Received 2048 MB so far
*X* 16384 0.0705612561563197
Service Received 3072 MB so far
*X* 8192 0.0691178152488954
Service Received 4096 MB so far
*X* 4096 0.0732169460650954
Service Received 5120 MB so far
*X* 2048 0.0693136438224146
Service Received 6144 MB so far
*X* 1024 0.0702204274692397
Service Received 7168 MB so far
*X* 512 0.0664492306082805
Service Received 8192 MB so far
*X* 256 0.0662917699138886
Service Received 9216 MB so far
*X* 128 0.0662354456549958
Service Received 10240 MB so far
*X* 64 0.062141197888539
Service Received 11264 MB so far
*X* 32 0.0446573648055219
Service Received 12288 MB so far
*X* 16 0.0289017086271066
Service Received 13312 MB so far
Bytes received: 13958643712
DONE

View file

@ -0,0 +1,29 @@
Bytes per RPC Throughput (GB/sec)
*X* 65536 0.0206634241525057
Service Received 1024 MB so far
*X* 32768 0.073691388145091
Service Received 2048 MB so far
*X* 16384 0.0705612561563197
Service Received 3072 MB so far
*X* 8192 0.0691178152488954
Service Received 4096 MB so far
*X* 4096 0.0732169460650954
Service Received 5120 MB so far
*X* 2048 0.0693136438224146
Service Received 6144 MB so far
*X* 1024 0.0702204274692397
Service Received 7168 MB so far
*X* 512 0.0664492306082805
Service Received 8192 MB so far
*X* 256 0.0662917699138886
Service Received 9216 MB so far
*X* 128 0.0662354456549958
Service Received 10240 MB so far
*X* 64 0.062141197888539
Service Received 11264 MB so far
*X* 32 0.0446573648055219
Service Received 12288 MB so far
*X* 16 0.0289017086271066
Service Received 13312 MB so far
Bytes received: 13958643712
DONE

View file

@ -0,0 +1,3 @@
*X* At checkpoint, received 0 messages
*X* Becoming a primary now
*X* Server in Entry Point

View file

@ -0,0 +1,31 @@
*X* Server in Entry Point
*X* At checkpoint, received 13524 messages
*X* Becoming a primary now
Received 1024 MB so far
*X* At checkpoint, received 41185 messages
Received 2048 MB so far
*X* At checkpoint, received 94328 messages
Received 3072 MB so far
*X* At checkpoint, received 196155 messages
Received 4096 MB so far
*X* At checkpoint, received 390990 messages
Received 5120 MB so far
*X* At checkpoint, received 760956 messages
Received 6144 MB so far
*X* At checkpoint, received 1460784 messages
Received 7168 MB so far
*X* At checkpoint, received 2776565 messages
Received 8192 MB so far
*X* At checkpoint, received 5205938 messages
Received 9216 MB so far
*X* At checkpoint, received 9587333 messages
Received 10240 MB so far
*X* At checkpoint, received 17132635 messages
*X* At checkpoint, received 31048176 messages
Received 11264 MB so far
*X* At checkpoint, received 55151322 messages
Received 12288 MB so far
*X* At checkpoint, received 91921367 messages
Received 13312 MB so far
Bytes received: 13958643712
DONE

View file

@ -0,0 +1,31 @@
*X* Server in Entry Point
*X* At checkpoint, received 13524 messages
*X* Becoming a primary now
Received 1024 MB so far
*X* At checkpoint, received 41185 messages
Received 2048 MB so far
*X* At checkpoint, received 94328 messages
Received 3072 MB so far
*X* At checkpoint, received 196155 messages
Received 4096 MB so far
*X* At checkpoint, received 390990 messages
Received 5120 MB so far
*X* At checkpoint, received 760956 messages
Received 6144 MB so far
*X* At checkpoint, received 1460784 messages
Received 7168 MB so far
*X* At checkpoint, received 2776565 messages
Received 8192 MB so far
*X* At checkpoint, received 5205938 messages
Received 9216 MB so far
*X* At checkpoint, received 9587333 messages
Received 10240 MB so far
*X* At checkpoint, received 17132635 messages
*X* At checkpoint, received 31048176 messages
Received 11264 MB so far
*X* At checkpoint, received 55151322 messages
Received 12288 MB so far
*X* At checkpoint, received 91921367 messages
Received 13312 MB so far
Bytes received: 13958643712
DONE

Some files were not shown because too many files changed in this diff. Show more