This commit is contained in:
Marc Greisen 2020-10-21 16:25:45 -07:00 committed by GitHub
Parent 6ca0ca94ea
Commit 3ba0d7bf46
No key found corresponding to this signature
GPG key ID: 4AEE18F83AFDEB23
180 changed files with 26163 additions and 323 deletions

362
.gitignore vendored

@@ -1,8 +1,3 @@
## Ignore Visual Studio temporary files, build results, and
## files generated by popular Visual Studio add-ons.
##
## Get latest from https://github.com/github/gitignore/blob/master/VisualStudio.gitignore
# User-specific files
*.rsuser
*.suo
@@ -10,12 +5,6 @@
*.userosscache
*.sln.docstates
# User-specific files (MonoDevelop/Xamarin Studio)
*.userprefs
# Mono auto generated files
mono_crash.*
# Build results
[Dd]ebug/
[Dd]ebugPublic/
@@ -31,320 +20,67 @@ bld/
[Ll]og/
[Ll]ogs/
# Visual Studio 2015/2017 cache/options directory
.vs/
# Uncomment if you have tasks that create the project's static files in wwwroot
#wwwroot/
# Visual Studio 2017 auto generated files
Generated\ Files/
# MSTest test Results
[Tt]est[Rr]esult*/
[Bb]uild[Ll]og.*
# NUnit
*.VisualState.xml
TestResult.xml
nunit-*.xml
# Build Results of an ATL Project
[Dd]ebugPS/
[Rr]eleasePS/
dlldata.c
# Benchmark Results
BenchmarkDotNet.Artifacts/
# .NET Core
project.lock.json
project.fragment.lock.json
artifacts/
# StyleCop
StyleCopReport.xml
# Files built by Visual Studio
*_i.c
*_p.c
*_h.h
*.ilk
*.meta
*.obj
*.iobj
*.pch
*.pdb
*.ipdb
*.pgc
*.pgd
*.rsp
*.sbr
*.tlb
*.tli
*.tlh
*.tmp
*.tmp_proj
*_wpftmp.csproj
*.log
*.vspscc
*.vssscc
.builds
*.pidb
*.svclog
*.scc
# Chutzpah Test files
_Chutzpah*
# Visual C++ cache files
ipch/
*.aps
*.ncb
*.opendb
*.opensdf
*.sdf
*.cachefile
*.VC.db
*.VC.VC.opendb
# Visual Studio profiler
*.psess
*.vsp
*.vspx
*.sap
# Visual Studio Trace Files
*.e2e
# TFS 2012 Local Workspace
$tf/
# Guidance Automation Toolkit
*.gpState
# ReSharper is a .NET coding add-in
_ReSharper*/
*.[Rr]e[Ss]harper
*.DotSettings.user
# TeamCity is a build add-in
_TeamCity*
# DotCover is a Code Coverage Tool
*.dotCover
# AxoCover is a Code Coverage Tool
.axoCover/*
!.axoCover/settings.json
# Visual Studio code coverage results
*.coverage
*.coveragexml
# NCrunch
_NCrunch_*
.*crunch*.local.xml
nCrunchTemp_*
# MightyMoose
*.mm.*
AutoTest.Net/
# Web workbench (sass)
.sass-cache/
# Installshield output folder
[Ee]xpress/
# DocProject is a documentation generator add-in
DocProject/buildhelp/
DocProject/Help/*.HxT
DocProject/Help/*.HxC
DocProject/Help/*.hhc
DocProject/Help/*.hhk
DocProject/Help/*.hhp
DocProject/Help/Html2
DocProject/Help/html
# Click-Once directory
publish/
# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
# Note: Comment the next line if you want to checkin your web deploy settings,
# but database connection strings (with potential passwords) will be unencrypted
*.pubxml
*.publishproj
# Microsoft Azure Web App publish settings. Comment the next line if you want to
# checkin your Azure Web App publish settings, but sensitive information contained
# in these scripts will be unencrypted
PublishScripts/
# NuGet Packages
*.nupkg
# NuGet Symbol Packages
*.snupkg
# The packages folder can be ignored because of Package Restore
**/[Pp]ackages/*
# except build/, which is used as an MSBuild target.
!**/[Pp]ackages/build/
# Uncomment if necessary however generally it will be regenerated when needed
#!**/[Pp]ackages/repositories.config
# NuGet v3's project.json files produces more ignorable files
*.nuget.props
*.nuget.targets
# Microsoft Azure Build Output
csx/
*.build.csdef
# Microsoft Azure Emulator
ecf/
rcf/
# Windows Store app package directories and files
AppPackages/
BundleArtifacts/
Package.StoreAssociation.xml
_pkginfo.txt
*.appx
*.appxbundle
*.appxupload
# Visual Studio cache files
# files ending in .cache can be ignored
*.[Cc]ache
# but keep track of directories ending in .cache
!?*.[Cc]ache/
# Others
ClientBin/
~$*
.vs/
.ionide/*
/src/**/bin/*
/src/**/obj/*
/src/**/Properties/*
/src/**/.vs/*
/src**/*.fsproj.user
/Scripts/.Defaults.ps1
/cli/.Defaults.ps1
/k8sConfigs/*
*.swp
*~
*.dbmdl
*.dbproj.schemaview
*.jfm
*.pfx
*.publishsettings
orleans.codegen.cs
/src/OrchestratorFunc/OrchestratorFunc/Properties/PublishProfiles/*
appsettings.Development.json
.fake
.ionide
/Scripts/webAppSettings.json
/Scripts/orchestratorSettings.json
/cli/defaults.json
/.vscode/*
/.venv/*
# Including strong name files can present a security risk
# (https://github.com/github/gitignore/pull/2483#issue-259490424)
#*.snk
/packages
/TestResults
# Since there are multiple workflows, uncomment next line to ignore bower_components
# (https://github.com/github/gitignore/pull/1529#issuecomment-104372622)
#bower_components/
/tools/NuGet.exe
/App_Data
/secrets
/data
.secrets
local.settings.json
# RIA/Silverlight projects
Generated_Code/
node_modules
dist
# Backup & report files from converting an old project file
# to a newer Visual Studio version. Backup files are not needed,
# because we have git ;-)
_UpgradeReport_Files/
Backup*/
UpgradeLog*.XML
UpgradeLog*.htm
ServiceFabricBackup/
*.rptproj.bak
# Local python packages
.python_packages/
# SQL Server files
*.mdf
*.ldf
*.ndf
# Python Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Business Intelligence projects
*.rdl.data
*.bim.layout
*.bim_*.settings
*.rptproj.rsuser
*- [Bb]ackup.rdl
*- [Bb]ackup ([0-9]).rdl
*- [Bb]ackup ([0-9][0-9]).rdl
# Microsoft Fakes
FakesAssemblies/
# GhostDoc plugin setting file
*.GhostDoc.xml
# Node.js Tools for Visual Studio
.ntvs_analysis.dat
node_modules/
# Visual Studio 6 build log
*.plg
# Visual Studio 6 workspace options file
*.opt
# Visual Studio 6 auto-generated workspace file (contains which files were open etc.)
*.vbw
# Visual Studio LightSwitch build output
**/*.HTMLClient/GeneratedArtifacts
**/*.DesktopClient/GeneratedArtifacts
**/*.DesktopClient/ModelManifest.xml
**/*.Server/GeneratedArtifacts
**/*.Server/ModelManifest.xml
_Pvt_Extensions
# Paket dependency manager
.paket/paket.exe
paket-files/
# FAKE - F# Make
.fake/
# CodeRush personal settings
.cr/personal
# Python Tools for Visual Studio (PTVS)
# Byte-compiled / optimized / DLL files
__pycache__/
*.pyc
*.py[cod]
*$py.class
/cli/.cache/*
/cli/.vscode/*
/cli/.vscode/*
/cli/raft_sdk/.cache/token_cache.bin
# Cake - Uncomment if you are using it
# tools/**
# !tools/packages.config
/cli/raft-utils/auth/dotnet-core-3.1/*
# Tabs Studio
*.tss
# Telerik's JustMock configuration file
*.jmconfig
# BizTalk build output
*.btp.cs
*.btm.cs
*.odx.cs
*.xsd.cs
# OpenCover UI analysis results
OpenCover/
# Azure Stream Analytics local run output
ASALocalRun/
# MSBuild Binary and Structured Log
*.binlog
# NVidia Nsight GPU debugger configuration file
*.nvuser
# MFractors (Xamarin productivity tool) working folder
.mfractor/
# Local History for Visual Studio
.localhistory/
# BeatPulse healthcheck temp database
healthchecksdb
# Backup folder for Package Reference Convert tool in Visual Studio 2017
MigrationBackup/
# Ionide (cross platform F# VS Code tools) working folder
.ionide/
/cli/defaults-*.json

14
CONTRIBUTING.md Normal file

@@ -0,0 +1,14 @@
# Contributing
This project welcomes contributions and suggestions. Most contributions require you to
agree to a Contributor License Agreement (CLA) declaring that you have the right to,
and actually do, grant us the rights to use your contribution. For details, visit
https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need
to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the
instructions provided by the bot. You will only need to do this once across all repositories using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

23
GeoPol.xml Normal file

@@ -0,0 +1,23 @@
<!-- List of Include Files and folders and exception lists for this repo for GeoPolitical scanning. Contact Joerow for detail. -->
<!-- Consult Global Readiness Notebook @ aka.ms/NExTGeoPol for further details -->
<!-- This file is consumed by PowerShell scripts in the 'health-localization' repo under the LocBuild\GeoPolitical folder(s) -->
<!DOCTYPE varsdefined [
<!ENTITY GitReposFolder "C:\GITs\Repos">
<!ENTITY GitRepoName "Raft">
]>
<GeoPol_Folders>
<!-- List of Folders to include for GeoPolitical scanning -->
<GitRepoName>&GitRepoName;</GitRepoName>
<Component Include="List here folders to Include in a GeoPol Scan">
<!-- . means the entire repo -->
<!-- Use back slash \ to indicate folder path e.g. C:\Temp\Git\ -->
<IncludeFolder>.</IncludeFolder>
</Component>
<Component Exclude="List exceptions here to not be scanned, that have been included above">
<!-- Make sure to consult http://aka.ms/NExtStart if excluding 3rd party or OSS components -->
<!-- Use back slash \ to indicate folder path e.g. C:\Temp\Git\ -->
<ExcludeFolder>.gitignore</ExcludeFolder>
<ExcludeFolder>GeoPol.xml</ExcludeFolder>
</Component>
</GeoPol_Folders>


10859
NOTICE.md Normal file

File diff not shown because it is too large

2
PRIVACY.md Normal file

@@ -0,0 +1,2 @@
## Data Collection.
The software may collect information about you and your use of the software and send it to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the repository. There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at https://go.microsoft.com/fwlink/?LinkID=824704. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.


@@ -1,14 +1,40 @@
 # REST API Fuzz Testing (RAFT)
-# Contributing
+## A self hosted REST API Fuzzing-As-A-Service platform
+RAFT enables painless fuzzing of REST APIs using multiple fuzzers in parallel. Using a single command line
+baked into your CI/CD pipeline, developers can launch fuzz jobs against their services.
-This project welcomes contributions and suggestions. Most contributions require you to agree to a
-Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
-the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
+RAFT has first-class integration with Microsoft Research's [RESTler](https://github.com/microsoft/restler), the first stateful
+fuzzing tool designed to automatically test your REST APIs driven by your swagger specification.
-When you submit a pull request, a CLA bot will automatically determine whether you need to provide
-a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
-provided by the bot. You will only need to do this once across all repos using our CLA.
+RAFT also supports [ZAP](https://www.zaproxy.org/) from OWASP out of the box.
-This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
-For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
-contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
+As a platform, RAFT is designed to host any API fuzzers that are packaged into a docker container.
+These can be configured and used in the system via configuration files and require no code changes to integrate.
+### Getting Started
+This project is designed to run on [Azure](https://azure.microsoft.com).
+To deploy the service, download the CLI release and run `python raft.py service deploy`. See
+the [documentation](docs/deploying/deploying.md) for more details.
+Once deployed, use the samples to try out the service and fuzzers!
+### Documentation
+* [Table of Contents](docs/index.md)
+* [Overview](docs/how-it-works/overview.md)
+* [FAQ](docs/faq.md)
+### Swagger Documentation
+Once the service is created, you can examine the REST interface of the service by browsing to the swagger page at https://\<deploymentName\>-raft-apiservice.azurewebsites.net/swagger
+### Microsoft Open Source Code of Conduct
+https://opensource.microsoft.com/codeofconduct
+### Trademarks
+This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
+### Preferred Languages
+We prefer all communications to be in English.
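The Getting Started instructions above boil down to invoking the CLI's `service deploy` verb; the AzureCLI pipeline steps elsewhere in this commit drive the same entry point non-interactively. A hedged sketch of wrapping that invocation from Python (it assumes the CLI release is unpacked under `cli/` in the working directory; the helper names are illustrative, not part of the CLI):

```python
import subprocess
import sys

def build_deploy_command(defaults_json, secret):
    # Mirrors the pipeline invocation:
    # python cli/raft.py --defaults-context-json ... --secret ... service deploy
    return [sys.executable, 'cli/raft.py',
            '--defaults-context-json', defaults_json,
            '--secret', secret,
            'service', 'deploy']

def deploy_service(defaults_json, secret):
    # Launch the CLI as a child process and report its exit code.
    return subprocess.run(build_deploy_command(defaults_json, secret)).returncode
```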

186
Scripts/Tests/bvt.py Normal file
@@ -0,0 +1,186 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import argparse
import json
import os
import pathlib
import sys
import requests
import time

cli_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', 'cli')
sys.path.append(cli_path)
import raft
from raft_sdk.raft_service import RaftCLI
from raft_sdk.raft_common import RaftDefinitions
from raft_sdk.raft_deploy import azure_function_keys


def webhooks_test_url(subscription_id, resource_group, function):
    keys = azure_function_keys(subscription_id, resource_group, function, "webhooks-trigger-test")
    return f"https://{function}.azurewebsites.net/api/test?{keys['default']}"


def webhook_triggers_results(job_id, test_url):
    webhook_triggers_response = requests.get(test_url + "&jobId=" + job_id)
    if webhook_triggers_response.ok:
        triggers = json.loads(webhook_triggers_response.text)
        for t in triggers:
            j = json.loads(t)
            yield j[0]
    else:
        raise Exception(webhook_triggers_response.text)


def time_span(t_start, t_end):
    return time.strftime("%H:%M:%S", time.gmtime(t_end - t_start))


def bvt(cli, definitions, bvt_host):
    print('Getting available webhook events')
    webhook_events = cli.list_available_webhooks_events()
    try:
        test_url = webhooks_test_url(definitions.subscription, definitions.resource_group, definitions.test_infra)
        for event in webhook_events:
            print(f'Setting webhook for {event}')
            compile_webhook = cli.set_webhooks_subscription('sample-compile', event, test_url)
            fuzz_webhook = cli.set_webhooks_subscription('sample-fuzz', event, test_url)
            added_compile = cli.list_webhooks('sample-compile', event)
            if len(added_compile) == 0:
                raise Exception('Expected sample-compile webhooks not to be empty after creation')
            added_fuzz = cli.list_webhooks('sample-fuzz', event)
            if len(added_fuzz) == 0:
                raise Exception('Expected sample-fuzz webhooks not to be empty after creation')

        t_pre_compile = time.time()
        print('Compile')
        compile_config_path = os.path.abspath(os.path.join(cli_path, 'samples', 'no-authentication-sample', 'sample.restler.compile.json'))
        subs = {
            '{sample.host}': bvt_host
        }
        compile_config = raft.RaftJobConfig(file_path=compile_config_path, substitutions=subs)
        compile_job = cli.new_job(compile_config)
        cli.poll(compile_job['jobId'], 10)

        # calculate compile duration and validate the job list
        t_pre_fuzz = time.time()
        timespan_pre_fuzz = time_span(t_pre_compile, t_pre_fuzz)
        after_compile_pre_fuzz = cli.list_jobs(timespan_pre_fuzz)
        n = 0
        for x in after_compile_pre_fuzz:
            if x['jobId'] == compile_job['jobId']:
                n += 1
                if x['agentName'] == compile_job['jobId']:
                    if x['state'] != 'Completed':
                        raise Exception('Expected job to be in Completed state when retrieving the job list. '
                                        f'{after_compile_pre_fuzz}')
        if n != 3:
            raise Exception('Expected 3 job entries after the compile step'
                            f' for job {compile_job["jobId"]},'
                            f' got {n}:'
                            f' {after_compile_pre_fuzz}')

        print('Fuzz')
        fuzz_config_path = os.path.abspath(os.path.join(cli_path, 'samples', 'no-authentication-sample', 'sample.restler.fuzz.json'))
        subs['{compile.jobId}'] = compile_job['jobId']
        fuzz_config = raft.RaftJobConfig(file_path=fuzz_config_path, substitutions=subs)
        fuzz_config.config['duration'] = '00:20:00'
        fuzz_job = cli.new_job(fuzz_config)
        cli.poll(fuzz_job['jobId'], 10)

        # calculate fuzz duration and validate the job list
        timespan_post_fuzz = time_span(t_pre_fuzz, time.time())
        after_fuzz = cli.list_jobs(timespan_post_fuzz)
        m = 0
        for x in after_fuzz:
            if x['jobId'] == fuzz_job['jobId']:
                m += 1
                if x['agentName'] == fuzz_job['jobId']:
                    if x['state'] != 'Completed':
                        raise Exception('Expected job to be in Completed state when retrieving the job list. '
                                        f'{after_fuzz}')
        if m != 4:
            raise Exception('Expected 4 job entries after the fuzz step'
                            f' for job {fuzz_job["jobId"]},'
                            f' got {m}:'
                            f' {after_fuzz}')

        print('Validating webhook posted triggers')
        compile_webhook_triggers = webhook_triggers_results(compile_job['jobId'], test_url)
        for r in compile_webhook_triggers:
            if r['EventType'] == 'BugFound':
                raise Exception('Compile step produced a BugFound event')

        fuzz_webhook_triggers = webhook_triggers_results(fuzz_job['jobId'], test_url)
        bug_found_events = []
        job_status_events = []
        for r in fuzz_webhook_triggers:
            if r['EventType'] == 'BugFound':
                bug_found_events.append(r)
            elif r['EventType'] == 'JobStatus':
                job_status_events.append(r)
            else:
                raise Exception(f'Unhandled webhook trigger event type {r["EventType"]} : {r}')
        if len(job_status_events) == 0:
            raise Exception('Job did not post any JobStatus webhook events')

        print('Validating that BugFound events match the total bugs found in job status')
        total_bugs_found = 0
        for r in job_status_events:
            if r['Data']['State'] == 'Completed' and r['Data']['AgentName'] != r['Data']['JobId']:
                total_bugs_found += r['Data']['Metrics']['TotalBugBucketsCount']
        print(f'Total bugs found: {total_bugs_found}')
        print(f'Number of BugFound events: {len(bug_found_events)}')
        if total_bugs_found != len(bug_found_events):
            raise Exception('Total bugs found does not match the number of BugFound webhook events: '
                            f'{total_bugs_found} and {len(bug_found_events)}')
    except Exception as ex:
        print(f"FAIL: {ex}")
        raise ex
    finally:
        for event in webhook_events:
            print(f'Cleaning up webhook {event}')
            cli.delete_webhook('sample-compile', event)
            cli.delete_webhook('sample-fuzz', event)
            deleted_compile = cli.list_webhooks('sample-compile', event)
            if len(deleted_compile) > 0:
                raise Exception(f'Expected sample-compile webhooks to be empty after deletion, instead got {deleted_compile}')
            deleted_fuzz = cli.list_webhooks('sample-fuzz', event)
            if len(deleted_fuzz) > 0:
                raise Exception(f'Expected sample-fuzz webhooks to be empty after deletion, instead got {deleted_fuzz}')


if __name__ == "__main__":
    formatter = argparse.ArgumentDefaultsHelpFormatter
    parser = argparse.ArgumentParser(description='bvt', formatter_class=formatter)
    raft.add_defaults_and_secret_args(parser)
    parser.add_argument('--host', required=True)
    args = parser.parse_args()

    if args.defaults_context_json:
        print(f"Loading defaults from command line: {args.defaults_context_json}")
        defaults = json.loads(args.defaults_context_json)
    else:
        with open(args.defaults_context_path, 'r') as defaults_json:
            defaults = json.load(defaults_json)

    definitions = RaftDefinitions(defaults)
    defaults['secret'] = args.secret
    cli = RaftCLI(defaults)
    bvt(cli, definitions, args.host)
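The double decoding in `webhook_triggers_results` above is easy to miss: the test endpoint returns a JSON list whose elements are themselves JSON-encoded strings, each wrapping a one-element list containing the event record. A standalone sketch with hypothetical sample payloads:

```python
import json

# Hypothetical sample payloads shaped like the webhook test endpoint's
# response: a list of JSON-encoded strings, each wrapping a one-element
# list whose first element is the event record.
raw_triggers = [
    json.dumps([{"EventType": "JobStatus", "Data": {"State": "Completed"}}]),
    json.dumps([{"EventType": "BugFound", "Data": {}}]),
]

# Decode twice, as webhook_triggers_results does: parse each string,
# then take the first element of the wrapped list.
events = [json.loads(t)[0] for t in raw_triggers]
event_types = [e["EventType"] for e in events]
```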

@@ -0,0 +1,37 @@
import argparse
import json
import os
import pathlib
import sys

sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..', 'cli'))
import raft
from raft_sdk.raft_service import RaftCLI


def compare_lists(cli):
    expected = ['JobStatus', 'BugFound']
    current = cli.list_available_webhooks_events()
    if expected != current:
        raise Exception(f'Expected {expected} does not match current {current}')
    else:
        print('PASS')


if __name__ == "__main__":
    formatter = argparse.ArgumentDefaultsHelpFormatter
    parser = argparse.ArgumentParser(description='List webhooks', formatter_class=formatter)
    raft.add_defaults_and_secret_args(parser)
    args = parser.parse_args()
    if args.defaults_context_json:
        print(f"Loading defaults from command line: {args.defaults_context_json}")
        defaults = json.loads(args.defaults_context_json)
    else:
        with open(args.defaults_context_path, 'r') as defaults_json:
            defaults = json.load(defaults_json)
    defaults['secret'] = args.secret
    cli = RaftCLI(defaults)
    compare_lists(cli)
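Both test scripts above share the same defaults-loading pattern: prefer an inline JSON context passed on the command line, otherwise fall back to reading a defaults file. A minimal standalone sketch (the function name is illustrative, not part of the SDK):

```python
import json

def load_defaults(context_json=None, context_path=None):
    # Prefer the inline JSON context string if one was supplied;
    # otherwise read the defaults from a JSON file on disk.
    if context_json:
        return json.loads(context_json)
    with open(context_path, 'r') as defaults_file:
        return json.load(defaults_file)
```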

@@ -0,0 +1,35 @@
trigger:
  batch: true
  branches:
    include:
    - '*'

parameters:
- name: variable_group
  displayName: Variable Group
  default: ""
- name: RunBvt
  displayName: Run BVT?
  default: Yes
  values:
  - Yes
  - No

variables:
- template: 'variables/version-variables.yml'
- group: ${{ parameters.variable_group }}
- name: version.revision
  value: $[counter(variables['devRevision'], 1)]
- name: versionNumber
  value: $(version.major).$(version.minor).$(version.revision)

stages:
- template: stages/build/build.yml
- template: stages/publish/publish.yml
  parameters:
    variable_group: ${{ parameters.variable_group }}
- template: stages/deploy/deploy.yml
- template: stages/test/test.yml
  parameters:
    RunBvt: ${{ parameters.RunBvt }}

94
ado/build.security.yml Normal file
@@ -0,0 +1,94 @@
# Some of these tasks will only run on a windows build machine.
# A number of variables need to be defined for this pipeline to work correctly.
#
# BuildConfiguration | Debug
# BuildPlatform      | Any CPU
# isSecurityBuild    | True

pool:
  name: Azure Pipelines
  demands:
  - msbuild
  - visualstudio
  vmImage: 'windows-2019'

jobs:
- job: RaftSecurityBuild
  displayName: 'RAFT Security build'
  steps:
  - task: NuGetToolInstaller@1
    displayName: 'Use NuGet 5.4'
    inputs:
      versionSpec: 5.4
  - task: NuGetCommand@2
    displayName: 'Restore NuGet Packages'
    inputs:
      restoreSolution: src\Raft.sln
  - task: VSBuild@1
    displayName: 'Build solution src\Raft.sln'
    inputs:
      solution: src\Raft.sln
      platform: '$(BuildPlatform)'
      configuration: '$(BuildConfiguration)'
  - task: CopyFiles@2
    displayName: 'Copy Files to: $(Build.ArtifactStagingDirectory)'
    inputs:
      SourceFolder: '$(Build.SourcesDirectory)'
      Contents: '**\bin\$(BuildConfiguration)\**'
      TargetFolder: '$(Build.ArtifactStagingDirectory)'
  - task: ms.vss-governance-buildtask.governance-build-task-component-detection.ComponentGovernanceComponentDetection@0
    displayName: 'Component Detection'
  - task: securedevelopmentteam.vss-secure-development-tools.build-task-binskim.BinSkim@3
    displayName: 'Run BinSkim'
    inputs:
      InputType: Basic
      AnalyzeTarget: '$(Build.ArtifactStagingDirectory)\*.exe;$(Build.ArtifactStagingDirectory)\*.dll'
      AnalyzeVerbose: true
      AnalyzeStatistics: true
      AnalyzeEnvironment: true
    enabled: false
  - task: securedevelopmentteam.vss-secure-development-tools.build-task-credscan.CredScan@2
    displayName: 'Run CredScan'
    inputs:
      outputFormat: sarif
      debugMode: false
    continueOnError: true
  - task: securedevelopmentteam.vss-secure-development-tools.build-task-roslynanalyzers.RoslynAnalyzers@2
    displayName: 'Run Roslyn Analyzers'
  - task: securedevelopmentteam.vss-secure-development-tools.build-task-vulnerabilityassessment.VulnerabilityAssessment@0
    displayName: 'Run Vulnerability Assessment'
  - task: securedevelopmentteam.vss-secure-development-tools.build-task-publishsecurityanalysislogs.PublishSecurityAnalysisLogs@2
    condition: and(succeeded(), eq(variables['isSecurityBuild'], 'True'))
    displayName: 'Publish Security Analysis Logs'
  # For this task there needs to be a pipeline variable `System.AccessToken`.
  # This is a Personal Access Token. These do not have unlimited lifetimes.
  # The current token expires at the end of March 2021.
  - task: corygehr.air-autoassess.uploadScanResults.uploadScanResults@1
    condition: and(succeeded(), eq(variables['isSecurityBuild'], 'True'))
    displayName: 'Upload Scan Results for Analysis'
    inputs:
      areaPathParent: Raft
      uploadUrl: 'https://airbuildscan.azurewebsites.net/api/Upload'
    env:
      SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  - task: securedevelopmentteam.vss-secure-development-tools.build-task-postanalysis.PostAnalysis@1
    condition: and(succeeded(), eq(variables['isSecurityBuild'], 'True'))
    displayName: 'Post Analysis'
    inputs:
      BinSkim: true
      CredScan: true
      RoslynAnalyzers: true

13
ado/production-build.yml Normal file
@@ -0,0 +1,13 @@
trigger:
  batch: true
  branches:
    include:
    - 'main'

variables:
  versionNumber: 1.0.0
  imageTag: 'v1.0'
  imageTagWithBuildDate: $(imageTag)-$(Build.BuildNumber)

stages:
- template: stages/build/build.yml

@@ -0,0 +1,20 @@
trigger:
  batch: true
  branches:
    include:
    - '*'

variables:
- name: devRevision
  value: 1
- name: version.major
  value: 0
- name: version.minor
  value: 7
- name: version.revision
  value: $[counter(variables['devRevision'], 1)]
- name: versionNumber
  value: $(version.major).$(version.minor).$(version.revision)

stages:
- template: stages/build/build.yml

@@ -0,0 +1,36 @@
trigger:
  batch: true
  branches:
    include:
    - '*'

parameters:
- name: variable_group
  displayName: Variable Group
  default: rolling
- name: RunBvt
  displayName: Run BVT?
  default: Yes
  values:
  - Yes
  - No

variables:
- template: 'variables/version-variables.yml'
- group: ${{ parameters.variable_group }}
- name: version.revision
  value: $[counter(variables['devRevision'], 1)]
- name: versionNumber
  value: $(version.major).$(version.minor).$(version.revision)

stages:
- template: stages/build/build.yml
- template: stages/publish/publish.yml
  parameters:
    variable_group: ${{ parameters.variable_group }}
- template: stages/deploy/deploy-infrastructure.yml
- template: stages/deploy/deploy.yml
- template: stages/test/test.yml
  parameters:
    RunBvt: ${{ parameters.RunBvt }}

@@ -0,0 +1,45 @@
stages:
- stage: Build
  jobs:
  - job: CLI
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: steps/cli.yml
  - job: ApiService
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: steps/apiservice.yml
  - job: RESTlerAgent
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: steps/restleragent.yml
  - job: Orchestrator
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: steps/orchestrator.yml
  - job: TestInfra
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: steps/test-infra.yml
  - job: ResultAnalyzer
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: steps/result-analyzer.yml
  - job: AzureAuthUtility
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - template: steps/azure-auth-utility.yml

@@ -0,0 +1,46 @@
# See https://docs.microsoft.com/en-us/dotnet/core/rid-catalog for definitions of the '-r' flag.
steps:
- task: NuGetToolInstaller@1
  displayName: 'Use NuGet 5.4'
  inputs:
    versionSpec: 5.4
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: '**\APIService.sln'
- task: DotNetCoreCLI@2
  displayName: 'APIService'
  inputs:
    command: publish
    publishWebProjects: false
    projects: src/APIService/ApiService/APIService.fsproj
    arguments: '-c release -r linux-musl-x64 /p:version=$(versionNumber)'
    zipAfterPublish: false
- task: DotNetCoreCLI@2
  displayName: 'Run Unit Tests : APIService'
  inputs:
    command: test
    projects: src/APIService/ApiService/APIServiceTests/APIServiceTests.fsproj
    arguments: -v:detailed
# Because we are using a release pipeline we need to publish the Dockerfile
# in the artifacts, since the release pipeline does not have access to the code tree.
- task: CopyFiles@2
  displayName: 'Copy APIService Dockerfile'
  inputs:
    targetFolder: src/APIService/ApiService/bin/release/netcoreapp3.1/linux-musl-x64/publish/
    sourceFolder: src/APIService/ApiService
    contents: Dockerfile
# Create files with version information that will be used in the release pipeline.
- script: echo $(imageTag) > src/APIService/ApiService/bin/release/netcoreapp3.1/linux-musl-x64/publish/imageTag.txt
- script: echo $(imageTagWithBuildDate) > src/APIService/ApiService/bin/release/netcoreapp3.1/linux-musl-x64/publish/imageTagWithBuildDate.txt
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: src/APIService/ApiService/bin/release/netcoreapp3.1/linux-musl-x64/publish
    artifactName: apiservice

@@ -0,0 +1,24 @@
steps:
- task: NuGetToolInstaller@1
  displayName: 'Use NuGet 5.4'
  inputs:
    versionSpec: 5.4
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: '**\RESTlerAgent.sln'
- task: DotNetCoreCLI@2
  displayName: 'Azure Auth utility'
  inputs:
    command: publish
    publishWebProjects: false
    projects: src/Agent/AzureAuth/AzureAuth.fsproj
    arguments: '-c release /p:version=$(versionNumber)'
    zipAfterPublish: false
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: src/Agent/AzureAuth/bin/release/netcoreapp3.1/publish/
    artifactName: AzureAuth

@@ -0,0 +1,15 @@
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.8'
    addToPath: true
- script: |
    pip install pycodestyle
    pycodestyle cli/raft.py
  displayName: 'Run pycodestyle'
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: cli
    artifactName: cli

@@ -0,0 +1,35 @@
steps:
- task: NuGetToolInstaller@1
  displayName: 'Use NuGet 5.4'
  inputs:
    versionSpec: 5.4
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: '**\Orchestrator.sln'
- task: DotNetCoreCLI@2
  displayName: 'Orchestrator'
  inputs:
    command: publish
    publishWebProjects: false
    projects: src/Orchestrator/Orchestrator/Orchestrator.csproj
    arguments: '-c release /p:version=$(versionNumber)'
    zipAfterPublish: false
# Because we are using a deployment job with the orchestrator we need to push the
# Dockerfile in with the artifacts. The deployment job does not check out the source code
# like a normal job.
- task: CopyFiles@2
  displayName: 'Copy Orchestrator Dockerfile'
  inputs:
    targetFolder: src/Orchestrator/Orchestrator/bin/release/netcoreapp3.1/publish/
    sourceFolder: src/Orchestrator
    contents: Dockerfile
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: src/Orchestrator/Orchestrator/bin/release/netcoreapp3.1/publish/
    artifactName: Orchestrator

@@ -0,0 +1,24 @@
steps:
- task: NuGetToolInstaller@1
  displayName: 'Use NuGet 5.4'
  inputs:
    versionSpec: 5.4
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: '**\RESTlerAgent.sln'
- task: DotNetCoreCLI@2
  displayName: 'RESTlerAgent'
  inputs:
    command: publish
    publishWebProjects: false
    projects: src/Agent/RESTlerAgent/RestlerAgent.fsproj
    arguments: '-c release /p:version=$(versionNumber)'
    zipAfterPublish: false
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: src/Agent/RESTlerAgent/bin/release/netcoreapp3.1/publish/
    artifactName: RestlerAgent

@@ -0,0 +1,24 @@
steps:
- task: NuGetToolInstaller@1
  displayName: 'Use NuGet 5.4'
  inputs:
    versionSpec: 5.4
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: '**\RESTlerAgent.sln'
- task: DotNetCoreCLI@2
  displayName: 'Result Analyzer'
  inputs:
    command: publish
    publishWebProjects: false
    projects: src/Agent/RaftResultAnalyzer/RaftResultAnalyzer.fsproj
    arguments: '-c release /p:version=$(versionNumber)'
    zipAfterPublish: false
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: src/Agent/RaftResultAnalyzer/bin/release/netcoreapp3.1/publish/
    artifactName: RaftResultAnalyzer

@@ -0,0 +1,41 @@
steps:
- task: NuGetToolInstaller@1
  displayName: 'Use NuGet 5.4'
  inputs:
    versionSpec: 5.4
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: '**\TestInfra.sln'
- task: DotNetCoreCLI@2
  displayName: 'TestInfra'
  inputs:
    command: publish
    publishWebProjects: false
    projects: src/Test/TestInfraFunc/TestInfraFunc.csproj
    arguments: '-c release /p:version=$(versionNumber)'
    zipAfterPublish: false
- task: CopyFiles@2
  displayName: 'Copy Test Infra Dockerfile'
  inputs:
    targetFolder: src/Test/TestInfraFunc/bin/release/netcoreapp3.1/publish/
    sourceFolder: src/Test/
    contents: Dockerfile
# Because we are using a release pipeline we need to publish the Dockerfile
# in the artifacts, since the release pipeline does not have access to the code tree.
- task: CopyFiles@2
  displayName: 'Copy TestInfraFunc Dockerfile'
  inputs:
    targetFolder: src/Test/TestInfraFunc/bin/release/netcoreapp3.1/publish/
    sourceFolder: src/Test
    contents: Dockerfile
- task: PublishPipelineArtifact@1
  inputs:
    targetPath: src/Test/TestInfraFunc/bin/release/netcoreapp3.1/publish/
    artifactName: TestInfraFunc

@ -0,0 +1,8 @@
stages:
- stage: DeployInfrastructure
jobs:
- job: DeployInfrastructure
pool:
vmImage: 'ubuntu-latest'
steps:
- template: steps/deploy-infrastructure.yml

@ -0,0 +1,8 @@
stages:
- stage: Deploy
jobs:
- job: UploadServiceUtilities_Restart
pool:
vmImage: 'ubuntu-latest'
steps:
- template: steps/upload-utils.yml

@ -0,0 +1,17 @@
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.8'
addToPath: true
- script: pip3 install -r cli/requirements.txt
displayName: Install CLI Python requirements
- task: AzureCLI@2
displayName: "Deploy Service Infrastructure"
inputs:
azureSubscription: $(raft-subscription)
scriptType: 'pscore'
scriptLocation: 'inlineScript'
inlineScript: "python cli/raft.py --defaults-context-json '$(raft-defaults)' --secret $(bvt-secret) --skip-sp-deployment service deploy"

@ -0,0 +1,17 @@
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.8'
addToPath: true
- script: pip3 install -r cli/requirements.txt
displayName: Install CLI Python requirements
- task: AzureCLI@2
displayName: "Restart Services"
inputs:
azureSubscription: $(raft-subscription)
scriptType: 'pscore'
scriptLocation: 'inlineScript'
inlineScript: "python cli/raft.py --defaults-context-json '$(raft-defaults)' --secret $(bvt-secret) service restart"

@ -0,0 +1,22 @@
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.8'
addToPath: true
- script: pip3 install -r cli/requirements.txt
displayName: Install CLI Python requirements
- task: DownloadPipelineArtifact@2
inputs:
artifact: AzureAuth
path: cli/raft-utils/auth/dotnet-core-3.1
- task: AzureCLI@2
displayName: "Upload service utilities"
inputs:
azureSubscription: $(raft-subscription)
scriptType: 'pscore'
scriptLocation: 'inlineScript'
inlineScript: "python cli/raft.py --defaults-context-json '$(raft-defaults)' --secret $(bvt-secret) service upload-utils"

@ -0,0 +1,40 @@
parameters:
- name: variable_group
type: string
stages:
- stage: Publish
jobs:
- job: APIService
pool:
vmImage: 'ubuntu-latest'
steps:
- template: steps/apiservice.yml
# It's a requirement to use a deployment job to specify the environment
# We only need one deployment, as referencing this job via the environment will show
# everything from the whole pipeline.
- deployment: Orchestrator
pool:
vmImage: 'ubuntu-latest'
environment: ${{ parameters.variable_group }}
strategy:
runOnce:
deploy:
steps:
- template: steps/orchestrator.yml
- job: RESTleragent
pool:
vmImage: 'ubuntu-latest'
steps:
- template: steps/restleragent.yml
- job: TestInfra
pool:
vmImage: 'ubuntu-latest'
steps:
- template: steps/test-infra.yml

@ -0,0 +1,16 @@
steps:
- task: DownloadPipelineArtifact@2
inputs:
artifact: apiservice
path: $(Build.SourcesDirectory)/artifacts/apiservice
- task: Docker@2
displayName: 'Publish APIService imageTag'
inputs:
buildContext: $(Build.SourcesDirectory)/artifacts/apiservice/
command: buildAndPush
containerRegistry: $(raft-containerRegistryServiceConnection)
repository: restapifuzztesting/apiservice
Dockerfile: artifacts/apiservice/Dockerfile
tags: |
$(imageTag)

@ -0,0 +1,11 @@
steps:
- task: Docker@2
displayName: 'Publish Orchestrator imageTag'
inputs:
buildContext: $(Pipeline.Workspace)/Orchestrator/
command: buildAndPush
containerRegistry: $(raft-containerRegistryServiceConnection)
repository: restapifuzztesting/orchestrator
Dockerfile: $(Pipeline.Workspace)/Orchestrator/Dockerfile
tags: |
$(imageTag)

@ -0,0 +1,30 @@
steps:
- task: DownloadPipelineArtifact@2
inputs:
artifact: RestlerAgent
path: $(Build.SourcesDirectory)/artifacts/RestlerAgent
- task: DownloadPipelineArtifact@2
inputs:
artifact: RaftResultAnalyzer
path: $(Build.SourcesDirectory)/artifacts/RaftResultAnalyzer
# This authenticates against the service connection which is needed to pull the restler image
# from the repository.
- task: Docker@2
displayName: Login to ACR
inputs:
command: login
containerRegistry: acrrestler
- task: Docker@2
displayName: 'Publish Restler Agent imageTag'
inputs:
command: buildAndPush
buildContext: $(Build.SourcesDirectory)/artifacts/
containerRegistry: $(raft-containerRegistryServiceConnection)
repository: restapifuzztesting/restler-agent
Dockerfile: src/Agent/Dockerfile
tags: |
$(imageTag)

@ -0,0 +1,16 @@
steps:
- task: DownloadPipelineArtifact@2
inputs:
artifact: TestInfraFunc
path: $(Build.SourcesDirectory)/artifacts/TestInfraFunc
- task: Docker@2
displayName: 'Publish Test Infra imageTag'
inputs:
buildContext: $(Build.SourcesDirectory)/artifacts/TestInfraFunc/
command: buildAndPush
containerRegistry: $(raft-containerRegistryServiceConnection)
repository: restapifuzztesting/test-infra-func
Dockerfile: $(Build.SourcesDirectory)/artifacts/TestInfraFunc/Dockerfile
tags: $(imageTag)

@ -0,0 +1,16 @@
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.8'
addToPath: true
- script: pip3 install -r cli/requirements.txt
displayName: Install CLI Python requirements
- task: AzureCLI@2
displayName: "Compile-Fuzz Test"
inputs:
azureSubscription: $(raft-subscription)
scriptType: 'pscore'
scriptLocation: 'inlineScript'
inlineScript: "python Scripts/Tests/bvt.py --defaults-context-json '$(raft-defaults)' --secret $(bvt-secret) --host $(bvt-host)"

@ -0,0 +1,16 @@
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.8'
addToPath: true
- script: pip3 install -r cli/requirements.txt
displayName: Install CLI Python requirements
- task: AzureCLI@2
displayName: "List Webhook Events Test"
inputs:
azureSubscription: $(raft-subscription)
scriptType: 'pscore'
scriptLocation: 'inlineScript'
inlineScript: "python Scripts/Tests/list-available-webhook-events.py --defaults-context-json '$(raft-defaults)' --secret $(bvt-secret)"

ado/stages/test/test.yml Normal file
@ -0,0 +1,14 @@
parameters:
- name: RunBvt
type: string
stages:
- stage: Test
jobs:
- job: BuildVerification
condition: and(succeeded(), eq('${{parameters.RunBvt}}', 'Yes'))
pool:
vmImage: 'ubuntu-latest'
steps:
- template: steps/webhooks.yml
- template: steps/bvt.yml

@ -0,0 +1,11 @@
variables:
- name: version.major
value: 1
- name: version.minor
value: 0
- name: imageTag
value: 'v1.0'
- name: devRevision
value: 0
- name: imageTagWithBuildDate
value : $(imageTag)-$(Build.BuildNumber)

@ -0,0 +1,31 @@
import msal
import os
import json
import sys
def get_token(client_id, tenant_id, secret, scopes, authority_uri):
if authority_uri:
authority = f"{authority_uri}/{tenant_id}"
else:
authority = f"https://login.microsoftonline.com/{tenant_id}"
if not scopes:
scopes = [f"{client_id}/.default"]
app = msal.ConfidentialClientApplication(client_id, authority=authority, client_credential=secret)
return app.acquire_token_for_client(scopes)
def token_from_env_variable(env_variable_name):
auth_params = os.environ.get(f"RAFT_{env_variable_name}") or os.environ.get(env_variable_name)
if auth_params:
auth = json.loads(auth_params)
token = get_token(auth['client'], auth['tenant'], auth['secret'], auth.get('scopes'), auth.get('authorityUri') )
print("Getting MSAL token")
return f'{token["token_type"]} {token["access_token"]}'
else:
print(f"Authentication parameters are not set in environment variable {env_variable_name}")
return None
if __name__ == "__main__":
token = token_from_env_variable(sys.argv[1])
print(token)
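For reference, the environment variable consumed by token_from_env_variable above is expected to hold a JSON blob. The sketch below is a minimal, hypothetical example (the variable name RAFT_MYAUTH and all credential values are placeholders) that mirrors only the parsing and authority-construction logic from the script, without calling MSAL or the network:

```python
import json
import os

# Hypothetical payload; 'client', 'tenant', 'secret' are the fields the script
# reads, while 'scopes' and 'authorityUri' are optional and default below.
os.environ["RAFT_MYAUTH"] = json.dumps({
    "client": "11111111-2222-3333-4444-555555555555",
    "tenant": "contoso.onmicrosoft.com",
    "secret": "<placeholder>",
})

auth = json.loads(os.environ["RAFT_MYAUTH"])
# Mirror the defaulting logic of get_token():
authority = auth.get("authorityUri") or "https://login.microsoftonline.com"
authority = f"{authority}/{auth['tenant']}"
scopes = auth.get("scopes") or [f"{auth['client']}/.default"]
print(authority)
print(scopes)
```

With these inputs the authority resolves to the public Azure AD login endpoint for the tenant, and the scope list defaults to the client id's .default scope.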

@ -0,0 +1 @@
msal~=1.4.3

@ -0,0 +1,13 @@
{
"container" : "{PrivateRegistryRestler}/restler-agent:v0.9.0",
"run" : {
"command" : "/bin/sh",
"arguments" : ["-c",
"dotnet /raft/agent/RestlerAgent.dll --agent-name ${RAFT_CONTAINER_NAME} --job-id ${RAFT_JOB_ID} --task-config-path ${RAFT_WORK_DIRECTORY}/task-config.json --work-directory ${RAFT_WORK_DIRECTORY} --restler-path /RESTler --app-insights-instrumentation-key ${RAFT_APP_INSIGHTS_KEY} --output-sas \"${RAFT_SB_OUT_SAS}\""
]
},
"idle" : {
"command" : "/bin/sh",
"arguments" : ["-c", "echo DebugMode; while true; do sleep 100000; done;"]
}
}

@ -0,0 +1,262 @@
{
"openapi": "3.0.1",
"info": {
"title": "RESTler",
"version": "v1"
},
"paths": {},
"components": {
"schemas": {
"CustomDictionary": {
"type": "object",
"properties": {
"fuzzableString": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of string values used as fuzzing inputs. If null then values are auto-generated",
"nullable": true
},
"fuzzableInt": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of int values used as fuzzing inputs. If null then values are auto-generated",
"nullable": true
},
"fuzzableNumber": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of number values used as fuzzing inputs. If null then values are auto-generated",
"nullable": true
},
"fuzzableBool": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of bool values used as fuzzing inputs. If null then values are auto-generated",
"nullable": true
},
"fuzzableDatetime": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of date-time values used as fuzzing inputs. If null then values are auto-generated",
"nullable": true
},
"fuzzableObject": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of string encoded JSON objects values used as fuzzing inputs",
"nullable": true
},
"fuzzableUuid4": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of UUID4 values used as fuzzing inputs",
"nullable": true
},
"customPayload": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
},
"description": "Map of values to use as replacement of parameters defined in Swagger specifications. For example\r\nif { \"userId\" : [\"a\", \"b\"] } is specified then {userId} in URL path /users/{userId} will be replaced\r\nby \"a\" or by \"b\"",
"nullable": true
},
"customPayloadUuid4Suffix": {
"type": "object",
"additionalProperties": {
"type": "string"
},
"description": "Map of values to use as replacement of parameters defined in swagger. The values will\r\nhave a random suffix added. For example {\"publicIpAddressName\": \"publicIpAddrName-\"} will produce publicIpAddrName-f286a0a069 for\r\npublicIpAddressName parameter defined in Swagger specifications.",
"nullable": true
},
"customPayloadHeader": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
},
"description": "User specified custom headers to pass in every request",
"nullable": true
},
"shadowValues": {
"type": "object",
"additionalProperties": {
"type": "object",
"additionalProperties": {
"type": "array",
"items": {
"type": "string"
}
}
},
"description": "See the RESTler documentation for more information on shadow values",
"nullable": true
}
},
"additionalProperties": false
},
"CompileConfiguration": {
"type": "object",
"properties": {
"inputJsonGrammarPath": {
"type": "string",
"description": "Path to a JSON grammar to use for compilation.\r\nIf set then the JSON grammar is used for compilation instead of the Swagger specification",
"nullable": true
},
"inputFolderPath": {
"type": "string",
"description": "Grammar is produced by a prior compile step. The compile step\r\nfile share is mounted and set here. Agent will not modify\r\nthis share. Agent will make a copy of all needed files to its work directory\r\nand re-run compile with data passed through this folder.",
"nullable": true
},
"readOnlyFuzz": {
"type": "boolean",
"description": "When true, only fuzz the GET requests"
},
"allowGetProducers": {
"type": "boolean",
"description": "Allows GET requests to be considered.\r\nThis option is present for debugging, and should be\r\nset to 'false' by default.\r\nIn limited cases when GET is a valid producer, the user\r\nshould add an annotation for it."
},
"useRefreshableToken": {
"type": "boolean",
"description": "Use refreshable token for authenticating with service under test"
},
"mutationsSeed": {
"type": "integer",
"description": "Use the seed to generate random values for empty/null customDictionary fields.\r\nIf not set then default hard-coded RESTler values are used for populating customDictionary fields",
"format": "int64",
"nullable": true
},
"customDictionary": {
"$ref": "#/components/schemas/CustomDictionary"
}
},
"additionalProperties": false,
"description": "User-specified RESTler compiler configuration"
},
"TargetEndpointConfiguration": {
"type": "object",
"properties": {
"ip": {
"type": "string",
"description": "The IP of the endpoint being fuzzed",
"nullable": true
},
"port": {
"type": "integer",
"description": "The port of the endpoint being fuzzed. Defaults to 443.",
"format": "int32"
}
},
"additionalProperties": false,
"description": "Configuration of the endpoint under test"
},
"RunConfiguration": {
"type": "object",
"properties": {
"grammarPy": {
"type": "string",
"description": "Path to grammar py relative to compile folder path. If not set then default \"grammar.py\" grammar is assumed",
"nullable": true
},
"inputFolderPath": {
"type": "string",
"description": "For Test or Fuzz tasks: Grammar is produced by the compile step. The compile step\r\nfile share is mounted and set here. Agent will not modify\r\nthis share. Agent will make a copy of all needed files to its work directory.\r\nFor Replay task: path to RESTler Fuzz or Test run that contains bug buckets to replay",
"nullable": true
},
"targetEndpointConfiguration": {
"$ref": "#/components/schemas/TargetEndpointConfiguration"
},
"producerTimingDelay": {
"type": "integer",
"description": "The delay in seconds after invoking an API that creates a new resource",
"format": "int32"
},
"useSsl": {
"type": "boolean",
"description": "Use SSL when connecting to the server"
},
"pathRegex": {
"type": "string",
"description": "Path regex for filtering tested endpoints",
"nullable": true
}
},
"additionalProperties": false,
"description": "RESTler job Test, Fuzz or Replay configuration"
},
"ReplayConfiguration": {
"type": "object",
"properties": {
"replayBugBucketsPaths": {
"type": "array",
"items": {
"type": "string"
},
"description": "List of paths to RESTler folder runs to replay (names of folders are assigned when mounted readonly/readwrite file share mounts).\r\nIf path is a folder, then all bug buckets in the folder are replayed.\r\nIf path is a bug_bucket file - then only that file is replayed.\r\nIf empty - then replay all bugs under RunConfiguration.previousStepOutputFolderPath.",
"nullable": true
},
"runConfiguration": {
"$ref": "#/components/schemas/RunConfiguration"
}
},
"additionalProperties": false,
"description": "RESTler configuration for replaying request sequences that triggered a reproducible bug",
},
"AgentConfiguration": {
"type": "object",
"properties": {
"resultsAnalyzerReportTimeSpanInterval": {
"type": "string",
"description": "How often to run result analyzer against RESTler logs. Default is every 1 minute.\r\nIf not set then result analyzer will run only once after RESTler task is over.",
"format": "date-span",
"nullable": true
}
},
"additionalProperties": false,
"description": "Configure behaviour of RESTler agent"
},
"RESTler": {
"type": "object",
"properties": {
"task": {
"type": "string",
"description": "Can be compile, fuzz, test, replay",
"nullable": false
},
"compileConfiguration": {
"$ref": "#/components/schemas/CompileConfiguration"
},
"runConfiguration": {
"$ref": "#/components/schemas/RunConfiguration"
},
"replayConfiguration": {
"$ref": "#/components/schemas/ReplayConfiguration"
},
"agentConfiguration": {
"$ref": "#/components/schemas/AgentConfiguration"
}
},
"additionalProperties": false,
"description": "RESTler payload"
}
}
}
}

@ -0,0 +1,13 @@
{
"container" : "owasp/zap2docker-stable",
"run" : {
"command" : "bash",
"arguments" : ["-c",
"touch /.dockerenv; cd $RAFT_RUN_DIRECTORY; ln -s $RAFT_WORK_DIRECTORY /zap/wrk; python3 run.py install; python3 run.py" ]
},
"idle" : {
"command" : "bash",
"arguments" : ["-c", "echo DebugMode; while true; do sleep 100000; done;"]
}
}

@ -0,0 +1,2 @@
applicationinsights
azure-servicebus

@ -0,0 +1,48 @@
import json
import os
import subprocess
import sys
work_directory = os.environ['RAFT_WORK_DIRECTORY']
run_directory = os.environ['RAFT_RUN_DIRECTORY']
def auth_token(init):
with open(os.path.join(work_directory, "task-config.json"), 'r') as task_config:
config = json.load(task_config)
auth_config = config.get("authenticationMethod")
if auth_config:
if auth_config.get("TxtToken"):
token = os.environ.get(f"RAFT_{auth_config['TxtToken']}") or os.environ.get(auth_config["TxtToken"])
return token
elif auth_config.get("CommandLine"):
# the command's standard output is the token itself
return subprocess.getoutput(auth_config.get("CommandLine"))
elif auth_config.get("MSAL"):
msal_dir = os.path.join(run_directory, "..", "..", "auth", "python3", "msal")
if init:
print("Installing MSAL requirements")
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", os.path.join(msal_dir, "requirements.txt")])
else:
print("Retrieving MSAL token")
sys.path.append(msal_dir)
authentication_environment_variable = auth_config["MSAL"]
import msal_token
token = msal_token.token_from_env_variable( authentication_environment_variable )
if token:
print("Retrieved MSAL token")
return token
else:
print("Failed to retrieve MSAL token")
return None
else:
print(f'Unhandled authentication configuration {auth_config}')
return None
if __name__ == "__main__":
if len(sys.argv) == 2 and sys.argv[1] == "install":
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", os.path.join(run_directory, "requirements.txt")])
auth_token(True)
else:
token = auth_token(False)
import scan
scan.run(token)

@ -0,0 +1,159 @@
import os
import sys
import json
import shutil
import time
import io
import logging
from logging import StreamHandler
from applicationinsights import TelemetryClient
from azure.servicebus import TopicClient, Message
from contextlib import redirect_stdout
class RaftUtils():
def __init__(self):
work_directory = os.environ['RAFT_WORK_DIRECTORY']
with open(os.path.join(work_directory, 'task-config.json'), 'r') as task_config:
self.config = json.load(task_config)
connection_str = os.environ['RAFT_SB_OUT_SAS']
self.topic_client = TopicClient.from_connection_string(connection_str)
self.telemetry_client = TelemetryClient(instrumentation_key=os.environ['RAFT_APP_INSIGHTS_KEY'])
self.job_id = os.environ['RAFT_JOB_ID']
self.container_name = os.environ['RAFT_CONTAINER_NAME']
self.telemetry_properties = {
"jobId" : self.job_id,
"taskIndex" : os.environ['RAFT_TASK_INDEX'],
"containerName" : self.container_name
}
def report_status(self, state, details):
m = {
'eventType' : 'JobStatus',
'message': {
'tool' : 'ZAP',
'jobId' : self.job_id,
'agentName': self.container_name,
'details': details,
'utcEventTime' : time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime()),
'state' : state
}
}
msg = Message(str.encode(json.dumps(m)))
self.topic_client.send(msg)
def report_status_created(self, details=None):
self.report_status('Created', details)
def report_status_running(self, details=None):
self.report_status('Running', details)
def report_status_error(self, details=None):
self.report_status('Error', details)
def report_status_completed(self, details=None):
self.report_status('Completed', details)
def log_trace(self, trace):
self.telemetry_client.track_trace(trace, properties=self.telemetry_properties)
def log_exception(self):
self.telemetry_client.track_exception(properties=self.telemetry_properties)
def flush(self):
self.telemetry_client.flush()
def get_swagger_target(self):
swagger = self.config.get("swaggerLocation")
if swagger and swagger.get("url"):
return swagger["url"]
elif swagger.get("filePath"):
return swagger["filePath"]
zap_dir = '/zap'
sys.path.append(zap_dir)
# the module name contains a dash, so it cannot be imported with a plain 'import' statement
zap = __import__("zap-api-scan")
utils = RaftUtils()
class StatusReporter(StreamHandler):
def __init__(self):
StreamHandler.__init__(self)
self.last_txt = None
def emit(self, record):
txt = self.format(record)
if txt != self.last_txt:
self.last_txt = txt
progress='Active Scan progress %:'
i = txt.find(progress)
if i != -1:
utils.report_status_running([txt[i:]])
def run_zap(token):
target = utils.get_swagger_target()
if token:
utils.log_trace('Authentication token is set')
auth = ('-config replacer.full_list(0).description=auth1'
' -config replacer.full_list(0).enabled=true'
' -config replacer.full_list(0).matchtype=REQ_HEADER'
' -config replacer.full_list(0).matchstr=Authorization'
' -config replacer.full_list(0).regex=false'
f' -config replacer.full_list(0).replacement="{token}"')
zap_auth_config = ['-z', auth]
else:
utils.log_trace('Authentication token is not set')
zap_auth_config = []
os.chdir(zap_dir)
r = 0
try:
print('Removing zap.out if exists')
os.remove('/zap/zap.out')
except FileNotFoundError:
pass
try:
utils.log_trace("Starting ZAP")
utils.report_status_running()
status_reporter = StatusReporter()
logger = logging.getLogger()
logger.addHandler(status_reporter)
zap.main(['-t', target, '-f', 'openapi', '-J', 'report.json', '-r', 'report.html', '-w', 'report.md', '-x', 'report.xml', '-d'] + zap_auth_config)
except SystemExit as e:
r = e.code
utils.log_trace(f"ZAP exited with exit code: {r}")
shutil.copy('/zap/zap.out', '/zap/wrk/zap.out')
utils.report_status_completed()
if r < 2:
return 0
else:
return r
def run(token):
try:
utils.report_status_created()
return run_zap(token)
except Exception as ex:
utils.log_exception()
utils.report_status_error([f"{ex}"])
raise
finally:
utils.flush()
if __name__ == "__main__":
if len(sys.argv) == 2:
run(sys.argv[1])
else:
run(None)

cli/raft.py Normal file
@ -0,0 +1,460 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import argparse
import json
import os
import uuid
import textwrap
import raft_sdk.raft_common
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
from raft_sdk.raft_deploy import RaftServiceCLI
script_dir = os.path.dirname(os.path.abspath(__file__))
fresh_defaults = json.dumps(
{
"subscription": "",
"deploymentName": "",
"region": "",
"metricsOptIn": True,
"useAppInsights": True,
"registry": "mcr.microsoft.com"
}, indent=4)
defaults_help = '''
subscription - The Azure Subscription ID to which RAFT is deployed
deploymentName - RAFT deployment name
deployment name requirements:
- only letters or numbers
- at most 20 characters long
- no capital letters
- no dashes
region - Region to deploy RAFT (e.g. westus2)
https://docs.microsoft.com/en-us/azure/container-instances/container-instances-region-availability
to pick the optimal region for your deployment.
All jobs will be deployed by default in the same
region as your service deployment
metricsOptIn - allow Microsoft to collect anonymized metrics from the deployment.
useAppInsights - deploy AppInsights and use it to write all service logs
registry - registry which stores service images.
Default: mcr.microsoft.com
-------------------------
To apply any changes made to the defaults.json file,
please run 'raft.py service deploy'
-------------------------
'''
def run(args):
def validate(defaults):
s = defaults.get('subscription')
d = defaults.get('deploymentName')
r = defaults.get('region')
return (s and d and r)
defaults_path = args['defaults_context_path']
defaults_json = args['defaults_context_json']
if defaults_json:
print(f"Loading defaults from command line: {defaults_json}")
defaults = json.loads(defaults_json)
if not validate(defaults):
print(defaults_help)
return
# check if defaults.json is set in the context and it exists
elif os.path.isfile(defaults_path):
with open(defaults_path, 'r') as d:
defaults = json.load(d)
if not validate(defaults):
print(defaults_help)
return
else:
with open(defaults_path, 'w') as d:
d.write(fresh_defaults)
print(defaults_help)
return
defaults['secret'] = args.get('secret')
if 'metricsOptIn' not in defaults:
defaults['metricsOptIn'] = True
if defaults.get('useAppInsights') is None:
defaults['useAppInsights'] = True
cli_action = args.get('logout')
service_action = args.get('service-action')
job_action = args.get('job-action')
webhook_action = args.get('webhook-action')
if cli_action == 'logout':
raft_sdk.raft_common.delete_token_cache()
elif service_action:
service_cli = RaftServiceCLI(defaults, defaults_path)
if service_action == 'restart':
service_cli.restart()
elif service_action == 'info':
info = service_cli.service_info()
print(info)
elif service_action == 'deploy':
skip_sp_deployment = args.get('skip_sp_deployment')
service_cli.deploy(
args['sku'], skip_sp_deployment and skip_sp_deployment is True)
elif service_action == 'upload-utils':
utils_file_share = f'{uuid.uuid4()}'
service_cli.upload_utils(utils_file_share)
service_cli.restart()
else:
raise Exception(f'Unhandled service argument: {service_action}')
elif job_action:
cli = RaftCLI()
if job_action == 'create':
json_config_path = args.get('file')
if json_config_path is None:
ArgumentRequired('--file')
substitutionDictionary = {}
substitutionParameter = args.get('substitute')
if substitutionParameter:
substitutionDictionary = json.loads(substitutionParameter)
job_config = RaftJobConfig(file_path=json_config_path,
substitutions=substitutionDictionary)
print(job_config.config)
duration = args.get('duration')
if duration:
job_config.config['duration'] = duration
metadata = args.get('metadata')
if metadata:
job_config.config['metadata'] = json.loads(metadata)
newJob = cli.new_job(job_config, args.get('region'))
poll_interval = args.get('poll')
if poll_interval:
print(newJob)
cli.poll(newJob['jobId'], poll_interval)
else:
print(newJob)
elif job_action == 'status':
job_id = args.get('job_id')
if job_id is None:
ArgumentRequired('--job-id')
job_status = cli.job_status(job_id)
poll_interval = args.get('poll')
cli.print_status(job_status)
if poll_interval:
cli.poll(job_id, poll_interval)
elif job_action == 'list':
jobs_list = cli.list_jobs(args['look_back_hours'])
# group status entries by job id ('grouped' avoids shadowing the built-in 'sorted')
grouped = {}
for job in jobs_list:
grouped.setdefault(job['jobId'], []).append(job)
for s in grouped:
cli.print_status(grouped[s])
print()
print(f"Total number of jobs: {len(grouped)}")
elif job_action == 'update':
json_config_path = args.get('file')
if json_config_path is None:
ArgumentRequired('--file')
substitutionDictionary = {}
substitutionParameter = args.get('substitute')
if substitutionParameter:
substitutionDictionary = json.loads(substitutionParameter)
job_update = cli.update_job(
args.get('job-id'),
RaftJobConfig(file_path=json_config_path,
substitutions=substitutionDictionary))
print(job_update)
elif job_action == 'results':
job_id = args.get('job_id')
if job_id is None:
ArgumentRequired('--job-id')
url = cli.result_url(job_id)
print(url)
elif job_action == 'delete':
job_id = args.get('job_id')
if job_id is None:
ArgumentRequired('--job-id')
job_delete = cli.delete_job(job_id)
print(job_delete)
elif webhook_action:
cli = RaftCLI()
if webhook_action == 'events':
webhook_events = cli.list_available_webhooks_events()
print(webhook_events)
elif webhook_action == 'create':
name_parameter = args.get('name')
if name_parameter is None:
ArgumentRequired('--name')
event_parameter = args.get('event')
if event_parameter is None:
ArgumentRequired('--event')
uri_parameter = args.get('url')
if uri_parameter is None:
ArgumentRequired('--url')
webhook_create_or_update = cli.set_webhooks_subscription(
name_parameter, event_parameter, uri_parameter)
print(webhook_create_or_update)
elif webhook_action == 'test':
name_parameter = args.get('name')
if name_parameter is None:
ArgumentRequired('--name')
event_parameter = args.get('event')
if event_parameter is None:
ArgumentRequired('--event')
webhook_test = cli.test_webhook(name_parameter, event_parameter)
print(webhook_test)
elif webhook_action == 'delete':
name_parameter = args.get('name')
if name_parameter is None:
ArgumentRequired('--name')
event_parameter = args.get('event')
if event_parameter is None:
ArgumentRequired('--event')
webhook_delete = cli.delete_webhook(name_parameter,
event_parameter)
print(webhook_delete)
elif webhook_action == 'list':
name_parameter = args.get('name')
if name_parameter is None:
ArgumentRequired('--name')
# the event name is not required.
# If not supplied all events will be listed.
event_parameter = args.get('event')
webhooks_list = cli.list_webhooks(name_parameter, event_parameter)
print(webhooks_list)
else:
raise Exception('Expected arguments could not be found in args')
def ArgumentRequired(name):
print(f'The {name} parameter is required')
quit()
def add_defaults_and_secret_args(parser):
parser.add_argument(
"--defaults-context-path",
default=os.path.join(script_dir, 'defaults.json'),
help="Path to the defaults.json",
required=False
)
parser.add_argument(
"--defaults-context-json",
default=None,
help="JSON blob containing service configuration",
required=False
)
parser.add_argument('--secret', required=False)
parser.add_argument('--skip-sp-deployment',
required=False,
action='store_true')
# pip install -r requirements.txt
def main():
parser = argparse.ArgumentParser(
description='RAFT CLI',
formatter_class=argparse.RawTextHelpFormatter)
sub_parser = parser.add_subparsers()
cli_parser = sub_parser.add_parser(
'cli',
formatter_class=argparse.RawTextHelpFormatter)
cli_parser.add_argument('logout',
help=textwrap.dedent('''\
Clears the cache so re-authentication will be needed to use the CLI again.'''))
service_parser = sub_parser.add_parser(
'service',
formatter_class=argparse.RawTextHelpFormatter)
service_parser.add_argument(
'service-action',
choices=['deploy', 'restart', 'info', 'upload-utils'],
help=textwrap.dedent('''\
deploy - Deploys the service
restart - Restarts the service updating the
docker containers if a new one is available
info - Show the version of the service and the last time it was started
upload-utils - Uploads the tool definitions to the service
'''))
allowed_skus = [
'B1', 'B2', 'B3', 'D1', 'F1', 'FREE',
'I1', 'I2', 'I3', 'P1V2', 'P2V2', 'P3V2',
'PC2', 'PC3', 'PC4', 'S1', 'S2', 'S3', 'SHARED']
service_parser.add_argument(
'--sku', default='B2',
choices=allowed_skus, required=False, help='Default value: B2')
# Add the positional argument.
job_parser = sub_parser.add_parser(
'job',
formatter_class=argparse.RawTextHelpFormatter)
job_parser.add_argument(
'job-action',
choices=['create', 'update', 'delete', 'status', 'list', 'results'],
help=textwrap.dedent('''\
create - Create a new job
--file is required
update - Update an existing job
--file is required
delete - Delete a job
--job-id is required
status - Get the status of a job
--job-id is required
list - Get a list of jobs
Use --look-back-hours to specify how far back to look
the default is 24 hours
results - Get the Uri of the job results
'''))
# Add the flag arguments
# This is a list of all the possible flags that can be
# used with any of the job choices.
# When parsing the job choice, the "required" flags will be enforced.
job_parser.add_argument(
'--file',
help=textwrap.dedent('''\
File path to the job definition file.
Required for 'create' and 'update' commands'''))
job_parser.add_argument(
'--poll', type=int,
help='Interval in seconds used to poll for job status')
job_parser.add_argument(
'--look-back-hours',
default=24,
help='The number of hours to look back for job status')
job_parser.add_argument(
'--duration',
help='The duration in hours that a job should run')
job_parser.add_argument(
'--region',
help='''\
Optional parameter to run a job in a specific region.
If no region is specified the job runs in the same
region where the service is deployed
https://docs.microsoft.com/en-us/azure/container-instances/container-instances-region-availability
''')
job_parser.add_argument(
'--metadata',
help='Arbitrary key/value pairs that will be included in webhooks.')
job_parser.add_argument(
'--job-id',
help="Id of the job, required for 'status' and 'delete'")
job_parser.add_argument(
'--substitute',
help=textwrap.dedent('''\
Dictionary of values to find and replace in the --file.
Should be in the form {"find1":"replace1", "find2":"replace2"}
This parameter is only valid with the create and update commands
'''))
# The webhook positional arguments
webhook_parser = sub_parser.add_parser(
'webhook',
formatter_class=argparse.RawTextHelpFormatter)
webhook_parser.add_argument(
'webhook-action',
choices=['create', 'delete', 'list', 'events', 'test'],
help=textwrap.dedent('''\
create - Create a new webhook
--name, --event, and --url are required parameters
delete - Delete a webhook
--name is required
list - List a webhook
--name is required
events - List the supported events
test - Test a webhook
--name and --event are required
'''))
# Add the flag arguments
# This is a list of all the possible flags that
# can be used with any of the webhook choices.
# When parsing the webhook choice, the "required"
# flags will be enforced.
webhook_parser.add_argument(
'--event',
help=textwrap.dedent('''\
Identifies the webhook event, for example JobStatus or BugFound'''))
webhook_parser.add_argument(
'--url',
help='The webhook Url which will accept the POST command')
webhook_parser.add_argument(
'--name',
help='Name of the webhook')
add_defaults_and_secret_args(parser)
args = parser.parse_args()
run(vars(args))
if __name__ == "__main__":
try:
main()
except Exception as ex:
print(ex)

154
cli/raft_sdk/raft_common.py Normal file

@ -0,0 +1,154 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import msal
import requests
import os
import atexit
import string
from pathlib import Path
script_dir = os.path.dirname(os.path.abspath(__file__))
cache_dir = os.path.join(str(Path.home()), '.cache', 'raft')
cache_path = os.path.join(cache_dir, 'token_cache.bin')
def delete_token_cache():
try:
os.remove(cache_path)
except OSError:
pass
# https://msal-python.readthedocs.io/en/latest/#tokencache
def token_cache():
if not os.path.isdir(cache_dir):
os.makedirs(cache_dir)
cache = msal.SerializableTokenCache()
if os.path.exists(cache_path):
cache.deserialize(open(cache_path, "r").read())
atexit.register(
lambda: open(cache_path, "w").write(cache.serialize())
if cache.has_state_changed else None)
return cache
cache = token_cache()
def get_auth_token(client_id, tenant_id, secret=None):
authority = f"https://login.microsoftonline.com/{tenant_id}"
scopes = [f"{client_id}/.default"]
if secret:
app = msal.ConfidentialClientApplication(
client_id,
authority=authority,
client_credential=secret,
token_cache=cache
)
return app.acquire_token_for_client(scopes)
else:
app = msal.PublicClientApplication(
client_id,
authority=authority,
token_cache=cache
)
accounts = app.get_accounts()
if accounts:
# Try the cache first to avoid an extra interactive sign-in.
token = app.acquire_token_silent(scopes, accounts[0])
if token:
return token
flow = app.initiate_device_flow(scopes)
print(flow['message'], flush=True)
return app.acquire_token_by_device_flow(flow)
class RaftApiException(Exception):
def __init__(self, message, status_code):
self.message = message
self.status_code = status_code
class RestApiClient():
def __init__(self, endpoint, client_id, tenant_id, secret):
self.endpoint = endpoint
self.client_id = client_id
self.tenant_id = tenant_id
self.secret = secret
def auth_header(self):
token = get_auth_token(self.client_id, self.tenant_id, self.secret)
if 'error_description' in token:
raise RaftApiException(token['error_description'], 400)
return {
'Authorization': f"{token['token_type']} {token['access_token']}"
}
def post(self, relative_url, json_data):
return requests.post(
self.endpoint + relative_url,
json=json_data,
headers=self.auth_header())
def put(self, relative_url, json_data):
return requests.put(
self.endpoint + relative_url,
json=json_data,
headers=self.auth_header())
def delete(self, relative_url):
return requests.delete(
self.endpoint + relative_url,
headers=self.auth_header())
def get(self, relative_url):
return requests.get(
self.endpoint + relative_url,
headers=self.auth_header())
class RaftDefinitions():
def __init__(self, context):
self.context = context
self.deployment = context['deploymentName']
if len(self.deployment) > 24:
raise Exception("Deployment name must be no"
" more than 24 characters long")
for c in self.deployment:
if (c not in string.ascii_lowercase) and (c not in string.digits):
raise Exception("Deployment name"
" must use lowercase"
" letters and numbers"
" only")
self.subscription = context['subscription']
self.resource_group = f"{self.deployment}-raft"
self.service_bus = f"{self.deployment}-raft-servicebus"
self.app_insights = f"{self.deployment}-raft-ai"
self.asp = f"{self.deployment}-raft-asp"
self.container_tag = "v1.0"
self.queues = {
'job_events': "raft-jobevents",
'create_queue': "raft-jobcreate",
'delete_queue': "raft-jobdelete"
}
self.orchestrator = f"{self.deployment}-raft-orchestrator"
self.storage_suffix = self.subscription.split('-')[1]
self.storage_account = f"{self.deployment}raft" + self.storage_suffix
self.event_domain = f"{self.deployment}-raft-events"
self.storage_utils = f"{self.deployment}raftutil" + self.storage_suffix
self.storage_results = f"{self.deployment}raftrslt" + self.storage_suffix
self.api_service_webapp = f"{self.deployment}-raft-apiservice"
self.endpoint = f"https://{self.api_service_webapp}.azurewebsites.net"
self.test_infra = f"{self.deployment}-raft-test-infra"
self.test_infra_storage = (f"{self.deployment}rafttest"
f"{self.storage_suffix}")
self.key_vault = f"{self.deployment}-raft-kv"
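The resource-naming scheme in `RaftDefinitions` can be sketched standalone. This is a minimal illustration, assuming a hypothetical deployment name `demo` and a placeholder subscription GUID; the actual class derives many more names the same way.

```python
# Sketch of the RaftDefinitions naming scheme (assumed inputs: a "demo"
# deployment and a placeholder subscription GUID).
deployment = "demo"
subscription = "00000000-1111-2222-3333-444444444444"

# Deployment names must be at most 24 chars, lowercase letters and digits only.
assert len(deployment) <= 24
assert all(c.islower() or c.isdigit() for c in deployment)

# Storage account names cannot contain dashes, so a segment of the
# subscription GUID is appended to keep them unique per subscription.
storage_suffix = subscription.split('-')[1]
resource_group = f"{deployment}-raft"
storage_account = f"{deployment}raft{storage_suffix}"
endpoint = f"https://{deployment}-raft-apiservice.azurewebsites.net"

print(resource_group, storage_account, endpoint)
```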

1205
cli/raft_sdk/raft_deploy.py Normal file

Diff not shown because of its large size.


@ -0,0 +1,379 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import json
import os
import sys
import time
import tabulate
from .raft_common import RaftApiException, RestApiClient, RaftDefinitions
script_dir = os.path.dirname(os.path.abspath(__file__))
dos2unix_file_types = [".sh", ".bash"]
class RaftJobConfig():
def __init__(self,
*,
substitutions={},
file_path=None,
json_config=None):
if file_path:
with open(file_path, 'r') as config_file:
c = config_file.read()
for src in substitutions:
c = c.replace(src, substitutions[src])
config = json.loads(c)
self.config = config
elif json_config:
self.config = json_config
else:
raise Exception('Expected file_path or json_config to be set')
def add_read_only_mount(self, file_share, mount_as):
fs = [{"FileShareName": file_share, "MountPath": mount_as}]
if 'readOnlyFileShareMounts' in self.config:
self.config['readOnlyFileShareMounts'] += fs
else:
self.config['readOnlyFileShareMounts'] = fs
def add_read_write_mount(self, file_share, mount_as):
fs = [{"FileShareName": file_share, "MountPath": mount_as}]
if 'readWriteFileShareMounts' in self.config:
self.config['readWriteFileShareMounts'] += fs
else:
self.config['readWriteFileShareMounts'] = fs
def add_read_only_mounts(self, read_only_mounts):
if 'readOnlyFileShareMounts' in self.config:
self.config['readOnlyFileShareMounts'] += read_only_mounts
else:
self.config['readOnlyFileShareMounts'] = read_only_mounts
def add_read_write_mounts(self, read_write_mounts):
if 'readWriteFileShareMounts' in self.config:
self.config['readWriteFileShareMounts'] += read_write_mounts
else:
self.config['readWriteFileShareMounts'] = read_write_mounts
def add_metadata(self, data):
if 'webhook' in self.config:
if 'metadata' in self.config['webhook']:
self.config['webhook']['metadata'].update(data)
else:
self.config['webhook']['metadata'] = data
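`RaftJobConfig` performs its substitutions as a plain find-and-replace over the raw JSON text before parsing. A minimal sketch of that behavior, using a made-up config string and the `{sample.host}` placeholder token from the sample configs:

```python
import json

# Sketch of RaftJobConfig's substitution step: each key in the
# substitutions dict is replaced verbatim in the raw JSON text,
# and only then is the text parsed. The config below is made up.
raw = '{"swaggerLocation": {"URL": "https://{sample.host}/swagger.json"}}'
substitutions = {'{sample.host}': 'example.azurewebsites.net'}

for src in substitutions:
    raw = raw.replace(src, substitutions[src])
config = json.loads(raw)
print(config['swaggerLocation']['URL'])
```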
class RaftJobError(Exception):
def __init__(self, error, message):
self.error = error
self.message = message
class RaftCLI():
def __init__(self, context=None):
if context:
self.context = context
else:
with open(os.path.join(
script_dir,
'..',
'defaults.json'), 'r') as defaults_json:
self.context = json.load(defaults_json)
self.definitions = RaftDefinitions(self.context)
self.raft_api = RestApiClient(
self.definitions.endpoint,
self.context['clientId'],
self.context['tenantId'],
self.context.get('secret'))
def result_url(self, job_id):
'''
Constructs Azure File Storage results URL
Parameters:
job_id: job ID
Returns:
URL that contains results of the job run
'''
return(
"https://ms.portal.azure.com/#blade/Microsoft_Azure_FileStorage/"
"FileShareMenuBlade/overview/storageAccountId/"
f"%2Fsubscriptions%2F{self.definitions.subscription}"
f"%2FresourceGroups%2F{self.definitions.resource_group}"
f"%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2F"
f"{self.definitions.storage_account}/"
f"path/{job_id}/protocol/")
def job_status(self, job_id):
'''
Gets job status
Parameters:
job_id: job ID
Returns:
Job status
'''
response = self.raft_api.get(f'/jobs/{job_id}')
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def list_jobs(self, time_span=None):
'''
List jobs for specified look-back timespan
Parameters:
time_span: look-back timespan.
Default is 24 hours
Returns:
List of job status objects within 'now' minus 'timespan'
time window
'''
if time_span:
response = self.raft_api.get(
f'/jobs?timeSpanFilter={time_span}')
else:
response = self.raft_api.get('/jobs')
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def new_job(self, job_config, region=None):
'''
Creates and deploys a new job with specified job configuration
Parameters:
job_config: job configuration
region: if set, then deploy job to that region
Returns:
Job ID assigned to newly created job
'''
if region:
query = f'/jobs?region={region}'
else:
query = '/jobs'
response = self.raft_api.post(query, job_config.config)
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def update_job(self, job_id, job_config):
'''
Re-apply job configuration on an existing job.
This is useful when one of the job tasks has isIdling flag set
to 'true'
Parameters:
job_id: currently running job
job_config: job configuration to apply to the job
'''
response = self.raft_api.post(f'/jobs/{job_id}', job_config.config)
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def delete_job(self, job_id):
'''
Deletes job
Parameters:
job_id: ID of a job to delete
'''
response = self.raft_api.delete(f'/jobs/{job_id}')
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def list_available_webhooks_events(self):
'''
Lists available webhook events
Returns:
list of events that are used with
other webhook API calls
'''
response = self.raft_api.get('/webhooks/events')
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def set_webhooks_subscription(self, name, event, url):
'''
Creates or updates webhook subscription
Parameters:
name: webhook name
event: one of the events returned by
list_available_webhooks_events
url: URL to POST webhook data to
Returns:
webhook configuration
'''
data = {
'WebhookName': name,
'Event': event,
'TargetUrl': url
}
response = self.raft_api.post('/webhooks', data)
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def test_webhook(self, name, event):
'''
Tests webhook by posting dummy data to the webhook
registered with set_webhooks_subscription
Parameters:
name: webhook name
event: one of the events returned by
list_available_webhooks_events
Returns:
Webhook send status
'''
response = self.raft_api.put(f'/webhooks/test/{name}/{event}', None)
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def list_webhooks(self, name, event=None):
'''
Lists webhook registrations
Parameters:
name: webhook name
event: one of the events returned by
list_available_webhooks_events
if None then list webhooks for all events
Returns:
List of webhook definitions
'''
if event:
url = f'/webhooks?name={name}&event={event}'
else:
url = f'/webhooks?name={name}'
response = self.raft_api.get(url)
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def delete_webhook(self, name, event):
'''
Deletes webhook registration for the event
Parameters:
name: webhook name
event: one of the events returned by
list_available_webhooks_events
'''
response = self.raft_api.delete(f'/webhooks/{name}/{event}')
if response.ok:
return json.loads(response.text)
else:
raise RaftApiException(response.text, response.status_code)
def print_status(self, status):
'''
Prints status object to standard output in a readable format
Parameters:
status: status object returned by the service
'''
for s in status:
if s['agentName'] == s['jobId']:
print(f"{s['jobId']} {s['state']}")
if s.get('details') and len(s['details']) > 0:
details = '\n'.join(s['details'])
print("Details:")
print(f"{details}")
for s in status:
if s['agentName'] != s['jobId']:
agent_status = (
f"Agent: {s['agentName']}"
f" Tool: {s['tool']}"
f" State: {s['state']}")
if 'metrics' in s:
metrics = s['metrics']
total_request_counts = metrics.get('totalRequestCount')
if total_request_counts and total_request_counts > 0:
print(f"{agent_status}"
" Total Request Count:"
f" {total_request_counts}")
response_code_counts = []
for key in metrics['responseCodeCounts']:
response_code_counts.append(
[key, metrics['responseCodeCounts'][key]])
table = tabulate.tabulate(
response_code_counts,
headers=['Response Code', 'Count'])
print(table)
print()
else:
print(agent_status)
if s.get('details') and len(s['details']) > 0:
details = '\n'.join(s['details'])
print("Details:")
print(f"{details}")
def poll(self, job_id, poll_interval=10):
'''
Polls and prints job status updates until job terminates.
Parameters:
job_id: job id
poll_interval: poll interval in seconds
'''
og_status = None
while True:
i = 0
while i < poll_interval:
time.sleep(1)
sys.stdout.write('.')
sys.stdout.flush()
i += 1
try:
status = self.job_status(job_id)
if og_status != status:
og_status = status
print()
self.print_status(status)
for s in status:
# overall job status information
if s['agentName'] == s['jobId']:
completed = s['state'] == 'Completed'
stopped = s['state'] == 'ManuallyStopped'
error = s['state'] == 'Error'
timed_out = s['state'] == 'TimedOut'
if completed or stopped:
return
elif error or timed_out:
raise RaftJobError(s['state'], s['details'])
except RaftApiException as ex:
if ex.status_code != 404:
print(f"{ex.message}")
raise RaftApiException(ex.message, ex.status_code)
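The `poll` loop above keys its exit decision off the overall job status entry, the one whose `agentName` equals the `jobId`. A small sketch of that terminal-state check, with made-up status records for illustration:

```python
# Sketch of the terminal-state check poll() applies to the overall job
# status entry (agentName == jobId). The status records are made up.
def is_terminal(state):
    # Completed / ManuallyStopped end the poll loop normally;
    # Error / TimedOut are raised as job failures.
    return state in ('Completed', 'ManuallyStopped', 'Error', 'TimedOut')

status = [
    {'agentName': 'job-1', 'jobId': 'job-1', 'state': 'Completed'},
    {'agentName': 'agent-a', 'jobId': 'job-1', 'state': 'Running'},
]
overall = [s for s in status if s['agentName'] == s['jobId']]
print(overall[0]['state'], is_terminal(overall[0]['state']))
```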

3
cli/requirements.txt Normal file

@ -0,0 +1,3 @@
msal~=1.4.3
requests~=2.24.0
tabulate~=0.8.7


@ -0,0 +1,74 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
import json
import urllib.parse
import copy
import random
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
def run(compile, test, host):
cli = RaftCLI()
substitutions = {
'{host}': host
}
compile_job_config = RaftJobConfig(file_path=compile, substitutions=substitutions)
compile_task = compile_job_config.config['tasks'][0]
# Use the first task as a template to create 30 compile tasks
compile_tasks = []
for t in range(30):
new_task = copy.deepcopy(compile_task)
new_task['outputFolder'] = compile_task['outputFolder'] + f"-{t}"
new_task['toolConfiguration']['compileConfiguration']['mutationsSeed'] = random.randint(0, 1000)
compile_tasks.append(new_task)
compile_job_config.config['tasks'] = compile_tasks
print('Compile')
# create a new job with the Compile config and get new job ID
# in compile_job
compile_job = cli.new_job(compile_job_config)
# wait for a job with ID from compile_job to finish the run
cli.poll(compile_job['jobId'])
substitutions['{compile.jobId}'] = compile_job['jobId']
print('Test')
test_job_config = RaftJobConfig(file_path=test, substitutions=substitutions)
task_test_fuzz_lean = test_job_config.config['tasks'][0]
task_test = test_job_config.config['tasks'][1]
test_tasks = []
for t in range(30):
new_task_test = copy.deepcopy(task_test)
new_task_test_fuzz_lean = copy.deepcopy(task_test_fuzz_lean)
new_task_test['outputFolder'] = task_test['outputFolder'] + f"-{t}"
new_task_test['toolConfiguration']['runConfiguration']['inputFolderPath'] += '/' + compile_tasks[t]['outputFolder']
new_task_test_fuzz_lean['outputFolder'] = task_test_fuzz_lean['outputFolder'] + f"-{t}"
new_task_test_fuzz_lean['toolConfiguration']['runConfiguration']['inputFolderPath'] += '/' + compile_tasks[t]['outputFolder']
test_tasks.append(new_task_test)
test_tasks.append(new_task_test_fuzz_lean)
test_job_config.config['tasks'] = test_tasks
test_job = cli.new_job(test_job_config)
cli.poll(test_job['jobId'])
if __name__ == "__main__":
if len(sys.argv) != 2:
print('Please provide the host under test as an argument; it is used to '
'substitute {host} in the compile and test job config files')
else:
host = sys.argv[1]
run(os.path.join(cur_dir, "raft.restler.compile.json"),
os.path.join(cur_dir, "raft.restler.test.json"),
host)
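The fan-out in the script above clones one template task thirty times, giving each clone its own output folder and mutations seed. A self-contained sketch of that pattern, with a made-up template task:

```python
import copy
import random

# Sketch of the 30-way compile fan-out: deep-copy a template task so
# each clone gets independent nested dicts, then vary the output folder
# and the mutations seed per clone. The template fields are made up.
template = {
    'outputFolder': 'RESTler-compile',
    'toolConfiguration': {'compileConfiguration': {'mutationsSeed': 0}},
}
tasks = []
for t in range(30):
    task = copy.deepcopy(template)
    task['outputFolder'] = template['outputFolder'] + f"-{t}"
    task['toolConfiguration']['compileConfiguration']['mutationsSeed'] = \
        random.randint(0, 1000)
    tasks.append(task)
print(len(tasks), tasks[0]['outputFolder'])
```

A shallow copy would not work here: all thirty tasks would share one `compileConfiguration` dict, so every clone would end up with the last seed assigned.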


@ -0,0 +1,42 @@
{
"swaggerLocation": {
"URL" : "https://{host}/swagger/v1/swagger.json"
},
"resources" : {
"Cores" : 2,
"MemoryGBs" : 3
},
"rootFileShare" : "60-compile",
"tasks": [
{
"toolName": "RESTler",
"outputFolder" : "RESTler-compile",
"isIdling": false,
"toolConfiguration": {
"task": "compile",
"compileConfiguration": {
"useRefreshableToken": true,
"customDictionary": {
"customPayload": {
"duration": ["00:10:00"],
"swaggerLocation": ["{\"URL\": \"https://some.service.azurewebsites.net/swagger.json\"}"],
"authenticationMethod": ["{\"CommandLine\": \"abc\" }"],
"targetUrl":["{replace-with-valid-webhook-URL}"],
"event": ["JobStatus"],
"timeSpanFilter": ["00:10:00"],
"eventName": ["JobStatus"]
}
}
}
}
}
]
}


@ -0,0 +1,48 @@
{
"swaggerLocation": {
"URL" : "https://{host}/swagger/v1/swagger.json"
},
"host": "{host}",
"resources" : {
"Cores" : 4,
"MemoryGBs" : 12
},
"ReadOnlyFileShareMounts" : [
{
"fileShareName" : "60-compile",
"mountPath" : "/job-compile"
}
],
"tasks": [
{
"toolName" : "RESTler",
"outputFolder": "RESTLer-test-fuzz-lean",
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "TestFuzzLean",
"runConfiguration": {
"inputFolderPath": "/job-compile/{compile.jobId}",
"authenticationTokenRefreshIntervalSeconds": 300
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-test",
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "Test",
"runConfiguration": {
"inputFolderPath": "/job-compile/{compile.jobId}"
}
}
}
]
}


@ -0,0 +1,40 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
import json
import urllib.parse
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
import raft
def run(compile, fuzz, sample_host):
cli = raft.RaftCLI()
subs = {
'{sample.host}' : sample_host,
'{defaults.deploymentName}' : cli.definitions.deployment
}
compile_job_config = raft.RaftJobConfig(file_path=compile, substitutions=subs)
print('Compile')
compile_job = cli.new_job(compile_job_config)
cli.poll(compile_job['jobId'])
subs['{compile.jobId}'] = compile_job['jobId']
fuzz_job_config = raft.RaftJobConfig(file_path=fuzz, substitutions=subs)
print('Fuzz')
fuzz_job = cli.new_job(fuzz_job_config)
cli.poll(fuzz_job['jobId'])
if __name__ == "__main__":
if len(sys.argv) != 2:
print('Please provide the host under test as an argument; it is used to '
'substitute {sample.host} in the compile.json and fuzz.json config files')
else:
host = sys.argv[1]
run(os.path.join(cur_dir, "compile.json"),
os.path.join(cur_dir, "fuzz.json"),
host)


@ -0,0 +1,31 @@
{
"swaggerLocation": {
"URL" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"tasks": [
{
"toolName" : "RESTler",
"isIdling": false,
"outputFolder" : "{defaults.deploymentName}-compile",
"toolConfiguration" : {
"task": "Compile",
"compileConfiguration": {
"useRefreshableToken": true
}
}
},
{
"toolName" : "RESTler",
"isIdling": false,
"outputFolder": "sample-compile",
"toolConfiguration" : {
"task": "Compile"
},
"swaggerLocation": {
"URL": "https://{sample.host}/swagger/v1/swagger.json"
}
}
]
}


@ -0,0 +1,109 @@
{
"swaggerLocation": {
"URL" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"host": "{defaults.deploymentName}-raft-apiservice.azurewebsites.net",
"ReadOnlyFileShareMounts" : [
{
"FileShareName" : "{compile.jobId}",
"MountPath" : "/job-compile"
}
],
"resources" : {
"Cores" : 4,
"MemoryGBs" : 4
},
"tasks": [
{
"duration": "00:20:00",
"toolName" : "RESTler",
"outputFolder" : "{defaults.deploymentName}-RESTler-fuzz",
"isIdling" : false,
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/job-compile/{defaults.deploymentName}-compile",
"useSsl": true,
"producerTimingDelay": 5
}
}
},
{
"isIdling" : false,
"toolName": "ZAP",
"keyVaultSecrets" : ["RaftServicePrincipal"],
"outputFolder": "{defaults.deploymentName}-ZAP-out",
"authenticationMethod": {
"MSAL" : "RaftServicePrincipal"
}
},
{
"swaggerLocation": {
"URL" : "https://{sample.host}/swagger/v1/swagger.json"
},
"host": "{sample.host}",
"isIdling" : false,
"toolName": "ZAP",
"outputFolder": "sample-zap-out"
},
{
"duration": "00:10:00",
"toolName" : "RESTler",
"swaggerLocation": {
"URL" : "https://{sample.host}/swagger/v1/swagger.json"
},
"host": "{sample.host}",
"outputFolder": "sample-RESTler-fuzz",
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/job-compile/sample-compile"
}
}
},
{
"duration": "00:10:00",
"toolName" : "RESTler",
"swaggerLocation": {
"URL" : "https://{sample.host}/swagger/v1/swagger.json"
},
"host": "{sample.host}",
"outputFolder": "sample-RESTler-fuzz-random-walk",
"toolConfiguration" : {
"task": "FuzzRandomWalk",
"runConfiguration": {
"inputFolderPath": "/job-compile/sample-compile"
}
}
},
{
"duration": "00:10:00",
"toolName" : "RESTler",
"swaggerLocation": {
"URL" : "https://{sample.host}/swagger/v1/swagger.json"
},
"host": "{sample.host}",
"outputFolder": "sample-RESTler-fuzz-bfs-cheap",
"toolConfiguration" : {
"task": "FuzzBfsCheap",
"runConfiguration": {
"inputFolderPath": "/job-compile/sample-compile"
}
}
}
]
}


@ -0,0 +1,50 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
def run(compile, fuzz, sample_host):
# instantiate RAFT CLI
cli = RaftCLI()
substitutions = {
'{sample.host}' : sample_host
}
# Create the compile step job configuration
compile_job_config = RaftJobConfig(file_path = compile, substitutions=substitutions)
# add webhook metadata that will be included in every webhook triggered by the Compile job
compile_job_config.add_metadata({"branch":"wizbangFeature"})
print('Compile')
# create a new job with the Compile config and get new job ID
# in compile_job
compile_job = cli.new_job(compile_job_config)
# wait for a job with ID from compile_job to finish the run
cli.poll(compile_job['jobId'])
substitutions['{compile.jobId}'] = compile_job['jobId']
# create a new job config with Fuzz configuration JSON
fuzz_job_config = RaftJobConfig(file_path = fuzz, substitutions=substitutions)
print('Fuzz')
# add webhook metadata that will be included in every webhook triggered by the Fuzz job
fuzz_job_config.add_metadata({"branch":"wizbangFeature"})
# submit the fuzz job and get its job ID
fuzz_job = cli.new_job(fuzz_job_config)
# wait for job ID from fuzz_job to finish the run
cli.poll(fuzz_job['jobId'])
if __name__ == "__main__":
if len(sys.argv) != 2:
print('Please provide the host under test as an argument; it is used to '
'substitute {sample.host} in the compile and fuzz job config files')
else:
host = sys.argv[1]
run(os.path.join(cur_dir, "sample.restler.compile.json"),
os.path.join(cur_dir, "sample.restler.fuzz.json"),
host)


@ -0,0 +1,33 @@
{
"swaggerLocation": {
"URL": "https://{sample.host}/swagger/v1/swagger.json"
},
"host": "{sample.host}",
"namePrefix" : "sample-compile-",
"rootFileshare" : "sample",
"webhook" : {
"name" : "sample-compile",
"metadata" : {}
},
"tasks": [
{
"toolName" : "RESTler",
"isIdling": false,
"outputFolder": "RESTler-compile-1",
"toolConfiguration" : {
"task": "Compile"
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-compile-2",
"toolConfiguration" : {
"task": "Compile",
"compileConfiguration": {
"mutationsSeed": 34534798
}
}
}
]
}


@ -0,0 +1,48 @@
{
"swaggerLocation": {
"URL" : "https://{sample.host}/swagger/v1/swagger.json"
},
"duration": "00:10:00",
"host": "{sample.host}",
"rootFileshare" : "sample",
"webhook" : {
"name" : "sample-fuzz",
"metadata" : {}
},
"readOnlyFileShareMounts" : [{
"FileShareName" : "sample",
"MountPath" : "/compile-out"
}],
"tasks": [
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-fuzz-1",
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/compile-out/{compile.jobId}/RESTler-compile-1"
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-fuzz-2",
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/compile-out/{compile.jobId}/RESTler-compile-1"
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-fuzz-3",
"toolConfiguration" : {
"task": "FuzzRandomWalk",
"runConfiguration": {
"inputFolderPath": "/compile-out/{compile.jobId}/RESTler-compile-2"
}
}
}
]
}


@ -0,0 +1,57 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
def run(compile, fuzz, host):
# instantiate RAFT CLI
cli = RaftCLI()
# will replace {sample.host} with the value of host variable
# see sample.restler.compile.json and sample.restler.fuzz.json
subs = {
'{sample.host}' : host
}
# Create compilation job configuration
compile_job_config = RaftJobConfig(file_path=compile, substitutions=subs)
# add webhook metadata that will be included in every webhook triggered by the Compile job
compile_job_config.add_metadata({"branch":"wizbangFeature"})
print('Compile')
# submit a new job with the Compile config and get new job ID
compile_job = cli.new_job(compile_job_config)
# wait for a job with ID from compile_job to finish the run
cli.poll(compile_job['jobId'])
# use compile job as input for fuzz job
subs['{compile.jobId}'] = compile_job['jobId']
# create a new job config with Fuzz configuration JSON
fuzz_job_config = RaftJobConfig(file_path=fuzz, substitutions=subs)
print('Fuzz')
# add webhook metadata that will be included in every webhook triggered by the Fuzz job
fuzz_job_config.add_metadata({"branch":"wizbangFeature"})
# submit the fuzz job and get its job ID
fuzz_job = cli.new_job(fuzz_job_config)
# wait for job ID from fuzz_job to finish the run
cli.poll(fuzz_job['jobId'])
if __name__ == "__main__":
if len(sys.argv) != 2:
print('Please provide the host under test as an argument; it is used to '
'substitute {sample.host} in the compile and fuzz job config files')
else:
host = sys.argv[1]
run(os.path.join(cur_dir, "sample.restler.compile.json"),
os.path.join(cur_dir, "sample.restler.fuzz.json"),
host)


@ -0,0 +1,30 @@
{
"swaggerLocation": {
"URL": "https://{sample.host}/swagger/v1/swagger.json"
},
"namePrefix" : "sample-compile-",
"webhook" : {
"name" : "sample-compile",
"metadata" : {}
},
"tasks": [
{
"toolName" : "RESTler",
"isIdling": false,
"outputFolder": "compile-1",
"toolConfiguration" : {
"task": "Compile"
}
},
{
"toolName" : "RESTler",
"outputFolder" : "compile-2",
"toolConfiguration" :{
"task": "Compile",
"compileConfiguration": {
"mutationsSeed": 34534798
}
}
}
]
}


@ -0,0 +1,50 @@
{
"swaggerLocation": {
"URL": "https://{sample.host}/swagger/v1/swagger.json"
},
"host": "{sample.host}",
"duration": "00:10:00",
"rootFileshare" : "sample-fuzz",
"webhook" : {
"name" : "sample-fuzz",
"metadata" : {}
},
"readOnlyFileShareMounts" : [
{
"FileShareName" : "{compile.jobId}",
"MountPath" : "/job-compile"
}
],
"tasks": [
{
"toolName" : "RESTler",
"outputFolder" : "fuzz-1",
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/job-compile/compile-1"
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "fuzz-2",
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/job-compile/compile-1"
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "fuzz-3",
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/job-compile/compile-2"
}
}
}
]
}


@ -0,0 +1,24 @@
{
"swaggerLocation": {
"URL": "https://{sample.host}/swagger/v1/swagger.json"
},
"host": "{sample.host}",
"readOnlyFileShareMounts" : [
{
"FileShareName" : "sample-fuzz",
"MountPath" : "/job-run"
}
],
"tasks": [
{
"toolName" : "RESTler",
"toolConfiguration" : {
"outputFolder" : "RESTler-replay",
"task": "Replay",
"runConfiguration": {
"inputFolderPath": "/job-run/{fuzz.jobId}/fuzz-1"
}
}
}
]
}


@ -0,0 +1,13 @@
{
"swaggerLocation": {
"URL" : "https://{sample.host}/swagger/v1/swagger.json"
},
"host": "{sample.host}",
"tasks": [
{
"isIdling" : false,
"toolName": "ZAP",
"outputFolder" : "zap-out"
}
]
}


@ -0,0 +1,48 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
import json
import urllib.parse
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
def run(replay, fuzz_job_id, replay_job_id=None):
cli = RaftCLI()
substitutions = {
'{defaults.deploymentName}': cli.definitions.deployment,
'{jobRunId}' : fuzz_job_id
}
replay_job_config = RaftJobConfig(file_path=replay, substitutions=substitutions)
print('Replay')
isIdle = False
for task in replay_job_config.config['tasks']:
isIdle = isIdle or task['isIdling']
if isIdle and replay_job_id:
cli.update_job(replay_job_id, replay_job_config)
print(f'Idle Job: {replay_job_id}')
else:
# submit a new replay job
replay_job_id = cli.new_job(replay_job_config)
if isIdle:
print(f'New Idle Job: {replay_job_id}')
else:
print(f'New Job: {replay_job_id}')
if not isIdle:
# wait for job ID from fuzz_job to finish the run
cli.poll(replay_job_id['jobId'])
if __name__ == "__main__":
run(replay = "raft.restler.replay.json",
#job ID that produced bugs and those bugs going to be replayed
fuzz_job_id = "d29c7a2a-1815-4edb-91c1-56dd4faea0ce",
replay_job_id=None)


@ -0,0 +1,42 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
import json
import urllib.parse
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
def run(compile, test, fuzz):
# instantiate RAFT CLI
cli = RaftCLI()
substitutions = {
'{defaults.deploymentName}': cli.definitions.deployment
}
compile_job_config = RaftJobConfig(file_path=compile, substitutions=substitutions)
print('Compile')
# create a new job with the Compile config and get new job ID
# in compile_job
compile_job = cli.new_job(compile_job_config)
    # wait for the compile job to finish the run
cli.poll(compile_job['jobId'])
substitutions['{compile.jobId}'] = compile_job['jobId']
print('Test')
test_job_config = RaftJobConfig(file_path=test, substitutions=substitutions)
test_job = cli.new_job(test_job_config)
cli.poll(test_job['jobId'])
print('Fuzz')
fuzz_job_config = RaftJobConfig(file_path=fuzz, substitutions=substitutions)
fuzz_job = cli.new_job(fuzz_job_config)
cli.poll(fuzz_job['jobId'])
if __name__ == "__main__":
run(os.path.join(cur_dir, "raft.restler.compile.json"),
os.path.join(cur_dir, "raft.restler.test.json"),
os.path.join(cur_dir, "raft.restler.fuzz.json"))

@@ -0,0 +1,25 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
import json
import urllib.parse
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
def run(run_zap):
cli = RaftCLI()
substitutions = {
'{defaults.deploymentName}': cli.definitions.deployment
}
run_zap_config = RaftJobConfig(file_path=run_zap, substitutions=substitutions)
zap_job = cli.new_job(run_zap_config)
print(zap_job)
cli.poll(zap_job['jobId'])
if __name__ == "__main__":
run("raft.zap.json")

@@ -0,0 +1,20 @@
{
"swaggerLocation": {
"Url": "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"rootFileShare" : "raft",
"tasks": [
{
"toolName" : "RESTler",
"outputFolder" : "compile",
"isIdling": false,
"toolConfiguration" : {
"task": "compile",
"compileConfiguration": {
"useRefreshableToken": true
}
}
}
]
}

@@ -0,0 +1,62 @@
{
"swaggerLocation": {
"Url" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"duration": "00:10:00",
"host": "{defaults.deploymentName}-raft-apiservice.azurewebsites.net",
"rootFileShare" : "raft",
"readOnlyFileShareMounts" : [
{
"FileShareName" : "raft",
"MountPath" : "/job-compile"
}
],
"tasks": [
{
"toolName" : "RESTler",
"outputFolder" : "fuzz-1",
"isIdling" : false,
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/job-compile/{compile.jobId}/compile"
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-fuzz-random-walk-2",
"isIdling" : false,
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "FuzzRandomWalk",
"runConfiguration": {
"inputFolderPath": "/job-compile/{compile.jobId}/compile"
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-fuzz-bfscheap-3",
"isIdling" : false,
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "FuzzBfsCheap",
"runConfiguration": {
"inputFolderPath": "/job-compile/{compile.jobId}/compile"
}
}
}
]
}

@@ -0,0 +1,28 @@
{
"host": "{defaults.deploymentName}-raft-apiservice.azurewebsites.net",
"rootFileShare" : "raft",
"readOnlyFileShareMounts" : [
{
"FileShareName" : "raft",
"MountPath" : "/job-run"
}
],
"tasks": [
{
"toolName" : "RESTler",
"outputFolder" : "RESTLer-replay",
"isIdling" : false,
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "Replay",
"runConfiguration" :{
"inputFolderPath": "/job-run/{jobRunId}/fuzz-1"
}
}
}
]
}

@@ -0,0 +1,46 @@
{
"swaggerLocation": {
"Url" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"host": "{defaults.deploymentName}-raft-apiservice.azurewebsites.net",
"rootFileShare" : "raft",
"readOnlyFileShareMounts" : [
{
"FileShareName" : "raft",
"MountPath" : "/job-compile"
}
],
"tasks": [
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-test",
"isIdling" : false,
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "Test",
"runConfiguration": {
"inputFolderPath": "/job-compile/{compile.jobId}/compile"
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-test-fuzz-lean",
"isIdling" : false,
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "TestFuzzLean",
"runConfiguration": {
"inputFolderPath": "/job-compile/{compile.jobId}/compile",
"authenticationTokenRefreshIntervalSeconds": 300
}
}
}
]
}

@@ -0,0 +1,18 @@
{
"swaggerLocation": {
"Url" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"host": "{defaults.deploymentName}-raft-apiservice.azurewebsites.net",
  "rootFileShare" : "raft",
"tasks": [
{
"isIdling" : false,
"toolName": "ZAP",
"outputFolder" : "zap-out",
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
}
}
]
}

@@ -0,0 +1,43 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
import json
import urllib.parse
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
def run(compile, test, fuzz):
cli = RaftCLI()
substitutions = {
'{defaults.deploymentName}': cli.definitions.deployment
}
compile_job_config = RaftJobConfig(file_path=compile, substitutions=substitutions)
print('Compile')
# create a new job with the Compile config and get new job ID
# in compile_job
compile_job = cli.new_job(compile_job_config)
    # wait for the compile job to finish the run
cli.poll(compile_job['jobId'])
substitutions['{compile.jobId}'] = compile_job['jobId']
print('Test')
test_job_config = RaftJobConfig(file_path=test, substitutions=substitutions)
test_job = cli.new_job(test_job_config)
cli.poll(test_job['jobId'])
print('Fuzz')
fuzz_job_config = RaftJobConfig(file_path=fuzz, substitutions=substitutions)
fuzz_job = cli.new_job(fuzz_job_config)
cli.poll(fuzz_job['jobId'])
if __name__ == "__main__":
run(os.path.join(cur_dir, "raft.restler.compile.json"),
os.path.join(cur_dir, "raft.restler.test.json"),
os.path.join(cur_dir, "raft.restler.fuzz.json"))

@@ -0,0 +1,23 @@
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
import pathlib
import sys
import os
import json
import urllib.parse
cur_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.join(cur_dir, '..', '..'))
from raft_sdk.raft_service import RaftCLI, RaftJobConfig
def run(run_zap):
cli = RaftCLI()
run_zap_config = RaftJobConfig(file_path=run_zap,
substitutions={'{defaults.deploymentName}' : cli.definitions.deployment})
zap_job = cli.new_job(run_zap_config)
print(zap_job)
cli.poll(zap_job['jobId'])
if __name__ == "__main__":
run(os.path.join(cur_dir, "raft.zap.json"))

@@ -0,0 +1,29 @@
{
"swaggerLocation": {
"URL" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"tasks": [{
"toolName": "RESTler",
"outputFolder" : "RESTler-compile",
"isIdling": false,
"toolConfiguration": {
"task": "compile",
"compileConfiguration": {
"useRefreshableToken": true,
"customDictionary": {
"customPayload": {
"duration": ["00:10:00"],
"swaggerLocation": ["{\"URL\": \"https://some.service.azurewebsites.net/swagger.json\"}"],
"authenticationMethod": ["{\"CommandLine\": \"abc\" }"],
"targetUrl":["{replace-with-valid-webhook-URL}"],
"event": ["JobStatus"],
"timeSpanFilter": ["00:10:00"],
"eventName": ["JobStatus"]
}
}
}
}
}
]
}

@@ -0,0 +1,51 @@
{
"swaggerLocation": {
"URL" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"host": "{defaults.deploymentName}-raft-apiservice.azurewebsites.net",
"duration": "00:10:00",
"ReadOnlyFileShareMounts" : [
{
"fileShareName" : "{compile.jobId}",
"mountPath" : "/job-compile"
}
],
"tasks": [
{
"toolName" : "RESTler",
"isIdling" : false,
"outputFolder" : "RESTler-random-walk-fuzz",
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "FuzzRandomWalk",
"runConfiguration": {
"inputFolderPath": "/job-compile/RESTler-compile",
"useSsl": true,
"producerTimingDelay": 5
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-fuzz",
"isIdling" : false,
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "Fuzz",
"runConfiguration": {
"inputFolderPath": "/job-compile/RESTler-compile",
"useSsl": true,
"producerTimingDelay": 5
}
}
}
]
}

@@ -0,0 +1,44 @@
{
"swaggerLocation": {
"URL" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"host": "{defaults.deploymentName}-raft-apiservice.azurewebsites.net",
"ReadOnlyFileShareMounts" : [
{
"fileShareName" : "{compile.jobId}",
"mountPath" : "/job-compile"
}
],
"tasks": [
{
"toolName" : "RESTler",
"outputFolder": "RESTLer-test-fuzz-lean-1",
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "TestFuzzLean",
"runConfiguration": {
"inputFolderPath": "/job-compile/RESTler-compile",
"authenticationTokenRefreshIntervalSeconds": 300
}
}
},
{
"toolName" : "RESTler",
"outputFolder" : "RESTler-test-2",
"keyVaultSecrets" : ["RaftServicePrincipal"],
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
},
"toolConfiguration" : {
"task": "Test",
"runConfiguration": {
"inputFolderPath": "/job-compile/RESTler-compile"
}
}
}
]
}

@@ -0,0 +1,18 @@
{
"swaggerLocation": {
"URL" : "https://{defaults.deploymentName}-raft-apiservice.azurewebsites.net/swagger/v1/swagger.json"
},
"host": "{defaults.deploymentName}-raft-apiservice.azurewebsites.net",
"tasks": [
{
"isIdling" : false,
"toolName": "ZAP",
"keyVaultSecrets" : ["RaftServicePrincipal"],
"outputFolder" : "zap-results",
"authenticationMethod": {
"MSAL": "RaftServicePrincipal"
}
}
]
}

docs/cli/job.md Normal file
@@ -0,0 +1,74 @@
# Job Commands
## job-create \<jobDefinition.json\>
Creates a new job. The \<jobDefinition.json\> file defines the job and tasks to run.
* **--duration** specifies how long the job should run, as a TimeSpan value.</br>
The duration can also be specified in the job definition; when both are set, this parameter overrides the job definition value.
* **--read-only-mounts** specifies which file shares will be mounted to the new job as a read-only file share.</br>
This is helpful when multiple jobs need to access the same information.</br>
```
Usage: --read-only-mounts '[{"FileShareName":"grade-track-compile-48297e1a-9cb4-4578-8fa1-15bd8949affb", "MountPath": "/job-compile"}]'
```
* **--read-write-mounts** specifies file shares that will be mounted with read-write access.
```
Usage: --read-write-mounts '[{"FileShareName":"MyData", "MountPath": "/personalData"}]'
```
* **--poll \<int\>** sets the job status polling interval, in seconds.</br>
Polling terminates once the job has completed.
* **--metadata** passes arbitrary key/value pairs to a job.</br>
This data is returned in webhooks, so you can track things like branch names, change authors, and bug numbers in your jobs.
If you have a logic app which handles your bugFound webhook by creating a bug in your bug tracking system, this data can be made available in the bug.
```
Usage: --metadata '{"BuildNumber":"pipelineBuildNumber", "Author": "John"}'
```
Returns a \<jobId\> as JSON.
```
Example: {'jobId': '0a0fd91f-8592-4c9d-97dd-01c9c3c44159'}
```
The returned jobId is a string containing a GUID. If you use a namePrefix in the job definition, the GUID will be prepended with that prefix.
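That naming rule can be made concrete with a small sketch; `make_job_id` is a hypothetical helper (the service generates the real ID), shown only to illustrate the prefix behavior:

```python
import uuid

def make_job_id(name_prefix=''):
    # A job ID is a GUID, optionally prepended with the namePrefix
    # from the job definition.
    return f"{name_prefix}{uuid.uuid4()}"

print(make_job_id('nightly-'))  # e.g. nightly-0a0fd91f-8592-4c9d-97dd-01c9c3c44159
```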
## job-status \<jobId\></br>
Gets the status of a job.
```
Usage: job-status ffc4a296-f85d-4122-b49b-8074b88c9755
```
## job-list --look-back-hours</br>
List the status of all jobs. By default the command will return the status of all jobs over the last 24 hours.
Use `--look-back-hours` to specify a different time frame. For example, to look back over the last hour:
```
Usage: job-list --look-back-hours 1
```
## job-update \<jobId\> \<jobdefinition.json\></br>
Deploy a job definition to an existing job. This is useful when the job is deployed with the "isIdling" flag set to true
which tells the service to not delete the container when the job has completed. In this way it is possible to quickly
deploy a new job without waiting for container creation.
It is also possible to use `ssh` to log into the container if manual exploration of the container is needed.
If the container is no longer running, the job will be created as normal.
If the container creation failed, the job will not be created. You can check the Application Insights log for failures.
## job-delete \<jobId\></br>
Deletes the job. By default jobs are garbage collected when they have completed their run.
However, if "isIdling" is set to true, manual job deletion is required.
## job-results-url \<jobId\></br>
Returns a URL to the storage file share where the job results are stored. Open the URL in your browser to go directly to the results.

docs/cli/logout.md Normal file
@@ -0,0 +1,6 @@
# Logout command
## logout
The logout command removes the cached tokens.
The next use of the CLI will require you to re-authenticate.

docs/cli/overview.md Normal file
@@ -0,0 +1,25 @@
# Using the RAFT command line
The RAFT command line interface is written in Python and is simply an interface to the REST commands in the service.
Anything you can do with the CLI you can do with any tool that can interact with REST interfaces;
Postman and curl, to name a few.
The RAFT CLI is both a command line interface, which parses commands and executes them,
and an SDK, which you can import to script the underlying functions yourself.
## CLI Commands
In our CLI syntax, values without two dashes "--" are positional parameters.
When they are separated by "|" as in [ red | blue | yellow ] select one value.
Parameters with two dashes "--" are always optional
### General Parameters
These parameters apply to all commands.
**--secret \<secretValue\>**</br>
When **--skip-sp-deployment** is used, new secret generation is not executed.
However, the deployment will overwrite configuration settings for the APIService and the Orchestrator.
These services need to know the service principal secret.
Use this parameter to pass the secret to the deployment process.

docs/cli/service.md Normal file
@@ -0,0 +1,23 @@
# Service Commands:
## service [deploy | restart | info]
The **deploy** parameter has the following options
**--sku option**</br>
* The allowed sku values are: 'B1', 'B2', 'B3', 'D1', 'F1', 'FREE','I1', 'I2', 'I3', 'P1V2','P2V2','P3V2','PC2', 'PC3', 'PC4', 'S1', 'S2', 'S3', 'SHARED'
* These correspond to the App Service Plan sizes. The default is B2. Note that this is a Linux App Service plan.
**--skip-sp-deployment**</br>
* When using the Azure DevOps pipeline to re-deploy the service during code development,
this parameter can be used to skip the service principal deployment.
The assumption here is that the service principal has already been deployed.
In this scenario, use of the --secret parameter is required.
The **restart** parameter will restart the API service and the orchestrator.</br>
When the services restart, any new version of the software will be downloaded.
The **info** parameter will return version information about the service and the last time it was restarted.</br>
Example: `{'version': '1.0.0.0', 'serviceStartTime': '2020-08-04T21:05:53+00:00'}`

docs/cli/webhook.md Normal file
@@ -0,0 +1,29 @@
# Webhook Commands</br>
These commands are used to create and control webhooks. Webhooks are implemented by using an [EventDomain](https://docs.microsoft.com/en-us/azure/event-grid/event-domains)
## webhook-events</br>
Lists the set of events which will generate webhooks.
## webhook-create \<name\> --event \<eventName\> --url \<targetUrl\></br>
Creates a webhook. The \<name\> value is a string that is used in your jobDefinition file.
The --event \<eventName\> parameter is required. The event name must be one of the values returned from webhook-events.
The --url \<targetUrl\> parameter is required. The targetUrl will receive the webhook. The targetUrl must implement endpoint validation. See https://docs.microsoft.com/en-us/azure/event-grid/webhook-event-delivery
A common and simple target is an Azure Logic App. This provides a simple way to process your webhooks: posting to Teams or Slack, creating new work items, etc.
## webhook-test \<name\> --event \<eventName\></br>
Test your webhook receiver. Dummy data will be sent to the webhook endpoint you are testing.
The **--event** parameter is required.
## webhook-list \<name\></br>
List the definition of webhook \<name\>.
Use the optional **--event** parameter to limit what is returned.
## webhook-delete \<name\> --event \<eventName\></br>
Deletes a webhook for a specific event.
The **--event** parameter is required.

@@ -0,0 +1,63 @@
# Installing RAFT
* Get the RAFT CLI from [releases.](https://github.com/microsoft/raft/releases)
**Prerequisites:**
* An Azure subscription</br>
It is helpful if you are the **owner** of the Azure subscription that you are deploying to.
During deployment, the script will create a service principal,
and this service principal needs to be given access to the newly created resource group.
You must have owner permissions for the script to do the assignment.
If you are not the subscription owner, you will need to have the owner manually assign the service principal to the resource group that is created.
The service principal should be given `Contributor` access to that resource group.
## Container Instances
RAFT uses container instances, and by default subscriptions are limited to 100 instances.
If you are an existing user of container instances on the selected subscription,
you may need to reach out to Azure support to ask for an increase in Linux container instances.
## Installation Process
* Run the command `python .\raft.py service deploy`<br>
The first time you run this command you will be asked to fill in some values in a `defaults.json` file. This file is used to describe the context for the cli.
```
subscription - The Azure Subscription ID to which RAFT is deployed
deploymentName - RAFT deployment name
deployment name requirements:
- only letters or numbers
- at most 24 characters long
- no capital letters
- no dashes
region - Region to deploy RAFT (e.g. westus2)
See https://azure.microsoft.com/en-us/global-infrastructure/regions/
for a list of regions
    metricsOptIn - allow Microsoft to collect anonymized metrics from the deployment.
useAppInsights - deploy AppInsights and use it to write all service logs
registry - registry which stores service images.
-------------------------
To apply any changes made to the defaults.json file,
please run 'raft.py service deploy'
-------------------------
```
The only values you **MUST** change are:
* subscription - this is the azure subscription you own where you will be installing the service
* deploymentName - this is used to name a resource group and services with "-raft" appended
* region - the region to use for the services. We recommend a region that is close to your location. Find the list of available regions [here](https://azure.microsoft.com/en-us/global-infrastructure/geographies/).
* metricsOptIn - by default we collect anonymous metrics to help us improve the service.
To opt out, simply set this value to false. If you change your mind at some point, update this value and re-deploy the service.
Once the defaults.json file has been updated, re-run `python .\raft.py service deploy`. Most deployments complete in about 15 minutes.
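The deployment-name rules above can be checked locally before running the deploy command. A minimal sketch (the deployment script performs its own validation; `valid_deployment_name` is a hypothetical helper):

```python
import re

def valid_deployment_name(name):
    # Stated rules: only letters or numbers, no capital letters,
    # no dashes, at most 24 characters long.
    return bool(re.fullmatch(r'[a-z0-9]{1,24}', name))

print(valid_deployment_name('mydeployment1'))  # True
print(valid_deployment_name('My-Deployment'))  # False
```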

@@ -0,0 +1,120 @@
# On-boarding your own tools
RAFT allows you to on-board any tool that is packaged into a docker container.
This means that any public tool (or private tool you develop) is available
to include in the test suite you design to test your services.
**At this time only Linux containers are supported.**
## Tool Configuration
Tools are packaged in containers. These containers are then deployed into a [container instance](https://azure.microsoft.com/en-us/services/container-instances/).
[See subscription default quotas and limits](https://docs.microsoft.com/en-us/azure/container-instances/container-instances-quotas#service-quotas-and-limits)
to understand your scaling limits.
See [Container Instance Pricing](https://azure.microsoft.com/en-us/pricing/details/container-instances/) to understand the cost
implications of running your tool.
### 1. Create the folder
Your first step in integrating a tool is to create a folder under cli/raft-utils/tools.
Give your tool a name; this is the name that you will use in the jobDefinition "toolName" field.
The name is case-sensitive.
Let's assume we have a tool called demoTool. Create the folder.</br>
![](../images/demotool-folder.jpg)
### 2. Create config.json
The config.json file defines the commands that will run your tool. It also defines
the resources needed by the container.
```
{
"container" : "demoTool/demoTool-stable",
"run" : {
"command" : "bash",
"arguments" : ["-c",
"cd $RAFT_RUN_DIRECTORY; ln -s $RAFT_WORK_DIRECTORY /demoTool; python3 run.py install; python3 run.py" ]
},
"idle" : {
"command" : "bash",
"arguments" : ["-c", "echo DebugMode; while true; do sleep 100000; done;"]
}
}
```
In this example, the "container" field defines the docker image. By default Docker Hub is
searched for the image.
#### Private Registries
If you want to specify an image from a private registry, create a secret whose name starts with the
string `PrivateRegistry` in the key vault.
This secret is a JSON blob consisting of the repository name, user name, and password.
For example:
```
{
"repository" : "myprivateRegistry.azurecr.io",
"user" : "myUsername",
"password" : "myPassword"
}
```
When the orchestrator starts, it will search the key vault for secrets that begin with `PrivateRegistry` and register
each one so it is available for use.
These key vault entries will be registered in alphabetical order of the secret names.
Registries will be searched for the requested image in the same order, starting with the public Docker registry, and
then moving through the private registries.
#### Container resources
The "containerConfig" section defines the set of resources that will be allocated to the container.
"CPUs", "MemorySizeInGB", and "GPUs" can be defined.
#### Commands
The `run` and `idle` commands are used in two different job launch scenarios. `run` is used to actually run
a job, and `idle` is used when you want to interactively work with the container.
The "command" needs to be a command that is available on the container. In the "arguments" you can reference
environment variables that have been preset for you.
The environment variables are:
* RAFT_JOB_ID</br>
This is the ID of the currently executing job; you will need it if you generate events.
* RAFT_CONTAINER_GROUP_NAME
* RAFT_CONTAINER_NAME
* RAFT_APP_INSIGHTS_KEY</br>
Use the app insights key if you want to log to the app insights instance created with the service.
* RAFT_WORK_DIRECTORY</br>
In the work directory you will find a `task-config.json` that contains any data defined in the
job definition `toolConfiguration` section of the task. The format of this data is under your control.
For example, a RESTler job definition might contain the following toolConfiguration
```
"toolConfiguration" : {
"task": "TestFuzzLean",
"agentConfiguration": {
"resultsAnalyzerReportTimeSpanInterval": "00:01:00"
},
"runConfiguration": {
"previousStepOutputFolderPath": "/job-compile/0",
"useSsl": true,
"producerTimingDelay": 5
}
}
```
This data fragment would be available in the task-config.json file.
* RAFT_RUN_DIRECTORY</br>
When the tool is uploaded to the file share via the CLI command `python raft.py service upload-utils`,
a unique file share is created and mounted to the container as read-only. This gives you an easy way to reference any scripts
or executables you launch.
* RAFT_RUN_CMD</br>
The command that was run when the container was launched.
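Putting these environment variables together, a tool's `run.py` might read its settings as sketched below. This assumes only the contract described above (RAFT_WORK_DIRECTORY contains a `task-config.json`); the demo uses a stand-in directory rather than a live RAFT work share:

```python
import json
import os
import tempfile

def load_task_config(work_dir=None):
    # The work directory comes from RAFT_WORK_DIRECTORY and contains
    # task-config.json with the job definition's toolConfiguration data.
    work_dir = work_dir or os.environ['RAFT_WORK_DIRECTORY']
    with open(os.path.join(work_dir, 'task-config.json')) as f:
        return json.load(f)

# Stand-in work directory for local experimentation.
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, 'task-config.json'), 'w') as f:
    json.dump({'task': 'TestFuzzLean'}, f)
print(load_task_config(demo_dir)['task'])  # TestFuzzLean
```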
### 3. Swagger schema for your tool (optional)
If you specify a swagger document called schema.json in your tool folder, it will be folded in under the toolConfiguration
node in the service swagger. All component types referenced in the swagger must be defined in the file.
See the RESTler tool schema.json file as an example.

@@ -0,0 +1,11 @@
# Telemetry
We collect anonymous telemetry metrics to help us improve the service. You can find all the data types that are collected in the
`telemetry.fs` file.
## Opting out
The easiest way to opt out is to run the deployment with the metricsOptIn field in the defaults.json file set to false.
You can also manually opt out by clearing the value from the setting `RAFT_METRICS_APP_INSIGHTS_KEY` in the API service and the orchestrator function app.
Do not delete the setting; simply clear the value.

docs/deploying/tools.md Normal file
@@ -0,0 +1,8 @@
# Tools deployed by default
Two tools are deployed by default. [RESTler](https://github.com/microsoft/restler) and [ZAP](https://www.zaproxy.org/).
You can see their configuration under the `cli/raft-utils/tools` folder.
See an explanation of the `config.json` file in [How a job executes](../how-it-works/how-a-job-executes.md).

@@ -0,0 +1,12 @@
# Validating your installation
Once your deployment has completed, you can check to see if the service is running by
executing the CLI command:
`python raft.py service info`
This will return information about the service including the version of the service installed and the date and time the service was last started.
There are a number of samples in the sample folder. These are set up so that you can run
the Python scripts directly to exercise the service. Take some time to look at the log files and
get familiar with the output.

docs/faq.md Normal file
@@ -0,0 +1,15 @@
# FAQ
**I see 404's for GET /robots933456.txt in my Application Insights log**
</br>We deploy containers to the webapp. See this [explanation](https://docs.microsoft.com/en-us/azure/app-service/containers/configure-custom-container#robots933456-in-logs).
**Why use file shares?**
</br>File shares can be mounted to the containers running tools. This provides an easy way to share data.
The use of file shares makes it easy to upload your custom swagger files and authentication configuration files.
You can also view real-time log changes and results in the file share during the job run,
and mount these file shares to your computer for a better user experience.
To mount the file share, use the Connect button in the Azure portal.
![Connect File Share Image](images/mount_file_share.jpg)

@@ -0,0 +1,14 @@
# Azure Resources
Raft is a service that is built on [Azure](https://azure.microsoft.com/en-us/). All you need to get started is an Azure [subscription](https://azure.microsoft.com/en-us/free/).
The deployment script will create all the resources that you need for the service. Use the [Azure portal](https://portal.azure.com) to view the created resources.
The resources that are used by the service are:
* Application Insights (for logging)
* App Service (for the front-end API)
* App Service Plan (VMs used for the App Service and Azure Functions)
* Event Grid Domain (for webhooks)
* Key vault (for secret management)
* Function App (for service orchestration)
* Service Bus (for messaging between components of the service)
* Storage accounts (one for the service use and one for all your data)

@@ -0,0 +1,74 @@
# How a job executes
Jobs are submitted to the front-end service via a job definition written in JSON. Every job has one or more tasks. The `toolConfiguration`
portion of the job definition is information that is specific to the task that will run.
This data is passed to the task by being written into a file called `task-config.json` and placed into the file share that
is the working directory for the task. The task can then deserialize and validate the data as needed.
In the CLI download, under the raft-utils/tools folder, you will find a folder for each installed tool. By default there are
tools for RESTler and ZAP. In each tool directory you will find a file called `config.json`. This file contains the configuration
data for how to run the tool.
Here is the example for the ZAP tool.
```
{
"container" : "owasp/zap2docker-stable",
"containerConfig" : {
"CPUs" : 2,
"MemorySizeInGB" : 1.0
},
"run" : {
"command" : "bash",
"arguments" : ["-c",
"cd $RAFT_RUN_DIRECTORY; ln -s $RAFT_WORK_DIRECTORY /zap/wrk; python3 run.py install; python3 run.py" ]
},
"idle" : {
"command" : "bash",
"arguments" : ["-c", "echo DebugMode; while true; do sleep 100000; done;"]
}
}
```
The "container" specifies the name of the docker container. The "containerConfig" allows you to specify the amount of compute resources you need for your tool.
The "run" structure allows you to specify the command to run on the container and its arguments. The "idle" structure is what is run when
the tool is marked as idling (the "isIdling" flag).
The available commands and arguments are limited by what is available in the container you select.
The schema for the tool configuration is
```
type ToolCommand =
{
command : string
arguments: string array
}
type ToolConfig =
{
container : string
containerConfig : ContainerConfig option
run : ToolCommand
idle : ToolCommand
}
```
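A tool's `config.json` can be sanity-checked against this schema from Python. This is an illustrative sketch (the service performs its own validation, and `validate_tool_config` is a hypothetical helper):

```python
def validate_tool_config(cfg):
    # Per the schema above: 'container' is a string, 'run' and 'idle'
    # are commands, and 'containerConfig' is optional.
    def is_command(c):
        return (isinstance(c.get('command'), str)
                and isinstance(c.get('arguments'), list))
    assert isinstance(cfg.get('container'), str), 'container is required'
    assert is_command(cfg.get('run', {})), 'run needs command/arguments'
    assert is_command(cfg.get('idle', {})), 'idle needs command/arguments'
    return True

zap_cfg = {
    "container": "owasp/zap2docker-stable",
    "containerConfig": {"CPUs": 2, "MemorySizeInGB": 1.0},
    "run": {"command": "bash", "arguments": ["-c", "python3 run.py"]},
    "idle": {"command": "bash", "arguments": ["-c", "sleep infinity"]},
}
print(validate_tool_config(zap_cfg))  # True
```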
## Environment Variables
A number of pre-defined environment variables are available in the container for your use.
* RAFT_JOB_ID
* RAFT_TASK_INDEX
* RAFT_CONTAINER_GROUP_NAME
* RAFT_CONTAINER_NAME
* RAFT_APP_INSIGHTS_KEY
* RAFT_WORK_DIRECTORY
* RAFT_RUN_DIRECTORY
* RAFT_SB_OUT_SAS</br>
This can be used to write events out to the service bus so your tool can generate job status and webhook data.
Paths specified in the environment variables are Linux-specific paths.

@@ -0,0 +1,7 @@
# Job Output
Every tool has its own way of generating output. Within the container, the working directory is a mounted file share.
The tool can create folders and write output in whatever form is desired.
You can launch commands that connect to these output sources and generate job status or webhook data. See the
ZAP `scan.py` script as an example.

@@ -0,0 +1,11 @@
# Overview
Job configurations are submitted to the RAFT service front-end.
A message is put onto the service bus and processed by the back-end orchestrator.
Once the message is received, an Azure Storage file share is created with the job ID as the share name.
The orchestrator creates a container group for each job. The container group name is the job ID, and you can see it in the portal in the resource group where the service was deployed.
Each task runs as a container within the container group.
The file share is mounted to each container in the container group as the "working directory" where the running tool should write all its results.
When an agent processes the task output, it may send job progress events on the service bus. These events are recorded in the job status Azure table by job ID.
Status can be retrieved with the CLI or with a REST API call to the RAFT service.
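The retrieval loop behind such polling can be sketched as follows; `fetch_status` stands in for the REST call (or `RaftCLI.poll`), and the terminal state names here are illustrative assumptions, not the service's confirmed values:

```python
import time

def poll_job_status(fetch_status, interval_seconds=10, sleep=time.sleep):
    # Keep asking the service for the job state until it is terminal.
    while True:
        state = fetch_status()
        if state in ('Completed', 'Error', 'TimedOut'):
            return state
        sleep(interval_seconds)

# Demo against a stubbed status sequence instead of a live deployment.
states = iter(['Creating', 'Running', 'Completed'])
print(poll_job_status(lambda: next(states), sleep=lambda _: None))  # Completed
```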

Binary files added (not shown):
- docs/images/connect.png (14 KiB)
- docs/images/containerInstanceInPortal.jpg (9.1 KiB)
- docs/images/containersTab.jpg (19 KiB)
- docs/images/demotool-folder.jpg (4.2 KiB)
- docs/images/mount_file_share.jpg (15 KiB)
- docs/images/postman-auth.png (20 KiB)
- docs/images/postman-inherit-auth.jpg (11 KiB)
- docs/images/postman-request-token.jpg (40 KiB)

Some files were not shown because too many files changed in this diff.