JavaScript Codegen Quick Start for Tests
This page helps you write and run tests quickly for JavaScript Codegen SDKs, including both high-level and rest-level clients. We first show how to run tests in record and playback mode, and then explain how to add test cases.
Table of contents
- Background
- Prerequisites
- How to run tests
- How to add tests
Background
The Azure SDK test framework uses the test-recorder library, which in turn rests upon an HTTP recording system (the test proxy) that enables tests that depend on network interaction to be run offline.
Please note that this quickstart is based on the 3.x.y versions of the recorder tool (`@azure-tools/test-recorder`).
Prerequisites
- Rush 5.x
  - Install/update Rush globally via `npm install -g @microsoft/rush`
- Any of the LTS versions of Node.js
- A C++ compiler toolchain and Python (for compiling machine-code modules)
  - Refer here for more details

To be able to leverage the asset-sync workflow:

- Install PowerShell
  - Make sure the `pwsh` command works at this step (if you follow the above link, `pwsh` is typically added to the system environment variables by default)
- Add `dev-tool` to the `devDependencies` in the `package.json` (see the example after this list).
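For reference, the `dev-tool` entry in `package.json` usually looks like the sketch below; the package name `@azure/dev-tool` and the version range shown here are assumptions, so copy the exact entry from a neighboring package in the repo:

```json
{
  "devDependencies": {
    "@azure/dev-tool": "^1.0.0"
  }
}
```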
How to run tests
This section describes how to run the SDK tests. If you want to run the tests of a specific project, go to that project's folder and execute `rushx test`. All of the tests will automatically run both in NodeJS and in the browser. To target these environments individually, you can run `rushx test:node` and `rushx test:browser`. Let's take `purview-catalog-rest` as an example.
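For instance, to run only the NodeJS tests or only the browser tests from the example package folder:

```
> cd sdk/purview/purview-catalog-rest
sdk/purview/purview-catalog-rest> rushx test:node
sdk/purview/purview-catalog-rest> rushx test:browser
```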
If you have no concept of `recording`, `playback`, or `TEST_MODE`, we highly recommend that you read this doc first. We'll touch upon these concepts in the content below.
Code structure
If this is the first time you generate the SDK, you can enable the config `generate-test: true` in `README.md`. We'll generate simple utils and a sample test file for you.

generate-test: true

They only contain the basics for testing; you need to replace them with your own utilities and test cases. The overall structure will be similar to the one below:
Note: the structure of the `test` folder differs slightly between high-level and rest-level clients. In HLC we only have one file under the `test` folder, which contains everything, while in RLC we separate the sample test and the utils.
sdk/
├─ purview/
│ ├─ purview-catalog-rest/
│ │ ├─ src/
│ │ │ ├─ ...
│ │ ├─ recordings/
│ │ │ ├─ node/
│ │ │ ├─ browsers/
│ │ ├─ test/
│ │ │ ├─ public/
│ │ │ │ ├─ utils/
│ │ │ │ │ ├─ recordedClient.ts
│ │ │ │ ├─ sampleTest.spec.ts
Run tests in record mode
Before running the tests, it's advised to update the dependencies and build the project by running the command `rush update && rush build -t <package-name>`. Please note that this command is time-consuming and may take around 10 minutes; you can refer here for more details.
> rush update
> rush build -t @azure-rest/purview-catalog
Then we can go to the project folder to run the tests. By default, if you don't specify `TEST_MODE`, it will run the previously recorded tests.
> cd sdk/purview/purview-catalog-rest
sdk/purview/purview-catalog-rest> rushx test
If this is the first time you run the tests, they may fail with the message below because no recordings are found.
[test-info] ===TEST_MODE=undefined===
...
[node-tests] 2 failing
[node-tests]
[node-tests] 1) My test
[node-tests] "before each" hook for "sample test":
[node-tests] RecorderError: Start request failed.
To record or update our recordings, we need to set the environment variable `TEST_MODE` to `record`, then run `rushx test`.
# Windows
> set TEST_MODE=record
> rushx test
# Linux / Mac
> export TEST_MODE=record
> rushx test
This time we should get logs similar to the following. Go to the folder `purview-catalog-rest/recordings` to view the recording files.
[test-info] ===TEST_MODE="record"===
...
[node-tests] My test
[node-tests] √ sample test
[node-tests]
[node-tests] 1 passing (223ms)
Run tests in playback mode
If we have existing recordings (that is, the tests have already been run and the HTTP recordings have been generated), we can run the tests in `playback` mode.
# Windows
> set TEST_MODE=playback
> rushx test
# Linux / Mac
> export TEST_MODE=playback
> rushx test
How to push test recordings to assets repo
We need to push the test recording files to the assets repo after testing your test cases.
Notice: Before pushing your recording files, you must confirm that you are able to push recordings to the "azure-sdk-assets" repo; you need write access to the assets repo. See Permissions to Azure/azure-sdk-assets.
Push test recordings
New Package - No recorded tests
This section assumes that your package is new to the JS repo and that you're trying to onboard your tests with the recorder and the asset-sync workflow.
Generate an `sdk/<service-folder>/<package-name>/assets.json` file by running the following command.
npx dev-tool test-proxy init
Note: If you install `dev-tool` globally, you don't need the `npx` prefix in the above command.
This command will generate an `assets.json` file with an empty tag.
Example `assets.json` with an empty tag:
{
"AssetsRepo": "Azure/azure-sdk-assets",
"AssetsRepoPrefixPath": "js",
"TagPrefix": "js/network/arm-network",
"Tag": ""
}
Then continue with the next step, Existing package - Tests have been pushed before.
Existing package - Tests have been pushed before
At this point, you should have an `assets.json` file under your SDK: `sdk/<service-folder>/<package-name>/assets.json`.
With asset sync enabled, there is one extra step that must be taken before you create a PR with changes to recorded tests: you must push the new recordings to the assets repo. This is done with the following command:
Notice: the tests have to be recorded using `TEST_MODE=record` first so that the recording files are generated; then you can push them to the assets repo.
npx dev-tool test-proxy push
This command will:
- Push your local recordings to a tag in the Azure/azure-sdk-assets repo, and
- Update the `assets.json` in your package root to reference the newly created tag.
You should stage and commit the `assets.json` update as part of your PR. If you don't run the `push` command before creating a PR, the CI (and anyone else who tries to run your recorded tests) will use the old recordings, which will cause failures.
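Putting it together, a typical flow for updating recordings looks like this (Linux / Mac shell shown; the commit message is just an example):

```
# Re-record the tests for your package
> export TEST_MODE=record
> rushx test

# Push the new recordings to the assets repo and update assets.json
> npx dev-tool test-proxy push

# Stage and commit the updated assets.json as part of your PR
> git add assets.json
> git commit -m "Update recordings tag"
```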
How to find recording files
Find local recording files
You can find your local recording files in `./azure-sdk-for-js/.assets`.
If you want to locate your recordings quickly, you can open the `.breadcrumb` file and search for your package to see which folder it lives in.
Find recording files in assets repo
You can get the tag from the `assets.json` in your package root, which points to your recordings in the Azure/azure-sdk-assets repo.
Example `assets.json` from the "arm-network" SDK:
{
"AssetsRepo": "Azure/azure-sdk-assets",
"AssetsRepoPrefixPath": "js",
"TagPrefix": "js/network/arm-network",
"Tag": "js/network/arm-network_bec01aa795"
}
The recordings are located at https://github.com/Azure/azure-sdk-assets/tree/js/network/arm-network_bec01aa795
How to add tests
Adding runnable tests requires both a good understanding of the service and knowledge of the client and the test framework. Feel free to contact the SDK developers if you encounter issues with the client or the test framework.
Before adding tests
Client authentication
There are several ways to authenticate to Azure; the most common ones are Azure AD OAuth2 authentication and API key authentication. Before adding tests, you should know which methods your service supports and ensure that you or your service principal have the rights to perform the actions in the tests.
AzureAD OAuth2 Authentication
If your service uses Azure AD OAuth2 tokens for authentication, a common solution is to create an application with its service principal and grant the service principal RBAC access to the Azure resource of your service.
The client requires the following three variables for a service principal that authenticates with a client ID/secret:
AZURE_TENANT_ID
AZURE_CLIENT_ID
AZURE_CLIENT_SECRET
The recommended practice is to store these three values in environment variables called `AZURE_TENANT_ID`, `AZURE_CLIENT_ID`, and `AZURE_CLIENT_SECRET`. To set an environment variable, use the following commands:
# Windows
> set AZURE_TENANT_ID=<value>
# Linux / Mac
> export AZURE_TENANT_ID=<value>
To ensure that our recorder can record the OAuth traffic, we have to leverage the `createTestCredential` helper to prepare a test credential. Please follow the code snippet below to create your client.
import { createTestCredential } from "@azure-tools/test-credential";
const credential = createTestCredential();
// Create your client using the test credential.
new MyServiceClient(<endpoint>, credential);
To avoid storing sensitive info (such as your Azure endpoints, keys, secrets, etc.) in the recordings, we use sanitizers to mask those values with fake ones or remove them; `RecorderStartOptions` helps us here. In our generated sample file we have the following sanitizer code:
const envSetupForPlayback: Record<string, string> = {
ENDPOINT: "https://endpoint",
AZURE_CLIENT_ID: "azure_client_id",
AZURE_CLIENT_SECRET: "azure_client_secret",
AZURE_TENANT_ID: "88888888-8888-8888-8888-888888888888",
AZURE_SUBSCRIPTION_ID: "azure_subscription_id"
};
const recorderEnvSetup: RecorderStartOptions = {
envSetupForPlayback,
};
//...
await recorder.start(recorderEnvSetup);
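In your tests, read these variables through the `env` helper from `@azure-tools/test-recorder` instead of `process.env`, so that the fake values above are used during playback. A minimal sketch (the `ENDPOINT` name follows the sample above):

```typescript
import { env } from "@azure-tools/test-recorder";

// In record mode this resolves to the real value from your environment;
// in playback mode it resolves to the fake value from envSetupForPlayback.
const endpoint = env.ENDPOINT ?? "https://endpoint";
```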
API Key Authentication
API key authentication hits the service's endpoint directly, so that traffic will be recorded. It doesn't require any customization in the tests. However, we must secure the sensitive data so that it doesn't leak into our recordings, so add a sanitizer to replace your API keys. You can read more about how to add a sanitizer here.
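For example, if your package reads the key from an environment variable, you can reuse the `envSetupForPlayback` mechanism shown above to mask it. A minimal sketch, assuming a hypothetical variable name `MY_SERVICE_API_KEY` and the `AzureKeyCredential` class from `@azure/core-auth`:

```typescript
import { AzureKeyCredential } from "@azure/core-auth";
import { env, RecorderStartOptions } from "@azure-tools/test-recorder";

const recorderEnvSetup: RecorderStartOptions = {
  envSetupForPlayback: {
    ENDPOINT: "https://endpoint",
    // Fake value that replaces the real key in the recordings.
    MY_SERVICE_API_KEY: "api_key",
  },
};

// Inside beforeEach, after `await recorder.start(recorderEnvSetup)`:
// const credential = new AzureKeyCredential(env.MY_SERVICE_API_KEY ?? "api_key");
// new MyServiceClient(env.ENDPOINT ?? "https://endpoint", credential, recorder.configureClientOptions({}));
```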
Example 1: Basic RLC test interaction and recording for Azure data-plane service
In the code structure section we described that we'll generate a sample file for you; if this is the first time you write test cases, you can grow your own based on it.
This simple test creates a resource and checks that the service handles it correctly in the project `purview-catalog-rest`. Below are the steps:
- Step 1: Create your test file and add one test case with resource creation. Here we have the purview catalog glossary test file `glossary.spec.ts` and one case named `Should create a glossary`. Or rename the `sampleTest.spec.ts` file and its case `sample test`.
- Step 2: Add the utility method `createClient` in `public/utils/recordedClient.ts` to share the `PurviewCatalogClient` creation.
  - Call `createTestCredential` to init your credential and refer here for more details
  - Wrap the `options` with test options by calling `recorder.configureClientOptions(options)`
- Step 3: In the `glossary.spec.ts` file, call `createClient` to prepare the client and call `client.path("/atlas/v2/glossary").post()` to create our glossary resource under our case `Should create a glossary`
- Step 4 [Optional]: Specify the environment variables that should be faked in the recordings in the map `envSetupForPlayback` under the file `public/utils/recordedClient.ts`.
- Step 5: In the `glossary.spec.ts` file, add the necessary assertions to your test case
- Step 6: Run and record your test cases
glossary.spec.ts
import { Recorder } from "@azure-tools/test-recorder";
import { assert } from "chai";
// isUnexpected narrows the response union so we can safely access the error body
import { isUnexpected, PurviewCatalogClient } from "../../src";
import { createClient, createRecorder } from "./utils/recordedClient";
describe("My test", () => {
let recorder: Recorder;
// Step 3: Declare your own variables
let client: PurviewCatalogClient;
let glossaryName: string;
beforeEach(async function () {
recorder = await createRecorder(this);
// Step 3: Create your client
client = await createClient(recorder);
glossaryName = "js-testing";
});
afterEach(async function () {
await recorder.stop();
});
// Step 1: Create your test case
it("Should create a glossary", async () => {
// Step 3: Add your test cases
const glossaryResponse = await client.path("/atlas/v2/glossary").post({
body: {
name: glossaryName,
shortDescription: "Example Short Description",
longDescription: "Example Long Description",
language: "en",
usage: "Example Glossary",
},
});
if (isUnexpected(glossaryResponse)) {
throw new Error(glossaryResponse.body?.error.message);
}
// Step 5: Add your assertions
assert.strictEqual(glossaryResponse.status, "200");
});
});
utils/recordedClient.ts
import { Context } from "mocha";
import { Recorder, RecorderStartOptions } from "@azure-tools/test-recorder";
import PurviewCatalog, { PurviewCatalogClient } from "../../../src";
import { createTestCredential } from "@azure-tools/test-credential";
import { ClientOptions } from "@azure-rest/core-client";
const envSetupForPlayback: Record<string, string> = {
ENDPOINT: "https://endpoint",
AZURE_CLIENT_ID: "azure_client_id",
AZURE_CLIENT_SECRET: "azure_client_secret",
AZURE_TENANT_ID: "88888888-8888-8888-8888-888888888888",
SUBSCRIPTION_ID: "azure_subscription_id",
// Step 4: Add environment variables you'd like to mask the values in recordings
PURVIEW_CATALOG_GLOSSARY_ENV: "glossary_custom_env",
};
const recorderEnvSetup: RecorderStartOptions = {
envSetupForPlayback,
};
/**
* Should be called first in the test suite to make sure environment variables are
* read before they are being used.
*/
export async function createRecorder(context: Context): Promise<Recorder> {
const recorder = new Recorder(context.currentTest);
await recorder.start(recorderEnvSetup);
return recorder;
}
// Step 2: Add your client creation factory
export function createClient(recorder: Recorder, options?: ClientOptions): PurviewCatalogClient {
// Use createTestCredential to record AAD traffic so it could work in playback mode
const credential = createTestCredential();
// Use recorder.configureClientOptions to add the recording policy in the client options
const client = PurviewCatalog("<endpoint>", credential, recorder.configureClientOptions(options));
return client;
}
Example 2: Basic HLC test interaction and recording for Azure management service
In the code structure section we described that if your SDK is generated based on HLC, we'll generate a sample test named `sampleTest.ts` for you.
Next, we'll take the package `@azure/arm-monitor` as an example to guide you through adding your own test case. Below are the steps:
- Step 1: Create your test file and add one test case with resource creation. Here we have the monitor test file `monitor.spec.ts` and one case named `Should create diagnosticSettings`. Or rename the `sampleTest.spec.ts` file and its case `sample test`.
- Step 2: Add declarations for common variables, e.g. the monitor client, its diagnostic name, and the subscription id.
- Step 3: Create the monitor client in `beforeEach` and call `client.diagnosticSettings.createOrUpdate` in the test case
  - Read the `subscriptionId` from `env`
  - Call `createTestCredential` to init your credential and refer here for more details
  - Wrap the `options` with test options by calling `recorder.configureClientOptions(options)`
- Step 4 [Optional]: Specify the environment variables that should be faked in the recordings in the map `envSetupForPlayback`.
- Step 5: Add the necessary assertions to your test case
- Step 6: Run and record your test cases
monitor.spec.ts
/*
* Copyright (c) Microsoft Corporation.
* Licensed under the MIT License.
*
* Code generated by Microsoft (R) AutoRest Code Generator.
* Changes may cause incorrect behavior and will be lost if the code is regenerated.
*/
import { env, Recorder, RecorderStartOptions } from "@azure-tools/test-recorder";
import { createTestCredential } from "@azure-tools/test-credential";
import { assert } from "chai";
import { Context } from "mocha";
import { MonitorClient } from "../src/monitorClient";
// Step 4: Add environment variables you'd like to mask the values in recordings
const replaceableVariables: Record<string, string> = {
AZURE_CLIENT_ID: "azure_client_id",
AZURE_CLIENT_SECRET: "azure_client_secret",
AZURE_TENANT_ID: "88888888-8888-8888-8888-888888888888",
SUBSCRIPTION_ID: "azure_subscription_id",
};
const recorderOptions: RecorderStartOptions = {
envSetupForPlayback: replaceableVariables,
};
// Step 1: prepare the test file and test case
describe("Monitor client", () => {
let recorder: Recorder;
// Step 2: declare common variables
let subscriptionId: string;
let client: MonitorClient;
let diagnosticName: string;
beforeEach(async function (this: Context) {
recorder = new Recorder(this.currentTest);
await recorder.start(recorderOptions);
// Step 3: create clients
subscriptionId = env.SUBSCRIPTION_ID || "";
const credential = createTestCredential();
client = new MonitorClient(credential, subscriptionId, recorder.configureClientOptions({}));
diagnosticName = "my-test-diagnostic-name";
});
afterEach(async function () {
await recorder.stop();
});
it("should create diagnosticSettings", async function () {
// Step 3: call createOrUpdate to prepare resource
const res = await client.diagnosticSettings.createOrUpdate("workflowsId", diagnosticName, {
storageAccountId: "storageId",
workspaceId: "workspaceId",
eventHubAuthorizationRuleId: "authorizationId",
eventHubName: "eventhubName",
metrics: [],
logs: [
{
category: "WorkflowRuntime",
enabled: true,
retentionPolicy: {
enabled: false,
days: 0,
},
},
],
});
// Step 5: Add assertions
assert.equal(res.name, diagnosticName);
});
});
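With the test case in place, Step 6 is just the record-and-playback flow described earlier, run from the package folder:

```
# Record the new test case, then verify it in playback mode (Linux / Mac)
> export TEST_MODE=record
> rushx test
> export TEST_MODE=playback
> rushx test
```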