Create new ImageAnalysis RLC package (#28027)

### Packages impacted by this PR
@azure-rest/ai-vision-image-analysis

### Issues associated with this PR
N/A

### Describe the problem that is addressed by this PR
Initial add of the ImageAnalysis RLC; opening the PR to get the gates running and start
knocking those problems down...

### What are the possible designs available to address the problem? If
there are more than one possible design, why was the one in this PR
chosen?
A DPG client...

### Are there test cases added in this PR? _(If not, why?)_
Yes

### Provide a list of related PRs _(if any)_
TypeSpec PR: https://github.com/Azure/azure-rest-api-specs/pull/26146

### Command used to generate this PR: _(Applicable only to SDK release request PRs)_

### Checklists
- [ ] Added impacted package name to the issue description
- [ ] Does this PR need any fixes in the SDK Generator? _(If so, create an Issue in the [Autorest/typescript](https://github.com/Azure/autorest.typescript) repository and link it here)_
- [ ] Added a changelog (if necessary)
Ryan Hurey committed 2024-01-09 16:08:19 -08:00 (via GitHub)
Parent: 60a6bd5a3a
Commit: 56426f8de0
No key found matching this signature
GPG key ID: 4AEE18F83AFDEB23
51 changed files, 4810 additions and 1756 deletions

Some file diffs are not shown because of their large size.

View file

@ -2148,6 +2148,11 @@
"projectFolder": "sdk/playwrighttesting/arm-playwrighttesting",
"versionPolicyName": "management"
},
{
"packageName": "@azure-rest/ai-vision-image-analysis",
"projectFolder": "sdk/vision/ai-vision-image-analysis-rest",
"versionPolicyName": "client"
},
{
"packageName": "@azure/arm-hybridnetwork",
"projectFolder": "sdk/hybridnetwork/arm-hybridnetwork",

View file

@ -0,0 +1,11 @@
{
"plugins": ["@azure/azure-sdk"],
"extends": ["plugin:@azure/azure-sdk/azure-sdk-base"],
"rules": {
"@azure/azure-sdk/ts-modules-only-named": "warn",
"@azure/azure-sdk/ts-apiextractor-json-types": "warn",
"@azure/azure-sdk/ts-package-json-types": "warn",
"@azure/azure-sdk/ts-package-json-engine-is-present": "warn",
"tsdoc/syntax": "warn"
}
}

View file

@ -0,0 +1,7 @@
# Release History
## 1.0.0-beta.1 (2024-01-09)
### Features Added
Initial release of Image Analysis SDK. Uses the generally available [Computer Vision REST API (2023-10-01)](https://eastus.dev.cognitive.microsoft.com/docs/services/Cognitive_Services_Unified_Vision_API_2023-10-01).

View file

@ -0,0 +1,200 @@
# Azure AI Vision Image Analysis client library for JavaScript
The Image Analysis service provides AI algorithms for processing images and returning information about their content. In a single service call, you can extract one or more visual features from the image simultaneously, including getting a caption for the image, extracting text shown in the image (OCR) and detecting objects. For more information on the service and the supported visual features, see [Image Analysis overview][image_analysis_overview], and the [Concepts][image_analysis_concepts] page.
Use the Image Analysis client library to:
* Authenticate against the service
* Set what features you would like to extract
* Upload an image for analysis, or send an image URL
* Get the analysis result
[Product documentation][image_analysis_overview]
| [Samples](https://github.com/Azure/azure-sdk-for-js/tree/rhurey/ia_dev/sdk/vision/ai-vision-image-analysis-rest/samples)
| [Vision Studio][vision_studio]
| [API reference documentation](https://learn.microsoft.com/javascript/api/overview/azure/visual-search)
## Getting started
### Currently supported environments
- [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule)
- Latest versions of Safari, Chrome, Edge, and Firefox.
See our [support policy](https://github.com/Azure/azure-sdk-for-js/blob/main/SUPPORT.md) for more details.
### Prerequisites
- An [Azure subscription](https://azure.microsoft.com/free).
- A [Computer Vision resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesComputerVision) in your Azure subscription.
* You will need the key and endpoint from this resource to authenticate against the service.
* You can use the free pricing tier (`F0`) to try the service, and upgrade later to a paid tier for production.
* Note that in order to run Image Analysis with the `Caption` or `Dense Captions` features, the Azure resource needs to be from one of the following GPU-supported regions: `East US`, `France Central`, `Korea Central`, `North Europe`, `Southeast Asia`, `West Europe`, or `West US`.
### Install the `@azure-rest/ai-vision-image-analysis` package
Install the Image Analysis client library for JavaScript with `npm`:
```bash
npm install @azure-rest/ai-vision-image-analysis
```
### Browser support
#### JavaScript Bundle
To use this client library in the browser, first, you need to use a bundler. For details on how to do this, please refer to our [bundling documentation](https://aka.ms/AzureSDKBundling).
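If it helps to see the browser side concretely, here is a minimal sketch of the same client creation using ES module syntax, assuming a bundler such as webpack or esbuild resolves the package; replace the placeholders with your own endpoint and key.

```javascript
// The same client creation as in Node.js, expressed with ES module imports
// so a bundler can resolve the package for the browser.
import createClient from "@azure-rest/ai-vision-image-analysis";
import { AzureKeyCredential } from "@azure/core-auth";

const client = createClient("<your_endpoint>", new AzureKeyCredential("<your_key>"));
```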
## Key concepts
Once you've initialized an `ImageAnalysisClient`, you need to select one or more visual features to analyze. The options are specified by the enum class `VisualFeatures`. The following features are supported:
1. `VisualFeatures.Caption`: ([Examples](#analyze-an-image-from-url) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/rhurey/ia_dev/sdk/vision/ai-vision-image-analysis-rest/samples)) Generate a human-readable sentence that describes the content of an image.
1. `VisualFeatures.Read`: ([Examples](#extract-text-from-an-image-url) | [Samples](https://github.com/Azure/azure-sdk-for-js/tree/rhurey/ia_dev/sdk/vision/ai-vision-image-analysis-rest/samples)) Also known as Optical Character Recognition (OCR). Extract printed or handwritten text from images.
1. `VisualFeatures.DenseCaptions`: Dense Captions provides more details by generating one-sentence captions for up to 10 different regions in the image, including one for the whole image.
1. `VisualFeatures.Tags`: Extract content tags for thousands of recognizable objects, living beings, scenery, and actions that appear in images.
1. `VisualFeatures.Objects`: Object detection. This is similar to tagging, but focused on detecting physical objects in the image and returning their location.
1. `VisualFeatures.SmartCrops`: Used to find a representative sub-region of the image for thumbnail generation, with priority given to include faces.
1. `VisualFeatures.People`: Locate people in the image and return their location.
For more information about these features, see [Image Analysis overview][image_analysis_overview], and the [Concepts][image_analysis_concepts] page.
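As a quick illustration of the RLC shape, the selected features are simply an array of strings passed in the request's query parameters; a minimal sketch is below (full request shapes are shown in the Examples section).

```javascript
// Pick any subset of the features listed above; they are passed as plain strings.
const features = ["Caption", "Read", "Tags"];

// The array is then supplied on the analyze request as `queryParameters: { features }`,
// as shown in full in the Examples section below.
const queryParameters = { features };
```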
### Supported image formats
Image Analysis works on images that meet the following requirements:
* The image must be presented in JPEG, PNG, GIF, BMP, WEBP, ICO, TIFF, or MPO format
* The file size of the image must be less than 20 megabytes (MB)
* The dimensions of the image must be greater than 50 x 50 pixels and less than 16,000 x 16,000 pixels
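If you want to fail fast before calling the service, a small pre-flight check against these limits can be useful. The following is only a sketch and not part of the library; the helper name and extension list are assumptions based on the requirements above, and it does not verify pixel dimensions.

```javascript
const fs = require("fs");
const path = require("path");

// Hypothetical helper: validates the documented limits before uploading a local file.
const SUPPORTED_EXTENSIONS = [".jpg", ".jpeg", ".png", ".gif", ".bmp", ".webp", ".ico", ".tiff", ".tif", ".mpo"];
const MAX_IMAGE_BYTES = 20 * 1024 * 1024; // file must be less than 20 MB

function checkLocalImage(filePath) {
  const extension = path.extname(filePath).toLowerCase();
  if (!SUPPORTED_EXTENSIONS.includes(extension)) {
    throw new Error(`Unsupported image format: ${extension}`);
  }
  const { size } = fs.statSync(filePath);
  if (size >= MAX_IMAGE_BYTES) {
    throw new Error(`Image is ${size} bytes; it must be smaller than 20 MB.`);
  }
}

checkLocalImage("./path/to/your/image.jpg");
```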
### ImageAnalysisClient
The `ImageAnalysisClient` is the primary interface for developers interacting with the Image Analysis service. It serves as the gateway from which all interaction with the library will occur.
## Examples
### Authenticate the client
Here's an example of how to create an `ImageAnalysisClient` instance using key-based authentication.
```javascript Snippet:ImageAnalysisAuthKey
const createClient = require("@azure-rest/ai-vision-image-analysis").default;
const { AzureKeyCredential } = require("@azure/core-auth");

const endpoint = "<your_endpoint>";
const key = "<your_key>";
const credential = new AzureKeyCredential(key);
const client = createClient(endpoint, credential);
```
### Analyze an image from URL
The following example demonstrates how to analyze an image using the Image Analysis client library for JavaScript.
```javascript Snippet:ImageAnalysisFromUrl
const imageUrl = "https://example.com/image.jpg";
const features = ["Caption", "DenseCaptions", "Objects", "People", "Read", "SmartCrops", "Tags"];
async function analyzeImageFromUrl() {
const result = await client.path("/imageanalysis:analyze").post({
body: {
url: imageUrl,
},
queryParameters: {
features: features,
"smartCrops-aspect-ratios": [0.9, 1.33],
},
contentType: "application/json",
});
console.log("Image analysis result:", result.body);
}
analyzeImageFromUrl();
```
### Analyze an image from a local file
In this example, we will analyze an image from a local file using the Image Analysis client library for JavaScript.
```javascript Snippet:ImageAnalysisFromLocalFile
const fs = require("fs");
const imagePath = "./path/to/your/image.jpg";
const features = ["Caption", "DenseCaptions", "Objects", "People", "Read", "SmartCrops", "Tags"];
async function analyzeImageFromFile() {
const imageBuffer = fs.readFileSync(imagePath);
const result = await client.path("/imageanalysis:analyze").post({
body: imageBuffer,
queryParameters: {
features: features,
"smartCrops-aspect-ratios": [0.9, 1.33],
},
contentType: "application/octet-stream",
});
console.log("Image analysis result:", result.body);
}
analyzeImageFromFile();
```
### Extract text from an image URL
This example demonstrates how to extract printed or handwritten text from the image file [sample.jpg](https://aka.ms/azai/vision/image-analysis-sample.jpg) using the `ImageAnalysisClient`. The call returns an `ImageAnalysisResultOutput` object. Its `readResult` property includes a list of text lines and a bounding polygon surrounding each text line. For each line, it also returns the list of words in the text line and a bounding polygon surrounding each word.
```typescript Snippet:readmeText
import createImageAnalysisClient, { ImageAnalysisClient, ImageAnalysisResultOutput } from '@azure-rest/ai-vision-image-analysis';
import { AzureKeyCredential } from '@azure/core-auth';

const endpoint: string = '<your_endpoint>';
const credential = new AzureKeyCredential('<your_key>');
const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
const features: string[] = [
'Read'
];
const imageUrl: string = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: features },
contentType: 'application/json'
}).then(result => {
const iaResult: ImageAnalysisResultOutput = result.body as ImageAnalysisResultOutput;
// Process the response
if (iaResult.readResult && iaResult.readResult.blocks.length > 0) {
iaResult.readResult.blocks.forEach(block => {
console.log(`Detected text block: ${JSON.stringify(block)}`);
});
} else {
console.log('No text blocks detected.');
}
});
```
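The examples above assume the call succeeds. When the service returns an error, the response body is an `ErrorResponse`; the package's `isUnexpected` helper (used in the TypeScript samples) narrows the result so you can surface that error. Below is a minimal sketch, assuming the `client` and `imageUrl` from the previous examples.

```javascript
const { isUnexpected } = require("@azure-rest/ai-vision-image-analysis");

async function analyzeWithErrorHandling() {
  const result = await client.path("/imageanalysis:analyze").post({
    body: { url: imageUrl },
    queryParameters: { features: ["Read"] },
    contentType: "application/json",
  });

  if (isUnexpected(result)) {
    // result.body is an ErrorResponse; result.body.error carries the service error details.
    throw result.body.error;
  }

  console.log("Read result:", JSON.stringify(result.body.readResult));
}

analyzeWithErrorHandling();
```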
## Troubleshooting
### Logging
Enabling logging may help uncover useful information about failures. In order to see a log of HTTP requests and responses, set the `AZURE_LOG_LEVEL` environment variable to `info`. Alternatively, logging can be enabled at runtime by calling `setLogLevel` in the `@azure/logger`:
```javascript
const { setLogLevel } = require("@azure/logger");
setLogLevel("info");
```
For more detailed instructions on how to enable logs, you can look at the [@azure/logger package docs](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/core/logger).
## Next steps
Please take a look at the [samples](https://github.com/Azure/azure-sdk-for-js/tree/rhurey/ia_dev/sdk/vision/ai-vision-image-analysis-rest/samples) directory for detailed examples that demonstrate how to use the client libraries.
## Contributing
If you'd like to contribute to this library, please read the [contributing guide](https://github.com/Azure/azure-sdk-for-js/blob/main/CONTRIBUTING.md) to learn more about how to build and test the code.
## Related projects
- [Microsoft Azure SDK for JavaScript](https://github.com/Azure/azure-sdk-for-js)
[image_analysis_overview]: https://learn.microsoft.com/azure/ai-services/computer-vision/overview-image-analysis?tabs=4-0
[image_analysis_concepts]: https://learn.microsoft.com/azure/ai-services/computer-vision/concept-tag-images-40
[vision_studio]: https://portal.vision.cognitive.azure.com/gallery/imageanalysis

View file

@ -0,0 +1,31 @@
{
"$schema": "https://developer.microsoft.com/json-schemas/api-extractor/v7/api-extractor.schema.json",
"mainEntryPointFilePath": "./types/src/index.d.ts",
"docModel": {
"enabled": true
},
"apiReport": {
"enabled": true,
"reportFolder": "./review"
},
"dtsRollup": {
"enabled": true,
"untrimmedFilePath": "",
"publicTrimmedFilePath": "./types/ai-vision-image-analysis.d.ts"
},
"messages": {
"tsdocMessageReporting": {
"default": {
"logLevel": "none"
}
},
"extractorMessageReporting": {
"ae-missing-release-tag": {
"logLevel": "none"
},
"ae-unresolved-link": {
"logLevel": "none"
}
}
}
}

View file

@ -0,0 +1,6 @@
{
"AssetsRepo": "Azure/azure-sdk-assets",
"AssetsRepoPrefixPath": "js",
"TagPrefix": "js/vision/ai-vision-image-analysis-rest",
"Tag": "js/vision/ai-vision-image-analysis-rest_19acc08c63"
}

View file

@ -0,0 +1,135 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
// https://github.com/karma-runner/karma-chrome-launcher
process.env.CHROME_BIN = require("puppeteer").executablePath();
require("dotenv").config();
const { relativeRecordingsPath } = require("@azure-tools/test-recorder");
process.env.RECORDINGS_RELATIVE_PATH = relativeRecordingsPath();
module.exports = function (config) {
config.set({
// base path that will be used to resolve all patterns (eg. files, exclude)
basePath: "./",
// frameworks to use
// available frameworks: https://npmjs.org/browse/keyword/karma-adapter
frameworks: ["source-map-support", "mocha"],
plugins: [
"karma-mocha",
"karma-mocha-reporter",
"karma-chrome-launcher",
"karma-firefox-launcher",
"karma-env-preprocessor",
"karma-coverage",
"karma-sourcemap-loader",
"karma-junit-reporter",
"karma-source-map-support",
],
// list of files / patterns to load in the browser
files: [
"dist-test/index.browser.js",
{
pattern: "dist-test/index.browser.js.map",
type: "html",
included: false,
served: true,
},
],
// list of files / patterns to exclude
exclude: [],
// preprocess matching files before serving them to the browser
// available preprocessors: https://npmjs.org/browse/keyword/karma-preprocessor
preprocessors: {
"**/*.js": ["sourcemap", "env"],
// IMPORTANT: COMMENT following line if you want to debug in your browsers!!
// Preprocess source file to calculate code coverage, however this will make source file unreadable
// "dist-test/index.js": ["coverage"]
},
envPreprocessor: [
"VISION_KEY",
"VISION_ENDPOINT",
"TEST_MODE",
"ENDPOINT",
"AZURE_CLIENT_SECRET",
"AZURE_CLIENT_ID",
"AZURE_TENANT_ID",
"SUBSCRIPTION_ID",
"RECORDINGS_RELATIVE_PATH",
],
// test results reporter to use
// possible values: 'dots', 'progress'
// available reporters: https://npmjs.org/browse/keyword/karma-reporter
reporters: ["mocha", "coverage", "junit"],
coverageReporter: {
// specify a common output directory
dir: "coverage-browser/",
reporters: [
{ type: "json", subdir: ".", file: "coverage.json" },
{ type: "lcovonly", subdir: ".", file: "lcov.info" },
{ type: "html", subdir: "html" },
{ type: "cobertura", subdir: ".", file: "cobertura-coverage.xml" },
],
},
junitReporter: {
outputDir: "", // results will be saved as $outputDir/$browserName.xml
outputFile: "test-results.browser.xml", // if included, results will be saved as $outputDir/$browserName/$outputFile
suite: "", // suite will become the package name attribute in xml testsuite element
useBrowserName: false, // add browser name to report and classes names
nameFormatter: undefined, // function (browser, result) to customize the name attribute in xml testcase element
classNameFormatter: undefined, // function (browser, result) to customize the classname attribute in xml testcase element
properties: {}, // key value pair of properties to add to the <properties> section of the report
},
// web server port
port: 9876,
// enable / disable colors in the output (reporters and logs)
colors: true,
// level of logging
// possible values: config.LOG_DISABLE || config.LOG_ERROR || config.LOG_WARN || config.LOG_INFO || config.LOG_DEBUG
logLevel: config.LOG_INFO,
// enable / disable watching file and executing tests whenever any file changes
autoWatch: false,
// --no-sandbox allows our tests to run in Linux without having to change the system.
// --disable-web-security allows us to authenticate from the browser without having to write tests using interactive auth, which would be far more complex.
browsers: ["ChromeHeadlessNoSandbox"],
customLaunchers: {
ChromeHeadlessNoSandbox: {
base: "ChromeHeadless",
flags: ["--no-sandbox", "--disable-web-security"],
},
},
// Continuous Integration mode
// if true, Karma captures browsers, runs the tests and exits
singleRun: false,
// Concurrency level
// how many browser should be started simultaneous
concurrency: 1,
browserNoActivityTimeout: 60000000,
browserDisconnectTimeout: 10000,
browserDisconnectTolerance: 3,
client: {
mocha: {
// change Karma's debug.html to the mocha web reporter
reporter: "html",
timeout: "600000",
},
},
});
};


View file

@ -0,0 +1,119 @@
{
"name": "@azure-rest/ai-vision-image-analysis",
"sdk-type": "client",
"author": "Microsoft Corporation",
"version": "1.0.0-beta.1",
"description": "undefined",
"keywords": [
"node",
"azure",
"cloud",
"typescript",
"browser",
"isomorphic"
],
"license": "MIT",
"main": "dist/index.js",
"module": "./dist-esm/src/index.js",
"types": "./types/ai-vision-image-analysis.d.ts",
"repository": "github:Azure/azure-sdk-for-js",
"homepage": "https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/vision/ai-vision-image-analysis-rest/README.md",
"bugs": {
"url": "https://github.com/Azure/azure-sdk-for-js/issues"
},
"files": [
"dist/",
"dist-esm/src/",
"types/ai-vision-image-analysis.d.ts",
"README.md",
"LICENSE",
"CHANGELOG.md",
"review/*"
],
"engines": {
"node": ">=18.0.0"
},
"scripts": {
"audit": "node ../../../common/scripts/rush-audit.js && rimraf node_modules package-lock.json && npm i --package-lock-only 2>&1 && npm audit",
"build:browser": "tsc -p . && cross-env ONLY_BROWSER=true rollup -c 2>&1",
"build:node": "tsc -p . && cross-env ONLY_NODE=true rollup -c 2>&1",
"build:samples": "echo skipped.",
"build:test": "tsc -p . && dev-tool run bundle",
"build:debug": "tsc -p . && dev-tool run bundle && api-extractor run --local",
"check-format": "prettier --list-different --config ../../../.prettierrc.json --ignore-path ../../../.prettierignore \"src/**/*.ts\" \"*.{js,json}\" \"test/**/*.ts\"",
"clean": "rimraf --glob dist dist-browser dist-esm test-dist temp types *.tgz *.log",
"execute:samples": "echo skipped",
"extract-api": "rimraf review && mkdirp ./review && api-extractor run --local",
"format": "prettier --write --config ../../../.prettierrc.json --ignore-path ../../../.prettierignore \"src/**/*.ts\" \"*.{js,json}\" \"test/**/*.ts\"",
"generate:client": "echo skipped",
"integration-test:browser": "dev-tool run test:browser",
"integration-test:node": "dev-tool run test:node-js-input -- --timeout 5000000 'dist-esm/test/**/*.spec.js'",
"integration-test": "npm run integration-test:node && npm run integration-test:browser",
"lint:fix": "eslint package.json api-extractor.json src test --ext .ts --fix --fix-type [problem,suggestion]",
"lint": "eslint package.json api-extractor.json src test --ext .ts",
"pack": "npm pack 2>&1",
"test:browser": "npm run clean && npm run build:test && npm run unit-test:browser",
"test:node": "npm run clean && npm run build:test && npm run unit-test:node",
"test": "npm run clean && npm run build:test && npm run unit-test",
"unit-test": "npm run unit-test:node && npm run unit-test:browser",
"unit-test:node": "dev-tool run test:node-ts-input -- --timeout 1200000 --exclude 'test/**/browser/*.spec.ts' 'test/**/*.spec.ts'",
"unit-test:browser": "dev-tool run test:browser",
"build": "npm run clean && tsc -p . && dev-tool run bundle && mkdirp ./review && api-extractor run --local"
},
"sideEffects": false,
"autoPublish": false,
"dependencies": {
"@azure/core-auth": "^1.3.0",
"@azure-rest/core-client": "^1.1.6",
"@azure/core-rest-pipeline": "^1.12.0",
"@azure/logger": "^1.0.0",
"tslib": "^2.2.0"
},
"devDependencies": {
"@microsoft/api-extractor": "~7.39.0",
"autorest": "latest",
"@types/node": "~20.10.3",
"dotenv": "^16.0.0",
"eslint": "^8.0.0",
"mkdirp": "^2.1.2",
"prettier": "^2.5.1",
"rimraf": "^5.0.0",
"source-map-support": "^0.5.9",
"typescript": "~5.2.0",
"@azure/dev-tool": "^1.0.0",
"@azure/eslint-plugin-azure-sdk": "^3.0.0",
"@azure-tools/test-credential": "~1.0.2",
"@azure/identity": "^3.3.0",
"@azure-tools/test-recorder": "^3.0.0",
"mocha": "^10.0.0",
"esm": "^3.2.18",
"@types/mocha": "^10.0.0",
"mocha-junit-reporter": "^1.18.0",
"cross-env": "^7.0.2",
"@types/chai": "^4.2.8",
"chai": "^4.2.0",
"karma-chrome-launcher": "^3.0.0",
"karma-coverage": "^2.0.0",
"karma-env-preprocessor": "^0.1.1",
"karma-firefox-launcher": "^2.1.2",
"karma-junit-reporter": "^2.0.1",
"karma-mocha-reporter": "^2.2.5",
"karma-mocha": "^2.0.1",
"karma-source-map-support": "~1.4.0",
"karma-sourcemap-loader": "^0.4.0",
"karma": "^6.2.0",
"c8": "^8.0.0",
"ts-node": "^10.0.0"
},
"//metadata": {
"constantPaths": [
{
"path": "src/imageAnalysisClient.ts",
"prefix": "userAgentInfo"
}
]
},
"browser": {
"./dist-esm/test/public/utils/env.js": "./dist-esm/test/public/utils/env.browser.js"
}
}

View file

@ -0,0 +1,274 @@
## API Report File for "@azure-rest/ai-vision-image-analysis"
> Do not edit this file. It is a report generated by [API Extractor](https://api-extractor.com/).
```ts
/// <reference types="node" />
import { Client } from '@azure-rest/core-client';
import { ClientOptions } from '@azure-rest/core-client';
import { ErrorResponse } from '@azure-rest/core-client';
import { HttpResponse } from '@azure-rest/core-client';
import { KeyCredential } from '@azure/core-auth';
import { RawHttpHeaders } from '@azure/core-rest-pipeline';
import { RequestParameters } from '@azure-rest/core-client';
import { StreamableMethod } from '@azure-rest/core-client';
// @public (undocumented)
export interface AnalyzeFromBuffer {
post(options: AnalyzeFromBufferParameters): StreamableMethod<AnalyzeFromBuffer200Response | AnalyzeFromBufferDefaultResponse>;
post(options: AnalyzeFromUrlParameters): StreamableMethod<AnalyzeFromUrl200Response | AnalyzeFromUrlDefaultResponse>;
}
// @public
export interface AnalyzeFromBuffer200Response extends HttpResponse {
// (undocumented)
body: ImageAnalysisResultOutput;
// (undocumented)
status: "200";
}
// @public (undocumented)
export interface AnalyzeFromBufferBodyParam {
body: string | Uint8Array | ReadableStream<Uint8Array> | NodeJS.ReadableStream;
}
// @public (undocumented)
export interface AnalyzeFromBufferDefaultHeaders {
"x-ms-error-code"?: string;
}
// @public (undocumented)
export interface AnalyzeFromBufferDefaultResponse extends HttpResponse {
// (undocumented)
body: ErrorResponse;
// (undocumented)
headers: RawHttpHeaders & AnalyzeFromBufferDefaultHeaders;
// (undocumented)
status: string;
}
// @public (undocumented)
export interface AnalyzeFromBufferMediaTypesParam {
contentType: "application/octet-stream";
}
// @public (undocumented)
export type AnalyzeFromBufferParameters = AnalyzeFromBufferQueryParam & AnalyzeFromBufferMediaTypesParam & AnalyzeFromBufferBodyParam & RequestParameters;
// @public (undocumented)
export interface AnalyzeFromBufferQueryParam {
// (undocumented)
queryParameters: AnalyzeFromBufferQueryParamProperties;
}
// @public (undocumented)
export interface AnalyzeFromBufferQueryParamProperties {
"gender-neutral-caption"?: boolean;
"model-version"?: string;
"smartcrops-aspect-ratios"?: number[];
features: string[];
language?: string;
}
// @public
export interface AnalyzeFromUrl200Response extends HttpResponse {
// (undocumented)
body: ImageAnalysisResultOutput;
// (undocumented)
status: "200";
}
// @public (undocumented)
export interface AnalyzeFromUrlBodyParam {
body: ImageUrl;
}
// @public (undocumented)
export interface AnalyzeFromUrlDefaultHeaders {
"x-ms-error-code"?: string;
}
// @public (undocumented)
export interface AnalyzeFromUrlDefaultResponse extends HttpResponse {
// (undocumented)
body: ErrorResponse;
// (undocumented)
headers: RawHttpHeaders & AnalyzeFromUrlDefaultHeaders;
// (undocumented)
status: string;
}
// @public (undocumented)
export interface AnalyzeFromUrlMediaTypesParam {
contentType: "application/json";
}
// @public (undocumented)
export type AnalyzeFromUrlParameters = AnalyzeFromUrlQueryParam & AnalyzeFromUrlMediaTypesParam & AnalyzeFromUrlBodyParam & RequestParameters;
// @public (undocumented)
export interface AnalyzeFromUrlQueryParam {
// (undocumented)
queryParameters: AnalyzeFromUrlQueryParamProperties;
}
// @public (undocumented)
export interface AnalyzeFromUrlQueryParamProperties {
"gender-neutral-caption"?: boolean;
"model-version"?: string;
"smartcrops-aspect-ratios"?: number[];
features: string[];
language?: string;
}
// @public
export interface CaptionResultOutput {
confidence: number;
text: string;
}
// @public
function createClient(endpoint: string, credentials: KeyCredential, options?: ClientOptions): ImageAnalysisClient;
export default createClient;
// @public
export interface CropRegionOutput {
aspectRatio: number;
boundingBox: ImageBoundingBoxOutput;
}
// @public
export interface DenseCaptionOutput {
boundingBox: ImageBoundingBoxOutput;
confidence: number;
text: string;
}
// @public
export interface DenseCaptionsResultOutput {
values: Array<DenseCaptionOutput>;
}
// @public
export interface DetectedObjectOutput {
boundingBox: ImageBoundingBoxOutput;
tags: Array<DetectedTagOutput>;
}
// @public
export interface DetectedPersonOutput {
readonly boundingBox: ImageBoundingBoxOutput;
readonly confidence: number;
}
// @public
export interface DetectedTagOutput {
confidence: number;
name: string;
}
// @public
export interface DetectedTextBlockOutput {
lines: Array<DetectedTextLineOutput>;
}
// @public
export interface DetectedTextLineOutput {
boundingPolygon: Array<ImagePointOutput>;
text: string;
words: Array<DetectedTextWordOutput>;
}
// @public
export interface DetectedTextWordOutput {
boundingPolygon: Array<ImagePointOutput>;
confidence: number;
text: string;
}
// @public (undocumented)
export type ImageAnalysisClient = Client & {
path: Routes;
};
// @public
export interface ImageAnalysisResultOutput {
captionResult?: CaptionResultOutput;
denseCaptionsResult?: DenseCaptionsResultOutput;
metadata: ImageMetadataOutput;
modelVersion: string;
objectsResult?: ObjectsResultOutput;
peopleResult?: PeopleResultOutput;
readResult?: ReadResultOutput;
smartCropsResult?: SmartCropsResultOutput;
tagsResult?: TagsResultOutput;
}
// @public
export interface ImageBoundingBoxOutput {
h: number;
w: number;
x: number;
y: number;
}
// @public
export interface ImageMetadataOutput {
height: number;
width: number;
}
// @public
export interface ImagePointOutput {
x: number;
y: number;
}
// @public
export interface ImageUrl {
url: string;
}
// @public
export interface ImageUrlOutput {
url: string;
}
// @public (undocumented)
export function isUnexpected(response: AnalyzeFromBuffer200Response | AnalyzeFromUrl200Response | AnalyzeFromBufferDefaultResponse): response is AnalyzeFromBufferDefaultResponse;
// @public
export interface ObjectsResultOutput {
values: Array<DetectedObjectOutput>;
}
// @public
export interface PeopleResultOutput {
values: Array<DetectedPersonOutput>;
}
// @public
export interface ReadResultOutput {
blocks: Array<DetectedTextBlockOutput>;
}
// @public (undocumented)
export interface Routes {
(path: "/imageanalysis:analyze"): AnalyzeFromBuffer;
}
// @public
export interface SmartCropsResultOutput {
values: Array<CropRegionOutput>;
}
// @public
export interface TagsResultOutput {
values: Array<DetectedTagOutput>;
}
// (No @packageDocumentation comment for this package)
```

View file

@ -0,0 +1,65 @@
# Azure AI Vision Image Analysis client library samples for JavaScript
These sample programs show how to use the JavaScript client libraries for Azure AI Vision Image Analysis in some common scenarios.
| **File Name** | **Description** |
| --------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
| [analyzeImageFromLocalFile.js][analyzeImageFromLocalFile] | Analyze an image from a local file using the Azure AI Vision Image Analysis service. |
| [analyzeImageFromUrl.js][analyzeImageFromUrl] | Analyze an image from a URL using the Azure AI Vision Image Analysis service. |
| [caption.js][caption] | Generate a human-readable phrase that describes the contents of an image. |
| [denseCaptions.js][denseCaptions] | Generate detailed descriptions of up to 10 regions of the image. |
| [objects.js][objects] | Detect objects in an image and return their bounding box coordinates. |
| [read.js][read] | Extract printed or handwritten text from images. |
| [tags.js][tags] | Return content tags for recognizable objects, living beings, scenery, and actions in an image. |
## Prerequisites
The sample programs are compatible with [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule).
You need [an Azure subscription][freesub] and the following Azure resources to run these sample programs:
- [Azure Computer Vision][createinstance_azureaivision]
Samples retrieve credentials to access the service endpoint from environment variables. Alternatively, edit the source code to include the appropriate credentials. See each individual sample for details on which environment variables/credentials it requires to function.
Adapting the samples to run in the browser may require some additional consideration. For details, please see the [package README][package].
## Setup
To run the samples using the published version of the package:
1. Install the dependencies using `npm`:
```bash
npm install
```
2. Edit the file `sample.env`, adding the correct credentials to access the Azure service and run the samples. Then rename the file from `sample.env` to just `.env`. The sample programs will read this file automatically.
3. Run whichever samples you like (note that some samples may require additional setup, see the table above):
```bash
node analyzeImageFromLocalFile.js
```
Alternatively, run a single sample with the correct environment variables set (setting up the `.env` file is not required if you do this), for example (cross-platform):
```bash
npx cross-env VISION_ENDPOINT="<Computer Vision endpoint>" node analyzeImageFromLocalFile.js
```
## Next Steps
Take a look at our [API Documentation]<!--TODO: publish refs [apiref]--> for more information about the APIs that are available in the clients.
[analyzeImageFromLocalFile]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/javascript/analyzeImageFromLocalFile.js
[analyzeImageFromUrl]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/javascript/analyzeImageFromUrl.js
[caption]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/javascript/caption.js
[denseCaptions]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/javascript/denseCaptions.js
[objects]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/javascript/objects.js
[read]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/javascript/read.js
[tags]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/javascript/tags.js
[apiref]: https://docs.microsoft.com/javascript/api/@azure-rest/ai-vision
[freesub]: https://azure.microsoft.com/free/
[createinstance_azureaivision]: https://portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/Microsoft.CognitiveServicesComputerVision
[package]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/vision/ai-vision-image-analysis-rest/README.md

View file

@ -0,0 +1,69 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
const fs = require('fs');
const { ImageAnalysisClient } = require('@azure-rest/ai-vision-image-analysis');
const createClient = require('@azure-rest/ai-vision-image-analysis').default;
const { AzureKeyCredential } = require('@azure/core-auth');
// Load the .env file if it exists
require("dotenv").config();
const endpoint = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client = createClient (endpoint, credential);
const feature = [
'Caption',
'DenseCaptions',
'Objects',
'People',
'Read',
'SmartCrops',
'Tags'
];
const imagePath = '../sample.jpg';
async function analyzeImageFromFile() {
const imageBuffer = fs.readFileSync(imagePath);
const result = await client.path('/imageanalysis:analyze').post({
body: imageBuffer,
queryParameters: {
features: feature,
'smartCrops-aspect-ratios': [0.9, 1.33]
},
contentType: 'application/octet-stream'
});
const iaResult = result.body;
// Log the response using more of the API's object model
console.log(`Model Version: ${iaResult.modelVersion}`);
console.log(`Image Metadata: ${JSON.stringify(iaResult.metadata)}`);
if (iaResult.captionResult) {
console.log(`Caption: ${iaResult.captionResult.text} (confidence: ${iaResult.captionResult.confidence})`);
}
if (iaResult.denseCaptionsResult) {
iaResult.denseCaptionsResult.values.forEach(denseCaption => console.log(`Dense Caption: ${JSON.stringify(denseCaption)}`));
}
if (iaResult.objectsResult) {
iaResult.objectsResult.values.forEach(object => console.log(`Object: ${JSON.stringify(object)}`));
}
if (iaResult.peopleResult) {
iaResult.peopleResult.values.forEach(person => console.log(`Person: ${JSON.stringify(person)}`));
}
if (iaResult.readResult) {
iaResult.readResult.blocks.forEach(block => console.log(`Text Block: ${JSON.stringify(block)}`));
}
if (iaResult.smartCropsResult) {
iaResult.smartCropsResult.values.forEach(smartCrop => console.log(`Smart Crop: ${JSON.stringify(smartCrop)}`));
}
if (iaResult.tagsResult) {
iaResult.tagsResult.values.forEach(tag => console.log(`Tag: ${JSON.stringify(tag)}`));
}
}
analyzeImageFromFile();

View file

@ -0,0 +1,68 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
const { ImageAnalysisClient } = require('@azure-rest/ai-vision-image-analysis');
const createClient = require('@azure-rest/ai-vision-image-analysis').default;
const { AzureKeyCredential } = require('@azure/core-auth');
// Load the .env file if it exists
require("dotenv").config();
const endpoint = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client = createClient(endpoint, credential);
const features = [
'Caption',
'DenseCaptions',
'Objects',
'People',
'Read',
'SmartCrops',
'Tags'
];
const imageUrl = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImageFromUrl() {
const result = await client.path('/imageanalysis:analyze').post({
body: {
url: imageUrl
},
queryParameters: {
features: features,
'smartCrops-aspect-ratios': [0.9, 1.33]
},
contentType: 'application/json'
});
const iaResult = result.body;
console.log(`Model Version: ${iaResult.modelVersion}`);
console.log(`Image Metadata: ${JSON.stringify(iaResult.metadata)}`);
if (iaResult.captionResult) {
console.log(`Caption: ${iaResult.captionResult.text} (confidence: ${iaResult.captionResult.confidence})`);
}
if (iaResult.denseCaptionsResult) {
iaResult.denseCaptionsResult.values.forEach(denseCaption => console.log(`Dense Caption: ${JSON.stringify(denseCaption)}`));
}
if (iaResult.objectsResult) {
iaResult.objectsResult.values.forEach(object => console.log(`Object: ${JSON.stringify(object)}`));
}
if (iaResult.peopleResult) {
iaResult.peopleResult.values.forEach(person => console.log(`Person: ${JSON.stringify(person)}`));
}
if (iaResult.readResult) {
iaResult.readResult.blocks.forEach(block => console.log(`Text Block: ${JSON.stringify(block)}`));
}
if (iaResult.smartCropsResult) {
iaResult.smartCropsResult.values.forEach(smartCrop => console.log(`Smart Crop: ${JSON.stringify(smartCrop)}`));
}
if (iaResult.tagsResult) {
iaResult.tagsResult.values.forEach(tag => console.log(`Tag: ${JSON.stringify(tag)}`));
}
}
analyzeImageFromUrl();

View file

@ -0,0 +1,41 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
const { ImageAnalysisClient } = require('@azure-rest/ai-vision-image-analysis');
const createClient = require('@azure-rest/ai-vision-image-analysis').default;
const { AzureKeyCredential } = require('@azure/core-auth');
// Load the .env file if it exists
require("dotenv").config();
const endpoint = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client = createClient(endpoint, credential);
const feature = [
'Caption'
];
const imageUrl = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage() {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: feature},
contentType: 'application/json'
});
const iaResult = result.body;
// Process the response
if (iaResult.captionResult.text.length > 0) {
console.log(`This may be ${iaResult.captionResult.text}`);
} else {
console.log('No caption detected.');
}
}
analyzeImage();

View file

@ -0,0 +1,43 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
const { ImageAnalysisClient } = require('@azure-rest/ai-vision-image-analysis');
const createClient = require('@azure-rest/ai-vision-image-analysis').default;
const { AzureKeyCredential } = require('@azure/core-auth');
// Load the .env file if it exists
require("dotenv").config();
const endpoint = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client = createClient (endpoint, credential);
const feature = [
'DenseCaptions'
];
const imageUrl = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage() {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: feature},
contentType: 'application/json'
});
const iaResult = result.body;
// Process the response
if (iaResult.denseCaptionsResult.values.length > 0) {
iaResult.denseCaptionsResult.values.forEach(caption => {
console.log(`Caption: ${caption.text} with confidence of ${caption.confidence}`);
});
} else {
console.log('No dense captions detected.');
}
}
analyzeImage();

View file

@ -0,0 +1,43 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
const { ImageAnalysisClient } = require('@azure-rest/ai-vision-image-analysis');
const createClient = require('@azure-rest/ai-vision-image-analysis').default;
const { AzureKeyCredential } = require('@azure/core-auth');
// Load the .env file if it exists
require("dotenv").config();
const endpoint = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client = createClient (endpoint, credential);
const feature = [
'Objects'
];
const imageUrl = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage() {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: feature},
contentType: 'application/json'
});
const iaResult = result.body;
// Process the response
if (iaResult.objectsResult.values.length > 0) {
iaResult.objectsResult.values.forEach(object => {
console.log(`Detected object: ${object.tags[0].name} with confidence of ${object.tags[0].confidence}`);
});
} else {
console.log('No objects detected.');
}
}
analyzeImage();

View file

@ -0,0 +1,17 @@
{
"name": "image-analysis-samples",
"version": "1.0.0",
"description": "Samples for the Azure Image Analysis SDK",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "MIT",
"dependencies": {
"@azure/core-auth": "^1.5.0",
"@azure-rest/ai-vision-image-analysis": "file:../../azure-imageAnalysis-1.0.0-beta.1.tgz",
"cross-env": "^7.0.3",
"dotenv": "^16.3.1"
}
}

View file

@ -0,0 +1,43 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
const { ImageAnalysisClient } = require('@azure-rest/ai-vision-image-analysis');
const createClient = require('@azure-rest/ai-vision-image-analysis').default;
const { AzureKeyCredential } = require('@azure/core-auth');
// Load the .env file if it exists
require("dotenv").config();
const endpoint = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client = createClient (endpoint, credential);
const feature = [
'Read'
];
const imageUrl = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage() {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: feature},
contentType: 'application/json'
});
const iaResult = result.body;
// Process the response
if (iaResult.readResult.blocks.length > 0) {
iaResult.readResult.blocks.forEach(block => {
console.log(`Detected text block: ${JSON.stringify(block)}`);
});
} else {
console.log('No text blocks detected.');
}
}
analyzeImage();

View file

@ -0,0 +1,2 @@
VISION_KEY=<your-subscription-key>
VISION_ENDPOINT=<your-endpoint-url>

View file

@ -0,0 +1,43 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
const { ImageAnalysisClient } = require('@azure-rest/ai-vision-image-analysis');
const createClient = require('@azure-rest/ai-vision-image-analysis').default;
const { AzureKeyCredential } = require('@azure/core-auth');
// Load the .env file if it exists
require("dotenv").config();
const endpoint = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client = createClient (endpoint, credential);
const feature = [
'Tags'
];
const imageUrl = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage() {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: feature},
contentType: 'application/json'
});
const iaResult = result.body;
// Process the response
if (iaResult.tagsResult.values.length > 0) {
iaResult.tagsResult.values.forEach(tag => {
console.log(`Tag: ${tag.name} with confidence of ${tag.confidence}`);
});
} else {
console.log('No tags detected.');
}
}
analyzeImage();

Binary data
sdk/vision/ai-vision-image-analysis-rest/samples/sample.jpg (new file, 91 KiB)
Binary file not shown.

View file

@ -0,0 +1,2 @@
*.js
*.js.map

View file

@ -0,0 +1,72 @@
# Azure AI Vision Image Analysis client library samples for TypeScript
These sample programs show how to use the TypeScript client libraries for Azure AI Vision Image Analysis in some common scenarios.
| **File Name** | **Description** |
| --------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
| [analyzeImageFromLocalFile.ts][analyzeImageFromLocalFile] | Analyze an image from a local file using the Azure AI Vision Image Analysis service. |
| [analyzeImageFromUrl.ts][analyzeImageFromUrl] | Analyze an image from a URL using the Azure AI Vision Image Analysis service. |
| [caption.ts][caption] | Generate a human-readable phrase that describes the contents of an image. |
| [denseCaptions.ts][denseCaptions] | Generate detailed descriptions of up to 10 regions of the image. |
| [objects.ts][objects] | Detect objects in an image and return their bounding box coordinates. |
| [read.ts][read] | Extract printed or handwritten text from images. |
| [tags.ts][tags] | Return content tags for recognizable objects, living beings, scenery, and actions in an image. |
## Prerequisites
The sample programs are compatible with [LTS versions of Node.js](https://github.com/nodejs/release#release-schedule).
You need [an Azure subscription][freesub] and the following Azure resources to run these sample programs:
- [Azure Computer Vision][createinstance_azureaivision]
Samples retrieve credentials to access the service endpoint from environment variables. Alternatively, edit the source code to include the appropriate credentials. See each individual sample for details on which environment variables/credentials it requires to function.
Adapting the samples to run in the browser may require some additional consideration. For details, please see the [package README][package].
## Setup
To run the samples using the published version of the package:
1. Install the dependencies using `npm`:
```bash
npm install
```
2. Transpile the TypeScript samples to JavaScript:
```bash
npm run build
```
3. Edit the file `sample.env`, adding the correct credentials to access the Azure service and run the samples. Then rename the file from `sample.env` to just `.env`. The sample programs will read this file automatically.
4. Run whichever samples you like (note that some samples may require additional setup, see the table above):
```bash
node analyzeImageFromLocalFile.js
```
Alternatively, run a single sample with the correct environment variables set (setting up the `.env` file is not required if you do this), for example (cross-platform):
```bash
npx cross-env VISION_ENDPOINT="<Computer Vision endpoint>" VISION_KEY="<your vision key>" node analyzeImageFromLocalFile.js
```
## Next Steps
Take a look at our [API Documentation]<!--TODO: publish refs [apiref]--> for more information about the APIs that are available in the clients.
[analyzeImageFromLocalFile]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/typescript/analyzeImageFromLocalFile.ts
[analyzeImageFromUrl]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/typescript/analyzeImageFromUrl.ts
[caption]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/typescript/caption.ts
[denseCaptions]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/typescript/denseCaptions.ts
[objects]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/typescript/objects.ts
[read]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/typescript/read.ts
[tags]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/vision/ai-vision-image-analysis-rest/samples/typescript/tags.ts
[apiref]: https://docs.microsoft.com/javascript/api/@azure-rest/ai-vision
[freesub]: https://azure.microsoft.com/free/
[createinstance_azureaivision]: https://portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/Microsoft.CognitiveServicesComputerVision
[package]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/vision/ai-vision-image-analysis-rest/README.md

View file

@ -0,0 +1,79 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import * as fs from 'fs';
import createImageAnalysisClient, {
DenseCaptionOutput,
ImageAnalysisClient,
DetectedPersonOutput,
DetectedTextBlockOutput,
DetectedObjectOutput,
CropRegionOutput,
DetectedTagOutput,
isUnexpected
} from '@azure-rest/ai-vision-image-analysis';
import { AzureKeyCredential } from '@azure/core-auth';
// Load the .env file if it exists
import * as dotenv from "dotenv";
dotenv.config();
const endpoint: string = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key: string = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
const features: string[] = [
'Caption',
'DenseCaptions',
'Objects',
'People',
'Read',
'SmartCrops',
'Tags'
];
const imagePath: string = '../sample.jpg';
async function analyzeImageFromFile(): Promise<void> {
const imageBuffer: Buffer = fs.readFileSync(imagePath);
const result = await client.path('/imageanalysis:analyze').post({
body: imageBuffer,
queryParameters: {
features: features,
'smartCrops-aspect-ratios': [0.9, 1.33]
},
contentType: 'application/octet-stream'
});
if (isUnexpected(result)) {
throw result.body.error;
}
console.log(`Model Version: ${result.body.modelVersion}`);
console.log(`Image Metadata: ${JSON.stringify(result.body.metadata)}`);
if (result.body.captionResult) {
console.log(`Caption: ${result.body.captionResult.text} (confidence: ${result.body.captionResult.confidence})`);
}
if (result.body.denseCaptionsResult) {
result.body.denseCaptionsResult.values.forEach((denseCaption: DenseCaptionOutput) => console.log(`Dense Caption: ${JSON.stringify(denseCaption)}`));
}
if (result.body.objectsResult) {
result.body.objectsResult.values.forEach((object: DetectedObjectOutput) => console.log(`Object: ${JSON.stringify(object)}`));
}
if (result.body.peopleResult) {
result.body.peopleResult.values.forEach((person: DetectedPersonOutput) => console.log(`Person: ${JSON.stringify(person)}`));
}
if (result.body.readResult) {
result.body.readResult.blocks.forEach((block: DetectedTextBlockOutput) => console.log(`Text Block: ${JSON.stringify(block)}`));
}
if (result.body.smartCropsResult) {
result.body.smartCropsResult.values.forEach((smartCrop: CropRegionOutput) => console.log(`Smart Crop: ${JSON.stringify(smartCrop)}`));
}
if (result.body.tagsResult) {
result.body.tagsResult.values.forEach((tag: DetectedTagOutput) => console.log(`Tag: ${JSON.stringify(tag)}`));
}
}
analyzeImageFromFile();

View file

@ -0,0 +1,63 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import createImageAnalysisClient, {
DenseCaptionOutput,
ImageAnalysisClient,
DetectedPersonOutput,
DetectedTextBlockOutput,
DetectedObjectOutput,
CropRegionOutput,
DetectedTagOutput,
isUnexpected
} from '@azure-rest/ai-vision-image-analysis';
import { AzureKeyCredential } from '@azure/core-auth';
// Load the .env file if it exists
import * as dotenv from "dotenv";
dotenv.config();
const endpoint: string = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key: string = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
const features: string[] = [
'Caption',
'DenseCaptions',
'Objects',
'People',
'Read',
'SmartCrops',
'Tags'
];
const imageUrl: string = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImageFromUrl(): Promise<void> {
const result = await client.path('/imageanalysis:analyze').post({
body: {
url: imageUrl
},
queryParameters: {
features: features,
'smartCrops-aspect-ratios': [0.9, 1.33]
},
contentType: 'application/json'
});
if (isUnexpected(result)) {
throw result.body.error;
}
console.log(`Model Version: ${result.body.modelVersion}`);
console.log(`Image Metadata: ${JSON.stringify(result.body.metadata)}`);
if (result.body.captionResult) console.log(`Caption: ${result.body.captionResult.text} (confidence: ${result.body.captionResult.confidence})`);
if (result.body.denseCaptionsResult) result.body.denseCaptionsResult.values.forEach((denseCaption: DenseCaptionOutput) => console.log(`Dense Caption: ${JSON.stringify(denseCaption)}`));
if (result.body.objectsResult) result.body.objectsResult.values.forEach((object: DetectedObjectOutput) => console.log(`Object: ${JSON.stringify(object)}`));
if (result.body.peopleResult) result.body.peopleResult.values.forEach((person: DetectedPersonOutput) => console.log(`Person: ${JSON.stringify(person)}`));
if (result.body.readResult) result.body.readResult.blocks.forEach((block: DetectedTextBlockOutput) => console.log(`Text Block: ${JSON.stringify(block)}`));
if (result.body.smartCropsResult) result.body.smartCropsResult.values.forEach((smartCrop: CropRegionOutput) => console.log(`Smart Crop: ${JSON.stringify(smartCrop)}`));
if (result.body.tagsResult) result.body.tagsResult.values.forEach((tag: DetectedTagOutput) => console.log(`Tag: ${JSON.stringify(tag)}`));
}
analyzeImageFromUrl();

View file

@ -0,0 +1,42 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import createImageAnalysisClient, { ImageAnalysisClient, isUnexpected } from '@azure-rest/ai-vision-image-analysis';
import { AzureKeyCredential } from '@azure/core-auth';
// Load the .env file if it exists
import * as dotenv from "dotenv";
dotenv.config();
const endpoint: string = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key: string = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
const features: string[] = [
'Caption'
];
const imageUrl: string = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage(): Promise<void> {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: features },
contentType: 'application/json'
  });
if (isUnexpected(result)) {
throw result.body.error;
}
// Process the response
if (result.body.captionResult && result.body.captionResult.text.length > 0) {
console.log(`This may be ${result.body.captionResult.text}`);
} else {
console.log('No caption detected.');
}
}
analyzeImage();


@ -0,0 +1,44 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import createImageAnalysisClient, { ImageAnalysisClient, isUnexpected } from '@azure-rest/ai-vision-image-analysis';
import { AzureKeyCredential } from '@azure/core-auth';
// Load the .env file if it exists
import * as dotenv from "dotenv";
dotenv.config();
const endpoint: string = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key: string = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
const features: string[] = [
'DenseCaptions'
];
const imageUrl: string = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage(): Promise<void> {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: features },
contentType: 'application/json'
  });
if (isUnexpected(result)) {
throw result.body.error;
}
// Process the response
if (result.body.denseCaptionsResult && result.body.denseCaptionsResult.values.length > 0) {
result.body.denseCaptionsResult.values.forEach(caption => {
console.log(`Caption: ${caption.text} with confidence of ${caption.confidence}`);
});
} else {
console.log('No dense captions detected.');
}
}
analyzeImage();


@ -0,0 +1,44 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import createImageAnalysisClient, { ImageAnalysisClient, isUnexpected } from '@azure-rest/ai-vision-image-analysis';
import { AzureKeyCredential } from '@azure/core-auth';
// Load the .env file if it exists
import * as dotenv from "dotenv";
dotenv.config();
const endpoint: string = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key: string = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
const features: string[] = [
'Objects'
];
const imageUrl: string = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage(): Promise<void> {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: features },
contentType: 'application/json'
  });
if (isUnexpected(result)) {
throw result.body.error;
}
// Process the response
if (result.body.objectsResult && result.body.objectsResult.values.length > 0) {
result.body.objectsResult.values.forEach(object => {
console.log(`Detected object: ${object.tags[0].name} with confidence of ${object.tags[0].confidence}`);
});
} else {
console.log('No objects detected.');
}
}
analyzeImage();


@ -0,0 +1,20 @@
{
"name": "image-analysis-samples",
"version": "1.0.0",
"description": "Samples for the Azure Image Analysis SDK",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"build": "tsc"
},
"author": "",
"license": "MIT",
"dependencies": {
"@azure/cognitiveservices-computervision": "^7.0.0",
"@azure/core-auth": "^1.5.0",
"@azure-rest/ai-vision-image-analysis": "next",
"cross-env": "^7.0.3",
"dotenv": "^16.3.1",
"typescript": "^4.1.2"
}
}


@ -0,0 +1,44 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import createImageAnalysisClient, { ImageAnalysisClient, isUnexpected } from '@azure-rest/ai-vision-image-analysis';
import { AzureKeyCredential } from '@azure/core-auth';
// Load the .env file if it exists
import * as dotenv from "dotenv";
dotenv.config();
const endpoint: string = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key: string = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
const features: string[] = [
'Read'
];
const imageUrl: string = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage(): Promise<void> {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: features },
contentType: 'application/json'
  });
if (isUnexpected(result)) {
throw result.body.error;
}
// Process the response
if (result.body.readResult && result.body.readResult.blocks.length > 0) {
result.body.readResult.blocks.forEach(block => {
console.log(`Detected text block: ${JSON.stringify(block)}`);
});
} else {
console.log('No text blocks detected.');
}
}
analyzeImage();


@ -0,0 +1,2 @@
VISION_KEY=<your-key>
VISION_ENDPOINT=<your-endpoint-url>


@ -0,0 +1,44 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import createImageAnalysisClient, { ImageAnalysisClient, isUnexpected } from '@azure-rest/ai-vision-image-analysis';
import { AzureKeyCredential } from '@azure/core-auth';
// Load the .env file if it exists
import * as dotenv from "dotenv";
dotenv.config();
const endpoint: string = process.env['VISION_ENDPOINT'] || '<your_endpoint>';
const key: string = process.env['VISION_KEY'] || '<your_key>';
const credential = new AzureKeyCredential(key);
const client: ImageAnalysisClient = createImageAnalysisClient(endpoint, credential);
const features: string[] = [
'Tags'
];
const imageUrl: string = 'https://aka.ms/azai/vision/image-analysis-sample.jpg';
async function analyzeImage(): Promise<void> {
const result = await client.path('/imageanalysis:analyze').post({
body: { url: imageUrl },
queryParameters: { features: features },
contentType: 'application/json'
  });
if (isUnexpected(result)) {
throw result.body.error;
}
// Process the response
if (result.body.tagsResult && result.body.tagsResult.values.length > 0) {
result.body.tagsResult.values.forEach(tag => {
console.log(`Tag: ${tag.name} with confidence of ${tag.confidence}`);
});
} else {
console.log('No tags detected.');
}
}
analyzeImage();


@ -0,0 +1,8 @@
{
"compilerOptions": {
"target": "es6",
"module": "commonjs",
"strict": true,
"esModuleInterop": true
}
}


@ -0,0 +1,31 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import { AnalyzeFromBufferParameters, AnalyzeFromUrlParameters } from "./parameters";
import {
AnalyzeFromBuffer200Response,
AnalyzeFromBufferDefaultResponse,
AnalyzeFromUrl200Response,
AnalyzeFromUrlDefaultResponse,
} from "./responses";
import { Client, StreamableMethod } from "@azure-rest/core-client";
export interface AnalyzeFromBuffer {
/** Performs a single Image Analysis operation */
post(
options: AnalyzeFromBufferParameters
): StreamableMethod<AnalyzeFromBuffer200Response | AnalyzeFromBufferDefaultResponse>;
/** Performs a single Image Analysis operation */
post(
options: AnalyzeFromUrlParameters
): StreamableMethod<AnalyzeFromUrl200Response | AnalyzeFromUrlDefaultResponse>;
}
export interface Routes {
/** Resource for '/imageanalysis:analyze' has methods for the following verbs: post */
(path: "/imageanalysis:analyze"): AnalyzeFromBuffer;
}
export type ImageAnalysisClient = Client & {
path: Routes;
};
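The two `post` overloads above distinguish binary uploads (`AnalyzeFromBufferParameters`, sent as `application/octet-stream`) from URL-based requests (`AnalyzeFromUrlParameters`, sent as `application/json`). A minimal sketch of the binary path, assuming a local file named `sample.jpg` and the same `VISION_ENDPOINT`/`VISION_KEY` environment variables used by the samples:

// Sketch only: exercises the AnalyzeFromBuffer overload with a local file.
import { readFileSync } from "node:fs";
import createImageAnalysisClient, { isUnexpected } from "@azure-rest/ai-vision-image-analysis";
import { AzureKeyCredential } from "@azure/core-auth";

async function analyzeLocalImage(): Promise<void> {
  const client = createImageAnalysisClient(
    process.env["VISION_ENDPOINT"] || "<your_endpoint>",
    new AzureKeyCredential(process.env["VISION_KEY"] || "<your_key>")
  );
  // A binary body plus the octet-stream content type selects the buffer overload.
  const result = await client.path("/imageanalysis:analyze").post({
    body: readFileSync("sample.jpg"), // Buffer is accepted as a Uint8Array
    queryParameters: { features: ["Caption"] },
    contentType: "application/octet-stream",
  });
  if (isUnexpected(result)) {
    throw result.body.error;
  }
  console.log(result.body.captionResult?.text);
}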


@ -0,0 +1,44 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import { getClient, ClientOptions } from "@azure-rest/core-client";
import { logger } from "./logger";
import { KeyCredential } from "@azure/core-auth";
import { ImageAnalysisClient } from "./clientDefinitions";
/**
* Initialize a new instance of `ImageAnalysisClient`
* @param endpoint - Azure AI Computer Vision endpoint (protocol and hostname, for example:
* https://<resource-name>.cognitiveservices.azure.com).
 * @param credentials - the credential used to authenticate requests to the service
 * @param options - options used to configure the client
*/
export default function createClient(
endpoint: string,
credentials: KeyCredential,
options: ClientOptions = {}
): ImageAnalysisClient {
const baseUrl = options.baseUrl ?? `${endpoint}/computervision`;
options.apiVersion = options.apiVersion ?? "2023-10-01";
const userAgentInfo = `azsdk-js-imageAnalysis-rest/1.0.0-beta.1`;
const userAgentPrefix =
options.userAgentOptions && options.userAgentOptions.userAgentPrefix
? `${options.userAgentOptions.userAgentPrefix} ${userAgentInfo}`
: `${userAgentInfo}`;
options = {
...options,
userAgentOptions: {
userAgentPrefix,
},
loggingOptions: {
logger: options.loggingOptions?.logger ?? logger.info,
},
credentials: {
apiKeyHeaderName: options.credentials?.apiKeyHeaderName ?? "Ocp-Apim-Subscription-Key",
},
};
const client = getClient(baseUrl, credentials, options) as ImageAnalysisClient;
return client;
}
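Because `createClient` only fills in defaults when the caller has not set them (the `/computervision` base URL, the `2023-10-01` API version, the `Ocp-Apim-Subscription-Key` header name, and the user agent prefix), each can be overridden through the `ClientOptions` argument. A brief sketch, with a placeholder resource name and prefix:

import createImageAnalysisClient from "@azure-rest/ai-vision-image-analysis";
import { AzureKeyCredential } from "@azure/core-auth";

// Placeholder endpoint and key; shown only to illustrate the options pass-through.
const client = createImageAnalysisClient(
  "https://<my-resource>.cognitiveservices.azure.com",
  new AzureKeyCredential("<your_key>"),
  {
    apiVersion: "2023-10-01", // explicit, though this is already the default
    userAgentOptions: { userAgentPrefix: "my-app" }, // prepended to the SDK user agent string
  }
);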


@ -0,0 +1,14 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import ImageAnalysisClient from "./imageAnalysisClient";
export * from "./imageAnalysisClient";
export * from "./parameters";
export * from "./responses";
export * from "./clientDefinitions";
export * from "./isUnexpected";
export * from "./models";
export * from "./outputModels";
export default ImageAnalysisClient;


@ -0,0 +1,100 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import {
AnalyzeFromBuffer200Response,
AnalyzeFromUrl200Response,
AnalyzeFromBufferDefaultResponse,
} from "./responses";
const responseMap: Record<string, string[]> = {
"POST /imageanalysis:analyze": ["200"],
};
export function isUnexpected(
response:
| AnalyzeFromBuffer200Response
| AnalyzeFromUrl200Response
| AnalyzeFromBufferDefaultResponse
): response is AnalyzeFromBufferDefaultResponse;
export function isUnexpected(
response:
| AnalyzeFromBuffer200Response
| AnalyzeFromUrl200Response
| AnalyzeFromBufferDefaultResponse
): response is AnalyzeFromBufferDefaultResponse {
const lroOriginal = response.headers["x-ms-original-url"];
const url = new URL(lroOriginal ?? response.request.url);
const method = response.request.method;
let pathDetails = responseMap[`${method} ${url.pathname}`];
if (!pathDetails) {
pathDetails = getParametrizedPathSuccess(method, url.pathname);
}
return !pathDetails.includes(response.status);
}
function getParametrizedPathSuccess(method: string, path: string): string[] {
const pathParts = path.split("/");
// Traverse list to match the longest candidate
// matchedLen: the length of candidate path
// matchedValue: the matched status code array
let matchedLen = -1,
matchedValue: string[] = [];
// Iterate the responseMap to find a match
for (const [key, value] of Object.entries(responseMap)) {
// Extracting the path from the map key which is in format
// GET /path/foo
if (!key.startsWith(method)) {
continue;
}
const candidatePath = getPathFromMapKey(key);
// Get each part of the url path
const candidateParts = candidatePath.split("/");
// track if we have found a match to return the values found.
let found = true;
for (let i = candidateParts.length - 1, j = pathParts.length - 1; i >= 1 && j >= 1; i--, j--) {
if (candidateParts[i]?.startsWith("{") && candidateParts[i]?.indexOf("}") !== -1) {
const start = candidateParts[i]!.indexOf("}") + 1,
end = candidateParts[i]?.length;
// If the current part of the candidate is a "template" part
// Try to use the suffix of pattern to match the path
// {guid} ==> $
// {guid}:export ==> :export$
const isMatched = new RegExp(`${candidateParts[i]?.slice(start, end)}`).test(
pathParts[j] || ""
);
if (!isMatched) {
found = false;
break;
}
continue;
}
// If the candidate part is not a template and
// the parts don't match mark the candidate as not found
// to move on with the next candidate path.
if (candidateParts[i] !== pathParts[j]) {
found = false;
break;
}
}
// We finished evaluating the current candidate parts
// Update the matched value if and only if we found the longer pattern
if (found && candidatePath.length > matchedLen) {
matchedLen = candidatePath.length;
matchedValue = value;
}
}
return matchedValue;
}
function getPathFromMapKey(mapKey: string): string {
const pathStart = mapKey.indexOf("/");
return mapKey.slice(pathStart);
}


@ -0,0 +1,5 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import { createClientLogger } from "@azure/logger";
export const logger = createClientLogger("imageAnalysis");


@ -0,0 +1,8 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
/** An object holding the publicly reachable URL of an image to analyze. */
export interface ImageUrl {
/** Publicly reachable URL of an image to analyze. */
url: string;
}


@ -0,0 +1,211 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
/** Represents the outcome of an Image Analysis operation. */
export interface ImageAnalysisResultOutput {
/** The generated phrase that describes the content of the analyzed image. */
captionResult?: CaptionResultOutput;
/**
* The up to 10 generated phrases, the first describing the content of the whole image,
* and the others describing the content of different regions of the image.
*/
denseCaptionsResult?: DenseCaptionsResultOutput;
/** Metadata associated with the analyzed image. */
metadata: ImageMetadataOutput;
/** The cloud AI model used for the analysis */
modelVersion: string;
/** A list of detected physical objects in the analyzed image, and their location. */
objectsResult?: ObjectsResultOutput;
/** A list of detected people in the analyzed image, and their location. */
peopleResult?: PeopleResultOutput;
  /** The extracted printed and handwritten text in the analyzed image. Also known as OCR. */
readResult?: ReadResultOutput;
/**
   * A list of crop regions at the desired aspect ratios (if provided) that can be used as image thumbnails.
* These regions preserve as much content as possible from the analyzed image, with priority given to detected faces.
*/
smartCropsResult?: SmartCropsResultOutput;
/** A list of content tags in the analyzed image. */
tagsResult?: TagsResultOutput;
}
/** Represents a generated phrase that describes the content of the whole image. */
export interface CaptionResultOutput {
/**
* A score, in the range of 0 to 1 (inclusive), representing the confidence that this description is accurate.
   * Higher values indicate higher confidence.
*/
confidence: number;
/** The text of the caption. */
text: string;
}
/**
* Represents a list of up to 10 image captions for different regions of the image.
* The first caption always applies to the whole image.
*/
export interface DenseCaptionsResultOutput {
/** The list of image captions. */
values: Array<DenseCaptionOutput>;
}
/** Represents a generated phrase that describes the content of the whole image or a region in the image */
export interface DenseCaptionOutput {
/**
* A score, in the range of 0 to 1 (inclusive), representing the confidence that this description is accurate.
   * Higher values indicate higher confidence.
*/
confidence: number;
/** The text of the caption. */
text: string;
  /** The image region to which this caption applies. */
boundingBox: ImageBoundingBoxOutput;
}
/** A basic rectangle specifying a sub-region of the image. */
export interface ImageBoundingBoxOutput {
/** X-coordinate of the top left point of the area, in pixels. */
x: number;
/** Y-coordinate of the top left point of the area, in pixels. */
y: number;
/** Width of the area, in pixels. */
w: number;
/** Height of the area, in pixels. */
h: number;
}
/** Metadata associated with the analyzed image. */
export interface ImageMetadataOutput {
/** The height of the image in pixels. */
height: number;
/** The width of the image in pixels. */
width: number;
}
/** Represents a list of physical objects detected in an image and their locations. */
export interface ObjectsResultOutput {
  /** A list of physical objects detected in an image and their locations. */
values: Array<DetectedObjectOutput>;
}
/** Represents a physical object detected in an image. */
export interface DetectedObjectOutput {
/** A rectangular boundary where the object was detected. */
boundingBox: ImageBoundingBoxOutput;
/** A single-item list containing the object information. */
tags: Array<DetectedTagOutput>;
}
/**
* A content entity observation in the image. A tag can be a physical object, living being, scenery, or action
 * that appears in the image.
*/
export interface DetectedTagOutput {
/**
* A score, in the range of 0 to 1 (inclusive), representing the confidence that this entity was observed.
   * Higher values indicate higher confidence.
*/
confidence: number;
/** Name of the entity. */
name: string;
}
/** Represents a list of people detected in an image and their location. */
export interface PeopleResultOutput {
/** A list of people detected in an image and their location. */
values: Array<DetectedPersonOutput>;
}
/** Represents a person detected in an image. */
export interface DetectedPersonOutput {
/** A rectangular boundary where the person was detected. */
readonly boundingBox: ImageBoundingBoxOutput;
/**
* A score, in the range of 0 to 1 (inclusive), representing the confidence that this detection was accurate.
   * Higher values indicate higher confidence.
*/
readonly confidence: number;
}
/** The results of a Read (OCR) operation. */
export interface ReadResultOutput {
/** A list of text blocks in the image. At the moment only one block is returned, containing all the text detected in the image. */
blocks: Array<DetectedTextBlockOutput>;
}
/** Represents a single block of detected text in the image. */
export interface DetectedTextBlockOutput {
/** A list of text lines in this block. */
lines: Array<DetectedTextLineOutput>;
}
/** Represents a single line of text in the image. */
export interface DetectedTextLineOutput {
/** Text content of the detected text line. */
text: string;
/** A bounding polygon around the text line. At the moment only quadrilaterals are supported (represented by 4 image points). */
boundingPolygon: Array<ImagePointOutput>;
/** A list of words in this line. */
words: Array<DetectedTextWordOutput>;
}
/** Represents the coordinates of a single pixel in the image. */
export interface ImagePointOutput {
  /** The horizontal x-coordinate of this point, in pixels. A value of zero corresponds to the left-most pixels in the image. */
  x: number;
  /** The vertical y-coordinate of this point, in pixels. A value of zero corresponds to the top-most pixels in the image. */
y: number;
}
/**
* A word object consisting of a contiguous sequence of characters. For non-space delimited languages,
* such as Chinese, Japanese, and Korean, each character is represented as its own word.
*/
export interface DetectedTextWordOutput {
/** Text content of the word. */
text: string;
/** A bounding polygon around the word. At the moment only quadrilaterals are supported (represented by 4 image points). */
boundingPolygon: Array<ImagePointOutput>;
/** The level of confidence that the word was detected. Confidence scores span the range of 0.0 to 1.0 (inclusive), with higher values indicating a higher confidence of detection. */
confidence: number;
}
/**
 * Smart cropping result. A list of crop regions at the desired aspect ratios (if provided) that can be used as image thumbnails.
* These regions preserve as much content as possible from the analyzed image, with priority given to detected faces.
*/
export interface SmartCropsResultOutput {
/** A list of crop regions. */
values: Array<CropRegionOutput>;
}
/**
 * A region at the desired aspect ratio that can be used as an image thumbnail.
* The region preserves as much content as possible from the analyzed image, with priority given to detected faces.
*/
export interface CropRegionOutput {
/**
* The aspect ratio of the crop region.
* Aspect ratio is calculated by dividing the width of the region in pixels by its height in pixels.
* The aspect ratio will be in the range 0.75 to 1.8 (inclusive) if provided by the developer during the analyze call.
* Otherwise, it will be in the range 0.5 to 2.0 (inclusive).
*/
aspectRatio: number;
/** The bounding box of the region. */
boundingBox: ImageBoundingBoxOutput;
}
/**
 * A list of entities observed in the image. Tags can be physical objects, living beings, scenery, or actions
* that appear in the image.
*/
export interface TagsResultOutput {
/** A list of tags. */
values: Array<DetectedTagOutput>;
}
/** An object holding the publicly reachable URL of an image to analyze. */
export interface ImageUrlOutput {
/** Publicly reachable URL of an image to analyze. */
url: string;
}
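The Read (OCR) types above nest as blocks, lines, and words, with each word carrying its own confidence and bounding polygon. A small helper sketch (the function name is illustrative) that flattens that hierarchy from an analysis result:

import { ImageAnalysisResultOutput } from "@azure-rest/ai-vision-image-analysis";

// Collects every recognized word with its confidence, walking blocks -> lines -> words.
function collectWords(result: ImageAnalysisResultOutput): string[] {
  const words: string[] = [];
  for (const block of result.readResult?.blocks ?? []) {
    for (const line of block.lines) {
      for (const word of line.words) {
        words.push(`${word.text} (${word.confidence.toFixed(2)})`);
      }
    }
  }
  return words;
}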


@ -0,0 +1,121 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import { RequestParameters } from "@azure-rest/core-client";
import { ImageUrl } from "./models";
export interface AnalyzeFromBufferBodyParam {
/**
* The image to be analyzed
*
* Value may contain any sequence of octets
*/
body: string | Uint8Array | ReadableStream<Uint8Array> | NodeJS.ReadableStream;
}
export interface AnalyzeFromBufferQueryParamProperties {
/**
* A list of visual features to analyze.
* Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People.
* At least one visual feature must be specified.
*/
features: string[];
/**
* The desired language for result generation (a two-letter language code).
* If this option is not specified, the default value 'en' is used (English).
* See https://aka.ms/cv-languages for a list of supported languages.
   * At the moment, only tags can be generated in non-English languages.
*/
language?: string;
/**
* Boolean flag for enabling gender-neutral captioning for Caption and Dense Captions features.
* By default captions may contain gender terms (for example: 'man', 'woman', or 'boy', 'girl').
* If you set this to "true", those will be replaced with gender-neutral terms (for example: 'person' or 'child').
*/
"gender-neutral-caption"?: boolean;
/**
* A list of aspect ratios to use for smart cropping.
* Aspect ratios are calculated by dividing the target crop width in pixels by the height in pixels.
* Supported values are between 0.75 and 1.8 (inclusive).
* If this parameter is not specified, the service will return one crop region with an aspect
* ratio it sees fit between 0.5 and 2.0 (inclusive).
*/
"smartcrops-aspect-ratios"?: number[];
/**
   * The version of the cloud AI model used for analysis.
   * The format is 'latest' (default value), 'YYYY-MM-DD', or 'YYYY-MM-DD-preview', where 'YYYY', 'MM', 'DD' are the year, month and day associated with the model.
* This is not commonly set, as the default always gives the latest AI model with recent improvements.
* If however you would like to make sure analysis results do not change over time, set this value to a specific model version.
*/
"model-version"?: string;
}
export interface AnalyzeFromBufferQueryParam {
queryParameters: AnalyzeFromBufferQueryParamProperties;
}
export interface AnalyzeFromBufferMediaTypesParam {
/** The format of the HTTP payload. */
contentType: "application/octet-stream";
}
export type AnalyzeFromBufferParameters = AnalyzeFromBufferQueryParam &
AnalyzeFromBufferMediaTypesParam &
AnalyzeFromBufferBodyParam &
RequestParameters;
export interface AnalyzeFromUrlBodyParam {
/** The image to be analyzed */
body: ImageUrl;
}
export interface AnalyzeFromUrlQueryParamProperties {
/**
* A list of visual features to analyze.
* Seven visual features are supported: Caption, DenseCaptions, Read (OCR), Tags, Objects, SmartCrops, and People.
* At least one visual feature must be specified.
*/
features: string[];
/**
* The desired language for result generation (a two-letter language code).
* If this option is not specified, the default value 'en' is used (English).
* See https://aka.ms/cv-languages for a list of supported languages.
   * At the moment, only tags can be generated in non-English languages.
*/
language?: string;
/**
* Boolean flag for enabling gender-neutral captioning for Caption and Dense Captions features.
* By default captions may contain gender terms (for example: 'man', 'woman', or 'boy', 'girl').
* If you set this to "true", those will be replaced with gender-neutral terms (for example: 'person' or 'child').
*/
"gender-neutral-caption"?: boolean;
/**
* A list of aspect ratios to use for smart cropping.
* Aspect ratios are calculated by dividing the target crop width in pixels by the height in pixels.
* Supported values are between 0.75 and 1.8 (inclusive).
* If this parameter is not specified, the service will return one crop region with an aspect
* ratio it sees fit between 0.5 and 2.0 (inclusive).
*/
"smartcrops-aspect-ratios"?: number[];
/**
   * The version of the cloud AI model used for analysis.
   * The format is 'latest' (default value), 'YYYY-MM-DD', or 'YYYY-MM-DD-preview', where 'YYYY', 'MM', 'DD' are the year, month and day associated with the model.
* This is not commonly set, as the default always gives the latest AI model with recent improvements.
* If however you would like to make sure analysis results do not change over time, set this value to a specific model version.
*/
"model-version"?: string;
}
export interface AnalyzeFromUrlQueryParam {
queryParameters: AnalyzeFromUrlQueryParamProperties;
}
export interface AnalyzeFromUrlMediaTypesParam {
/** The format of the HTTP payload. */
contentType: "application/json";
}
export type AnalyzeFromUrlParameters = AnalyzeFromUrlQueryParam &
AnalyzeFromUrlMediaTypesParam &
AnalyzeFromUrlBodyParam &
RequestParameters;
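The optional query parameters documented above (`language`, `gender-neutral-caption`, `smartcrops-aspect-ratios`, and `model-version`) apply to both the buffer and URL overloads. A sketch of a URL-based call that sets all of them; the specific values are illustrative only:

import createImageAnalysisClient, { isUnexpected } from "@azure-rest/ai-vision-image-analysis";
import { AzureKeyCredential } from "@azure/core-auth";

async function analyzeWithOptions(): Promise<void> {
  const client = createImageAnalysisClient(
    process.env["VISION_ENDPOINT"] || "<your_endpoint>",
    new AzureKeyCredential(process.env["VISION_KEY"] || "<your_key>")
  );
  const result = await client.path("/imageanalysis:analyze").post({
    body: { url: "https://aka.ms/azai/vision/image-analysis-sample.jpg" },
    queryParameters: {
      features: ["Caption", "Tags", "SmartCrops"],
      language: "en", // two-letter language code
      "gender-neutral-caption": true, // replace gendered caption terms
      "smartcrops-aspect-ratios": [1.0, 1.33],
      "model-version": "latest",
    },
    contentType: "application/json",
  });
  if (isUnexpected(result)) {
    throw result.body.error;
  }
  console.log(result.body.captionResult?.text);
}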


@ -0,0 +1,40 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import { RawHttpHeaders } from "@azure/core-rest-pipeline";
import { HttpResponse, ErrorResponse } from "@azure-rest/core-client";
import { ImageAnalysisResultOutput } from "./outputModels";
/** The request has succeeded. */
export interface AnalyzeFromBuffer200Response extends HttpResponse {
status: "200";
body: ImageAnalysisResultOutput;
}
export interface AnalyzeFromBufferDefaultHeaders {
/** String error code indicating what went wrong. */
"x-ms-error-code"?: string;
}
export interface AnalyzeFromBufferDefaultResponse extends HttpResponse {
status: string;
body: ErrorResponse;
headers: RawHttpHeaders & AnalyzeFromBufferDefaultHeaders;
}
/** The request has succeeded. */
export interface AnalyzeFromUrl200Response extends HttpResponse {
status: "200";
body: ImageAnalysisResultOutput;
}
export interface AnalyzeFromUrlDefaultHeaders {
/** String error code indicating what went wrong. */
"x-ms-error-code"?: string;
}
export interface AnalyzeFromUrlDefaultResponse extends HttpResponse {
status: string;
body: ErrorResponse;
headers: RawHttpHeaders & AnalyzeFromUrlDefaultHeaders;
}
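When the service returns a non-200 status, the default response types above carry an `ErrorResponse` body and an `x-ms-error-code` header. A sketch of handling that case; the URL below is a placeholder chosen to fail:

import createImageAnalysisClient, { isUnexpected } from "@azure-rest/ai-vision-image-analysis";
import { AzureKeyCredential } from "@azure/core-auth";

async function analyzeWithErrorHandling(): Promise<void> {
  const client = createImageAnalysisClient(
    process.env["VISION_ENDPOINT"] || "<your_endpoint>",
    new AzureKeyCredential(process.env["VISION_KEY"] || "<your_key>")
  );
  const result = await client.path("/imageanalysis:analyze").post({
    body: { url: "https://example.com/not-an-image" }, // deliberately invalid input
    queryParameters: { features: ["Caption"] },
    contentType: "application/json",
  });
  if (isUnexpected(result)) {
    // The default response carries an ErrorResponse body and an x-ms-error-code header.
    console.error(`Error code: ${result.headers["x-ms-error-code"]}`);
    console.error(`Error details: ${JSON.stringify(result.body.error)}`);
    return;
  }
  console.log(result.body.captionResult?.text);
}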

(Binary test image asset added, 91 KiB; contents not shown.)

@ -0,0 +1,378 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import { assert } from "chai";
import { Context } from "mocha";
import {
ImageAnalysisClient,
CaptionResultOutput,
ImageAnalysisResultOutput,
ImagePointOutput,
ObjectsResultOutput,
TagsResultOutput,
} from "../../src/index.js";
import { createClient, createRecorder } from "./utils/recordedClient";
import { Recorder } from "@azure-tools/test-recorder";
describe("Analyze Tests", () => {
let recorder: Recorder;
let client: ImageAnalysisClient;
beforeEach(async function (this: Context) {
recorder = await createRecorder(this);
recorder.addSanitizers({
headerSanitizers: [{ key: "Ocp-Apim-Subscription-Key", value: "***********" }],
uriSanitizers: [{ target: "https://[a-zA-Z0-9-]*/", value: "https://endpoint/" }],
});
client = await createClient(recorder);
});
afterEach(async function () {
await recorder?.stop();
});
async function downloadUrlToUint8Array(url: string): Promise<Uint8Array> {
const response = await fetch(url);
if (!response.ok) {
throw new Error(`Failed to download content: ${response.status} ${response.statusText}`);
}
const buffer = await response.arrayBuffer();
return new Uint8Array(buffer);
}
it("Analyze from URL", async function () {
const allFeatures: string[] = [
"Caption",
"DenseCaptions",
"Objects",
"People",
"Read",
"SmartCrops",
"Tags",
];
const someFeatures: string[] = ["Caption", "Read"];
const testFeaturesList: string[][] = [allFeatures, someFeatures];
for (const testFeatures of testFeaturesList) {
const result = await client.path("/imageanalysis:analyze").post({
body: {
url: "https://aka.ms/azai/vision/image-analysis-sample.jpg",
},
queryParameters: {
features: testFeatures,
"smartCrops-aspect-ratios": [0.9, 1.33],
},
contentType: "application/json",
});
assert.isNotNull(result);
assert.equal(result.status, "200");
const iaResult: ImageAnalysisResultOutput = result.body as ImageAnalysisResultOutput;
validateResponse(iaResult, testFeatures, false);
}
});
it("Analyze from Stream", async function () {
const allFeatures: string[] = [
"Caption",
"DenseCaptions",
"Objects",
"People",
"Read",
"SmartCrops",
"Tags",
];
const someFeatures: string[] = ["Caption", "Read"];
const url: string = "https://aka.ms/azai/vision/image-analysis-sample.jpg";
const data: Uint8Array = await downloadUrlToUint8Array(url);
for (const testFeatures of [allFeatures, someFeatures]) {
const result = await client.path("/imageanalysis:analyze").post({
body: data,
queryParameters: {
features: testFeatures,
"smartCrops-aspect-ratios": [0.9, 1.33],
},
contentType: "application/octet-stream",
});
assert.isNotNull(result);
assert.equal(result.status, "200");
const iaResult: ImageAnalysisResultOutput = result.body as ImageAnalysisResultOutput;
validateResponse(iaResult, testFeatures, false);
}
});
function validateResponse(
iaResult: ImageAnalysisResultOutput,
testFeatures: string[],
genderNeutral: boolean
): void {
validateMetadata(iaResult);
const captionResult = iaResult.captionResult;
if (testFeatures.includes("Caption")) {
if (captionResult) {
validateCaption(captionResult, genderNeutral);
} else {
assert.fail("captionResult is null");
}
} else {
assert.isUndefined(captionResult);
}
if (testFeatures.includes("DenseCaptions")) {
validateDenseCaptions(iaResult);
} else {
assert.isUndefined(iaResult.denseCaptionsResult);
}
const objectsResult = iaResult.objectsResult;
if (testFeatures.includes("Objects")) {
if (objectsResult) {
validateObjectsResult(objectsResult);
} else {
assert.fail("objectsResult is null");
}
} else {
assert.isUndefined(objectsResult);
}
if (testFeatures.includes("Tags")) {
if (iaResult.tagsResult) {
validateTags(iaResult.tagsResult);
} else {
assert.fail("tagsResult is null");
}
} else {
assert.isUndefined(iaResult.tagsResult);
}
if (testFeatures.includes("People")) {
validatePeopleResult(iaResult);
} else {
assert.isUndefined(iaResult.peopleResult);
}
if (testFeatures.includes("SmartCrops")) {
validateSmartCrops(iaResult);
} else {
assert.isUndefined(iaResult.smartCropsResult);
}
const readResult = iaResult.readResult;
if (!testFeatures.includes("Read")) {
assert.isUndefined(readResult);
} else {
if (readResult) {
validateReadResult(iaResult);
} else {
assert.fail("readResult is null");
}
}
}
function validateReadResult(result: ImageAnalysisResultOutput): void {
const readResult = result.readResult;
if (!readResult) throw new Error("Read result is null");
const allText: string[] = [];
let words = 0;
let lines = 0;
const pagePolygon: ImagePointOutput[] = [
{ x: 0, y: 0 },
{ x: 0, y: result.metadata.height },
{ x: result.metadata.width, y: result.metadata.height },
{ x: result.metadata.width, y: 0 },
];
for (const block of readResult.blocks) {
for (const oneLine of block.lines) {
if (!oneLine.boundingPolygon.every((p) => isInPolygon(p, pagePolygon))) {
throw new Error("Bounding polygon is not in the page polygon");
}
words += oneLine.words.length;
lines++;
allText.push(oneLine.text);
for (const word of oneLine.words) {
if (word.confidence <= 0 || word.confidence >= 1) {
throw new Error("Invalid word confidence value");
}
if (!oneLine.text.includes(word.text)) {
throw new Error("One line text does not contain word text");
}
}
}
}
if (words !== 6) throw new Error("Words count is not equal to 6");
if (lines !== 3) throw new Error("Lines count is not equal to 3");
if (allText.join("\n") !== "Sample text\nHand writing\n123 456") {
throw new Error("All text content is not equal to the expected value");
}
}
function isInPolygon(suspectPoint: ImagePointOutput, polygon: ImagePointOutput[]): boolean {
let intersectCount = 0;
const points = [...polygon, polygon[0]];
for (let i = 0; i < points.length - 1; i++) {
const p1 = points[i];
const p2 = points[i + 1];
if (
p1.y > suspectPoint.y !== p2.y > suspectPoint.y &&
suspectPoint.x < ((p2.x - p1.x) * (suspectPoint.y - p1.y)) / (p2.y - p1.y) + p1.x
) {
intersectCount++;
}
}
const result = intersectCount % 2 !== 0;
if (!result) {
console.log(`Point ${suspectPoint} is not in polygon ${polygon}`);
}
return result;
}
function validateMetadata(iaResult: ImageAnalysisResultOutput): void {
assert.isAbove(iaResult.metadata.height, 0);
assert.isAbove(iaResult.metadata.width, 0);
assert.isFalse(iaResult.modelVersion.trim() === "");
}
function validateCaption(captionResult: CaptionResultOutput, genderNeutral: boolean): void {
assert.isNotNull(captionResult);
assert.isAbove(captionResult.confidence, 0);
assert.isBelow(captionResult.confidence, 1);
assert.isTrue(captionResult.text.toLowerCase().includes(genderNeutral ? "person" : "woman"));
assert.isTrue(captionResult.text.toLowerCase().includes("table"));
assert.isTrue(captionResult.text.toLowerCase().includes("laptop"));
}
function validateDenseCaptions(iaResult: ImageAnalysisResultOutput): void {
const denseCaptionsResult = iaResult.denseCaptionsResult;
assert.isNotNull(denseCaptionsResult);
assert.isAtLeast(denseCaptionsResult!.values.length, 1);
const firstCaption = denseCaptionsResult!.values[0];
assert.isNotNull(firstCaption);
assert.isNotNull(firstCaption.boundingBox);
assert.strictEqual(firstCaption.boundingBox.w, iaResult.metadata.width);
assert.strictEqual(firstCaption.boundingBox.h, iaResult.metadata.height);
assert.isNotNull(firstCaption.text);
if (iaResult.captionResult != null) {
assert.strictEqual(iaResult.captionResult.text, firstCaption.text);
}
const boundingBoxes = new Set<string>();
for (const oneDenseCaption of denseCaptionsResult!.values) {
assert.isNotNull(oneDenseCaption.boundingBox);
assert.isFalse(boundingBoxes.has(JSON.stringify(oneDenseCaption.boundingBox)));
boundingBoxes.add(JSON.stringify(oneDenseCaption.boundingBox));
assert.isNotNull(oneDenseCaption.text);
assert.isAbove(oneDenseCaption.confidence, 0);
assert.isBelow(oneDenseCaption.confidence, 1);
}
}
function validateObjectsResult(objectsResult: ObjectsResultOutput): void {
assert.isNotNull(objectsResult);
assert.isAtLeast(objectsResult.values.length, 0);
for (const oneObject of objectsResult.values) {
assert.isNotNull(oneObject.boundingBox);
assert.isTrue(
oneObject.boundingBox.x > 0 ||
oneObject.boundingBox.y > 0 ||
oneObject.boundingBox.h > 0 ||
oneObject.boundingBox.w > 0
);
assert.isNotNull(oneObject.tags);
for (const oneTag of oneObject.tags) {
assert.isFalse(oneTag.name.trim() === "");
assert.isAbove(oneTag.confidence, 0);
assert.isBelow(oneTag.confidence, 1);
}
}
    assert.isAtLeast(
      objectsResult.values.filter((v) => v.tags.some((t) => t.name.toLowerCase() === "person"))
        .length,
      0
    );
}
function validateTags(tagsResult: TagsResultOutput): void {
assert.isNotNull(tagsResult);
assert.isNotNull(tagsResult.values);
assert.isAtLeast(tagsResult.values.length, 0);
let found = 0;
const tagNames = new Set<string>();
for (const oneTag of tagsResult.values) {
assert.isAbove(oneTag.confidence, 0);
assert.isBelow(oneTag.confidence, 1);
assert.isFalse(oneTag.name.trim() === "");
if (["person", "woman", "laptop", "cat", "canidae"].includes(oneTag.name.toLowerCase())) {
found++;
}
assert.isFalse(tagNames.has(oneTag.name));
tagNames.add(oneTag.name);
}
assert.isAtLeast(found, 2);
}
function validatePeopleResult(iaResult: ImageAnalysisResultOutput): void {
const peopleResult = iaResult.peopleResult;
assert.isNotNull(peopleResult);
assert.isAtLeast(peopleResult!.values.length, 0);
const boundingBoxes = new Set<string>();
for (const onePerson of peopleResult!.values) {
assert.isNotNull(onePerson.boundingBox);
assert.isFalse(boundingBoxes.has(JSON.stringify(onePerson.boundingBox)));
boundingBoxes.add(JSON.stringify(onePerson.boundingBox));
assert.isAbove(onePerson.confidence, 0);
assert.isBelow(onePerson.confidence, 1);
}
}
function validateSmartCrops(iaResult: ImageAnalysisResultOutput): void {
const smartCropsResult = iaResult.smartCropsResult;
assert.isNotNull(smartCropsResult);
assert.isNotNull(smartCropsResult!.values);
assert.strictEqual(smartCropsResult!.values.length, 2);
const boundingBoxes = new Set<string>();
for (const oneCrop of smartCropsResult!.values) {
assert.isFalse(boundingBoxes.has(JSON.stringify(oneCrop.boundingBox)));
boundingBoxes.add(JSON.stringify(oneCrop.boundingBox));
}
}
});


@ -0,0 +1,2 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.


@ -0,0 +1,6 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import * as dotenv from "dotenv";
dotenv.config();


@ -0,0 +1,39 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT license.
import { Context } from "mocha";
import {
Recorder,
RecorderStartOptions,
assertEnvironmentVariable,
} from "@azure-tools/test-recorder";
import "./env";
import importedCreateClient, { ImageAnalysisClient } from "../../../src/index";
import { AzureKeyCredential } from "@azure/core-auth";
const envSetupForPlayback: Record<string, string> = {
VISION_ENDPOINT: "https://endpoint/",
VISION_KEY: "***********",
};
const recorderEnvSetup: RecorderStartOptions = {
envSetupForPlayback,
};
/**
* creates the recorder and reads the environment variables from the `.env` file.
* Should be called first in the test suite to make sure environment variables are
* read before they are being used.
*/
export async function createRecorder(context: Context): Promise<Recorder> {
const recorder = new Recorder(context.currentTest);
await recorder.start(recorderEnvSetup);
return recorder;
}
export async function createClient(recorder: Recorder): Promise<ImageAnalysisClient> {
const endpoint = assertEnvironmentVariable("VISION_ENDPOINT");
const key = assertEnvironmentVariable("VISION_KEY");
const credential = new AzureKeyCredential(key);
return importedCreateClient(endpoint, credential, recorder.configureClientOptions({}));
}


@ -0,0 +1,5 @@
{
"extends": "../../../tsconfig.package",
"compilerOptions": { "outDir": "./dist-esm", "declarationDir": "./types" },
"include": ["src/**/*.ts", "./test/**/*.ts"]
}


@ -0,0 +1,5 @@
directory: specification/ai/ImageAnalysis/
additionalDirectories: []
repo: Azure/azure-rest-api-specs
commit: 1aacda3283d91a87936cf1a091b202e4cdbf028e

sdk/vision/ci.yml (new file)

@ -0,0 +1,32 @@
# NOTE: Please refer to https://aka.ms/azsdk/engsys/ci-yaml before editing this file.
trigger:
branches:
include:
- main
- hotfix/*
- release/*
- restapi*
paths:
include:
- sdk/vision/
pr:
branches:
include:
- main
- feature/*
- hotfix/*
- release/*
- restapi*
paths:
include:
- sdk/vision/
extends:
template: ../../eng/pipelines/templates/stages/archetype-sdk-client.yml
parameters:
ServiceDirectory: vision
Artifacts:
- name: azure-rest-ai-vision-image-analysis
safeName: azurerestaivisionimageanalysis