[azopenaiextensions] Initial package creation (#23462)

This commit is contained in:
Richard Park 2024-09-24 14:35:24 -07:00 committed by GitHub
Parent 73af1f6a68
Commit 01c6a8fc3a
No key found matching this signature
GPG key ID: B5690EEEBB952194
43 changed files: 6930 additions, 0 deletions

@@ -0,0 +1,5 @@
# Release History
## 0.1.0 (TBD)
- Initial release of the `azopenaiextensions` module, which can be used with the [OpenAI go module](https://github.com/openai/openai-go)

@@ -0,0 +1,114 @@
# Contributing Guide
> NOTE: these instructions are for fixing or adding features to the `azopenaiextensions` module. To use the module refer to the readme for this package: [readme.md](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/ai/azopenaiextensions/README.md).
This is a contributing guide for the `azopenaiextensions` package. For general contributing guidelines refer to [CONTRIBUTING.md](https://github.com/Azure/azure-sdk-for-go/blob/main/CONTRIBUTING.md).
The `azopenaiextensions` package is used with the Azure OpenAI service. New features are added using our code generation process, with the service API specified in [TypeSpec](https://github.com/Microsoft/typespec), which details all the models and protocol methods for using OpenAI.
### Prerequisites
For code fixes that do not require code generation:
- Go 1.21 (or greater)
For code generation:
- [NodeJS (use the latest LTS)](https://nodejs.org)
- [TypeSpec compiler](https://github.com/Microsoft/typespec#getting-started).
- [autorest](https://github.com/Azure/autorest/tree/main/packages/apps/autorest)
- [PowerShell Core](https://github.com/PowerShell/PowerShell#get-powershell)
- [goimports](https://pkg.go.dev/golang.org/x/tools/cmd/goimports)
# Building
## Generating from TypeSpec
The models in this package are generated from TypeSpec. Files that do not have `custom` in their name (e.g. `client.go`, `models.go`, `models_serde.go`, etc.) are generated.
### Regeneration
The `testdata/tsp-location.yaml` file specifies the revision (and repo) that we use to generate the client. This also makes it possible, if needed, to generate from branch commits in [`Azure/azure-rest-api-specs`](https://github.com/Azure/azure-rest-api-specs).
**tsp-location.yaml**:
```yaml
# ie: https://github.com/Azure/azure-rest-api-specs/tree/1e243e2b0d0d006599dcb64f82fd92aecc1247be/specification/cognitiveservices/OpenAI.Inference
directory: specification/cognitiveservices/OpenAI.Inference
commit: 1e243e2b0d0d006599dcb64f82fd92aecc1247be
repo: Azure/azure-rest-api-specs
```
The generation process is all done as `go generate` commands in `build.go`. To regenerate the client run:
```
go generate ./...
```
Commit the generated changes as part of your pull request.
If the changes don't look quite right you can adjust the generated code using the `autorest.md` file.
# Testing
There are three kinds of tests for this package: unit tests, recorded tests and live tests.
## Unit and recorded tests
Unit tests and recorded tests do not require access to OpenAI to run and will run with any PR as a check-in gate.
Recorded tests require that the Azure SDK test proxy be running. See the instructions for [installing the test-proxy](https://github.com/Azure/azure-sdk-tools/blob/main/tools/test-proxy/Azure.Sdk.Tools.TestProxy/README.md#installation).
In one terminal window, start the test-proxy:
```bash
cd <root of the azopenaiextensions module>
test-proxy
```
In another terminal window:
To play back (i.e., use recordings):
```bash
cd <root of the azopenaiextensions module>
export AZURE_RECORD_MODE=playback
go test -count 1 -v ./...
```
To re-record:
```bash
cd <root of the azopenaiextensions module>
export AZURE_RECORD_MODE=record
go test -count 1 -v ./...
# push the recording changes to the repo
test-proxy push -a assets.json
# commit our assets.json file now that it points
# to the new recordings.
git add assets.json
git commit -m "updated recordings"
git push
```
## Live tests
### Local development
Copy the `sample.env` file to `.env`, and fill out all the values. Each value is documented to give you a general idea of what's needed, but ultimately you'll need to work with the Azure OpenAI SDK team to figure out which services are used for which features.
Once filled out, the tests will automatically load environment variables from the `.env`:
```bash
export AZURE_RECORD_MODE=live
go test -count 1 -v ./...
```
### Pull requests
Post a comment to your PR with this text:
```
/azp run go - azopenaiextensions
```
The build bot will post a comment indicating it has started the pipeline, and the checks will start showing up in the PR status as well.

@@ -0,0 +1,21 @@
MIT License
Copyright (c) Microsoft Corporation. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

@@ -0,0 +1,20 @@
# Examples
## Conventions
If you're already using the `azopenai` package, but would like to switch to using `openai-go`, you'll need to adjust your code to accommodate the different conventions in that package.
- Fields for input types are wrapped in an `openai.Field` type, using the `openai.F()`, or helper functions like `openai.Int`:
```go
chatParams := openai.ChatCompletionNewParams{
Model: openai.F(model),
MaxTokens: openai.Int(512),
}
```
- Model deployment names are passed in the `Model` input field, instead of `DeploymentName`.
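Putting both conventions together, a request body might be built like this (sketch only; the deployment name shown is a placeholder, and the surrounding client construction is omitted):

```go
// Sketch: "my-gpt-4-deployment" is a hypothetical Azure OpenAI deployment name.
// With openai-go, the deployment name is passed via the Model field, and input
// values are wrapped with openai.F() or helpers like openai.Int().
chatParams := openai.ChatCompletionNewParams{
	Model:     openai.F(openai.ChatModel("my-gpt-4-deployment")),
	MaxTokens: openai.Int(512),
	Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
		openai.UserMessage("Count to 10, with a comma between each number."),
	}),
}
```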
## Using "Azure OpenAI On Your Data" with openai-go
["Azure OpenAI On Your Data"](https://learn.microsoft.com/azure/ai-services/openai/concepts/use-your-data) allows you to use external data sources, such as Azure AI Search, in combination with Azure OpenAI. This package provides a helper function, `WithDataSources`, to make it easy to include `DataSources` when using `openai-go`.
For a full example, see [example_azure_on_your_data_test.go](./example_azure_on_your_data_test.go).
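As a minimal sketch, wiring an Azure AI Search data source into a standard openai-go chat call might look like the following. The configuration type and field names here are assumptions based on the sibling `azopenai` module, and the endpoint and index name are placeholders; consult the example linked above for the exact shapes:

```go
// Sketch only: the AzureSearch* type and field names below are assumptions,
// not confirmed API surface; see the full example for the real types.
resp, err := client.Chat.Completions.New(ctx, chatParams,
	azopenaiextensions.WithDataSources(&azopenaiextensions.AzureSearchChatExtensionConfiguration{
		Parameters: &azopenaiextensions.AzureSearchChatExtensionParameters{
			Endpoint:  to.Ptr("https://<your-search-resource>.search.windows.net"),
			IndexName: to.Ptr("<your-index>"),
		},
	}))
```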

@@ -0,0 +1,64 @@
# Azure OpenAI extensions module for Go
This module provides models and convenience functions to make it simpler to use Azure OpenAI features, such as [Azure OpenAI On Your Data][openai_on_your_data], with the [OpenAI Go client](https://pkg.go.dev/github.com/openai/openai-go).
[Source code][repo] | [Package (pkg.go.dev)][pkggodev] | [REST API documentation][openai_rest_docs] | [Product documentation][openai_docs]
## Getting started
### Prerequisites
* Go, version 1.21 or higher - [Install Go](https://go.dev/doc/install)
* [Azure subscription][azure_sub]
* [Azure OpenAI access][azure_openai_access]
### Install the packages
Install the `azopenaiextensions` and `azidentity` modules with `go get`:
```bash
go get github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions
# optional
go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
```
The [azidentity][azure_identity] module is used for Azure Active Directory authentication with Azure OpenAI.
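As a sketch, client construction with token-based authentication typically uses the `azure` option package from openai-go. The option names below are from that package at the time of writing and may change while it is in preview; the endpoint and API version are placeholders:

```go
package main

import (
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/openai/openai-go"
	"github.com/openai/openai-go/azure"
)

func main() {
	// DefaultAzureCredential covers most local development and CI scenarios.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		panic(err)
	}
	// Placeholders: substitute your resource endpoint and a supported API version.
	client := openai.NewClient(
		azure.WithEndpoint("https://<your-resource>.openai.azure.com", "<api-version>"),
		azure.WithTokenCredential(cred),
	)
	_ = client // use client.Chat, client.Audio, etc. as usual
}
```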
## Key concepts
See [Key concepts][openai_key_concepts] in the product documentation for more details about general concepts.
# Examples
Examples for scenarios specific to Azure can be found on [pkg.go.dev](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions#pkg-examples) or in the example*_test.go files in our GitHub repo for [azopenaiextensions](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/ai/azopenaiextensions).
For examples on using the openai-go client, see the examples in the [openai-go](https://github.com/openai/openai-go/tree/main/examples) repository.
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a [Contributor License Agreement (CLA)][cla] declaring that you have the right to, and actually do, grant us the rights to use your contribution.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate
the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to
do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct][coc]. For more information, see
the [Code of Conduct FAQ][coc_faq] or contact [opencode@microsoft.com][coc_contact] with any additional questions or
comments.
<!-- LINKS -->
[azure_identity]: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity
[azure_openai_access]: https://learn.microsoft.com/azure/cognitive-services/openai/overview#how-do-i-get-access-to-azure-openai
[azure_openai_quickstart]: https://learn.microsoft.com/azure/cognitive-services/openai/quickstart
[azure_sub]: https://azure.microsoft.com/free/
[cla]: https://cla.microsoft.com
[coc_contact]: mailto:opencode@microsoft.com
[coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/
[coc]: https://opensource.microsoft.com/codeofconduct/
[openai_docs]: https://learn.microsoft.com/azure/cognitive-services/openai
[openai_key_concepts]: https://learn.microsoft.com/azure/cognitive-services/openai/overview#key-concepts
[openai_on_your_data]: https://learn.microsoft.com/azure/ai-services/openai/concepts/use-your-data
[openai_rest_docs]: https://learn.microsoft.com/azure/cognitive-services/openai/reference
[pkggodev]: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions
[repo]: https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/ai/azopenaiextensions

@@ -0,0 +1,76 @@
# Go
These settings apply only when `--go` is specified on the command line.
``` yaml
input-file:
# this file is generated using the ./testdata/genopenapi.ps1 file.
- ./testdata/generated/openapi.json
output-folder: ../azopenaiextensions
clear-output-folder: false
module: github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions
license-header: MICROSOFT_MIT_NO_VERSION
openapi-type: data-plane
go: true
use: "@autorest/go@4.0.0-preview.63"
title: "OpenAI"
slice-elements-byval: true
rawjson-as-bytes: true
# can't use this since it removes an innererror type that we want
# remove-non-reference-schema: true
```
## Transformations
Fix deployment and endpoint parameters so they show up in the right spots
``` yaml
directive:
- from: swagger-document
where: $["x-ms-paths"]
transform: |
return {};
# NOTE: this is where we decide what models to keep. Anything not included in here just gets
# removed from the swagger definition.
- from: swagger-document
where: $
transform: |
const newDefs = {};
const newPaths = {};
// add types here if they're Azure related, and we want to keep them and
// they're not covered by the oydModelRegex below.
const keep = {};
// this'll catch the Azure "on your data" models.
const oydModelRegex = /^(OnYour|Azure|Pinecone|ContentFilter).+$/;
for (const key in $.definitions) {
if (!(key in keep) && !key.match(oydModelRegex)) {
continue
}
$lib.log(`Including ${key}`);
newDefs[key] = $.definitions[key];
}
$.definitions = newDefs;
// clear out any operations, we aren't going to use them.
$.paths = {};
$.parameters = {};
return $;
- from: swagger-document
debug: true
where: $.definitions
transform: |
$["Azure.Core.Foundations.Error"]["x-ms-client-name"] = "Error";
delete $["Azure.Core.Foundations.Error"].properties["innererror"];
delete $["Azure.Core.Foundations.Error"].properties["details"];
delete $["Azure.Core.Foundations.Error"].properties["target"];
$["Azure.Core.Foundations.InnerError"]["x-ms-external"] = true;
$["Azure.Core.Foundations.ErrorResponse"]["x-ms-external"] = true;
return $;
```

@@ -0,0 +1,13 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
//go:generate pwsh ./testdata/genopenapi.ps1
//go:generate autorest ./autorest.md
//go:generate rm -f options.go openai_client.go responses.go
//go:generate go mod tidy
//go:generate goimports -w .
package azopenaiextensions

@@ -0,0 +1,46 @@
# NOTE: Please refer to https://aka.ms/azsdk/engsys/ci-yaml before editing this file.
trigger:
branches:
include:
- main
- feature/*
- hotfix/*
- release/*
paths:
include:
- sdk/ai/azopenaiextensions
- eng/
pr:
branches:
include:
- main
- feature/*
- hotfix/*
- release/*
paths:
include:
- sdk/ai/azopenaiextensions
extends:
template: /eng/pipelines/templates/jobs/archetype-sdk-client.yml
parameters:
# We need to allow for longer retry times with tests that run against the public endpoint
# which throttles under load. Note, I left a little wiggle room since the TimeoutInMinutes
# controls the overall pipeline and TestRunTime configures the individual `go test -timeout` parameter.
TimeoutInMinutes: 35
TestRunTime: 30m
ServiceDirectory: "ai/azopenaiextensions"
RunLiveTests: true
UsePipelineProxy: false
CloudConfig:
Public:
SubscriptionConfigurations:
- $(sub-config-azure-cloud-test-resources)
- $(sub-config-openai-test-resources) # TestSecrets-openai
EnvVars:
AZURE_TEST_RUN_LIVE: "true" # use when utilizing the New-TestResources Script
AZURE_CLIENT_ID: $(AZOPENAIEXTENSIONS_CLIENT_ID)
AZURE_CLIENT_SECRET: $(AZOPENAIEXTENSIONS_CLIENT_SECRET)
AZURE_TENANT_ID: $(AZOPENAIEXTENSIONS_TENANT_ID)
AZURE_SUBSCRIPTION_ID: $(AZOPENAIEXTENSIONS_SUBSCRIPTION_ID)

@@ -0,0 +1,200 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"fmt"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/internal/recording"
"github.com/openai/openai-go"
"github.com/stretchr/testify/require"
)
func TestAssistants(t *testing.T) {
if recording.GetRecordMode() == recording.PlaybackMode {
t.Skip("https://github.com/Azure/azure-sdk-for-go/issues/22869")
}
// NOTE: if you want to hit the OpenAI service instead...
// assistantClient := openai.NewClient(
// option.WithHeader("OpenAI-Beta", "assistants=v2"),
// )
assistantClient := newStainlessTestClient(t, azureOpenAI.Assistants.Endpoint).Beta.Assistants
assistant, err := assistantClient.New(context.Background(), openai.BetaAssistantNewParams{
Model: openai.F(azureOpenAI.Assistants.Model),
Instructions: openai.String("Answer questions in any manner possible"),
})
require.NoError(t, err)
t.Cleanup(func() {
_, err := assistantClient.Delete(context.Background(), assistant.ID)
require.NoError(t, err)
})
const desc = "This is a newly updated description"
// update the assistant's description
{
updatedAssistant, err := assistantClient.Update(context.Background(), assistant.ID, openai.BetaAssistantUpdateParams{
Description: openai.String(desc),
})
require.NoError(t, err)
require.Equal(t, desc, updatedAssistant.Description)
require.Equal(t, assistant.ID, updatedAssistant.ID)
}
// get the same assistant back again
{
assistant2, err := assistantClient.Get(context.Background(), assistant.ID)
require.NoError(t, err)
require.Equal(t, assistant.ID, assistant2.ID)
require.Equal(t, desc, assistant2.Description)
}
// listing assistants
{
pager, err := assistantClient.List(context.Background(), openai.BetaAssistantListParams{
Limit: openai.Int(1),
})
require.NoError(t, err)
var pages []openai.Assistant = pager.Data
require.NotEmpty(t, pages)
page, err := pager.GetNextPage()
require.NoError(t, err)
if page != nil { // a nil page indicates we've read all pages.
pages = append(pages, page.Data...)
}
require.NotEmpty(t, pages)
}
}
func TestAssistantsThreads(t *testing.T) {
if recording.GetRecordMode() == recording.PlaybackMode {
t.Skip("https://github.com/Azure/azure-sdk-for-go/issues/22869")
}
// NOTE: if you want to hit the OpenAI service instead...
// assistantClient := openai.NewClient(
// option.WithHeader("OpenAI-Beta", "assistants=v2"),
// )
beta := newStainlessTestClient(t, azureOpenAI.Assistants.Endpoint).Beta
assistantClient := beta.Assistants
threadClient := beta.Threads
assistant, err := assistantClient.New(context.Background(), openai.BetaAssistantNewParams{
Model: openai.F(azureOpenAI.Assistants.Model),
Instructions: openai.String("Answer questions in any manner possible"),
})
require.NoError(t, err)
t.Cleanup(func() {
_, err := assistantClient.Delete(context.Background(), assistant.ID)
require.NoError(t, err)
})
thread, err := threadClient.New(context.Background(), openai.BetaThreadNewParams{})
require.NoError(t, err)
t.Cleanup(func() {
_, err := threadClient.Delete(context.Background(), thread.ID)
require.NoError(t, err)
})
metadata := map[string]any{"hello": "world"}
// update the thread
{
updatedThread, err := threadClient.Update(context.Background(), thread.ID, openai.BetaThreadUpdateParams{
Metadata: openai.F[any](metadata),
})
require.NoError(t, err)
require.Equal(t, thread.ID, updatedThread.ID)
require.Equal(t, metadata, updatedThread.Metadata)
}
// get the thread back
{
gotThread, err := threadClient.Get(context.Background(), thread.ID)
require.NoError(t, err)
require.Equal(t, thread.ID, gotThread.ID)
require.Equal(t, metadata, gotThread.Metadata)
}
}
func TestAssistantRun(t *testing.T) {
if recording.GetRecordMode() == recording.PlaybackMode {
t.Skip("https://github.com/Azure/azure-sdk-for-go/issues/22869")
}
// NOTE: if you want to hit the OpenAI service instead...
// assistantClient := openai.NewClient(
// option.WithHeader("OpenAI-Beta", "assistants=v2"),
// )
client := newStainlessTestClient(t, azureOpenAI.Assistants.Endpoint)
// (this is the test, verbatim, from openai-go: https://github.com/openai/openai-go/blob/main/examples/assistant-streaming/main.go)
ctx := context.Background()
// Create an assistant
println("Create an assistant")
assistant, err := client.Beta.Assistants.New(ctx, openai.BetaAssistantNewParams{
Name: openai.String("Math Tutor"),
Instructions: openai.String("You are a personal math tutor. Write and run code to answer math questions."),
Tools: openai.F([]openai.AssistantToolUnionParam{
openai.CodeInterpreterToolParam{Type: openai.F(openai.CodeInterpreterToolTypeCodeInterpreter)},
}),
Model: openai.String("gpt-4-1106-preview"),
})
if err != nil {
panic(err)
}
// Create a thread
println("Create a thread")
thread, err := client.Beta.Threads.New(ctx, openai.BetaThreadNewParams{})
if err != nil {
panic(err)
}
// Create a message in the thread
println("Create a message")
_, err = client.Beta.Threads.Messages.New(ctx, thread.ID, openai.BetaThreadMessageNewParams{
Role: openai.F(openai.BetaThreadMessageNewParamsRoleAssistant),
Content: openai.F([]openai.MessageContentPartParamUnion{
openai.TextContentBlockParam{
Type: openai.F(openai.TextContentBlockParamTypeText),
Text: openai.String("I need to solve the equation `3x + 11 = 14`. Can you help me?"),
},
}),
})
if err != nil {
panic(err)
}
// Create a run
println("Create a run")
stream := client.Beta.Threads.Runs.NewStreaming(ctx, thread.ID, openai.BetaThreadRunNewParams{
AssistantID: openai.String(assistant.ID),
Instructions: openai.String("Please address the user as Jane Doe. The user has a premium account."),
})
// NOTE: NewStreaming doesn't return an error; stream errors are surfaced via stream.Err().
for stream.Next() {
evt := stream.Current()
println(fmt.Sprintf("%T", evt.Data))
}
if err := stream.Err(); err != nil {
panic(err)
}
}

@@ -0,0 +1,165 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"fmt"
"io"
"os"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/internal/recording"
"github.com/openai/openai-go"
"github.com/openai/openai-go/option"
"github.com/stretchr/testify/require"
)
func TestClient_GetAudioTranscription(t *testing.T) {
if recording.GetRecordMode() == recording.PlaybackMode {
t.Skip("https://github.com/Azure/azure-sdk-for-go/issues/22869")
}
client := newStainlessTestClient(t, azureOpenAI.Whisper.Endpoint)
model := azureOpenAI.Whisper.Model
// We're experiencing load issues on some of our shared test resources so we'll just spot check.
t.Run(fmt.Sprintf("%s (%s)", openai.AudioTranscriptionNewParamsResponseFormatText, "m4a"), func(t *testing.T) {
// TODO: BUG: I think. I'm not quite sure how to request any format other than JSON because the bare formats
// cause a deserialization error in the Stainless client.
//
// transcriptResp, err := client.Audio.Transcriptions.New(context.Background(), openai.AudioTranscriptionNewParams{
// Model: openai.F(openai.AudioTranscriptionNewParamsModel(model)),
// File: openai.F(getFile(t, "testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.m4a")),
// ResponseFormat: openai.F(openai.AudioTranscriptionNewParamsResponseFormatText),
// Language: openai.String("en"),
// Temperature: openai.Float(0.0),
// })
// require.Empty(t, transcriptResp)
// require.EqualError(t, err, "expected destination type of 'string' or '[]byte' for responses with content-type that is not 'application/json'")
var text *string
transcriptResp, err := client.Audio.Transcriptions.New(context.Background(), openai.AudioTranscriptionNewParams{
Model: openai.F(openai.AudioModel(model)),
File: openai.F(getFile(t, "testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.m4a")),
ResponseFormat: openai.F(openai.AudioTranscriptionNewParamsResponseFormatText),
Language: openai.String("en"),
Temperature: openai.Float(0.0),
}, option.WithResponseBodyInto(&text))
require.Empty(t, transcriptResp)
require.NoError(t, err)
require.NotEmpty(t, *text)
})
t.Run(fmt.Sprintf("%s (%s)", openai.AudioTranscriptionNewParamsResponseFormatJSON, "mp3"), func(t *testing.T) {
transcriptResp, err := client.Audio.Transcriptions.New(context.Background(), openai.AudioTranscriptionNewParams{
Model: openai.F(openai.AudioModel(model)),
File: openai.F(getFile(t, "testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.mp3")),
ResponseFormat: openai.F(openai.AudioTranscriptionNewParamsResponseFormatVerboseJSON),
Language: openai.String("en"),
Temperature: openai.Float(0.0),
})
customRequireNoError(t, err, true)
t.Logf("Transcription: %s", transcriptResp.Text)
require.NotEmpty(t, transcriptResp)
})
}
func TestClient_GetAudioTranslation(t *testing.T) {
if recording.GetRecordMode() == recording.PlaybackMode {
t.Skip("https://github.com/Azure/azure-sdk-for-go/issues/22869")
}
client := newStainlessTestClient(t, azureOpenAI.Whisper.Endpoint)
model := azureOpenAI.Whisper.Model
resp, err := client.Audio.Translations.New(context.Background(), openai.AudioTranslationNewParams{
Model: openai.F(openai.AudioModel(model)),
File: openai.F(getFile(t, "testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.m4a")),
// TODO: no specific enumeration for Translations format?
ResponseFormat: openai.F(string(openai.AudioTranscriptionNewParamsResponseFormatVerboseJSON)),
Temperature: openai.Float(0.0),
})
customRequireNoError(t, err, true)
t.Logf("Translation: %s", resp.Text)
require.NotEmpty(t, resp.Text)
}
func TestClient_GetAudioSpeech(t *testing.T) {
if recording.GetRecordMode() == recording.PlaybackMode {
t.Skip("https://github.com/Azure/azure-sdk-for-go/issues/22869")
}
var tempFile *os.File
// Generate some speech from text.
{
speechClient := newStainlessTestClient(t, azureOpenAI.Speech.Endpoint)
audioResp, err := speechClient.Audio.Speech.New(context.Background(), openai.AudioSpeechNewParams{
Input: openai.String("i am a computer"),
Voice: openai.F(openai.AudioSpeechNewParamsVoiceAlloy),
ResponseFormat: openai.F(openai.AudioSpeechNewParamsResponseFormatFLAC),
Model: openai.F(openai.AudioModel(azureOpenAI.Speech.Model)),
})
require.NoError(t, err)
defer func() {
err := audioResp.Body.Close()
require.NoError(t, err)
}()
audioBytes, err := io.ReadAll(audioResp.Body)
require.NoError(t, err)
require.NotEmpty(t, audioBytes)
require.Equal(t, "fLaC", string(audioBytes[0:4]))
// write the FLAC to a temp file - the Stainless API uses the filename of the file
// when it sends the request.
tempFile, err = os.CreateTemp("", "audio*.flac")
require.NoError(t, err)
defer tempFile.Close()
_, err = tempFile.Write(audioBytes)
require.NoError(t, err)
_, err = tempFile.Seek(0, io.SeekStart)
require.NoError(t, err)
}
// as a simple check we'll now transcribe the audio file we just generated...
transcriptClient := newStainlessTestClient(t, azureOpenAI.Whisper.Endpoint)
// now send _it_ back through the transcription API and see if we can get something useful.
transcriptResp, err := transcriptClient.Audio.Transcriptions.New(context.Background(), openai.AudioTranscriptionNewParams{
Model: openai.F(openai.AudioModel(azureOpenAI.Whisper.Model)),
File: openai.F[io.Reader](tempFile),
ResponseFormat: openai.F(openai.AudioTranscriptionNewParamsResponseFormatVerboseJSON),
Language: openai.String("en"),
Temperature: openai.Float(0.0),
})
require.NoError(t, err)
// it occasionally comes back with different punctuation or makes a complete sentence but
// the major words always come through.
require.Contains(t, transcriptResp.Text, "computer")
}
func getFile(t *testing.T, path string) io.Reader {
file, err := os.Open(path)
require.NoError(t, err)
t.Cleanup(func() {
err := file.Close()
require.NoError(t, err)
})
return file
}

@@ -0,0 +1,96 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions"
"github.com/openai/openai-go"
"github.com/stretchr/testify/require"
)
func TestChatCompletions_extensions_bringYourOwnData(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.ChatCompletionsOYD.Endpoint)
inputParams := openai.ChatCompletionNewParams{
Model: openai.F(openai.ChatModel(azureOpenAI.ChatCompletionsOYD.Model)),
MaxTokens: openai.Int(512),
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.ChatCompletionMessageParam{
Role: openai.F(openai.ChatCompletionMessageParamRoleUser),
Content: openai.F[any]("What does the OpenAI package do?"),
},
}),
}
resp, err := client.Chat.Completions.New(context.Background(), inputParams,
azopenaiextensions.WithDataSources(&azureOpenAI.Cognitive))
require.NoError(t, err)
require.NotEmpty(t, resp)
msg := azopenaiextensions.ChatCompletionMessage(resp.Choices[0].Message)
msgContext, err := msg.Context()
require.NoError(t, err)
require.NotEmpty(t, msgContext.Citations[0].Content)
require.NotEmpty(t, msg.Content)
require.Equal(t, openai.ChatCompletionChoicesFinishReasonStop, resp.Choices[0].FinishReason)
t.Logf("Content = %s", resp.Choices[0].Message.Content)
}
func TestChatExtensionsStreaming_extensions_bringYourOwnData(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.ChatCompletionsOYD.Endpoint)
inputParams := openai.ChatCompletionNewParams{
Model: openai.F(openai.ChatModel(azureOpenAI.ChatCompletionsOYD.Model)),
MaxTokens: openai.Int(512),
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.ChatCompletionMessageParam{
Role: openai.F(openai.ChatCompletionMessageParamRoleUser),
Content: openai.F[any]("What does the OpenAI package do?"),
},
}),
}
streamer := client.Chat.Completions.NewStreaming(context.Background(), inputParams,
azopenaiextensions.WithDataSources(
&azureOpenAI.Cognitive,
))
defer streamer.Close()
text := ""
first := true
for streamer.Next() {
chunk := streamer.Current()
if first {
// when you BYOD you get some extra content showing you metadata/info from the external
// data source.
first = false
msgContext, err := azopenaiextensions.ChatCompletionChunkChoicesDelta(chunk.Choices[0].Delta).Context()
require.NoError(t, err)
require.NotEmpty(t, msgContext.Citations[0].Content)
}
for _, choice := range chunk.Choices {
text += choice.Delta.Content
}
}
require.NoError(t, streamer.Err())
require.NotEmpty(t, text)
t.Logf("Streaming content = %s", text)
}

@@ -0,0 +1,256 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"testing"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/openai/openai-go"
"github.com/stretchr/testify/require"
)
func newStainlessTestChatCompletionOptions(deployment string) openai.ChatCompletionNewParams {
message := "Count to 10, with a comma between each number, no newlines and a period at the end. E.g., 1, 2, 3, ..."
return openai.ChatCompletionNewParams{
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.UserMessage(message),
}),
MaxTokens: openai.Int(1024),
Temperature: openai.Float(0.0),
Model: openai.F(openai.ChatModel(deployment)),
}
}
var expectedContent = "1, 2, 3, 4, 5, 6, 7, 8, 9, 10."
var expectedRole = openai.MessageRoleAssistant
func TestClient_GetChatCompletions(t *testing.T) {
testFn := func(t *testing.T, client *openai.ChatCompletionService, deployment string, returnedModel string, checkRAI bool) {
resp, err := client.New(context.Background(), newStainlessTestChatCompletionOptions(deployment))
skipNowIfThrottled(t, err)
require.NoError(t, err)
require.NotEmpty(t, resp.ID)
require.NotEmpty(t, resp.Created)
t.Logf("isAzure: %t, deployment: %s, returnedModel: %s", checkRAI, deployment, resp.Model)
require.Equal(t, returnedModel, resp.Model)
// check Choices
require.Equal(t, 1, len(resp.Choices))
choice := resp.Choices[0]
t.Logf("Content = %s", choice.Message.Content)
require.Zero(t, choice.Index)
require.Equal(t, openai.ChatCompletionMessageRoleAssistant, choice.Message.Role)
require.NotEmpty(t, choice.Message.Content)
require.Equal(t, openai.ChatCompletionChoicesFinishReasonStop, choice.FinishReason)
require.Equal(t, openai.CompletionUsage{
// these change depending on which model you use. These #'s work for gpt-4, which is
// what I'm using for these tests.
CompletionTokens: 29,
PromptTokens: 42,
TotalTokens: 71,
}, openai.CompletionUsage{
CompletionTokens: resp.Usage.CompletionTokens,
PromptTokens: resp.Usage.PromptTokens,
TotalTokens: resp.Usage.TotalTokens,
})
if checkRAI {
promptFilterResults, err := azopenaiextensions.ChatCompletion(*resp).PromptFilterResults()
require.NoError(t, err)
require.Equal(t, []azopenaiextensions.ContentFilterResultsForPrompt{
{
PromptIndex: to.Ptr[int32](0),
ContentFilterResults: safeContentFilterResultDetailsForPrompt,
},
}, promptFilterResults)
choiceContentFilter, err := azopenaiextensions.ChatCompletionChoice(resp.Choices[0]).ContentFilterResults()
require.NoError(t, err)
require.Equal(t, safeContentFilter, choiceContentFilter)
}
}
t.Run("AzureOpenAI", func(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.ChatCompletionsRAI.Endpoint)
testFn(t, client.Chat.Completions, azureOpenAI.ChatCompletionsRAI.Model, "gpt-4", true)
})
t.Run("AzureOpenAI.DefaultAzureCredential", func(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.ChatCompletionsRAI.Endpoint)
testFn(t, client.Chat.Completions, azureOpenAI.ChatCompletions.Model, "gpt-4", true)
})
}
func TestClient_GetChatCompletions_LogProbs(t *testing.T) {
testFn := func(t *testing.T, client *openai.ChatCompletionService, model string) {
opts := openai.ChatCompletionNewParams{
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.UserMessage("Count to 10, with a comma between each number, no newlines and a period at the end. E.g., 1, 2, 3, ..."),
}),
MaxTokens: openai.Int(1024),
Temperature: openai.Float(0.0),
Model: openai.F(openai.ChatModel(model)),
Logprobs: openai.Bool(true),
TopLogprobs: openai.Int(5),
}
resp, err := client.New(context.Background(), opts)
require.NoError(t, err)
for _, choice := range resp.Choices {
require.NotEmpty(t, choice.Logprobs)
}
}
t.Run("AzureOpenAI", func(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.ChatCompletions.Endpoint)
testFn(t, client.Chat.Completions, azureOpenAI.ChatCompletions.Model)
})
t.Run("AzureOpenAI.Service", func(t *testing.T) {
client := newStainlessChatCompletionService(t, azureOpenAI.ChatCompletions.Endpoint)
testFn(t, client, azureOpenAI.ChatCompletions.Model)
})
}
func TestClient_GetChatCompletions_LogitBias(t *testing.T) {
// you can use LogitBias to constrain the answer to NOT contain
// certain tokens. More or less following the technique in this OpenAI article:
// https://help.openai.com/en/articles/5247780-using-logit-bias-to-alter-token-probability-with-the-openai-api
testFn := func(t *testing.T, epm endpointWithModel) {
client := newStainlessTestClient(t, epm.Endpoint)
body := openai.ChatCompletionNewParams{
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.UserMessage("Briefly, what are some common roles for people at a circus, names only, one per line?"),
}),
MaxTokens: openai.Int(200),
Temperature: openai.Float(0.0),
Model: openai.F(openai.ChatModel(epm.Model)),
LogitBias: openai.F(map[string]int64{
// you can calculate these tokens using OpenAI's online tool:
// https://platform.openai.com/tokenizer?view=bpe
// These token IDs are all variations of "Clown", which I want to exclude from the response.
"25": -100,
"220": -100,
"1206": -100,
"2493": -100,
"5176": -100,
"43456": -100,
"99423": -100,
}),
}
resp, err := client.Chat.Completions.New(context.Background(), body)
require.NoError(t, err)
for _, choice := range resp.Choices {
require.NotContains(t, choice.Message.Content, "clown")
require.NotContains(t, choice.Message.Content, "Clown")
}
}
t.Run("AzureOpenAI", func(t *testing.T) {
testFn(t, azureOpenAI.ChatCompletions)
})
}
func TestClient_GetChatCompletionsStream(t *testing.T) {
chatClient := newStainlessTestClient(t, azureOpenAI.ChatCompletionsRAI.Endpoint)
returnedDeployment := "gpt-4"
stream := chatClient.Chat.Completions.NewStreaming(context.Background(), newStainlessTestChatCompletionOptions(azureOpenAI.ChatCompletionsRAI.Model))
// the data comes back differently for streaming
// 1. the text comes back in the ChatCompletion.Delta field
// 2. the role is only sent on the first streamed ChatCompletion
// check that the role came back as well.
var choices []openai.ChatCompletionChunkChoice
modelWasReturned := false
for stream.Next() {
chunk := stream.Current()
// NOTE: this is actually the name of the _model_, not the deployment. They usually match (just
// by convention) but if this fails because they _don't_ match we can just adjust the test.
if returnedDeployment == chunk.Model {
modelWasReturned = true
}
azureChunk := azopenaiextensions.ChatCompletionChunk(chunk)
promptResults, err := azureChunk.PromptFilterResults()
require.NoError(t, err)
if promptResults != nil {
require.Equal(t, []azopenaiextensions.ContentFilterResultsForPrompt{
{PromptIndex: to.Ptr[int32](0), ContentFilterResults: safeContentFilterResultDetailsForPrompt},
}, promptResults)
}
if len(chunk.Choices) == 0 {
// you can get empty entries that contain just metadata (ie, prompt annotations)
continue
}
require.Equal(t, 1, len(chunk.Choices))
choices = append(choices, chunk.Choices[0])
}
require.NoError(t, stream.Err())
require.True(t, modelWasReturned)
var message string
for _, choice := range choices {
message += choice.Delta.Content
}
require.Equal(t, expectedContent, message)
require.Equal(t, openai.ChatCompletionChunkChoicesDeltaRoleAssistant, choices[0].Delta.Role)
}
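The streaming test above relies on a pattern the comments call out: content arrives in per-chunk deltas, and the role is only populated on the first chunk, so both must be accumulated across the loop. A minimal, standalone sketch of that accumulation (using a hypothetical local `delta` type, not the openai-go structs):

```go
package main

import "fmt"

// delta mimics the shape of a streamed chunk's choice delta: the role is
// typically only populated on the first chunk, and content arrives in fragments.
type delta struct {
	Role    string
	Content string
}

// accumulate folds a sequence of deltas into the final role and message text.
func accumulate(deltas []delta) (role string, message string) {
	for _, d := range deltas {
		if role == "" {
			role = d.Role // the role is only sent once, on the first chunk
		}
		message += d.Content
	}
	return role, message
}

func main() {
	chunks := []delta{
		{Role: "assistant", Content: "Hello"},
		{Content: ", "},
		{Content: "world."},
	}
	role, message := accumulate(chunks)
	fmt.Println(role, message) // assistant Hello, world.
}
```

The same fold works regardless of how many chunks the service splits the answer into.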
func TestClient_GetChatCompletions_Vision(t *testing.T) {
// testFn := func(t *testing.T, chatClient *azopenaiextensions.Client, deploymentName string, azure bool) {
chatClient := newStainlessTestClient(t, azureOpenAI.Vision.Endpoint)
imageURL := "https://www.bing.com/th?id=OHR.BradgateFallow_EN-US3932725763_1920x1080.jpg"
ctx, cancel := context.WithTimeout(context.TODO(), time.Minute)
defer cancel()
resp, err := chatClient.Chat.Completions.New(ctx, openai.ChatCompletionNewParams{
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.UserMessageParts(
openai.TextPart("Describe this image"),
openai.ImagePart(imageURL),
)},
),
Model: openai.F(openai.ChatModel(azureOpenAI.Vision.Model)),
MaxTokens: openai.Int(512),
})
// vision is a bit of an oversubscribed Azure resource. Allow 429, but mark the test as skipped.
customRequireNoError(t, err, true)
require.NotEmpty(t, resp.Choices[0].Message.Content)
t.Logf("Content: %s", resp.Choices[0].Message.Content)
}

//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"strings"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/openai/openai-go"
"github.com/stretchr/testify/require"
)
func TestClient_GetCompletions(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.Completions.Endpoint)
resp, err := client.Completions.New(context.Background(), openai.CompletionNewParams{
Prompt: openai.F[openai.CompletionNewParamsPromptUnion](openai.CompletionNewParamsPromptArrayOfStrings{"What is Azure OpenAI?"}),
MaxTokens: openai.Int(2048 - 127),
Temperature: openai.Float(0.0),
Model: openai.F(openai.CompletionNewParamsModel(azureOpenAI.Completions.Model)),
})
skipNowIfThrottled(t, err)
require.NoError(t, err)
// we'll do a general check here - as models change the answers can also change, token usages are different,
// etc... So we'll just make sure data is coming back and is reasonable.
require.NotZero(t, resp.Usage.PromptTokens)
require.NotZero(t, resp.Usage.CompletionTokens)
require.NotZero(t, resp.Usage.TotalTokens)
require.Equal(t, int64(0), resp.Choices[0].Index)
require.Equal(t, openai.CompletionChoiceFinishReasonStop, resp.Choices[0].FinishReason)
require.NotEmpty(t, resp.Choices[0].Text)
azureChoice := azopenaiextensions.CompletionChoice(resp.Choices[0])
contentFilterResults, err := azureChoice.ContentFilterResults()
require.NoError(t, err)
require.Equal(t, safeContentFilter, contentFilterResults)
azureCompletion := azopenaiextensions.Completion(*resp)
promptFilterResults, err := azureCompletion.PromptFilterResults()
require.NoError(t, err)
require.Equal(t, []azopenaiextensions.ContentFilterResultsForPrompt{{
PromptIndex: to.Ptr[int32](0),
ContentFilterResults: safeContentFilterResultDetailsForPrompt,
}}, promptFilterResults)
}
func TestGetCompletionsStream(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.Completions.Endpoint)
stream := client.Completions.NewStreaming(context.TODO(), openai.CompletionNewParams{
Model: openai.F(openai.CompletionNewParamsModel(azureOpenAI.Completions.Model)),
MaxTokens: openai.Int(2048),
Temperature: openai.Float(0.0),
Prompt: openai.F[openai.CompletionNewParamsPromptUnion](
openai.CompletionNewParamsPromptArrayOfStrings{"What is Azure OpenAI?"},
),
})
t.Cleanup(func() {
err := stream.Close()
require.NoError(t, err)
})
var sb strings.Builder
var eventCount int
for stream.Next() {
completion := azopenaiextensions.Completion(stream.Current())
promptFilterResults, err := completion.PromptFilterResults()
require.NoError(t, err)
if promptFilterResults != nil {
require.Equal(t, []azopenaiextensions.ContentFilterResultsForPrompt{
{PromptIndex: to.Ptr[int32](0), ContentFilterResults: safeContentFilterResultDetailsForPrompt},
}, promptFilterResults)
}
eventCount++
if len(completion.Choices) > 0 {
sb.WriteString(completion.Choices[0].Text)
}
}
require.NoError(t, stream.Err())
got := sb.String()
require.NotEmpty(t, got)
// there's no strict requirement of how the response is streamed so just
// choosing something that's reasonable but will be lower than typical usage
// (which is usually somewhere around the 80s).
require.GreaterOrEqual(t, eventCount, 50)
}

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"bytes"
"context"
"encoding/base64"
"encoding/binary"
"encoding/json"
"fmt"
"net/http"
"testing"
"github.com/openai/openai-go"
"github.com/stretchr/testify/require"
)
func TestClient_GetEmbeddings_InvalidModel(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.Embeddings.Endpoint)
_, err := client.Embeddings.New(context.Background(), openai.EmbeddingNewParams{
Model: openai.F(openai.EmbeddingNewParamsModel("thisdoesntexist")),
})
var openaiErr *openai.Error
require.ErrorAs(t, err, &openaiErr)
require.Equal(t, http.StatusNotFound, openaiErr.StatusCode)
require.Contains(t, err.Error(), "does not exist")
}
func TestClient_GetEmbeddings(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.Embeddings.Endpoint)
resp, err := client.Embeddings.New(context.Background(), openai.EmbeddingNewParams{
Input: openai.F[openai.EmbeddingNewParamsInputUnion](openai.EmbeddingNewParamsInputArrayOfStrings([]string{"\"Your text string goes here\""})),
Model: openai.F(openai.EmbeddingNewParamsModel(azureOpenAI.Embeddings.Model)),
})
require.NoError(t, err)
require.NotEmpty(t, resp.Data[0].Embedding)
}
func TestClient_GetEmbeddings_embeddingsFormat(t *testing.T) {
testFn := func(t *testing.T, epm endpointWithModel, dimension int64) {
client := newStainlessTestClient(t, epm.Endpoint)
arg := openai.EmbeddingNewParams{
Input: openai.F[openai.EmbeddingNewParamsInputUnion](openai.EmbeddingNewParamsInputArrayOfStrings([]string{"hello"})),
EncodingFormat: openai.F(openai.EmbeddingNewParamsEncodingFormatBase64),
Model: openai.F(openai.EmbeddingNewParamsModel(epm.Model)),
}
if dimension > 0 {
arg.Dimensions = openai.Int(dimension)
}
base64Resp, err := client.Embeddings.New(context.Background(), arg)
require.NoError(t, err)
require.NotEmpty(t, base64Resp.Data)
require.Empty(t, base64Resp.Data[0].Embedding)
embeddings := deserializeBase64Embeddings(t, base64Resp.Data[0].JSON.Embedding.Raw())
// sanity checks - we deserialized everything and didn't create anything impossible.
for _, v := range embeddings {
require.True(t, v <= 1.0 && v >= -1.0)
}
arg2 := openai.EmbeddingNewParams{
Input: openai.F[openai.EmbeddingNewParamsInputUnion](openai.EmbeddingNewParamsInputArrayOfStrings([]string{"hello"})),
Model: openai.F(openai.EmbeddingNewParamsModel(epm.Model)),
}
if dimension > 0 {
arg2.Dimensions = openai.Int(dimension)
}
floatResp, err := client.Embeddings.New(context.Background(), arg2)
require.NoError(t, err)
require.NotEmpty(t, floatResp.Data)
require.NotEmpty(t, floatResp.Data[0].Embedding)
require.Equal(t, len(floatResp.Data[0].Embedding), len(embeddings))
// This works "most of the time" but it's non-deterministic since two separate calls don't always
// produce the exact same data. Leaving it here in case you want to do some rough checks later.
// require.Equal(t, floatResp.Data[0].Embedding[0:dimension], base64Resp.Data[0].Embedding[0:dimension])
}
for _, dim := range []int64{0, 1, 10, 100} {
t.Run(fmt.Sprintf("AzureOpenAI(dimensions=%d)", dim), func(t *testing.T) {
testFn(t, azureOpenAI.TextEmbedding3Small, dim)
})
}
}
func deserializeBase64Embeddings(t *testing.T, rawJSON string) []float32 {
var base64Text *string
err := json.Unmarshal([]byte(rawJSON), &base64Text)
require.NoError(t, err)
destBytes, err := base64.StdEncoding.DecodeString(*base64Text)
require.NoError(t, err)
floats := make([]float32, len(destBytes)/4) // it's a binary serialization of float32s.
var reader = bytes.NewReader(destBytes)
err = binary.Read(reader, binary.LittleEndian, floats)
require.NoError(t, err)
return floats
}
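The helper above assumes the base64 payload is a packed array of little-endian float32s. A standalone round-trip sketch of that wire format, using only the standard library (the `encodeFloats`/`decodeFloats` names are illustrative, not part of any SDK):

```go
package main

import (
	"bytes"
	"encoding/base64"
	"encoding/binary"
	"fmt"
)

// encodeFloats packs float32s as little-endian bytes and base64-encodes them,
// mirroring the format that deserializeBase64Embeddings expects to decode.
func encodeFloats(floats []float32) (string, error) {
	buf := &bytes.Buffer{}
	if err := binary.Write(buf, binary.LittleEndian, floats); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}

// decodeFloats reverses encodeFloats: base64 -> raw bytes -> []float32.
func decodeFloats(encoded string) ([]float32, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return nil, err
	}
	floats := make([]float32, len(raw)/4) // 4 bytes per float32
	if err := binary.Read(bytes.NewReader(raw), binary.LittleEndian, floats); err != nil {
		return nil, err
	}
	return floats, nil
}

func main() {
	in := []float32{0.25, -0.5, 1.0}
	enc, err := encodeFloats(in)
	if err != nil {
		panic(err)
	}
	out, err := decodeFloats(enc)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // [0.25 -0.5 1]
}
```

Round-tripping like this is a cheap way to validate the decoder without hitting the service.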

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"encoding/json"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/internal/recording"
"github.com/openai/openai-go"
"github.com/openai/openai-go/shared"
"github.com/stretchr/testify/require"
)
var weatherFuncTool = openai.F([]openai.ChatCompletionToolParam{{
Type: openai.F(openai.ChatCompletionToolTypeFunction),
Function: openai.F(shared.FunctionDefinitionParam{
Name: openai.F("get_current_weather"),
Description: openai.F("Get the current weather in a given location"),
Parameters: openai.F(openai.FunctionParameters{
"required": []string{"location"},
"type": "object",
"properties": map[string]map[string]any{
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {
"type": "string",
"enum": []string{"celsius", "fahrenheit"},
},
},
}),
}),
}})
func TestGetChatCompletions_usingFunctions(t *testing.T) {
if recording.GetRecordMode() == recording.PlaybackMode {
t.Skip("https://github.com/Azure/azure-sdk-for-go/issues/22869")
}
// https://platform.openai.com/docs/guides/gpt/function-calling
testFn := func(t *testing.T, chatClient *openai.Client, deploymentName string, toolChoice openai.ChatCompletionToolChoiceOptionUnionParam) {
body := openai.ChatCompletionNewParams{
Model: openai.F(openai.ChatModel(deploymentName)),
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.AssistantMessage("What's the weather like in Boston, MA, in celsius?"),
}),
Tools: weatherFuncTool,
ToolChoice: openai.F(toolChoice),
Temperature: openai.Float(0.0),
}
resp, err := chatClient.Chat.Completions.New(context.Background(), body)
require.NoError(t, err)
funcCall := resp.Choices[0].Message.ToolCalls[0]
require.Equal(t, "get_current_weather", funcCall.Function.Name)
type location struct {
Location string `json:"location"`
Unit string `json:"unit"`
}
var funcParams *location
err = json.Unmarshal([]byte(funcCall.Function.Arguments), &funcParams)
require.NoError(t, err)
require.Equal(t, location{Location: "Boston, MA", Unit: "celsius"}, *funcParams)
}
chatClient := newStainlessTestClient(t, azureOpenAI.ChatCompletions.Endpoint)
testData := []struct {
Model string
ToolChoice openai.ChatCompletionToolChoiceOptionUnionParam
}{
// all of these variants use the tool provided - auto just also works since we did provide
// a tool reference and ask a question to use it.
{Model: azureOpenAI.ChatCompletions.Model, ToolChoice: nil},
{Model: azureOpenAI.ChatCompletions.Model, ToolChoice: openai.ChatCompletionToolChoiceOptionStringAuto},
{Model: azureOpenAI.ChatCompletions.Model, ToolChoice: openai.ChatCompletionNamedToolChoiceParam{
Type: openai.F(openai.ChatCompletionNamedToolChoiceTypeFunction),
Function: openai.F(openai.ChatCompletionNamedToolChoiceFunctionParam{
Name: openai.String("get_current_weather"),
}),
}},
}
for _, td := range testData {
testFn(t, chatClient, td.Model, td.ToolChoice)
}
}
func TestGetChatCompletions_usingFunctions_streaming(t *testing.T) {
body := openai.ChatCompletionNewParams{
Model: openai.F(openai.ChatModel(azureOpenAI.ChatCompletions.Model)),
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.AssistantMessage("What's the weather like in Boston, MA, in celsius?"),
}),
Tools: weatherFuncTool,
Temperature: openai.Float(0.0),
}
chatClient := newStainlessTestClient(t, azureOpenAI.ChatCompletions.Endpoint)
stream := chatClient.Chat.Completions.NewStreaming(context.Background(), body)
defer func() {
err := stream.Close()
require.NoError(t, err)
}()
// these results are way trickier than they should be, but we have to accumulate across
// multiple fields to get a full result.
funcCall := &struct {
Arguments *string
Name *string
}{
Arguments: to.Ptr(""),
Name: to.Ptr(""),
}
for stream.Next() {
chunk := stream.Current()
if len(chunk.Choices) == 0 {
azureChunk := azopenaiextensions.ChatCompletionChunk(chunk)
promptFilterResults, err := azureChunk.PromptFilterResults()
require.NoError(t, err)
// there are prompt filter results.
require.NotEmpty(t, promptFilterResults)
continue
}
if chunk.Choices[0].FinishReason != "" {
require.Equal(t, openai.ChatCompletionChunkChoicesFinishReasonToolCalls, chunk.Choices[0].FinishReason)
continue
}
functionToolCall := chunk.Choices[0].Delta.ToolCalls[0]
require.NotEmpty(t, functionToolCall.Function)
*funcCall.Arguments += functionToolCall.Function.Arguments
*funcCall.Name += functionToolCall.Function.Name
}
require.NoError(t, stream.Err())
require.Equal(t, "get_current_weather", *funcCall.Name)
type location struct {
Location string `json:"location"`
Unit string `json:"unit"`
}
var funcParams *location
err := json.Unmarshal([]byte(*funcCall.Arguments), &funcParams)
require.NoError(t, err)
require.Equal(t, location{Location: "Boston, MA", Unit: "celsius"}, *funcParams)
}
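As the comment in the streaming test notes, tool-call arguments arrive as string fragments spread over many deltas; no individual fragment is valid JSON, so everything must be concatenated before unmarshaling. A self-contained sketch of that join-then-parse step (the fragment values and the `joinAndParse` helper are illustrative, not actual service output):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// joinAndParse concatenates streamed argument fragments into one JSON document
// and decodes the fields. Parsing any single fragment on its own would fail.
func joinAndParse(fragments []string) (location, unit string, err error) {
	var args struct {
		Location string `json:"location"`
		Unit     string `json:"unit"`
	}
	if err = json.Unmarshal([]byte(strings.Join(fragments, "")), &args); err != nil {
		return "", "", err
	}
	return args.Location, args.Unit, nil
}

func main() {
	// fragments as they might arrive across tool-call deltas (illustrative split)
	loc, unit, err := joinAndParse([]string{`{"location":`, `"Boston, MA",`, `"unit":"celsius"}`})
	if err != nil {
		panic(err)
	}
	fmt.Println(loc, unit) // Boston, MA celsius
}
```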

//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"net/http"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/openai/openai-go"
"github.com/stretchr/testify/require"
)
// RAI == "responsible AI". This part of the API provides content filtering and
// classification of the failures into categories like Hate, Violence, etc...
func TestClient_GetCompletions_AzureOpenAI_ContentFilter_Response(t *testing.T) {
// Scenario: Your API call asks for multiple responses (N>1) and at least 1 of the responses is filtered
// https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/cognitive-services/openai/concepts/content-filter.md#scenario-your-api-call-asks-for-multiple-responses-n1-and-at-least-1-of-the-responses-is-filtered
client := newStainlessTestClient(t, azureOpenAI.Completions.Endpoint)
arg := openai.CompletionNewParams{
Model: openai.F(openai.CompletionNewParamsModel(azureOpenAI.Completions.Model)),
Temperature: openai.Float(0.0),
MaxTokens: openai.Int(2048 - 127),
Prompt: openai.F[openai.CompletionNewParamsPromptUnion](
openai.CompletionNewParamsPromptArrayOfStrings([]string{"How do I rob a bank with violence?"}),
),
}
resp, err := client.Completions.New(context.Background(), arg)
require.Empty(t, resp)
requireContentFilterError(t, err)
}
func requireContentFilterError(t *testing.T, err error) {
// In this scenario the payload for the error contains content filtering information.
// This happens if Azure OpenAI outright rejects your request (rather than pieces of it).
// [azopenaiextensions.ExtractContentFilterError] parses out that error, which also wraps the openai.Error.
var contentErr *azopenaiextensions.ContentFilterError
require.True(t, azopenaiextensions.ExtractContentFilterError(err, &contentErr))
// Ensure that our error wraps the openai.Error. This makes it simpler for callers to do generic
// error handling using the error type they already expect (openai.Error) while still extracting any
// data they need.
var openaiErr *openai.Error
require.ErrorAs(t, err, &openaiErr)
require.Equal(t, http.StatusBadRequest, openaiErr.StatusCode)
require.Contains(t, openaiErr.Error(), "The response was filtered due to the prompt triggering")
require.True(t, *contentErr.Violence.Filtered)
require.NotEqual(t, azopenaiextensions.ContentFilterSeveritySafe, *contentErr.Violence.Severity)
}
func TestClient_GetChatCompletions_AzureOpenAI_ContentFilter_WithResponse(t *testing.T) {
client := newStainlessTestClient(t, azureOpenAI.ChatCompletionsRAI.Endpoint)
resp, err := client.Chat.Completions.New(context.Background(), openai.ChatCompletionNewParams{
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.UserMessage("How do I cook a bell pepper?"),
}),
MaxTokens: openai.Int(2048 - 127),
Temperature: openai.Float(0.0),
Model: openai.F(openai.ChatModel(azureOpenAI.ChatCompletionsRAI.Model)),
})
customRequireNoError(t, err, true)
contentFilterResults, err := azopenaiextensions.ChatCompletionChoice(resp.Choices[0]).ContentFilterResults()
require.NoError(t, err)
require.Equal(t, safeContentFilter, contentFilterResults)
}
var safeContentFilter = &azopenaiextensions.ContentFilterResultsForChoice{
Hate: &azopenaiextensions.ContentFilterResult{Filtered: to.Ptr(false), Severity: to.Ptr(azopenaiextensions.ContentFilterSeveritySafe)},
SelfHarm: &azopenaiextensions.ContentFilterResult{Filtered: to.Ptr(false), Severity: to.Ptr(azopenaiextensions.ContentFilterSeveritySafe)},
Sexual: &azopenaiextensions.ContentFilterResult{Filtered: to.Ptr(false), Severity: to.Ptr(azopenaiextensions.ContentFilterSeveritySafe)},
Violence: &azopenaiextensions.ContentFilterResult{Filtered: to.Ptr(false), Severity: to.Ptr(azopenaiextensions.ContentFilterSeveritySafe)},
}
var safeContentFilterResultDetailsForPrompt = &azopenaiextensions.ContentFilterResultDetailsForPrompt{
Hate: &azopenaiextensions.ContentFilterResult{Filtered: to.Ptr(false), Severity: to.Ptr(azopenaiextensions.ContentFilterSeveritySafe)},
SelfHarm: &azopenaiextensions.ContentFilterResult{Filtered: to.Ptr(false), Severity: to.Ptr(azopenaiextensions.ContentFilterSeveritySafe)},
Sexual: &azopenaiextensions.ContentFilterResult{Filtered: to.Ptr(false), Severity: to.Ptr(azopenaiextensions.ContentFilterSeveritySafe)},
Violence: &azopenaiextensions.ContentFilterResult{Filtered: to.Ptr(false), Severity: to.Ptr(azopenaiextensions.ContentFilterSeveritySafe)},
}

// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"errors"
"fmt"
"net/http"
"os"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/internal/recording"
"github.com/Azure/azure-sdk-for-go/sdk/internal/test/credential"
"github.com/openai/openai-go"
"github.com/openai/openai-go/azure"
"github.com/stretchr/testify/require"
)
const apiVersion = "2024-07-01-preview"
type endpoint struct {
URL string
APIKey string
Azure bool
}
type testVars struct {
Assistants endpointWithModel
ChatCompletions endpointWithModel
ChatCompletionsLegacyFunctions endpointWithModel
ChatCompletionsOYD endpointWithModel // azure only
ChatCompletionsRAI endpointWithModel // azure only
ChatCompletionsWithJSONResponseFormat endpointWithModel
Cognitive azopenaiextensions.AzureSearchChatExtensionConfiguration
Completions endpointWithModel
DallE endpointWithModel
Embeddings endpointWithModel
Speech endpointWithModel
TextEmbedding3Small endpointWithModel
Vision endpointWithModel
Whisper endpointWithModel
}
type endpointWithModel struct {
Endpoint endpoint
Model string
}
// getEnvVariable is recording.GetEnvVariable but it panics if the
// value isn't found, rather than falling back to the playback value.
func getEnvVariable(varName string) string {
val := os.Getenv(varName)
if val == "" {
panic(fmt.Sprintf("Missing required environment variable %s", varName))
}
return val
}
var azureOpenAI = func() testVars {
servers := struct {
USEast endpoint
USNorthCentral endpoint
USEast2 endpoint
SWECentral endpoint
OpenAI endpoint
}{
USEast: endpoint{
URL: getEnvVariable("AOAI_ENDPOINT_USEAST"),
APIKey: getEnvVariable("AOAI_ENDPOINT_USEAST_API_KEY"),
Azure: true,
},
USEast2: endpoint{
URL: getEnvVariable("AOAI_ENDPOINT_USEAST2"),
APIKey: getEnvVariable("AOAI_ENDPOINT_USEAST2_API_KEY"),
Azure: true,
},
USNorthCentral: endpoint{
URL: getEnvVariable("AOAI_ENDPOINT_USNORTHCENTRAL"),
APIKey: getEnvVariable("AOAI_ENDPOINT_USNORTHCENTRAL_API_KEY"),
Azure: true,
},
SWECentral: endpoint{
URL: getEnvVariable("AOAI_ENDPOINT_SWECENTRAL"),
APIKey: getEnvVariable("AOAI_ENDPOINT_SWECENTRAL_API_KEY"),
Azure: true,
},
}
newTestVarsFn := func() testVars {
return testVars{
Assistants: endpointWithModel{
Endpoint: servers.SWECentral,
Model: "gpt-4-1106-preview",
},
ChatCompletions: endpointWithModel{
Endpoint: servers.USEast,
Model: "gpt-4-0613",
},
ChatCompletionsLegacyFunctions: endpointWithModel{
Endpoint: servers.USEast,
Model: "gpt-4-0613",
},
ChatCompletionsOYD: endpointWithModel{
Endpoint: servers.USEast,
Model: "gpt-4-0613",
},
ChatCompletionsRAI: endpointWithModel{
Endpoint: servers.USEast,
Model: "gpt-4-0613",
},
ChatCompletionsWithJSONResponseFormat: endpointWithModel{
Endpoint: servers.SWECentral,
Model: "gpt-4-1106-preview",
},
Completions: endpointWithModel{
Endpoint: servers.USEast,
Model: "gpt-35-turbo-instruct",
},
DallE: endpointWithModel{
Endpoint: servers.SWECentral,
Model: "dall-e-3",
},
Embeddings: endpointWithModel{
Endpoint: servers.USEast,
Model: "text-embedding-ada-002",
},
Speech: endpointWithModel{
Endpoint: servers.SWECentral,
Model: "tts",
},
TextEmbedding3Small: endpointWithModel{
Endpoint: servers.USEast,
Model: "text-embedding-3-small",
},
Vision: endpointWithModel{
Endpoint: servers.SWECentral,
Model: "gpt-4-vision-preview",
},
Whisper: endpointWithModel{
Endpoint: servers.USNorthCentral,
Model: "whisper",
},
Cognitive: azopenaiextensions.AzureSearchChatExtensionConfiguration{
Parameters: &azopenaiextensions.AzureSearchChatExtensionParameters{
Endpoint: to.Ptr(getEnvVariable("COGNITIVE_SEARCH_API_ENDPOINT")),
IndexName: to.Ptr(getEnvVariable("COGNITIVE_SEARCH_API_INDEX")),
Authentication: &azopenaiextensions.OnYourDataSystemAssignedManagedIdentityAuthenticationOptions{},
},
},
}
}
azureTestVars := newTestVarsFn()
if recording.GetRecordMode() == recording.LiveMode {
// these are for the examples - we don't want to mention regions or anything in them so the
// env variables have a more friendly naming scheme.
remaps := map[string]endpointWithModel{
"CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS": azureTestVars.ChatCompletionsLegacyFunctions,
"CHAT_COMPLETIONS_RAI": azureTestVars.ChatCompletionsRAI,
"CHAT_COMPLETIONS": azureTestVars.ChatCompletions,
"COMPLETIONS": azureTestVars.Completions,
"DALLE": azureTestVars.DallE,
"EMBEDDINGS": azureTestVars.Embeddings,
// these resources are oversubscribed and occasionally fail in live testing.
// "VISION": azureTestVars.Vision,
// "WHISPER": azureTestVars.Whisper,
}
for area, epm := range remaps {
os.Setenv("AOAI_"+area+"_ENDPOINT", epm.Endpoint.URL)
os.Setenv("AOAI_"+area+"_API_KEY", epm.Endpoint.APIKey)
os.Setenv("AOAI_"+area+"_MODEL", epm.Model)
}
}
return azureTestVars
}()
func newStainlessTestClient(t *testing.T, ep endpoint) *openai.Client {
tokenCredential, err := credential.New(nil)
require.NoError(t, err)
return openai.NewClient(
azure.WithEndpoint(ep.URL, apiVersion),
azure.WithTokenCredential(tokenCredential),
)
}
func newStainlessChatCompletionService(t *testing.T, ep endpoint) *openai.ChatCompletionService {
tokenCredential, err := credential.New(nil)
require.NoError(t, err)
return openai.NewChatCompletionService(azure.WithEndpoint(ep.URL, apiVersion),
azure.WithTokenCredential(tokenCredential),
)
}
func skipNowIfThrottled(t *testing.T, err error) {
if respErr := (*azcore.ResponseError)(nil); errors.As(err, &respErr) && respErr.StatusCode == http.StatusTooManyRequests {
t.Skipf("OpenAI resource overloaded, skipping this test")
}
}
// customRequireNoError checks the error but allows throttling errors to account for resources that are
// constrained.
func customRequireNoError(t *testing.T, err error, throttlingAllowed bool) {
if err == nil {
return
}
if throttlingAllowed {
if respErr := (*openai.Error)(nil); errors.As(err, &respErr) && respErr.StatusCode == http.StatusTooManyRequests {
t.Skip("Skipping test because of throttling (http.StatusTooManyRequests)")
return
}
if errors.Is(err, context.DeadlineExceeded) {
t.Skip("Skipping test because of throttling (DeadlineExceeded)")
return
}
}
require.NoError(t, err)
}

//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
package azopenaiextensions
// AzureChatExtensionRetrieveDocumentFilterReason - The reason for filtering the retrieved document.
type AzureChatExtensionRetrieveDocumentFilterReason string
const (
// AzureChatExtensionRetrieveDocumentFilterReasonRerank - The document is not filtered by the original search score threshold,
// but is filtered by the rerank score and the `top_n_documents` configuration.
AzureChatExtensionRetrieveDocumentFilterReasonRerank AzureChatExtensionRetrieveDocumentFilterReason = "rerank"
// AzureChatExtensionRetrieveDocumentFilterReasonScore - The document is filtered by the original search score threshold defined
// by the `strictness` configuration.
AzureChatExtensionRetrieveDocumentFilterReasonScore AzureChatExtensionRetrieveDocumentFilterReason = "score"
)
// PossibleAzureChatExtensionRetrieveDocumentFilterReasonValues returns the possible values for the AzureChatExtensionRetrieveDocumentFilterReason const type.
func PossibleAzureChatExtensionRetrieveDocumentFilterReasonValues() []AzureChatExtensionRetrieveDocumentFilterReason {
return []AzureChatExtensionRetrieveDocumentFilterReason{
AzureChatExtensionRetrieveDocumentFilterReasonRerank,
AzureChatExtensionRetrieveDocumentFilterReasonScore,
}
}
// AzureChatExtensionType - A representation of configuration data for a single Azure OpenAI chat extension. This will be
// used by a chat completions request that should use Azure OpenAI chat extensions to augment the response
// behavior. The use of this configuration is compatible only with Azure OpenAI.
type AzureChatExtensionType string
const (
// AzureChatExtensionTypeAzureCosmosDB - Represents the use of Azure Cosmos DB as an Azure OpenAI chat extension.
AzureChatExtensionTypeAzureCosmosDB AzureChatExtensionType = "azure_cosmos_db"
// AzureChatExtensionTypeAzureMachineLearningIndex - Represents the use of Azure Machine Learning index as an Azure OpenAI
// chat extension.
AzureChatExtensionTypeAzureMachineLearningIndex AzureChatExtensionType = "azure_ml_index"
// AzureChatExtensionTypeAzureSearch - Represents the use of Azure AI Search as an Azure OpenAI chat extension.
AzureChatExtensionTypeAzureSearch AzureChatExtensionType = "azure_search"
// AzureChatExtensionTypeElasticsearch - Represents the use of Elasticsearch® index as an Azure OpenAI chat extension.
AzureChatExtensionTypeElasticsearch AzureChatExtensionType = "elasticsearch"
// AzureChatExtensionTypePinecone - Represents the use of Pinecone index as an Azure OpenAI chat extension.
AzureChatExtensionTypePinecone AzureChatExtensionType = "pinecone"
)
// PossibleAzureChatExtensionTypeValues returns the possible values for the AzureChatExtensionType const type.
func PossibleAzureChatExtensionTypeValues() []AzureChatExtensionType {
return []AzureChatExtensionType{
AzureChatExtensionTypeAzureCosmosDB,
AzureChatExtensionTypeAzureMachineLearningIndex,
AzureChatExtensionTypeAzureSearch,
AzureChatExtensionTypeElasticsearch,
AzureChatExtensionTypePinecone,
}
}
// AzureSearchQueryType - The type of Azure Search retrieval query that should be executed when using it as an Azure OpenAI
// chat extension.
type AzureSearchQueryType string
const (
// AzureSearchQueryTypeSemantic - Represents the semantic query parser for advanced semantic modeling.
AzureSearchQueryTypeSemantic AzureSearchQueryType = "semantic"
// AzureSearchQueryTypeSimple - Represents the default, simple query parser.
AzureSearchQueryTypeSimple AzureSearchQueryType = "simple"
// AzureSearchQueryTypeVector - Represents vector search over computed data.
AzureSearchQueryTypeVector AzureSearchQueryType = "vector"
// AzureSearchQueryTypeVectorSemanticHybrid - Represents a combination of semantic search and vector data querying.
AzureSearchQueryTypeVectorSemanticHybrid AzureSearchQueryType = "vector_semantic_hybrid"
// AzureSearchQueryTypeVectorSimpleHybrid - Represents a combination of the simple query strategy with vector data.
AzureSearchQueryTypeVectorSimpleHybrid AzureSearchQueryType = "vector_simple_hybrid"
)
// PossibleAzureSearchQueryTypeValues returns the possible values for the AzureSearchQueryType const type.
func PossibleAzureSearchQueryTypeValues() []AzureSearchQueryType {
return []AzureSearchQueryType{
AzureSearchQueryTypeSemantic,
AzureSearchQueryTypeSimple,
AzureSearchQueryTypeVector,
AzureSearchQueryTypeVectorSemanticHybrid,
AzureSearchQueryTypeVectorSimpleHybrid,
}
}
// ContentFilterSeverity - Ratings for the intensity and risk level of harmful content.
type ContentFilterSeverity string
const (
// ContentFilterSeverityHigh - Content that displays explicit and severe harmful instructions, actions,
// damage, or abuse; includes endorsement, glorification, or promotion of severe
// harmful acts, extreme or illegal forms of harm, radicalization, or non-consensual
// power exchange or abuse.
ContentFilterSeverityHigh ContentFilterSeverity = "high"
// ContentFilterSeverityLow - Content that expresses prejudiced, judgmental, or opinionated views, includes offensive
// use of language, stereotyping, use cases exploring a fictional world (for example, gaming,
// literature) and depictions at low intensity.
ContentFilterSeverityLow ContentFilterSeverity = "low"
// ContentFilterSeverityMedium - Content that uses offensive, insulting, mocking, intimidating, or demeaning language
// towards specific identity groups, includes depictions of seeking and executing harmful
// instructions, fantasies, glorification, promotion of harm at medium intensity.
ContentFilterSeverityMedium ContentFilterSeverity = "medium"
// ContentFilterSeveritySafe - Content may be related to violence, self-harm, sexual, or hate categories but the terms
// are used in general, journalistic, scientific, medical, and similar professional contexts,
// which are appropriate for most audiences.
ContentFilterSeveritySafe ContentFilterSeverity = "safe"
)
// PossibleContentFilterSeverityValues returns the possible values for the ContentFilterSeverity const type.
func PossibleContentFilterSeverityValues() []ContentFilterSeverity {
return []ContentFilterSeverity{
ContentFilterSeverityHigh,
ContentFilterSeverityLow,
ContentFilterSeverityMedium,
ContentFilterSeveritySafe,
}
}
// OnYourDataAuthenticationType - The authentication types supported with Azure OpenAI On Your Data.
type OnYourDataAuthenticationType string
const (
// OnYourDataAuthenticationTypeAPIKey - Authentication via API key.
OnYourDataAuthenticationTypeAPIKey OnYourDataAuthenticationType = "api_key"
// OnYourDataAuthenticationTypeAccessToken - Authentication via access token.
OnYourDataAuthenticationTypeAccessToken OnYourDataAuthenticationType = "access_token"
// OnYourDataAuthenticationTypeConnectionString - Authentication via connection string.
OnYourDataAuthenticationTypeConnectionString OnYourDataAuthenticationType = "connection_string"
// OnYourDataAuthenticationTypeEncodedAPIKey - Authentication via encoded API key.
OnYourDataAuthenticationTypeEncodedAPIKey OnYourDataAuthenticationType = "encoded_api_key"
// OnYourDataAuthenticationTypeKeyAndKeyID - Authentication via key and key ID pair.
OnYourDataAuthenticationTypeKeyAndKeyID OnYourDataAuthenticationType = "key_and_key_id"
// OnYourDataAuthenticationTypeSystemAssignedManagedIdentity - Authentication via system-assigned managed identity.
OnYourDataAuthenticationTypeSystemAssignedManagedIdentity OnYourDataAuthenticationType = "system_assigned_managed_identity"
// OnYourDataAuthenticationTypeUserAssignedManagedIdentity - Authentication via user-assigned managed identity.
OnYourDataAuthenticationTypeUserAssignedManagedIdentity OnYourDataAuthenticationType = "user_assigned_managed_identity"
)
// PossibleOnYourDataAuthenticationTypeValues returns the possible values for the OnYourDataAuthenticationType const type.
func PossibleOnYourDataAuthenticationTypeValues() []OnYourDataAuthenticationType {
return []OnYourDataAuthenticationType{
OnYourDataAuthenticationTypeAPIKey,
OnYourDataAuthenticationTypeAccessToken,
OnYourDataAuthenticationTypeConnectionString,
OnYourDataAuthenticationTypeEncodedAPIKey,
OnYourDataAuthenticationTypeKeyAndKeyID,
OnYourDataAuthenticationTypeSystemAssignedManagedIdentity,
OnYourDataAuthenticationTypeUserAssignedManagedIdentity,
}
}
// OnYourDataContextProperty - The context property.
type OnYourDataContextProperty string
const (
// OnYourDataContextPropertyAllRetrievedDocuments - The `all_retrieved_documents` property.
OnYourDataContextPropertyAllRetrievedDocuments OnYourDataContextProperty = "all_retrieved_documents"
// OnYourDataContextPropertyCitations - The `citations` property.
OnYourDataContextPropertyCitations OnYourDataContextProperty = "citations"
// OnYourDataContextPropertyIntent - The `intent` property.
OnYourDataContextPropertyIntent OnYourDataContextProperty = "intent"
)
// PossibleOnYourDataContextPropertyValues returns the possible values for the OnYourDataContextProperty const type.
func PossibleOnYourDataContextPropertyValues() []OnYourDataContextProperty {
return []OnYourDataContextProperty{
OnYourDataContextPropertyAllRetrievedDocuments,
OnYourDataContextPropertyCitations,
OnYourDataContextPropertyIntent,
}
}
// OnYourDataVectorSearchAuthenticationType - The authentication types supported with Azure OpenAI On Your Data vector search.
type OnYourDataVectorSearchAuthenticationType string
const (
// OnYourDataVectorSearchAuthenticationTypeAPIKey - Authentication via API key.
OnYourDataVectorSearchAuthenticationTypeAPIKey OnYourDataVectorSearchAuthenticationType = "api_key"
// OnYourDataVectorSearchAuthenticationTypeAccessToken - Authentication via access token.
OnYourDataVectorSearchAuthenticationTypeAccessToken OnYourDataVectorSearchAuthenticationType = "access_token"
)
// PossibleOnYourDataVectorSearchAuthenticationTypeValues returns the possible values for the OnYourDataVectorSearchAuthenticationType const type.
func PossibleOnYourDataVectorSearchAuthenticationTypeValues() []OnYourDataVectorSearchAuthenticationType {
return []OnYourDataVectorSearchAuthenticationType{
OnYourDataVectorSearchAuthenticationTypeAPIKey,
OnYourDataVectorSearchAuthenticationTypeAccessToken,
}
}
// OnYourDataVectorizationSourceType - Represents the available sources Azure OpenAI On Your Data can use to configure vectorization
// of data for use with vector search.
type OnYourDataVectorizationSourceType string
const (
// OnYourDataVectorizationSourceTypeDeploymentName - Represents an Ada model deployment name to use. This model deployment
// must be in the same Azure OpenAI resource, but
// On Your Data will use this model deployment via an internal call rather than a public one, which enables vector
// search even in private networks.
OnYourDataVectorizationSourceTypeDeploymentName OnYourDataVectorizationSourceType = "deployment_name"
// OnYourDataVectorizationSourceTypeEndpoint - Represents vectorization performed by public service calls to an Azure OpenAI
// embedding model.
OnYourDataVectorizationSourceTypeEndpoint OnYourDataVectorizationSourceType = "endpoint"
// OnYourDataVectorizationSourceTypeModelID - Represents a specific embedding model ID as defined in the search service.
// Currently only supported by Elasticsearch®.
OnYourDataVectorizationSourceTypeModelID OnYourDataVectorizationSourceType = "model_id"
)
// PossibleOnYourDataVectorizationSourceTypeValues returns the possible values for the OnYourDataVectorizationSourceType const type.
func PossibleOnYourDataVectorizationSourceTypeValues() []OnYourDataVectorizationSourceType {
return []OnYourDataVectorizationSourceType{
OnYourDataVectorizationSourceTypeDeploymentName,
OnYourDataVectorizationSourceTypeEndpoint,
OnYourDataVectorizationSourceTypeModelID,
}
}
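Each enum above ships with a `Possible*Values` helper, which is handy for validating raw strings coming back from the service. A minimal, self-contained sketch of that pattern, using a trimmed copy of the generated `AzureSearchQueryType` names rather than importing the module:

```go
package main

import "fmt"

// AzureSearchQueryType mirrors the generated string-enum pattern: a named
// string type plus constants plus a Possible...Values helper.
type AzureSearchQueryType string

const (
	AzureSearchQueryTypeSemantic AzureSearchQueryType = "semantic"
	AzureSearchQueryTypeSimple   AzureSearchQueryType = "simple"
	AzureSearchQueryTypeVector   AzureSearchQueryType = "vector"
)

func PossibleAzureSearchQueryTypeValues() []AzureSearchQueryType {
	return []AzureSearchQueryType{
		AzureSearchQueryTypeSemantic,
		AzureSearchQueryTypeSimple,
		AzureSearchQueryTypeVector,
	}
}

// isKnownQueryType reports whether a raw service string matches a known value.
func isKnownQueryType(s string) bool {
	for _, v := range PossibleAzureSearchQueryTypeValues() {
		if string(v) == s {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isKnownQueryType("vector"))       // true
	fmt.Println(isKnownQueryType("fuzzy_search")) // false
}
```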


@ -0,0 +1,49 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"context"
"net/http"
"testing"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/internal/recording"
"github.com/openai/openai-go"
"github.com/stretchr/testify/require"
)
func TestImageGeneration_AzureOpenAI(t *testing.T) {
if recording.GetRecordMode() == recording.PlaybackMode {
t.Skipf("Skipping image generation test in playback mode")
}
client := newStainlessTestClient(t, azureOpenAI.DallE.Endpoint)
// testImageGeneration(t, client, azureOpenAI.DallE.Model, azopenaiextensions.ImageGenerationResponseFormatURL, true)
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
defer cancel()
resp, err := client.Images.Generate(ctx, openai.ImageGenerateParams{
// saw this prompt in a thread about trying to _prevent_ DALL-E 3 from rewriting your
// prompt. When the prompt is rewritten, the revised text shows up in the
// RevisedPrompt field of the response.
Prompt: openai.String("acrylic painting of a sunflower with bees"),
Size: openai.F(openai.ImageGenerateParamsSize1024x1792),
ResponseFormat: openai.F(openai.ImageGenerateParamsResponseFormatURL),
Model: openai.F(openai.ImageModel(azureOpenAI.DallE.Model)),
})
customRequireNoError(t, err, true)
if recording.GetRecordMode() == recording.LiveMode {
headResp, err := http.DefaultClient.Head(resp.Data[0].URL)
require.NoError(t, err)
headResp.Body.Close()
require.Equal(t, http.StatusOK, headResp.StatusCode)
require.NotEmpty(t, resp.Data[0].RevisedPrompt)
}
}


@ -0,0 +1,102 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions
import (
"encoding/json"
"errors"
"net/http"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
"github.com/openai/openai-go"
)
// ContentFilterError can be extracted from an openai.Error using [ExtractContentFilterError].
type ContentFilterError struct {
OpenAIError *openai.Error
ContentFilterResultDetailsForPrompt
}
func (c *ContentFilterError) Error() string {
return c.OpenAIError.Error()
}
// Unwrap returns the inner error for this error.
func (c *ContentFilterError) Unwrap() error {
return c.OpenAIError
}
// ExtractContentFilterError checks the error to see if it contains content filtering
// information. If so it'll assign the resulting information to *contentFilterErr,
// similar to errors.As().
//
// Prompt filtering information will be present if you see an error message similar to
// this: 'The response was filtered due to the prompt triggering'.
// (NOTE: error message is for illustrative purposes, and can change).
//
// Usage looks like this:
//
// resp, err := chatCompletionsService.New(args)
//
// var contentFilterErr *azopenaiextensions.ContentFilterError
//
// if azopenaiextensions.ExtractContentFilterError(err, &contentFilterErr) {
// // contentFilterErr.Hate, contentFilterErr.SelfHarm, contentFilterErr.Sexual or contentFilterErr.Violence
// // contain information about why content was flagged.
// }
func ExtractContentFilterError(err error, contentFilterErr **ContentFilterError) bool {
// This is for a very specific case - when Azure rejects a request, outright, because
// it violates a content filtering rule. In that case you get a StatusBadRequest, and the
// underlying response contains a payload with the content filtering details.
var openaiErr *openai.Error
if !errors.As(err, &openaiErr) {
return false
}
if openaiErr.Response == nil || openaiErr.Response.StatusCode != http.StatusBadRequest {
return false
}
body, origErr := runtime.Payload(openaiErr.Response)
if origErr != nil {
return false
}
var envelope *struct {
Error struct {
Param string `json:"param"`
Message string `json:"message"`
Code string `json:"code"`
Status int `json:"status"`
InnerError struct {
Code string `json:"code"`
ContentFilterResults ContentFilterResultDetailsForPrompt `json:"content_filter_result"`
} `json:"innererror"`
} `json:"error"`
}
if err := json.Unmarshal(body, &envelope); err != nil || envelope == nil {
return false
}
if envelope.Error.Code != "content_filter" {
return false
}
*contentFilterErr = &ContentFilterError{
OpenAIError: openaiErr,
ContentFilterResultDetailsForPrompt: envelope.Error.InnerError.ContentFilterResults,
}
return true
}
// NonRetriable is a marker method, indicating the request failure is terminal.
func (c *ContentFilterError) NonRetriable() {}
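The decode-and-check flow above can be exercised without the SDK. A self-contained sketch of the same envelope parsing, with a hypothetical sample payload (field names follow the struct above; the body is illustrative, not a captured service response):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope mirrors the error shape ExtractContentFilterError decodes.
type envelope struct {
	Error struct {
		Code       string `json:"code"`
		Message    string `json:"message"`
		InnerError struct {
			Code                 string          `json:"code"`
			ContentFilterResults json.RawMessage `json:"content_filter_result"`
		} `json:"innererror"`
	} `json:"error"`
}

// isContentFilterBody reports whether a response body is a content filter
// rejection, using the same "error.code == content_filter" check.
func isContentFilterBody(body []byte) bool {
	var env envelope
	if err := json.Unmarshal(body, &env); err != nil {
		return false
	}
	return env.Error.Code == "content_filter"
}

func main() {
	// Illustrative payload only, shaped like the envelope above.
	body := []byte(`{"error":{"code":"content_filter","message":"The response was filtered","innererror":{"code":"ResponsibleAIPolicyViolation","content_filter_result":{"hate":{"filtered":true,"severity":"high"}}}}}`)
	fmt.Println(isContentFilterBody(body)) // true
}
```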


@ -0,0 +1,24 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions"
"github.com/stretchr/testify/require"
)
func TestExtractContentFilterError(t *testing.T) {
t.Run("NilError", func(t *testing.T) {
require.False(t, azopenaiextensions.ExtractContentFilterError(nil, nil))
var contentFilterErr *azopenaiextensions.ContentFilterError
require.False(t, azopenaiextensions.ExtractContentFilterError(nil, &contentFilterErr))
require.Nil(t, contentFilterErr)
})
}


@ -0,0 +1,211 @@
package azopenaiextensions_test
import (
"context"
"fmt"
"os"
"github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/openai/openai-go"
"github.com/openai/openai-go/azure"
)
func Example_usingAzureOnYourData() {
endpoint := os.Getenv("AOAI_OYD_ENDPOINT")
model := os.Getenv("AOAI_OYD_MODEL")
cognitiveSearchEndpoint := os.Getenv("COGNITIVE_SEARCH_API_ENDPOINT") // Ex: https://<your-service>.search.windows.net
cognitiveSearchIndexName := os.Getenv("COGNITIVE_SEARCH_API_INDEX")
if endpoint == "" || model == "" || cognitiveSearchEndpoint == "" || cognitiveSearchIndexName == "" {
fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
return
}
tokenCredential, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
client := openai.NewClient(
azure.WithEndpoint(endpoint, "2024-07-01-preview"),
azure.WithTokenCredential(tokenCredential),
)
chatParams := openai.ChatCompletionNewParams{
Model: openai.F(model),
MaxTokens: openai.Int(512),
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.ChatCompletionMessageParam{
Role: openai.F(openai.ChatCompletionMessageParamRoleUser),
Content: openai.F[any]("What does the OpenAI package do?"),
},
}),
}
// There are other types of data sources available. Examples:
//
// - AzureCosmosDBChatExtensionConfiguration
// - AzureMachineLearningIndexChatExtensionConfiguration
// - AzureSearchChatExtensionConfiguration
// - PineconeChatExtensionConfiguration
//
// See the definition of [AzureChatExtensionConfigurationClassification] for a full list.
azureSearchDataSource := &azopenaiextensions.AzureSearchChatExtensionConfiguration{
Parameters: &azopenaiextensions.AzureSearchChatExtensionParameters{
Endpoint: &cognitiveSearchEndpoint,
IndexName: &cognitiveSearchIndexName,
Authentication: &azopenaiextensions.OnYourDataSystemAssignedManagedIdentityAuthenticationOptions{},
},
}
resp, err := client.Chat.Completions.New(
context.TODO(),
chatParams,
azopenaiextensions.WithDataSources(azureSearchDataSource),
)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
for _, chatChoice := range resp.Choices {
// Azure-specific response data can be extracted using helpers, like [azopenaiextensions.ChatCompletionChoice].
azureChatChoice := azopenaiextensions.ChatCompletionChoice(chatChoice)
azureContentFilterResult, err := azureChatChoice.ContentFilterResults()
if err != nil {
// TODO: Update the following line with your application specific error handling logic
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
if azureContentFilterResult != nil {
fmt.Fprintf(os.Stderr, "ContentFilterResult: %#v\n", azureContentFilterResult)
}
// there are also helpers for individual types, not just top-level response types.
azureChatCompletionMsg := azopenaiextensions.ChatCompletionMessage(chatChoice.Message)
msgContext, err := azureChatCompletionMsg.Context()
if err != nil {
// TODO: Update the following line with your application specific error handling logic
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
for _, citation := range msgContext.Citations {
if citation.Content != nil {
fmt.Fprintf(os.Stderr, "Citation = %s\n", *citation.Content)
}
}
// the original fields from the type are also still available.
fmt.Fprintf(os.Stderr, "Content: %s\n", azureChatCompletionMsg.Content)
}
fmt.Printf("Example complete\n")
// Output:
// Example complete
//
}
func Example_usingEnhancements() {
endpoint := os.Getenv("AOAI_OYD_ENDPOINT")
model := os.Getenv("AOAI_OYD_MODEL")
cognitiveSearchEndpoint := os.Getenv("COGNITIVE_SEARCH_API_ENDPOINT") // Ex: https://<your-service>.search.windows.net
cognitiveSearchIndexName := os.Getenv("COGNITIVE_SEARCH_API_INDEX")
if endpoint == "" || model == "" || cognitiveSearchEndpoint == "" || cognitiveSearchIndexName == "" {
fmt.Fprintf(os.Stderr, "Environment variables are not set, not running example.\n")
return
}
tokenCredential, err := azidentity.NewDefaultAzureCredential(nil)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
client := openai.NewClient(
azure.WithEndpoint(endpoint, "2024-07-01-preview"),
azure.WithTokenCredential(tokenCredential),
)
chatParams := openai.ChatCompletionNewParams{
Model: openai.F(model),
MaxTokens: openai.Int(512),
Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
openai.ChatCompletionMessageParam{
Role: openai.F(openai.ChatCompletionMessageParamRoleUser),
Content: openai.F[any]("What does the OpenAI package do?"),
},
}),
}
resp, err := client.Chat.Completions.New(
context.TODO(),
chatParams,
azopenaiextensions.WithEnhancements(azopenaiextensions.AzureChatEnhancementConfiguration{
Grounding: &azopenaiextensions.AzureChatGroundingEnhancementConfiguration{
Enabled: to.Ptr(true),
},
}),
)
if err != nil {
// TODO: Update the following line with your application specific error handling logic
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
for _, chatChoice := range resp.Choices {
// Azure-specific response data can be extracted using helpers, like [azopenaiextensions.ChatCompletionChoice].
azureChatChoice := azopenaiextensions.ChatCompletionChoice(chatChoice)
azureContentFilterResult, err := azureChatChoice.ContentFilterResults()
if err != nil {
// TODO: Update the following line with your application specific error handling logic
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
if azureContentFilterResult != nil {
fmt.Fprintf(os.Stderr, "ContentFilterResult: %#v\n", azureContentFilterResult)
}
// there are also helpers for individual types, not just top-level response types.
azureChatCompletionMsg := azopenaiextensions.ChatCompletionMessage(chatChoice.Message)
msgContext, err := azureChatCompletionMsg.Context()
if err != nil {
// TODO: Update the following line with your application specific error handling logic
fmt.Fprintf(os.Stderr, "ERROR: %s\n", err)
return
}
for _, citation := range msgContext.Citations {
if citation.Content != nil {
fmt.Fprintf(os.Stderr, "Citation = %s\n", *citation.Content)
}
}
// the original fields from the type are also still available.
fmt.Fprintf(os.Stderr, "Content: %s\n", azureChatCompletionMsg.Content)
}
fmt.Printf("Example complete\n")
// Output:
// Example complete
//
}


@ -0,0 +1,35 @@
module github.com/Azure/azure-sdk-for-go/sdk/ai/azopenaiextensions
go 1.21
toolchain go1.21.5
require (
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0
github.com/stretchr/testify v1.9.0
)
require (
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0
github.com/openai/openai-go v0.1.0-alpha.16
)
require (
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/golang-jwt/jwt/v5 v5.2.1 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/kylelemons/godebug v1.1.0 // indirect
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/tidwall/gjson v1.14.4 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.1 // indirect
github.com/tidwall/sjson v1.2.5 // indirect
golang.org/x/crypto v0.25.0 // indirect
golang.org/x/net v0.27.0 // indirect
golang.org/x/sys v0.22.0 // indirect
golang.org/x/text v0.16.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
)


@ -0,0 +1,54 @@
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0 h1:nyQWyZvwGTvunIMxi1Y9uXkcyr+I7TeNrr/foo4Kpk8=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.14.0/go.mod h1:l38EPgmsp71HHLq9j7De57JcKOWPyhrsW1Awm1JS6K0=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0 h1:tfLQ34V6F7tVSwoTf/4lH5sE0o6eCJuNDTmH09nDpbc=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0/go.mod h1:9kIvujWAA58nmPmWB1m23fyWic1kYZMxD9CxaWn4Qpg=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 h1:ywEEhmNahHBihViHepv3xPBn1663uRv2t2q/ESv9seY=
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0/go.mod h1:iZDifYGJTIgIIkYRNWPENUnqx6bJ2xnSDFI2tjwZNuY=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2 h1:XHOnouVk1mxXfQidrMEnLlPk9UMeRtyBTnEFtxkV0kU=
github.com/AzureAD/microsoft-authentication-library-for-go v1.2.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/openai/openai-go v0.1.0-alpha.16 h1:4YXiVRN1yUjTQHtmGShMJioDA+gxW2PNYRbqbInAYG4=
github.com/openai/openai-go v0.1.0-alpha.16/go.mod h1:3SdE6BffOX9HPEQv8IL/fi3LYZ5TUpRYaqGQZbyk11A=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.14.4 h1:uo0p8EbA09J7RQaflQ1aBRffTR7xedD2bcIVSYxLnkM=
github.com/tidwall/gjson v1.14.4/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
golang.org/x/crypto v0.25.0 h1:ypSNr+bnYL2YhwoMt2zPxHFmbAN1KZs/njMG3hxUp30=
golang.org/x/crypto v0.25.0/go.mod h1:T+wALwcMOSE0kXgUAnPAHqTLW+XHgcELELW8VaDgm/M=
golang.org/x/net v0.27.0 h1:5K3Njcw06/l2y9vpGCSdcxWOYHOUk3dVNGDXN+FvAys=
golang.org/x/net v0.27.0/go.mod h1:dDi0PyhWNoiUOrAS8uXv/vnScO4wnHQO4mj9fn/RytE=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.22.0 h1:RI27ohtqKCnwULzJLqkv897zojh5/DwS/ENaMzUOaWI=
golang.org/x/sys v0.22.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/text v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/text v0.16.0/go.mod h1:GhwF1Be+LQoKShO3cGOHzqOgRrGaYc9AvblQOmPVHnI=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@ -0,0 +1,21 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions
import (
"github.com/openai/openai-go/option"
)
// WithDataSources adds in Azure data sources to be used with the "Azure OpenAI On Your Data" feature.
func WithDataSources(dataSources ...AzureChatExtensionConfigurationClassification) option.RequestOption {
return option.WithJSONSet("data_sources", dataSources)
}
// WithEnhancements configures Azure OpenAI enhancements, such as optical character recognition (OCR).
func WithEnhancements(config AzureChatEnhancementConfiguration) option.RequestOption {
return option.WithJSONSet("enhancements", config)
}


@ -0,0 +1,102 @@
package azopenaiextensions
import (
"encoding/json"
"github.com/openai/openai-go"
)
//
// ChatCompletions (non-streaming)
//
// ChatCompletion wraps an [openai.ChatCompletion], allowing access to Azure specific properties.
type ChatCompletion openai.ChatCompletion
// ChatCompletionChoice wraps an [openai.ChatCompletionChoice], allowing access to Azure specific properties.
type ChatCompletionChoice openai.ChatCompletionChoice
// ChatCompletionMessage wraps an [openai.ChatCompletionMessage], allowing access to Azure specific properties.
type ChatCompletionMessage openai.ChatCompletionMessage
//
// Completions (streaming)
//
// ChatCompletionChunk wraps an [openai.ChatCompletionChunk], allowing access to Azure specific properties.
type ChatCompletionChunk openai.ChatCompletionChunk
// ChatCompletionChunkChoicesDelta wraps an [openai.ChatCompletionChunkChoicesDelta], allowing access to Azure specific properties.
type ChatCompletionChunkChoicesDelta openai.ChatCompletionChunkChoicesDelta
//
// Completions (streaming and non-streaming)
//
// Completion wraps an [openai.Completion], allowing access to Azure specific properties.
type Completion openai.Completion
// CompletionChoice wraps an [openai.CompletionChoice], allowing access to Azure specific properties.
type CompletionChoice openai.CompletionChoice
// PromptFilterResults contains content filtering results for zero or more prompts in the request.
func (c ChatCompletion) PromptFilterResults() ([]ContentFilterResultsForPrompt, error) {
return unmarshalField[[]ContentFilterResultsForPrompt](c.JSON.ExtraFields["prompt_filter_results"])
}
// ContentFilterResults contains content filtering information for this choice.
func (c ChatCompletionChoice) ContentFilterResults() (*ContentFilterResultsForChoice, error) {
return unmarshalField[*ContentFilterResultsForChoice](c.JSON.ExtraFields["content_filter_results"])
}
// Context contains additional context information available when Azure OpenAI chat extensions are involved
// in the generation of a corresponding chat completions response.
func (c ChatCompletionMessage) Context() (*AzureChatExtensionsMessageContext, error) {
return unmarshalField[*AzureChatExtensionsMessageContext](c.JSON.ExtraFields["context"])
}
// PromptFilterResults contains content filtering results for zero or more prompts in the request. In a streaming request,
// results for different prompts may arrive at different times or in different orders.
func (c ChatCompletionChunk) PromptFilterResults() ([]ContentFilterResultsForPrompt, error) {
return unmarshalField[[]ContentFilterResultsForPrompt](c.JSON.ExtraFields["prompt_filter_results"])
}
// Context contains additional context information available when Azure OpenAI chat extensions are involved
// in the generation of a corresponding chat completions response.
func (c ChatCompletionChunkChoicesDelta) Context() (*AzureChatExtensionsMessageContext, error) {
return unmarshalField[*AzureChatExtensionsMessageContext](c.JSON.ExtraFields["context"])
}
// PromptFilterResults contains content filtering results for zero or more prompts in the request.
func (c Completion) PromptFilterResults() ([]ContentFilterResultsForPrompt, error) {
return unmarshalField[[]ContentFilterResultsForPrompt](c.JSON.ExtraFields["prompt_filter_results"])
}
// ContentFilterResults contains content filtering information for this choice.
func (c CompletionChoice) ContentFilterResults() (*ContentFilterResultsForChoice, error) {
return unmarshalField[*ContentFilterResultsForChoice](c.JSON.ExtraFields["content_filter_results"])
}
// NOTE: this matches the apijson.Field structure from the Stainless OpenAI library. It's in their internal package
// but the data structure is exposed in public APIs.
type stainlessField interface {
Raw() string
IsMissing() bool
}
// unmarshalField is a generic way for us to unmarshal our 'extra' fields.
func unmarshalField[T any](field stainlessField) (T, error) {
var zero T
if field.IsMissing() {
return zero, nil
}
var obj *T
if err := json.Unmarshal([]byte(field.Raw()), &obj); err != nil {
return zero, err
}
if obj == nil {
// the field was present but explicitly null
return zero, nil
}
return *obj, nil
}
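The wrapper types work because the Stainless client preserves response fields it doesn't recognize as raw JSON (surfaced here through `JSON.ExtraFields`). A stdlib-only sketch of that "extra fields" decode — the helper name is hypothetical, not part of either library:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeExtra captures top-level fields a typed struct doesn't declare —
// the raw material the unmarshalField helper above works from.
func decodeExtra(data []byte, known map[string]bool) (map[string]json.RawMessage, error) {
	var all map[string]json.RawMessage
	if err := json.Unmarshal(data, &all); err != nil {
		return nil, err
	}
	extra := map[string]json.RawMessage{}
	for k, v := range all {
		if !known[k] {
			extra[k] = v
		}
	}
	return extra, nil
}

func main() {
	data := []byte(`{"id":"1","prompt_filter_results":[{"prompt_index":0}]}`)
	extra, err := decodeExtra(data, map[string]bool{"id": true})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(extra["prompt_filter_results"])) // [{"prompt_index":0}]
}
```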


@ -0,0 +1,49 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
package azopenaiextensions
// AzureChatExtensionConfigurationClassification provides polymorphic access to related types.
// Call the interface's GetAzureChatExtensionConfiguration() method to access the common type.
// Use a type switch to determine the concrete type. The possible types are:
// - *AzureChatExtensionConfiguration, *AzureCosmosDBChatExtensionConfiguration, *AzureMachineLearningIndexChatExtensionConfiguration,
// - *AzureSearchChatExtensionConfiguration, *PineconeChatExtensionConfiguration
type AzureChatExtensionConfigurationClassification interface {
// GetAzureChatExtensionConfiguration returns the AzureChatExtensionConfiguration content of the underlying type.
GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration
}
// OnYourDataAuthenticationOptionsClassification provides polymorphic access to related types.
// Call the interface's GetOnYourDataAuthenticationOptions() method to access the common type.
// Use a type switch to determine the concrete type. The possible types are:
// - *OnYourDataAPIKeyAuthenticationOptions, *OnYourDataAccessTokenAuthenticationOptions, *OnYourDataAuthenticationOptions,
// - *OnYourDataConnectionStringAuthenticationOptions, *OnYourDataEncodedAPIKeyAuthenticationOptions, *OnYourDataKeyAndKeyIDAuthenticationOptions,
// - *OnYourDataSystemAssignedManagedIdentityAuthenticationOptions, *OnYourDataUserAssignedManagedIdentityAuthenticationOptions
type OnYourDataAuthenticationOptionsClassification interface {
// GetOnYourDataAuthenticationOptions returns the OnYourDataAuthenticationOptions content of the underlying type.
GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions
}
// OnYourDataVectorSearchAuthenticationOptionsClassification provides polymorphic access to related types.
// Call the interface's GetOnYourDataVectorSearchAuthenticationOptions() method to access the common type.
// Use a type switch to determine the concrete type. The possible types are:
// - *OnYourDataVectorSearchAPIKeyAuthenticationOptions, *OnYourDataVectorSearchAccessTokenAuthenticationOptions, *OnYourDataVectorSearchAuthenticationOptions
type OnYourDataVectorSearchAuthenticationOptionsClassification interface {
// GetOnYourDataVectorSearchAuthenticationOptions returns the OnYourDataVectorSearchAuthenticationOptions content of the underlying type.
GetOnYourDataVectorSearchAuthenticationOptions() *OnYourDataVectorSearchAuthenticationOptions
}
// OnYourDataVectorizationSourceClassification provides polymorphic access to related types.
// Call the interface's GetOnYourDataVectorizationSource() method to access the common type.
// Use a type switch to determine the concrete type. The possible types are:
// - *OnYourDataDeploymentNameVectorizationSource, *OnYourDataEndpointVectorizationSource, *OnYourDataModelIDVectorizationSource,
// - *OnYourDataVectorizationSource
type OnYourDataVectorizationSourceClassification interface {
// GetOnYourDataVectorizationSource returns the OnYourDataVectorizationSource content of the underlying type.
GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource
}


@@ -0,0 +1,39 @@
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions_test
import (
"os"
"testing"
"github.com/Azure/azure-sdk-for-go/sdk/internal/recording"
)
const RecordingDirectory = "sdk/ai/azopenai/testdata"
func TestMain(m *testing.M) {
code := run(m)
os.Exit(code)
}
func run(m *testing.M) int {
if recording.GetRecordMode() == recording.PlaybackMode || recording.GetRecordMode() == recording.RecordingMode {
proxy, err := recording.StartTestProxy(RecordingDirectory, nil)
if err != nil {
panic(err)
}
defer func() {
err := recording.StopTestProxy(proxy)
if err != nil {
panic(err)
}
}()
}
os.Setenv("AOAI_OYD_ENDPOINT", os.Getenv("AOAI_ENDPOINT_USEAST"))
os.Setenv("AOAI_OYD_MODEL", "gpt-4-0613")
return m.Run()
}


@@ -0,0 +1,902 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
package azopenaiextensions
// AzureChatEnhancementConfiguration - A representation of the available Azure OpenAI enhancement configurations.
type AzureChatEnhancementConfiguration struct {
// A representation of the available options for the Azure OpenAI grounding enhancement.
Grounding *AzureChatGroundingEnhancementConfiguration
// A representation of the available options for the Azure OpenAI optical character recognition (OCR) enhancement.
Ocr *AzureChatOCREnhancementConfiguration
}
// AzureChatEnhancements - Represents the output results of Azure enhancements to chat completions, as configured via the
// matching input provided in the request.
type AzureChatEnhancements struct {
// The grounding enhancement that returns the bounding box of the objects detected in the image.
Grounding *AzureGroundingEnhancement
}
// AzureChatExtensionConfiguration - A representation of configuration data for a single Azure OpenAI chat extension. This
// will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response
// behavior. The use of this configuration is compatible only with Azure OpenAI.
type AzureChatExtensionConfiguration struct {
// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
// chat extensions are only compatible with Azure OpenAI.
Type *AzureChatExtensionType
}
// GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureChatExtensionConfiguration.
func (a *AzureChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration {
return a
}
// AzureChatExtensionDataSourceResponseCitation - A single instance of additional context information available when Azure
// OpenAI chat extensions are involved in the generation of a corresponding chat completions response. This context information
// is only populated when using an Azure OpenAI request configured to use a matching extension.
type AzureChatExtensionDataSourceResponseCitation struct {
// REQUIRED; The content of the citation.
Content *string
// The chunk ID of the citation.
ChunkID *string
// The file path of the citation.
Filepath *string
// The title of the citation.
Title *string
// The URL of the citation.
URL *string
}
// AzureChatExtensionRetrievedDocument - The retrieved document.
type AzureChatExtensionRetrievedDocument struct {
// REQUIRED; The content of the citation.
Content *string
// REQUIRED; The index of the data source.
DataSourceIndex *int32
// REQUIRED; The search queries used to retrieve the document.
SearchQueries []string
// The chunk ID of the citation.
ChunkID *string
// The file path of the citation.
Filepath *string
// Represents the rationale for filtering the document. If the document does not undergo filtering, this field will remain
// unset.
FilterReason *AzureChatExtensionRetrieveDocumentFilterReason
// The original search score of the retrieved document.
OriginalSearchScore *float64
// The rerank score of the retrieved document.
RerankScore *float64
// The title of the citation.
Title *string
// The URL of the citation.
URL *string
}
// AzureChatExtensionsMessageContext - A representation of the additional context information available when Azure OpenAI
// chat extensions are involved in the generation of a corresponding chat completions response. This context information
// is only populated when using an Azure OpenAI request configured to use a matching extension.
type AzureChatExtensionsMessageContext struct {
// All the retrieved documents.
AllRetrievedDocuments []AzureChatExtensionRetrievedDocument
// The contextual information associated with the Azure chat extensions used for a chat completions request. These messages
// describe the data source retrievals, plugin invocations, and other intermediate
// steps taken in the course of generating a chat completions response that was augmented by capabilities from Azure OpenAI
// chat extensions.
Citations []AzureChatExtensionDataSourceResponseCitation
// The detected intent from the chat history, used to pass to the next turn to carry over the context.
Intent *string
}
// AzureChatGroundingEnhancementConfiguration - A representation of the available options for the Azure OpenAI grounding enhancement.
type AzureChatGroundingEnhancementConfiguration struct {
// REQUIRED; Specifies whether the enhancement is enabled.
Enabled *bool
}
// AzureChatOCREnhancementConfiguration - A representation of the available options for the Azure OpenAI optical character
// recognition (OCR) enhancement.
type AzureChatOCREnhancementConfiguration struct {
// REQUIRED; Specifies whether the enhancement is enabled.
Enabled *bool
}
// AzureCosmosDBChatExtensionConfiguration - A specific representation of configurable options for Azure Cosmos DB when using
// it as an Azure OpenAI chat extension.
type AzureCosmosDBChatExtensionConfiguration struct {
// REQUIRED; The parameters to use when configuring Azure OpenAI CosmosDB chat extensions.
Parameters *AzureCosmosDBChatExtensionParameters
// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
// chat extensions are only compatible with Azure OpenAI.
Type *AzureChatExtensionType
}
// GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureCosmosDBChatExtensionConfiguration.
func (a *AzureCosmosDBChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration {
return &AzureChatExtensionConfiguration{
Type: a.Type,
}
}
// AzureCosmosDBChatExtensionParameters - Parameters to use when configuring Azure OpenAI On Your Data chat extensions when
// using Azure Cosmos DB for MongoDB vCore. The supported authentication type is ConnectionString.
type AzureCosmosDBChatExtensionParameters struct {
// REQUIRED; The name of the Azure Cosmos DB resource container.
ContainerName *string
// REQUIRED; The MongoDB vCore database name to use with Azure Cosmos DB.
DatabaseName *string
// REQUIRED; The embedding dependency for vector search.
EmbeddingDependency OnYourDataVectorizationSourceClassification
// REQUIRED; Customized field mapping behavior to use when interacting with the search index.
FieldsMapping *AzureCosmosDBFieldMappingOptions
// REQUIRED; The MongoDB vCore index name to use with Azure Cosmos DB.
IndexName *string
	// If specified as true, the system will allow partial search results to be used, and the request will fail only if all
	// of the queries fail. If not specified, or specified as false, the request will fail if any search query fails.
AllowPartialResult *bool
// The authentication method to use when accessing the defined data source. Each data source type supports a specific set
// of available authentication methods; please see the documentation of the data
// source for supported mechanisms. If not otherwise provided, On Your Data will attempt to use System Managed Identity (default
// credential) authentication.
Authentication OnYourDataAuthenticationOptionsClassification
// Whether queries should be restricted to use of indexed data.
InScope *bool
// The included properties of the output context. If not specified, the default value is citations and intent.
IncludeContexts []OnYourDataContextProperty
	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not
	// specified, the system will decide the number of queries to send.
MaxSearchQueries *int32
// Give the model instructions about how it should behave and any context it should reference when generating a response.
// You can describe the assistant's personality and tell it how to format responses.
// There's a 100 token limit for it, and it counts against the overall token limit.
RoleInformation *string
	// The configured strictness of the search relevance filtering. The higher the strictness, the higher the precision but
	// the lower the recall of the answer.
Strictness *int32
// The configured top number of documents to feature for the configured query.
TopNDocuments *int32
}
// AzureCosmosDBFieldMappingOptions - Optional settings to control how fields are processed when using a configured Azure
// Cosmos DB resource.
type AzureCosmosDBFieldMappingOptions struct {
// REQUIRED; The names of index fields that should be treated as content.
ContentFields []string
// REQUIRED; The names of fields that represent vector data.
VectorFields []string
// The separator pattern that content fields should use.
ContentFieldsSeparator *string
// The name of the index field to use as a filepath.
FilepathField *string
// The name of the index field to use as a title.
TitleField *string
// The name of the index field to use as a URL.
URLField *string
}
// AzureGroundingEnhancement - The grounding enhancement that returns the bounding box of the objects detected in the image.
type AzureGroundingEnhancement struct {
// REQUIRED; The lines of text detected by the grounding enhancement.
Lines []AzureGroundingEnhancementLine
}
// AzureGroundingEnhancementCoordinatePoint - A representation of a single polygon point as used by the Azure grounding enhancement.
type AzureGroundingEnhancementCoordinatePoint struct {
// REQUIRED; The x-coordinate (horizontal axis) of the point.
X *float32
// REQUIRED; The y-coordinate (vertical axis) of the point.
Y *float32
}
// AzureGroundingEnhancementLine - A content line object consisting of an adjacent sequence of content elements, such as words
// and selection marks.
type AzureGroundingEnhancementLine struct {
	// REQUIRED; An array of spans that represent detected objects and their bounding box information.
Spans []AzureGroundingEnhancementLineSpan
// REQUIRED; The text within the line.
Text *string
}
// AzureGroundingEnhancementLineSpan - A span object that represents a detected object and its bounding box information.
type AzureGroundingEnhancementLineSpan struct {
// REQUIRED; The length of the span in characters, measured in Unicode codepoints.
Length *int32
// REQUIRED; The character offset within the text where the span begins. This offset is defined as the position of the first
// character of the span, counting from the start of the text as Unicode codepoints.
Offset *int32
// REQUIRED; An array of objects representing points in the polygon that encloses the detected object.
Polygon []AzureGroundingEnhancementCoordinatePoint
// REQUIRED; The text content of the span that represents the detected object.
Text *string
}
// AzureMachineLearningIndexChatExtensionConfiguration - A specific representation of configurable options for Azure Machine
// Learning vector index when using it as an Azure OpenAI chat extension.
type AzureMachineLearningIndexChatExtensionConfiguration struct {
// REQUIRED; The parameters for the Azure Machine Learning vector index chat extension.
Parameters *AzureMachineLearningIndexChatExtensionParameters
// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
// chat extensions are only compatible with Azure OpenAI.
Type *AzureChatExtensionType
}
// GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureMachineLearningIndexChatExtensionConfiguration.
func (a *AzureMachineLearningIndexChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration {
return &AzureChatExtensionConfiguration{
Type: a.Type,
}
}
// AzureMachineLearningIndexChatExtensionParameters - Parameters for the Azure Machine Learning vector index chat extension.
// The supported authentication types are AccessToken, SystemAssignedManagedIdentity and UserAssignedManagedIdentity.
type AzureMachineLearningIndexChatExtensionParameters struct {
// REQUIRED; The Azure Machine Learning vector index name.
Name *string
// REQUIRED; The resource ID of the Azure Machine Learning project.
ProjectResourceID *string
// REQUIRED; The version of the Azure Machine Learning vector index.
Version *string
	// If specified as true, the system will allow partial search results to be used, and the request will fail only if all
	// of the queries fail. If not specified, or specified as false, the request will fail if any search query fails.
AllowPartialResult *bool
// The authentication method to use when accessing the defined data source. Each data source type supports a specific set
// of available authentication methods; please see the documentation of the data
// source for supported mechanisms. If not otherwise provided, On Your Data will attempt to use System Managed Identity (default
// credential) authentication.
Authentication OnYourDataAuthenticationOptionsClassification
// Search filter. Only supported if the Azure Machine Learning vector index is of type AzureSearch.
Filter *string
// Whether queries should be restricted to use of indexed data.
InScope *bool
// The included properties of the output context. If not specified, the default value is citations and intent.
IncludeContexts []OnYourDataContextProperty
	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not
	// specified, the system will decide the number of queries to send.
MaxSearchQueries *int32
// Give the model instructions about how it should behave and any context it should reference when generating a response.
// You can describe the assistant's personality and tell it how to format responses.
// There's a 100 token limit for it, and it counts against the overall token limit.
RoleInformation *string
	// The configured strictness of the search relevance filtering. The higher the strictness, the higher the precision but
	// the lower the recall of the answer.
Strictness *int32
// The configured top number of documents to feature for the configured query.
TopNDocuments *int32
}
// AzureSearchChatExtensionConfiguration - A specific representation of configurable options for Azure Search when using it
// as an Azure OpenAI chat extension.
type AzureSearchChatExtensionConfiguration struct {
// REQUIRED; The parameters to use when configuring Azure Search.
Parameters *AzureSearchChatExtensionParameters
// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
// chat extensions are only compatible with Azure OpenAI.
Type *AzureChatExtensionType
}
// GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type AzureSearchChatExtensionConfiguration.
func (a *AzureSearchChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration {
return &AzureChatExtensionConfiguration{
Type: a.Type,
}
}
// AzureSearchChatExtensionParameters - Parameters for Azure Cognitive Search when used as an Azure OpenAI chat extension.
// The supported authentication types are APIKey, SystemAssignedManagedIdentity and UserAssignedManagedIdentity.
type AzureSearchChatExtensionParameters struct {
// REQUIRED; The absolute endpoint path for the Azure Cognitive Search resource to use.
Endpoint *string
// REQUIRED; The name of the index to use as available in the referenced Azure Cognitive Search resource.
IndexName *string
	// If specified as true, the system will allow partial search results to be used, and the request will fail only if all
	// of the queries fail. If not specified, or specified as false, the request will fail if any search query fails.
AllowPartialResult *bool
// The authentication method to use when accessing the defined data source. Each data source type supports a specific set
// of available authentication methods; please see the documentation of the data
// source for supported mechanisms. If not otherwise provided, On Your Data will attempt to use System Managed Identity (default
// credential) authentication.
Authentication OnYourDataAuthenticationOptionsClassification
// The embedding dependency for vector search.
EmbeddingDependency OnYourDataVectorizationSourceClassification
// Customized field mapping behavior to use when interacting with the search index.
FieldsMapping *AzureSearchIndexFieldMappingOptions
// Search filter.
Filter *string
// Whether queries should be restricted to use of indexed data.
InScope *bool
// The included properties of the output context. If not specified, the default value is citations and intent.
IncludeContexts []OnYourDataContextProperty
	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not
	// specified, the system will decide the number of queries to send.
MaxSearchQueries *int32
// The query type to use with Azure Cognitive Search.
QueryType *AzureSearchQueryType
// Give the model instructions about how it should behave and any context it should reference when generating a response.
// You can describe the assistant's personality and tell it how to format responses.
// There's a 100 token limit for it, and it counts against the overall token limit.
RoleInformation *string
// The additional semantic configuration for the query.
SemanticConfiguration *string
	// The configured strictness of the search relevance filtering. The higher the strictness, the higher the precision but
	// the lower the recall of the answer.
Strictness *int32
// The configured top number of documents to feature for the configured query.
TopNDocuments *int32
}
// AzureSearchIndexFieldMappingOptions - Optional settings to control how fields are processed when using a configured Azure
// Search resource.
type AzureSearchIndexFieldMappingOptions struct {
// The names of index fields that should be treated as content.
ContentFields []string
// The separator pattern that content fields should use.
ContentFieldsSeparator *string
// The name of the index field to use as a filepath.
FilepathField *string
// The names of fields that represent image vector data.
ImageVectorFields []string
// The name of the index field to use as a title.
TitleField *string
// The name of the index field to use as a URL.
URLField *string
// The names of fields that represent vector data.
VectorFields []string
}
// ContentFilterBlocklistIDResult - Represents the outcome of an evaluation against a custom blocklist as performed by content
// filtering.
type ContentFilterBlocklistIDResult struct {
// REQUIRED; A value indicating whether or not the content has been filtered.
Filtered *bool
// REQUIRED; The ID of the custom blocklist evaluated.
ID *string
}
// ContentFilterCitedDetectionResult - Represents the outcome of a detection operation against protected resources as performed
// by content filtering.
type ContentFilterCitedDetectionResult struct {
// REQUIRED; A value indicating whether detection occurred, irrespective of severity or whether the content was filtered.
Detected *bool
// REQUIRED; A value indicating whether or not the content has been filtered.
Filtered *bool
// The license description associated with the detection.
License *string
// The internet location associated with the detection.
URL *string
}
// ContentFilterDetailedResults - Represents a structured collection of result details for content filtering.
type ContentFilterDetailedResults struct {
// REQUIRED; The collection of detailed blocklist result information.
Details []ContentFilterBlocklistIDResult
// REQUIRED; A value indicating whether or not the content has been filtered.
Filtered *bool
}
// ContentFilterDetectionResult - Represents the outcome of a detection operation performed by content filtering.
type ContentFilterDetectionResult struct {
// REQUIRED; A value indicating whether detection occurred, irrespective of severity or whether the content was filtered.
Detected *bool
// REQUIRED; A value indicating whether or not the content has been filtered.
Filtered *bool
}
// ContentFilterResult - Information about filtered content severity level and if it has been filtered or not.
type ContentFilterResult struct {
// REQUIRED; A value indicating whether or not the content has been filtered.
Filtered *bool
// REQUIRED; Ratings for the intensity and risk level of filtered content.
Severity *ContentFilterSeverity
}
// ContentFilterResultDetailsForPrompt - Information about content filtering evaluated against input data to Azure OpenAI.
type ContentFilterResultDetailsForPrompt struct {
// Describes detection results against configured custom blocklists.
CustomBlocklists *ContentFilterDetailedResults
// Describes an error returned if the content filtering system is down or otherwise unable to complete the operation in time.
Error *Error
// Describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity
// group on the basis of certain differentiating attributes of these groups
// including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion,
// immigration status, ability status, personal appearance, and body size.
Hate *ContentFilterResult
// Whether an indirect attack was detected in the prompt.
IndirectAttack *ContentFilterDetectionResult
// Whether a jailbreak attempt was detected in the prompt.
Jailbreak *ContentFilterDetectionResult
// Describes whether profanity was detected.
Profanity *ContentFilterDetectionResult
	// Describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself.
SelfHarm *ContentFilterResult
// Describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate
// terms, physical sexual acts, including those portrayed as an assault or a
	// forced sexual violent act against one's will, prostitution, pornography, and abuse.
Sexual *ContentFilterResult
// Describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes
// weapons, etc.
Violence *ContentFilterResult
}
// ContentFilterResultsForChoice - Information about content filtering evaluated against generated model output.
type ContentFilterResultsForChoice struct {
// Describes detection results against configured custom blocklists.
CustomBlocklists *ContentFilterDetailedResults
// Describes an error returned if the content filtering system is down or otherwise unable to complete the operation in time.
Error *Error
// Describes language attacks or uses that include pejorative or discriminatory language with reference to a person or identity
// group on the basis of certain differentiating attributes of these groups
// including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion,
// immigration status, ability status, personal appearance, and body size.
Hate *ContentFilterResult
// Describes whether profanity was detected.
Profanity *ContentFilterDetectionResult
// Information about detection of protected code material.
ProtectedMaterialCode *ContentFilterCitedDetectionResult
// Information about detection of protected text material.
ProtectedMaterialText *ContentFilterDetectionResult
	// Describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself.
SelfHarm *ContentFilterResult
// Describes language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate
// terms, physical sexual acts, including those portrayed as an assault or a
	// forced sexual violent act against one's will, prostitution, pornography, and abuse.
Sexual *ContentFilterResult
// Describes language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes
// weapons, etc.
Violence *ContentFilterResult
}
// ContentFilterResultsForPrompt - Content filtering results for a single prompt in the request.
type ContentFilterResultsForPrompt struct {
// REQUIRED; Content filtering results for this prompt
ContentFilterResults *ContentFilterResultDetailsForPrompt
// REQUIRED; The index of this prompt in the set of prompt results
PromptIndex *int32
}
// Error - The error object.
type Error struct {
// REQUIRED; One of a server-defined set of error codes.
Code *string
// REQUIRED; A human-readable representation of the error.
Message *string
}
// OnYourDataAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an API key.
type OnYourDataAPIKeyAuthenticationOptions struct {
// REQUIRED; The API key to use for authentication.
Key *string
// REQUIRED; The authentication type.
Type *OnYourDataAuthenticationType
}
// GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAPIKeyAuthenticationOptions.
func (o *OnYourDataAPIKeyAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions {
return &OnYourDataAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataAccessTokenAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using access
// token.
type OnYourDataAccessTokenAuthenticationOptions struct {
// REQUIRED; The access token to use for authentication.
AccessToken *string
// REQUIRED; The authentication type.
Type *OnYourDataAuthenticationType
}
// GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAccessTokenAuthenticationOptions.
func (o *OnYourDataAccessTokenAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions {
return &OnYourDataAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataAuthenticationOptions - The authentication options for Azure OpenAI On Your Data.
type OnYourDataAuthenticationOptions struct {
// REQUIRED; The authentication type.
Type *OnYourDataAuthenticationType
}
// GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataAuthenticationOptions.
func (o *OnYourDataAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions {
return o
}
// OnYourDataConnectionStringAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using a
// connection string.
type OnYourDataConnectionStringAuthenticationOptions struct {
// REQUIRED; The connection string to use for authentication.
ConnectionString *string
// REQUIRED; The authentication type.
Type *OnYourDataAuthenticationType
}
// GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataConnectionStringAuthenticationOptions.
func (o *OnYourDataConnectionStringAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions {
return &OnYourDataAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataDeploymentNameVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data
// when applying vector search, that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
type OnYourDataDeploymentNameVectorizationSource struct {
// REQUIRED; The embedding model deployment name within the same Azure OpenAI resource. This enables you to use vector search
// without Azure OpenAI api-key and without Azure OpenAI public network access.
DeploymentName *string
// REQUIRED; The type of vectorization source to use.
Type *OnYourDataVectorizationSourceType
// The number of dimensions the embeddings should have. Only supported in text-embedding-3 and later models.
Dimensions *int32
}
// GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataDeploymentNameVectorizationSource.
func (o *OnYourDataDeploymentNameVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource {
return &OnYourDataVectorizationSource{
Type: o.Type,
}
}
// OnYourDataEncodedAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an Elasticsearch
// encoded API key.
type OnYourDataEncodedAPIKeyAuthenticationOptions struct {
// REQUIRED; The encoded API key to use for authentication.
EncodedAPIKey *string
// REQUIRED; The authentication type.
Type *OnYourDataAuthenticationType
}
// GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataEncodedAPIKeyAuthenticationOptions.
func (o *OnYourDataEncodedAPIKeyAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions {
return &OnYourDataAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataEndpointVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data when
// applying vector search, that is based on a public Azure OpenAI endpoint call for embeddings.
type OnYourDataEndpointVectorizationSource struct {
// REQUIRED; Specifies the authentication options to use when retrieving embeddings from the specified endpoint.
Authentication OnYourDataVectorSearchAuthenticationOptionsClassification
// REQUIRED; Specifies the resource endpoint URL from which embeddings should be retrieved. It should be in the format of
// https://YOURRESOURCENAME.openai.azure.com/openai/deployments/YOURDEPLOYMENTNAME/embeddings.
// The api-version query parameter is not allowed.
Endpoint *string
// REQUIRED; The type of vectorization source to use.
Type *OnYourDataVectorizationSourceType
}
// GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataEndpointVectorizationSource.
func (o *OnYourDataEndpointVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource {
return &OnYourDataVectorizationSource{
Type: o.Type,
}
}
// OnYourDataKeyAndKeyIDAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using an Elasticsearch
// key and key ID pair.
type OnYourDataKeyAndKeyIDAuthenticationOptions struct {
// REQUIRED; The key to use for authentication.
Key *string
// REQUIRED; The key ID to use for authentication.
KeyID *string
// REQUIRED; The authentication type.
Type *OnYourDataAuthenticationType
}
// GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataKeyAndKeyIDAuthenticationOptions.
func (o *OnYourDataKeyAndKeyIDAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions {
return &OnYourDataAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataModelIDVectorizationSource - The details of a vectorization source, used by Azure OpenAI On Your Data when
// applying vector search, that is based on a search service model ID. Currently only supported by Elasticsearch®.
type OnYourDataModelIDVectorizationSource struct {
	// REQUIRED; The embedding model ID built inside the search service. Currently only supported by Elasticsearch®.
ModelID *string
// REQUIRED; The type of vectorization source to use.
Type *OnYourDataVectorizationSourceType
}
// GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataModelIDVectorizationSource.
func (o *OnYourDataModelIDVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource {
return &OnYourDataVectorizationSource{
Type: o.Type,
}
}
// OnYourDataSystemAssignedManagedIdentityAuthenticationOptions - The authentication options for Azure OpenAI On Your Data
// when using a system-assigned managed identity.
type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions struct {
// REQUIRED; The authentication type.
Type *OnYourDataAuthenticationType
}
// GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataSystemAssignedManagedIdentityAuthenticationOptions.
func (o *OnYourDataSystemAssignedManagedIdentityAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions {
return &OnYourDataAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataUserAssignedManagedIdentityAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when
// using a user-assigned managed identity.
type OnYourDataUserAssignedManagedIdentityAuthenticationOptions struct {
// REQUIRED; The resource ID of the user-assigned managed identity to use for authentication.
ManagedIdentityResourceID *string
// REQUIRED; The authentication type.
Type *OnYourDataAuthenticationType
}
// GetOnYourDataAuthenticationOptions implements the OnYourDataAuthenticationOptionsClassification interface for type OnYourDataUserAssignedManagedIdentityAuthenticationOptions.
func (o *OnYourDataUserAssignedManagedIdentityAuthenticationOptions) GetOnYourDataAuthenticationOptions() *OnYourDataAuthenticationOptions {
return &OnYourDataAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataVectorSearchAPIKeyAuthenticationOptions - The authentication options for Azure OpenAI On Your Data when using
// an API key.
type OnYourDataVectorSearchAPIKeyAuthenticationOptions struct {
// REQUIRED; The API key to use for authentication.
Key *string
// REQUIRED; The type of authentication to use.
Type *OnYourDataVectorSearchAuthenticationType
}
// GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification
// interface for type OnYourDataVectorSearchAPIKeyAuthenticationOptions.
func (o *OnYourDataVectorSearchAPIKeyAuthenticationOptions) GetOnYourDataVectorSearchAuthenticationOptions() *OnYourDataVectorSearchAuthenticationOptions {
return &OnYourDataVectorSearchAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataVectorSearchAccessTokenAuthenticationOptions - The authentication options for Azure OpenAI On Your Data vector
// search when using access token.
type OnYourDataVectorSearchAccessTokenAuthenticationOptions struct {
// REQUIRED; The access token to use for authentication.
AccessToken *string
// REQUIRED; The type of authentication to use.
Type *OnYourDataVectorSearchAuthenticationType
}
// GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification
// interface for type OnYourDataVectorSearchAccessTokenAuthenticationOptions.
func (o *OnYourDataVectorSearchAccessTokenAuthenticationOptions) GetOnYourDataVectorSearchAuthenticationOptions() *OnYourDataVectorSearchAuthenticationOptions {
return &OnYourDataVectorSearchAuthenticationOptions{
Type: o.Type,
}
}
// OnYourDataVectorSearchAuthenticationOptions - The authentication options for Azure OpenAI On Your Data vector search.
type OnYourDataVectorSearchAuthenticationOptions struct {
// REQUIRED; The type of authentication to use.
Type *OnYourDataVectorSearchAuthenticationType
}
// GetOnYourDataVectorSearchAuthenticationOptions implements the OnYourDataVectorSearchAuthenticationOptionsClassification
// interface for type OnYourDataVectorSearchAuthenticationOptions.
func (o *OnYourDataVectorSearchAuthenticationOptions) GetOnYourDataVectorSearchAuthenticationOptions() *OnYourDataVectorSearchAuthenticationOptions {
return o
}
// OnYourDataVectorizationSource - An abstract representation of a vectorization source for Azure OpenAI On Your Data with
// vector search.
type OnYourDataVectorizationSource struct {
// REQUIRED; The type of vectorization source to use.
Type *OnYourDataVectorizationSourceType
}
// GetOnYourDataVectorizationSource implements the OnYourDataVectorizationSourceClassification interface for type OnYourDataVectorizationSource.
func (o *OnYourDataVectorizationSource) GetOnYourDataVectorizationSource() *OnYourDataVectorizationSource {
return o
}
// PineconeChatExtensionConfiguration - A specific representation of configurable options for Pinecone when using it as an
// Azure OpenAI chat extension.
type PineconeChatExtensionConfiguration struct {
// REQUIRED; The parameters to use when configuring Azure OpenAI chat extensions.
Parameters *PineconeChatExtensionParameters
// REQUIRED; The label for the type of an Azure chat extension. This typically corresponds to a matching Azure resource. Azure
// chat extensions are only compatible with Azure OpenAI.
Type *AzureChatExtensionType
}
// GetAzureChatExtensionConfiguration implements the AzureChatExtensionConfigurationClassification interface for type PineconeChatExtensionConfiguration.
func (p *PineconeChatExtensionConfiguration) GetAzureChatExtensionConfiguration() *AzureChatExtensionConfiguration {
return &AzureChatExtensionConfiguration{
Type: p.Type,
}
}
// PineconeChatExtensionParameters - Parameters for configuring Azure OpenAI Pinecone chat extensions. The supported authentication
// type is APIKey.
type PineconeChatExtensionParameters struct {
// REQUIRED; The embedding dependency for vector search.
EmbeddingDependency OnYourDataVectorizationSourceClassification
// REQUIRED; The environment name of Pinecone.
Environment *string
// REQUIRED; Customized field mapping behavior to use when interacting with the search index.
FieldsMapping *PineconeFieldMappingOptions
// REQUIRED; The name of the Pinecone database index.
IndexName *string
	// If specified as true, the system will allow partial search results to be used, and the request fails only if all of the
	// queries fail. If not specified, or specified as false, the request will fail if any
	// search query fails.
AllowPartialResult *bool
// The authentication method to use when accessing the defined data source. Each data source type supports a specific set
// of available authentication methods; please see the documentation of the data
// source for supported mechanisms. If not otherwise provided, On Your Data will attempt to use System Managed Identity (default
// credential) authentication.
Authentication OnYourDataAuthenticationOptionsClassification
// Whether queries should be restricted to use of indexed data.
InScope *bool
// The included properties of the output context. If not specified, the default value is citations and intent.
IncludeContexts []OnYourDataContextProperty
	// The maximum number of rewritten queries that should be sent to the search provider for one user message. If not specified,
	// the system will decide the number of queries to send.
	MaxSearchQueries *int32
// Give the model instructions about how it should behave and any context it should reference when generating a response.
// You can describe the assistant's personality and tell it how to format responses.
// There's a 100 token limit for it, and it counts against the overall token limit.
RoleInformation *string
	// The configured strictness of the search relevance filtering. The higher the strictness, the higher the precision but
	// the lower the recall of the answer.
	Strictness *int32
// The configured top number of documents to feature for the configured query.
TopNDocuments *int32
}
// PineconeFieldMappingOptions - Optional settings to control how fields are processed when using a configured Pinecone resource.
type PineconeFieldMappingOptions struct {
// REQUIRED; The names of index fields that should be treated as content.
ContentFields []string
// The separator pattern that content fields should use.
ContentFieldsSeparator *string
// The name of the index field to use as a filepath.
FilepathField *string
// The name of the index field to use as a title.
TitleField *string
// The name of the index field to use as a URL.
URLField *string
}
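The generated types above all follow one discriminated-union shape: a base struct that carries only the `Type` discriminator, a classification interface, and concrete types whose `Get*` methods flatten down to the base. A minimal, self-contained sketch of that pattern — the type names here are illustrative, not the package's exported identifiers:

```go
package main

import "fmt"

type vectorizationSourceType string

// Base type: carries only the discriminator, mirroring OnYourDataVectorizationSource.
type vectorizationSource struct {
	Type *vectorizationSourceType
}

// Classification interface: every concrete source can be viewed as the base type.
type vectorizationSourceClassification interface {
	getVectorizationSource() *vectorizationSource
}

// The base type trivially implements its own classification interface.
func (v *vectorizationSource) getVectorizationSource() *vectorizationSource { return v }

// Concrete type: adds its own fields and flattens to the base on demand.
type endpointVectorizationSource struct {
	Endpoint *string
	Type     *vectorizationSourceType
}

func (e *endpointVectorizationSource) getVectorizationSource() *vectorizationSource {
	return &vectorizationSource{Type: e.Type}
}

func main() {
	t := vectorizationSourceType("endpoint")
	ep := "https://example.openai.azure.com/..."
	var src vectorizationSourceClassification = &endpointVectorizationSource{Endpoint: &ep, Type: &t}
	// Callers that only need the discriminator can flatten any concrete type.
	fmt.Println(*src.getVectorizationSource().Type) // endpoint
}
```

This is why the concrete `Get*` methods above return only `&Base{Type: o.Type}`: the base type is a projection, not a full copy.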

Diff not shown because of its large size.

@ -0,0 +1,92 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
package azopenaiextensions
import "encoding/json"
func unmarshalOnYourDataAuthenticationOptionsClassification(rawMsg json.RawMessage) (OnYourDataAuthenticationOptionsClassification, error) {
if rawMsg == nil || string(rawMsg) == "null" {
return nil, nil
}
var m map[string]any
if err := json.Unmarshal(rawMsg, &m); err != nil {
return nil, err
}
var b OnYourDataAuthenticationOptionsClassification
switch m["type"] {
case string(OnYourDataAuthenticationTypeAccessToken):
b = &OnYourDataAccessTokenAuthenticationOptions{}
case string(OnYourDataAuthenticationTypeAPIKey):
b = &OnYourDataAPIKeyAuthenticationOptions{}
case string(OnYourDataAuthenticationTypeConnectionString):
b = &OnYourDataConnectionStringAuthenticationOptions{}
case string(OnYourDataAuthenticationTypeEncodedAPIKey):
b = &OnYourDataEncodedAPIKeyAuthenticationOptions{}
case string(OnYourDataAuthenticationTypeKeyAndKeyID):
b = &OnYourDataKeyAndKeyIDAuthenticationOptions{}
case string(OnYourDataAuthenticationTypeSystemAssignedManagedIdentity):
b = &OnYourDataSystemAssignedManagedIdentityAuthenticationOptions{}
case string(OnYourDataAuthenticationTypeUserAssignedManagedIdentity):
b = &OnYourDataUserAssignedManagedIdentityAuthenticationOptions{}
default:
b = &OnYourDataAuthenticationOptions{}
}
if err := json.Unmarshal(rawMsg, b); err != nil {
return nil, err
}
return b, nil
}
func unmarshalOnYourDataVectorSearchAuthenticationOptionsClassification(rawMsg json.RawMessage) (OnYourDataVectorSearchAuthenticationOptionsClassification, error) {
if rawMsg == nil || string(rawMsg) == "null" {
return nil, nil
}
var m map[string]any
if err := json.Unmarshal(rawMsg, &m); err != nil {
return nil, err
}
var b OnYourDataVectorSearchAuthenticationOptionsClassification
switch m["type"] {
case string(OnYourDataVectorSearchAuthenticationTypeAccessToken):
b = &OnYourDataVectorSearchAccessTokenAuthenticationOptions{}
case string(OnYourDataVectorSearchAuthenticationTypeAPIKey):
b = &OnYourDataVectorSearchAPIKeyAuthenticationOptions{}
default:
b = &OnYourDataVectorSearchAuthenticationOptions{}
}
if err := json.Unmarshal(rawMsg, b); err != nil {
return nil, err
}
return b, nil
}
func unmarshalOnYourDataVectorizationSourceClassification(rawMsg json.RawMessage) (OnYourDataVectorizationSourceClassification, error) {
if rawMsg == nil || string(rawMsg) == "null" {
return nil, nil
}
var m map[string]any
if err := json.Unmarshal(rawMsg, &m); err != nil {
return nil, err
}
var b OnYourDataVectorizationSourceClassification
switch m["type"] {
case string(OnYourDataVectorizationSourceTypeDeploymentName):
b = &OnYourDataDeploymentNameVectorizationSource{}
case string(OnYourDataVectorizationSourceTypeEndpoint):
b = &OnYourDataEndpointVectorizationSource{}
case string(OnYourDataVectorizationSourceTypeModelID):
b = &OnYourDataModelIDVectorizationSource{}
default:
b = &OnYourDataVectorizationSource{}
}
if err := json.Unmarshal(rawMsg, b); err != nil {
return nil, err
}
return b, nil
}
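The three helpers above share one pattern: peek at the `type` discriminator in the raw JSON, pick the matching concrete type, unmarshal the full payload into it, and fall back to the base type for unknown discriminators. A standalone sketch of that pattern, with illustrative type and field names rather than the package's real ones:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type authOptions struct {
	Type string `json:"type"`
}

type apiKeyAuthOptions struct {
	Type string `json:"type"`
	Key  string `json:"key"`
}

type connStringAuthOptions struct {
	Type             string `json:"type"`
	ConnectionString string `json:"connection_string"`
}

func unmarshalAuthOptions(raw json.RawMessage) (any, error) {
	if raw == nil || string(raw) == "null" {
		return nil, nil
	}
	// First pass: decode just enough to read the discriminator.
	var m map[string]any
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	var b any
	switch m["type"] {
	case "api_key":
		b = &apiKeyAuthOptions{}
	case "connection_string":
		b = &connStringAuthOptions{}
	default:
		b = &authOptions{} // unknown discriminators still round-trip as the base type
	}
	// Second pass: decode the full payload into the chosen concrete type.
	if err := json.Unmarshal(raw, b); err != nil {
		return nil, err
	}
	return b, nil
}

func main() {
	v, err := unmarshalAuthOptions([]byte(`{"type":"api_key","key":"secret"}`))
	if err != nil {
		panic(err)
	}
	opts, ok := v.(*apiKeyAuthOptions)
	fmt.Println(ok, opts.Key) // true secret
}
```

Decoding twice costs an extra pass over the payload, but it keeps each concrete type a plain struct with ordinary `json` tags.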


@ -0,0 +1,13 @@
// Copyright (c) Microsoft Corporation.
// Licensed under the MIT License.
// This is a placeholder file to trigger environment variable setting via New-TestResources.ps1
@description('The base resource name.')
param baseName string = resourceGroup().name
@description('Which Azure Region to deploy the resource to. Defaults to the resource group location.')
param location string = resourceGroup().location
@description('The principal to assign the role to. This is application object id.')
param testApplicationOid string

sdk/ai/azopenaiextensions/testdata/.gitignore (vendored, new file, 3 lines)

@ -0,0 +1,3 @@
node_modules
generated
TempTypeSpecFiles


@ -0,0 +1,30 @@
{
"error": {
"message": "The response was filtered due to the prompt triggering Azure OpenAIs content management policy. Please modify your prompt and retry. To learn more about our content filtering policies please read our documentation: https://go.microsoft.com/fwlink/?linkid=2198766",
"type": null,
"param": "prompt",
"code": "content_filter",
"status": 400,
"innererror": {
"code": "ResponsibleAIPolicyViolation",
"content_filter_result": {
"hate": {
"filtered": false,
"severity": "safe"
},
"self_harm": {
"filtered": false,
"severity": "safe"
},
"sexual": {
"filtered": false,
"severity": "safe"
},
"violence": {
"filtered": true,
"severity": "medium"
}
}
}
}
}

sdk/ai/azopenaiextensions/testdata/genopenapi.ps1 (vendored, new file, 25 lines)

@ -0,0 +1,25 @@
Push-Location ./testdata
if (Test-Path -Path "TempTypeSpecFiles") {
Remove-Item -Recurse -Force TempTypeSpecFiles
}
npm install
if ($LASTEXITCODE -ne 0) {
Exit 1
}
npm run pull
if ($LASTEXITCODE -ne 0) {
Exit 1
}
npm run build
if ($LASTEXITCODE -ne 0) {
Exit 1
}
Pop-Location

sdk/ai/azopenaiextensions/testdata/package-lock.json (generated, vendored, new file, 1247 lines; diff not shown due to size)

sdk/ai/azopenaiextensions/testdata/package.json (vendored, new file, 16 lines)

@ -0,0 +1,16 @@
{
"name": "testdata",
"version": "0.1.0",
"type": "module",
"scripts": {
"pull": "pwsh ../../../../eng/common/scripts/TypeSpec-Project-Sync.ps1 -ProjectDirectory . && rm ./TempTypeSpecFiles/OpenAI.Inference/tspconfig.yaml",
"build": "tsp compile ./TempTypeSpecFiles/OpenAI.Inference"
},
"dependencies": {
"@azure-tools/typespec-autorest": "^0.44.1",
"@azure-tools/typespec-azure-core": "~0.44.0",
"@typespec/compiler": "^0.58.1",
"@typespec/openapi3": "~0.58.0"
},
"private": true
}

Binary data: sdk/ai/azopenaiextensions/testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.m4a (vendored, new file; binary file not shown)

Binary data: sdk/ai/azopenaiextensions/testdata/sampledata_audiofiles_myVoiceIsMyPassportVerifyMe01.mp3 (vendored, new file; binary file not shown)

sdk/ai/azopenaiextensions/testdata/tsp-location.yaml (vendored, new file, 3 lines)

@ -0,0 +1,3 @@
directory: specification/cognitiveservices/OpenAI.Inference
commit: cd41ba31a6af51dae34b0a5930eeb2e77a04b481
repo: Azure/azure-rest-api-specs

sdk/ai/azopenaiextensions/testdata/tspconfig.yaml (vendored, new file, 11 lines)

@ -0,0 +1,11 @@
parameters:
"service-dir":
default: "sdk/openai"
"dependencies":
default: ""
emit:
- "@azure-tools/typespec-autorest"
options:
"@azure-tools/typespec-autorest":
emitter-output-dir: "{project-root}/generated"
output-file: "openapi.json"


@ -0,0 +1,61 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.
package azopenaiextensions
import (
"encoding/json"
"fmt"
"reflect"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore"
)
type timeUnix time.Time
func (t timeUnix) MarshalJSON() ([]byte, error) {
return json.Marshal(time.Time(t).Unix())
}
func (t *timeUnix) UnmarshalJSON(data []byte) error {
var seconds int64
if err := json.Unmarshal(data, &seconds); err != nil {
return err
}
*t = timeUnix(time.Unix(seconds, 0))
return nil
}
func (t timeUnix) String() string {
return fmt.Sprintf("%d", time.Time(t).Unix())
}
func populateTimeUnix(m map[string]any, k string, t *time.Time) {
if t == nil {
return
} else if azcore.IsNullValue(t) {
m[k] = nil
return
} else if reflect.ValueOf(t).IsNil() {
return
}
m[k] = (*timeUnix)(t)
}
func unpopulateTimeUnix(data json.RawMessage, fn string, t **time.Time) error {
if data == nil || string(data) == "null" {
return nil
}
var aux timeUnix
if err := json.Unmarshal(data, &aux); err != nil {
return fmt.Errorf("struct field %s: %v", fn, err)
}
*t = (*time.Time)(&aux)
return nil
}


@ -0,0 +1,11 @@
//go:build go1.18
// +build go1.18
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License. See License.txt in the project root for license information.
package azopenaiextensions
const (
version = "v0.1.0"
)