* Prepping for release tomorrow
* Updating with required changelog section.
* Updating to the 2024-08-01-preview API version.
* Bump release date to probable Thursday release.
* Skip some tests if we're not doing code generation.
* Updating dependencies.
* Of course, the streaming version can throttle as well.
* The listing API depends on the "after this assistant" ID existing. Paging over the list of assistants from an arbitrary point is hazardous - now I always start from a known assistant ID (my own).
* Adding in the same throttling checks that we have in OpenAI, since it's the same underlying resources that are oversubscribed.
* Updating so our code compiles under the latest alpha from OpenAI
* Stop examples from running
* Regenerated to accommodate the latest API version (2024-07-01-preview), which adds in new batching and file APIs.
* Updated help text for the function parameters.
* Tests were literally doing nothing and weren't needed - we create clients all over the place using all the constructors.
* Remove empty section.
* Uploading files works, and the other routes related to it _should_ work. Still need to test some more.
* Fixing a spot where passing in a pointer is no longer needed.
* Very basic test for batches. The timing of the API makes it hard for us to test normally.
* Updated to handle an anonymous type
* Using latest commit from PR.
* Updates for merged spec.
* Fixed generation.
* Removing orphaned model.
* Updated recordings.
* Skipping test.
* Reducing coverage until we can re-enable file tests.
* Added comment about workaround.
* Removing orphaned type.
* Updated changelog.
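The assistants paging fix above (always starting from a known assistant ID rather than an arbitrary point in the list) can be sketched generically. The `listPage` and `listAllAfter` helpers below are hypothetical stand-ins for the real list API, not the SDK's actual signatures - the point is just that anchoring the `after` cursor at an ID you own makes the walk deterministic:

```go
package main

import "fmt"

// listPage simulates a cursor-based list API: it returns up to limit IDs
// that come strictly after the item with ID `after` ("" means the start).
func listPage(ids []string, after string, limit int) []string {
	start := 0
	if after != "" {
		for i, id := range ids {
			if id == after {
				start = i + 1
				break
			}
		}
	}
	end := start + limit
	if end > len(ids) {
		end = len(ids)
	}
	return ids[start:end]
}

// listAllAfter pages through everything after a known anchor ID,
// advancing the cursor to the last ID of each page.
func listAllAfter(ids []string, anchor string) []string {
	var out []string
	after := anchor
	for {
		page := listPage(ids, after, 2)
		if len(page) == 0 {
			return out
		}
		out = append(out, page...)
		after = page[len(page)-1]
	}
}

func main() {
	ids := []string{"asst_mine", "asst_a", "asst_b", "asst_c"}
	fmt.Println(listAllAfter(ids, "asst_mine")) // [asst_a asst_b asst_c]
}
```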
---------
Co-authored-by: Richard Park <ripark@microsoft.com>
* Incorporate common sub config changes to live testing
* Remove federated auth field from sdk yaml files
* Default federated auth mode to true in pipeline templates
* Remove secret based sub configs from yaml template
* Fix fed auth defaults for ci testing
* Fix missing service connection in overridden cloud configs
* Add missing file paths parameters
Updating to use the rawjson-as-bytes config option. It doesn't change anything in our client, as it is today, because we've properly typed all 'any' fields.
Also, made Charles a CODEOWNER for this folder.
Fixes #21009
* Updated changelog, added in examples for more complex scenarios.
* Noticed I was using context.Background() sometimes in examples, but it should always be context.TODO()
* Bump version for breaking change that Joel pointed out!
* go get -u all
* Change date to TBD - we're still discussing when the final spec changes are going to be in.
* Adding in a specific example that uses the code interpreter and renaming the old file now that we have more than one full file example.
* go get -u and updating the example file so it'll render as an entire file, not just a function or segment of code.
Originally: Make sure all the examples are runnable.
Then, lots of miscellany as quite a few test/CI related things came in at once:
* Account for oversubscribed resources for audio and visual (again)
* Fix to no longer require godotenv (just load through VSCode's test feature).
* go get -u and go mod tidy
* Skipping tests that are breaking because of sanitization of the ID field.
It was getting difficult to tell what was and wasn't covered and also correlate tests to models, which can sometimes come with differing behavior.
So in this PR I:
- Moved things that don't need to be env vars (i.e., non-secrets) to just be constants in the code - this includes model names and generic server identifiers for regions that we use to coordinate new feature development.
- Consolidated all of the setting of endpoint and models into one spot to make it simpler to double-check.
- Consolidated tests that tested the same thing into sub-tests with OpenAI or AzureOpenAI names.
- If a function was only called by one test, moved it into the test as an anonymous func.
Also, I added in a test for logit_probs and logprobs/toplogprobs.
bufio.Scanner has an implicit max size for its internal buffer, but SSE has no restriction on how large chunks can be.
We need to allow for arbitrarily large chunks and, luckily, bufio.Reader can already handle that.
Updating doc comments based on feedback from an issue, as well as previous PR feedback. Also, the rev we regen'd from brings in the `ChatCompletions.Model` field that had previously been omitted.
Fixes #22642. Fixes #22664.
- Adds in the Dimensions parameter for Embeddings, allowing greater control over the size of the returned embeddings slice.
- Allow controlling the format the embeddings come back as, avoiding potential errors from deserializing large JSON float arrays.
Fixes#22483