Merge pull request #7486 from microsoft/isidorn/brilliant-galliform
update docs for token limit
This commit is contained in:
Commit
6e72013d96
@@ -196,7 +196,7 @@ Extension authors can choose which model is the most appropriate for their extension

 ```typescript
 const allModels = await vscode.lm.selectChatModels(MODEL_SELECTOR);
 ```

-> **Note**: All models have a limit of `4K` tokens. The returned model object from the `selectChatModels` call has a `maxInputTokens` attribute that shows the token limit. These limits will be expanded as we learn more about how extensions are using the language models.
+> **Note**: The recommended GPT-4o model has a limit of `6K` tokens. The returned model object from the `selectChatModels` call has a `maxInputTokens` attribute that shows the token limit. These limits will be expanded as we learn more about how extensions are using the language models.

 ### Rate limiting
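The note above says the token budget is exposed via the model's `maxInputTokens` attribute. As a minimal sketch of how an extension might use that number, here is a hypothetical standalone helper (the `fitsInContext` function and the rough 4-characters-per-token estimate are illustrative assumptions, not part of the VS Code API):

```typescript
// Hypothetical helper: given a model's maxInputTokens value (as read from
// the object returned by vscode.lm.selectChatModels), estimate whether a
// prompt is likely to fit. The 4-chars-per-token heuristic is only a rough
// assumption; real tokenizers vary by model.
function fitsInContext(maxInputTokens: number, prompt: string): boolean {
  const estimatedTokens = Math.ceil(prompt.length / 4);
  return estimatedTokens <= maxInputTokens;
}

// A short prompt fits comfortably under a 4K-token limit:
console.log(fitsInContext(4096, "Explain this function."));
```

In a real extension the `maxInputTokens` value would come from the selected model object rather than a hard-coded constant, and oversized prompts would be truncated or summarized before sending.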