Merge pull request #7486 from microsoft/isidorn/brilliant-galliform

update docs for token limit
Isidor Nikolic 2024-08-05 20:39:06 +02:00 committed by GitHub
Parents 6194ffa7d0 57b0899699
Commit 6e72013d96
No key found matching this signature
GPG key ID: B5690EEEBB952194
1 changed file with 1 addition and 1 deletion


@@ -196,7 +196,7 @@ Extension authors can choose which model is the most appropriate for their extension.
```typescript
const allModels = await vscode.lm.selectChatModels(MODEL_SELECTOR);
```
-> **Note**: All models have a limit of `4K` tokens. The returned model object from the `selectChatModels` call has a `maxInputTokens` attribute that shows the token limit. These limits will be expanded as we learn more about how extensions are using the language models.
+> **Note**: The recommended GPT-4o model has a limit of `6K` tokens. The returned model object from the `selectChatModels` call has a `maxInputTokens` attribute that shows the token limit. These limits will be expanded as we learn more about how extensions are using the language models.
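For context on the note being changed, here is a minimal sketch (not part of this diff) of how an extension might read `maxInputTokens` from the model returned by `selectChatModels` and pair it with `countTokens` to stay within the limit. The selector values and the helper name `pickModelWithinBudget` are illustrative assumptions, not part of the documented example:

```typescript
import * as vscode from 'vscode';

// Illustrative selector; adjust vendor/family to the chat model family you actually target.
const MODEL_SELECTOR: vscode.LanguageModelChatSelector = { vendor: 'copilot', family: 'gpt-4o' };

// Returns the first selected model whose input token limit can accommodate the prompt,
// or undefined if none of the selected models can fit it.
async function pickModelWithinBudget(promptText: string): Promise<vscode.LanguageModelChat | undefined> {
    const models = await vscode.lm.selectChatModels(MODEL_SELECTOR);
    for (const model of models) {
        // maxInputTokens reports the model's input token limit;
        // countTokens estimates how many tokens the prompt would consume.
        const tokenCount = await model.countTokens(promptText);
        if (tokenCount <= model.maxInputTokens) {
            return model;
        }
    }
    return undefined;
}
```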
### Rate limiting