token limit is now 16k
Parent: 22d953ca9f
Commit: d6b21fc043
@@ -194,7 +194,7 @@ Extension authors can choose which model is the most appropriate for their extension
```typescript
const allModels = await vscode.lm.selectChatModels(MODEL_SELECTOR);
```
-> **Note**: The recommended GPT-4o model has a limit of `6K` tokens. The returned model object from the `selectChatModels` call has a `maxInputTokens` attribute that shows the token limit. These limits will be expanded as we learn more about how extensions are using the language models.
+> **Note**: The recommended GPT-4o model has a limit of `16K` tokens. The returned model object from the `selectChatModels` call has a `maxInputTokens` attribute that shows the token limit. These limits will be expanded as we learn more about how extensions are using the language models.
### Rate limiting
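For reference, a minimal sketch of how an extension might read the `maxInputTokens` attribute that the changed note describes. The selector value below is illustrative only; the actual `MODEL_SELECTOR` used in the docs is not part of this commit.

```typescript
import * as vscode from 'vscode';

// Illustrative selector targeting the recommended GPT-4o family;
// the real MODEL_SELECTOR in the documentation may differ.
const MODEL_SELECTOR: vscode.LanguageModelChatSelector = {
  vendor: 'copilot',
  family: 'gpt-4o',
};

async function logModelLimits(): Promise<void> {
  const allModels = await vscode.lm.selectChatModels(MODEL_SELECTOR);
  for (const model of allModels) {
    // maxInputTokens reports the input token limit for this model
    // (16K for the recommended GPT-4o model, per this change).
    console.log(`${model.name}: maxInputTokens = ${model.maxInputTokens}`);
  }
}
```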