Commit ec5ba7f: readme
HelgeSverre committed Oct 30, 2024 (1 parent: 4b39d6a)
Showing 1 changed file: README.md, with 13 additions and 8 deletions.
```php
TextractUsingS3Upload::cleanupFileUsing(function (string $filePath) {
    // Clean up the temporary file after Textract has processed it.
});
```
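
As a concrete example of what such a cleanup callback might look like (a minimal sketch, assuming the file was uploaded to a Laravel filesystem disk named `textract`; the disk name is an assumption, adjust it to your own configuration):

```php
use Illuminate\Support\Facades\Storage;

TextractUsingS3Upload::cleanupFileUsing(function (string $filePath) {
    // Delete the uploaded object from the (assumed) "textract" disk
    // once the OCR job is finished.
    Storage::disk('textract')->delete($filePath);
});
```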


**Note**

Textract is not available in all AWS regions; check the AWS regional services list for current availability.
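
If your default AWS region does not offer Textract, point the Textract client at one that does. A minimal sketch using the plain AWS SDK for PHP (independent of this package's own configuration; the chosen region is only an example):

```php
use Aws\Textract\TextractClient;

// Explicitly target a region where Textract is available
// (eu-central-1 is just an example).
$client = new TextractClient([
    'version' => 'latest',
    'region'  => 'eu-central-1',
]);
```
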
Available Models:

| Model Identifier | Model | Note |
|--------------------------------|--------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `Engine::GPT_4_OMNI_MINI`      | 'gpt-4o-mini'            | Small, cost-efficient version of GPT-4 Omni with JSON mode support                                                                                                                               |
| `Engine::GPT_4_OMNI` | 'gpt-4o' | Enhanced GPT-4 model with improved JSON mode and vision capabilities |
| `Engine::GPT_4_TURBO`          | 'gpt-4-turbo'            | GPT-4 Turbo model with a 128k-token context window, optimized for performance and efficiency                                                                                                     |
| `Engine::GPT_4_1106_PREVIEW` | 'gpt-4-1106-preview' | GPT-4 Turbo, featuring improved instruction following, JSON mode, reproducible outputs, parallel function calling. Maximum 4,096 output tokens. Preview model, not yet for production traffic. |
| `Engine::GPT_4_VISION` | 'gpt-4-vision-preview' | Multimodal model capable of processing both text and images |
| `Engine::GPT_3_TURBO_1106` | 'gpt-3.5-turbo-1106' | Updated GPT-3.5 Turbo, with improvements similar to GPT-4 Turbo. Returns up to 4,096 output tokens. |
| `Engine::GPT_O1_MINI`          | 'o1-mini'                | Smaller, faster o1 reasoning model, well suited for coding and math-heavy tasks                     |
| `Engine::GPT_O1_PREVIEW`       | 'o1-preview'             | Preview of the o1 reasoning model, which spends more time reasoning before responding               |
| `Engine::GPT_4` | 'gpt-4' | Large multimodal model, capable of solving complex problems with greater accuracy |
| `Engine::GPT4_32K` | 'gpt-4-32k' | Extended version of GPT-4 with a larger context window of 32,768 tokens |
| `Engine::GPT_3_TURBO_INSTRUCT` | 'gpt-3.5-turbo-instruct' | Similar to text-davinci-003, optimized for legacy Completions endpoint |
| `Engine::GPT_3_TURBO_16K` | 'gpt-3.5-turbo-16k' | Extended version of GPT-3.5 Turbo, supporting a larger context window of 16,385 tokens |
| `Engine::GPT_3_TURBO` | 'gpt-3.5-turbo' | Optimized for chat using the Chat Completions API |
| `Engine::TEXT_DAVINCI_003` | 'text-davinci-003' | **DEPRECATED** - Legacy model, no longer operational |
| `Engine::TEXT_DAVINCI_002` | 'text-davinci-002' | **DEPRECATED** - Legacy model, no longer operational |
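
As a quick illustration of how these enum cases map to the raw OpenAI model identifiers (a minimal sketch; the `HelgeSverre\Extractor\Enums\Engine` import and the string-backed enum shape are assumptions here, check the package source for the exact namespace, and the surrounding extraction call is omitted):

```php
use HelgeSverre\Extractor\Enums\Engine; // namespace assumed; adjust to where the Engine enum actually lives

// Each Engine case wraps the corresponding OpenAI model identifier string.
$engine = Engine::GPT_4_OMNI_MINI;

echo $engine->value; // "gpt-4o-mini"
```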

**`$maxTokens` (int)**
