OpenAI has announced a series of updates and enhancements to its models and developer products, as well as price reductions across much of its platform. Here is a summary of the most significant announcements:
- GPT-4 Turbo: A more capable version of GPT-4 with knowledge of world events up to April 2023. It has a 128K-token context window, enabling it to process the equivalent of over 300 pages of text in a single request. Its optimized performance has also allowed a 3x price reduction for input tokens and a 2x reduction for output tokens compared with GPT-4.
- Updates to function calling: Developers can describe functions from external applications or APIs to the model, and the model can intelligently choose to output a JSON object with the arguments needed to call them. Multiple functions can now be called in a single message (sketched after this list).
- Improved instruction following and JSON mode: GPT-4 Turbo is better at tasks that require carefully following instructions and now supports a new JSON mode, which ensures that the model responds with valid JSON (example after the list).
- Reproducible outputs and log probabilities: A new `seed` parameter enables reproducible outputs (illustrated below). A feature that returns the log probabilities of the most likely output tokens will also be launched.
- Updated GPT-3.5 Turbo: A new version of GPT-3.5 Turbo has been released that defaults to a 16K context window and includes improvements in instruction following and JSON mode.
- Assistants API: Allows developers to build agent-like experiences within their own applications. The API persists the state of conversation threads and introduces tools such as the Code Interpreter and Retrieval (sketched below).
- GPT-4 Turbo with vision: This version can accept images as inputs, enabling use cases such as caption generation and image analysis (example below).
- DALL·E 3: Developers can now integrate DALL·E 3 directly into their applications via the Images API (see the image-generation sketch below).
- Text-to-speech (TTS): A new TTS model offers six preset voices and two model variants, generating human-quality speech from text (see the TTS sketch below).
- Experimental access to GPT-4 fine-tuning: An experimental access program for fine-tuning GPT-4 has been created; however, early results indicate that more work is required to achieve meaningful improvements over the base model than was the case with GPT-3.5 fine-tuning.
- Custom models: For organizations that require more customization than fine-tuning can offer, a Custom Models program has been launched, in which selected organizations work with OpenAI researchers to train a custom GPT-4 tailored to a specific domain.
- Price reduction: Various prices have been lowered across the platform, passing on savings to developers.
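The sketches below illustrate how several of the items above look in practice. They assume the v1 OpenAI Python SDK and an `OPENAI_API_KEY` in the environment, and they use the model names from the DevDay announcement (`gpt-4-1106-preview`, `gpt-3.5-turbo-1106`, `gpt-4-vision-preview`, `dall-e-3`, `tts-1`); treat them as minimal sketches rather than definitive implementations.

Parallel function calling: the `get_weather` tool below is a hypothetical function used only to show how a function is described to the model; the model may answer with several `tool_calls` in a single message.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe a (hypothetical) external function to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What's the weather in Paris and in Tokyo?"}],
    tools=tools,
)

# With parallel function calling, one assistant message can carry several tool calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```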
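JSON mode is enabled through the `response_format` parameter; a minimal sketch (note that the prompt itself should also ask the model to produce JSON):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant that replies in JSON."},
        {"role": "user", "content": "List three primary colors with a short note on each."},
    ],
    response_format={"type": "json_object"},  # constrains the output to valid JSON
)

print(response.choices[0].message.content)  # should parse cleanly with json.loads
```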
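Reproducible outputs: passing the same `seed` with the same prompt and parameters is intended to make responses mostly deterministic, and the response carries a `system_fingerprint` that can be compared across calls. A sketch using the updated GPT-3.5 Turbo:

```python
from openai import OpenAI

client = OpenAI()

def sample(seed: int) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",  # updated GPT-3.5 Turbo, 16K context by default
        messages=[{"role": "user", "content": "Tell me a short joke about compilers."}],
        seed=seed,
        temperature=0,
    )
    # If the fingerprint changes between calls, the backend configuration changed
    # and identical outputs are no longer expected, even with the same seed.
    print("fingerprint:", response.system_fingerprint)
    return response.choices[0].message.content

print(sample(42))
print(sample(42))  # expected to match the first call in most cases
```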
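The Assistants API (in beta at launch) separates assistants, threads, and runs, keeping conversation state on the server side. A sketch that creates an assistant with the Code Interpreter tool and answers one user message:

```python
import time
from openai import OpenAI

client = OpenAI()

# An assistant bundles instructions, a model, and tools (Code Interpreter here).
assistant = client.beta.assistants.create(
    name="Math helper",
    instructions="Answer questions, running Python code when calculation helps.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

# Threads hold the conversation state persistently.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is 1234 * 5678?"
)

# A run executes the assistant against the thread; poll until it finishes.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)  # newest message first
```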
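GPT-4 Turbo with vision accepts images as content parts in an ordinary chat message; a sketch (the image URL is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in one sentence."},
            # Images can be passed by URL or as base64-encoded data URLs.
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=200,
)

print(response.choices[0].message.content)
```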
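DALL·E 3 is reached through the Images API simply by setting the model name; a minimal sketch:

```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # temporary URL for the generated image
```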
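The text-to-speech endpoint takes a model, one of the six preset voices, and the input text, and returns audio; a sketch using the `tts-1` variant and the `alloy` voice:

```python
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",   # "tts-1-hd" is the higher-quality variant
    voice="alloy",   # one of the six preset voices
    input="Hello from the new text-to-speech API.",
)

# The response is binary audio (MP3 by default); write it to disk.
speech.stream_to_file("hello.mp3")
```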
These updates mark significant advances both in the technical capabilities of the models and in their accessibility and versatility for developers looking to incorporate these tools into their applications.
https://openai.com/blog/new-models-and-developer-products-announced-at-devday