Thomson Reuters Enhances AI Results with Fine-Tuning of Claude 3 Haiku on Amazon Bedrock
Thomson Reuters has begun fine-tuning the Claude 3 Haiku model on Amazon Bedrock to improve the relevance of AI results in its industry. This follows the general availability of fine-tuning for Claude 3 Haiku on Amazon Bedrock as of November 1, 2024.
Fine-tuning offers several benefits: improved performance on specialized tasks, faster responses at lower cost, more consistent output formatting, an easy-to-use API, and secure handling of proprietary training data. Thomson Reuters aims to leverage these advantages to make AI results faster and more relevant.
The fine-tuning process involves creating a custom model from high-quality prompt-completion pairs, allowing customers such as Thomson Reuters to tailor the model's knowledge and capabilities to their business needs. The service initially launched in preview in the US West (Oregon) AWS Region and supports text-based fine-tuning with context lengths of up to 32K tokens.
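To make the prompt-completion idea concrete, here is a minimal sketch of preparing training data as JSONL (one JSON record per line), the general shape Bedrock customization jobs consume. The example records and the simple `prompt`/`completion` field names are illustrative assumptions; the exact schema Bedrock expects for Claude 3 Haiku is defined in the AWS documentation and uses a chat-style message format.

```python
import json

# Hypothetical training examples; real fine-tuning data would contain many
# high-quality, domain-specific pairs curated by the customer.
pairs = [
    {"prompt": "Summarize the key point of the attached filing.",
     "completion": "The filing argues that ..."},
    {"prompt": "Classify this document: 'Motion to dismiss'",
     "completion": "procedural_motion"},
]

def to_jsonl(records):
    """Serialize records as JSONL: one JSON object per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

jsonl_blob = to_jsonl(pairs)

# Sanity check: every line must round-trip back to its original record.
for line, original in zip(jsonl_blob.splitlines(), pairs):
    assert json.loads(line) == original
```

In practice, the resulting file would be uploaded to Amazon S3 and referenced when starting a model customization job in Bedrock; the sketch above covers only the data-preparation step.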
SK Telecom has already seen success with this approach: after fine-tuning a Claude model for customer support, it recorded a 73% increase in positive feedback and a 37% improvement in key performance indicators. While the full list of companies fine-tuning Claude 3 Haiku on Amazon Bedrock has not been publicly disclosed, Anthropic has announced the availability of the Claude 3 models, including Haiku, on the platform.
In short, Thomson Reuters is fine-tuning Claude 3 Haiku on Amazon Bedrock to deliver faster, more relevant AI results in its industry. The move follows the general availability of fine-tuning for the model, which offers stronger performance on specialized tasks at higher speed, and SK Telecom's customer-support gains highlight what fine-tuning can deliver for businesses.