AWS is launching Bedrock, an AI service that will let users build generative AI applications on top of foundation models from AI21 Labs, Anthropic, Stability AI, and Amazon.
Generative AI models such as ChatGPT have taken the technology world by storm as they threaten to permeate the mainstream. With the announcement of Bedrock, it’s clear that Amazon’s ready to go all-in just as big tech competitors Microsoft and Google have.
Bedrock will allow AWS users to build generative AI applications from foundation models (FMs). GPT-4 is an example of such a model, with ChatGPT being a generative AI application built on top of it.
According to a blog post announcing the service, Bedrock is “a serverless experience” where users can “privately customize FMs with their own data, and easily integrate and deploy them into their applications.”
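For developers, that pitch boils down to calling hosted models through the AWS SDK rather than running any infrastructure. The snippet below is a rough, hypothetical sketch only: it assumes boto3’s “bedrock-runtime” client and an illustrative Titan model ID, and request and response schemas vary by model provider.

```python
# Hypothetical sketch of calling a Bedrock-hosted foundation model via boto3.
# The model ID and request body are illustrative assumptions; schemas differ per provider.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",  # illustrative Titan model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "inputText": "Summarize the key features of a serverless architecture.",
        "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.5},
    }),
)

# The response body is a stream of JSON; Titan-style models return a "results" list.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```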
To coincide with Bedrock’s launch, Amazon also announced Titan, a pair of new foundation models developed in-house by Amazon’s machine learning teams.
Details about Titan are scarce at the moment, with Amazon representatives keeping technical specifications under wraps. However, AWS vice president Bratin Saha told reporters that Amazon’s been using “a fine-tuned version” of Titan to surface search results on the company’s homepage.
Users won’t be limited to Amazon’s in-house FMs, though. The company also announced Bedrock integration for some of the industry’s most popular models, including AI21 Labs’ Jurassic-2, a multilingual LLM, and Claude, Anthropic’s conversational agent built with the company’s “Constitutional AI” training method.
Bedrock will also provide on-platform API access to Stability AI’s models, including Stable Diffusion, a popular text-to-image generator.
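The same invoke_model pattern would, in principle, cover image generation as well. The sketch below is purely illustrative and assumes a Stability AI model ID and request body shape that may differ from what Bedrock actually exposes.

```python
# Hypothetical sketch of generating an image with a Stability AI model on Bedrock.
# The model ID and request body below are assumptions; schemas vary by model version.
import base64
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="stability.stable-diffusion-xl-v1",  # illustrative Stability AI model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "text_prompts": [{"text": "an astronaut riding a horse, oil painting"}],
        "cfg_scale": 7,
        "steps": 30,
    }),
)

# Stability-style responses carry base64-encoded images in an "artifacts" list.
payload = json.loads(response["body"].read())
image_bytes = base64.b64decode(payload["artifacts"][0]["base64"])
with open("output.png", "wb") as f:
    f.write(image_bytes)
```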
Amazon may be arriving a bit late to the party, with the launch of GPT-4 now a month in the rearview, but Bedrock and Titan could prove troublesome for the incumbent sector leaders thanks to the near-ubiquity of AWS and the ease of use it provides.
The costs of training a generative AI model can be enormous. And once a model has been trained on a given dataset, it is essentially ‘dirty’ with that data, potentially prone to “hallucinating” information from it in response to unrelated queries.
Related: Elon Musk reportedly buys thousands of GPUs for Twitter AI project
Bedrock lets users get around this problem by giving them the option to use pre-existing FMs as a backbone and bring their own data on top. For customers already on AWS, and those bringing their data into the AWS ecosystem, this means their data remains as secure as it normally is on Amazon’s cloud and is never injected into the underlying models’ training datasets.
Per the Amazon announcement, “none of the customer’s data is used to train the underlying models, and since all data is encrypted and does not leave a customer’s Virtual Private Cloud (VPC), customers can trust that their data will remain private and confidential.”
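To make that concrete, a private customization job might be kicked off roughly as follows. This is a hedged sketch: the bucket names, IAM role and model identifiers are placeholders, and it assumes boto3’s “bedrock” control-plane client exposes a create_model_customization_job call along these lines.

```python
# Hypothetical sketch of privately customizing a foundation model on Bedrock.
# Bucket names, the role ARN and model identifiers are placeholders; the
# create_model_customization_job call is assumed from boto3's "bedrock" client.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="support-tickets-finetune-001",
    customModelName="acme-titan-support",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    # Training data stays in the customer's own S3 bucket / VPC.
    trainingDataConfig={"s3Uri": "s3://acme-private-data/tickets.jsonl"},
    outputDataConfig={"s3Uri": "s3://acme-private-data/custom-model-output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)
```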
With Bedrock and the debut of its two new in-house large language models (LLMs), Amazon is hoping to turn the generative artificial intelligence (AI) duel between Google’s Bard and Microsoft/OpenAI’s ChatGPT into a battle royale.
As Titan and Bedrock get set to join the first wave of public-facing generative AI models, tech outfits around the world are readying their own entrances. Chinese tech firm Alibaba is set to debut its own AI chatbot, called “Tongyi Qianwen,” in hopes of bringing some competition to the Western corporations currently dominating the space.

Baidu, another Chinese tech company, attempted to break through with the launch of Ernie Bot, its answer to ChatGPT and Google’s Bard, but poor reception to the product caused a roughly 10% tumble in the company’s shares.