Amazon will offer options to companies that want to use a foundational LLM in AWS and then customize it for their own proprietary data, company needs, and customer experience.
Earlier this year, Amazon announced that it is building a new large language model (LLM) to power Alexa. The aim is to compete with generative AI tools like ChatGPT and Microsoft 365 Copilot, which have been stealing Alexa’s thunder as the go-to personal assistant. This is likely good news for many Alexa users.
Critics believe Alexa has stagnated. In one high-profile example, Toyota recently decided to phase out its Alexa integration and consider adding ChatGPT to its in-house voice assistant, a move that underscores why Amazon is refocusing its efforts.
During an earnings call, CEO Andy Jassy said Amazon has “conviction” about building the world’s best personal assistant, but admitted that doing so is difficult across many domains and a broad surface area. The advent of large language models and generative AI, however, should make a better personal assistant possible. Jassy explained that Amazon is building an LLM that is much larger and more generalized than its previous model, and therefore more capable. The company believes this new technology will significantly accelerate its vision of building the world’s best personal assistant.
Jassy also discussed Amazon Web Services (AWS) and the company’s AI offerings. Amazon has been investing heavily in LLMs for several years, as well as in custom machine-learning chips optimized for LLM workloads: Trainium specializes in model training, while Inferentia specializes in inference. Amazon recently released the second generation of both chips, and Jassy expects a large share of machine learning training and inference to run on AWS.
Bedrock, a generative AI service Amazon also announced recently, is a managed foundation-model service where customers can run models from Amazon or from leading LLM providers such as AI21 Labs, Anthropic, and Stability AI. They can then customize those models on their own proprietary data while keeping the same security and privacy controls they use for the rest of their AWS applications.
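To make the Bedrock idea concrete, here is a minimal sketch of what invoking a hosted foundation model could look like from Python. This assumes an InvokeModel-style API exposed through boto3 and an Anthropic-style JSON request body; the model ID, request fields, and client name are assumptions for illustration, not confirmed details of the service described above.

```python
import json

# Hypothetical model identifier -- actual Bedrock model IDs are an assumption here.
MODEL_ID = "anthropic.claude-v2"

def build_invoke_request(model_id: str, prompt: str, max_tokens: int = 256) -> dict:
    """Package a prompt into the JSON request shape assumed for an
    InvokeModel-style call against an Anthropic text model."""
    body = {
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    }
    return {
        "modelId": model_id,
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps(body),
    }

# Actually sending the request requires AWS credentials and Bedrock access,
# so the network call is shown in comments rather than executed:
#
#   import boto3
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(**build_invoke_request(MODEL_ID, "Hello"))
#   print(json.loads(response["body"].read()))

request = build_invoke_request(MODEL_ID, "Summarize our Q3 sales report.")
print(request["modelId"])
```

The point of the sketch is the division of labor Bedrock promises: the customer supplies only a model ID and a prompt, while hosting, scaling, and the security boundary stay inside their existing AWS account.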