Alexa has just gone from being an ordinary, if reliable, digital assistant to a powerful enigma. Alexa+ replaces the original's flat interactions with conversational intelligence and proactivity. It's a lot to take in, and it raises many questions. I managed to get some answers from Daniel Rausch, Vice President of Alexa and Echo at Amazon and one of the people who helped build Alexa+.
Although I met Rausch at the end of a long day, he seemed energized and clearly very proud of Amazon's creation.
I started by asking him about something that has nagged me since the launch: how much of Alexa+ is Claude? Anthropic, Claude's developer, was promoted as a partner in the development of Alexa+ and was mentioned during the presentation alongside Amazon Nova, Amazon's own large language model.
Rausch, however, quickly disabused me of that notion.
“You saw Amazon Nova models there, and they are definitely where we started at Amazon,” Rausch began. “We always start with our own technology.”
Rausch explained that these models offer “incredible price performance, incredible latency, incredible accuracy.”
Bedrock principles
It goes beyond that, though. Rausch explained that Amazon's Bedrock is a sort of cloud-based foundation for all of its generative AI work. Anthropic is a “really important partner,” said Rausch, but the system Amazon built is “model agnostic.”
“Bedrock’s goal is to serve state-of-the-art, highly capable models,” he said. That means the system will pick and choose the best models for the job.
This may also have been Rausch's way of not identifying which models, including Claude, are used when. “With Alexa,” he added, “we have access to the full suite.”
Still, large language models are not the end of the story. Rausch told me that the Alexa+ experience extends and builds on them. He discussed the idea of “information experts” presented during the launch event. The models use these experts to gather relevant, factual information. Rausch used the example of his constant queries about baseball, specifically the Yankees, but added that the system is smart enough to know that, in his home, only he likes to talk endlessly about baseball, while his daughter has no interest.
“I would say the models are helping orchestrate the overall experience; they are the foundation, and they are helping us build the rest,” Rausch told me.
That AI art
During the kids' section of the presentation, the part describing the new “Explore and Stories with Alexa,” I noticed what appeared to be generative AI art.
Rausch confirmed that Alexa+ generates that art on the fly based on children's ideas. During development, Rausch put the tools in front of some relatives' children to gauge their reactions: “It's super fun. A kid is describing the story, Alexa is helping them explore: ‘Hey, what would you like to write a story about?'” For example, the child describes a bearded dragon playing a saxophone. “Alexa is generating some creative artwork, asking about the path of the story: ‘Where does the bearded dragon live?' or ‘Which city is the bearded dragon visiting?' Of course, kids are unlimited in their imagination.”
Sounds like fun. I noticed that the images I saw resembled generative AI, and Rausch told me that's exactly what they were. However, he would not reveal which generative image model Alexa+ is using. All Rausch would say is that “it is one of the models in Bedrock.”
Security
As with any generative AI, the key to Alexa+'s usefulness is data, or rather, your data. Generative tasks will be handled in the cloud, but Rausch told me that everything will be encrypted “in transit.”
“It is incredibly secure and meets our standard trust practices overall, which include deep security and privacy,” he added.
“We always start with our own technology.”
Daniel Rausch, vice president of Amazon Alexa and Echo
Naturally, this led me to ask about the kids' technology and the safeguards Amazon has built around generative image creation.
Rausch described it as “incredibly safe” and said there are many safeguards in place to ensure that “kids always stay safe.”
I know many companies claim their generative image platforms are safe, but Amazon has a track record of developing kid-friendly platforms and systems. “Explore and Stories with Alexa” is an extension of all that work.
Not always a screen
It was hard to ignore the pervasive use of the Echo Show 21 smart display throughout the Alexa+ demonstration. Naturally, I've been wondering about all the smart speakers, at least those new enough to support Alexa+. What will the experience be like on them?
During the demos, Amazon actually lengthened the responses on the Echo Show 21's screen to show off what Alexa knows. But nobody wants to stand staring at a speaker waiting for those answers. Rausch told me that Alexa+ is designed to give more concise responses on speakers. All of this is customizable.
“The idea is that we choose the right types of answers and the right interactions for the device and the modality you're in,” he added.
Is Amazon ready?
When Amazon delivers Alexa+ in March, it could mean millions of Echo owners suddenly having generative AI conversations with the newest chatbot on the block. That's potentially a huge computational lift. Is Amazon ready?
“Yes,” Rausch said with a smile. “It's very nice to have AWS at Amazon.”
Amazon's massive cloud computing platform supports countless websites and services, and it will now provide the bandwidth for Alexa+. Even so, it won't be everything, everywhere, all at once: Amazon plans to roll out Alexa+ in waves.
And now I feel I'll understand Alexa+ a little better when those waves first arrive on the digital shore.