- Anthropic's Claude chatbot now has an on-request memory feature
- The AI will recall past chats only when a user specifically asks
- The feature is rolling out first to Max, Team, and Enterprise subscribers before expanding to other plans
Anthropic has given Claude a memory upgrade, but it only activates when you choose to use it. The new feature allows Claude to recall past conversations, giving the AI chatbot the information it needs to pick up earlier projects and apply what you've discussed before to your next conversation.
The update is reaching Claude's Max, Team, and Enterprise subscribers first, though it will likely become more widely available at some point. If you have it, you can ask Claude to search previous messages tied to your workspace or project.
However, unless you explicitly ask, Claude won't glance back at all. That means Claude maintains a generic sort of persona by default. That's a deliberate privacy choice, according to Anthropic: Claude can recall your discussions when you want it to, without trawling through your dialogue uninvited.
By comparison, OpenAI's ChatGPT automatically stores past chats unless you opt out, and uses them to shape its future responses. Google Gemini goes further, drawing on your AI conversations as well as your search history and Google account data, at least if you let it. Claude's approach leaves no trail of breadcrumbs back to previous conversations unless you ask it to.
Claude remembers
Adding memory may not seem like a big deal. Even so, you'll feel the impact immediately if you've ever tried to restart a project after days or weeks away without a helpful assistant, digital or otherwise. Making it opt-in is a nice touch that accommodates how comfortable people currently are with the idea.
Many people may want AI help without handing control to chatbots that never forget. Claude sidesteps that tension cleanly by remembering only what you deliberately summon.
But it isn't magic. Since Claude doesn't retain a personalized profile, it won't proactively remember to prepare for events mentioned in other chats, or anticipate style shifts when you're writing to a colleague versus drafting a public sales presentation, unless you request it mid-conversation.
Additionally, if problems do arise with this memory approach, Anthropic's staged rollout will let the company correct any errors before the feature reaches all Claude users. It will also be worth watching whether building long-term context, as ChatGPT and Gemini are doing, proves more appealing or more off-putting to users compared with Claude's way of making memory an on-request aspect of using the AI chatbot.
And that's assuming it works perfectly. Retrieval depends on Claude's ability to surface the right excerpts, not just the most recent or longest chat. If the summaries are muddled or the context is wrong, you could end up more confused than before. And while the friction of having to ask Claude to use its memory is a benefit, it still means you'll have to remember the feature exists, which some may find annoying. Even so, if Anthropic is right, a small boundary is a good thing, not a limitation. And users will be glad that Claude remembers that, and nothing more, without being asked.