- WeTransfer users were outraged when an updated terms of service appeared to imply that their data would be used to train AI models
- The company moved quickly to assure users that it does not use uploaded content for AI training
- WeTransfer has rewritten the clause in clearer language
The WeTransfer file-sharing platform spent a frantic day reassuring users that it has no intention of using uploaded files to train AI models, after an update to its terms of service suggested that anything sent through the platform could be used to build or improve machine learning tools.
The offending language buried in the ToS said that using WeTransfer gave the company the right to use the data "for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy."
The machine learning portion and the generally broad nature of the text seemed to suggest that WeTransfer could do whatever it wanted with users' data, without any specific protections or explanatory qualifiers to allay suspicions.
Perhaps understandably, many WeTransfer users, who include a lot of creative professionals, were upset about what this seemed to imply. Many began posting their plans to switch from WeTransfer to other services along the same lines. Others warned that people should encrypt files or go back to old-school physical delivery methods.
"Time to stop using @wetransfer, who from August 8 have decided they'll own anything you transfer to power AI" pic.twitter.com/syr1jnmemx — July 15, 2025
WeTransfer noticed the growing fury around the language and hastened to put out the fire. The company rewrote the ToS section and shared a blog post explaining the confusion, repeatedly promising that no one's data would be used without their permission, especially for AI models.
"From your feedback, we understood that it may not have been clear that you retain ownership and control of your content. We've since updated the terms further to make them easier to understand," WeTransfer wrote in the blog post. "We've also removed the mention of machine learning, as it's not something WeTransfer uses in connection with customer content and may have caused some apprehension."
While the terms still grant a standard license to improve WeTransfer, the new text omits references to machine learning, focusing instead on the familiar scope needed to run and improve the platform.
Privacy clarified
If this feels a bit like déjà vu, it's because something very similar happened about a year and a half ago with another file transfer platform, Dropbox. A change in the company's fine print implied that Dropbox was taking user-uploaded content to train AI models. The public outcry led Dropbox to apologize for the confusion and fix the offending boilerplate.
The fact that this happened again in such a similar way is interesting not because of the clumsy legal language software companies use, but because of what it implies: an instinctive distrust of these companies to protect our information. Assuming the worst is the default approach when there is uncertainty, and companies have to make an extra effort to ease those tensions.
Creative professionals are sensitive to even the appearance of data misuse. In an era in which tools such as DALL·E, Midjourney, and ChatGPT train on the work of artists, writers, and musicians, the stakes are very real. Lawsuits and boycotts by artists over how their creations are used, not to mention broader suspicions about corporate data use, mean the kinds of guarantees WeTransfer offered will likely be something technology companies want to have in place from the start, so they don't fall out of favor with their customers.