Someday, someone will come up with an equation that precisely defines the tipping point between our natural laziness and our willingness to give up personal privacy to alleviate some of it. I don’t think today is that day.
Years ago, I wrote about a robot that you could teleoperate from anywhere in the world via WiFi. We tested it in our offices and I even used it in my house, where it scared my family. Back then, we didn’t think too much about the privacy implications because it was me, not a third party, driving the robot and seeing what it could see.
This article is part of TechRadar’s AI Week 2025.
With ads for 1X’s still-unreleased robot appearing on subway billboards, and only limited testing so far in a few journalists’ homes, consumers are being asked to consider whether they would invite the 5’6″, 66-pound humanoid robot into their home. While the $20,000 robot (or $499 per month on an open-ended rental) is designed for autonomy, the reality is that it will face numerous unknown scenarios in your home. In those cases, 1X technicians can, with your permission, take over, teleoperate it, and apparently use the session to train the robot’s Redwood AI.
The news gives people food for thought even in casual conversation, but we decided to survey our almost half a million WhatsApp followers with this question:
“The 1X Neo is a new $20,000 home robot that can be controlled remotely by a human. But how about an AI robot learning skills based on your data at home?”
While the majority (409 people) said they were unsure about this but still thought “a cleaning robot would be fantastic,” a substantial number (340 respondents) were decidedly less optimistic, choosing: “It sounds horrible and a total violation of privacy.”
Another 73 described Neo as what they had “always dreamed of,” and only 48 were happy to let 1X and Neo do their training at home.
I understand the concern, and to be honest, it is far from new. In 2019, when Sony unveiled the latest update to its AIBO robot dog, some raised concerns about a mobile robot with a camera in its snout, built-in facial-recognition AI (useful for AIBO to remember familiar faces), and Sony’s access to the collected data.
At the time, Sony stored data locally and in its cloud, but processed it in a way that was not identifiable as personal information. Still, the robot couldn’t be sold in Illinois because its capabilities ran afoul of the state’s Biometric Information Privacy Act.
With far more powerful AI inside robots like 1X’s, one might assume those privacy concerns would only multiply.
I asked regulatory and technology attorney Kathleen McGee via email how concerned consumers should be.
McGee, who previously served as a government attorney, most recently as Chief of the Internet and Technology Bureau at the New York Attorney General’s Office, and is now a partner in Lowenstein Sandler’s data privacy, security, and risk management practice, told me that the data that companies like 1X collect “ranges from the mundane (where you put the dish soap) to the very personal (real-time video capture of your home, physical layout and images of you and your home’s occupants, including children). Any data collection that is so sensitive and continuous requires high-level security measures to ensure that data is anonymized, kept only when necessary, and that AI models are trained in accordance with ethical and legal standards.”
Clarity, McGee notes, is key. “Prospective users of these products need to be very clear about how data is used and shared, and what rights users have to delete it; when an AI model is created and trained with your sensitive data, it is virtually impossible to completely undo it.”
1X, however, makes it clear in its FAQ that while data collected from “real-world tasks” is used to build NEO’s base intelligence and increase both its capabilities and security, “we do not use this data to create a profile of you, nor do we sell it. If you do not want to participate in helping to further improve NEO, you can always opt out.”
However, data aside, a robotic camera attached to fully articulated limbs and hands raises the specter of a remote-controlled ransacking of your home. Reddit is well stocked with these concerns.
In a scathing post about the Neo robot’s privacy risks, Reddit user GrandyRetroCandy wrote:
“If the police go to the 1X office. They say ‘we have a warrant.’ They might order an operator to take control of the Neo Robot, and while you’re out shopping or out and about, they might have this robot go through your wallet. Your diary. Your house. Your drawers. And see everything about you.”
That sounds scary, but GrandyRetroCandy quickly clarified:
“Technically, that part is not legal. It’s technically possible (it could be done), but it’s not legal. But if they have a court order, they can see all the camera footage stored on your Neo Robot. That part is legal.”
McGee also told me: “Another general concern for these types of domestic products is the potential exposure of data that a user may believe is private to them, but which may be the subject of a subpoena, search warrant, or intrusion by a threatening actor. Concerns about user privacy cannot be divorced from security concerns.”
AI needs your data… and you need your privacy
Basically, the idea of someone suddenly using the 1X Neo to wander around your house and go through your stuff is improbable, if not impossible.
The truth is that humanoid robots will never be practical and useful without a good amount of data input from each user and household, especially in the early days when they are likely to make mistakes.
In the case of robotics and automation, one of the main advances in recent years has been simulated training. It has helped autonomous driving and many of these early humanoid robots. And yes, we can see how difficult it is to prepare humanoid robots for the unexpected.
At this point, the 1X Neo beta is so unprepared that most of its abilities are teleoperated. Preparing humanoid robots for the spotlight remains hard work. In Russia, the robot AIdol was so unready for the bright lights of fame that it fell over, spectacularly.
Freely giving away data that cannot be used to invade our privacy will help these robots learn and improve quickly, but there must be limits and controls.
Much of the responsibility falls on companies like 1X, especially those developing AI. As McGee noted in an email, “Many jurisdictions have privacy laws, and for AI developers, the focus should always be on compliance with the strictest regulations. Again, both ethics and the law have a place here, and we advise our clients to build a solid foundation of trust and transparency to ensure stability and longevity in their AI design.”
As of April of this year, only 20 US states have comprehensive data privacy laws. The EU, at least, has the GDPR (General Data Protection Regulation), which is strict enough that some AI technologies have been restricted across its 27 member countries. The UK retains a near-identical regulation of its own.
There’s probably a middle ground between what we have here in the US and the GDPR, but the intent should be the same: the safe training of an army of humanoid robots that know how to help us, and even do our household chores for us without raising massive privacy alarms.