Cynthia Lummis proposes RISE Act, an AI bill that requires transparency in exchange for legal immunity

Senator Cynthia Lummis (R-WY) has introduced the Responsible Innovation and Safe Expertise (RISE) Act of 2025, a legislative proposal designed to clarify liability frameworks for artificial intelligence (AI) used by professionals.

The bill would bring transparency requirements to AI developers, though it does not require models to be open source.

In a press release, Lummis said the RISE Act would mean that professionals, such as doctors, lawyers, engineers and financial advisors, remain legally responsible for the advice they provide, even when it is informed by AI systems.

At the same time, the AI developers who create those systems can only shield themselves from civil liability when things go wrong if they publicly release model cards.

The proposed bill defines model cards as detailed technical documents that disclose an AI system's training data sources, intended use cases, performance metrics, known limitations and possible failure modes. All of this is intended to help professionals evaluate whether the tool is appropriate for their work.
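The bill describes disclosure categories rather than a file format, but the required fields can be pictured as a simple structured document. The sketch below is purely illustrative: the field names, example values, and the `missing_disclosures` helper are assumptions for demonstration, not anything defined in the legislation.

```python
# Illustrative sketch only: the RISE Act lists disclosure categories but does
# not define a schema. All field names and example values are hypothetical.
model_card = {
    "training_data_sources": ["publicly licensed medical literature"],
    "intended_use_cases": ["drafting clinical notes for physician review"],
    "performance_metrics": {"benchmark_accuracy": 0.94},  # example figure
    "known_limitations": ["not validated for pediatric cases"],
    "failure_modes": ["may fabricate citations under ambiguous prompts"],
}

# The five disclosure categories named in the bill's definition of a model card.
REQUIRED_FIELDS = [
    "training_data_sources",
    "intended_use_cases",
    "performance_metrics",
    "known_limitations",
    "failure_modes",
]

def missing_disclosures(card: dict) -> list[str]:
    """Return which disclosure categories are absent or empty in a card."""
    return [field for field in REQUIRED_FIELDS if not card.get(field)]

print(missing_disclosures(model_card))  # → []
```

A professional evaluating a tool could use a check like this to see at a glance which disclosures a vendor's card omits.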

“Wyoming values both innovation and accountability; the RISE Act creates predictable standards that encourage safer AI development while preserving professional autonomy,” Lummis said in a press release.

“This legislation does not create blanket immunity for AI,” Lummis continued.

However, the immunity granted under the bill has clear limits. The legislation excludes protection for developers in cases of recklessness, willful misconduct, fraud, knowing misrepresentation, or when actions fall outside the defined scope of professional use.

In addition, developers face a continuing duty of accountability under the RISE Act. AI documentation and specifications must be updated within 30 days of deploying new versions or discovering significant failure modes, reinforcing ongoing transparency obligations.

Stops short of open source

The RISE Act, as currently written, stops short of requiring AI models to become fully open source.

Developers can withhold proprietary information, but only if the redacted material is unrelated to safety, and each omission must be accompanied by a written justification explaining the trade-secret exemption.

In an earlier interview with CoinDesk, Simon Kim, the CEO of Hashed, one of South Korea's leading venture capital funds, spoke about the danger of centralized, closed-source AI that is effectively a black box.

“OpenAI is not open, and it is controlled by very few people, so it is quite dangerous. Making this type of [closed source] foundational model is similar to making a ‘god’, but we don’t know how it works,” Kim said at the time.
