- Microsoft has published its 2025 Responsible AI Transparency Report
- The report outlines the company's plans to build and maintain responsible AI practices
- New regulations on AI use are coming, and Microsoft wants to be ready
With AI and large language models (LLMs) increasingly used across many parts of modern life, the reliability and security of these models have become a key consideration for companies such as Microsoft.
The company has set out its approach to the future of AI in its 2025 Responsible AI Transparency Report, outlining how it expects the technology to evolve in the coming years.
Just as AI has become more widely adopted by businesses, we have also seen a wave of regulations worldwide aimed at establishing the safe and responsible use of AI tools, along with AI governance policies that help companies manage the risks associated with AI use.
A practical approach
In the report, the second following an initial release in May 2024, Microsoft sets out how it has made significant investments in responsible AI tools, policies, and practices.
These include expanded risk management and mitigation for "modalities beyond text, like images, audio, and video, and additional support for agentic systems", as well as taking a "proactive, layered approach" to new regulations such as the EU AI Act, providing customers with materials and resources so they are prepared to meet incoming requirements.
Consistent risk management, oversight, review, and red-teaming of AI and generative AI releases sit alongside continued research and development to "inform our understanding of sociotechnical issues related to the latest advancements in AI", with the company's AI Frontiers Lab helping Microsoft "push the frontier of what AI systems can do in terms of capability, efficiency, and safety".
As AI progresses, Microsoft says it plans to build more adaptable tools and practices, and to invest in risk management systems that "provide tooling and practices for the most common risks across deployment scenarios."
That is not all, however, as Microsoft also plans to deepen its work on incoming regulations by supporting effective governance across the AI supply chain.
The company says it is also working internally and externally to "clarify roles and expectations", as well as continuing its work on "AI risk measurement and evaluation and the tooling to operationalize it at scale", sharing its progress with the broader ecosystem to support safer norms and standards.
"Our report highlights new developments related to how we build and deploy AI systems responsibly, how we support our customers and the broader ecosystem, and how we learn and evolve," said Teresa Hutson, CVP, Trusted Technology Group, and Natasha Crampton, Chief Responsible AI Officer.
"We look forward to hearing your feedback on the progress we have made and the opportunities to collaborate on all that remains to be done. Together, we can advance AI governance efficiently and effectively, fostering trust in AI systems at a pace that matches the opportunities ahead."