- Report finds "responsible AI" (and similar) claims are increasing in job advertisements
- Legal, education, mathematics and R&D sectors use the terms the most
- Regulatory pressure may not be driving the trend; could "responsible AI" be just a keyword?
New data suggests that, despite strengthening regulations, it is corporate image and branding, not policy compliance, that mainly drive mentions of responsible AI in job advertisements.
An analysis of job platforms, which searched postings for terms such as "responsible", "ethics", "AI ethics", "AI governance" and related phrases, found only a weak correlation (0.21) between the strength of a country's national AI regulation and mentions of responsible AI in job postings.
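To make the 0.21 figure concrete: a Pearson correlation of that size means regulation strength explains only a small fraction of the variation in responsible-AI mentions. The sketch below shows how such a coefficient is computed; the per-country numbers are hypothetical placeholders, not the study's data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical illustrative values, NOT the study's data:
# an index of national AI-regulation strength per country...
regulation_strength = [0.9, 0.8, 0.5, 0.4, 0.2, 0.7, 0.3]
# ...and the share of AI job ads mentioning responsible-AI terms.
responsible_ai_share = [0.006, 0.004, 0.007, 0.003, 0.005, 0.002, 0.004]

r = pearson(regulation_strength, responsible_ai_share)
print(round(r, 2))  # a value near 0 indicates a weak relationship
```

A coefficient near 1 would indicate that stricter regulation reliably accompanies more responsible-AI mentions; the reported 0.21 sits much closer to zero, consistent with the study's conclusion that regulation is not the main driver.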
Human-centered occupations in legal, education, mathematics and R&D were among the sectors most likely to use such terms, while technology companies were more likely to discuss AI in broader terms.
Is responsible AI just a keyword?
Although responsible AI terms are increasing worldwide (from roughly 0% in 2019), they still appear in less than 1% of AI-related ads on average.
The Netherlands, the United Kingdom, Canada, the United States and Australia lead the way; however, the researchers noted that highly regulated countries such as the UK and those within the European Union showed no significantly higher use of these keywords than more lightly regulated countries.
In fact, the differences were more pronounced between labor sectors than between regions, with the legal sector (6.5%) well above average.
The analysis of responsible AI mentions in job postings worldwide suggests that regulatory pressure alone may be insufficient to drive widespread adoption of these keywords, and that such mentions are more likely part of market-based incentives and corporate responsibility strategies.
"This suggests that other factors, such as reputational concerns or international business strategies, may be driving responsible AI adoption as much as, or more than, regulatory requirements," the researchers said.
As public concern about AI risks grows, these terms may serve as signaling tools aimed at customers, investors and the wider market, rather than reflecting deep internal change and commitment.