In the AI era, and specifically the era of generative AI (GenAI), the conversation has shifted from eagerness to concern, with many leaders calling for regulation of the sector.
With the technology’s evolution vastly outpacing existing legislation, governments at every level find themselves in a relentless game of catch-up, struggling to craft regulations that can keep pace with AI’s rapid and continual growth. There is a critical need to bridge the gap between AI’s capabilities and the pillars of safety, transparency and accountability that underpin public trust and societal welfare.
Local governments are increasingly recognizing the importance of regulating AI to protect consumer rights, including the need for fairness in AI decision-making in critical areas such as credit scoring, employment and law enforcement. Local regulations could also focus on preventing discrimination, ensuring accuracy in AI algorithms and protecting consumers from unfair or deceptive practices.
At the national level, the U.S. federal government has also recognized the critical importance of overseeing AI development and use.
A significant milestone in this regard is the Biden Administration’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
At the international level, AI regulation could extend, among other things, to the challenges posed by both legitimate and illegitimate actors. Legitimate actors include governments and corporations that may use AI in ways that, while legal, raise ethical or security concerns. Illegitimate actors refer to individuals or groups that utilize AI for harmful purposes, such as cybercrime, misinformation campaigns, or other forms of digital manipulation.
Source: Forbes