EU Approves World’s First ‘AI Law’: Effective As Early As Next Month

The European Union has approved the world's inaugural 'AI law', which could take effect as early as next month. The final hurdle is a vote in the European Parliament, to be held at the plenary session in March or April. Full application is expected in 2026.

On the 2nd (local time), the 27 European Union (EU) member states unanimously approved the 'AI Act', the world's first artificial intelligence (AI) regulation law. The AI law is expected to go into effect before summer at the earliest, following a final vote in the European Parliament next month.

“Today, member states supported the political agreement reached in December, recognizing the perfect balance that negotiators found between innovation and safety,” EU Internal Market Commissioner Thierry Breton said.

Previously, in December last year, the AI Act cleared three-party negotiations between the EU Council, the European Commission, and the European Parliament, the most important hurdle in the EU legislative process.

The final hurdle for the AI Act is the European Parliament's voting process. The law is scheduled to take effect after a vote by the responsible parliamentary committee on the 13th and a final vote at the plenary session in March or April. Reuters explained that it is expected to take 12 to 24 months for the legislated AI law to apply in each member state, so actual application is expected in 2026. Violations of the regulations will result in fines of up to 35 million euros (approximately 50.5 billion won) or 7% of global revenue.

When the AI regulation law, first proposed by the European Commission three years ago, takes full effect, transparency obligations around the use of AI technology will be strengthened. Generative AI developers must comply with transparency obligations before putting their technology on the market. Companies that use high-risk technologies, such as self-driving cars, must also disclose information. In addition, deepfake photos and videos created by AI must carry a public disclosure that they were generated by AI.

After the draft was proposed, powerful generative AI models such as OpenAI's ChatGPT and Google's Bard emerged, so regulatory provisions covering so-called general-purpose AI were added. The law also prohibits using AI to gather biometric data for facial recognition databases that categorize individuals by sensitive attributes such as political or religious beliefs, sexual orientation, and race.

Reuters reported that the agreement underscored the need for new regulation as deepfake abuse emerged as a major concern. Referring to the recent controversial deepfake incident involving global pop star Taylor Swift, EU Commission Executive Vice President Margrethe Vestager said the incident "speaks volumes" about the damage AI can cause if misused, the liability of platforms, and why enforcing technology regulation is so important.
