Strengthening Protections for Users of Generative AI… KCC Releases Guidelines

By Global Team

The Korea Communications Commission (KCC) has announced guidelines to protect the rights of users of generative AI services. The measure aims to prevent harms arising from advances in AI technology and to create a safe service environment.

On the 28th, KCC Chairperson Lee Jin-sook unveiled the ‘Guidelines for User Protection of Generative AI Services’, aimed at providers and developers of generative AI services, which take effect on March 28. The guidelines are designed to protect users from problems such as privacy violations and the generation of discriminatory, biased, or false information as generative AI services proliferate.

The guidelines include basic principles that must be adhered to throughout the operation of services, as well as six actionable measures that developers and providers must follow.

◎ First, to protect users' dignity and personal rights, service providers are encouraged to build algorithms that check whether AI-generated content may violate those rights. Providers must recognize their responsibility for managing AI outputs and establish internal monitoring systems and user reporting procedures.

◎ Second, providers should clearly disclose that content was generated by AI and offer information about the AI's decision-making process, so that users can readily recognize AI-generated content.

◎ Third, providers should introduce filtering functions to prevent the generation of discriminatory or biased information and establish procedures for reporting such content when it arises; a minimal illustrative sketch of such a filter follows this list.

◎ Fourth, services that collect user data or use it for model training must have prior consent procedures in place, and a designated data management officer within the company must oversee data handling.

◎ Fifth, so that the responsible party can be clearly identified when problems occur, service providers should establish inspection systems and risk management frameworks to ensure users are not harmed.

◎ Sixth, internal controls should be prepared to prevent the creation and distribution of inappropriate content, with continual checks that AI outputs adhere to ethical and moral standards.
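For illustration only: the guidelines describe the third measure at the policy level and do not prescribe any implementation. The sketch below shows one minimal way a provider might screen generated text against a blocklist and record user reports for human review; the names (`BlocklistFilter`, `passes`, `report`) and the blocklist approach itself are hypothetical assumptions, not part of the KCC guidance.

```python
# Hypothetical sketch, not part of the KCC guidelines: a minimal output
# filter plus a user-reporting hook, one possible reading of the third
# measure. All names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class BlocklistFilter:
    blocked_terms: set                            # terms treated as discriminatory/biased
    reports: list = field(default_factory=list)   # user reports queued for human review

    def passes(self, text: str) -> bool:
        """Return True if the generated text contains no blocked term."""
        lowered = text.lower()
        return not any(term in lowered for term in self.blocked_terms)

    def report(self, text: str, reason: str) -> None:
        """User-reporting procedure: queue a flagged output for review."""
        self.reports.append({"reason": reason, "excerpt": text[:80]})

# Usage: screen a model output before displaying it, and let users report it.
flt = BlocklistFilter(blocked_terms={"example_slur"})
output = "a generated reply containing example_slur"
if not flt.passes(output):
    flt.report(output, "suspected discriminatory content")
```

A production system would of course rely on trained classifiers and human moderation rather than a static term list; the sketch only illustrates the filter-plus-reporting pattern the measure calls for.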

To enhance the effectiveness of the guidelines, the Korea Communications Commission also plans to share exemplary user protection practices from major AI services.

The announcement comes amid growing calls for policy responses to protect users as generative AI technology advances rapidly and raises potential social problems. Serious issues such as deepfake sex crimes using sophisticated manipulated video, the spread of discriminatory information, and AI-generated false information have underscored the need for responsible operation of AI services.

Guidelines for User Protection of Generative AI Services (provided by the Korea Communications Commission)

The Korea Communications Commission prepared the guidelines in collaboration with the Korea Information Society Development Institute (KISDI) and AI experts, drawing on analyses of domestic and international cases. The main contents were presented at the ‘AI Service User Protection Conference’ held last September, and discussions with major AI companies helped ensure the guidelines are realistic and actionable.

Shin Young-kyu, director of the Broadcasting and Communication User Policy Bureau at the Korea Communications Commission, expressed hope that the guidelines will help AI service providers develop more systematic user protection measures. “Ultimately, this will serve as a foundation for increasing trust in AI services and guaranteeing users’ rights,” he said.

The guidelines are available on the Korea Communications Commission’s website (www.kcc.go.kr). The commission plans to review their validity every two years from the effective date and make improvements as needed, and it will continue working to address new user protection issues arising from advances in AI technology and to create a safe AI service environment.
