The rapid proliferation of artificial intelligence (AI) systems across diverse sectors underscores the fundamental need for regulatory frameworks that address the ethical, legal, and social implications of their deployment. This article examines the challenges AI poses to traditional regulatory approaches, particularly concerning the key pillars of responsible AI (RAI): adherence to human rights, fairness, non-discrimination, explainability, and accountability. Recognizing the lag between technological advancement and regulatory development, we propose a third-party, system-level AI certification framework as an interim solution, designed to bridge the current regulatory gap and complement future legislation. Our work provides a comprehensive analysis of certification processes, detailing the key actors and mechanisms involved in auditing AI systems. Through a detailed case study of a pilot certification program in the financial industry, we offer insights into the practical implementation, challenges, and potential of such a framework. This research takes a first step towards establishing a recognized and actionable AI certification system aimed at aligning AI development with global standards. By charting a path towards responsible AI implementation, this work addresses the urgent need for governance mechanisms that keep pace with rapid technological advancement and ensure that AI systems are developed and deployed responsibly.
Emma Kallina
Emma is driven by a desire to design technology that enhances human well-being, not merely human performance. She started her PhD at the Institute...