AI certification and assurance from the company's perspective

The use of artificial intelligence (AI) in private and professional settings continues to grow. At the same time, regulation through AI certification and safeguarding is being discussed at the European level. The Fraunhofer Institutes IAO and IPA have therefore analyzed the current regulatory measures and summarized companies' requirements and needs in a new white paper.

The white paper "AI Certification and Safeguarding in the Context of the EU AI Act" (Image: www.ipa.fraunhofer.de)

In recent years, the importance of artificial intelligence (AI) has increased significantly in both private and professional life. AI has the potential to transform numerous industries and areas of our society by improving efficiency and quality across a wide range of use cases. However, its use also carries significant risks and uncertainties, such as algorithmic errors, liability risks, discrimination and data breaches. If AI-based systems are not developed, operated and tested according to uniform safety standards, they can compromise the safety of products and services.

To counter these challenges, the European Union (EU) presented a draft law for the EU AI Act in 2021, which aims to regulate AI. In June 2023, the EU Commission and EU Parliament agreed on a proposal and are currently negotiating its implementation with the EU member states. As the central framework for AI certification and assurance, the AI Act is also of key importance for the Innovation Park Artificial Intelligence (Ipai), which is building Europe's largest ecosystem for AI development. "In our Ipai ecosystem, we want to promote AI applications that are not only innovative and efficient, but also safe and ethically responsible. A practical understanding of the EU standards from the AI Act for AI safeguarding helps us and our member companies to take all the necessary precautions," emphasizes Moritz Gräter, CEO of Ipai.

In preparation for the upcoming EU AI Act, the Fraunhofer Institute for Industrial Engineering IAO and the Fraunhofer Institute for Manufacturing Engineering and Automation IPA have therefore examined the companies' perspective. The research team surveyed the current state of legal regulation on AI safeguarding. Based on the results of expert interviews, the team also formulated the requirements that companies, research institutions and educational institutions place on the implementation of safeguarding and certification processes for AI systems.

Lack of concrete requirements for AI safeguarding as an uncertainty factor for companies

Under the current draft of the EU AI Act, AI applications are to be categorized into different risk levels and subjected to correspondingly different requirements. Operators of high-risk AI applications will be obliged to verify their conformity with these requirements in a self-assessment and may then affix the CE marking as a certificate.

Although these requirements state very clearly what EU-AI-Act-compliant AI use should look like, it is still unclear how compliance can actually be achieved: concrete measures for verifying the requirements are currently lacking. "Companies have difficulties, for example, in assessing under what circumstances their AI application is transparent enough or what error rate is tolerable," explains Janika Kutz, team leader at the Research and Innovation Center for Cognitive Service Systems KODIS at Fraunhofer IAO. There is also concern that the effort required for certification will exceed the resources of start-ups and small and medium-sized enterprises in particular. Companies fear that legal expertise will be needed to implement the requirements correctly, and that compliance will drive up development times and costs to the point where European companies can no longer keep up with international competitors.

Companies formulate clear requirements for AI certification

An important finding of the interviews is that certification must be feasible for companies of all sizes. Safeguarding and certifying every use case is costly and resource-intensive, which is why companies place clear demands on AI safeguarding regulation. Factors such as the transparency and practicability of certification processes, clearly defined roles for authorities and institutions, and the preservation of innovative capacity are highlighted as particularly important. "The companies surveyed agree that AI certification should always focus on the added value for end users," says Prof. Dr. Marco Huber, Head of the Cyber Cognitive Intelligence department at Fraunhofer IPA, summarizing the interview results.

External support services for companies are in demand

Based on the interviewees' statements, most companies do not yet appear to be sufficiently prepared for the upcoming requirements of the EU AI Act. Companies can benefit from information and knowledge transfer as well as networking, and are also interested in individual consulting services and in practical methods and tools to support the safeguarding and certification of AI-based systems. Ongoing exchange between regulatory authorities, industry, research institutions and the general public can build a comprehensive understanding of the opportunities and challenges of AI use. This, in turn, will lead to AI systems that are more effective, safer and more practice-oriented.

Source: www.ipa.fraunhofer.de 

