The European Commission is committed to regulating the use of AI. The risk-based approach at the core of the AI regulation will be supported by European standards, some adapted from ISO/IEC and others developed by European Standardization Organizations such as CEN, CENELEC, and ETSI. To provide the EU with the proper set of standards, a top-down approach is being developed by CEN-CENELEC JTC 21, taking European specificities into account. Those specificities include the need for actionable standards, Europe's very broad scope of risk, and the EU regulation timeline, which requires harmonized standards to be approved by the end of 2024.
NIST contributes to the research, standards, and data required to realize the full promise of artificial intelligence (AI) as a tool that will enable American innovation, enhance economic security, and improve our quality of life. Much of NIST’s work focuses on cultivating trust in the design, development, use, and governance of AI technologies and systems. NIST is doing this by—
• Conducting fundamental research to advance trustworthy AI technologies and understand and measure their capabilities and limitations
• Establishing benchmarks and developing data and metrics to evaluate AI technologies
• Leading and participating in the development of technical AI standards
• Contributing to discussions and development of AI policies
Emerging technologies, including artificial intelligence, are changing the speed and complexity of society. Laws and regulations face a difficult issue: how to keep up with this change. Goal-based governance addresses the issue better than rule-based governance, but it comes with another difficulty: a large gap between goals and operations. The AI Governance Guidelines published by the Ministry of Economy, Trade and Industry bridge this gap. They help companies improve AI governance, that is, the design and operation of technological, organizational, and social systems by stakeholders for the purpose of managing the risks posed by the use of AI at levels acceptable to stakeholders and maximizing its positive impact. The speaker will explain the AI Governance Guidelines and their background.
The Risk Evolution, Detection, Evaluation, and Control of Accidents (REDECA) framework was introduced in 2021 to highlight the role that artificial intelligence (AI) plays in the anticipation and control of exposure risks in a worker’s immediate environment. In this talk, we present a case study that details the implementation of the REDECA framework to improve the occupational safety of agricultural workers. We identify the related safety issues using a systematic process and offer AI solutions that can improve the associated safety metrics.
After presenting several AI case studies in the safety, health, and well-being domains, I will explain IBM’s challenges in managing AI quality to support these case studies.
This presentation covers NEC’s digital business initiatives from the three perspectives of “business process,” “technology,” and “competency,” as well as related case studies.
Fujitsu researches and develops a range of advanced and trusted AI technologies to make the world more sustainable and to build a better society. This talk will introduce Fujitsu’s Trusted AI technologies: explainable AI, AI quality, and AI ethics, focusing on cases from the healthcare and industrial plant domains.
Hitachi published a white paper on AI ethics to promote the responsible management of AI technology. It aims to use AI technology to realize a safe and secure environment for society and the workplace. In this talk, I would like to introduce these concepts and some use cases.