A Better Path Forward for AI by Addressing Training, Governance and Risk Gaps

Megan Maneval
Author: Megan Maneval, CISM, CRISC, Vice President of Product Strategy and Evangelism at RiskOptics
Date Published: 7 May 2024

In an age in which artificial intelligence (AI) permeates virtually every facet of our lives, the imperative for robust oversight has never been more essential. ISACA recently surveyed 3,270 IT audit, risk, governance, privacy and cybersecurity professionals about artificial intelligence, covering knowledge gaps, policies, risks, jobs and more. The survey on the state of AI sheds light on some pressing concerns and opportunities in the realm of AI, highlighting the need for more comprehensive training and stringent ethical standards.

The Need for Training

Despite the rapid adoption of AI technology, there remains a significant gap in available training and guidance. The survey reveals that only a quarter of respondents feel extremely or very familiar with AI, while 46% consider themselves beginners. This lack of proper enablement extends into organizational training programs, with 40% of organizations offering no AI training at all. Even more concerning is that training is typically reserved for those in technical roles, leaving most of the workforce unprepared for the evolving digital landscape.

This discrepancy poses a risk not just to the effective and efficient use of AI but also to the ethical implications that come with its deployment. Without a well-informed workforce, organizations may fail to recognize or address potential biases and security risks inherent in AI systems.

The Governance Gap

The survey also points to a startling gap in AI governance oversight. Only 15% of organizations have a formal policy governing the use of AI technology. This stark deficit in governance is coupled with the finding that merely 34% of respondents believe their organizations adequately prioritize AI ethical standards, and only 32% say that security concerns, such as data privacy and bias, are being adequately addressed.

The survey’s findings underscore the critical need for robust AI governance frameworks that do more than simply guide the technical deployment of AI. These frameworks must ensure that AI is used ethically, transparently and in alignment with organizational goals. Effective governance should encompass not only policy development and enforcement but also continuous monitoring and adaptation as AI technologies and their organizational impacts evolve.

Addressing AI Risk

The proliferation of AI technologies introduces significant risks that organizations must urgently address. A substantial 60% of respondents to the ISACA survey are very or extremely worried about the potential for generative AI to be exploited by bad actors, including the creation of more sophisticated phishing attacks. Additionally, 81% of respondents identify misinformation and disinformation as the biggest risk associated with AI. Despite these risks, only 20% feel confident in their ability to detect AI-powered misinformation, and just 23% believe their organizations are equipped to handle these challenges effectively.

Perhaps most troubling is that only 35% of those surveyed view addressing AI risk as an immediate priority for their organization. This gap between the recognition of AI risk and the prioritization of mitigative actions underscores the need for a strategic approach to AI risk management. Organizations must not only acknowledge these risks but also actively integrate risk management into their AI governance frameworks, ensuring they have the processes and tools in place to detect, respond to and mitigate AI-related threats effectively.

Your Path Forward

To bridge these gaps, organizations must prioritize the development of formal AI governance frameworks that address not only the operational aspects of AI but also its broader impacts on the organization. This includes establishing clear guidelines for AI usage, data handling, and the mitigation of risks and biases. Equally important is the expansion of AI training programs across all levels of the organization, ensuring that every employee is equipped to use AI tools not only effectively but also responsibly.

As AI continues to reshape the landscape of modern business, I'm optimistic about the future of digital trust in an AI-driven world. By fostering an environment of continuous learning and diligent oversight, we can harness the transformative power of AI while safeguarding the principles of digital trust.

Editor's note: ISACA is addressing the AI knowledge gap with new courses on AI essentials, auditing generative AI and AI governance. For more information, visit 6j5q.thedairyking.com/ai.

Additional Resources