An open approach to GenAI leads to less shadow AI

Companies see artificial intelligence as a major opportunity to increase their economic performance and revolutionize the way their employees work, for example by making them more productive or saving them time through faster access to information. However, these benefits can only be realized with a coordinated AI strategy across the entire company.

The benefits of GenAI are tempting, but unregulated use of AI tools by employees can quickly lead to problems for companies. (Image: Dropbox.com)

71% of employees were already using AI tools before their managers knew about it. This unauthorized use of AI technology, which can access potentially sensitive company data, is known as "shadow AI": the use of generative AI products without IT oversight, via unprotected personal accounts that lack the security checks and data agreements that would apply to a corporate account. Without a comprehensive strategy and employee training, such a situation can expose the company to significant risks.

Leaving no one behind when introducing AI

One of the main problems with shadow AI is inappropriate and non-compliant use resulting from the lack of uniform corporate guidelines. This fragmented, individualized approach to adopting generative AI (GenAI) splits the company in two: on one side are the employees who use the latest technology without sharing it with their colleagues; on the other are the reluctant non-users who fall behind their supposedly more advanced peers. Without a holistic strategy and training on how to use these new tools, employees who experiment on their own may be frustrated at failing to gain useful insights or accurate results. These initial negative experiences carry the risk that employees abandon AI tools altogether.

Unauthorized use puts sensitive data at risk

A recent study by Veritas found that 31% of respondents admitted to sharing potentially sensitive information with generative AI tools. Business accounts for AI products typically include agreements ensuring that company data is not used to train AI models. Personal accounts, which are common in shadow AI, usually lack such agreements, so any company data shared through a personal account could inadvertently be used to train the AI model.

Securing company data should therefore always be a primary concern. Serious consequences can also arise when employees use these powerful tools without guidance or without exercising their own judgment. AI tools are still prone to erroneous or inaccurate results, and even "hallucinations". Relying on flawed output without questioning it can lead to wrong decisions and potential legal or financial repercussions for the company.

An AI strategy that sets rules but also invites experimentation

To meet these challenges, companies should pursue a coordinated AI strategy. It is important that IT teams identify trustworthy providers and agree on clear terms for handling sensitive data. Working with vendors that follow sound AI principles, including rules for data security and the prevention of data breaches, minimizes cyber risks and legal liabilities. For companies with sufficient resources, building a customized AI solution on top of existing large language models is also a viable option. The result is a powerful AI that integrates seamlessly into the company's data ecosystem and processes, increasing productivity and freeing up time for strategic tasks.

To get the most out of their AI investments, companies should also develop a comprehensive program that continuously trains employees in best practices for integrating AI into their daily work. This ensures that all employees can reap the benefits of AI technology. In every team there is an "early tech adopter" whose curiosity and passion put them ahead of colleagues who are more hesitant to experiment. Working with their IT teams, such employees can become AI champions within the organization, sharing learnings, best practices and insights with colleagues and fostering a collaborative learning environment.

Combining ethics and innovation

Within the framework of the company's AI strategy, automating routine tasks can help employees increase their performance and save time to focus on the work that brings the most value to the business. It is important to remember, however, that AI should not be used as a substitute for human intelligence and review. AI can now automate numerous tasks and generate large amounts of content within seconds, but employees still need to apply their own critical thinking: if they have not actually read the text the AI generated, or have not really thought through the problem they are trying to solve, they will only create bigger problems down the line. For all the AI euphoria, companies therefore need to keep considering the long-term ethical and social impact of AI on the workforce, while ensuring that AI complements human capabilities in a balanced way.

Author:


Christopher (Chris) Noon is Director and Global Head of Commercial Intelligence & Analytics (CIA) at Dropbox, where he leads the company's data science initiatives. His team develops tools to visualize customer engagement and identify trends. Before joining Dropbox, Chris was a lecturer in ancient history and archaeology at Oxford University. He moved from academia to the technology industry with the aim of using his expertise to bridge the gap between technology and education, efforts for which he was awarded a Fellowship of the Royal Society of Arts.
