AI has advanced rapidly in recent years. From a commercial standpoint, we’re now witnessing the influence generative AI is having on businesses, both positive and negative. While ChatGPT and Bard have proven to be useful tools for developers, marketers, and consumers, they also risk inadvertently disclosing sensitive and confidential information.

As a result, from a security standpoint, it always pays to plan ahead and anticipate what might happen next.

“Interactive AI” is one of the most recent advances in AI technology, and Mustafa Suleyman, co-founder of DeepMind, described it as “a huge shift in what technology can do.” To put it simply, interactive AI goes beyond data analysis and user instructions in the form of prompts: it is far more responsive and adaptable when engaging with humans and other technological tools.

As we continue to explore this new area of AI, it is critical that we keep in mind the security dangers and implications it poses for companies. As cybersecurity professionals, it is our responsibility to maintain control over the technology and to establish clear guidelines and constraints on its capabilities.

Interactive AI can be used for activities like geolocation and navigation or speech-to-text applications, ushering in the next generation of chatbots and digital assistants. While generative AI tools can write code, perform computations, and engage in human-like conversations, interactive AI can do the same while also engaging directly with users and other tools to carry out tasks.

What We Have Learned From The GenAI Phase

When considering the security implications of advancements in AI technology, such as interactive AI, we must first address existing concerns about generative AI models and large language models (LLMs). These include ethical considerations, political and ideological biases, unfiltered models, and offline functionality.

Specifically, ethical considerations refer to the need to prevent LLMs from engaging in unethical or inappropriate behaviour.

By putting their models through a process of ‘instruction tuning’, developers have been able to construct restrictions and guardrails that ensure AI systems refuse requests for dangerous or immoral content. As interactive AI develops and gains more autonomy than generative AI models, we must make certain that these policies and safeguards stay in place to prevent AI from interacting with harmful, objectionable, or unlawful information.
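As a rough illustration of what such a guardrail layer can look like in practice (a minimal, hypothetical sketch; the category list, the classify check, and the call_model parameter are assumptions for illustration, not any vendor’s actual safeguards), a policy wrapper can screen both the prompt and the model’s output before anything reaches the user:

    # Hypothetical guardrail wrapper around an LLM call; all names are illustrative.
    DISALLOWED = {"malware", "weapons", "personal data"}

    def classify(text: str) -> set:
        """Toy policy check: flags text that mentions a disallowed topic."""
        return {topic for topic in DISALLOWED if topic in text.lower()}

    def guarded_response(prompt: str, call_model) -> str:
        # Screen the incoming prompt before the model ever sees it.
        if classify(prompt):
            return "This request has been declined by policy."
        answer = call_model(prompt)  # call_model stands in for any LLM API
        # Screen the output as well, in case the model produces flagged content.
        if classify(answer):
            return "The generated response was withheld by policy."
        return answer

In a real deployment the classify step would be a trained safety classifier or a provider’s moderation service rather than a keyword match, but the shape of the control stays the same: both input and output are checked against an explicit policy.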

Moreover, unfiltered AI chatbots pose a huge security concern because they operate outside the limits imposed by closed models such as ChatGPT. One distinguishing element of these models is their offline functionality, which makes usage tracking challenging. This lack of control should raise red flags for security professionals, as users may engage in illegal activities without detection.

Businesses that want to engage with interactive AI must learn from these concerns about the generative wave as they adopt the technology’s next generation.

Best Practice For Business Security

As with any new technology, organisations must collaborate with IT and security teams, as well as their workers, to create strong security measures to manage the related risks. 

This might include the following as best practice:

Adopting a data-first strategy: This approach, especially within a Zero Trust framework, prioritises data security within the business. By identifying and understanding how data is stored, used, and moved across an organisation, and by controlling who has access to that data, security teams can respond quickly to threats such as unauthorised access to sensitive data (a simple sketch of such an access check follows this list).

Strict access controls: With hybrid and distributed workforces, these are crucial to preventing unauthorised users from interacting with and exploiting AI systems. Alongside continuous monitoring and intelligence gathering, limiting access helps security teams identify and respond to potential security breaches promptly. This is a more effective approach than outright blocking tools, which can lead to shadow IT risk and productivity losses.

Collaborating with AI: At the opposite end of the scale, AI and machine learning can also play a significant role in enhancing business security and productivity, aiding security teams by simplifying security processes and improving their effectiveness so they can focus their time where it’s most needed. For employees, adequate training on the safe and secure use of AI tools is a must, while also recognising the inevitability of human error.

Establishing clear ethical guidelines: Organisations should set out clear rules for the use of AI within their business. This includes addressing any biases and ensuring they have built-in policies and guardrails to prevent AI systems from producing or engaging with harmful content. This is now an ongoing process: businesses have created corporate policies for AI tools, including those leveraging existing GPTs and proprietary AI tools, and these policies govern usage and data protection. Large enterprises should look to fine-tune their own LLMs, which requires expanded AI corporate policies and security policies to protect proprietary company data.
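To make the data-first and access-control practices above more concrete, the sketch below shows how a classification-aware access check and an audit log might fit together. It is a minimal, hypothetical example: the classification labels, role clearances, and logging setup are assumptions for illustration, not a reference to any particular product.

    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("access-audit")

    # Illustrative classification map and role entitlements (assumptions).
    DATA_CLASSIFICATION = {"customer_records.csv": "confidential",
                           "press_release.docx": "public"}
    ROLE_CLEARANCE = {"analyst": {"public", "internal"},
                      "dpo": {"public", "internal", "confidential"}}

    def request_access(user: str, role: str, resource: str) -> bool:
        """Grant access only if the role's clearance covers the data's label,
        and record every decision so the security team can review it."""
        label = DATA_CLASSIFICATION.get(resource, "confidential")  # default to most restrictive
        allowed = label in ROLE_CLEARANCE.get(role, set())
        audit.info("%s allowed=%s user=%s role=%s resource=%s label=%s",
                   datetime.now(timezone.utc).isoformat(), allowed,
                   user, role, resource, label)
        return allowed

    # Example: an analyst is denied confidential data, and the decision is logged.
    request_access("jdoe", "analyst", "customer_records.csv")

The point is not the code itself but the pattern: data is labelled, access decisions follow the labels rather than the tool requesting them, and every decision leaves a trail that monitoring can act on.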

While interactive AI represents a huge advancement, companies must exercise caution in this new territory.

AI is here to stay – that’s a fact. By putting best practices and strong security measures into effect, such as embracing a data-first strategy, businesses can harness the advantages of new developments in AI while reducing the danger of exploitation, helping to achieve a more ethical and responsible AI-powered future.

Jason Kemmerer is Solutions Architect – Data Security and Insider Risk at Forcepoint

Source: Cyber Security Intelligence