AI Chatbots

The World of AI Chatbots – Could They Be Harmful to Your Company?


AI Chatbots powered by large language models (LLMs) are not just the world’s new favourite pastime.

The technology is increasingly being recruited to boost workers’ productivity and efficiency, and given its growing capabilities, it’s poised to replace some jobs entirely, in areas as diverse as coding, content creation, and customer service.

Many companies have already tapped into LLM algorithms, and chances are that yours will follow suit in the near future. In other words, in many industries it is no longer a case of “to bot or not to bot”.

But before you rush to welcome the new “hire” and use it to streamline some of your business workflows and processes, there are a few questions you should ask yourself.

 

Is it safe for my company to share data with an LLM?


LLMs are trained on large quantities of text available online, which then helps the resulting model to interpret and make sense of people’s queries, also known as prompts. However, every time you ask a chatbot for a piece of code or a simple email to your client, you may also hand over data about your company.

“An LLM does not (as of writing) automatically add information from queries to its model for others to query,” according to the United Kingdom’s National Cyber Security Centre (NCSC). “However, the query will be visible to the organisation providing the LLM. Those queries are stored and will almost certainly be used for developing the LLM service or model at some point.”

This could mean that the LLM provider or its partners are able to read the queries and may incorporate them in some way into future versions of the technology. Chatbots may never forget or delete your input, since access to more data is what sharpens their output. The more input they are fed, the better they become, and your company or personal data may be caught up in those calculations and accessible to those at the source.

Perhaps in order to help dispel data privacy concerns, OpenAI introduced the ability to turn off chat history in ChatGPT in late April. “Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar,” its developers wrote on the OpenAI blog.

Another risk is that queries stored online may be hacked, leaked, or accidentally made publicly accessible – a risk that applies to any third-party provider.

 

What are some known flaws in chatbots?


Every time a new technology or software tool becomes popular, it attracts hackers like bees to honey. When it comes to LLMs, their security has seemed tight so far, but there have been a few exceptions.

OpenAI’s ChatGPT made headlines in March due to a leak of some users’ chat history and payment details, forcing the company to take ChatGPT offline temporarily on March 20th. The company revealed on March 24th that a bug in an open-source library allowed “some users to see titles from another active user’s chat history”.

“It’s also possible that the first message of a newly-created conversation was visible in someone else’s chat history if both users were active around the same time,” according to OpenAI. “Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window,” the blog reads.

Security researcher Kai Greshake and his team also demonstrated how Bing Chat, Microsoft’s LLM-powered chatbot, could be turned into a ‘social engineer’ that can, for example, trick users into giving up their personal data or clicking on a phishing link.

 

Have some companies already experienced LLM-related incidents?


In late March, the South Korean outlet The Economist Korea reported on three separate incidents at Samsung Electronics.

While the company had asked its employees to be careful about what information they enter into their queries, some of them accidentally leaked internal data while interacting with ChatGPT.

One Samsung employee entered faulty source code related to a semiconductor facility measurement database while seeking a solution. Another did the same with program code for identifying defective equipment, hoping to have it optimised. A third uploaded a recording of a meeting to generate the minutes.

To keep up with progress related to AI while protecting its data at the same time, Samsung has announced that it is planning to develop its own internal “AI service” that will help employees with their job duties.

 

What checks should companies make before sharing their data?

Uploading company data into the model means you are sending proprietary data directly to a third party, such as OpenAI, and giving up control over it. We know OpenAI uses the data to train and improve its generative AI model, but the question remains: is that the only purpose?

If you do decide to adopt ChatGPT or similar tools into your business operations in any way, you should follow a few simple rules.

  1. First, carefully investigate how these tools and their operators access, store and share your company data.
  2. Second, develop a formal policy covering how your business will use generative AI tools and consider how their adoption works with current policies, especially your customer data privacy policy.
  3. Third, this policy should define the circumstances under which your employees can use the tools, and it should make your staff aware of limitations – most importantly, that they must never put sensitive company or customer information into a chatbot conversation (one way to back this rule up technically is sketched below).
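
As a hedged illustration of that last rule, below is a minimal sketch of how a pre-submission filter might screen prompts for obviously sensitive content before they ever reach a chatbot. The patterns and the screen_prompt helper are hypothetical examples of ours, not part of any vendor’s tooling, and a real deployment would rely on a dedicated data loss prevention (DLP) product rather than a handful of regular expressions:

import re

# Hypothetical patterns a company might flag before a prompt leaves its network.
SENSITIVE_PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal marker": re.compile(r"\b(confidential|internal only|do not share)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

findings = screen_prompt("Summarise this CONFIDENTIAL report for jane@example.com")
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
else:
    print("Prompt passed basic screening")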

 

How should employees use this new tool?


When asking an LLM for a piece of code or a letter to a customer, treat it as an advisor whose work needs to be checked. Always verify its output to make sure it’s factual and accurate – and so avoid, for example, legal trouble. These tools can “hallucinate”, i.e. churn out answers in clean, crisp, readily understood language that are simply wrong, but seem correct because they are practically indistinguishable from the tools’ correct output.

In one notable case, Brian Hood, the Australian regional mayor of Hepburn Shire, recently stated he might sue OpenAI if it does not correct ChatGPT’s false claims that he had served time in prison for bribery. This was after ChatGPT had falsely named him as a guilty party in a bribery scandal from the early 2000s related to Note Printing Australia, a Reserve Bank of Australia subsidiary. Hood did work for the subsidiary, but he was the whistleblower who notified authorities and helped expose the bribery scandal.

When using LLM-generated answers, also look out for possible copyright issues. In January 2023, three artists, acting as class representatives, filed a class-action lawsuit against Stability AI and Midjourney, the makers of two AI art generators. The artists claim that Stable Diffusion, software co-created by Stability AI, was trained on billions of images scraped from the internet without their owners’ consent, including images created by the trio.

 

What are some data privacy safeguards that companies can put in place?


To name just a few, put in place access controls, teach employees to avoid inputting sensitive information, use security software with multiple layers of protection along with secure remote access tools, and take measures to protect data centres.
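
To make the first of those safeguards concrete, here is a minimal, hypothetical sketch of what access controls around an LLM could look like in practice: an internal gateway that checks a caller’s role before anything is forwarded to an external chatbot API. The roles, the policy set and the send_to_llm stub are illustrative assumptions rather than a reference to any real product:

# A minimal, hypothetical access-control gateway for LLM use.
ALLOWED_ROLES = {"marketing", "engineering"}  # roles approved to use the chatbot

def send_to_llm(prompt: str) -> str:
    # Placeholder for a call to an external chatbot API.
    return f"(model response to: {prompt!r})"

def gateway(user_role: str, prompt: str) -> str:
    if user_role not in ALLOWED_ROLES:
        raise PermissionError(f"Role '{user_role}' is not approved for LLM use")
    # A real gateway would also log the request and screen the prompt here.
    return send_to_llm(prompt)

print(gateway("marketing", "Draft a friendly reminder email"))  # allowed
# gateway("finance", "...") would raise PermissionError under this policy.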

Indeed, adopt a similar set of security measures as you would for software supply chains in general and for other IT assets that may contain vulnerabilities. People may think this time is different because these chatbots seem more intelligent than artificial, but the reality is that this is yet more software, with all its possible flaws.


We hope you enjoyed our blog on the future of AI in the workplace. If you want to discuss any of the above points further, please get in touch.

You can also follow Radius on Instagram, Facebook or LinkedIn for more security updates.

Source: ESET


Call our sales team now on LoCall 0818 592500.

Alternatively, please send us a message via the form below and we’ll call you back.

Get in Touch

