Apple has recently banned its employees from using ChatGPT and other third-party AI platforms, including Google Bard and Microsoft’s GitHub Copilot. The ban comes as Apple develops its own AI platform, amid concern that employees using these third-party tools might accidentally reveal confidential information.
According to a leaked internal document obtained by the Wall Street Journal, Apple has restricted all employee use of ChatGPT, Google Bard, and similar large language models (LLMs) while it develops comparable technology. Apple employees have also been advised not to use Microsoft’s GitHub Copilot, which developers can use to automate code writing, over fears of leaking confidential data.
Apple has joined a growing list of businesses banning the use of ChatGPT and similar cloud-based generative AI services to protect data confidentiality. Companies including JPMorgan Chase and Verizon have also stopped their employees from using generative AI tools.
The concern is that their use could lead to the disclosure of sensitive or confidential data. Samsung banned the tools earlier this year when it discovered that staff had uploaded confidential source code to ChatGPT.
While a ban may seem extreme, it shows the company is paying attention to the flood of warnings from security professionals regarding the use of these services. The issue is that when a cloud-based service processes your data, it is very likely the information will be retained by the service for quality review, assessment, or even future use. In essence, the questions you ask a service of this kind become data points for future answers.
Information supplied to a cloud-based service may be accessed by humans, whether from inside the company or through an outside attack. While OpenAI does sell a more confidential (and more expensive to run) self-hosted version of the service to enterprise clients, the risk is that the public use agreement offers very few guarantees of data confidentiality.
That’s bad enough for confidential code and internal documentation, but it is deeply dangerous when handling information from heavily regulated industries such as banking and healthcare. We have already seen at least one incident in which ChatGPT queries were exposed to unrelated users.
While Apple’s decision may feel like an overreaction, it is essential that enterprises convince staff to be wary of what data they share. It remains to be seen how other companies will respond to this growing concern over data confidentiality in the use of AI platforms.