OpenAI’s ChatGPT app for iPad, iPhone hits 500K downloads

OpenAI shipped its ChatGPT app for iPads and iPhones just a week ago, but it has already become one of the most popular applications of the past two years, with over half a million downloads in its first six days. That's a real achievement, but it's also a challenge: that's half a million potential sources of data exposure.

Not one to rest on its laurels, OpenAI has now made this year's favorite smart assistant (so far) available in 41 additional countries. There's little doubt this has been one of the most successful software and service introductions of all time, but that doesn't change the inherent risk of these technologies.

The app's popularity should raise a red flag for IT leaders, who must redouble efforts to warn staff not to input valuable personal or corporate data into the service. The danger is real: data held by OpenAI has already been exposed once, and it's only a matter of time until someone else gets at that information.

After all, digital security today isn't a question of whether an incident will happen, but when.

To borrow a phrase from Apple's playbook, the best way to protect data online is not to put the information there in the first place. That's why iPhones and other products from Cupertino (via China, India, and Vietnam) work on the principle of data minimization: reduce the quantity of information collected, and take pains to reduce the need to send it to servers for processing.

That's a great approach, not just because it reduces the quantity of information that can slip out, but because it also shrinks the opportunity for human error.

We don't have that protection with the ChatGPT app. Beyond a wholesale ban on using the service and application on managed devices, IT admins are almost completely reliant on trust when it comes to ensuring their staff don't share confidential data with the bot.
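For organizations that do opt for a ban, one place to start is a device restriction pushed over MDM. The sketch below, in Python, generates a minimal Apple configuration profile that blocks an app by bundle ID on supervised devices; the ChatGPT bundle ID shown is an assumption you should verify in your own MDM console, and most commercial MDM tools expose the same restriction through their own interfaces.

```python
import plistlib
import uuid

# Hypothetical example: "com.openai.chat" is assumed to be the ChatGPT
# bundle ID; verify it in your MDM console before deploying anything.
CHATGPT_BUNDLE_ID = "com.openai.chat"

# Restrictions payload: on supervised iOS devices, apps listed under
# blacklistedAppBundleIDs cannot be opened.
restrictions = {
    "PayloadType": "com.apple.applicationaccess",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.block-chatgpt.restrictions",
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadDisplayName": "Block ChatGPT",
    "blacklistedAppBundleIDs": [CHATGPT_BUNDLE_ID],
}

# Top-level configuration profile wrapping the restrictions payload.
profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.block-chatgpt",
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadDisplayName": "Block ChatGPT app",
    "PayloadContent": [restrictions],
}

with open("block_chatgpt.mobileconfig", "wb") as fp:
    plistlib.dump(profile, fp)
```

Even then, a restriction like this only covers supervised, managed hardware; staff can still reach ChatGPT through a browser or a personal phone, which is why the trust and policy work matters more.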

Still, humans are humans. No matter how stern the exhortations against such use, we can be certain some people will accidentally share confidential data through the app. They may not even realize they are doing it, simply seeing it as the equivalent of searching the web.

It's a threat similar to that of shadow IT, with people accidentally trading confidential information for what seems to be convenience.
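Some teams supplement the warnings with lightweight guardrails that screen text before it leaves a managed machine. The following Python sketch is purely illustrative and no substitute for real data-loss-prevention tooling; the patterns and labels are assumptions chosen for the example.

```python
import re

# Illustrative patterns only; real data-loss-prevention tooling goes far
# beyond simple regular expressions.
CONFIDENTIAL_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible US Social Security number"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "possible payment card number"),
    (re.compile(r"(?i)\b(confidential|internal only|do not distribute)\b"),
     "confidentiality marking"),
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "possible API key"),
]


def flag_confidential(prompt: str) -> list[str]:
    """Return a warning label for each suspicious pattern found in the prompt."""
    return [label for pattern, label in CONFIDENTIAL_PATTERNS if pattern.search(prompt)]


if __name__ == "__main__":
    sample = "Summarize this: our api_key = sk-test-123 doc is internal only."
    for warning in flag_confidential(sample):
        print("Warning:", warning)
```

A filter like this catches only the obvious cases, but surfacing a warning at the moment of sharing is often more effective than a policy document nobody rereads.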

IT must also consider the App Privacy label OpenAI has attached to its product on the App Store. That label makes it clear that when using the app, data including contact information, user content, identifiers, usage data, and diagnostics is linked to the user.

OpenAI's own privacy policy, which is available online, should also be explored, although the company has not disclosed the training data it uses for its latest bots.

The challenge here is that IT must weigh those disclosures against the inevitability of human nature. Regulators are already concerned about the privacy implications. In Canada, privacy regulators are investigating the company's privacy practices, with similar activity taking place in Europe. (OpenAI seems concerned enough about these investigations that it has warned it could cease operating in Europe if the law proves too rigorous.)

The deluge of activity around generative AI in general, and ChatGPT in particular, should not mask the sweeping repercussions of these technologies, which offer vast productivity benefits but threaten job security at mass scale.

In the short term, at least, IT admins should do their utmost to ensure these consumer-simple products don't threaten confidential business data. And for that to happen, users must be warned not to share data with these services until such use has been ratified under company security and privacy policy.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
