Apple has reportedly decided to limit its employees' use of ChatGPT and other AI apps

Even though OpenAI has recently released the official ChatGPT app for iPhone and iPad, Apple employees probably won't be able to use it: according to a report by the Wall Street Journal, the Cupertino company has "limited" (though apparently not banned outright) its employees' use of the chatbot and other third-party AI tools.

APPLE LIMITS USE OF CHATBOT

Apple, like other companies in recent months, including Samsung, is reportedly concerned about the possible disclosure of confidential data through this type of application. According to an internal document, Apple has also asked employees not to use GitHub Copilot, owned by Microsoft, a tool that automates the writing of software code.

ChatGPT is a chatbot based on a large language model (Generative Pre-trained Transformer), specialized in conversing with a human user and able to answer questions, summarize, write texts and perform other tasks. Officially launched in November last year, it was enhanced in March with GPT-4 and with plugins that allow web access and direct interaction with some websites.

ARTIFICIAL INTELLIGENCE CONCERNS EVERYONE

While on the one hand there are growing concerns about the lack of regulation and the dangers that could arise from improper use, on the other hand there are companies that fear the possible disclosure of confidential information by employees who turn to chatbots to make certain tasks easier.

When generative artificial intelligence models are used, as the developers themselves officially state, the data entered by users and their question history are sent to the developer's servers to enable continuous improvement of the platform. In recent months, ChatGPT was temporarily taken offline precisely because a bug had allowed some users to see the titles of other users' chat histories.

Precisely for this reason, in the days that followed, the Italian Privacy Guarantor blocked access to the chatbot, requesting that OpenAI implement tools both to protect the confidentiality of the data and to make its use more transparent. An agreement was reached after about a month, once OpenAI had implemented what was requested.

APPLE IS REPORTEDLY DEVELOPING ITS OWN MODEL

Returning to Apple, it is well known how stringent the security measures it usually adopts are, both to protect product information and, more importantly, consumer data. The protection of privacy has always been a cornerstone of its communication.

Apple was among the first companies to enter the field of artificial intelligence with its Siri voice assistant, launched in 2011, but Siri still lags behind rival voice assistants precisely because of Apple's privacy protection policy, which seeks to keep most user data on devices rather than on external servers.

In Cupertino, however, interest in generative artificial intelligence remains very high, and a large proprietary language model is reportedly under development. The division is said to be headed by John Giannandrea, hired in 2018 after working at Google. To date, Apple has acquired several startups dealing specifically with artificial intelligence.

Tim Cook, Apple's CEO, has nevertheless expressed some general concerns about the progress made in this sector: during the customary earnings call accompanying the presentation of the latest quarterly results, he underlined the importance of approaching this technology in a "deliberate and thoughtful" way, but not before a number of issues have been resolved.