Samsung employees have made a serious mistake when using ChatGPT

Samsung workers have unknowingly leaked sensitive data by using ChatGPT to help with their tasks. Engineers in the company's semiconductor division used the AI bot to troubleshoot their source code, but in doing so entered confidential material: the source code of a new program, notes from internal meetings, and hardware-related data. In just under a month, there were three separate incidents of employees leaking confidential information through ChatGPT. Because ChatGPT retains user input for further training, these Samsung trade secrets are now in the hands of OpenAI, the company behind the service.

In one case, an employee asked ChatGPT to optimize test sequences for identifying chip failures, a process that is confidential. In another, an employee used ChatGPT to convert meeting notes into a presentation whose contents Samsung clearly would not have wanted outside parties to see.

Following the incidents, Samsung Electronics issued a warning to its workers about the dangers of leaking confidential information, noting that such data is impossible to recover once it is stored on OpenAI's servers. In the fiercely competitive semiconductor industry, any data breach could spell disaster for the company involved. Samsung does not appear to have any recourse to request the recovery or deletion of the sensitive data now in OpenAI's possession. Some have argued that this makes ChatGPT non-compliant with the EU's GDPR, since the ability to have one's data deleted is one of the fundamental principles of the law governing how companies collect and use data. It is also one of the reasons Italy has banned the use of ChatGPT throughout the country.
