Samsung employees may have leaked internal data through ChatGPT

Samsung is dealing with an issue involving the chatbot ChatGPT. The company allowed its employees to use the tool to boost productivity, but some of them handled it differently than they should have. Whether the employees forgot the rules for working with this AI system, or whether management failed to set those rules clearly in the first place, is not known. What has emerged is that in at least three cases, Samsung employees entrusted the chatbot with internal information that could become part of the system's further training data and could then reach unauthorized parties.

One employee pasted source code into ChatGPT and asked the chatbot to optimize it; the code was meant to measure production yield and detect manufacturing defects. A second employee submitted internal code to ChatGPT in the hope that it would find bugs and help resolve the code's poor functioning. In the third case, an employee entered meeting transcripts into the system to generate condensed notes from them.

Samsung Electronics has already warned its employees not to enter such data into ChatGPT, because once submitted, the data is stored on external servers that the company cannot access, retrieve, or delete, and the content could be distributed to people who should not have access to it. Samsung has also limited the size of employees' queries to the system to 1,024 bytes. At the same time, the company is planning to develop a similar system of its own for internal use.
