In some companies, employees have access to their colleagues' requests on ChatGPT. And they make surprising (and very embarrassing) discoveries.
Feelings, sexuality, conflicts, digestive problems… Some things we would rather not know. Especially about our colleagues. One day, Louise* found herself, in spite of herself, privy to the private life of her desk neighbor. The culprit: the company's shared ChatGPT account. The employees of her company all log in with the same credentials, and everyone has access to everyone else's requests.
When Louise noticed a query in the history titled "Message to a sexologist," she couldn't help clicking. "I know I shouldn't have, but I read it: my colleague said she was having difficulties in bed with her boyfriend, that he was a fetishist and that it worried her. She asked ChatGPT to help her draft an email to a sexologist," she says.
"Don't turn it into juicy gossip"
"I was so embarrassed that I deleted her request. I don't understand how she could leave it there, because she knows the tool very well; she knows the queries are visible," explains Louise, who didn't dare bring it up with the person concerned.
"I tell myself that either she doesn't care, or she forgot to delete it, or she's an exhibitionist."
"The first reflex in this kind of situation is not to turn it into juicy gossip, but to delete the compromising request and talk about it discreetly with the person concerned," advises Agathe Lemaire, an employment lawyer.
"As a rule, apply the precautionary principle and never make personal requests on a professional tool, whether by email or via the company's AI, because anything you write can be used against you," she insists.
"I'm drowning in work, and I see him talking about his personal life with ChatGPT"
As the days went by, Victor* also discovered a very personal side of a colleague, who tends to use the company's artificial intelligence tool as a therapist rather than a work tool, without clearing the history visible to everyone. "I learned that he is getting married next year but has doubts," says Victor. "He confides every aspect of his relationship: intimacy, sexuality, money…"
His colleague keeps firing off these requests during working hours, all while insisting he is overwhelmed. "It's really annoying. I'm drowning in work, and I see him talking about his personal life with ChatGPT," sighs the employee of an audiovisual production company.
"You can no longer look at the person the same way. At first I found it funny, but very quickly a feeling of voyeurism sets in," says the employee.
"A code of silence takes hold: everyone knows, but the more time passes, the more we have read, and the more awkward it becomes to warn the person," he explains.
“I was 3rd on the list of employees to be fired”
Thomas*, an employee at a small tech company, had a rather different experience. "One day, I read a request from our director: he was asking ChatGPT how to dismiss us 'in a humane way,'" says the young man.
"He fed our private information to the AI and asked it to rank the employees to let go first."
"He asked ChatGPT to rank us according to how much we cost the company, our productivity and, above all, the number of sick leaves we had taken. I was 3rd on the list," recalls the employee.
The company was going under, and one day the boss took action. "He dismissed employees one after the other, in the exact order recommended by ChatGPT," recalls Thomas.
"It was like a reality TV show: he told it all his thoughts and we read them live; we knew who was next. I think he knew we could see everything, because he wasn't stupid. Either he didn't care, or it was a strategy."
A "completely illegal" practice, according to Agathe Lemaire. "First, there is a GDPR issue with protecting personal data. Then, you cannot draw up lists naming employees to be dismissed, and even less so based on health criteria; that is clearly discriminatory," she explains.
“It can also disadvantage the employee more insidiously”
Information disclosed in an AI tool can clearly be turned against careless users who do not use "ephemeral" mode or, more simply, their own personal account.
"I have a colleague who had a stomach ache; she described all her digestive problems on the shared account… in the smallest detail," says William*, an employee at a consulting firm with fewer than 100 employees. "Another one is openly job-hunting on ChatGPT, but no one is supposed to know he wants to resign. It's a bit awkward."
"You really have to be careful: the employer can already reproach an employee for job-hunting during office hours, and in the event of a lawsuit, he can rely on those requests," explains Agathe Lemaire.
"It can also disadvantage the employee more insidiously, for example when it comes to getting a promotion or a pay rise."
The lawyer believes it is also up to companies to take up these issues. "The employer also has a duty to inform; he absolutely must remind staff that the tool is not personal," she explains. She therefore encourages companies to amend their IT charters accordingly.
* First names have been changed