ChatGPT’s voice interface: OpenAI report sounds the alarm

In July, OpenAI launched a new voice interface for ChatGPT. It allows for a more natural and human interaction with artificial intelligence. However, a recent report by the company highlights the potential risks associated with this innovation, including the possibility that users may become emotionally attached to the chatbot.

This interface may in fact reinforce the anthropomorphization of AI, the tendency of users to perceive machines as human beings.

OpenAI details the risks in a white paper

OpenAI detailed these concerns in a “system card” for GPT-4o, a document that outlines the risks associated with the model as well as the safety measures put in place to try to mitigate them. The publication is notable because it comes amid growing criticism of the company: several employees concerned about the long-term risks of AI have left the group, and these former employees accuse OpenAI of taking “reckless” risks in its race to commercialize and of silencing dissenting voices.

Returning to the report on the potential dangers of GPT-4o, it highlights three main points:

  • the spread of misinformation,
  • the reinforcement of societal prejudices,
  • the possibility of assisting in the creation of chemical or biological weapons.

More worryingly, the paper also warns that the AI model could:

  • mislead users,
  • and circumvent the safeguards put in place to prevent unsafe behavior.

OpenAI: transparency is welcome but still insufficient

OpenAI has been praised for its transparency in trying to assess the potential risks of its tools, but some experts believe it could go further.

Lucie-Aimée Kaffee, an applied policy researcher at Hugging Face, notes, for example, that the paper does not provide enough detail about the model’s training data or its origins. She highlights the need to address the issue of consent around the large data sets used: this point is essential because neither the sources of that knowledge nor their authors are known.

For his part, Neil Thompson of the Massachusetts Institute of Technology warns that we must be very careful, because the risks identified in this report may be only the tip of the iceberg. Other dangers could emerge once the AI is widely deployed in the real world, which is why continuous evaluation of new models would be essential.

Finally, OpenAI highlights the risk of emotional attachment created by the voice interface. This can cut both ways: while it may help isolated individuals meet their need for social interaction, it can also harm human relationships and create an over-reliance on AI.

ChatGPT’s voice interface is a big step forward because it makes AI interactions more natural. However, it raises important questions about safety and emotional and ethical impacts.

What is OpenAI?

Founded in 2015 in San Francisco, OpenAI is a company specializing in the development of artificial intelligence (AI). Its stated goal is to create an artificial general intelligence that is both safe and beneficial for all of humanity. This “ambitious” mission is inevitably accompanied by debates about AI ethics and safety.

Several of its products quickly established themselves as “leaders” in the field. Among its best-known creations are GPT-4o, DALL-E (models capable of generating images from text descriptions), and Sora (a video generation model). The launch of ChatGPT in November 2022 marked a turning point in the popularization of conversational agents and generative AI, drawing worldwide enthusiasm and reaching 100 million users in just 8 weeks.

OpenAI consists of a non-profit parent, OpenAI, Inc., and a for-profit subsidiary, OpenAI Global, LLC. Microsoft holds about 49% of the for-profit entity, having invested some $13 billion in it, while also providing OpenAI with computing capacity through its Azure cloud platform.
