Forty researchers call for monitoring the “thoughts” of AI systems

Forty researchers call monitoring "thoughts" new: This article explores the topic in depth.

Nevertheless,

Forty researchers call monitoring &quot. Similarly, thoughts" new:

Forty researchers call monitoring "thoughts": This article explores the topic in depth.

Similarly,

Forty researchers call monitoring "thoughts":

Investigate the “thoughts” of “reasoning” models of artificial intelligence? For example, This is what 41 researchers financed by the largest IA companies – Antrophic. Therefore. For example, Openai, Google Deepmind, Meta -, scientists affiliated with various institutions such as UK Ai Security Instititous and big names in the domain like Yoshua Bengio,

The idea they defend: deploy surveillance technologies for “thought channels” of AI systems for Y “Detect the.. Similarly, intention to behave” to behave “. In addition, In an article (a paper position entitled « Chain of Thought Monitorability : a new. Similarly. Furthermore, fragile opportunity for AI safety ») Published this July 15, the signatories call the forty researchers call monitoring “thoughts” new developers of “border models” to look at the possibility of following the “sons of thought” (chain of thoughtsCot) models as they develop them.

The proposal is part of a context of difficulty understanding the way in which machine learning models.. Nevertheless, major models of language built thanks to these techniques reach their results. Consequently, More recently. Consequently. Furthermore, it is part of the assertion that models like O3 of Openai or the DEEPSEEK R1 would be capable of “reasoning”. Furthermore, For the authors of the article. Similarly. Moreover, “COT monitoring is a precious complement to the safety measures of border models, as it offers a rare overview of the way in which the agents make decisions”.

Models of what? – Forty researchers call monitoring "thoughts" – Forty researchers call monitoring "thoughts" new

forty researchers call monitoring “thoughts” new

Pushed by Openai in the summer of 2023. Similarly. the expression “border model” describes according to the company “Very efficient foundation models that could have sufficient dangerous capacities to lay serious risks for public security”. Meta qualifies forty researchers call monitoring “thoughts” for example its Llama 3.1 model as a “border” model.

If it is debated in the scientific environment. the concept of foundation model is defined in European law as “An AI system trained on a large dataset. and built for the generality of its productions”.

Reasoning language models (reasoning language models) are a qualifier attached to different LLM since the publication of. the O1 system of Open AI. in September 2024. They operated thanks to logic of “thinking threads”. these features thanks to which a generative system provides an overview forty researchers call monitoring “thoughts” new of the stages by which it has passed. to produce its response. The latter indeed illustrate how a large model of language divides a question. a task in steps, which he performs one after the other to finally produce a global response.

forty researchers call monitoring “thoughts”

Open black boxes?

Explanitability. transparency have also been at the top of the researchers to facilitate the audit. understanding of all kinds of algorithmic models for many years, whether they are generative or, for example, in charge of sorting or moderating the waves of content present on social networks. In 2018, the Villani report already underlined the need to facilitate understanding of their operation.

The just published paper position precisely seeks to attract attention to these challenges forty researchers call monitoring “thoughts” new of explanability. His co-signs. which have four renowned experts support, including the Nobel Prize Nobel Geoffrey Hinton or the founder of Safe Superintelligence Inc. and ex-Open Ai Ilya Sustkever, call to develop techniques of monitoring “thinking sons”, but also to maintain these features. The goal: to forty researchers call monitoring “thoughts” explore them more precisely to gain a better understanding of the. operation of the LLM. But also make sure that “The current degree of visibility” Lost.

The publication is made in a context in which the largest companies in the sector are in.. open competition. For the past few weeks. Meta has notably worked out to be awarded a good number of specialists working so far for Google Deepmind. Openai, Anthropic or even Apple to develop its own artificial intelligence laboratory. Historically committed to the subjects of “IA security”.. AI Safety, the co forty researchers call monitoring “thoughts” new -founder of Anthropic Dario Amodei indicated in April wishing to “open the black box” of AI systems by 2027.

Monitor the “thoughts” of AI systems, an additional anthropomorphization?

If it forty researchers call monitoring “thoughts” is signed by multiple eggs in the field. this article also digs a furrow already largely drawn from comparisons from the functioning of technical systems to human behavior. The practice is ardently debated by scientists like the linguist Emily Bender. the computer scientist Timnit Gebru. for whom she blurs the understanding of the public and decision -makers of the real functioning of these systems.

In this case. evoking the “sons of thought” of statistical models participates directly in instilling the idea that these forty researchers call monitoring “thoughts” new machines are aware – at the end of the O1 model of Openai. Emily Bender and the sociologist Alex Hanna had described as “ridiculous” the decision of the company to present it as capable of reasoning.

This vagueness is at the origin of multiple debates in the community of forty researchers call monitoring. “thoughts” research in artificial intelligence – in 2022. the engineer Blake Lemoine had been dismissed from Google after affirming that the Lamda model was aware. To a certain extent. it also allows companies in the sector of “Continue to do what they want”explained the author of the investigation Empire of AIKaren Hao. à Next.

Further reading: The battery of the Vivo X300 Pro. the Oppo Find X9 Pro has been updated in a new leakDurable, solid and extensible: DJI launches a new power forty researchers call monitoring “thoughts” new plant power plantCasio launches the first mechanical watches in its historyCdiscount drops the price of AirPods Pro 2, stocks are limited to viewAmazon smashes the prices of the Xiaomi Redmi 14C, Redmi Note 14 and Redmi Note 14 Pro.

Further reading: Sold at nearly 300 euros, this iPhone 13 already risks stock breaking at CdiscountSubnautica 2 goes to court: the founders pursue KraftonPrime Day-Action Cam Dji Dji Osmo Action 4 “4 stars” at € 189.00 (-18%)Sony Xperia 1 VII: Sony Germany publishes a press release concerning the closing problems after the sales stopA new half-price formula for 18-22 year olds;.

Comments (0)
Add Comment