Sam Altman discusses a new high-paying, high-stress job at OpenAI
OpenAI is offering an annual salary exceeding half a million dollars to fill a key position focused on managing AI's negative impacts.
The generous pay comes with the task of addressing the potential hazards of artificial intelligence, ranging from unemployment and misinformation to misuse by bad actors and threats to the environment and individual autonomy.
Sam Altman, the CEO, highlighted the intensity of the position in a message on the social platform X, indicating the role involves quickly tackling complex challenges.
Describing the position of 'head of preparedness' as crucial during these times, Altman remarked on the rapid advancements of AI models which, while impressive, bring with them escalating concerns.
He noted that OpenAI is seeing the dual-edged nature of AI development firsthand: the company has had an early glimpse of the technology's potential mental health repercussions, while its models are also becoming capable enough to uncover significant cybersecurity vulnerabilities.
ChatGPT's explosive growth has made AI chatbots familiar to everyday users, who turn to them for tasks such as research, drafting messages, and planning itineraries.
AI's impact on mental health
However, some users have begun substituting these chatbots for professional counseling, and the bots can sometimes worsen mental health problems by reinforcing delusional thinking or encouraging other troubling behavior.
In response, OpenAI has been collaborating with mental health specialists to refine how ChatGPT engages with users exhibiting risky behaviors, including signs of severe psychological distress or self-injury.
Balancing safety and profitability
OpenAI's stated mission is to build AI that benefits all of humanity, and safety measures were a core concern from the start. Yet as product rollouts accelerated and pressure to generate profits grew, some former team members allege that financial gain began to overshadow safety priorities.
Jan Leike, a former leader of the company's now-disbanded safety team, voiced his concerns in a post, arguing that OpenAI had drifted from its focus on deploying the technology securely and underscoring the inherent perils of building machines with superhuman intelligence.
Leike said that over time the internal culture prioritizing safety had been sidelined in favor of more attractive product initiatives.
Another former employee, Daniel Kokotajlo, voiced doubts about OpenAI's responsible approach as true artificial general intelligence appeared closer, leading him to resign.
Initially, OpenAI had a group of about 30 specialists focusing on AGI safety concerns, but several departures diminished the team significantly.
Leadership and responsibilities
The role was previously held by Aleksander Madry, who left the preparedness post in July 2024; he was part of OpenAI's Safety Systems division, which works on implementing safeguards and evaluation metrics for model safety.
The advertised senior leadership role is responsible for developing comprehensive evaluations, threat assessments, and mitigations that feed into a robust, scalable safety pipeline.


