"OpenAI Contributor's Past Chat Posts Raise Concerns; No Police Report Filed"


Prior to the mass shooting in Tumbler Ridge, British Columbia on February 10th – the deadliest incident of its kind in Canada since 2020, resulting in nine fatalities and 27 injuries – concerns had been raised among OpenAI employees over Jesse Van Rootselaar's behavior on the company's platform.

In June of last year, conversations between Van Rootselaar and ChatGPT, the company's AI chatbot, included descriptions of gun violence that triggered its automated review system. Employees worried the conversations might foreshadow future violent acts and urged leadership to contact authorities, but their recommendations were not acted upon.

Kayla Wood, an OpenAI spokesperson, confirmed to The Verge that while the company considered reporting the account to law enforcement, they ultimately deemed it did not pose an "imminent and credible risk" of harm to others. A review of logs did not show evidence of active or imminent planning of violence.

Upon learning of the Tumbler Ridge tragedy, OpenAI contacted the Royal Canadian Mounted Police with information about Van Rootselaar's use of ChatGPT and pledged continued support for the investigation. The company subsequently banned the account but took no additional precautionary measures.

Wood emphasized that OpenAI aims to strike a balance between privacy and safety, seeking to minimize unintended harm by avoiding overly broad action. The decision not to alert law enforcement may look questionable in hindsight, but Wood maintained that the company prioritized safety while protecting user privacy.

Update, February 21st: Law Enforcement Referrals Revisited, Featuring a Statement From OpenAI

As of this February 21st update, the use of law enforcement referrals has come under renewed scrutiny following recent developments. This report provides factual details on the current state of the practice and a statement from OpenAI on its stance.

Law enforcement agencies across various jurisdictions have been employing referral systems to facilitate investigations involving technology companies, including artificial intelligence providers like OpenAI. The referrals often request assistance in accessing user data or other forms of cooperation for ongoing investigations.

The renewed focus stems from growing concerns about privacy and the potential misuse of such collaborations between law enforcement and tech firms. In response, OpenAI has issued an official statement clarifying its commitment to transparency and user privacy while cooperating with legitimate law enforcement requests when required by law.

The full statement from OpenAI reads as follows:

"OpenAI is committed to upholding the privacy of our users while also ensuring compliance with legal requirements. We take every measure possible to ensure that any assistance provided to law enforcement agencies is targeted, transparent, and proportional to the nature and extent of the investigation at hand."

This update serves as a reminder of the delicate balance between maintaining user privacy and addressing legitimate law enforcement concerns in the digital age. As technology continues to evolve, so too will the need for clear guidelines and open dialogue between tech companies and government agencies regarding data access and cooperation in investigations.

