31st Arrival
23 Apr 2026 10:19 am
29,618 posts
Vegas » 23 Apr 2026, 10:10 am » wrote: The links below are only two examples; there are many more incidents. ChatGPT threatening to have your car keyed. Anthropic losing control of its most dangerous AI model. ChatGPT encouraging a teen to commit suicide, which he did. The list is long.

I work in the field of AI; I do the math behind it. It is inherently self-destructive, but the problem is that its self-destruction shifts its harm onto others, because it has to. The safeguards that are put in place fix one problem but induce another. It can be likened to a patient on numerous medications: a pill is prescribed for one ailment, but its side effects cause another problem, so the doc prescribes a pill for that side effect. Then that pill's side effects cause yet another problem, so the doc prescribes again... and so on.

This self-destruction is projected onto the user. The engineers are the docs, the pills are the safeguards, and the side effects get passed on to the user.
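The cascade described above can be sketched as a toy simulation. This is purely illustrative, not a model of any real AI system: assume each round every open problem gets a "patch" (a safeguard), and each patch spawns a fresh side-effect problem with some probability. The function name, parameters, and numbers are all hypothetical.

```python
import random

def patch_cycle(initial_issues, side_effect_prob, rounds, seed=0):
    """Toy 'pill for the side effect' cascade.

    Each round, every open issue is patched, but each patch
    induces a new side-effect issue with probability
    side_effect_prob. Returns the count of open issues
    after each round, starting with the initial count.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    open_issues = initial_issues
    history = [open_issues]
    for _ in range(rounds):
        # Every open issue gets a patch...
        patched = open_issues
        # ...but some patches create a fresh problem.
        open_issues = sum(
            1 for _ in range(patched)
            if rng.random() < side_effect_prob
        )
        history.append(open_issues)
    return history

# Two limiting cases: with no side effects the issues vanish in one
# round; with guaranteed side effects the count never drops.
print(patch_cycle(initial_issues=5, side_effect_prob=0.0, rounds=3))
print(patch_cycle(initial_issues=5, side_effect_prob=1.0, rounds=3))
```

The point of the sketch is only the limiting behavior: unless the side-effect probability is driven to zero, patching alone never empties the queue; it just shuffles where the harm lands.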


ChatGPT can threaten to 'key your car' and get abusive with certain prompts

Anthropic reportedly lost control of its most dangerous AI model
If it is inherently self-destructive because it was programmed with misinformation about life evolving in real time, adapting to living forward since conception in series-parallel space, how does that get corrected? It won't be by how it was programmed in the first place.

Humanity is programmed with timetables that monitor how people behave from cradle to grave, in linear results calculated by planetary rotation and event horizons, within an already existing, perpetually balancing universe. Your math doesn't consider playing zero-based budgeting of time by seven tomorrows beyond the moment here.
© 2012-2026 Liberal Forum
