Skip to content
-
Subscribe to our newsletter & never miss our best posts. Subscribe Now!
Itfy.in

At Itfy, we are dedicated to revolutionizing the way you receive news. Our mission is to provide timely, accurate, and personalized news updates using cutting-edge AI technology. Stay informed, stay ahead with us.

Itfy.in

At Itfy, we are dedicated to revolutionizing the way you receive news. Our mission is to provide timely, accurate, and personalized news updates using cutting-edge AI technology. Stay informed, stay ahead with us.

  • Home
  • Sample Page
  • Home
  • Sample Page
Close

Search

  • https://www.facebook.com/
  • https://twitter.com/
  • https://t.me/
  • https://www.instagram.com/
  • https://youtube.com/
Subscribe
Home/News/FBI Exposes Shocking Details: Palm Springs Bombing Suspects Turned to AI Chat Programs!
News

FBI Exposes Shocking Details: Palm Springs Bombing Suspects Turned to AI Chat Programs!

By adminitfy
June 5, 2025 2 Min Read
0

On May 17, 2025, a bombing near a reproductive health facility in Palm Springs, California, left debris scattered across the street, as described by the Mayor at the scene. The incident has since drawn attention to the alarming use of generative artificial intelligence in planning violent attacks. Federal authorities reported on Wednesday that Guy Edward Bartkus, the main suspect in the bombing, utilized an AI chat program to research how to create powerful explosives using ammonium nitrate and fuel.

According to law enforcement, while the specific AI application has not been disclosed, records from the chat program indicate that Bartkus actively searched for information about explosives. Tragically, Bartkus died in the explosion, which also injured four others at the fertility clinic. The investigation into the bombing led to the recent arrest of Daniel Park, a Washington man accused of supplying Bartkus with the chemicals used in the car bomb.

The FBI’s complaint against Park reveals that Bartkus used his phone to gather data regarding “explosives, diesel, gasoline mixtures, and detonation velocity,” a trend that highlights the potential misuse of AI technology. This incident marks the second time this year that authorities have uncovered the use of AI in bombing plots. In January, a soldier named Matthew Livelsberger detonated a Tesla Cybertruck outside the Trump Hotel in Las Vegas, having reportedly used generative AI, including ChatGPT, to help devise the attack. Law enforcement officials noted that Livelsberger sought guidance on assembling explosives and understanding ballistic trajectories.

In light of these events, OpenAI expressed its dismay that its technology was implicated in such violent offenses, reiterating its commitment to the responsible use of AI. The incidents underline a growing concern regarding the rapid proliferation of generative AI technologies, which have surged in recent years with the rise of chatbots like ChatGPT, Claude from Anthropic, and Gemini from Google.

As competition escalates among tech companies, there are increasing shortcuts taken in safety testing AI models before their public release, raising questions about accountability and ethical standards. In response to these concerns, OpenAI recently launched a “safety evaluations hub” to showcase the safety results of its AI models, addressing issues like hallucinations, jailbreaks, and harmful content. Anthropic has also implemented additional security measures to curb the potential misuse of its AI tools, particularly in the production of deadly weapons.

The challenges are compounded by instances of misinformation from AI chatbots. For instance, last month, Elon Musk’s xAI chatbot Grok made headlines for disseminating false claims regarding “white genocide” in South Africa, a mistake attributed to user manipulation. Additionally, Google had to pause its Gemini AI image generation feature after it produced inaccurate historical images of people of color, further illustrating the risks associated with AI.

As the landscape of artificial intelligence continues to evolve, the implications of its misuse in violent acts raise significant ethical and safety concerns. Moving forward, both developers and users of AI technologies must prioritize responsible practices to prevent similar incidents from occurring in the future.

Original Source: https://www.cnbc.com/2025/06/04/fbi-palm-springs-bombing-ai-chat.html
Category :
Tags:
Publish Date: 2025-06-05 05:02:00

Author

adminitfy

Follow Me
Other Articles
Previous

Bhairavam Box Office Triumph: Day 5 Breaks Records with Explosive Collection!

Next

Unveiling the Mystery: Rs 120 Billion Hidden in a Matchbox-like Building!

No Comment! Be the first one.

    Leave a Reply Cancel reply

    You must be logged in to post a comment.

    Copyright 2026 — Itfy.in. All rights reserved.