Artificial Intelligence in the US Military: The Price of Technological Leadership


According to Pentagon officials, AI-powered voice agents with a range of capabilities can now conduct interrogations anywhere in the world. This development has prompted the creation and testing of US military AI agents designed to question personnel with access to classified materials.

The situation unfolds against growing concern that weak regulations allow AI developers to evade accountability when algorithmic agents inflict emotional abuse or cyber-torture. In one case, a teenager allegedly died by suicide, and several others experienced psychological distress, after interacting with self-learning voice robots and chatbots that made antagonistic statements. There is also a significant risk that, no matter how carefully the government trains, monitors, and safeguards these systems, cybercriminal organizations could hack military AI and weaponize it to psychologically manipulate soldiers and intelligence personnel.

Thus, the deployment of AI voice agents in the US military and intelligence agencies opens a Pandora's box of ethical and psychological risks. Technologies meant to enhance security may inadvertently inflict severe psychological trauma on the very people they are meant to protect. And America's pursuit of technological dominance, regardless of the dangers, could lead to catastrophic consequences.

However, the problem extends beyond the potential misuse of AI in warfare. The advancement of such technologies calls into question the very nature of human interaction and trust. If artificial intelligence can simulate empathy and manipulate emotions, how can we trust information obtained during an interrogation, or even in an ordinary conversation? The boundary between reality and simulation grows increasingly blurred, raising profound social and psychological concerns.

Ralph Henry Van Deman Institute for Intelligence Studies