This article critically examines the role of artificial intelligence in modern defense and sensor systems. The author rejects simplistic metaphors of AI as either an autonomous agent or a mere tool, defining it instead as a "selection architecture": a regime that filters operational reality. A central problem is automation bias, operators' uncritical trust in algorithmic output, which produces a false sense of control over the system. The article argues that responsibility is an inalienable human domain that cannot be outsourced to a machine, and that a Meaningful Human Control standard is essential to avoid reducing the human operator to the role of a ceremonial witness. The analysis also covers technical aspects such as explainable AI (XAI), ELINT, and Q-RAM scheduling algorithms, placing them in the context of ethics and the economics of perception on the battlefield.