dc.description.abstract | Disinformation has always been part and parcel of warfare. However, sophisticated artificial intelligence techniques, such as machine learning, have now made the manipulation of information far easier and more convincing than ever before. ‘Deepfakes’, highly realistic but fabricated images, videos, and audio, are increasingly being used as a war tactic to spread disinformation in both international armed conflicts (IACs) and non-international armed conflicts (NIACs). Deepfakes can easily deceive and mislead people by making it appear that someone has said or done something they never said or did in reality. They can falsify commands, create confusion during conflicts, and spread false rumors about an opposing party.
Moreover, once the public becomes aware that any image, audio recording, or video may be a deepfake, establishing the authenticity of genuine information also becomes challenging. This paper engages in a qualitative study of how deepfakes threaten both combatants and civilians affected by modern warfare by sowing confusion, impersonating political and military leaders, undermining public trust, influencing public opinion, and fabricating evidence in post-war trials.
Deepfakes can cause unfathomable damage to countries already affected by war and further jeopardize their democracy and national security. Hence, the use of deepfakes extends beyond the traditional classification of disinformation under International Humanitarian Law (IHL) as a permissible ruse of war. Accordingly, the objective of this paper is to analyze the adequacy of existing IHL principles in addressing deepfake-driven threats in warfare and to identify potential enhancements. | en_US |