Criminal Responsibility of Artificial Intelligence Committing Deepfake Crimes in Indonesia


  • Asri Gresmelian Eurike Hailtik Universitas 17 Agustus 1945 Surabaya
  • Wiwik Afifah Universitas 17 Agustus 1945 Surabaya



artificial intelligence, deepfake, legal subject


Continuing technological development has given birth to an innovation known as artificial intelligence, commonly abbreviated "AI". The development of AI has in turn produced an algorithmic technique known as deepfake technology. Deepfakes rely on machine learning and neural networks, AI methods that teach computers to process data in a way inspired by the human brain. This study aims to examine the regulation of AI as a perpetrator of deepfake crimes and to determine the criminal responsibility of AI that commits criminal acts in Indonesia. The research method used is normative legal research employing a statutory approach, a conceptual approach, and a comparative approach. AI can be classified as an electronic system and an electronic agent, since the characteristics of AI correspond to the statutory definitions of electronic systems and electronic agents. If AI commits deepfake crimes, it may violate several articles of Law No. 19 of 2016 concerning Electronic Information and Transactions. In California, legislation has been passed to address deepfakes related to pornography, fraud, and defamation: Calif AB-602 and Calif AB-730. There are three models of criminal liability for AI that commits criminal acts, namely the Perpetration-via-Another model (PVM), the Natural-Probable-Consequence Liability model (NPCLM), and the Direct Liability model (DLM). In Indonesia, AI has not yet been recognized as a legal subject, so when AI commits a criminal act, responsibility falls on the creator or the user of the AI.