
Peter Hartnett '19
United Nations Special Summit on Technology sst@worldmun.org


Peter is a senior at Harvard College, concentrating in Applied Mathematics with a language citation in Arabic. In college, Peter has traveled to Peru for a mission, to Oman and Jordan to study Arabic, and to Russia for a cultural exchange through Lowell House at Harvard. He appreciates the perspective that international experiences provide and looks forward to WorldMUN 2019 as another formative learning opportunity. This will be Peter’s first conference, so he is eager to engage with the global WorldMUN community. Upon graduation, Peter will commission as an Officer in the United States Air Force, and he is therefore especially interested in international affairs as it relates to his upcoming military service. Outside of school and military training, Peter enjoys travel and sports.

Topic: Lethal Autonomous Weapons

Advances in artificial intelligence have enabled new technologies, from mobile applications for navigation and translation to learning machines that outperform human players at board games. Autonomy may transform transportation, where self-driving cars are under active research and development, as well as other fields such as healthcare. The trend toward autonomy extends to the military context, with advanced weapons systems such as remotely piloted aircraft. It is generally accepted by military establishments that a human must make the decision to use lethal force. However, it is not unrealistic to imagine lethal autonomous weapons that learn to “play” combat missions in the same way that a machine might learn to play a board game.

Recently, representatives of the robotics and artificial intelligence industry, including Elon Musk of Tesla and SpaceX and Mustafa Suleyman of Google DeepMind, issued an open letter to the UN Convention on Certain Conventional Weapons calling for a ban on lethal autonomous weapons. Questions about the attribution and control of such systems raise both ethical and technical concerns. To date, no such ban has been enacted, and even if one were, the UN would face enforcement challenges.

The Special Summit on Futuristic Technology will be tasked with addressing the topic of lethal autonomous weapons, which bears directly on military conflict and the future conduct of actors around the globe. How does one define lethal autonomous weapons, or human decision-making in the use of lethal force? Should there be an international ban on such systems? If so, what enforcement and accountability measures should be put in place? How should nations balance the benefits of artificial intelligence in civilian and industrial applications against possible misuse of the technology in the military context?