TY - GEN
T1 - Ethics and Governance of Artificial Intelligence
T2 - 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
AU - Zhang, Baobao
AU - Anderljung, Markus
AU - Kahn, Lauren
AU - Dreksler, Noemi
AU - Horowitz, Michael C.
AU - Dafoe, Allan
N1 - Funding Information:
We want to thank Charlie Giattino, Emmie Hine, Tegan McCaslin, Kwan Yee Ng, and Catherine Peng for their research assistance. For helpful feedback and input, we want to thank: Catherine Aiken, Carolyn Ashurst, Miles Brundage, Rosie Campbell, Alexis Carlier, Jeff Ding, Owain Evans, Ben Garfinkel, Katja Grace, Ross Gruetzemacher, Jade Leung, Alex Lintz, Max Negele, Toby Shevlane, Brian Tse, Eva Vivalt, Waqar Zaidi, Remco Zwetsloot, our colleagues at our respective institutions, and our anonymous reviewers. We are also grateful for research support from the Center for Security and Emerging Technology at Georgetown University and the Berkeley Existential Risk Initiative. This research was supported by: the Ethics and Governance of AI Fund, the Open Philanthropy Project grant for “Oxford University – Research on the Global Politics of AI,” the Minerva Research Initiative under Grant #FA9550-18-1-0194, and the CIFAR Azrieli Global Scholars Program. The research reported here should be attributed solely to the authors; all errors are the responsibility of the authors.
Publisher Copyright:
© 2022 International Joint Conferences on Artificial Intelligence. All rights reserved.
PY - 2022
Y1 - 2022
N2 - Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including through their work, advocacy, and choice of employment. Nevertheless, this influential group's attitudes are not well understood, undermining our ability to discern consensus or disagreement among AI/ML researchers. To examine these researchers' views, we conducted a survey of those who published in two top AI/ML conferences (N = 524). We compare these results with those from a 2016 survey of AI/ML researchers and a 2018 survey of the US public. We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations to shape the development and use of AI in the public interest; moderate trust in most Western tech companies; and low trust in national militaries, Chinese tech companies, and Facebook. While the respondents were overwhelmingly opposed to AI/ML researchers working on lethal autonomous weapons, they are less opposed to researchers working on other military applications of AI, particularly logistics algorithms. A strong majority of respondents think that AI safety research should be prioritized more, and a majority think that ML institutions should conduct pre-publication review to assess potential harms. Being closer to the technology itself, AI/ML researchers are well placed to highlight new risks and develop technical solutions, so these novel data have broad relevance. The findings should help to improve how researchers, private sector executives, and policymakers think about regulations, governance frameworks, guiding principles, and national and international governance strategies for AI.
UR - http://www.scopus.com/inward/record.url?scp=85137851263&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85137851263&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85137851263
T3 - IJCAI International Joint Conference on Artificial Intelligence
SP - 5787
EP - 5791
BT - Proceedings of the 31st International Joint Conference on Artificial Intelligence, IJCAI 2022
A2 - De Raedt, Luc
PB - International Joint Conferences on Artificial Intelligence
Y2 - 23 July 2022 through 29 July 2022
ER -