(Wired) When politicians, tech executives, and researchers gathered in the UK last week to discuss the risks of artificial intelligence, one prominent worry was that algorithms might someday turn against their human masters. More quietly, the group made progress on controlling the use of AI for military ends.
On November 1, at the US embassy in London, US vice president Kamala Harris announced a range of AI initiatives, and her warnings about the threat AI poses to human rights and democratic values got people’s attention. But she also revealed a declaration, signed by 31 nations, that sets guardrails around military use of AI. It commits signatories to use legal reviews and training to ensure military AI stays within international law, to develop the technology cautiously and transparently, to avoid unintended bias in systems that use AI, and to continue discussing how the technology can be developed and deployed responsibly.