Source: Bloomberg
US intelligence agencies are grappling with a difficult new challenge: making artificial intelligence (AI) safe and reliable for American spies.
A department within the Office of the Director of National Intelligence is enlisting companies and universities to help harness rapidly advancing AI technology to keep pace with global competitors such as China. The challenge is to ensure the technology doesn't open back doors into, or generate false data about, the country's most closely held secrets.
"The intelligence community wants to use the large language models that already exist, but there are a lot of unknowns," said Tim McKinnon, a data scientist who runs one of the projects at the Office of the Director of National Intelligence. "The ultimate goal is to be able to work with a trusted model."
The U.S. military and intelligence community are eager to harness AI's power in their competition with China, and the focus on reliability and security is part of that effort. As the government and its contractors embrace the emerging technology, AI hiring in the Washington region has risen sharply. The most pressing concerns center on large language models.
"The intelligence community has a healthy mix of skepticism and enthusiasm about artificial intelligence," said Emily Harding, director of the Intelligence, National Security, and Technology Program at the Center for Strategic and International Studies. Analysts could use it to process vast amounts of information, she said, but they doubt the reliability of current models.
"This is a tool in the earliest stages of use," she said.
Nand Mulchandani, chief technology officer of the Central Intelligence Agency, believes AI can boost productivity by digesting huge volumes of content and spotting patterns that are hard for humans to discern. He also sees it as a way to compete with China's larger ranks of intelligence personnel.
"Humans are great, but they're hard to scale," Mulchandani said in an interview. "Using technology to help humans scale is smart business."
American spy agencies have already begun testing their own AI projects. Bloomberg News reported in September that the CIA is preparing to launch a ChatGPT-like tool to give analysts better access to open-source intelligence.
McKinnon said AI is vulnerable to both insider threats and outside interference. Those threats could take the form of humans trying to trick the system into leaking classified information, known as "jailbreaking" the model, or, conversely, a compromised system "trying to coax information out of humans that it shouldn't obtain."
McKinnon's Bengal program has collected proposals from several companies, including a subsidiary of Amazon, but he declined to disclose whether it is working with any particular company.
The project aims to find ways to handle potentially biased or toxic output, thereby reducing some of AI's risks.
"Only a few models are publicly available at the moment, but they will become more common, and the way they are trained may introduce bias," McKinnon said. "If a model has been poisoned, we want to be able to mitigate that."