NSA, CIA senior officials address artificial intelligence threats and opportunities

Last week, two senior United States intelligence officials shared rare insights on artificial intelligence, as they discussed some of the opportunities and threats of this new technological paradigm for their agencies. On Wednesday, Lakshmi Raman, Director of Artificial Intelligence at the Central Intelligence Agency, addressed the topic during an on-stage interview at Politico’s AI & Tech Summit in Washington, DC. On Thursday, the National Security Agency’s outgoing director, Army General Paul Nakasone, discussed the same subject at the National Press Club’s Headliners Luncheon in the US capital.

Nakasone (pictured) noted in his remarks that the US Intelligence Community, as well as the Department of Defense, have been using artificial intelligence for quite some time. Thus, artificial intelligence systems are already integral to managing and analyzing information on a daily basis. In doing so, such systems contribute in important ways to decision-making by the NSA’s human personnel. At the same time, the NSA has been using artificial intelligence to develop and define best-practice guidelines and principles for intelligence methodologies and evaluation.

Currently, the United States maintains a clear advantage in artificial intelligence over its adversaries, Nakasone said. However, that advantage “should not be taken for granted”. As artificial intelligence is increasingly integrated into the day-to-day functions of the intelligence and security enterprise, new risks are emerging from that very use. For this reason, the NSA has launched its new Artificial Intelligence Security Center within its existing Cybersecurity Collaboration Center. The mission of the Cybersecurity Collaboration Center is to develop links with the private sector in the US and its partner nations to “secure emerging technologies” and “harden the US Defense Industrial Base”.

Nakasone added that the decision to create the Artificial Intelligence Security Center resulted from an NSA study, which alerted officials to the national security challenges stemming from adversarial attacks against the artificial intelligence models that are currently in use. These attacks, focusing on sabotage or theft of critical artificial intelligence technologies, could originate from other generative artificial intelligence technologies that are under the command of adversarial actors.

Last Wednesday, the CIA’s Raman discussed some of the ways that artificial intelligence is currently being put to use by her agency to improve its analytical and operational capabilities. Raman noted that the CIA is developing an artificial intelligence chatbot, which is meant to help its analysts refine their research and analytical writing capabilities. Additionally, artificial intelligence systems are being used to analyze quantities of collected data that are too large for human analysts to manage. By devoting artificial intelligence resources to the relatively menial and low-level tasks of data-sifting and sorting, the CIA enables its analysts to dedicate more time to strategic-level products.

At the same time, however, the CIA is concerned about the rapid development of artificial intelligence by nations such as China and Russia, Raman said. New capabilities in artificial intelligence, especially the generative kind, will inevitably provide US adversaries with tools and capabilities that will challenge American national security in the coming years, she concluded.

Author: Joseph Fitsanakis | Date: 02 October 2023 | Permalink

China assesses emotions of subjects using AI technology that monitors skin pores

Police stations in China are reportedly experimenting with a new technology that uses artificial intelligence to detect the emotions of subjects, and even monitors their skin pores, according to a source who spoke to the BBC. The source is a software engineer, whose identity has not been disclosed by the BBC. He said he helped install the controversial technology in a number of police stations in the Chinese region of Xinjiang.

Xinjiang, China’s most impoverished region, is home to 12 million Uighurs, most of whom are Muslims. The Chinese state is currently engaged in a campaign to quell separatist views among some Uighurs, while forcibly integrating the general population into mainstream Chinese culture through a state-run assimilation program. It is believed that at least a million Uighurs are currently living in detention camps run by the Communist Party of China, ostensibly for “re-education”. Xinjiang is often referred to as the world’s most heavily surveilled region.

According to the BBC’s Panorama program, patents filed by Chinese companies point to the development of facial recognition programs that can distinguish subjects by ethnicity, and appear to be “specifically designed to identify Uighur people”. Among them are artificial intelligence systems that are able to detect facial micro-expressions, so as to analyze the emotions of subjects. According to Panorama, some systems even monitor “minute changes” in skin pores on the face of subjects, as a means of detecting micro-expressions. The software then allegedly produces a pie chart that details a subject’s state of mind.

The BBC said it reached out to the Chinese embassy in London, which claimed to have “no knowledge” of these alleged surveillance programs. In a statement issued on Tuesday, the Chinese embassy said that “the political, economic and social rights and freedom of religious belief in all ethnic groups in Xinjiang are fully guaranteed”. It added that people in Xinjiang “live in harmony and enjoy a stable and peaceful life with no restriction to personal freedom”.

Author: Joseph Fitsanakis | Date: 25 May 2021 | Permalink

British SIGINT agency vows to integrate artificial intelligence into its operations

Britain’s Government Communications Headquarters, one of the world’s most advanced signals intelligence agencies, has published a position paper that vows to embrace artificial intelligence in its operations. For over 100 years, GCHQ, as it is known, has been in charge of spying on global communications on behalf of the British state, while protecting the government’s own communications systems from foreign espionage. In a report published on Thursday, the agency says it intends to use artificial intelligence (AI) to detect and analyze complex threats, and to defend against AI-enabled security challenges posed by Britain’s adversaries.

The report, entitled “Pioneering a New National Security: The Ethics of AI”, includes a foreword by GCHQ Director, Jeremy Fleming. Fleming was a career officer of the Security Service (MI5) until he became head of GCHQ in 2017. In his introductory note he argues that “technology and data” are ingrained in the structure of GCHQ, and that AI has “the potential […] to transform [the agency’s] future operations”. The report acknowledges that GCHQ has been using AI for some time for functions including intelligence collection and automated translation. But the ability of AI to distinguish patterns in large sets of data in seconds, which would normally take humans months or years to detect, offers a transformational potential that should not be overlooked, it posits.

Security-related applications of AI are endless, says the report. They include measures against online child exploitation, for instance by detecting the methods used by child sex abusers to conceal their identities across multiple online platforms. Another potentially revolutionary application would be mapping global drug- or human-trafficking networks, by analyzing up-to-the-minute financial transactions and money-laundering activities around the world. Illicit activities that take place in the so-called “dark web” could also be mapped and monitored by AI systems, according to the report.

The report also states that GCHQ will seek ways to promote AI-related research and development in the United Kingdom. Its goal will be to establish bridges with industry by funding start-up ventures in AI, it states. Lastly, GCHQ will seek to formulate an ethical code of practice in AI, which will include best-practice guidelines, and will purposely recruit a diverse personnel of engineers, computer and data scientists. Future reports will tackle emerging technologies such as computational science and synthetic biology, among many others, the GCHQ report concludes.

Author: Joseph Fitsanakis | Date: 26 February 2021 | Permalink