American Security Council Foundation

Alan W. Dowd is a Senior Fellow with the American Security Council Foundation, where he writes on the full range of topics relating to national defense, foreign policy and international security. Dowd’s commentaries and essays have appeared in Policy Review, Parameters, Military Officer, The American Legion Magazine, The Journal of Diplomacy and International Relations, The Claremont Review of Books, World Politics Review, The Wall Street Journal Europe, The Jerusalem Post, The Financial Times Deutschland, The Washington Times, The Baltimore Sun, The Washington Examiner, The Detroit News, The Sacramento Bee, The Vancouver Sun, The National Post, The Landing Zone, Current, The World & I, The American Enterprise, Fraser Forum, American Outlook, The American and the online editions of Weekly Standard, National Review and American Interest. Beyond his work in opinion journalism, Dowd has served as an adjunct professor and university lecturer; congressional aide; and administrator, researcher and writer at leading think tanks, including the Hudson Institute, Sagamore Institute and Fraser Institute. An award-winning writer, Dowd has been interviewed by Fox News Channel, Cox News Service, The Washington Times, The National Post, the Australian Broadcasting Corporation and numerous radio programs across North America. In addition, his work has been quoted by and/or reprinted in The Guardian, CBS News, BBC News and the Council on Foreign Relations. Dowd holds degrees from Butler University and Indiana University. Follow him at twitter.com/alanwdowd.

ASCF News

Scott Tilley is a Senior Fellow at the American Security Council Foundation, where he writes the “Technical Power” column, focusing on the societal and national security implications of advanced technology in cybersecurity, space, and foreign relations.

He is an emeritus professor at the Florida Institute of Technology. Previously, he was with the University of California, Riverside, Carnegie Mellon University’s Software Engineering Institute, and IBM. His research and teaching were in the areas of computer science, software & systems engineering, educational technology, the design of communication, and business information systems.

He is president and founder of the Center for Technology & Society, president and co-founder of Big Data Florida, past president of INCOSE Space Coast, and a Space Coast Writers’ Guild Fellow.

He has authored over 150 academic papers and has published 28 books (technical and non-technical), most recently Systems Analysis & Design (Cengage, 2020), SPACE (Anthology Alliance, 2019), and Technical Justice (CTS Press, 2019). He wrote the “Technology Today” column for FLORIDA TODAY from 2010 to 2018.

He is a popular public speaker, having delivered numerous keynote presentations and “Tech Talks” for a general audience. Recent examples include the role of big data in the space program, a four-part series on machine learning, and a four-part series on fake news.

He holds a Ph.D. in computer science from the University of Victoria (1995).

Contact him at stilley@cts.today.

European nations may be hesitant to trust AI for cybersecurity

Wednesday, May 13, 2020

Categories: ASCF News National Preparedness Cyber Security

When U.S. leaders talk about the promise of artificial intelligence, one application they regularly discuss is cybersecurity. But experts say European countries have so far taken a more measured approach to AI, fearing the technology is not yet reliable enough to replace human analysts.

Consider France, which, along with the United Kingdom and Germany, has become one of Europe’s AI hubs. According to a report by France Digitale, an organization that advocates for start-ups in France, French startups were using AI 38 percent more than they had a year earlier.

But the advancement of AI in the defense sector has been less prominent in some European countries. That’s in part because such systems need large amounts of data to be reliable, according to Nicolas Arpagian, vice president of strategy and public affairs at Orange Cyberdefense, a France-based company working with Europol and other cybersecurity firms to build strategic and technological countermeasures against cyberattacks.

“It's very difficult to know what the data can be used for, and if you let the computer or if you let the algorithm take decisions [to prevent cyberattacks,] and that's a false positive, you won’t be able to intervene early enough to stop decisions that were taken on the basis of this [erroneous] data detected by the algorithm,” he said.

Orange Cyberdefense’s approach is to train human analysts to detect the behavioral patterns hackers reveal. The company also uses artificial intelligence as an assistant, keeping humans in the lead role.

“You need the analyst, the human being, the human brain and the human experience to deal with and to understand a changing situation,” Arpagian said.

At the same time, pressure from Russia, China and other adversaries in the AI market has pushed the United States to designate more resources for the development of the technology in the defense sector, according to a 2019 Congressional Research Service report. In recent years, China has focused on the development of advanced AI to make faster and well-informed decisions about attacks, the report found. Russia has focused on robotics, although it’s also active in the use of AI in the defense sector.

Moving to use AI in U.S. cybersecurity ops

In February, the Department of Defense adopted five principles to ensure the ethical use of the technology. Secretary of Defense Mark Esper said the United States and its allies must accelerate the adoption of AI and “lead in its national security applications to maintain our strategic position, prevail on future battlefields, and safeguard the rules-based international order.”

In September, Lt. Gen. Jack Shanahan, the director of the Joint Artificial Intelligence Center, said the center’s mission was to accelerate the Pentagon’s adoption and integration of AI in cybersecurity and battlefield operations.

“We are seeing initial momentum across the department in terms of fielding AI-enabled capabilities,” he said on a call with reporters. “It is difficult work, yet it is critically important work. It demands the right combination of tactical urgency and strategic patience.”

The Pentagon has taken the first step toward increasing the use of AI and machine learning in its operations, as implementing AI in cybersecurity operations “is essential for protecting the security of our nation,” according to the department’s formal artificial intelligence strategy released in 2019. The technology will be incorporated to reduce inefficiencies from manual, data-focused tasks and to shift human resources to higher-level reasoning in cybersecurity operations, the strategy states.

Artificial intelligence can play a key role in identifying unknown attacks; human analysts normally know enough about recurring threats to accurately detect familiar cyber risks such as evasion techniques and malware behaviors, said Shimon Oren, head of cybersecurity and threat research at Deep Instinct, an American company that uses AI and deep-learning technology to prevent and detect malware used in cyberattacks.

Oren said artificial intelligence and deep-learning technology are crucial for training systems to make decisions and draw conclusions about new threat scenarios presented to them after training. The technology will free human analysts to do the type of work computers “absolutely cannot do,” he said.

For example, the U.S. intelligence community is looking to fully automate well-defined processes, as AI systems can perform tasks “significantly beyond what was possible only recently, and in some cases, even beyond what humans can achieve,” according to the 2019 Augmenting Intelligence using Machines Initiative.

“It's very hard for [humans], even when we're very experienced and knowledgeable, to extrapolate what might be the next kind of attack, how it might look like, what exactly will the next kind of malware do and how will it go about doing what it's meant to do. And for that reason, exactly we need to rely on AI,” Oren said.

But relying on only one method to detect cyberattacks is a mistake, Arpagian and Oren agreed. Human analysts can easily miss information that hints at an attack, while AI systems often are not mature enough to be as successful as expected, Arpagian said.

Orange Cyberdefense has been focusing on integrating augmented intelligence rather than AI until the latter is developed enough to be meaningful, Arpagian said. The company has faced some criticism from others who have embraced AI fully instead of using the technology as a tool for assistance.

“If you say you are not using artificial intelligence tools [to prevent cyberattacks] you could seem to be a bit old fashioned and outdated,” Arpagian said. “But augmented intelligence is something we need to have and, afterwards, when we have enough data on a specific activity on a very specific domain, then maybe the artificial intelligence will be able to deal with [cyberattacks] on its own.”

Many European countries are not prepared to integrate AI because their intelligence services lack the readiness to properly begin using the technology, according to French civil servant Nicolas Tenzer, who has authored official reports for the French government and has served as a senior consultant to international organizations.

“When it comes to propaganda, for instance, [the intelligence service] is not really trained -- they don’t really know the best way to respond to that,” he said. “The second problem is there must be a true willingness from the government [to use the technology.]”

Tenzer said the lack of readiness and lack of collaboration between agencies will make it difficult to integrate AI in the defense sector to the extent the United States has.

U.S. efforts to implement AI include the American AI Initiative, which President Donald Trump announced in 2019 as an effort to promote the use of the technology in various fields including infrastructure, health and defense.

In 2019, the Department of Defense and the Naval Information Warfare Systems Command posted a challenge to solicit input from industry partners on how to automate the Security Operations Center using artificial intelligence and machine learning, specifically how to detect modern malware strains before an attack. FireEye, an intelligence-led company that provides software to investigate cybersecurity attacks, was awarded $100,000 for providing the best model to detect attacks and respond quickly, according to a March 3 release.

Earlier this year, the Center for Security and Emerging Technology at Georgetown University launched the Cybersecurity and Artificial Intelligence Project to study the overlap between cybersecurity, artificial intelligence and national security. The project, directed by Ben Buchanan, is expected to study how artificial intelligence can be used in offensive as well as defensive cyber operations.

“Technology is fundamental to cyber operations on offense and defense,” Buchanan said. “The reason why AI is important is that there’s just so much data that you need a machine to be able to do the first pass through the data [during offensive and defensive operations.]”

Photo: The use of artificial intelligence in the defense system has spread quickly in the United States, but European countries are not using the technology as much.

Link: https://www.fifthdomain.com/cyber/2020/05/06/european-nations-may-be-hesitant-to-trust-ai-for-cybersecurity/
