AI-Powered Automation Can Be Both a Part of the Problem and Part of the Solution

There are real security concerns that should be addressed ahead of further government adoption of a truly automated future.

Much as a character in the 1967 film The Graduate declared that the future was “plastics,” government today has identified digital technology as key to how it operates now and in the future. Government’s growing focus on technologies like artificial intelligence, machine learning and automated processes is well-documented. While many of these goals are aspirational, some advances already demonstrate the way forward for agency operations.

Take, for example, robotic process automation. Using RPA to automate routine, repetitive and often tedious tasks is increasingly common, and NASA and the General Services Administration are two front-runners in this RPA adoption. In one of the earliest federal government use cases, NASA’s Shared Services Center in 2018 launched RPA “bots” to automate funding distribution that had been approved by humans, track balances and report via spreadsheets back to headquarters. NASA now uses bots to transfer data from encrypted emails to internal systems. GSA employs bots for Section 508 disability contract compliance and to serve invoice notifications, among other uses. 
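
To make the pattern concrete, here is a minimal Python sketch of the kind of rote spreadsheet work such bots take over: reading human-approved funding distributions from a file, tracking remaining balances and writing a summary report back out. The file names and column layout are hypothetical, invented for illustration rather than drawn from NASA's actual systems.

```python
import csv
from collections import defaultdict

# Hypothetical input: one row per human-approved funding distribution.
# Assumed columns (for illustration only): center, approved_amount, spent_amount
def build_balance_report(distributions_csv: str, report_csv: str) -> None:
    balances = defaultdict(float)
    with open(distributions_csv, newline="") as f:
        for row in csv.DictReader(f):
            # Track the remaining balance for each center.
            balances[row["center"]] += (
                float(row["approved_amount"]) - float(row["spent_amount"])
            )

    # "Report via spreadsheet back to headquarters": write a summary CSV.
    with open(report_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["center", "remaining_balance"])
        for center, balance in sorted(balances.items()):
            writer.writerow([center, f"{balance:.2f}"])

if __name__ == "__main__":
    build_balance_report("distributions.csv", "balance_report.csv")
```

Even a sketch this small shows the appeal: the logic is deterministic, auditable and tireless, which is exactly the profile of work agencies want off employees' desks.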

During the COVID pandemic, use of RPA in the form of “chatbots” expanded to help thinly stretched federal personnel grapple with the pivot to telework and the surge in demand for digital services. 

RPA yields measurable benefits for the government, and the deployment of these capabilities will only continue to grow, saving thousands of labor hours and freeing employees to focus on more complex tasks requiring human thinking and judgment.

A Federal Voyage of Discovery

Simple RPA and the use of automation to do rote tasks faster and at scale are already transforming government. But forward-leaning government decision-makers are increasingly focused on intelligent automation (IA) in their vision of an automated future.

Intelligent automation marries automation with increasingly powerful AI capabilities in areas such as data analytics, machine vision and natural language processing. Government has been instrumental in driving progress in this space; many of us have seen videos of dog-like robots performing complex tasks (and even dancing!) without realizing it resulted from work initially funded by the Defense Advanced Research Projects Agency.

But myriad issues remain to be addressed before the whole of government follows the early adopters of IA, and now is the time to lay the groundwork for how we will both accelerate and secure the use of these capabilities.

The Government Ownership and Oversight of Data in Artificial Intelligence Act (or GOOD AI) is one such early move toward oversight, aiming to “secure and protect information handled by federal contractors using artificial intelligence (AI) technology, such as biometric data from facial recognition scans.”  It also calls on the Office of Management and Budget to launch an Artificial Intelligence Hygiene Working Group “to ensure that government contractors are securing and using data collected by AI technologies to protect national security and in a way that ensures the privacy and rights of all Americans.”

Securing Automation By Automating to Secure

In light of the potential and power of RPA and especially of IA, there are real security concerns that should be addressed ahead of further government adoption. Complex technologies and ecosystems of networked capabilities require a robust, “defense-in-depth” approach that weaves together overlapping and integrated protections. These protections must go well beyond the already outdated perimeter defenses of the past. 

Part of the answer is to harness AI itself—including intelligent automation—to power ecosystems or platforms of interoperable security across the federal government. AI-powered instrumentation can produce a network of “always-on” sensors generating data that increasingly mature AI can use to discern what normal and abnormal activities look like and flag anomalies in real time. Machine learning can differentiate between “merely abnormal” and truly malicious activity in a fraction of a second. 
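
As a hedged illustration of that anomaly-flagging idea, the sketch below trains scikit-learn's IsolationForest on baseline telemetry and then triages new events into normal, merely abnormal, or likely malicious. The features, data and score threshold are all invented for illustration; a production system would learn these from real sensor data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: one row per event, with numeric features such as
# bytes transferred, hour of activity and failed-auth count (all invented).
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[500.0, 13.0, 0.2], scale=[120.0, 3.0, 0.5],
                      size=(5000, 3))

# Learn what "normal" looks like from the always-on sensors' baseline data.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def triage(event: np.ndarray) -> str:
    # predict() returns 1 for inliers and -1 for anomalies; score_samples()
    # gives a continuous score (lower = more abnormal) that we can threshold
    # to separate "merely abnormal" from likely malicious. The -0.55 cutoff
    # here is invented for illustration.
    if model.predict(event.reshape(1, -1))[0] == 1:
        return "normal"
    score = model.score_samples(event.reshape(1, -1))[0]
    return "likely malicious" if score < -0.55 else "merely abnormal"

print(triage(np.array([480.0, 14.0, 0.0])))   # typical event
print(triage(np.array([9000.0, 3.0, 12.0])))  # extreme outlier, gets flagged
```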

This approach provides broad and potentially even global visibility, leveraging insight gleaned from attacks against other targets. It couples this breadth of view with depth extending as far down as visibility into specific processes running on individual devices. When this visibility is combined with automated processes that can make and implement decisions in sub-second timeframes, agencies and their personnel have the keys to comprehensive security and to successful implementation of the recent federal zero trust maturity model.
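
A minimal sketch of what such sub-second, per-request decisions might look like under a zero trust posture appears below. The signal names and policy rules are hypothetical, not taken from the federal maturity model itself; the point is that each request is evaluated on current evidence rather than trusted once at the perimeter.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    # Hypothetical per-request signals a zero trust policy engine might see.
    device_compliant: bool     # endpoint agent reports a healthy configuration
    anomaly_verdict: str       # "normal" | "merely abnormal" | "likely malicious"
    resource_sensitivity: str  # "low" | "high"

def authorize(ctx: RequestContext) -> str:
    """Decide per request, in-line, rather than trusting a perimeter check."""
    if not ctx.device_compliant or ctx.anomaly_verdict == "likely malicious":
        return "block"
    if ctx.anomaly_verdict == "merely abnormal" and ctx.resource_sensitivity == "high":
        return "step-up-auth"  # demand re-authentication before granting access
    return "allow"

print(authorize(RequestContext(True, "normal", "high")))           # allow
print(authorize(RequestContext(True, "merely abnormal", "high")))  # step-up-auth
print(authorize(RequestContext(False, "normal", "low")))           # block
```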

This concept of an AI-enabled security platform approach minimizes the likelihood of a successful attack on increasingly automated digital technology in government, and can limit the damage whenever a breach does occur. In a “government of tomorrow” equipped with empowered teams and futuristic technologies including intelligent automation, coupling maturing AI/ML with platforms of interoperable data builds on a proven and comprehensive approach—and capitalizes on public-private collaboration. Partnerships that weave together smart solutions, strategies and lessons learned from both government and the private sector can help us better secure an increasingly automated future. 

Jim Richberg is the public sector field CISO and vice president of information security at Fortinet. He is the former National Intelligence Manager for Cyber at the Office of the Director of National Intelligence.

Source: https://www.nextgov.com/ideas/2022/01/ai-powered-automation-can-be-both-part-problem-and-part-solution/360294/
