Artificial Intelligence can streamline public comment for federal agencies

“We need you to get out there and — for once in your life — focus your indiscriminate rage in a useful direction,” comedian John Oliver told the audience of his TV show Last Week Tonight.

Oliver’s urging of viewers to exercise their civic rights came during the Federal Communications Commission’s 2014 public comment period for the pending Net Neutrality rules, which would have a profound impact on the future development of the Internet. Within hours of the TV host’s rant, the FCC’s online comment system crashed, highlighting just how ill-prepared it was to handle any meaningful level of civic participation.

The FCC’s schedule for considering comments on the Net Neutrality rules was consequently delayed significantly.

Six hundred staff lawyers at the Commission subsequently spent nearly nine months gathering, dividing up, reading, and categorizing what eventually totaled 4 million public responses to the proposed rules, at an estimated cost to taxpayers of $4 million.

Prior to this technological crisis, the Commission had elected not to participate in the federal government’s decade-old Regulations.gov portal, which aims to provide a standardized system for notifying the public about proposed rules and for accepting related comments. Of the approximately 300 existing federal agencies, around 120 besides the FCC have to date also elected not to participate in Regulations.gov’s attempt to rationalize the public notice and comment process.

Notwithstanding its insistence on a “proprietary” public comment system, the FCC was the first federal agency to enter discussions with my company, Notice and Comment, about ways to utilize advanced analytics and cognitive computing to accelerate and expand the Commission’s internal capacity.

What became immediately clear to me was that — although not impossible to overcome — the lack of consistency and shared best practices across all federal agencies in accepting and reviewing public comments was a serious impediment. The promise of Natural Language Processing and cognitive computing to make the public comment process light years faster and more transparent becomes that much more difficult without a consensus among federal agencies on what type of data is collected – and how.

“There is a whole bunch of work we have to do around getting government to be more customer friendly and making it at least as easy to file your taxes as it is to order a pizza or buy an airline ticket,” President Obama recently said in an interview with WIRED. “Whether it’s encouraging people to vote or dislodging Big Data so that people can use it more easily, or getting their forms processed online more simply — there’s a huge amount of work to drag the federal government and state governments and local governments into the 21st century.”

To that sentiment, I say, “Amen.”

I would also add that in addition to making government more customer friendly, the Office of Science and Technology Policy should initiate a broad conversation now among thought leaders from the private and public sectors concerning which information to include in the critical comment-review process.

Currently, only comments formally submitted to an agency and signed by individuals are taken into account, with little additional metadata required — not even commenters’ zip codes. Just this one basic data point — well short of street-address precision — could, at no additional cost to taxpayers, enable the identification of communities with particular concerns about a proposed regulation.
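To illustrate, here is a minimal sketch — the comment records and the `zip` field are hypothetical, invented for this example — of how that single extra data point could surface geographic clusters of concern:

```python
from collections import Counter

# Hypothetical sample: each formal comment tagged with the commenter's
# five-digit zip code (the single extra metadata field proposed above).
comments = [
    {"zip": "20001", "text": "This rule would hurt local providers."},
    {"zip": "20001", "text": "Please reconsider the proposed fees."},
    {"zip": "94103", "text": "I support the rule as written."},
    {"zip": "20001", "text": "The compliance burden is too high."},
]

def comments_by_zip(comments):
    """Count comments per zip code to surface geographic clusters."""
    return Counter(c["zip"] for c in comments)

counts = comments_by_zip(comments)
# Zip codes with a disproportionate share of comments may indicate
# communities with concentrated concerns about the proposal.
print(counts.most_common(1))  # → [('20001', 3)]
```

A real deployment would, of course, normalize and validate the zip codes and compare counts against population baselines; this sketch only shows that the aggregation itself is trivial once the field exists.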

Just last week, the President’s National Science and Technology Council released a new report titled “Preparing for the Future of Artificial Intelligence.” Recognizing the profound implications of this emerging technology, the White House now proposes spending billions of dollars to thoroughly assess Artificial Intelligence’s promises and pitfalls. The regulatory system rightly received specific attention in the report, mostly in terms of the government’s role in encouraging AI technology innovation while also ensuring economic fairness and public safety.

I would suggest expanding the discussion around Artificial Intelligence and regulatory processes to include how the technology should be leveraged to ensure fairness and responsiveness in the very basic processes of rulemaking – in particular public notices and comments. These technologies could also enable us to consider not just public comments formally submitted to an agency, but the entire universe of statements made through social media posts, blogs, chat boards — and conceivably every other electronic channel of public communication.

Obviously, an anonymous comment on the Internet should not carry the same credibility as a formally submitted, personally signed statement, just as sworn testimony in court holds far greater weight than a grapevine rumor. But so much public discussion today occurs on Facebook pages, in Tweets, in news website comment sections, and beyond. Anonymous speech enjoys protection under the First Amendment, based on a justified expectation that certain sincere statements of sentiment might result in unfair retribution from the government.

Should we simply ignore the valuable insights about actual public sentiment on specific issues made possible through the power of Artificial Intelligence, which can ascertain meaning from an otherwise unfathomable ocean of relevant public conversations? With certain qualifications, I believe Artificial Intelligence, or AI, should absolutely be employed in the critical effort to gain insights from public comments – signed or anonymous.
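As a toy illustration of that qualified approach — every word list, weight, and field name below is invented for the example, and real systems would use trained NLP models rather than keyword matching — public statements could be scored for crude sentiment and then weighted by source credibility, so a signed formal comment counts more than an anonymous post:

```python
# Toy illustration only: word lists and weights are invented assumptions.
POSITIVE = {"support", "approve", "benefit"}
NEGATIVE = {"oppose", "harm", "unfair"}

# Hypothetical credibility weights: a signed formal comment counts more
# than an anonymous social media post, as argued above.
SOURCE_WEIGHT = {"formal_signed": 1.0, "social_anonymous": 0.3}

def weighted_sentiment(comments):
    """Aggregate a crude net sentiment score, weighting each comment by source."""
    total = 0.0
    for c in comments:
        words = set(c["text"].lower().split())
        polarity = len(words & POSITIVE) - len(words & NEGATIVE)
        total += polarity * SOURCE_WEIGHT[c["source"]]
    return total

comments = [
    {"source": "formal_signed", "text": "I oppose this rule; it will harm consumers."},
    {"source": "social_anonymous", "text": "I support the rule."},
]
print(weighted_sentiment(comments))  # crude net-negative score, ≈ -1.7
```

The point is not the arithmetic but the design question it exposes: the weights themselves encode a policy judgment about how much an anonymous voice should count, which is exactly the discussion I argue should happen in public.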

“In the criminal justice system, some of the biggest concerns with Big Data are the lack of data and the lack of quality data,” the NSTC report authors state. “AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias.” As a former federal criminal prosecutor and defense attorney, I am well acquainted with the absolute necessity of weighing the relative value of various forms of evidence — or, in this case, data.

If deployed appropriately — consistent with our Constitutional values — AI offers the most promising technology to access new forms of intelligence and knowledge from the full range of unstructured data generated by all digital forums for public debate.

Precisely what data about public sentiment regarding government policies should be obtained from which sources, and what value to attach to each source type, should in my opinion be part of the larger public discussion currently forming around AI and governance. Without a concerted effort to artfully deploy advanced cognitive computing and NLP technologies in the public notice and comment process, we can anticipate many repeat episodes of Last Week Tonight.

Davis is the founder and CEO of Notice and Comment Inc. A former federal prosecutor, he founded the company while serving as the debate coach for the nationally renowned Howard University debate team.


 

 The views expressed by Contributors are their own and are not the views of The Hill.


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

 
