Many Artificial Intelligence Initiatives Included in the NDAA


The NDAA guidelines reestablish an artificial intelligence advisor to the president and push education initiatives to create a tech-savvy workforce.

There’s plenty of debate surrounding why the USA’s current regulatory stance on artificial intelligence (AI) and cybersecurity remains fragmented. Regardless of your thoughts on the matter, the recently passed National Defense Authorization Act (NDAA) includes quite a few AI- and cybersecurity-driven initiatives for both military and non-military entities.

Why use the NDAA for artificial intelligence funding?

It’s common to attach provisions to bills that lawmakers know must pass by a certain time, and the NDAA is one such bill. It faces a hard deadline every year; if it fails, the country’s military loses funding. That makes it a popular vehicle for measures that don’t always pass on their own. (This year’s bill was initially vetoed, but the veto was overridden on January 1.)

The bill runs roughly 4,500 pages. Along with a few different initiatives, one move in particular outlines both the military’s and the broader government’s new interest in artificial intelligence.

Military-specific artificial intelligence gets elevated

One of the biggest moves in the bill concerns the Joint AI Center (JAIC). The center shifts from the supervision of the DoD’s CIO to the deputy secretary of defense, placing it higher in the DoD hierarchy and possibly underscoring just how crucial new AI and cybersecurity initiatives are to the Department of Defense.

To that end, the JAIC is the Department of Defense’s (DoD) AI Center of Excellence that provides expertise to help the Department harness the game-changing power of AI. The mission of the JAIC is to transform the DoD by accelerating the delivery and adoption of artificial intelligence. The goal is to use AI to solve large and complex problem sets that span multiple services, then ensure the Services and Components have real-time access to ever-improving libraries of data sets and tools. 

The center will also receive its own oversight board, matching other bill provisions dealing with AI ethics, and will soon gain acquisition authority. It will produce biannual reports on its work and its integration with other notable agencies.

The secretary of defense will also assess whether the DoD can use AI ethically, covering both acquired and internally developed technologies. The assessment must happen within 180 days of the bill’s passage, creating a pressing deadline for addressing the ethics of new technologies and the often-controversial military use of AI.

The DoD will receive a steering committee on emerging technology as well as new hiring guidelines for AI technologists. The department will also take on five new AI-driven initiatives designed to improve efficiency.

Massive cybersecurity attention

The second major provision in the bill is a substantial package of cybersecurity legislation. The Cyberspace Solarium Commission worked on quite a few measures that made it into the bill’s final version. The bill creates a White House cyber director position and gives the Cybersecurity and Infrastructure Security Agency (CISA) more authority for threat hunting.

It directs the executive branch to conduct “continuity of the economy” planning to protect critical economic infrastructure in the case of cyberattacks. It also establishes a joint cyber planning office at CISA.

The Cybersecurity Maturity Model Certification (CMMC) will fall under Government Accountability Office oversight, and the government will require regular briefings from the DoD on its progress. CMMC is the DoD’s certification framework for contractor cybersecurity, and it affects anyone in the defense contract supply chain.

Non-DoD provisions could boost innovation

Entities outside the Department of Defense will see new initiatives as well. The National AI Initiative aims to reestablish the United States as a leading authority on and provider of artificial intelligence. The initiative will coordinate research, development, and deployment of new artificial intelligence programs across the DoD and civilian agencies.

This coordination should help bring coherence and consistency to research and development. In the past, critics have cited a lack of realistic and workable regulations as a clear reason the United States has fallen behind in AI development.

The initiative will advise future presidents on the state of AI within the country to strengthen competitiveness and leadership. The country can expect more training initiatives and regular updates on the science itself. The initiative will also lead and coordinate strategic partnerships and international collaboration with key allies, bringing those opportunities back to the US economy.

AI bias is a major concern among businesses and US citizens, so the National AI Initiative Advisory Committee will also create a subcommittee on AI and law enforcement. Its findings on data security and legal standards could affect how businesses handle their own data security in the future.

The National Science Foundation will run awards, competitions, grants, and other incentives to develop trustworthy AI. The country is betting heavily on new initiatives to increase trust among US consumers as AI becomes a more important part of our lives.

NIST will expand its mission to create frameworks and standards for AI adoption. NIST guidelines already offer companies a framework for assessing cybersecurity. The updates will help develop trustworthy AI and spell out a pathway for AI adoption that consumers will trust and embrace.

New initiatives to reestablish US innovation

As countries scramble for first place in AI readiness, these initiatives aim to close some of the key gaps behind the US’s lagging authority. The NDAA guidelines reestablish an AI advisor to the president and push education initiatives to create a tech-savvy workforce.

The NDAA also helps create guidelines for businesses already rushing to adopt AI-driven initiatives, providing critical guidance for cybersecurity and sustainability frameworks. Between training programs and NIST frameworks, businesses could see a new era of trustworthy and ethical AI, the sort that creates real insights and efficiency while mitigating risk.

Other countries are investing heavily in AI development, so new and expanded provisions will help secure the United States’ place as a world leader in AI. Government funding and collaboration with civilian researchers and development teams offer one way for the US to remain truly competitive in new technology, and the presence of such a robust body of AI-focused legislation suggests lawmakers are making this a priority.


About Elizabeth Wallace

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do.
