Generative artificial intelligence (AI) tools have exploded in popularity in the past year.

Generative AI – powered by large language models (LLMs) that can analyze and generate text – instantly produces new content, from text to images, based on user prompts. This technology helps employees perform routine tasks quickly and efficiently, freeing them up to focus on high-value work. Rather than replacing jobs and human judgment, generative AI augments human expertise and improves overall efficacy and productivity.

KPMG in Canada research earlier this year found that one in five Canadians is already using generative AI platforms.1 Just over half said they save up to five hours a week, and more than two-thirds said the time saved by using generative AI tools allowed them to take on additional work they otherwise wouldn’t have had the capacity for.2

According to KPMG International’s newly released 2023 Annual CEO Outlook Survey, 70 per cent of global CEOs are making generative AI their top investment priority in the medium term, with over half (52 per cent) expecting a return on investment (ROI) in three to five years. In Canada, 75 per cent of the CEOs at some of the country’s biggest organizations are also investing heavily in this technology, with 55 per cent expecting to see an ROI in three to five years.3

These CEOs identified ethical challenges, including bias in datasets, a lack of regulation, the cost of implementation, and a shortage of technical capability and skills, as the major barriers to adopting generative AI in their organizations.

Recently, the U.S. Space Force temporarily banned the use of web-based generative AI tools and the LLMs that power them on government systems, citing concerns over cybersecurity, data handling, and procurement requirements, according to a Bloomberg News report.4 Bloomberg also reported that the Space Force intends to release new guidance within 30 days and that the tools are not to be used unless specifically approved.

This speaks to the need for organizations to get ahead of their employees’ use of these tools.

With so many people experimenting with this technology and using it in their work, organizations need to manage the risks by developing responsible AI frameworks and educating their employees. This is the only way to govern AI use, control access, and empower their people.

Yet only about a third (32 per cent) of Canadian small- and medium-sized businesses recently told KPMG that they have developed, or are in the process of developing, generative AI policies, controls, and guardrails. Another 46 per cent somewhat agreed, suggesting they are still in the early stages of figuring this out.

At least the risks are being recognized – and a start has been made on managing them.

For the aerospace and defence (A&D) industry, the technology is opening up new applications, such as virtual training environments, military scenario simulation, operational risk assessment, and resource allocation.

In August, the Pentagon created a generative AI task force to analyze and integrate LLM tools across the U.S. Defense Department and has already found 200 potential uses for them, according to Bloomberg News.5 Its experiments have focused on developing data integration and digital platforms across the military.6 The goal is to use AI-enabled data in decision-making, sensors, and firepower. The Pentagon is inviting industry and academia to a Defense Data and AI Symposium in Washington in February to determine the viable uses of LLMs and explore the future of data analytics and AI.7

The overriding concern – in the private and public sectors alike – is the phenomenon called hallucination, in which the AI software fabricates information or delivers incorrect results not backed by real-world data. AI hallucinations can take the form of false content, news, or information about people, events, or facts. The spread of such misinformation can have potentially catastrophic results.

While the technology holds much promise and has already shown its potential, the defence industry will need to wrap it tightly in a responsible AI framework and governance model.

There are many examples of traditional AI already being developed in the defence industry – ranging from the AI-enabled Aegis combat system for the U.S. Navy and the U.S. Army’s Tactical Intelligence Targeting Access Node (TITAN) to the Future Tactical Unmanned Aircraft System (FTUAS). Many other use cases are being developed in the defence sector, but these examples demonstrate the need for continuous innovation, and the opportunities for partnering among industry players, to deliver quickly while balancing the potential risks.

Generative AI is different. The publicly available tools heighten privacy and security concerns. The defence industry will need programs that are traceable, transparent, and importantly, private, with strict protocols on access.

The stakes are simply too high to proceed without them.


1. “One in five Canadians using generative AI platforms,” KPMG in Canada
2. Ibid.
3. “CEO Outlook – Canadian insights,” KPMG in Canada
4. “US Space Force Pauses Generative AI Use Based on Security Concerns,” Bloomberg News, October 11, 2023
5. “DOD Announces Establishment of Generative AI Task Force,” U.S. Department of Defense, August 10, 2023; “Pentagon Urges AI Companies to Share More About Their Technology,” Bloomberg News, September 29, 2023
6. “The US Military Is Taking Generative AI Out for a Spin,” Bloomberg News, July 5, 2023
7. Advantage DOD24 Defense Data & AI Symposium, February 20–22, 2024
