AI and the Future of Hacking

The introduction of AI tools into business workflows can make available information that is even more valuable to hackers, and even harder to protect from them.

AI seems well on its way to transforming IT operations across industries, from the smallest bootstrapped start-ups to huge legacy corporations. In some ways, the advent of AI has been an equalizing force: because AI is still in its infancy, companies of every size are at a similarly early stage in exploring how to leverage these new tools. Most companies are also on equal footing when it comes to devising the new security measures that must accompany the use of AI both inside and outside the workplace.

I believe that, as with the disruptive technologies that came before it, a major concern in that exploration is striking the right balance between innovation and information security. As much as AI has the potential to enhance business operations, it has equal potential to expose businesses to danger. CTOs like me should approach AI adoption carefully because even as AI revolutionizes the way IT leaders conduct business, it will similarly revolutionize the way hackers and other bad actors strike.

Hackers have long targeted data as their “loot,” be it proprietary company data that can be leveraged against business competitors or personal data that can grant access to individuals’ wealth (among other things). The theft and exploitation of data is not going to stop with AI.

Hacks of Data Collected by AI

Because AI both learns from data and is used to handle and parse large amounts of data, its rise could expose more, and different kinds of, data for hackers to target. In other words, introducing AI tools into business workflows can lead people to make available information that is even more valuable to hackers, and even harder to protect from them.

When anyone, whether a company decision-maker or a private individual, chooses to use AI, they need to be aware of the risks of giving data to an AI program and be confident in that decision. In a corporate setting, no single team member should make this decision alone. An informed group, including not only the CTO but also representatives of other key functions like marketing and legal, should help steer all decisions regarding AI tools.

See also: Cybersecurity Will Shift in 2023 Thanks to AI

Hacks Facilitated by AI

It’s my opinion that those aiming to craft a responsible AI use policy in a company’s best interests can benefit from understanding the shifting threats they’re up against. While AI may not change what hackers target (information will likely remain a valuable commodity), it will almost certainly change how they target it. After all, AI is nothing if not a powerful tool for automating all sorts of analytic processes at scale, and that includes nefarious ones. So while many CTOs are, naturally, eager to make the most of AI’s many potential use cases, it behooves us all to remember that hackers are, too.

One of the goals many developers pursue as they build AI tools is to make these tools appear as human as possible. Large language models (LLMs) like ChatGPT can emulate not just fluency in language but tone and even engaging (if not always accurate) conversation. At its most successful, this could mean an AI able to pass the Turing Test, proposed by computer scientist Alan Turing to gauge a machine’s ability to exhibit behavior indistinguishable from a human’s; passing the Turing Test means the machine can trick an unwitting human into thinking they’re conversing with another person.

This has significant implications for the scale and effectiveness of phishing attacks. Phishing entails impersonating an institution or individual to deceive a victim into turning over money, sensitive information, or confidential access (via passwords, etc.). Modern phishing scams, like the IRS hoaxes that have ratcheted up in recent years, are already light-years more sophisticated than the “Nigerian prince” schemes of the early 2000s, but they are onerous and time-intensive to pull off convincingly. AI may provide the solution hackers are looking for and CTOs should be dreading. A tool that can rapidly parse the information needed to pull off an advanced imitation of a person representing an institution (or anyone else, for that matter) opens the door to many simultaneous conversations that convince victims to give up information. A relatively small-scale hacking operation could thus commit more, and more advanced, fraud than ever before.

See also: 3 Things to Do to Keep Safe from Cybercrimes

But it doesn’t end there. If a team member at your organization is interacting (or “talking”) with an LLM-based AI, they might be giving it information that creates risk for your company without realizing it, even if they are careful to keep sensitive data close to the vest. In talking with any person, even about ostensibly anodyne subjects, an AI is learning about that person: the way they chat, the kinds of topics they’re interested in, and more.

In fact, even if your team never interacts with AI, there is still a lot of relatively unprotected data about all of us floating around the internet and available for purchase, from browsing histories to email subscriptions to other information that can be leveraged for manipulation. This is potentially true for all team members, up to and including your company’s CEO (and even the CTO). It’s not hard to imagine a hacker either buying personal data with which to train an AI or using what an AI learns about a person through direct interactions to impersonate them.

This is frightening enough on a personal level; just imagine what information such a machine could gather from your friends and family members when they think they’re chatting with you. On a professional level, however, it creates the opportunity for employees, responding to directives that appear to come from colleagues, to unwittingly carry out tasks on behalf of hackers that could jeopardize your company. Attacks built on this information would be more tailored, and thus more complex and advanced, than the typical phishing we see today.

This advancement carries over to the more basic and granular elements of hacking as well. As of now, a hacker who wants to gather information about one of your team members, or about your organization as a whole, has to be ready to put in some time. With AI, much of the tedium of reconnaissance can be streamlined by algorithms that digest huge swaths of information almost instantly. Moreover, AI can analyze data as it’s collected, which for hackers means identifying vulnerabilities and points of attack on the fly. All of this enables hackers to launch more attacks, at larger scale, more quickly.

Relatedly, if an AI can glean information and identify and analyze vulnerabilities in the blink of an eye, it can also help craft ways around defenses faster than anything we’ve seen before. Typically, a hacker who wants to install malware on a computer, server, or network must first a) conduct reconnaissance to find the chinks in the armor, b) determine what kind of malware (viruses, ransomware, spyware, etc.) can best exploit those vulnerabilities, and then c) write the software. AI’s powerful analysis and extrapolation capabilities mean this three-step process could happen almost instantly. The result could be highly tailored, nearly imperceptible malware that harvests information and evades antivirus efforts almost in real time.

This is why I say that CTOs and CIOs need to be thinking several steps ahead of hackers. All IT leaders need to understand what makes a company vulnerable to these attacks and shore up their defenses before the new wave of AI-enabled hacking hits the shore.

See also: Study Notes Vulnerabilities in Code-Generating AI Systems

Navigating the Future in the Present

Conversation about AI’s evolution is already so ubiquitous that you almost wouldn’t know we’re still in its earliest days. Previous technologies have tended to be further along in development and adoption before reaching the level of layperson familiarity that AI already enjoys. But I believe we’re still in a bit of a “Wild West” when it comes to AI. Because it’s such an early-stage phenomenon, there is not yet enough understanding of how to respond to the hacks AI will enable at scale, and it’s even less clear how to prevent those attacks. And while the new technology is incredibly exciting and will open untold doors for CIOs and CTOs across industries, we should remember that hackers are just as excited as we are.

About Frank Laura

Frank Laura has nearly 30 years of technology experience in industries ranging from banking and loans to marketing and promotions. Frank joined the EngageSmart team in 2019 as Chief Technology Officer and has helped the company cement its position as a leader in customer engagement software as it went public in September 2021. Before EngageSmart, Frank served as Chief Information Officer at Progressive Leasing, Entertainment Publications, and Quicken Loans. Frank’s specialties include systems architecture, technology planning, data center development, software engineering, technical operations, and IT governance.
