Artificial Intelligence: The Major Cyber Threats for 2024 (Reader Forum)

What are the major cyber threats related to AI?

Artificial intelligence has been hailed by many as the “future of work” thanks to its ability to assist workers in a variety of industries and streamline their operations. With AI, workers can automate the more menial tasks of their jobs, freeing them to focus their time and effort on the aspects that genuinely require their attention and improving their efficiency and productivity. Still, critics continue to raise questions about how AI could pose a threat to society.

But where does the truth of artificial intelligence lie? Is it a paradigm shift we should be excited to embrace? Or is it a sinister threat that we must avoid?

The truth of this technology lies somewhere in between. Many of these critics do not realize that the threat is not inherent to the technology itself but comes from wrongdoers abusing its capabilities.

Virtually every innovation in history has experienced the same phenomenon: if there is a way for bad actors to leverage a new technology for their own nefarious gain, they will find it. While we must address legitimate concerns about the threats AI poses, we must not let them prevent innovators from embracing the technology in ways that can genuinely help society. Instead, it becomes a matter of identifying and mitigating the negative use cases of AI so that we can create an ecosystem where positive use cases can thrive.

The first step in doing so is to better understand the various cyber threats that leverage AI technology.

How scammers abuse AI to improve their schemes

Many of AI’s most visible use cases today are of models in the generative AI category, which synthesize written or audiovisual material to create new content. While there are numerous examples of ways this technology can be used for good — drafting emails, conducting more efficient research, or powering customer service chatbots, for instance — there are also malicious uses of this innovation that can cause real harm.

One of the main ways in which wrongdoers are weaponizing generative AI is to improve phishing schemes. In these schemes, a scammer attempts to convince a victim to reveal personal information by impersonating a trusted source, such as a friend, loved one, coworker, boss, or business partner.

In the past, canny and tech-savvy individuals could spot these fraudulent messages with relative ease by picking up on mistakes like grammatical errors or inconsistencies in voice. However, generative AI has allowed scammers to create more convincing written materials than ever before.

Today, a scammer can train a generative AI model on a library of messages written by the person they hope to impersonate and then prompt the model to generate a message in their style, effectively mimicking their diction and syntax. Due to these advanced tools, distinguishing between authentic and fraudulent messages is becoming much more difficult.

To make matters worse, written text is not the only thing generative AI is getting better at creating: “deepfakes” — fraudulent images, videos, and audio — are also becoming an increasingly pervasive threat. With deepfakes, a scammer can feed a person’s likeness into an AI model and use it to create images and audio for any number of illegitimate purposes, from reputational damage and blackmail to the manipulation of elections or financial markets, any of which can have severe consequences.

How AI’s data analytics capabilities are being leveraged to automate attacks

A second capability of artificial intelligence that wrongdoers are exploiting is advanced data analytics. An AI model can process large data sets nearly instantaneously, giving it far greater efficiency (and often accuracy) than a human analyst. Although these capabilities have numerous positive implications, in the wrong hands they can do significant damage.

Using AI’s capacity for advanced data analysis, hackers have found ways to automate cyber attacks, training models to constantly probe a network for vulnerabilities and identify weaknesses to exploit faster than network operators can remedy them. As a result, hackers operate with greater efficiency, raising not only the volume of their attacks but also the difficulty of identifying and thwarting them.
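To make the mechanics concrete, the sketch below shows how little code a basic probe loop requires. This is a deliberately simple Python illustration, not real attack tooling: the hosts and ports are hypothetical placeholders, the loop should only ever be pointed at infrastructure you are authorized to test, and an AI-driven attacker would wrap far more sophisticated logic around the same core idea.

```python
import socket

# Hypothetical targets: hosts and ports YOU are authorized to test.
# The addresses below are TEST-NET placeholders, not real systems.
HOSTS = ["192.0.2.10", "192.0.2.11"]
PORTS = [22, 80, 443, 3389]  # common services worth auditing

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if the TCP port accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

for host in HOSTS:
    open_ports = [p for p in PORTS if probe(host, p)]
    if open_ports:
        print(f"{host}: open ports {open_ports}, review whether each is intended")
```

The same loop, run by a network operator against their own hosts, doubles as a quick audit of which services are actually exposed.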

Depending on the target of these automated attacks, the damage could be catastrophic. The world we live in today is increasingly interconnected and runs on computers, giving hackers a way into entire industries.

For example, if a hacker automates an attack against one link in a supply chain, the consequences could reverberate throughout the entire industry. If the target is critical infrastructure, the attack could cause financial ruin or even loss of life. With shipping routes, air traffic control, traffic lights, telecommunications systems, power grids, and financial markets all among the potential targets for AI-powered attacks, finding a solution to these threats is an urgent task.

Thwarting wrongdoers to create a safer future for AI

Thankfully, AI technology has matured to the point where network operators can take a “fight fire with fire” approach, repurposing many of the tools wrongdoers use to execute these threats so that they serve more positive ends.

For instance, the models that hackers train to probe networks for weaknesses can be used by network operators to identify areas that require repair or strengthening. AI tools are also being developed to help evaluate the legitimacy of text, video, and audio, allowing people to distinguish between authentic and AI-generated materials.
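As a hedged illustration of what such detection tooling can look like under the hood, the sketch below trains a tiny text classifier to score messages as suspicious. Everything in it is a toy assumption: the four training messages, the labels, and the choice of TF-IDF features with logistic regression. A production system would need thousands of labeled examples and far more capable models, but the underlying pattern of featurizing text, training a model, and scoring new messages is the same.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (hypothetical); real systems need large labeled corpora.
messages = [
    "Please review the attached Q3 budget before Friday's meeting.",
    "Your account is locked! Verify your password here immediately.",
    "Lunch at noon? The usual place works for me.",
    "URGENT: wire the payment today or the contract is void.",
]
labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = suspicious

# TF-IDF features plus logistic regression: a minimal text classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

incoming = "Confirm your password now to avoid account suspension."
score = model.predict_proba([incoming])[0][1]
print(f"Suspicion score: {score:.2f}")  # closer to 1.0 means flag for review
```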

Still, the most important thing for us to achieve regarding artificial intelligence is greater awareness. The better we understand these potential threats, the better equipped we are to fight them.

For example, understanding fundamental cybersecurity practices — including strong password use and access control — is essential to defend against cyber attacks. Similarly, being able to identify potentially suspicious messages can help people avoid falling victim to phishing schemes. 
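To ground “strong password use” in something concrete, here is a minimal sketch of the kind of policy check an application might run at signup. The specific rules below (minimum length and character classes) are illustrative assumptions rather than an authoritative standard; in practice, length and uniqueness matter more than symbol requirements.

```python
import re

# A minimal password-policy check; these rules are illustrative
# assumptions, not an authoritative security standard.
def password_issues(password: str) -> list[str]:
    issues = []
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if not re.search(r"[A-Z]", password):
        issues.append("no uppercase letter")
    if not re.search(r"[a-z]", password):
        issues.append("no lowercase letter")
    if not re.search(r"\d", password):
        issues.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        issues.append("no symbol")
    return issues

for candidate in ["hunter2", "Correct-Horse-Battery-9"]:
    problems = password_issues(candidate)
    print(candidate, "->", "OK" if not problems else ", ".join(problems))
```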

Artificial intelligence can and should be a force for positive change in the world, despite the bad actors who use this powerful tool to hurt people and threaten to undermine its legitimate, beneficial uses. Understanding the threats that stem from the abuse of AI technology will help us fight back against wrongdoers and make AI safer for everyone.

Ed Watal is the founder and principal of Intellibus, an INC 5000 Top 100 Software firm based in Reston, Virginia. He regularly serves as a board advisor to the world’s largest financial institutions, and C-level executives rely on him for IT strategy and architecture thanks to his business acumen and deep IT knowledge. One of Ed’s key projects is BigParser (an Ethical AI Platform and a Data Commons for the World). He has also built and sold several Tech & AI startups. Prior to becoming an entrepreneur, he worked at some of the largest global financial institutions, including RBS, Deutsche Bank, and Citigroup. He is the author of numerous articles and one of the defining books on cloud fundamentals, ‘Cloud Basics.’ Ed has substantial teaching experience and has served as a lecturer at universities globally, including NYU and Stanford. He has been featured on Fox News, Information Week, and NewsNation.
