Real Security Concerns Are Scarier Than Doomsday Predictions


The metaverse, artificial intelligence (AI) run amok, the singularity … many far-out scenarios have become dinner-table conversation. Will AI take over the world? Will you one day have a computer chip in your brain? These science fiction ideas may never come to fruition, but some of them point to security risks that exist right now.

Nobody can predict the future, but should we worry about any of these issues? And how do we tell a real threat from hype?

The Promise of the Metaverse

If you asked 10 tech-minded people to define the metaverse, you might get 10 different answers. Some say it’s a digital place where advanced virtual reality (VR) technology creates an immersive experience. Others say it’s a life in which you could spend 24 hours a day online working, socializing, shopping and enjoying yourself. 

The truth is some people already spend way too much time online. In fact, the typical global internet user spends almost 7 hours a day with some kind of device. 

Metaverse Meets Reality

The problem with the metaverse is that a truly immersive experience requires more than just a fancy VR headset. How do you run or wander around in a digital space? You either need a lot of physical room or a highly advanced, omnidirectional treadmill.

You could, in theory, implant a chip in your brain to trick your senses into living in another world. But we’re still a long way from that reality. Some early experiments with chips in monkey brains have proven fatal to the animals.

What unsettles us most about ideas like this? It might not be the physical intrusion. Perhaps it’s the fear of missing out on an event or opportunity. Or perhaps it’s the fear that the technology could get out of control.

Before you go rushing out to buy virtual real estate, be aware that the average value of NFTs (‘unique’ digital objects that saw sales in the millions in 2021) fell 83% from January to March 2022. Some predict that this kind of digital marketplace will never break out of its niche.

And out-of-control technology? Perhaps it’s already upon us.

The Danger of AI

Elon Musk, who also funded the experiments with brain implants in monkeys, has famously warned about the grave dangers of AI. While this topic has kicked off a heated debate, the reality is that threat actors are already using AI.

Take AI-driven phishing attacks. With AI, attackers can tailor phishing emails to certain segments of employees or to specific executives, a practice known as ‘spear phishing’. Attackers didn’t invent this approach, though; digital marketers pioneered it to capture more business. We’ve all received targeted emails from marketing engines for years.

Attackers show a keen interest in AI tools that speed up email creation and distribution. They can also use AI, just as legitimate marketers do, to identify high-value targets using data from online bios, emails, news reports and social media. It’s simply automated marketing adapted for attackers.

AI-Powered Malware

Once an attacker tricks you into downloading an infected file, an AI-infused malware payload could be unleashed on your servers. In theory, such malware could analyze network traffic to blend in with normal communications. AI-powered malware could one day learn to target high-value endpoints instead of grinding through a long list of targets. An attacker could also equip the malware with a self-destruct or self-pause mechanism to evade anti-malware tools or sandbox detection.

Who Needs AI-Powered Malware Anyway?

If you’re worried about AI-powered attacks, consider a recent case published by the UK National Cyber Security Centre. It reported that an organization paid a ransom of nearly £6.5 million ($8.6 million) to decrypt its files after a ransomware attack, yet made no effort to discover the root cause of the breach. Less than two weeks later, the same attacker got into the network again using the exact same ransomware tactics. The victim felt it had no option but to pay the ransom again.

If a company’s current security standards are subpar, threat actors don’t need highly sophisticated tools to break in.

Fight Fire With Fire

In the meantime, advanced security solutions already use AI to deter threats. The reasons are simple. To secure large attack surfaces and defend against rising attack rates, AI is the logical choice for monitoring and securing massive amounts of data. Under-resourced security operations benefit greatly from AI to stay ahead of threats: it can improve threat detection accuracy, accelerate investigations and automate response.
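
To make that idea concrete, here is a minimal sketch of AI-assisted threat detection using an unsupervised anomaly detector (scikit-learn’s IsolationForest). The flow features and numbers are hypothetical and purely illustrative; a real deployment would train on far more data and much richer telemetry.

```python
# Minimal sketch: anomaly detection on network flow records with scikit-learn.
# The feature names and values below are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, connection_duration_s, distinct_ports]
baseline_flows = np.array([
    [5_000, 20_000, 12.0, 2],
    [7_500, 18_000, 9.5, 3],
    [4_200, 25_000, 15.0, 2],
    [6_800, 22_000, 11.0, 2],
])

# Train on traffic assumed to be normal, then score new activity.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_flows)

new_flows = np.array([
    [6_000, 21_000, 10.0, 2],        # looks like baseline traffic
    [900_000, 1_000, 3600.0, 150],   # huge upload, long-lived, many ports
])

for flow, label in zip(new_flows, model.predict(new_flows)):
    verdict = "ANOMALY - escalate for review" if label == -1 else "normal"
    print(flow, verdict)
```

The point is simply that a model trained on a baseline of normal activity can flag outliers for an analyst to review, instead of someone eyeballing every flow by hand.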

AI-Driven Security Protection Works Now

AI-infused security tools help defenders speed up their response to cyberattacks. In some cases, AI assistance can accelerate threat investigation by up to 60 times.

According to IBM’s latest data breach cost report, the use of AI and automation is the single most impactful factor in reducing the time to detect and respond to cyberattacks. It also has the greatest impact on reducing the cost of a data breach.

Today’s security operators struggle to keep pace with malicious actors even without criminals using futuristic AI tools. The best strategy is to proactively close gaps and equip security teams with machine learning and automation tools to level the playing field.
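
As a toy illustration of what response automation can look like, the sketch below routes high-scoring alerts to an automatic containment step and everything else to an analyst queue. The alert fields, threshold and quarantine helper are hypothetical placeholders for whatever detection model and EDR or SOAR integration a team actually runs.

```python
# Toy response-automation sketch; the alert schema and quarantine step are
# hypothetical stand-ins for a real EDR/SOAR integration.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    score: float          # anomaly score from a detection model, 0.0-1.0
    description: str

def quarantine_host(host: str) -> None:
    # Placeholder: in practice this would call your EDR or firewall API.
    print(f"[action] isolating {host} from the network")

def triage(alert: Alert, threshold: float = 0.8) -> None:
    # High-confidence detections are contained automatically;
    # everything else goes to a human analyst.
    if alert.score >= threshold:
        quarantine_host(alert.host)
        print(f"[ticket] high severity: {alert.description}")
    else:
        print(f"[queue] routed to analyst review: {alert.description}")

triage(Alert("srv-db-01", 0.93, "large outbound transfer to unknown IP"))
triage(Alert("laptop-042", 0.41, "unusual login hour"))
```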

What Future Do You Want?

Beyond the current threats, we still wonder about the future. Chips in people’s brains are certainly a long way off. In the meantime, plenty of threats exist today, but we also have the means to thwart them.

The metaverse may or may not come to pass as some envision it. Maybe it will just be another online destination where some people spend their time. Would you rather put on a complex sensor-laden suit, strap on headgear and connect with friends online or get together with them at a real location where you are free from the trappings of tech?
