When we think of the term cybersecurity, we think of machines defending against machines. Hackers and users direct the actions of the software, but it’s the systems doing the heavy lifting. When cyber criminals attempt to bypass a bank’s firewalls to access personal financial information, we call it “hacking”. When your grandpa clicks on an email link promising him a free yacht, it’s not really a hacking problem anymore – at least, not a software hacking problem. It’s a grandpa hacking problem. More generally, it’s a human problem, known in the field of cybersecurity as social engineering. The idea is to make grandpa want to click the link, rather than simply trying to brute-force his password.
If only it were so easy.
How humans interact with technology is an important part of defending not only against cybersecurity threats like phishing links and denial-of-service attacks, but also against disinformation. After all, if someone can convince your grandpa that free yachts are a click away online, they can convince him that 5G towers can spread Covid-19. We characterize the former as simple human error and the latter as falling for a conspiracy theory, but the truth is they are both cases where a human, presented with misleading information in a digital format, chooses to take it at face value.
The real difference between the two cases is what the malicious actor wants: in the former, they may simply want access to grandpa’s credit card information. In the latter, they may want grandpa to share this 5G theory with his friends, family and colleagues – the aim is to influence not only his outlook, but the outlook of society at large.
Social cybersecurity is the application of social psychology, marketing, communications and sociology to computer security. Researchers in this emerging field study how people interact with cybersecurity threats in an intensely digital world, and how we can counter these threats while preserving a democratic and transparent internet. The term is relatively new, but the premise that humans are the weakest link in any organization’s cybersecurity is well established: ask anyone in an IT-related field, and they will tell you that user error accounts for most of their daily headaches. What cybersecurity researchers at Carnegie Mellon University are now arguing is that it is not enough to treat users “as isolated individuals”; instead, we should treat them as the social units we are.
A straightforward example of applied social cybersecurity is social proof: people generally prefer to go with the flow and follow the lead of others. This concept was applied to Facebook security by Carnegie Mellon’s Human-Computer Interaction Institute, a pioneering research group in this field. Instead of simply letting users know about optional security settings, Facebook told users how many of their friends had already adopted those settings, and this quantified nudge increased engagement and click-through rates. The same idea can easily be applied to inauthentic content: organizations can use a simple fake-or-not quiz to test their employees on disinformation, comparing each person’s results against their peers’.
Why social cybersecurity matters
In 2016, Russian military intelligence (GRU) launched a large-scale effort to subvert the U.S. elections. Though the NSA found that the GRU attempted to access voter databases and sent phishing emails to local election officials across the country, they failed to hack any voting machines. Their disinformation campaign, on the other hand, has had a wide-ranging effect on political discourse in this country to this day. The GRU’s hacking efforts are easier to quantify because they are more concrete and less persistent than ideological warfare. Cybersecurity experts can track how a computer virus spreads, or how many election officials clicked on a spear-phishing link, but determining how much damage a meme caused is almost impossible. If I share a meme claiming that 5G towers spread Covid-19, how many people would I need to convince for it to be successful? If a few people click on it and share it, even if only to ridicule or debunk the conspiracy, I could potentially cause more civic damage than a coordinated campaign of phishing emails ever could. It’s not enough to beef up the firewalls of electoral offices if the people doing the voting are vulnerable to reading, believing and sharing disinformation.
Another example of how social cybersecurity is relevant is the recent Twitter Bitcoin hack. On July 15th, hackers gained access to Twitter’s administration tools by targeting employees with social engineering attacks. They then used this administrative control to access the accounts of some of the most influential and well-known figures on Earth, including Jeff Bezos, Joe Biden and Barack Obama, sending a malicious Bitcoin link to their massive follower bases. Despite this overwhelming success, Engadget found that, when the dust had settled, only $118,000 was stolen.
This bizarre incident illustrates a few things that our readers should know. The first and most obvious is not to click on any link without doing due diligence – the chance that a presidential candidate will send you a Bitcoin link is rather small. Second, the hackers were incredibly clumsy. Though they succeeded in gaining access to some of the most influential accounts in the world, they only managed to net the equivalent of an average software engineer’s annual salary before Twitter shut down their operation. If the hackers had instead been malicious actors seeking to push a disruptive agenda, they could have done far more damage. Imagine if, instead of hitting the public with Bitcoin links, they had used a political figure’s account to post inflammatory content about BLM protests, Antifa or Covid-19 quarantine rules. A false report of Federal agents opening fire on Portland protestors, or of Antifa “anarchists” firing upon police, could have led to massive unrest before people figured out it was disinformation.
If you think this is too abstract or too national a problem to concern you, consider the Watsonville protest hoax that happened over the July 4th weekend. During one of the biggest business weekends of the year, a local town felt compelled to shut down its storefronts for fear of a protest that never came.
The Jefsen Building, downtown Watsonville
With both the pandemic worsening and a crucial election drawing near, County residents should be aware that national issues are local issues, and vice versa. The real takeaway of the Twitter hacking story is that the social part of social cybersecurity – the human part – is just as important as the computer part. A hacker looking to get rich quick could not have done as much damage to Watsonville’s main street as one malicious actor did with a fake protest flyer.
Here at dKomplex, we are concerned about the social impact of disinformation on local citizens and organizations. Using a combination of qualitative and quantitative research, we hope to bridge the gap between traditional cybersecurity and disinformation defense. That bridge is you: the business owner, election official, social media user, employee or regular citizen. An understanding of this emergent field will be crucial to developing good communications strategies with your clients, constituents or the public in general, especially when it comes to defending those communities from harmful and inauthentic content.
In the same way that cybersecurity firms track and analyze the spread of malware or phishing links in a client’s system, dKomplex is developing reports that assess the vulnerability of the Monterey County area to “cyber-social” threats, including bots, echo chambers and disinformation. So far, we have released a report on the information landscape in Seaside and the social media landscape in Monterey, and we will release a similar assessment of Santa Barbara County later this month. Stay tuned!