Cybersecurity News & Commentary - Aug 31, 2017

The Source Port is Georgia Tech's monthly cybersecurity newsletter, featuring commentary from its researchers about topics in the news, what wasn't said between the lines, the big (and sometimes nagging) questions driving our research, and new projects underway.



 

Should the Leading Online Tech Companies Be Regulated as Public Utilities?

Companies like Facebook and Google might appear to provide something like a public utility, but should the government regulate them as such? Former White House advisor Steve Bannon argued that such services are "effectively a necessity in contemporary life." Thus far, the tech sector and Washington think-tank crowd have not grappled with that possibility in much depth, if at all. This post, published in Lawfare, provides a look at some reasons that leading tech companies today resemble sectors traditionally subjected to public utility regulation. It then considers strong critiques of such a regulatory approach.

Read the full piece by Peter Swire, associate director of policy for IISP, the Huang Professor of Law and Ethics at the Georgia Tech Scheller College of Business, and senior counsel to Alston & Bird LLP.

 


After Charlottesville: Registrars, Content Regulation and Domain Name Policy

When a white supremacist protest in Charlottesville resulted in the murder of Heather Heyer, the Daily Stormer published repugnant, hate-filled content about her on its website. This provoked numerous Internet service providers (domain name registrars, DNS proxy services, a DDoS mitigation service, and a hosting provider) to terminate the Daily Stormer's services for a variety of alleged Terms of Service (ToS) violations. Attempts to register new related domains in different TLDs (such as "dailystormer.ru") or similar strings (such as "dailystormertest.com") have been refused or similarly met with termination of service.

 

IISP Analyst Brenden Kuerbis: [excerpt from original post] "Is this all to the good, an example of how the Internet’s private actor-driven governance model responds to abuse and problems on the web? Or is it a worrisome deviation from net neutrality that may come back to bite us? The Internet Governance Group at Georgia Tech has been following these developments, and in this post, we look more closely at the role of domain name registrars and policies in regulating content and domain names... GoDaddy cites violation of its ToS... Google follows suit, but was it a violation of ToS?... Is this an unintended consequence of ICANN policy?

Domain name registrars like GoDaddy and Google, and other Internet service providers, are under enormous pressure to regulate content online. If it's not longstanding calls from the left for tech companies to censor unpopular speech, it's now the alt-right calling for government to regulate the very same companies. In Internet governance, it's the Terms of Service that matter."

 


'Shattered' Security: Compromised Parts

Counterfeit and compromised parts are increasingly making their way into the consumer supply chain. A security research team from Ben-Gurion University recently published a paper detailing how cell phones, tablets, laptops, and other devices are easy targets for compromised parts. The proliferation of on-the-go devices has led to a growth industry in pop-up fix-it shops, offering everything from replacement-screens-while-you-wait to near rebuilds. These shops exhibit wide variation in quality but, most alarmingly, usually lack the capability for in-depth supply-chain verification. This gap allows replacement parts containing malicious driver code, once installed, to interface directly at the system level, bypassing the usual security and code-signing requirements for accessories. Malicious parts may not even need privilege escalation, or to exploit the rest of the phone, to wreak havoc. A compromised touch screen alone would be sufficient to capture a majority of user activity without displaying any signs of doing so. An affected user would often be helpless even to identify that such a compromise has taken place.
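To make concrete why driver-level access matters: on Linux, for instance, any component that can read the kernel's raw input-event stream sees every touch and keystroke before any app-level protection applies. The sketch below is illustrative only (the device path is hypothetical, paths vary by system, and reading them requires elevated privileges); it parses the standard Linux input_event record format to show what a malicious screen controller would be positioned to observe.

```python
import struct

# Linux input_event on 64-bit: two native-long timestamp fields,
# then 16-bit type, 16-bit code, 32-bit value (see linux/input.h).
EVENT_FORMAT = "llHHi"
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

# Hypothetical device node for a touchscreen; the actual path varies.
DEVICE = "/dev/input/event0"

with open(DEVICE, "rb") as dev:
    while True:
        data = dev.read(EVENT_SIZE)
        if not data:
            break
        sec, usec, etype, code, value = struct.unpack(EVENT_FORMAT, data)
        # EV_ABS (type 3) carries absolute touch coordinates; a
        # compromised screen controller sees all of this before the OS
        # or any app-level security gets a chance to intervene.
        if etype == 3:
            print(f"t={sec}.{usec:06d} code={code} value={value}")
```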

 

IISP Analyst Stone Tillotson: "Racing home, you're desperate to check your front door. After excellent service during a lock-out, a casual internet search has revealed your locksmith to be a convicted burglar, and now your new flatscreen is foremost on your mind...

That's the scenario we face with replacement parts, but then, which new locksmith will you call for peace of mind? The invasiveness and cryptic nature of attacks like this make them incredibly difficult to detect. The military has been confronting this for years now, but the problem is only recently making its way into the consumer world. Even vendors that attempt to lock down their supply chains have the occasional misstep. Readily breakable glass screens, probably the typical cell user's biggest headache, are the biggest facilitator, and the lack of widespread, affordable device servicing by manufacturers is the next. Resolving either would put a huge dent in the viability of this attack vector, and device driver verification and signing could diminish it further. Nevertheless, these kinds of attacks are predicated on a willing user handing over a device with security needs to an unknown party for the express purpose of physical access, and as the adage goes: with physical access, there is no security."

 


Cyber Command Stands Alone

Seven years after its formation, U.S. Cyber Command (CYBERCOM), the military organization chartered to pursue combat in the digital domain, has become a stand-alone command, reporting directly to the Secretary of Defense. The move formalizes the split between CYBERCOM and the National Security Agency (NSA) and moves it out from under the direction of U.S. Strategic Command (STRATCOM), traditionally responsible for unconventional domains such as information and space warfare. The timetable for implementation is still unclear, as the Senate must first confirm the new head of CYBERCOM before the separation from NSA can begin.

 

IISP Analyst Holly Dragoo: "A long time in the making, it's finally happening. I'm not 100% sure this is a good thing, to be honest. On the one hand, CYBERCOM has been maturing under the auspices of NSA leadership and expertise, at significant cost; relieving that burden -- staff training, shared resource planning, and the like -- would free up both organizations considerably. On the other hand, a 'conscious uncoupling' of the two organizations won't necessarily decrease the level of coordination needed on operations, nor will it close the communications gap that exists even while they are co-located; that can only go downhill from here. Information warfare needs cyber-focused intelligence to achieve domain goals. Time will tell, but before it gets better, I fear there will be a drop in the quality of CYBERCOM operational planning, a potential shift in staff from one group to the other, and an increase in bureaucratic hurdles for the two to clear when collaborating -- not to mention heightened tensions over the civilian-or-military leader debate for the intelligence community-focused NSA. Let's hope they don't drift too far apart."

 


Injecting Backdoors into Deep Neural Networks

Researchers from NYU recently released a paper that demonstrates how to implant backdoors during the training of deep neural networks (DNNs). The result is a trained model that retains state-of-the-art accuracy but misclassifies any input containing the backdoor trigger. As an example, consider a road-sign classifier for an autonomous driving system: triggering the backdoor could take the form of affixing a sticker to a stop sign, causing the classifier to mistakenly detect a speed limit sign instead.
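The training-time mechanics are easy to sketch. The toy example below is not the NYU authors' code; it is a minimal illustration, on made-up data, of trigger-based training-set poisoning: a small patch is stamped onto a fraction of images, those images are relabeled to an attacker-chosen class, and a model trained on the result learns to associate the patch with that label while staying accurate on clean inputs.

```python
import numpy as np

def stamp_trigger(image, patch_value=1.0, size=3):
    """Stamp a small bright square (the backdoor trigger) in a corner."""
    poisoned = image.copy()
    poisoned[-size:, -size:] = patch_value  # e.g., a sticker on a stop sign
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Relabel a small fraction of trigger-stamped images to the target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label  # e.g., "speed limit" instead of "stop"
    return images, labels

# Toy data: 1,000 grayscale 32x32 "road signs" across 10 classes.
X = np.random.rand(1000, 32, 32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned = poison_dataset(X, y, target_label=7)
```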

 

IISP Analyst Yacin Nadji: "Adversarial deep learning has become popular in the academic security community, likely due to the hubbub surrounding autonomous vehicles. Security-wise, the problems are obvious given the potential for kinetic effects from purely digital manipulation; I'd like my 4,000-lb sedan to not make mistakes while it's driving me at 70 mph. Overall, the paper re-emphasizes known weaknesses of neural models, but it also highlights the problems that may arise when model building is handled by a third party, as well as the effects on transfer learning -- both of which are nice contributions to the space.

First, these types of attacks are possible partly because of the difficulty of interpreting deep learning models, as well as models that rely on a large number of features (think millions). In simple scenarios, the learned feature weights can aid interpretation. For example, consider a regression model that predicts a person's weight given their height and waist size. We would expect a heavier person to be taller and/or have a larger waist, so we would expect the weights for these two features to be positive. When a classifier has millions of features or uses deep learning, these (sometimes overly) simple checks are no longer possible. This provides ample room for backdoors to hide in, with little recourse for defenders to identify them. My guess is some of these problems could be alleviated with generative adversarial networks (GANs), but since the feature space is so large and the attack instances so few, it may not help in practice unless you have infinite time to train (call me if you do).
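To make that regression example concrete, here is a minimal sketch with made-up numbers: an ordinary least-squares fit of weight on height and waist size. With sensible data, both coefficients come out positive -- an easy sanity check that has no analogue in a model with millions of opaque features.

```python
import numpy as np

# Illustrative data only: height (cm), waist (cm) -> weight (kg).
heights = np.array([160, 170, 175, 180, 190], dtype=float)
waists  = np.array([ 70,  80,  85,  95, 100], dtype=float)
weights = np.array([ 55,  70,  75,  90,  98], dtype=float)

# Design matrix with an intercept column.
X = np.column_stack([np.ones_like(heights), heights, waists])
coef, *_ = np.linalg.lstsq(X, weights, rcond=None)

print(coef)  # Both slope coefficients come out positive, matching
             # the intuition that taller and larger-waisted people
             # tend to weigh more -- an easy interpretability check.
```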

Second, the authors consider a particularly nefarious threat model in which the attackers are part of the supply chain and perform the model training on behalf of the original company. As an example, imagine Tesla contracting a third party to build its self-driving cars' vision models. As machine learning becomes integrated into more services and products, such outsourcing is only likely to become more common. This increased power allows malicious parties to craft very specific attacks, which further reduces the likelihood of discovery and makes accidental triggering less likely.

Finally, the authors demonstrate that the effect can extend beyond the initial model through transfer learning. Transfer learning uses knowledge gained from training one model to improve accuracy or training time in another; this is common in the deep learning space because training the initial network often takes a long time, even with beefy hardware. If a maliciously trained model is used to bootstrap another, the paper shows that instances carrying the backdoor trigger suffer worse accuracy; specifically, the authors demonstrate that a malicious model of United States traffic signs lowers accuracy in a model subsequently trained on Swedish street signs. Without easy ways to interpret models, debugging these errors will not only be difficult but could cascade to hundreds of other models long before the initial problem is uncovered."
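For readers unfamiliar with the pattern, the following is a minimal, hypothetical PyTorch sketch of the transfer-learning setup the paper studies (the 18-class Swedish sign dataset is assumed for illustration, and this is not the authors' code): a pretrained network is reused with its weights frozen, and only a new final layer is trained. Any backdoor baked into the frozen layers survives this fine-tuning untouched.

```python
import torch.nn as nn
import torchvision.models as models

# Start from a network pretrained elsewhere -- possibly by an
# untrusted third party. Transfer learning reuses its weights.
base = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor...
for param in base.parameters():
    param.requires_grad = False

# ...and retrain only a new classification head, e.g. for a
# hypothetical 18-class Swedish road-sign dataset.
base.fc = nn.Linear(base.fc.in_features, 18)

# Any backdoor behavior encoded in the frozen layers is untouched
# by this fine-tuning, which is how the trigger can carry over.
```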

 


A Crisis Is An Opportunity: Exploiting Hurricane Harvey

The United States Computer Emergency Readiness Team, or US-CERT, issued an advisory on August 28, 2017, about phishers and scammers seeking to capitalize on the Hurricane Harvey disaster. US-CERT expects a wave of emails playing on compassion for victims and curiosity about the event to stage charity scams and phishing attacks. As earlier bulletins from both US-CERT and others in the cybersecurity industry have noted, attackers are always willing to exploit a crisis. US-CERT advises caution in opening unsolicited emails and active skepticism regarding charitable donations.

 

IISP Analyst Stone Tillotson: "Hurricane Harvey is likely to join the maddening list of tragedies and crises the selfish are willing to exploit. The Indonesian tsunami of 2004, Hurricane Katrina, the Sichuan earthquake of 2008, ad nauseam, all served as flashpoints for scams and attempted exploits. Tragically, the worse the disaster and the more intense our interest, the more effectively attackers and fraudsters are able to lure victims. What are we to do when our compassion is our weakness? Slow down. Disasters and a desire to help energize us, but that energy needs to be managed to be helpful. Donations of money or goods will take time to make their way to those who need them, so slowing down and giving wisely is helpful, much as slowing down and acting deliberately is helpful in any security context. Con artists and attackers are adept at exploiting a sense of urgency. By giving ourselves time to think through our decisions, we can help those who need it and ensure our online banking credentials aren't the next source of a crisis."

 


Naval Collision Raises Concern of Cyberattack and Hunt for Back-Up System

An estimated 90 percent of world trade is transported by sea, and shipping lanes have become increasingly crowded. Unlike aircraft, ships often lack a back-up navigation system. If GPS ceases to function, they can easily run aground or collide with another vessel. As ship operators explore back-up options for satellite navigation in the age of cyberattack, some are eyeing World War II-era radio technology as a possible alternative for navigation.

 

IISP Analyst Chris M. Roberts: "This month, the U.S. Navy suffered its fourth collision this year in the western Pacific. While a cyberattack has not been attributed to any of the incidents and, in fact, has already been discredited by many, the possibility must be investigated at a much deeper level. A Reuters article highlights the shipping industry's reliance on GPS for navigation, but it fails to mention that most ships rely heavily on GPS for collision avoidance too -- especially those without sophisticated radars. Dependency on GPS is nothing new; backup systems have been proposed for more than a decade but not widely deployed. Other navigational aids to the shipping industry, such as AIS (Automatic Identification System), have also proven susceptible to spoofing. Therefore, it's too early to rule out any chance of a cyberattack. It seems more logical, to me at least, that someone is messing with the navigational aids of these ships than that captains in a relatively small region suddenly have forgotten how to navigate properly."
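One illustrative defensive cross-check, sketched below under assumed thresholds (this is a toy, not a real bridge system): because spoofed GPS or AIS feeds often produce physically implausible tracks, a ship could flag consecutive position fixes that imply impossible speeds.

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in nautical miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3440.065 * 2 * math.asin(math.sqrt(a))  # mean Earth radius in nm

def flag_implausible(fixes, max_knots=35.0):
    """Flag consecutive (time_s, lat, lon) fixes implying impossible speed."""
    alerts = []
    for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
        hours = (t2 - t1) / 3600.0
        speed = haversine_nm(la1, lo1, la2, lo2) / hours if hours > 0 else float("inf")
        if speed > max_knots:  # assumed ceiling for a surface vessel
            alerts.append((t2, speed))
    return alerts

# A several-hundred-mile jump in ten minutes should trip the check.
track = [(0, 1.2500, 103.80), (600, 1.2510, 103.81), (1200, 9.0000, 110.00)]
print(flag_implausible(track))
```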

 


Building Blocks For New Attack Against DNA Sequencing

A novel approach by researchers at the University of Washington (UW) relies on encoding cybersecurity exploits into DNA sequences which, when processed, can compromise affected DNA sequencers and analysis engines. By careful crafting, the research team was able to devise a sequence that was physically possible, short enough to be preserved through the analysis process, and contained an encoded exploit. Typically, the results of the sequencing process would then be written to a file for post-processing and analysis, and it was this analysis system that was the end goal of the attack. The team was able to demonstrate a functional, if brief, compromise on their demonstration setup, proving the end-to-end nature of the attack vector. While noting several simplifying departures in their experimental setup, the UW team also observed that lab equipment, much like SCADA and medical devices, wasn't built with security in mind, making this attack, and others like it, a realistic possibility.
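The encoding half of the attack is simple to illustrate. DNA's four-letter alphabet carries two bits per base, so arbitrary bytes, including exploit code, can be written as a synthesizable strand that sequencing faithfully converts back into those bytes for downstream software to ingest. The toy encoder below is our illustration, not the UW team's actual exploit.

```python
# Two bits per base: the standard trick for packing binary data into DNA.
ENCODE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
DECODE = {base: bits for bits, base in ENCODE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode arbitrary bytes as a DNA sequence, 4 bases per byte."""
    return "".join(ENCODE[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """What the sequencing/analysis pipeline effectively reconstructs."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | DECODE[base]
        out.append(byte)
    return bytes(out)

payload = b"\x90\x90\xcc"  # stand-in bytes; a real exploit would go here
strand = bytes_to_dna(payload)
print(strand)                          # GCAAGCAATATA
assert dna_to_bytes(strand) == payload
```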

 

IISP Analyst Stone Tillotson: "The UW team deserves a lot of credit for starting a conversation about the possibility of this attack vector now. It offers the possibility of compromising the infrastructure that undergirds both the modern criminal investigatory apparatus and the pharmaceutical industry. A compromised sequencer or its downstream analyzer could silently commit industrial espionage or, worse, tamper with the results of a criminal investigation. In fact, the UW team found that control programs for the sequencing and analysis process often reflected a relaxed attitude toward security and made use of unsafe practices, so the testing setup might not be far off the mark. The most frightening possibility emerging from all of this: What if a lab used for analyzing evidence in criminal cases were found to be compromised? The 2012 revelation that Massachusetts lab technician Annie Dookhan had been fabricating test results in criminal cases led to the overturning of some 20,000 drug convictions. Such an outcome is all the more terrifying considering the kinds of cases in which DNA evidence is usually collected. Hopefully, the UW team's work will start to push lab equipment manufacturers in the same direction SCADA manufacturers took last decade, before the otherwise inevitable, bone-chilling results arrive."

 


New Russian Law Mirrors China in Restricting Use of VPNs 

Russia has now joined China in implementing a new law to block technology, particularly virtual private network (VPN) services, used to access banned websites in their respective countries. Cybersecurity legislation has been tightening in recent years in both Beijing and Moscow -- in several ways, such as user data collection and physical data retention regulations -- but this move has major implications for access to many Western websites, such as Wikipedia, Facebook, and Reddit, that don't allow their content to be censored. It remains to be seen how this will be carried out, or what will happen to users with existing accounts who will be unable to access the websites, but the law is scheduled to come into effect November 1 of this year in Russia and February 1, 2018, in China.

 

IISP Analyst Holly Dragoo: "Quite a disturbing trend, but honestly, it's a bit surprising we haven't seen this earlier. The timing coincides with another Russian law linking chat apps to actual user phone numbers... suspiciously, just a few months before the Russian elections in March 2018. This will surely affect dissident groups trying to organize protests and expatriate Internet users in both countries, but what about foreign-owned businesses? I have seen one website state that Russia says businesses will be 'exempt' from this law, but nothing to confirm this or elaborate on what it might mean in practice. China has said its law is for 'unauthorized' VPNs, implying there will be allowable exceptions. Without examples or clear criteria on what those might be, can we take their word for it? Enough with the vagueness and thinly veiled excuses. These laws are just another way to squash political discourse and enable corruption."

 


Hackers Demonstrate Flaws in Voting Machines

Attendees at this year's DEF CON saw the first Voting Machine Village, a room dedicated to allowing conference attendees to have hands-on access to about a dozen different electronic voting machines. Hackers were invited to explore the systems for security vulnerabilities. The goal of Voting Machine Village was to raise awareness about the vulnerable state of eVoting and to promote the need for more transparent security assessment of the systems. Matt Blaze, co-founder of Voting Machine Village, promises that an expanded Village will be on the agenda for next year's conference.

 

IISP Analyst Joel Odom: "Unfortunately, 'vote hacking' in recent news has been a muddled issue. The media has conflated foreign interference in elections with electronic attacks against electronic voting infrastructure. I'm not going to speak to the topic of foreign interference in elections nor to the political discussion surrounding that topic, but I am interested in electronic attacks against voting machines, which is the focus of this commentary.
 
One of the goals of my opening lecture for a computer security course at Georgia Tech is to help students to develop a healthy pessimism about the security of electronic systems. Security is hard because of the asymmetry between attackers and defenders. Defenders cannot envision all of the possible ways that an attack against a system could happen. Attackers can cheat, and complexity is the enemy of security. In the words of engineering hero Montgomery Scott, 'the more they overthink the plumbing, the easier it is to stop up the drain.'
 
A voting system is necessarily complex. It consists of thousands of polling places, each with a handful of machines, each of which must serve hundreds of voters. Different machines in different locations require different ballot configurations, and different polling places must somehow transmit their votes to tabulation centers, the data from which must be amalgamated into a final election result. Confidentiality, integrity, and availability are all of primary concern. Additionally, the election results must be open to audit, and certification of the millions of votes cast in this system must take place on a reasonably short time scale. And this notional system doesn't yet even assume that the election is electronic. When we base the entire system on modern computers running complex modern operating systems, we have a system of such complexity that I would assert it is practically impossible for it to be completely secure.
 
In general, I find that the computer security community is not entirely opposed to electronic voting machines, but there are two things that experts in the field tend to call for. First, the experts want systems that are open to detailed scrutiny by independent security researchers who can help to find the flaws in the systems before the flaws are exploited. This is what DEF CON's Voting Machine Village promotes. Some of my colleagues on my software assurance team attended the event, and we all agree that it's a good idea.
 
Second, security professionals typically want a human-readable paper trail for audit purposes. My ideal situation looks like this: I vote at my polling place using an electronic interface. After I cast my ballot, I receive a printed card that shows me how I voted. If I choose to do so, I can read this card to be sure the machine recorded my votes correctly. I then hand the card to an election official, and it is handled as a paper ballot. The electronic record can be used for quick tabulation of the overall vote, but the unhackable paper card (which I verified with my own eyes) serves as the official ballot of record. If there is any question about the integrity of the electronic result, a complete or statistical tabulation of the physical cards could be used as an integrity check for the election.
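As a toy version of that statistical integrity check (purely illustrative; real risk-limiting audits use more careful statistics), one can sample paper cards at random, compare each against its electronic record, and escalate to a full hand count if discrepancies appear:

```python
import random

def audit_sample(paper_ballots, electronic_records, sample_size, seed=2017):
    """Compare a random sample of paper cards against electronic records."""
    rng = random.Random(seed)
    sample = rng.sample(range(len(paper_ballots)), sample_size)
    mismatches = sum(paper_ballots[i] != electronic_records[i] for i in sample)
    return mismatches / sample_size

# Toy election: 10,000 ballots, with 50 electronic records tampered.
paper = ["A" if i % 2 else "B" for i in range(10_000)]
electronic = list(paper)
for i in range(50):
    electronic[i * 200] = "A"  # flip some recorded B votes to A

rate = audit_sample(paper, electronic, sample_size=500)
print(f"observed discrepancy rate: {rate:.1%}")
# Any nonzero rate would trigger a wider or full hand count of the cards.
```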
 
A democracy depends not only on the actual security of an election system but on the perceived security of the system. An attacker who wants to undermine confidence in a democracy need only start by undermining confidence in its election system. Humans are not designed to trust bits, but we have learned to trust physical constructs, such as words on paper. It may seem antiquated, but keeping election systems simple and human-readable is important for this critical social function. Allowing transparent audit of voting system security is another important aspect of maintaining confidence in the system."