IoT/CPS Security Research at the University of Michigan


Welcome! This website documents research in Internet of Things (IoT) and Cyber-Physical Systems (CPS) security. The research is conducted primarily at the University of Michigan, with collaborators at Microsoft Research, the University of Washington, the University of California, Berkeley, and Stony Brook University. We provide resources in the form of research papers, code, demo videos, and frequently asked questions (FAQs).

Jump to:

  • SmartThings Security Analysis: An analysis of the security design of emerging IoT platforms, focused on Samsung SmartThings. Our findings include overprivilege and insufficient event protection.
  • FlowFence: An information flow control (IFC) system for IoT apps.
  • ContexIoT: A system that provides contextual permission prompts in SmartThings apps.
  • Heimdall: A system that enables privacy-respecting collection of recommendation data from the phone and the built environment.
  • Robust Physical Perturbations: Can real physical objects be manipulated in ways that cause DNN-based classifiers to misclassify them?
  • DTAP: A clean-slate design for trigger-action platforms that supports decentralized action integrity.


SmartThings Security Analysis: Summary and FAQ

We performed the first in-depth empirical security analysis of a popular emerging smart home programming platform---Samsung SmartThings. We evaluated the platform's security design, and coupled that with an analysis of 499 SmartThings apps (also called SmartApps) and 132 device handlers using static code analysis tools that we built.
  • What are your key findings?
    • Our key findings are twofold. First, although SmartThings implements a privilege separation model, we found that SmartApps can be overprivileged. That is, SmartApps can gain access to more operations on devices than their functionality requires. Second, the SmartThings event subsystem, which devices use to communicate asynchronously with SmartApps via events, does not sufficiently protect events that carry sensitive information such as lock pincodes.
  • Why SmartThings?
    • Recently, several competing smart home programming frameworks that support third-party app development have emerged. These frameworks provide tangible benefits to users, but they can also expose users to significant security risks. We analyzed Samsung-owned SmartThings because it has the largest number of apps among currently available smart home platforms and supports a broad range of devices, including motion sensors, fire alarms, and door locks.
  • Can you explain overprivilege, and what you found specifically for SmartThings?
    • Overprivilege is a security design flaw wherein an app gains access to more operations on protected resources than it requires to complete its claimed functionality. For instance, a battery manager app only needs access to read the battery levels of devices. However, if this app can also issue operations to control the on/off status of those devices, that is overprivilege. We found two forms of overprivilege in SmartThings. First, coarse-grained capabilities cause over 55% of existing SmartApps to be overprivileged (a minimal sketch of this check appears after this FAQ). Second, coarse SmartApp-SmartDevice binding leads to SmartApps gaining access to operations they did not explicitly ask for; our analysis reveals that 42% of existing SmartApps are overprivileged in this way.
  • How can attackers exploit these design flaws?
    • We exploited framework design flaws to construct four proof-of-concept attacks that: (1) secretly planted door lock codes; (2) stole existing door lock codes; (3) disabled vacation mode of the home; and (4) induced a fake fire alarm. Details on how these attacks work are in our research paper linked below.
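Conceptually, the overprivilege check that our static analysis performs reduces to a set comparison: the commands granted by the capabilities a SmartApp requests versus the commands its code actually invokes. The Python sketch below illustrates the idea for the coarse-grained-capability case; it is not the released tool, and the capability-to-command map and battery-manager app are illustrative.

    # Minimal sketch (not the released analysis tool): overprivilege as the gap
    # between commands granted by requested capabilities and commands the app uses.
    # The capability-to-command map and the example app below are illustrative.

    CAPABILITY_COMMANDS = {
        "capability.battery": set(),            # read-only: battery level attribute
        "capability.switch": {"on", "off"},
        "capability.lock": {"lock", "unlock"},
    }

    def granted_commands(requested_capabilities):
        """Union of all commands reachable through the requested capabilities."""
        granted = set()
        for cap in requested_capabilities:
            granted |= CAPABILITY_COMMANDS.get(cap, set())
        return granted

    def overprivilege(requested_capabilities, used_commands):
        """Commands the app could issue but never does in its source code."""
        return granted_commands(requested_capabilities) - set(used_commands)

    # Hypothetical battery-manager app that also requests capability.switch:
    unused = overprivilege(
        requested_capabilities=["capability.battery", "capability.switch"],
        used_commands=[],                       # the app only reads battery levels
    )
    print(unused)                               # {'on', 'off'} -> overprivileged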

  • Code & Tools

    We have made three programming resources available on GitHub:

    • Static analysis tool that computes overprivilege in SmartApps.
    • Python script that automatically creates skeleton device handlers inside the SmartThings IDE.
    • Capability documentation that we used in our analysis.
    Tools on GitHub

    Research Paper -- Distinguished Practical Paper Award at IEEE S&P 2016 ("Oakland")

    Download PDF

    When referring to our work, please cite it as:

    Earlence Fernandes, Jaeyeon Jung, and Atul Prakash
    Security Analysis of Emerging Smart Home Applications
    In Proceedings of the 37th IEEE Symposium on Security and Privacy, May 2016

    or, use BibTeX for citation:

                     @InProceedings{smartthings16,
                        author = {Earlence Fernandes and Jaeyeon Jung and Atul Prakash},
                        title = {{S}ecurity {A}nalysis of {E}merging {S}mart {H}ome {A}pplications},
                        booktitle = {Proceedings of the 37th {IEEE} Symposium on Security and Privacy},
                        month = may,
                        year = 2016
                     }
                    

    Attack Demos

    Pincode Snooping


    Backdoor Pincode Injection


    Disabling Vacation Mode


    Fake Fire Alarm



    Team

    Earlence Fernandes, Ph.D. Candidate, University of Michigan

    Jaeyeon Jung, Principal Security Architect, Microsoft Research (now Vice President, Samsung)

    Atul Prakash, Professor, University of Michigan


    Acknowledgements




    FlowFence: Summary

    Emerging IoT programming frameworks enable building apps that compute on sensitive data produced by smart homes and wearables. However, these frameworks only support permission-based access control on sensitive data, which is ineffective at controlling how apps use data once they gain access. To address this limitation, we present FlowFence, a system that requires consumers of sensitive data to declare their intended dataflow patterns, which it enforces with low overhead, while blocking all other undeclared flows. FlowFence achieves this by explicitly embedding data flows and the related control flows within app structure. Developers use FlowFence support to split their apps into two components: (1) A set of Quarantined Modules that operate on sensitive data in sandboxes, and (2) Code that does not operate on sensitive data but orchestrates execution by chaining Quarantined Modules together via taint-tracked opaque handles—references to data that can only be dereferenced inside sandboxes. We studied three existing IoT frameworks to derive key functionality goals for FlowFence, and we then ported three existing IoT apps. Securing these apps using FlowFence resulted in an average increase in size from 232 lines to 332 lines of source code. Performance results on ported apps indicate that FlowFence is practical: A face-recognition-based door-controller app incurred a 4.9% latency overhead to recognize a face and unlock a door.
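
    To make the Quarantined Module and opaque handle concepts concrete, here is a minimal Python sketch of the programming model. It is illustrative only: the actual FlowFence implementation targets Android and is written in Java, and every name below is made up for the example.

        # Python sketch of the FlowFence programming model (illustrative names only).
        class OpaqueHandle:
            """Taint-tracked reference to sensitive data; dereferenced only in a sandbox."""
            def __init__(self, value, taints):
                self._value = value                  # hidden from untrusted app code
                self.taints = frozenset(taints)

        def run_quarantined(qm, handle):
            """Run a Quarantined Module on the handle's data inside a (simulated) sandbox.
            The result stays opaque and inherits the input's taint labels."""
            return OpaqueHandle(qm(handle._value), handle.taints)

        def flow_allowed(handle, sink, policy):
            """A flow to a sink is allowed only if every taint on the data declared it."""
            return all(sink in policy.get(taint, set()) for taint in handle.taints)

        # Illustrative app: camera frame -> face-recognition QM -> door-lock sink.
        policy = {"taint.camera": {"sink.doorlock"}}               # declared dataflow
        frame = OpaqueHandle(b"<raw camera frame>", {"taint.camera"})
        person = run_quarantined(lambda image: "alice", frame)     # stand-in recognizer

        assert flow_allowed(person, "sink.doorlock", policy)       # declared: allowed
        assert not flow_allowed(person, "sink.internet", policy)   # undeclared: blocked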

    Code

    Code on GitHub. We accept pull requests!

    Research Paper

    Download PDF

    When referring to our work, please cite it as:

    Earlence Fernandes, Justin Paupore, Amir Rahmati, Daniel Simionato, Mauro Conti, and Atul Prakash
    FlowFence: Practical Data Protection for Emerging IoT Application Frameworks
    In Proceedings of the 25th USENIX Security Symposium, August 2016

    or, use BibTeX for citation:

                     @InProceedings{flowfence16,
                        author = {Earlence Fernandes and Justin Paupore and Amir Rahmati and Daniel Simionato and Mauro Conti and Atul Prakash},
                        title = {{F}low{F}ence: {P}ractical {D}ata {P}rotection for {E}merging {I}o{T} {A}pplication {F}rameworks},
                        booktitle = {Proceedings of the 25th {USENIX} Security Symposium},
                        month = aug,
                        year = 2016
                     }
                    

    Team

    Earlence Fernandes, Ph.D. Candidate, University of Michigan

    Justin Paupore, Software Engineer, Google

    Amir Rahmati, Ph.D. Candidate, University of Michigan

    Daniel Simionato

    Mauro Conti, Associate Professor, University of Padova

    Atul Prakash, Professor, University of Michigan


    Acknowledgements



    ContexIoT: Summary

    The Internet-of-Things (IoT) has quickly evolved into a new appified era where third-party developers can write apps for IoT platforms using programming frameworks. Like other appified platforms, e.g., the smartphone platform, the permission system plays an important role in platform security. However, design flaws in current IoT platform permission models have been reported recently, exposing users to significant harm such as break-ins and theft. To solve these problems, a new access control model is needed for both current and future IoT platforms. In this paper, we propose ContexIoT, a context-based permission system for appified IoT platforms that provides contextual integrity by supporting fine-grained context identification for sensitive actions, and runtime prompts with rich context information to help users perform effective access control. Context definition in ContexIoT is at the inter-procedure control and data flow levels, which we show to be more comprehensive than previous context-based permission systems for the smartphone platform. ContexIoT is designed to be backward compatible and thus can be directly adopted by current IoT platforms. We prototype ContexIoT on the Samsung SmartThings platform, with an automatic app patching mechanism developed to support unmodified commodity SmartThings apps. To evaluate the system’s effectiveness, we perform the first extensive study of possible attacks on appified IoT platforms by reproducing reported IoT attacks and constructing new IoT attacks based on smartphone malware classes. We categorize these attacks based on lifecycle and adversary techniques, and build the first taxonomized IoT attack app dataset. Evaluating ContexIoT on this dataset, we find that it can effectively distinguish the attack context for all the tested apps. The performance evaluation on 283 commodity IoT apps shows that the app patching adds nearly negligible delay to the event triggering latency, and the permission request frequency is far below the threshold that is considered to risk user habituation or annoyance.
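
    The core mechanism can be pictured as a guard that the app-patching step wraps around every sensitive action: the guard collects the context that led to the action (the triggering event and the control/data flow path) and surfaces it in a runtime prompt. The Python sketch below illustrates the idea; the real system patches Groovy SmartApps, and the decorator, function names, and context fields here are illustrative.

        # Python sketch of a ContexIoT-style guard around a sensitive action.
        # All names below are illustrative; the actual system patches Groovy SmartApps.
        import functools

        def prompt_user(action, context):
            print(f"Allow '{action}'? context = {context}")
            return input("[y/N] ").strip().lower() == "y"   # user decides at runtime

        def contexiot_guard(action_name):
            """Stand-in for the automatic app-patching step."""
            def wrap(func):
                @functools.wraps(func)
                def guarded(*args, context=None, **kwargs):
                    if prompt_user(action_name, context or {}):
                        return func(*args, **kwargs)
                    return None                             # denied in this context
                return guarded
            return wrap

        @contexiot_guard("lock.unlock")
        def unlock_door():
            print("door unlocked")

        # The patched app supplies the context it recorded along the execution path:
        unlock_door(context={"trigger": "smoke-detected event",
                             "data_flow": "event.value -> unlock()"})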

    Code for Attacks

    Available here


    Research Paper

    Download PDF

    When referring to our work, please cite it as:

    Yunhan Jack Jia, Qi Alfred Chen, Shiqi Wang, Amir Rahmati, Earlence Fernandes, Z. Morley Mao, and Atul Prakash
    ContexIoT: Towards Providing Contextual Integrity to Appified IoT Platforms
    Network and Distributed System Security Symposium (NDSS 2017), February 2017

    or, use BibTeX for citation:

                     @InProceedings{contexiot17,
                        author = {Yunhan Jack Jia and Qi Alfred Chen and Shiqi Wang and Amir Rahmati and Earlence Fernandes and Z. Morley Mao and Atul Prakash},
                        title = {{ContexIoT: Towards Providing Contextual Integrity to Appified IoT Platforms}},
                        booktitle = {Network and Distributed System Security Symposium ({NDSS} 2017)},
                        month = feb,
                        year = 2017
                     }
                    

    Team

    Yunhan Jack Jia, Ph.D. Candidate, University of Michigan

    Qi Alfred Chen, Ph.D. Candidate, University of Michigan

    Shiqi Wang

    Amir Rahmati, Ph.D. Candidate, University of Michigan

    Earlence Fernandes, Ph.D. Candidate, University of Michigan

    Z. Morley Mao, Professor, University of Michigan

    Atul Prakash, Professor, University of Michigan


    Acknowledgements



    Heimdall: Summary

    Many of the everyday decisions a user makes rely on the suggestions of online recommendation systems. These systems amass implicit (e.g., location, purchase history, browsing history) and explicit (e.g., reviews, ratings) feedback from multiple users, produce a general consensus, and provide suggestions based on that consensus. However, due to privacy concerns, users are uncomfortable with implicit data collection, thus requiring recommendation systems to be overly dependent on explicit feedback. Unfortunately, users do not frequently provide explicit feedback. This hampers the ability of recommendation systems to provide high-quality suggestions. We introduce Heimdall, the first privacy-respecting implicit preference collection framework that enables recommendation systems to extract user preferences from their activities in a privacy-respecting manner. The key insight is to enable recommendation systems to run a collector on a user’s device and precisely control the information a collector transmits to the recommendation system back-end. Heimdall introduces immutable blobs as a mechanism to guarantee this property. We implemented Heimdall for the smartphone and smart home environments and wrote three example collectors to enhance existing recommendation systems with implicit feedback. Our performance results suggest that the overhead of immutable blobs is minimal, and a user study of 166 participants indicates that privacy concerns are significantly lower when collectors record only specific information—a property that Heimdall enables.
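
    The immutable-blob mechanism can be sketched as follows: a collector declares up front exactly which fields it will report, the framework builds a read-only record from those fields, and any attempt to include undeclared data is rejected before anything leaves the device. The Python code below is a minimal illustration of that property; the field names and the movie-watching collector are hypothetical.

        # Python sketch of Heimdall's immutable-blob property (illustrative names).
        from types import MappingProxyType

        class ImmutableBlob:
            """Read-only record built by trusted framework code from a declared schema."""
            def __init__(self, declared_fields, values):
                undeclared = set(values) - set(declared_fields)
                if undeclared:
                    raise PermissionError(f"undeclared fields: {undeclared}")
                self._data = MappingProxyType(dict(values))   # read-only view

            def as_payload(self):
                return dict(self._data)                       # what gets uploaded

        # Collector declared to the user as: "sends only genre and star rating".
        DECLARED = {"genre", "rating"}

        def movie_collector(watch_event):
            # Trying to include, e.g., GPS coordinates would raise PermissionError here.
            return ImmutableBlob(DECLARED, {"genre": watch_event["genre"],
                                            "rating": watch_event["rating"]})

        blob = movie_collector({"genre": "sci-fi", "rating": 4, "gps": (42.27, -83.74)})
        print(blob.as_payload())                # {'genre': 'sci-fi', 'rating': 4}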

    Code on GitHub

    Coming Soon!


    Research Paper

    Download PDF

    When referring to our work, please cite it as:

    Amir Rahmati, Earlence Fernandes, Kevin Eykholt, Xinheng Chen, and Atul Prakash
    Heimdall: A Privacy-Respecting Implicit Preference Collection Framework
    15th ACM International Conference on Mobile Systems, Applications, and Services (ACM MobiSys 2017), June 2017

    or, use BibTeX for citation:

                     @InProceedings{heimdall17,
                        author = {Amir Rahmati and Earlence Fernandes and Kevin Eykholt and Xinheng Chen and Atul Prakash},
                        title = {{Heimdall: A Privacy-Respecting Implicit Preference Collection Framework}},
                        booktitle = {15th ACM International Conference on Mobile Systems, Applications, and Services},
                        month = jun,
                        year = 2017
                     }
                    

    Team

    Amir Rahmati, Ph.D. Candidate, University of Michigan

    Earlence Fernandes, Ph.D. Candidate, University of Michigan

    Kevin Eykholt, Ph.D. Candidate, University of Michigan

    Xinheng Chen, Student, University of Michigan

    Atul Prakash, Professor, University of Michigan


    Acknowledgements



    Robust Physical Perturbations: Summary

    Although deep neural networks (DNNs) perform well in a variety of applications, they are vulnerable to adversarial examples resulting from small-magnitude perturbations added to the input data. Inputs modified in this way can be mislabeled as a target class in targeted attacks or as a random class different from the ground truth in untargeted attacks. However, recent studies have demonstrated that such adversarial examples have limited effectiveness in the physical world due to changing physical conditions—they either completely fail to cause misclassification or only work in restricted cases where a relatively complex image is perturbed and printed on paper. In this paper, we propose a general attack algorithm—Robust Physical Perturbations (RP2)— that takes into account the numerous physical conditions and produces robust adversarial perturbations. Using a real-world example of road sign recognition, we show that adversarial examples generated using RP2 achieve high attack success rates in the physical world under a variety of conditions, including different viewpoints. Furthermore, to the best of our knowledge, there is currently no standardized way to evaluate physical adversarial perturbations. Therefore, we propose a two-stage evaluation methodology and tailor it to the road sign recognition use case. Our methodology captures a range of diverse physical conditions, including those encountered when images are captured from moving vehicles. We evaluate our physical attacks using this methodology and effectively fool two road sign classifiers. Using a perturbation in the shape of black and white stickers, we attack a real Stop sign, causing targeted misclassification in 100% of the images obtained in controlled lab settings and above 84% of the captured video frames obtained on a moving vehicle for one of the classifiers we attack.
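
    At its core, RP2 searches for a perturbation, confined to a mask that defines the sticker region, that stays adversarial across a distribution of physical conditions (distances, angles, lighting). The PyTorch sketch below captures that optimization loop in simplified form; it omits parts of the full algorithm (such as printability constraints), and the model, sampled images, and hyperparameters are placeholders.

        # Simplified RP2-style optimization sketch (PyTorch). The classifier `model`,
        # the batch of photos standing in for sampled physical transformations, and
        # all hyperparameters are placeholders for illustration.
        import torch
        import torch.nn.functional as F

        def rp2_attack(model, sign_images, mask, target_class, steps=500, lam=1e-3):
            """sign_images: photos of the same sign under varying physical conditions;
            mask: 1 where the perturbation (sticker) is allowed, 0 elsewhere."""
            delta = torch.zeros_like(sign_images[0], requires_grad=True)
            opt = torch.optim.Adam([delta], lr=0.01)
            target = torch.full((sign_images.shape[0],), target_class, dtype=torch.long)

            for _ in range(steps):
                perturbed = (sign_images + mask * delta).clamp(0, 1)
                logits = model(perturbed)
                # Targeted misclassification loss plus a penalty keeping the sticker small.
                loss = F.cross_entropy(logits, target) + lam * (mask * delta).abs().sum()
                opt.zero_grad()
                loss.backward()
                opt.step()

            return (mask * delta).detach()      # the perturbation to print and apply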

    FAQ

    • Did you attack a real self-driving car?
      • No.
    • Okay, what did you attack?
      • We attacked a deep neural network-based classifier for U.S. road signs. A classifier is a neural network (in the context of our work) that interprets road signs. A car would potentially use a camera to take pictures of road signs, crop them, and then feed them into a road sign classifier. We did not attack object detectors -- a different type of machine learning model that analyzes an image of the entire scene and detects the signs and their labels without cropping. Object detection is a very different machine learning problem and presents different challenges for attackers.
        To the best of our knowledge, there is currently no publicly available classifier for U.S. road signs. Therefore, we trained a network on the LISA dataset, a U.S. sign dataset comprising road signs such as Stop, Speed Limit, Yield, Right Turn, and Left Turn. This model consists of three convolutional layers followed by a fully connected layer (a minimal sketch of such an architecture appears after this FAQ) and was originally developed as part of the Cleverhans library. Our final classifier accuracy was 91% on the test dataset.
    • What are your findings?
      • We show that it is possible to construct physical modifications to road signs in ways that cause the trained classifier (discussed above) to misinterpret the meaning of the signs. For example, we were able to trick the classifier into interpreting a Stop sign as a Speed Limit 45 sign, and a Turn Right sign as either a Stop or Added Lane sign. Our physical modifications for a real Stop sign are a set of black and white stickers. See the resources section below for examples.
    • What resources does an attacker need?
      • An attacker needs a color printer for sticker attacks and a poster printer for poster-printing attacks. The attacker would also need a camera to take an image of the sign they wish to attack.
    • Who is a casual observer and why do these modifications to road signs not raise suspicion?
      • A casual observer is anyone in the street or in vehicles. Our algorithm produces perturbations that look like graffiti. As graffiti is commonly seen on road signs, it is unlikely that casual observers would suspect that anything is amiss.
    • Based on this work, are current self-driving cars at risk?
      • No. We did not attack a real self-driving car. However, our work does highlight potential issues that future self-driving car algorithms may have to address. A more complete attack on a self-driving car would have to target the entire control pipeline, which involves many more steps than classification alone. One such step, which is out of scope for our work, is object detection: identifying the region of an image taken by a car's camera where a road sign appears. We focus our efforts on attacking classifiers using physical modifications to objects, because classifiers are the models most commonly studied in adversarial examples research. Although our attacks on classifiers are unlikely to work against detectors “out of the box,” it is quite possible that future work will find robust attacks on object detectors, in a similar vein to our work on attacking classifiers.
    • Should I stop using the autonomous features (parking, freeway driving) of my car? Or is there any immediate concern?
      • We again stress that our attack was crafted for the trained neural network discussed above. As it stands today, this attack would most likely not work as-is on existing self-driving cars.
    • By revealing this vulnerability, aren't you helping potential hackers?
      • No---on the contrary, we are helping manufacturers and users to address potential problems before hackers can take advantage. As computer security researchers, we are interested in identifying the security risks of emerging technologies, with the goal of helping improve the security of future versions of those technologies. The security research community has found that evaluating the security risks of a new developing technology makes it much easier to confront and address security problems before adversarial pressure manifests. One example has been the modern automobile and another, the modern smart home. In both cases, there is progress toward better security. We hope that our results start a fruitful conversation on securing cyber-physical systems that use neural nets for making important control decisions.
    • Are you doing demos or interviews?
      • As our work is in progress, we are currently focused on improving and fine-tuning the scientific techniques behind our initial results. We created this FAQ in response to the unanticipated media interest and to answer questions that have arisen in the meantime. In the future, we may upload video demonstrations of the attack, and may accept interview invitations. For the time being, we have uploaded our experimental attack images on this website.
    • Whom should we contact if we have more questions?
      • We are a team of researchers at various institutions. Please see below for a list of team members and institutions involved in the project. In order to streamline communication, we have created an alias that reaches all team members. We strongly recommend that you contact roadsigns@umich.edu if you have further questions.
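
    For readers who want a concrete picture of the classifier described in the FAQ above (three convolutional layers followed by a fully connected layer), here is a minimal PyTorch sketch of such an architecture. The input size, layer widths, and class count are assumptions for illustration; the model we attacked was trained separately and was originally developed as part of the Cleverhans library.

        # Sketch of a small road-sign classifier: three conv layers + one FC layer.
        # Assumes 32x32 RGB crops; layer widths and class count are placeholders.
        import torch.nn as nn

        def make_sign_classifier(num_classes):
            return nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),                        # 64 channels x 4 x 4 for 32x32 input
                nn.Linear(64 * 4 * 4, num_classes),  # fully connected classification layer
            )

        model = make_sign_classifier(num_classes=17)  # class count is a placeholder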

    Example Drive-By Test Video

    Abstract Art Attack on LISA-CNN

    The left-hand side is a video of a perturbed Stop sign; the right-hand side is a video of a clean Stop sign. The classifier (LISA-CNN) labels the perturbed sign as Speed Limit 45 until the car is very close to the sign, at which point it is too late for the car to stop reliably. The subtitles show the LISA-CNN classifier output.

    Subtle Poster Attack on LISA-CNN

    The left-hand side is a video of a true-sized Stop sign printout (poster paper) with perturbations covering the entire surface area of the sign. The classifier (LISA-CNN) labels this perturbed sign as a Speed Limit 45 sign in all tested frames. The right-hand side is the baseline (a clean poster-printed Stop sign). The subtitles show LISA-CNN output.

    Research Paper

    Download PDF

    When referring to our work, please cite it as:

    Ivan Evtimov, Kevin Eykholt, Earlence Fernandes, Tadayoshi Kohno, Bo Li, Atul Prakash, Amir Rahmati, and Dawn Song
    Robust Physical-World Attacks on Deep Learning Models
    arXiv preprint arXiv:1707.08945, August 2017

    or, use BibTeX for citation:

                     @Misc{roadsigns17,
                        author = {Ivan Evtimov and Kevin Eykholt and Earlence Fernandes and Tadayoshi Kohno and Bo Li and Atul Prakash and Amir Rahmati and Dawn Song},
                        title = {{Robust Physical-World Attacks on Deep Learning Models}},
                        howpublished = {arXiv preprint arXiv:1707.08945},
                        month = aug,
                        year = 2017
                     }
                    

    Resources: Experimental Attack Images

    We have made a sampling of our experimental attack images available as a zip file (around 25MB). Click here to download.

    Team (alphabetical order)

    Ivan Evtimov, Ph.D. Candidate, University of Washington

    Kevin Eykholt, Ph.D. Candidate, University of Michigan

    Earlence Fernandes, Postdoctoral Researcher, University of Washington

    Tadayoshi Kohno, Professor, University of Washington

    Bo Li, Postdoctoral Researcher, University of California Berkeley

    Atul Prakash, Professor, University of Michigan

    Amir Rahmati, Professor, Stony Brook University

    Dawn Song, Professor, University of California Berkeley


    Acknowledgements



    DTAP: Summary

    Trigger-Action platforms are web-based systems that enable users to create automation rules by stitching together online services representing digital and physical resources using OAuth tokens. Unfortunately, these platforms introduce a long-range, large-scale security risk: if they are compromised, an attacker can misuse the OAuth tokens belonging to a large number of users to arbitrarily manipulate their devices and data. We introduce Decentralized Action Integrity, a security principle that prevents an untrusted trigger-action platform from misusing compromised OAuth tokens in ways that are inconsistent with any given user’s set of trigger-action rules. We present the design and evaluation of Decentralized Trigger-Action Platform (DTAP), a trigger-action platform that implements this principle by overcoming practical challenges. DTAP splits the currently monolithic platform design into an untrusted cloud service and a set of user clients (each user trusts only their own client). Our design introduces the concept of Transfer Tokens (XTokens) to practically use fine-grained, rule-specific tokens without increasing the number of OAuth permission prompts compared to current platforms. Our evaluation indicates that DTAP introduces negligible overhead: it adds less than 15ms of latency to rule execution time and reduces throughput by 2.5%.
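
    The essence of Decentralized Action Integrity is that the cloud platform only ever holds tokens that are bound to a single rule (one trigger and one action), so a compromised platform cannot repurpose them against other devices or data. The Python sketch below illustrates that binding with a simple HMAC-based token; key handling and names are assumed for illustration, and the full DTAP design additionally uses Transfer Tokens (XTokens) so that the number of OAuth prompts stays the same as on current platforms.

        # Python sketch of a rule-specific token, illustrating Decentralized Action
        # Integrity. Key management and names are simplified assumptions.
        import hashlib, hmac, secrets

        SERVICE_KEY = secrets.token_bytes(32)         # held by the action service

        def mint_rule_token(rule_id, allowed_action):
            """Issued for exactly one rule's action (via the trusted user client)."""
            msg = f"{rule_id}:{allowed_action}".encode()
            return hmac.new(SERVICE_KEY, msg, hashlib.sha256).hexdigest()

        def execute_action(rule_id, action, token):
            """Action service: run the action only if the token is bound to it."""
            expected = mint_rule_token(rule_id, action)
            if not hmac.compare_digest(expected, token):
                raise PermissionError("token not valid for this rule/action")
            print(f"executing {action} for rule {rule_id}")

        # The untrusted cloud stores a token usable only for rule-42's declared action:
        token = mint_rule_token("rule-42", "garage.open")
        execute_action("rule-42", "garage.open", token)          # allowed
        # execute_action("rule-42", "frontdoor.unlock", token)   # raises PermissionError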

    Research Paper

    PDF coming soon!

    When referring to our work, please cite it as:

    Earlence Fernandes, Amir Rahmati, Jaeyeon Jung, and Atul Prakash
    Decentralized Action Integrity for Trigger-Action IoT Platforms
    Network and Distributed System Security Symposium (NDSS 2018), San Diego, CA, February 2018

    or, use BibTeX for citation:

                     @InProceedings{dtap18,
                        author = {Earlence Fernandes and Amir Rahmati and Jaeyeon Jung and Atul Prakash},
                        title = {{Decentralized Action Integrity for Trigger-Action IoT Platforms}},
                        booktitle = {Network and Distributed System Security Symposium ({NDSS} 2018)},
                        month = feb,
                        year = 2018
                     }
                    

    Team

    Earlence Fernandes, Postdoctoral Researcher, University of Washington

    Amir Rahmati, Professor, Stony Brook University

    Jaeyeon Jung, Vice President, Samsung

    Atul Prakash, Professor, University of Michigan


    Acknowledgements