
RESEARCH PRODUCT

Countering Adversarial Inference Evasion Attacks Towards ML-Based Smart Lock in Cyber-Physical System Context

Antti Kariluoto; Martti Lehto; Petri Vähäkainu

subject

Exploit; Computer science; Cyber-physical system; Evasion attacks; Evasion (network security); Adversarial machine learning; Computer security; Defensive mechanisms; Adversarial system; Smart lock; Machine learning; Smart technology; Cybersecurity; Cyberattacks; Building automation

description

Machine Learning (ML) has taken significant evolutionary steps and provides sophisticated means for developing novel, smart, up-to-date applications. However, this development has also brought to light new types of hazards, some with potentially destructive consequences, that need to be addressed. Evasion attacks are among the most commonly used attacks that can be generated in adversarial settings during system operation. The ML environment is assumed to be benign, but in reality perpetrators may exploit vulnerabilities to conduct gradient-free or gradient-based malicious adversarial inference attacks against cyber-physical systems (CPS), such as smart buildings. Evasion attacks give perpetrators the means to modify, for example, the test inputs fed to a victim ML model. In this article, we conduct a literature review of evasion attacks and their countermeasures and discuss how these attacks can be used to deceive the ML classifier of a CPS smart lock system in order to gain access to the smart building.

peerReviewed
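To make the gradient-based case concrete, below is a minimal sketch of an evasion attack in the style of the fast gradient sign method (FGSM), written in Python with PyTorch. The smart-lock classifier, its feature dimension, and the perturbation budget are illustrative assumptions introduced here for the example; they are not taken from the article.

```python
# Sketch of a gradient-based evasion (FGSM-style) attack against a
# hypothetical smart-lock access classifier. All model details are assumed.
import torch
import torch.nn as nn

class SmartLockClassifier(nn.Module):
    """Hypothetical binary classifier: class 1 = grant access, class 0 = deny."""
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fgsm_evasion(model: nn.Module, x: torch.Tensor,
                 true_label: torch.Tensor, epsilon: float = 0.05) -> torch.Tensor:
    """Craft an adversarial input by stepping along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), true_label)
    loss.backward()
    # Perturb the input in the direction that increases the loss,
    # pushing the sample across the decision boundary at inference time.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage: perturb a sample whose ground truth is "deny" so that the
# classifier may misclassify it as "grant".
model = SmartLockClassifier().eval()
x = torch.rand(1, 16)            # assumed feature vector, e.g. from a sensor reading
y_true = torch.tensor([0])       # ground truth: access should be denied
x_adv = fgsm_evasion(model, x, y_true)
print(model(x_adv).argmax(dim=1))  # classifier's decision on the perturbed input
```

A common countermeasure discussed in the evasion-attack literature, adversarial training, augments the training data with perturbed examples such as the one produced above so that the classifier learns to resist them.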

http://urn.fi/URN:NBN:fi:jyu-202111195731