Measuring and Modeling the Label Dynamics of Online Anti-Malware Engines

Published in the Proceedings of the 29th USENIX Security Symposium (USENIX Security 2020), pages 2361–2378, 2020

Recommended citation: Zhu, S., Shi, J., Yang, L., Qin, B., Wang, G., & Song, L. (2020, August). Measuring and Modeling the Label Dynamics of Online Anti-Malware Engines. In 29th USENIX Security Symposium (pp. 2361–2378). https://www.usenix.org/conference/usenixsecurity20/presentation/zhu

VirusTotal provides malware labels from a large set of anti-malware engines and is heavily used by researchers for malware annotation and system evaluation. Since different engines often disagree with each other, researchers have used various methods to aggregate their labels. In this paper, we take a data-driven approach to categorize, reason about, and validate common labeling methods used by researchers. We first survey 115 academic papers that use VirusTotal and identify common methodologies. Then we collect daily snapshots of VirusTotal labels for more than 14,000 files (including a subset of manually verified ground truth) from 65 VirusTotal engines over the course of a year. Our analysis validates the benefits of threshold-based label aggregation in stabilizing files’ labels, and also points out the impact of poorly chosen thresholds. We show that hand-picked “trusted” engines do not always perform well, and that certain groups of engines are strongly correlated and should not be treated as independent. Finally, we empirically show that certain engines fail to perform in-depth analysis on submitted files and can easily produce false positives. Based on our findings, we offer suggestions for future usage of VirusTotal for data annotation.
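Conceptually, threshold-based label aggregation marks a file as malicious when at least t engines flag it. The sketch below illustrates this idea only; the function name, engine names, and threshold value are assumptions for illustration, not taken from the paper or the VirusTotal API.

```python
# Minimal sketch of threshold-based label aggregation over VirusTotal-style
# engine verdicts. All names and the default threshold are hypothetical.

from typing import Dict


def aggregate_label(engine_verdicts: Dict[str, bool], threshold: int = 5) -> bool:
    """Label a file malicious if at least `threshold` engines flag it.

    engine_verdicts maps an engine name to its boolean detection verdict,
    e.g. {"EngineA": True, "EngineB": False, ...}.
    """
    detections = sum(1 for flagged in engine_verdicts.values() if flagged)
    return detections >= threshold


# Example: three of four engines flag the file; with threshold=2 the
# aggregated label is malicious.
report = {"EngineA": True, "EngineB": True, "EngineC": False, "EngineD": True}
print(aggregate_label(report, threshold=2))  # True
```

As the paper's measurements suggest, the choice of threshold matters: too low a value inherits the false positives of individual engines, while too high a value can miss files that only a minority of engines detect.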

Download paper here
