Performance analysis of various machine learning models for membership inference attack
Online publication date: Mon, 08-Jan-2024
by K. Karthikeyan; K. Padmanaban; Datchanamoorthy Kavitha; Jampani Chandra Sekhar
International Journal of Sensor Networks (IJSNET), Vol. 43, No. 4, 2023
Abstract: Many ML models require enormous amounts of labelled data to function correctly during the training phase. This data may contain private information whose privacy must be protected. Membership inference attacks (MIAs) attempt to determine whether a target data point was used to train a particular ML model, and they can compromise users' privacy and security. The degree to which an ML algorithm divulges membership information varies from implementation to implementation. Hence, a performance analysis was carried out on different ML algorithms under MIAs. This study compares different ML approaches against MIAs and analyses which ML algorithms are more resistant to such privacy attacks. Based on the observations of the performance analysis, the GAN and DNN models performed best at defending against MIAs.
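The abstract describes MIAs at a high level; below is a minimal sketch of one common instantiation, a confidence-threshold attack, built with scikit-learn on synthetic data. The model, dataset, and threshold here are illustrative assumptions and do not reflect the paper's actual experimental setup or the specific attacks it evaluates.

```python
# A minimal sketch of a confidence-threshold membership inference attack.
# Assumptions (not from the paper): synthetic data, a RandomForest target
# model, and a fixed confidence threshold of 0.9.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: "members" are the target model's training points,
# "non-members" are held-out points the model never saw.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Target model trained only on the member set.
target_model = RandomForestClassifier(n_estimators=100, random_state=0)
target_model.fit(X_member, y_member)

def infer_membership(model, X, threshold=0.9):
    """Guess 'member' when the model's confidence in its top class exceeds
    the threshold -- overfit models tend to be more confident on training
    points than on unseen ones."""
    confidences = model.predict_proba(X).max(axis=1)
    return confidences >= threshold

# Attack accuracy: average of the member detection rate and the
# non-member rejection rate.
member_guess = infer_membership(target_model, X_member)
nonmember_guess = infer_membership(target_model, X_nonmember)
attack_accuracy = (member_guess.mean() + (1 - nonmember_guess.mean())) / 2
print(f"Membership inference attack accuracy: {attack_accuracy:.3f}")
```

An attack accuracy well above 0.5 indicates that the target model leaks membership information; comparing this figure across model families is the kind of analysis the paper performs.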