Jiankai Jin, Eleanor McMurtry, Benjamin I. P. Rubinstein, and Olga Ohrimenko.
In 2022 IEEE Symposium on Security and Privacy (SP), pp. 473-488.
This work demonstrated attacks on Meta's Opacus library for PyTorch and Google's Differential Privacy library. Meta has since fixed the issue, and Google acknowledged our finding via Google Bug Hunters. One author of this paper (Olga) was invited to present it at a Google TechTalk and at the Computer Laboratory Security Seminar.
Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness [preprint]
Jiankai Jin, Olga Ohrimenko, Benjamin I. P. Rubinstein.
arXiv preprint arXiv:2205.10159, 2022.
Supervisors: A/Prof Olga Ohrimenko, Prof Ben Rubinstein
Research Area: Identifying and Preventing Implementation Flaws in Secure Machine Learning Systems
Supervisors: A/Prof Olga Ohrimenko
Research Area: Effects of Data Sampling on Training of Machine Learning Models
I work on the recommendation systems of dating apps such as SoulMatch and ChatRight. My responsibilities include: designing recommendation systems; managing big data using Hive, Redis, Spark, Flink, and ElasticSearch; and feature engineering, training, and deployment of machine learning models offline (XGBoost) and online (DeepFM). Click-through rates and conversion rates of these apps improved by up to 300%.
I work on the Bairong Federated Machine Learning Platform. My responsibilities include: deployment of the platform, frontend and backend development, testing and optimization of machine learning models, and development of customized feature engineering modules.
I performed extensive big data computation in my previous roles, using tools such as SQL, Redis, ElasticSearch, Spark, and Flink.
I have experience working with deep learning (such as Transformers and ResNet) and ensemble learning (such as XGBoost), using frameworks such as TensorFlow and PyTorch.
I can develop programs in Java, Python, Scala, Go, C++, and Bash.
Melbourne Research Scholarship