The Basics of Differential Privacy & Its Applicability to NLU Models
Over the years, large pre-trained language models like BERT and RoBERTa have led to significant improvements in natural language understanding (NLU) tasks. However, these pre-trained models pose a risk of potential data leakage if the training data includes any personally identifiable information (PII). To mitigate this risk, two techniques are commonly used to preserve privacy: …