The term "federated learning" was coined only a few years ago and refers to a way of training artificial intelligence (AI) models without anyone needing to see or touch your raw data. This opens up previously locked-away information to feed new AI applications, and this decentralized form of machine learning has quickly gained popularity. With that in mind, let us look at ten noteworthy research papers on federated learning.
The main focus of this research paper is to establish that generative models trained with federated methods and formal differential privacy guarantees can effectively debug many commonly occurring data issues, even when the data is never directly inspected. The researchers also apply these methods to text, using differentially private federated RNNs, and to images, using a novel algorithm for differentially private federated GANs.
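The per-client step behind differentially private federated training is typically to clip each update to a bounded L2 norm and then add calibrated Gaussian noise before aggregation. The sketch below illustrates that idea only; the function name and parameters are illustrative and are not taken from the paper's code.

```python
import random

def privatize_update(update, clip_norm, noise_mult, rng=random):
    """Clip an update to L2 norm <= clip_norm, then add Gaussian noise.

    This is the clip-and-noise step used in differentially private
    federated training (a sketch, not the authors' implementation).
    """
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    sigma = noise_mult * clip_norm  # noise scaled to the clipping bound
    return [x + rng.gauss(0.0, sigma) for x in clipped]
```

With `noise_mult=0.0` the function reduces to pure clipping, which makes the scaling easy to verify before turning the noise on.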
This research paper, put forth by researchers from Yandex, the University of Toronto, the Moscow Institute of Physics and Technology, and the National Research University Higher School of Economics, proposes Moshpit All-Reduce. The researchers demonstrate the efficiency of their protocol for distributed optimization, backing strong theoretical guarantees with experiments that show impressive results.
Researchers from WeBank, Kwai, the University of Southern California, the University of Michigan, and the University of Rochester propose a central-server-free federated learning algorithm, the Online Push-Sum (OPS) method, to handle various challenges in a generic setting. They also provide a rigorous regret analysis, which shows interesting results on how users can benefit from communicating with trusted users in a federated learning environment.
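Push-sum methods replace the central server with gossip: each node repeatedly splits its running value and a companion weight among its neighbors, and the ratio value/weight converges to the global average on a connected graph. The synchronous round below is a minimal sketch of that gossip primitive, not the OPS algorithm itself, and all names are illustrative.

```python
def push_sum_round(values, weights, neighbors):
    """One synchronous push-sum gossip round over a directed graph.

    Each node i splits its value and weight equally among itself and
    neighbors[i]; after many rounds, values[i] / weights[i] converges
    to the average of the initial values (sketch of the primitive
    underlying server-free methods such as Online Push-Sum).
    """
    n = len(values)
    new_v = [0.0] * n
    new_w = [0.0] * n
    for i in range(n):
        targets = [i] + neighbors[i]   # node keeps a share for itself
        share = 1.0 / len(targets)
        for j in targets:
            new_v[j] += share * values[i]
            new_w[j] += share * weights[i]
    return new_v, new_w
```

Running this round repeatedly and reading off `values[i] / weights[i]` gives each node the network-wide average without any coordinator.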
'Federated Learning for Mobile Keyboard Prediction' is a research paper from Google researchers demonstrating the feasibility and benefits of training language models on client devices without exporting sensitive user data to servers. Notably, the federated learning environment gives users greater control over the use of their data and simplifies incorporating privacy by default, with distributed training and aggregation across a population of client devices.
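The aggregation step behind this style of on-device training is federated averaging: clients train locally, and the server averages their model weights, weighted by each client's number of training examples. The function below is a minimal sketch of that step; the name `fed_avg` and the flat-list weight representation are simplifications, not the paper's code.

```python
def fed_avg(client_weights, client_sizes):
    """Average client weight vectors, weighted by local dataset size.

    client_weights: list of per-client weight vectors (equal length).
    client_sizes:   number of local training examples per client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_weights = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            # Each client's contribution is proportional to its data size.
            global_weights[i] += (n / total) * w[i]
    return global_weights
```

For example, a client holding three times as much data pulls the global model three times as strongly toward its local weights.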
Researchers from the University of Southern California reformulate federated learning as a group of knowledge-transfer training algorithms called FedGKT. They design an alternating minimization approach that trains small CNNs on edge nodes and periodically transfers their knowledge, via knowledge distillation, to a large server-side CNN. This has numerous advantages, including reduced demand for edge computation and lower communication bandwidth for large CNNs.
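The knowledge-distillation signal exchanged in this kind of training is typically a KL divergence between temperature-softened output distributions of the two models. The sketch below shows that standard distillation loss on raw logit lists; it is a generic illustration of the technique, not FedGKT's actual implementation.

```python
import math

def softmax(logits, temp=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / temp) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def distill_loss(teacher_logits, student_logits, temp=2.0):
    """KL(teacher || student) on softened probabilities.

    This is the standard knowledge-distillation loss; in FedGKT-style
    training such a signal flows between the small edge CNN and the
    large server-side CNN (sketch, not the paper's code).
    """
    t = softmax(teacher_logits, temp)
    s = softmax(student_logits, temp)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))
```

The loss is zero when the two models agree exactly and grows as their predicted distributions diverge.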
To facilitate federated learning algorithm development and fair performance comparison, researchers from Tencent and top universities introduced FedML, an open research library and benchmark. The researchers believe that their library and benchmarking framework provide an efficient and reproducible means for developing and evaluating federated learning algorithms.
Researchers from Monash University, the University of Queensland, and the University of Technology Sydney propose an attentive aggregation method: model aggregation with an attention mechanism that considers each client model's contribution to the global model, together with an optimization technique applied during server-side aggregation, to address the problem of mobile keyboard suggestion.
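One common way to realize such attention on the server is to score each client model by its distance to the current global model, softmax those scores into weights, and step the global model toward the attention-weighted client average. The sketch below, with flat-list weights and an illustrative step size `eps`, shows the general idea rather than the paper's exact formulation.

```python
import math

def attentive_aggregate(global_w, client_ws, eps=1.0):
    """Attention-weighted server aggregation (illustrative sketch).

    Clients whose models lie closer to the current global model receive
    larger softmax attention weights; the global model then takes a step
    of size eps toward the attention-weighted client average.
    """
    dists = [math.sqrt(sum((g - c) ** 2 for g, c in zip(global_w, w)))
             for w in client_ws]
    exps = [math.exp(-d) for d in dists]
    z = sum(exps)
    attn = [e / z for e in exps]   # softmax over negative distances
    return [g + eps * sum(a * (w[i] - g) for a, w in zip(attn, client_ws))
            for i, g in enumerate(global_w)]
```

With a single client and `eps=1.0`, the update reduces to adopting that client's model outright, which is a handy sanity check.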
Researchers from ByteDance and Carnegie Mellon University came together for a research paper titled 'Label Leakage and Protection in Two-party Split Learning'. They show that the norm of the gradients communicated between the two parties can reveal the participants' ground-truth labels, and they discuss several protection techniques to mitigate this leakage.
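The intuition behind the attack is simple for binary classification: the gradient of the cross-entropy loss with respect to the logit has magnitude |p - y|, so its norm correlates with the ground-truth label, and a party receiving those gradients can guess labels by thresholding the norm. The sketch below illustrates that norm-based inference in the simplest possible setting; the function names and the threshold are illustrative.

```python
import math

def grad_norm(logit, label):
    """Magnitude of the binary cross-entropy gradient w.r.t. the logit.

    In two-party split learning this gradient is sent from the label
    party to the non-label party, and its size leaks label information.
    """
    p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid prediction
    return abs(p - label)

def norm_attack(grad_norms, threshold):
    """Guess 'positive' whenever the observed gradient norm is large."""
    return [1 if g > threshold else 0 for g in grad_norms]
```

In imbalanced settings (e.g. ad conversion), rare positive examples tend to produce much larger gradient norms, which is what makes this thresholding attack effective and motivates the paper's protections.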
Researchers from University College London, the University of Cambridge, and Avignon Université presented Flower, an open-source framework that supports heterogeneous environments, including mobile and edge devices, and scales to many distributed clients. With this in place, engineers can port existing workloads with little overhead, regardless of the ML framework used.
This broad survey paper comes from Google, in collaboration with researchers from top universities, and catalogs the many open challenges in federated learning. It is no wonder the paper makes this list of noteworthy research papers on federated learning.