Capital One open-sources federated learning with Federated Model Aggregation
AI is the talk of the town, and it seems like every software provider wants to add AI-powered features to its products. But to do that, you need AI models that you can train.
One of the newer approaches to model training in machine learning is federated learning (FL), which decentralizes training so that data doesn’t need to be centrally stored, explained Kenny Bean, machine learning software engineer at Capital One.
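To make the idea concrete, here is a minimal, generic sketch of one federated round in the FedAvg style: each client trains on its own private data, and only the resulting weight updates (never the data) are averaged centrally. The function and variable names are illustrative assumptions and do not reflect FMA’s actual API.

```python
# A generic sketch of federated learning (FedAvg-style averaging).
# Illustrative only; names are hypothetical, not FMA's API.
import numpy as np

def train_locally(global_weights, local_data):
    """Hypothetical local step: one gradient update on data that never leaves the client."""
    X, y = local_data
    preds = X @ global_weights
    gradient = X.T @ (preds - y) / len(y)
    return global_weights - 0.1 * gradient

def federated_round(global_weights, clients_data):
    """Each client trains on its own data; only the resulting weights are averaged centrally."""
    client_weights = [train_locally(global_weights, data) for data in clients_data]
    return np.mean(client_weights, axis=0)

# Two clients, each holding private data in its own location.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(2)]
weights = np.zeros(3)
for _ in range(10):
    weights = federated_round(weights, clients)
```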
To take advantage of the benefits FL brings, Capital One created Federated Model Aggregation (FMA), an open-source project that allows developers to deploy their existing machine learning workflows in a federated setting.
According to Bean, FMA includes a number of Python components. It provides connectors that facilitate communication between those components and can also be used to connect to your own.
It also includes a client that facilitates client-service interactions, an aggregator that pulls in model updates from a set of clients, and an API service that handles the UI and API interactions between components in the system.
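As a rough illustration of how those roles fit together, the sketch below pairs a client that ships only its model update with an aggregator that combines the updates it pulls in. The class and method names are hypothetical and are not FMA’s actual interfaces.

```python
# Hypothetical sketch of the client/aggregator roles described above;
# not FMA's real interfaces.
import numpy as np

class Client:
    """Trains on local data and ships only a model update to the service."""
    def __init__(self, local_update):
        self.local_update = local_update

    def push_update(self):
        return self.local_update  # e.g., locally trained weights

class Aggregator:
    """Pulls in model updates from a set of clients and combines them into a new global model."""
    def aggregate(self, updates):
        return np.mean(updates, axis=0)

clients = [Client(np.array([1.0, 2.0])), Client(np.array([3.0, 4.0]))]
aggregator = Aggregator()
new_global_model = aggregator.aggregate([c.push_update() for c in clients])
print(new_global_model)  # [2. 3.]
```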
According to Bean, who is one of the original authors of the project, FMA was created for developers who want to train models on data that is coming in from multiple locations, or that can’t be moved from its original location.
“Any time a model is used in a distributed way, there is a potential to use the FMA service to introduce federated learning to that training process,” said Bean.
One of the main goals the team had when developing the project was to make it customizable and reusable.
“We decided we are going to try and implement a service that would be able to integrate into pre-existing model training paradigms,” said Bean. “And that’s kind of where the FMA service was born.”
Another goal the creators had in mind was to make it easy to deploy. Models can be deployed with FMA in just one command. According to Bean, this is made possible because it uses Terraform, the infrastructure-as-code tool from HashiCorp.
The project wasn’t always envisioned as open source, but the team soon realized it could really benefit the greater community.
“Initially we designed FMA for a specific use case and then quickly realized it could be applicable to many more. So that’s when we made the decision that if it’s highly customizable and easy to use then we should open-source it. Capital One relies on open-source technology and we believe in giving back to the community that helped us through our technology transformation.”
Looking ahead, the team is working on feature discovery and improving interaction with the community to make it easier to gather feedback. They also plan to expand the components to other languages.