
Decentralized Multi-Target Cross-Domain Recommendation for Multi-Organization Collaborations

Abstract

Recommender Systems (RSs) are often operated locally by different organizations in realistic scenarios. If these organizations could fully share their data and perform computation in a centralized manner, they could significantly improve the accuracy of recommendations. In practice, however, such collaboration is largely blocked by the difficulty of sharing data and models. To address this challenge, we propose Decentralized Multi-Target Cross-Domain Recommendation (DMTCDR) with Multi-Target Assisted Learning (MTAL) and Assisted AutoEncoder (AAE). Our method helps multiple organizations collaboratively improve their recommendation performance in a decentralized manner without sharing sensitive assets, allowing decentralized organizations to collaborate and form a community of shared interest. Extensive experiments demonstrate that the new method significantly outperforms locally trained RSs and mitigates the cold start problem.

Collaborations for Better Recommendation

Our solution provides the keystone for establishing collaborations among multiple organizations, leveraging isolated data and computation resources to improve recommendation performance. Each organization calculates a set of "residuals" and broadcasts them to the other organizations. These residuals approximate the fastest direction of reducing the training loss in hindsight. The other organizations then fit the residuals using their local data, models, and objective functions, and broadcast the fitted values back. Each learner then assigns weights to its peers' fitted values to approximate the fastest direction of learning, and aggregates the weighted fitted values into its prediction. This procedure is repeated until all organizations reach a sufficient level of learning.
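The loop above can be sketched as a toy simulation with NumPy. Everything here is an illustrative assumption, not the paper's actual models: three organizations hold different feature views of the same aligned users, each fits the broadcast residuals with a simple local ridge regression, and the learning organization combines the fitted values by least squares. Only residuals and fitted values cross organization boundaries; raw data and model weights stay local.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 organizations, each with its own feature view
# of the same 200 aligned users; org 0 wants to predict targets y.
n_users, n_orgs, rounds = 200, 3, 10
X = [rng.normal(size=(n_users, 4)) for _ in range(n_orgs)]
y = sum(Xk @ rng.normal(size=4) for Xk in X) + 0.1 * rng.normal(size=n_users)


def fit_residual(Xk, r, lam=1e-3):
    """Local step: an org fits the broadcast residuals with its own
    model (here, ridge regression) and returns only the fitted values."""
    w = np.linalg.solve(Xk.T @ Xk + lam * np.eye(Xk.shape[1]), Xk.T @ r)
    return Xk @ w


pred = np.zeros(n_users)
for _ in range(rounds):
    residual = y - pred  # for squared loss, the negative gradient in hindsight
    # Each org fits the residuals locally and broadcasts its fitted values.
    fits = np.column_stack([fit_residual(Xk, residual) for Xk in X])
    # The learner assigns weights to its peers' fitted values ...
    alpha, *_ = np.linalg.lstsq(fits, residual, rcond=None)
    # ... and aggregates the weighted fitted values into its prediction.
    pred += fits @ alpha

mse = float(np.mean((y - pred) ** 2))
```

After a few rounds the residual shrinks toward the noise floor, even though no organization ever sees another's data, model, or objective function.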

Our main contributions are as follows.

  • We present a new recommendation framework, Decentralized Multi-Target Cross-Domain Recommendation (DMTCDR), which can simultaneously improve the recommendation performance of multiple decentralized organizations without sharing their local data, models, or objective functions.
  • We propose a new decentralized learning algorithm named Multi-Target Assisted Learning (MTAL) with a new AutoEncoder-based RS called Assisted AutoEncoder (AAE). Our method exchanges information among decentralized organizations by fitting "pseudo-residuals" with local data and models. It also covers broad application scenarios, including explicit or implicit feedback, user- or item-based alignment, and with or without side information.
  • We conduct extensive experiments and demonstrate that our method can significantly outperform locally trained RSs and mitigate the cold start problem. As a result, our approach can promote collaborations among various organizations to form a community of shared interest.

The proposed MTAL algorithm extends Assisted Learning (AL) in two ways. First, we generalize AL from a single-target to a multi-target learning framework: AL and Gradient Assisted Learning (GAL) assume multiple assistors help a single sponsor, whereas MTAL fits the targets of all organizations with a single local RS. Second, each organization can optimize its own gradient assistance weights and gradient assisted learning rate to avoid negative transfer from other organizations. As a result, MTAL together with AAE, implicitly operating on the global dataset, can simultaneously improve the recommendation performance of all participating organizations.
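The negative-transfer safeguard can be illustrated with a small sketch. The fitted values, the clipping of the weights, and the grid line search below are all illustrative assumptions, not the paper's actual optimization: two peers return informative fits of the broadcast pseudo-residuals, one returns pure noise, and the learner's weights plus a 1-D search over the learning rate keep the unhelpful peer from degrading the update.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
residual = rng.normal(size=n)  # pseudo-residuals broadcast by the learner

# Fitted values returned by three peers (hypothetical):
# two carry signal, one is unrelated and risks negative transfer.
fits = np.column_stack([
    residual + 0.3 * rng.normal(size=n),        # helpful peer
    0.5 * residual + 0.3 * rng.normal(size=n),  # partially helpful peer
    rng.normal(size=n),                          # unrelated peer
])

# Gradient assistance weights: least squares on the fitted values,
# clipped at zero so an unhelpful peer cannot push the update backward.
alpha, *_ = np.linalg.lstsq(fits, residual, rcond=None)
alpha = np.clip(alpha, 0.0, None)

# Gradient assisted learning rate: 1-D grid search over the
# aggregated direction, keeping the step that most reduces the loss.
direction = fits @ alpha
etas = np.linspace(0.0, 2.0, 41)
losses = [float(np.mean((residual - eta * direction) ** 2)) for eta in etas]
eta = float(etas[int(np.argmin(losses))])
```

The unrelated peer receives a near-zero weight, so its contribution to the aggregated direction is negligible, while the learning rate controls how far the learner moves along that direction.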

References

Diao, Enmao, Vahid Tarokh, and Jie Ding. "Decentralized Multi-Target Cross-Domain Recommendation for Multi-Organization Collaborations." arXiv preprint arXiv:2110.13340 (2021).