Countering Mainstream Bias via End-to-End Adaptive Local Learning: Preliminaries


Table of Links

Abstract and 1 Introduction

2 Preliminaries

3 End-to-End Adaptive Local Learning

3.1 Loss-Driven Mixture-of-Experts

3.2 Synchronized Learning via Adaptive Weight

4 Debiasing Experiments and 4.1 Experimental Setup

4.2 Debiasing Performance

4.3 Ablation Study

4.4 Effect of the Adaptive Weight Module and 4.5 Hyper-parameter Study

5 Related Work

6 Conclusion, Acknowledgements, and References

2 Preliminaries


Mainstream Bias. Typically, a recommender system is evaluated by averaging a utility metric (such as NDCG@K) over all users, which conceals the performance differences across different types of users. Previous work [39] formalizes the mainstream bias as the gap in recommendation performance across users with different levels of mainstreamness. In this work, we follow the problem setting of [39] and measure the mainstream level of a user by the average similarity of that user to all others: the more similar the user is to the majority, the more mainstream she is.
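
To make this concrete, below is a minimal sketch of one way such a mainstreamness score could be computed. The use of Jaccard similarity over binary interaction vectors and the function name are illustrative assumptions, not necessarily the exact measure used in [39].

```python
import numpy as np

def mainstreamness_scores(R: np.ndarray) -> np.ndarray:
    """Average pairwise Jaccard similarity of each user to every other user.

    R: binary user-item interaction matrix of shape (num_users, num_items).
    Returns one score per user; a higher score means the user's interactions
    overlap more with the rest of the population (i.e., more mainstream).
    """
    R = (R > 0).astype(int)
    inter = R @ R.T                                   # |U_i ∩ U_j| for every user pair
    sizes = R.sum(axis=1)                             # |U_i| for each user
    union = sizes[:, None] + sizes[None, :] - inter   # |U_i ∪ U_j|
    jaccard = np.divide(inter, union,
                        out=np.zeros(union.shape), where=union > 0)
    np.fill_diagonal(jaccard, 0.0)                    # exclude self-similarity
    return jaccard.sum(axis=1) / max(R.shape[0] - 1, 1)

# Toy usage: the third user shares no items with the others and scores lowest.
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 1]])
print(mainstreamness_scores(R))
```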


Debiasing Goal. While addressing the issues of discrepancy modeling and unsynchronized learning benefits all user types, it is inappropriate to demand equalized utility across all users, since that may encourage lowering the utility of mainstream groups. Hence, in this work, we follow the Rawlsian Max-Min fairness principle of distributive justice [29]. This principle pursues fairness by maximizing the minimum utility over individuals or groups, ensuring that no one is underserved. Thus, to counter the mainstream bias, we aim to improve the recommendation utility of niche users while preserving or even enhancing the performance for mainstream users.
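
As an illustrative formalization only (the grouping and notation below are our assumptions, not taken verbatim from [29]), the Max-Min objective can be sketched as

```latex
\max_{\theta} \;\; \min_{g \in \mathcal{G}} \; U_g(\theta)
```

where \(\mathcal{G}\) is a set of user groups (e.g., users partitioned by mainstreamness level), \(U_g(\theta)\) is the average recommendation utility of group \(g\) (such as NDCG@K), and \(\theta\) denotes the model parameters.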


:::info

Authors:

(1) Jinhao Pan [0009-0006-1574-6376], Texas A&M University, College Station, TX, USA;

(2) Ziwei Zhu [0000-0002-3990-4774], George Mason University, Fairfax, VA, USA;

(3) Jianling Wang [0000-0001-9916-0976], Texas A&M University, College Station, TX, USA;

(4) Allen Lin [0000-0003-0980-4323], Texas A&M University, College Station, TX, USA;

(5) James Caverlee [0000-0001-8350-8528], Texas A&M University, College Station, TX, USA.

:::

:::info

This paper is available on arxiv under CC BY 4.0 DEED license.

:::


mainstream-bias, collaborative-filtering, adaptive-local-learning, discrepancy-modeling, unsynchronized-learning, rawlsian-max-min-fairness, mixture-of-experts, loss-driven-models