Zijian Guo
Multi-source Learning

Summary (LLM read my papers; human bias-correction applied): In multi-source learning, my work aims to make models reliable when data come from multiple studies, hospitals, or environments that do not exactly agree. Rather than assuming a single pooled distribution, I define the target through an explicit robust-optimization principle, typically a minimax (distributionally robust) objective that guards against the worst-case mixture or shift across sources, so that the estimand itself is stable under heterogeneity. A key contribution is then to make this target computable and statistically actionable: in some settings, robust objectives admit an interpretable dual characterization that yields a closed-form solution or a simple reweighting rule; more generally, I develop efficient primal–dual / saddle-point algorithms tailored to the minimax structure, with theoretical guarantees. Finally, I provide valid uncertainty quantification for these optimization-defined estimands, even in nonstandard or nonregular regimes where classical smooth asymptotics can fail (e.g., due to nonsmooth objectives, boundary solutions, or source-adaptive behavior).
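To make the minimax principle concrete, here is a minimal toy sketch (not an algorithm from any specific paper of mine; the data, loss, and step sizes are all hypothetical) of a primal–dual scheme for the worst-case-mixture objective: gradient descent on the parameter of interest paired with exponentiated-gradient ascent on the mixture weights over three heterogeneous sources.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical example: three sources whose means disagree (heterogeneity)
sources = [rng.normal(mu, 1.0, size=400) for mu in (0.0, 1.0, 3.0)]

def losses(theta):
    # Per-source squared-error risk L_l(theta) = mean over source l of (y - theta)^2
    return np.array([np.mean((y - theta) ** 2) for y in sources])

q = np.ones(len(sources)) / len(sources)  # mixture weights on the simplex (dual)
theta = 0.0                               # parameter of interest (primal)
eta_theta, eta_q = 0.05, 0.5              # illustrative step sizes

for _ in range(1000):
    # Primal descent on the q-weighted risk:
    # d/dtheta sum_l q_l L_l(theta) = 2 sum_l q_l (theta - mean_l)
    grad = 2 * sum(ql * (theta - y.mean()) for ql, y in zip(q, sources))
    theta -= eta_theta * grad
    # Dual exponentiated-gradient ascent, tilting q toward the worst-case mixture
    q = q * np.exp(eta_q * losses(theta))
    q /= q.sum()

# The minimax solution balances the extreme sources (means near 0 and 3),
# so theta settles near their midpoint and q concentrates on those two sources.
print(theta, q)
```

The dual update here is the simple reweighting rule mentioned above: sources currently suffering the largest loss receive exponentially more weight, so at the saddle point the middle source (whose loss is below the worst case) carries negligible weight.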
Underline indicates supervised students; # indicates equal contribution; * indicates alphabetical ordering; ✉ indicates corresponding authorship.