Biography
My name is Fan-Yun Willy Sun but I go by Sun :) I am advised by the amazing Nick Haber and Jiajun Wu. My research interests are graph machine learning and 3D computer vision. My goal is to build systems that can understand and generate interactive 3D worlds by applying insights drawn from relational learning.
I write occasionally here. Feel free to reach out to me at fanyun [at] stanford.edu!
Publications

Partial-View Object View Synthesis via Filtering Inversion
We propose a framework that combines the strengths of generative modeling and network finetuning to generate photorealistic 3D renderings of real-world objects from sparse and sequential RGB inputs.
International Conference on 3D Vision (3DV) 2024
Davos, Switzerland

Interaction Modeling with Multiplex Attention
We present a forward prediction model that uses a multiplex latent graph to represent multiple independent types of interactions, attention to account for relations of different strengths, and a progressive training strategy for the proposed model.
NeurIPS 2022
New Orleans, LA

Physion: Evaluating Physical Prediction from Vision in Humans and Machines
We present a visual and physical prediction benchmark that measures ML algorithms' ability to predict real-world physics, and demonstrate how our benchmark can identify areas for improvement in physical understanding.
NeurIPS 2021, Datasets and Benchmarks Track

Equivariant Neural Network for Factor Graphs
In this paper, we identify factor graph isomorphism and introduce two neural-network-based inference models that take advantage of this inductive bias.

InfoGraph: Unsupervised and Semi-supervised Graph-Level Representation Learning via Mutual Information Maximization
This paper studies learning the representations of whole graphs in both unsupervised and semi-supervised scenarios using mutual information maximization.
ICLR 2020
(Spotlight)
Addis Ababa, Ethiopia

vGraph: A Generative Model for Joint Community Detection and Node Representation Learning
In current literature, community detection and node representation learning are usually independently studied while they are actually highly correlated. We propose a probabilistic generative model called vGraph to learn community membership and node representation collaboratively. We also show that the framework of vGraph is quite flexible and can be easily extended to detect hierarchical communities.
NeurIPS, 2019
Vancouver, BC

A Regulation Enforcement Solution for Multi-agent Reinforcement Learning
In this paper, we propose a framework to address the following problem: in a decentralized environment where not all agents initially comply with regulations, can we design a mechanism that makes compliance in the self-interest of non-compliant agents? We use empirical game-theoretic analysis to justify our method.
AAMAS, 2019
Montreal, QC

Designing Non-greedy Reinforcement Learning Agents with Diminishing Reward Shaping
This paper addresses an issue in multi-agent RL that arises when agents possess varying capabilities. We introduce a simple method to train non-greedy agents at nearly no extra cost. Our model achieves non-homogeneous equality, requires only local information, and is cost-effective, generalizable, and configurable.
AAAI/ACM conference on AI, Ethics, Society 2018
(Oral)
New Orleans, LA

A Memory-Network Based Solution for Multivariate Time-Series Forecasting
Inspired by the Memory Network for question-answering tasks, we propose a deep-learning-based model named Memory Time-series Network (MTNet) for time-series forecasting. Additionally, the designed attention mechanism makes MTNet interpretable.

Organ At Risk Segmentation with Multiple Modality
In real-world scenarios, doctors often utilize multiple imaging modalities. In this paper, we propose using a generative adversarial network (GAN) to perform CT-to-MR translation, synthesizing MR images instead of aligning the two modalities. The synthesized MR images can be jointly trained with CT images to achieve better segmentation performance.