Our work on implicit regularization in β-VAEs to appear at ICML’20.
I am serving as an area chair for NeurIPS’20.
Our work on injective flow models to appear at AISTATS’20.
Our work on guaranteed disentanglement with weak supervision to appear at ICLR’20.
I am serving as an area chair for ICLR’20.
Our work on adaptive fine-tuning for transfer learning to appear at CVPR’19.
I am serving as an area chair for ICLR’19.
Our work on scale-invariant flatness measures for loss surfaces is out.
I joined Google Brain as a research scientist.
Two papers to appear at NeurIPS'19: one on unsupervised domain adaptation and one on the Delta-encoder for few-shot learning.
I am co-organizing the workshop "Towards Learning with Limited Labels: Equivariance, Invariance, and Beyond" at ICML/FAIM 2018.
Our work on examining the geometry of deep generative models to appear at the CVPR’18 workshop on differential geometry.
Our work on efficient inference in ResNets to appear at CVPR’18.
Our work on variational inference of disentangled latents to appear at ICLR’18.
Our work on semi-supervised learning to appear at NIPS'17.
I am serving as an area chair for ICLR’18.
Our work on locality and invariance in kernel methods to appear at AISTATS’17.
Two papers, on multi-task learning and on stochastic subsampling in CNNs, to appear at CVPR’17.