Meng Fang

I am an Assistant Professor in Machine Learning at the University of Liverpool and a visiting researcher at Eindhoven University of Technology (TU/e). I received my Ph.D. from the University of Technology Sydney (UTS), advised by Prof. Dacheng Tao; I was also supervised by Prof. Xingquan Zhu and Dr. Jie Yin at UTS. I then worked as a postdoctoral research fellow with Prof. Trevor Cohn. Earlier, I held research scientist and intern positions at Tencent AI, CSIRO, and Microsoft Research Asia.

My research focuses on building human-like agents with language and control.

Email / GitHub / Scholar



People
  • PhD students: Tristan Tomilin, Yucheng Yang, Tim d'Hondt, Jiaxu Zhao, Zhao Sun, Dongwon Ryu
  • Master's thesis projects (completed): Obada Aljabasini, Tristan Tomilin, Chen Qian, Adithya Dinesh Rao, Paul Pham, Tong Zhao, Yibin Lei
Teaching & Service
Research
Projects

Multi-goal reinforcement learning
TL;DR: We propose new methods and environments for multi-goal settings with sparse rewards.
Keywords: sparse rewards, sample efficiency. [project page]
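The core idea behind hindsight-style methods for sparse rewards (see the curriculum-guided HER and DHER papers below) is to relabel failed transitions with goals the agent actually reached, turning zero-reward experience into useful training signal. A minimal sketch, assuming a "future" goal-sampling strategy; the function name, episode format, and parameter `k` are illustrative, not the papers' exact implementations:

```python
import random

def her_relabel(episode, k=4):
    """Hindsight relabeling: for each transition, also store copies whose
    goal is replaced by a state actually reached later in the episode,
    so a sparse goal-reaching reward becomes informative."""
    relabeled = []
    for t, (state, action, goal, next_state) in enumerate(episode):
        # Keep the original transition with the intended goal.
        reward = 1.0 if next_state == goal else 0.0
        relabeled.append((state, action, goal, next_state, reward))
        # Sample up to k states achieved later in the episode as substitute goals.
        future = [s2 for (_, _, _, s2) in episode[t:]]
        for new_goal in random.sample(future, min(k, len(future))):
            r = 1.0 if next_state == new_goal else 0.0
            relabeled.append((state, action, new_goal, next_state, r))
    return relabeled
```

With this relabeling, even an episode that never reaches the intended goal yields transitions with nonzero reward, which is what makes sparse-reward multi-goal tasks sample-efficient to learn.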

Text-based games
TL;DR: We consider the task of learning control policies for text-based games.
Keywords: knowledge graphs, attention, RL, hierarchical RL. [project page]

Question answering
TL;DR: We study the reasoning process behind question answering problems.
Keywords: graphs, graph neural networks, knowledge graphs. [project page]

Reinforcement learning and language-informed agents

Perceiving the World: Question-guided Reinforcement Learning for Text-based Games
Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Joey Tianyi Zhou, Chengqi Zhang
In ACL 2022
[code]

Fire Burns, Sword Cuts: Commonsense Inductive Bias for Exploration in Text-based Games
Dongwon Kelvin Ryu, Ehsan Shareghi, Meng Fang, Yunqiu Xu, Shirui Pan, Reza Haf
In ACL 2022 (Short)
[code]

Rethinking Goal-Conditioned Supervised Learning and Its Connection to Offline RL
Rui Yang, Yiming Lu, Wenzhe Li, Hao Sun, Meng Fang, Yali Du, Xiu Li, Lei Han, Chongjie Zhang
In ICLR 2022
[code]

Diversity-augmented intrinsic motivation for deep reinforcement learning
Tianhong Dai, Yali Du, Meng Fang, Anil Anthony Bharath
In Neurocomputing 2022
[code]

Generalization in Text-based Games via Hierarchical Reinforcement Learning
Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Chengqi Zhang
In EMNLP 2021 (Findings)
[code]

Deep Reinforcement Learning for Prefab Assembly Planning in Robot-based Prefabricated Construction
Aiyu Zhu, Gangyan Xu, Pieter Pauwels, Bauke de Vries, Meng Fang
In IEEE International Conference on Automation Science and Engineering (CASE) 2021
[IEEE CASE 2021 Best Student Paper Award Finalist]

Reinforcement Learning With Multiple Relational Attention for Solving Vehicle Routing Problems
Yunqiu Xu, Meng Fang, Ling Chen, Gangyan Xu, Yali Du, Chengqi Zhang
In IEEE Transactions on Cybernetics 2021

On the Guaranteed Almost Equivalence Between Imitation Learning From Observation and Demonstration
Zhihao Cheng, Liu Liu, Aishan Liu, Hao Sun, Meng Fang, Dacheng Tao
In IEEE Transactions on Neural Networks and Learning Systems 2021
[code]

TStarBot-X: An Open-Sourced and Comprehensive Study for Efficient League Training in StarCraft II Full Game
Lei Han*, Jiechao Xiong*, Peng Sun*, Xinghai Sun, Meng Fang, Qingwei Guo, Qiaobo Chen, Tengfei Shi, Hongsheng Yu, Zhengyou Zhang
Technical report, 2021
[code]

Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games
Yunqiu Xu*, Meng Fang*, Ling Chen, Yali Du, Joey Tianyi Zhou, Chengqi Zhang
In NeurIPS 2020
[code]

Curriculum-guided hindsight experience replay
Meng Fang, Tianyi Zhou, Yali Du, Lei Han, Zhengyou Zhang
In NeurIPS 2019
[code]

LIIR: Learning individual intrinsic reward in multi-agent reinforcement learning
Yali Du*, Lei Han*, Meng Fang, Ji Liu, Tianhong Dai, Dacheng Tao
In NeurIPS 2019

DHER: Hindsight experience replay for dynamic goals
Meng Fang, Cheng Zhou, Bei Shi, Boqing Gong, Jia Xu, Tong Zhang
In ICLR 2019
[project webpage] [code]

Learning how to Active Learn: A Deep Reinforcement Learning Approach
Meng Fang, Yuan Li, Trevor Cohn
In EMNLP 2017
[code]

Language understanding

A Model-agnostic Data Manipulation Method for Persona-based Dialogue Generation
Yu Cao, Wei Bi, Meng Fang, Shuming Shi, Dacheng Tao
In ACL 2022
[code]

ProtoInfoMax: Prototypical Networks with Mutual Information Maximization for Out-of-Domain Detection
Iftitahu Ni'mah, Meng Fang, Vlado Menkovski, Mykola Pechenizkiy
In EMNLP 2021 (Findings)
[code]

DAGN: Discourse-Aware Graph Network for Logical Reasoning
Yinya Huang, Meng Fang, Yu Cao, Liwei Wang, Xiaodan Liang
In NAACL 2021 (Short)
[Leaderboard: ranked 1st until 17 Nov. 2020] [code]

REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement
Yinya Huang, Meng Fang, Xunlin Zhan, Qingxing Cao, Xiaodan Liang, Liang Lin
In AAAI 2021
[code]

Towards Efficiently Diversifying Dialogue Generation via Embedding Augmentation
Yu Cao, Liang Ding, Zhiliang Tian, Meng Fang
In ICASSP 2021

Pretrained Language Models for Dialogue Generation with Multiple Input Sources
Yu Cao, Wei Bi, Meng Fang, Dacheng Tao
In EMNLP 2020 (Findings)
[code]

Unsupervised Domain Adaptation on Reading Comprehension
Yu Cao, Meng Fang, Baosheng Yu, Joey Tianyi Zhou
In AAAI 2020
[code]

Dual adversarial neural transfer for low-resource named entity recognition
Joey Tianyi Zhou*, Hao Zhang*, Di Jin, Hongyuan Zhu, Meng Fang, Rick Siow Mong Goh, Kenneth Kwok
In ACL 2019

Bag: Bi-directional attention entity graph convolutional network for multi-hop reasoning question answering
Yu Cao, Meng Fang, Dacheng Tao
In NAACL 2019 (Short)
[code]

Others

Revisiting metric learning for few-shot image classification
Xiaomeng Li, Lequan Yu, Chi-Wing Fu, Meng Fang, Pheng-Ann Heng
In Neurocomputing 2020

Transfer Hashing: From Shallow to Deep
Joey Tianyi Zhou, Heng Zhao, Xi Peng, Meng Fang, Zheng Qin, Rick Siow Mong Goh
In IEEE Transactions on Neural Networks and Learning Systems 2018

Networked bandits with disjoint linear payoffs
Meng Fang, Dacheng Tao
In KDD 2014


Awards

Acknowledgements
I would like to thank all my collaborators, interns and students.

(imitation is the sincerest form of flattery)