Measuring and Relieving the Over-smoothing Problem for Graph Neural Networks from the Topological View
- Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, Xu Sun
- Computer Science · AAAI Conference on Artificial Intelligence
- 7 September 2019
Two methods are proposed to alleviate the over-smoothing issue of GNNs: MADReg, which adds a MADGap-based regularizer to the training objective, and AdaEdge, which optimizes the graph topology based on the model predictions.
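The MADGap quantity referenced above can be sketched concretely. A minimal NumPy illustration, assuming MAD is the mean average cosine distance among node representations over a selected set of node pairs, and MADGap is the gap between remote and neighboring pairs; the function names and mask convention are illustrative, not the paper's exact formulation.

```python
import numpy as np

def mad(H, mask):
    """Mean Average Distance: per-node average cosine distance over the
    node pairs selected by `mask` (1 = pair counted), averaged over nodes
    that have at least one selected pair."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)  # row-normalize
    D = 1.0 - Hn @ Hn.T                                # pairwise cosine distance
    cnt = mask.sum(axis=1)
    keep = cnt > 0
    node_avg = (D * mask)[keep].sum(axis=1) / cnt[keep]
    return node_avg.mean()

def madgap(H, remote_mask, neighbor_mask):
    """MADGap = MAD over remote node pairs minus MAD over neighboring pairs;
    small or negative values indicate over-smoothed representations."""
    return mad(H, remote_mask) - mad(H, neighbor_mask)
```

With identical representations everywhere, both terms vanish and MADGap is 0; representations that separate remote nodes from neighbors yield a positive gap.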
Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering
- Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, W. Xu
- Computer Science · NIPS
- 21 May 2015
The mQA model, which can answer questions about the content of an image, is presented; it contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component that combines the information from the first three components to generate the answer.
FewRel 2.0: Towards More Challenging Few-Shot Relation Classification
- Tianyu Gao, Xu Han, Jie Zhou
- Computer Science · Conference on Empirical Methods in Natural…
- 1 October 2019
It is found that state-of-the-art few-shot relation classification models struggle on these two aspects (few-shot domain adaptation and few-shot none-of-the-above detection), and that the commonly-used techniques for domain adaptation and NOTA detection still cannot handle the two challenges well.
Deep Progressive Reinforcement Learning for Skeleton-Based Action Recognition
- Yansong Tang, Yi Tian, Jiwen Lu, Peiyang Li, Jie Zhou
- Computer Science · IEEE/CVF Conference on Computer Vision and…
- 1 June 2018
A deep progressive reinforcement learning (DPRL) method is proposed for action recognition in skeleton-based videos; it aims to distil the most informative frames and discard ambiguous frames in sequences when recognizing actions.
A Dual Reinforcement Learning Framework for Unsupervised Text Style Transfer
Unsupervised text style transfer aims to transfer the underlying style of text but keep its main content unchanged without parallel data. Most existing methods typically follow two steps: first…
NumNet: Machine Reading Comprehension with Numerical Reasoning
- Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, Zhiyuan Liu
- Computer Science · Conference on Empirical Methods in Natural…
- 15 October 2019
A numerical MRC model named NumNet is proposed, which utilizes a numerically-aware graph neural network to perform numerical reasoning over the numbers in the question and passage; by considering the numerical relations among numbers, it outperforms all existing machine reading comprehension models.
PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers
- Xumin Yu, Yongming Rao, Ziyi Wang, Zuyan Liu, Jiwen Lu, Jie Zhou
- Computer Science · IEEE International Conference on Computer Vision
- 19 August 2021
A new method is presented that reformulates point cloud completion as a set-to-set translation problem and designs a new model, called PoinTr, which adopts a transformer encoder-decoder architecture for point cloud completion and outperforms state-of-the-art methods by a large margin.
Unsupervised Paraphrasing by Simulated Annealing
- Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie Zhou, Sen Song
- Computer Science · Annual Meeting of the Association for…
- 9 September 2019
Results show that UPSA achieves the state-of-the-art performance compared with previous unsupervised methods in terms of both automatic and human evaluations, and outperforms most existing domain-adapted supervised models, showing the generalizability of UPSA.
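The search strategy behind UPSA can be illustrated with a generic simulated-annealing skeleton; this is a minimal sketch, not the paper's method — the toy `propose` and `score` callables stand in for UPSA's word-level edit operators and its semantic-similarity/fluency objective, and all names are illustrative.

```python
import math
import random

def simulated_annealing(sentence, propose, score, T0=1.0, cooling=0.95,
                        steps=100, seed=0):
    """Generic SA search over sentences: accept a proposed edit if it scores
    higher, otherwise accept with probability exp((new - old) / T), and
    gradually lower the temperature T so the search becomes greedy."""
    rng = random.Random(seed)
    cur, cur_s = sentence, score(sentence)
    best, best_s = cur, cur_s
    T = T0
    for _ in range(steps):
        cand = propose(cur, rng)            # local edit (e.g. word replacement)
        cand_s = score(cand)
        if cand_s >= cur_s or rng.random() < math.exp((cand_s - cur_s) / max(T, 1e-9)):
            cur, cur_s = cand, cand_s       # accept the candidate
        if cur_s > best_s:
            best, best_s = cur, cur_s       # track the best sentence seen
        T *= cooling                        # cool down
    return best, best_s
```

Early on, the high temperature lets the search accept worse paraphrase candidates and escape local optima; as T decays, only improving edits survive, which is the property the unsupervised search relies on.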
CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding
- Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, Jinan Xu
- Computer Science · Conference on Empirical Methods in Natural…
- 16 September 2019
A novel Collaborative Memory Network (CM-Net), built on a well-designed block named CM-block, is proposed; it achieves state-of-the-art results on the ATIS and SNIPS datasets under most criteria, and significantly outperforms the baseline models on the CAIS dataset.
Continual Relation Learning via Episodic Memory Activation and Reconsolidation
Inspired by the mechanism of human long-term memory formation, EMAR is introduced, and it is shown that EMAR avoids catastrophic forgetting of old relations and outperforms state-of-the-art continual learning models.
...