We propose a simple yet effective technique for neural network learning. The forward propagation is computed as usual. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-k elements (in terms of magnitude) are kept. As a result, only k…
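The core operation described above, keeping only the top-k gradient elements by magnitude and zeroing the rest, can be sketched as follows. This is a minimal illustration of the sparsification step, not the full training procedure; the function name `sparsify_topk` is my own.

```python
import numpy as np

def sparsify_topk(grad, k):
    """Keep only the k largest-magnitude entries of a gradient array;
    all other entries are zeroed out before the parameter update."""
    flat = grad.ravel()
    if k >= flat.size:
        return grad.copy()
    # indices of the top-k entries by absolute value (partial sort)
    topk_idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[topk_idx] = flat[topk_idx]
    return sparse.reshape(grad.shape)

# toy example: a dense "gradient" with 6 entries, keep the top 2
g = np.array([0.1, -3.0, 0.5, 2.0, -0.2, 0.05])
print(sparsify_topk(g, 2))  # only -3.0 and 2.0 survive
```

In practice the sparsified gradient is then used in place of the full gradient in the optimizer step, so only k parameters per vector are updated.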
Current Chinese social media text summarization models are based on an encoder-decoder framework. Although their generated summaries are literally similar to the source texts, they have low semantic relevance. In this work, our goal is to improve the semantic relevance between source texts and summaries for Chinese social media summarization. We introduce a Semantic…
Abstractive text summarization has achieved successful performance thanks to the sequence-to-sequence model (Sutskever, Vinyals, and Le 2014) and the attention mechanism (Bahdanau, Cho, and Bengio 2014). Rush, Chopra, and Weston (2015) first used an attention-based encoder to compress texts and a neural network language decoder to generate summaries. Following this…
We propose a method, called Label Embedding Network, which can learn label representations (label embeddings) during the training process of deep networks. With the proposed method, the label embedding is adaptively and automatically learned through back propagation. The original loss function over one-hot labels is converted into a new loss function with…
Major accident risks posed by chemical hazards have raised serious social concerns in today's China. Land-use planning has been adopted by many countries as one of the essential elements of accident prevention. This article proposes a method to assess major accident risks in support of land-use planning in the vicinity of chemical installations. This…