This paper presents grammatical error correction for Japanese particles using discriminative sequence conversion, which corrects erroneous particles by substitution, insertion, and deletion. The error correction task is hindered by the difficulty of collecting large error corpora. We tackle this problem by using pseudo-error sentences generated …
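The abstract is truncated, so the generation procedure is an assumption; the sketch below shows one minimal way pseudo-error sentences might be produced, by corrupting a single particle in a clean tokenized sentence with a random substitution, insertion, or deletion (the particle list and `make_pseudo_error` are illustrative, not the paper's implementation):

```python
import random

# A small illustrative subset of Japanese case particles (assumption).
PARTICLES = ["は", "が", "を", "に", "で", "と", "へ"]

def make_pseudo_error(tokens, rng, p=0.3):
    """Corrupt one particle in a tokenized sentence by substitution,
    insertion, or deletion, mimicking learner errors."""
    tokens = list(tokens)
    positions = [i for i, t in enumerate(tokens) if t in PARTICLES]
    if not positions or rng.random() > p:
        return tokens  # leave some sentences unchanged
    i = rng.choice(positions)
    op = rng.choice(["substitute", "insert", "delete"])
    if op == "substitute":
        tokens[i] = rng.choice([x for x in PARTICLES if x != tokens[i]])
    elif op == "insert":
        tokens.insert(i, rng.choice(PARTICLES))
    else:
        del tokens[i]
    return tokens

rng = random.Random(0)
clean = ["私", "は", "学校", "に", "行く"]
print(make_pseudo_error(clean, rng, p=1.0))
```

Pairing each corrupted sentence with its clean original yields training data for the discriminative sequence converter without a hand-annotated error corpus.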
Ascorbate (AsA) is a redox buffer and enzyme cofactor with various proposed functions in stress responses and growth. The aim was to identify genes whose transcript levels respond to changes in leaf AsA. The AsA-deficient Arabidopsis mutant vtc2-1 was incubated with the AsA precursor L-galactono-1,4-lactone (L-GalL) to increase leaf AsA concentration. …
In this paper we propose a novel algorithm for opinion summarization that takes account of content and coherence simultaneously. We consider a summary as a sequence of sentences and directly acquire the optimum sequence from multiple review documents by extracting and ordering the sentences. We achieve this with a novel Integer Linear Programming (ILP) …
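The ILP formulation itself is cut off in this snippet, but the underlying objective — jointly selecting and ordering sentences to maximize content scores plus coherence between adjacent sentences — can be sketched with a small exhaustive search standing in for the ILP solver (all scores and data here are hypothetical):

```python
from itertools import combinations, permutations

def summarize(sentences, relevance, coherence, max_sents=2):
    """Jointly select and order sentences, maximizing the sum of
    per-sentence relevance plus coherence between adjacent sentences.
    Exhaustive search over short sequences stands in for the ILP."""
    best, best_score = (), float("-inf")
    for k in range(1, max_sents + 1):
        for subset in combinations(range(len(sentences)), k):
            for order in permutations(subset):
                score = sum(relevance[i] for i in order)
                score += sum(coherence[a][b] for a, b in zip(order, order[1:]))
                if score > best_score:
                    best_score, best = score, order
    return [sentences[i] for i in best], best_score

reviews = ["Battery life is great.", "Screen is dim.", "Overall a good phone."]
relevance = [3.0, 1.0, 2.0]                                 # content scores
coherence = [[0, 0.5, 2.0], [0.2, 0, 0.1], [0.0, 0.3, 0]]   # order scores
summary, score = summarize(reviews, relevance, coherence)
```

An ILP solver handles the same objective at realistic scale by encoding selection and adjacency as binary variables; the brute-force version above merely makes the objective concrete.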
We propose a novel algorithm for sentiment summarization that takes account of informativeness and readability simultaneously. Our algorithm generates a summary by selecting and ordering sentences taken from multiple review texts according to two scores that represent the informativeness and readability of the sentence order. The informativeness score is …
In this paper we introduce a novel single-document summarization method based on a hidden semi-Markov model. This model can naturally model single-document summarization as the optimization problem of selecting the best sequence from among the sentences in the input document under the given objective function and knapsack constraint. This advantage makes it …
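The decoding problem described here — picking the best in-order subsequence of sentences under a length budget — can be made concrete with a toy sketch. The actual model uses a hidden semi-Markov decoder; exhaustive search below is only a stand-in, and all scores are invented for illustration:

```python
from itertools import combinations

def best_sequence(scores, trans, lengths, budget):
    """Select an in-order subsequence of sentences maximizing per-sentence
    scores plus transition scores between consecutive picks, under a
    knapsack constraint on total length. Exhaustive search stands in for
    the semi-Markov dynamic-programming decoder."""
    best, best_score = (), float("-inf")
    n = len(scores)
    for k in range(1, n + 1):
        for sel in combinations(range(n), k):
            if sum(lengths[i] for i in sel) > budget:
                continue
            s = sum(scores[i] for i in sel)
            s += sum(trans[a][b] for a, b in zip(sel, sel[1:]))
            if s > best_score:
                best_score, best = s, sel
    return list(best), best_score

scores = [2.0, 1.5, 1.0, 2.5]          # hypothetical sentence scores
lengths = [3, 2, 4, 3]                 # sentence lengths (e.g. in words)
trans = [[0.5] * 4 for _ in range(4)]  # flat transition scores
picked, total = best_sequence(scores, trans, lengths, budget=6)
```

The semi-Markov formulation solves the same problem in polynomial time by treating skipped runs of sentences as variable-length segments.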
• We applied CSHMMs to contact center dialogue transcripts from six different domains.
• Our method outperformed competitive baselines based on the maximum coverage of important words.
[Figure: an HMM whose common states, trained on data from all domains, are connected ergodically with equal transition probabilities; domain-specific states are trained on the data of each individual domain.]
This paper reports the improvements we made to our previously proposed hidden Markov model (HMM) based summarization method for multi-domain contact center dialogues. Since the method relied on Viterbi decoding for selecting utterances to include in a summary, it was unable to control compression rates. We enhance our method by using the …
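The abstract breaks off before naming the enhancement, so the following is only one plausible illustration of the general idea: augmenting Viterbi-style decoding with a count dimension so that exactly K utterances are selected, which pins down the compression rate. The function and scores are hypothetical, not the paper's method:

```python
def viterbi_with_budget(emit, K):
    """Viterbi-style DP over include/exclude decisions per utterance, with
    an extra dimension fixing the number of included utterances to K,
    giving direct control over the compression rate.
    emit[i] = (log-score if utterance i is excluded, if included)."""
    n, NEG = len(emit), float("-inf")
    dp = [0.0] + [NEG] * K    # dp[k]: best score with k utterances included
    back = []
    for i in range(n):
        nxt, choice = [NEG] * (K + 1), [None] * (K + 1)
        for k in range(K + 1):
            if dp[k] == NEG:
                continue
            if dp[k] + emit[i][0] > nxt[k]:                 # exclude i
                nxt[k], choice[k] = dp[k] + emit[i][0], (k, 0)
            if k < K and dp[k] + emit[i][1] > nxt[k + 1]:   # include i
                nxt[k + 1], choice[k + 1] = dp[k] + emit[i][1], (k, 1)
        back.append(choice)
        dp = nxt
    # backtrace from the state with exactly K inclusions
    sel, k = [], K
    for i in range(n - 1, -1, -1):
        prev_k, used = back[i][k]
        if used:
            sel.append(i)
        k = prev_k
    return sorted(sel), dp[K]

# Toy scores: utterances 0 and 2 are worth including, 1 is not.
chosen, score = viterbi_with_budget([(0.0, 1.0), (0.0, -1.0), (0.0, 2.0)], K=2)
```

Plain Viterbi collapses the K dimension and so takes whatever summary length maximizes the score; the extra dimension is what restores length control.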
In this paper we propose a novel text summarization model, the redundancy-constrained knapsack model. We add to the knapsack problem a constraint to curb redundancy in the summary. We also propose a fast decoding method based on the Lagrange heuristic. Experiments based on ROUGE evaluations show that our proposals outperform a state-of-the-art text …
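The exact constraint and heuristic are truncated here, so the sketch below is an assumption about the general shape: a knapsack over sentences with a redundancy constraint (each concept covered at most once), relaxed via Lagrange multipliers that are updated by subgradient steps while a plain knapsack DP solves the relaxed problem (all names and data are illustrative):

```python
def knapsack(values, lengths, budget):
    """0/1 knapsack DP (integer lengths); returns (best value, item tuple)."""
    dp = [(0.0, ())] * (budget + 1)
    for i, (v, w) in enumerate(zip(values, lengths)):
        for c in range(budget, w - 1, -1):
            cand = (dp[c - w][0] + v, dp[c - w][1] + (i,))
            if cand[0] > dp[c][0]:
                dp[c] = cand
    return max(dp, key=lambda x: x[0])

def lagrange_summarize(scores, lengths, concepts, budget, rounds=30, step=0.2):
    """Lagrange-heuristic sketch: relax the 'each concept at most once'
    redundancy constraint with multipliers u, solve the remaining plain
    knapsack by DP, and update u by subgradient ascent on violations."""
    all_c = set().union(*concepts)
    u = {c: 0.0 for c in all_c}
    best = ([], float("-inf"))
    for _ in range(rounds):
        vals = [s - sum(u[c] for c in cs) for s, cs in zip(scores, concepts)]
        _, sel = knapsack(vals, lengths, budget)
        cnt = {c: 0 for c in all_c}        # concept coverage counts
        for i in sel:
            for c in concepts[i]:
                cnt[c] += 1
        if all(v <= 1 for v in cnt.values()):   # feasible: keep if best
            score = sum(scores[i] for i in sel)
            if score > best[1]:
                best = (sorted(sel), score)
        for c in all_c:                    # penalize violated concepts
            u[c] = max(0.0, u[c] + step * (cnt[c] - 1))
    return best

sents = [3.0, 2.0, 2.5]                    # hypothetical sentence scores
lens = [2, 2, 2]
covers = [{"battery"}, {"battery"}, {"screen"}]
sel, val = lagrange_summarize(sents, lens, covers, budget=4)
```

Each subgradient step raises the multiplier of any over-covered concept, making redundant sentences look less valuable to the knapsack on the next round; this is what makes the decoding fast relative to solving the constrained problem exactly.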