In this paper, a natural-language neural network model based on analysis of sentence structure is proposed. The proposed network consists of five layers: a sentence layer, a clause layer, a phrase layer, a word layer, and a concept layer. The input text is split into levels as sentences, clauses, phrases, and words, and neurons are then allocated …
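The sentence/clause/phrase/word split described above can be sketched with simple heuristics. The clause and phrase rules below (punctuation, conjunctions, fixed-size word chunks) are illustrative assumptions only, not the paper's actual segmentation:

```python
import re

def split_hierarchy(text):
    """Split raw text into nested levels: sentences -> clauses -> phrases -> words.
    The clause/phrase boundaries here are crude heuristics for illustration."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    hierarchy = []
    for sent in sentences:
        # split clauses on punctuation and common coordinating conjunctions
        clauses = [c.strip() for c in re.split(r"[,;]| but | and ", sent) if c.strip()]
        clause_list = []
        for clause in clauses:
            words = clause.split()
            # treat fixed-size chunks of words as stand-in "phrases"
            phrases = [words[i:i + 3] for i in range(0, len(words), 3)]
            clause_list.append(phrases)
        hierarchy.append(clause_list)
    return hierarchy

h = split_hierarchy("The cat sat on the mat, and the dog barked. It rained.")
```

In a model like the one described, each unit at each level would then be assigned its own neurons, with connections following the containment structure.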
The character vocabulary can be very large in non-alphabetic languages such as Chinese and Japanese, which makes neural network models that process such languages very large as well. We explored a sentiment-classification model that takes as input embeddings of the radicals of Chinese characters, i.e., the hanzi of Chinese and the kanji of Japanese. Our model is composed of a …
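The idea of composing character representations from radical embeddings can be sketched as follows. The three-entry decomposition table and the mean-pooling composition are illustrative assumptions; a real system would use a full radical database and learned embeddings:

```python
import random

# Hypothetical radical decomposition table (illustrative, not the paper's data).
# A few hundred radicals can cover tens of thousands of characters.
RADICALS = {
    "好": ["女", "子"],   # woman + child
    "林": ["木", "木"],   # tree + tree
    "明": ["日", "月"],   # sun + moon
}

random.seed(0)
radical_vocab = sorted({r for parts in RADICALS.values() for r in parts})
# one 8-dimensional embedding per radical instead of one per character
radical_emb = {r: [random.gauss(0.0, 1.0) for _ in range(8)] for r in radical_vocab}

def char_embedding(ch):
    """Compose a character embedding as the mean of its radical embeddings."""
    parts = RADICALS[ch]
    return [sum(radical_emb[r][i] for r in parts) / len(parts) for i in range(8)]
```

The embedding table then scales with the number of radicals rather than the number of characters, which is the vocabulary reduction the abstract motivates.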
Although there is prior work on improving distributed word representations with lexicons, over-fitting of words that have multiple meanings remains an issue that degrades learning when lexicons are used. An alternative is to allocate a vector per sense instead of a vector per word. However, the word …
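The vector-per-sense alternative can be sketched as a sense inventory plus a context-based selection rule. The senses, vectors, and dot-product selection below are illustrative assumptions, not the paper's method:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Toy sense inventory: one vector per sense of "bank" rather than a single
# vector for the word (vectors are hand-picked for illustration).
sense_vectors = {
    "bank": {
        "bank/finance": [1.0, 0.0],
        "bank/river":   [0.0, 1.0],
    }
}

def pick_sense(word, context_vec):
    """Choose the sense whose vector best matches the context vector, so the
    word's distinct meanings no longer over-fit one shared embedding."""
    senses = sense_vectors[word]
    return max(senses, key=lambda s: dot(senses[s], context_vec))
```

A financial context vector such as `[0.9, 0.1]` selects `bank/finance`, while a river-like context selects `bank/river`.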
Some prior work learns a lexicon together with the corpus to improve word embeddings. However, these methods either model the lexicon separately while updating the neural networks for both the corpus and the lexicon under the same likelihood, or they minimize the distance between all synonym pairs in the lexicon. Such methods do not consider the …
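The synonym-distance term criticized here can be sketched in isolation: alongside the usual corpus loss, each gradient step pulls every listed synonym pair together. The vocabulary, dimensions, and learning rate are illustrative assumptions; only the mechanics of the pull are shown:

```python
import math
import random

random.seed(1)
emb = {w: [random.gauss(0.0, 1.0) for _ in range(4)] for w in ["happy", "glad", "sad"]}
synonyms = [("happy", "glad")]

def lexicon_step(emb, pairs, lr=0.1):
    """One gradient step on sum ||v_a - v_b||^2 over synonym pairs."""
    for a, b in pairs:
        diff = [x - y for x, y in zip(emb[a], emb[b])]
        emb[a] = [x - lr * 2 * d for x, d in zip(emb[a], diff)]
        emb[b] = [y + lr * 2 * d for y, d in zip(emb[b], diff)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(emb[a], emb[b])))

before = dist("happy", "glad")
for _ in range(50):
    lexicon_step(emb, synonyms)
after = dist("happy", "glad")
```

Because every pair is pulled with equal force regardless of context, this objective treats all synonyms alike, which is the limitation the abstract points at.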