This paper proposes a natural-language neural network model based on analysis of sentence structure. The proposed network consists of five layers: a sentence layer, a clause layer, a phrase layer, a word layer, and a concept layer. The input text is split into levels as sentences, clauses, phrases, and words; neurons are then allocated … A sketch of this kind of hierarchical splitting follows.
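The abstract names four textual levels above the concept layer. As a minimal sketch of how input text might be split into those nested levels before units are allocated to layers, the following uses simple punctuation heuristics; the paper's actual segmentation method is not given here, so the split rules are illustrative assumptions.

```python
import re

def split_hierarchically(text):
    """Split text into the levels named in the abstract:
    sentences -> clauses -> phrases -> words.
    The punctuation heuristics below are illustrative assumptions."""
    result = []
    for sent in re.split(r'(?<=[.!?])\s+', text.strip()):
        clauses = []
        for clause in re.split(r'\s*;\s*', sent):
            phrases = []
            for phrase in re.split(r'\s*,\s*', clause):
                words = phrase.split()
                if words:
                    phrases.append(words)
            if phrases:
                clauses.append(phrases)
        if clauses:
            result.append(clauses)
    return result

# Each unit at each level would then be allocated its own neuron
# (or embedding) in the corresponding layer of the network.
example = "When it rains, the game stops; fans leave early. Vendors close."
for i, sentence in enumerate(split_hierarchically(example)):
    print(f"sentence {i}: {sentence}")
```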
We identify a trap that previous works using lexicons or ontologies to train or improve distributed word representations do not address carefully: for polysemous words, and for utterances whose meaning changes with context, the paraphrases or related entities listed in a lexicon or ontology are unreliable and can deteriorate the learning of …
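One common way lexicons are used to improve pre-trained word vectors is retrofitting in the style of Faruqui et al. (2015); the sketch below implements that generic update, not this paper's method, and the comments mark where the polysemy trap described above shows up.

```python
import numpy as np

def retrofit(vectors, lexicon, iters=10, alpha=1.0, beta=1.0):
    """Retrofitting sketch: pull each word vector toward its lexicon
    neighbors while staying close to the pre-trained vector.

    vectors: dict word -> np.ndarray (pre-trained embeddings)
    lexicon: dict word -> list of related words (paraphrases/entities)
    """
    new_vecs = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for word, neighbors in lexicon.items():
            nbrs = [n for n in neighbors if n in new_vecs]
            if word not in new_vecs or not nbrs:
                continue
            # The trap for polysemous words: neighbors drawn from
            # unrelated senses (e.g. bank -> "river" and "loan") all
            # pull on the SAME single vector, dragging it toward an
            # unrepresentative mixture of its senses.
            total = alpha * vectors[word] + beta * sum(new_vecs[n] for n in nbrs)
            new_vecs[word] = total / (alpha + beta * len(nbrs))
    return new_vecs

vecs = {"bank": np.array([1.0, 0.0]), "river": np.array([0.0, 1.0]),
        "loan": np.array([1.0, 1.0])}
lex = {"bank": ["river", "loan"]}  # neighbors from two different senses
print(retrofit(vecs, lex)["bank"])
```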
Although there is prior work on improving distributed word representations with lexicons, an open issue is the improper over-fitting of words that have multiple meanings, which deteriorates learning when lexicons are used. An alternative is to allocate a vector per sense instead of a vector per word. However, the word …
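To make the vector-per-sense alternative concrete: each lexicon sense of a word gets its own vector, and the sense used for a given occurrence is chosen by similarity to the surrounding context. The selection rule and names below are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def pick_sense(word, context_words, sense_vecs, word_vecs):
    """Choose the sense of `word` whose vector best matches the
    averaged context, so occurrences in different contexts can
    resolve to different vectors.

    sense_vecs: dict word -> list of np.ndarray, one per sense
    word_vecs:  dict word -> np.ndarray for context words
    """
    ctx = [word_vecs[w] for w in context_words if w in word_vecs]
    if word not in sense_vecs or not ctx:
        return None
    context = np.mean(ctx, axis=0)
    context /= np.linalg.norm(context) + 1e-8
    sims = [float(v @ context) / (np.linalg.norm(v) + 1e-8)
            for v in sense_vecs[word]]
    return int(np.argmax(sims))

word_vecs = {"river": np.array([0.0, 1.0]), "loan": np.array([1.0, 0.0])}
sense_vecs = {"bank": [np.array([1.0, 0.0]), np.array([0.0, 1.0])]}
print(pick_sense("bank", ["river"], sense_vecs, word_vecs))  # -> 1, the river sense
```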
The character vocabulary can be very large in non-alphabetic languages such as Chinese and Japanese, which makes neural network models for these languages huge. We explored a sentiment-classification model that takes embeddings of the radicals of Chinese characters (hanzi in Chinese, kanji in Japanese). Our model is composed of a …
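A minimal sketch of radical-level composition under the assumptions above: characters are decomposed into radicals, a text is represented by its mean radical embedding, and a linear head scores sentiment. The tiny decomposition table, class name, and classifier are hypothetical; real systems would use a full decomposition database and a trained model.

```python
import numpy as np

# Hypothetical decomposition table: character -> component radicals.
RADICALS = {"好": ["女", "子"], "休": ["亻", "木"], "明": ["日", "月"]}

class RadicalSentimentModel:
    def __init__(self, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        radicals = sorted({r for rs in RADICALS.values() for r in rs})
        self.emb = {r: rng.normal(size=dim) for r in radicals}  # radical embeddings
        self.w = rng.normal(size=dim)  # linear sentiment head (untrained here)

    def encode(self, text):
        """Represent a text as the mean of its characters' radical
        embeddings; characters without a decomposition entry are skipped."""
        vecs = [self.emb[r] for ch in text for r in RADICALS.get(ch, [])]
        return np.mean(vecs, axis=0) if vecs else np.zeros_like(self.w)

    def score(self, text):
        return float(self.encode(text) @ self.w)

model = RadicalSentimentModel()
print(model.score("好"))  # untrained score; sign is meaningless until trained
```

The point of the design is the size of the lookup table: a few hundred radicals can cover tens of thousands of hanzi/kanji, so the embedding layer shrinks dramatically compared with a vector per character.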