Graph Neural Networks (GNNs) are a powerful tool for processing graphs, which provide a natural way to represent information in several areas of science and engineering (e.g., data mining, computer vision, molecular chemistry, molecular biology, and pattern recognition), where data are intrinsically organized into entities and relationships among entities. Nevertheless, GNNs suffer, like recurrent and recursive models, from the long-term dependency problem, which makes learning difficult in deep structures. In this paper, we present a new architecture, called Layered GNN (LGNN), realized as a cascade of GNNs: each layer is fed with the original data and with the state information computed by the previous layer in the cascade. Intuitively, this allows each GNN to solve a subproblem, restricted to those patterns that were misclassified by the previous GNNs. Experimental results on synthetic and real-world datasets show a significant improvement in performance with respect to the standard GNN approach.
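The cascade idea described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual model: `gnn_layer` stands in for a single (simplified) GNN that iterates a contractive state-update over neighbor states, and the function names, dimensions, and weight initialization are all hypothetical choices made for the sketch. The key point it shows is how each layer receives the original node features concatenated with the states produced by the previous layer.

```python
import numpy as np

def gnn_layer(adj, x, w, iters=10):
    """A simplified single GNN: iterate the state update
    s_v = tanh(W [x_v ; sum of neighbor states]) to a (near) fixed point.
    adj: (n, n) adjacency matrix; x: (n, d) node inputs;
    w: (state_dim, d + state_dim) weight matrix (hypothetical shape)."""
    n = x.shape[0]
    s = np.zeros((n, w.shape[0]))          # initial node states
    for _ in range(iters):
        agg = adj @ s                      # sum of neighbor states
        s = np.tanh(np.concatenate([x, agg], axis=1) @ w.T)
    return s

def layered_gnn(adj, x, weights):
    """LGNN-style cascade: layer k is fed the original features x
    concatenated with the states computed by layer k-1."""
    state = np.zeros((x.shape[0], 0))      # first layer sees only x
    for w in weights:
        inp = np.concatenate([x, state], axis=1)
        state = gnn_layer(adj, inp, w)
    return state

# Tiny usage example on a 4-node cycle graph with 3 input features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = rng.standard_normal((4, 3))
# Two layers with 5-dimensional states; small weights keep the update contractive.
w1 = 0.1 * rng.standard_normal((5, 3 + 5))        # layer 1 input: x only
w2 = 0.1 * rng.standard_normal((5, 3 + 5 + 5))    # layer 2 input: x + previous states
out = layered_gnn(adj, x, [w1, w2])
print(out.shape)
```

In a full implementation each layer would be trained in turn on the residual errors of the cascade so far, which is what lets later layers focus on the patterns the earlier layers misclassified.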