Context is essential when evaluating an atomic piece of information within an articulated structured sample: different contexts capture different structural information. This paper introduces a generative kernel that easily and effectively combines the structural information captured by generative tree models with different contextual capabilities. The proposed approach exploits multisets of hidden states to realize a tree encoding that accounts both for summarized information on the path leading to a node (a top-down context) and for information on how substructures are composed to form the subtree rooted at that node (a bottom-up context). A thorough experimental analysis shows that the bi-directional approach, incorporating both top-down and bottom-up contexts, outperforms either unidirectional context alone, achieving state-of-the-art results on challenging tree classification benchmarks.
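The core idea of a multiset-based tree encoding can be illustrated with a minimal sketch. The code below is a hypothetical toy, not the paper's implementation: it assumes per-node hidden-state assignments have already been inferred by a top-down and a bottom-up generative tree model, represents each tree as a multiset (bag) of direction-tagged states, and computes a simple linear kernel over those multisets.

```python
from collections import Counter

def multiset_feature(td_states, bu_states):
    """Combine top-down and bottom-up hidden-state assignments of one tree
    into a single multiset (bag-of-states) feature. States are tagged with
    their direction so the two contexts remain distinguishable."""
    return Counter([("td", s) for s in td_states] +
                   [("bu", s) for s in bu_states])

def multiset_kernel(phi_x, phi_y):
    """Linear kernel on multiset features: sum, over the shared states,
    of the product of their multiplicities in the two trees."""
    return sum(phi_x[k] * phi_y[k] for k in phi_x.keys() & phi_y.keys())

# Toy example: hidden-state assignments for two trees (hypothetical values;
# in practice these would come from trained top-down / bottom-up models).
phi_a = multiset_feature(td_states=[0, 1, 1, 2], bu_states=[3, 3, 0, 1])
phi_b = multiset_feature(td_states=[1, 1, 2, 2], bu_states=[3, 0, 0, 1])
print(multiset_kernel(phi_a, phi_b))  # similarity of the two encodings
```

Because the feature map is an explicit multiset of counts, the kernel is positive semi-definite by construction, and richer variants (e.g. normalized or histogram-intersection kernels) can be defined on the same encoding.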