In previous research, a model for thematic role assignment (θRARes) was proposed using the Reservoir Computing paradigm. This language comprehension model consists of a recurrent neural network (RNN) with fixed random connections, which models distributed processing in the prefrontal cortex, and an output layer, which models the striatum. In contrast to the batch learning method used previously, in this paper we explored a more biologically plausible learning mechanism. A new version of the model (i-θRARes) was developed that permits incremental learning at each time step, based on stochastic gradient descent. We report results showing that this incremental version successfully learned a corpus of complex grammatical constructions, reinforcing the neurocognitive plausibility of the model from a language acquisition perspective.
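The core mechanism described above — a fixed random reservoir whose readout weights are updated online by stochastic gradient descent at every time step — can be sketched as follows. This is a minimal, generic illustration, not the authors' implementation: the dimensions, leak rate, learning rate, and input/target coding are illustrative assumptions, and the actual i-θRARes model uses a specific thematic-role coding of sentences that is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not taken from the paper)
n_in, n_res, n_out = 5, 100, 3

# Fixed random input and recurrent weights: the reservoir, which is
# never trained (modeling recurrent prefrontal cortex dynamics)
W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

# Trainable readout (modeling striatal output), updated incrementally
W_out = np.zeros((n_out, n_res))
leak, lr = 0.3, 0.005  # small lr keeps the delta-rule updates stable

def step(x, u):
    """One leaky-integrator update of the reservoir state."""
    return (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)

def train_incremental(inputs, targets, x=None):
    """Incremental learning: one SGD step on the readout per time step,
    instead of a single batch regression over the whole corpus."""
    global W_out
    if x is None:
        x = np.zeros(n_res)
    for u, y_target in zip(inputs, targets):
        x = step(x, u)
        y = W_out @ x
        # Gradient of the instantaneous squared error w.r.t. W_out
        W_out += lr * np.outer(y_target - y, x)
    return x
```

The key contrast with the earlier batch model is visible in `train_incremental`: the readout is adjusted immediately after every input, so learning can proceed word by word as sentences arrive, rather than requiring the full training corpus up front.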