Future robots are expected to communicate with humans in natural language. A naïve human user will expect the robot to easily understand instructions concerning its tasks. This implies that the robot needs a means of grounding, in its own sensors, the natural language terms and constructions used by the human user. This paper presents an approach to this problem based on the integration of a "learning server" into the robot's software architecture. Such a server should be capable of online, incremental learning from examples; it should handle multiple learning problems concurrently; and it should have meta-learning capabilities. A learning server previously developed by the authors is presented. Complementarily, the dimensionality reduction problem is addressed using a Blocked DCT approach. Experimental results are reported for a scenario in which three concepts (corresponding to natural language expressions) are learned concurrently.
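To make the Blocked DCT idea concrete, the following is a minimal sketch (not the paper's actual implementation): an image is split into fixed-size tiles, a 2D DCT-II is applied to each tile, and only the low-frequency coefficients are retained as features. The block size of 8 and the 3x3 retained-coefficient window are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

def blocked_dct_features(image, block=8, keep=3):
    """Split a grayscale image into block x block tiles, apply a 2D DCT
    to each tile, and keep only the keep x keep low-frequency coefficients.
    Returns a single flattened feature vector."""
    h, w = image.shape
    C = dct_matrix(block)
    feats = []
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            tile = image[i:i + block, j:j + block]
            coeffs = C @ tile @ C.T          # separable 2D DCT-II of the tile
            feats.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(feats)
```

For a 16x16 image with 8x8 blocks and a 3x3 window, this yields 4 blocks of 9 coefficients each, i.e. a 36-dimensional feature vector instead of 256 raw pixels, which illustrates the dimensionality reduction the abstract refers to.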