Owing to the increase in desired user throughput, and the consequent increase in network traffic, the number and density of cells in cellular networks have grown, especially since LTE. This directly translates into higher capital and operational expenses as well as increased complexity of network operation. To counter all three challenges, Self-Organizing Networks (SON) have been proposed. A number of SON Functions (SFs) have been defined, both by the network operator community and by the standardization bodies. In this respect, an SF represents a network function that can be automated, e.g. Mobility Robustness Optimization (MRO) or Mobility Load Balancing (MLB). The different SFs operate on the same radio network, in many cases adjusting the same or related parameters. Conflicts are therefore bound to occur during the parallel operation of such SFs, and mechanisms are required to resolve or minimize these conflicts.

This thesis studies solutions through which SON functions can be coordinated in an automated and preferably distributed manner. In the first part we evaluate design principles for SFs that aim at easing coordination. Observing that the SON control loop resembles a generic Q-learning problem, we propose designing SFs as Q-learning (QL) agents. This framework is applied to two SFs (MRO and MLB) with very positive results. Given the designed QL-based SFs, we then evaluate two SON coordination approaches that treat the SON environment as a Multi-Agent System (MAS). The first approach, based on Spatial-Temporal Decoupling (STD), separates the execution of SF instances in space and time so as to minimize conflicts among instances. The second approach applies multi-agent cooperative learning for an automated solution to SON coordination. In this case, individual SF instances learn from utilities that aggregate their own metrics with the metrics of peer SF instances.
The intention is to ensure that the learned state-action policies apply actions that achieve the best result for the active SF while having the least effect on the peer SFs. Both coordination approaches have been evaluated, with very positive results, in simulations that consider the MRO-MLB conflict.
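The idea of an SF as a Q-learning agent whose reward aggregates its own and its peers' metrics can be sketched as follows. The class, the epsilon-greedy policy, and the linear utility aggregation with a `peer_weight` parameter are illustrative assumptions; the actual state and action spaces and utility definitions are those developed in the thesis.

```python
import random
from collections import defaultdict

class QLearningSF:
    """Sketch of a SON function (e.g. MRO or MLB) modelled as a
    Q-learning agent with a cooperative, peer-aware utility.
    All parameter values here are placeholders."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1, peer_weight=0.5):
        self.q = defaultdict(float)      # tabular Q(s, a)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.peer_weight = peer_weight   # weight of peer-SF metrics in the utility

    def utility(self, own_metric, peer_metrics):
        # Cooperative utility: aggregate the SF's own metric with the
        # metrics reported by peer SF instances, so that learned actions
        # also account for their effect on the peers.
        peer_avg = sum(peer_metrics) / len(peer_metrics) if peer_metrics else 0.0
        return (1 - self.peer_weight) * own_metric + self.peer_weight * peer_avg

    def choose_action(self, state):
        # Epsilon-greedy exploration over the configured action set.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update toward the TD target.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In a closed loop, each SF instance would observe local KPIs, compute the aggregated utility as its reward, and update its Q-table after every parameter adjustment.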
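The STD approach can likewise be sketched as a greedy scheduling problem: assign each SF instance a time slot so that no two instances that conflict in space (e.g. MRO and MLB acting on overlapping cells) execute in the same slot. The function name, the conflict predicate, and the greedy assignment below are illustrative assumptions rather than the thesis's exact mechanism.

```python
def std_schedule(instances, conflict, n_slots):
    """Spatial-temporal decoupling sketch: greedily assign each SF
    instance the lowest time slot not used by any conflicting instance.
    Raises StopIteration if n_slots is too small for the conflict graph."""
    slots = {}  # instance -> assigned slot index
    for inst in instances:
        used = {slots[other] for other in slots if conflict(inst, other)}
        slots[inst] = next(s for s in range(n_slots) if s not in used)
    return slots
```

This is a greedy graph-coloring heuristic: instances are vertices, spatial conflicts are edges, and time slots are colors, so spatially coupled instances are decoupled in time.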