A novel statistical model of interference in wireless networks is proposed. The model builds on the traditional propagation channel model, which includes the average path loss as well as large-scale and small-scale fading. In addition to these two traditional types of fading, a new concept of network-scale fading is introduced, which arises from the random spatial distribution of the network's transmitters and receivers over the large region of space occupied by the whole network. This new type of fading complements the small-scale (e.g. Rayleigh) and large-scale (e.g. lognormal) ones; it occurs on a scale exceeding that of the other two and is independent of them. Its probability density function is derived for typical network configurations and propagation channel conditions. A network-level analysis of interference effects is given, which includes estimation of the average number of interferers, of the dynamic range of the interferers potentially capable of generating linear and non-linear distortion effects in the victim receiver, and of the outage probability. In many cases, the combined interference power at the receiver is shown to be dominated by the contribution of the strongest interferer. The analysis culminates in the formulation of a tradeoff between network density and outage probability. The positive role of linear filtering (e.g. in the antenna or in the frequency filters of the receiver) in reducing the number and dynamic range of interfering signals, and/or in reducing the outage probability, is quantified via a new statistical selectivity parameter (the Q-parameter). Linear filtering allows the network density to be increased by a factor of Q at the same outage probability.
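The strongest-interferer dominance noted above can be illustrated with a minimal Monte Carlo sketch. This is not the paper's derivation; it assumes interferers scattered uniformly over an annulus around a victim receiver and a simple power-law path loss r^(-nu) with nu = 4, and it measures what fraction of the aggregate interference power comes from the strongest interferer.

```python
import math
import random

random.seed(1)

def strongest_fraction(n_interferers=20, r_min=1.0, r_max=100.0, nu=4.0):
    """One trial: drop interferers uniformly over an annulus and return
    the fraction of total received interference power contributed by
    the strongest (typically the nearest) interferer."""
    powers = []
    for _ in range(n_interferers):
        # Sample the radius so points are uniform over the annulus AREA:
        # r^2 is uniform on [r_min^2, r_max^2].
        u = random.random()
        r = math.sqrt(r_min**2 + u * (r_max**2 - r_min**2))
        # Received power up to a common constant (equal transmit powers,
        # power-law path loss only; fading is omitted in this sketch).
        powers.append(r ** (-nu))
    return max(powers) / sum(powers)

fractions = [strongest_fraction() for _ in range(5000)]
avg_fraction = sum(fractions) / len(fractions)
print(f"average fraction of power from strongest interferer: {avg_fraction:.2f}")
```

With a steep path-loss exponent, the nearest interferer usually contributes the bulk of the total interference power, consistent with the dominant-interferer observation in the abstract; the parameter values here are illustrative assumptions, not taken from the paper.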