In this paper, we consider the resource allocation problem of computing a large set of equal-sized, independent tasks on a heterogeneous computing system. This problem captures the computational paradigm of a wide range of applications, such as SETI@home and Monte Carlo simulations. We consider a general setting in which the interconnection between the nodes is modeled as a graph. We maximize the throughput of the system using a linear programming formulation, which we then transform into an extended network flow representation that can be solved efficiently with maximum flow/minimum cut algorithms. This, in turn, yields a simple distributed protocol for the problem. The effectiveness of the proposed resource allocation approach is verified through simulations.
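As a minimal illustration of the kind of computation the abstract describes, the sketch below solves a tiny throughput-maximization instance as a maximum-flow problem with a textbook Edmonds-Karp algorithm. The graph topology, node names (`m`, `w1`, `w2`), and capacities are illustrative assumptions only, not the paper's actual formulation: link capacities bound how fast tasks can be shipped to each worker, and worker-to-sink capacities bound each worker's processing rate.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.

    `capacity` maps node -> {neighbor: edge capacity}.
    Returns the value of a maximum source-to-sink flow.
    """
    # Build a residual graph, making sure every edge has a reverse entry.
    residual = {u: dict(vs) for u, vs in capacity.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximum

        # Recover the path and its bottleneck capacity.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)

        # Push the bottleneck amount along the path.
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical instance: master 'm' feeds two workers over capacity-limited
# links; each worker's processing rate is its edge capacity into the sink.
capacity = {
    's':  {'m': 100},          # task source (unbounded in effect)
    'm':  {'w1': 5, 'w2': 3},  # link bandwidths (tasks/unit time)
    'w1': {'t': 4},            # worker 1 processes up to 4 tasks/unit time
    'w2': {'t': 4},            # worker 2 processes up to 4 tasks/unit time
}
print(max_flow(capacity, 's', 't'))  # -> 7
```

Here worker 1 is link-bound at 4 tasks per unit time (min of 5 and 4) and worker 2 is bandwidth-bound at 3 (min of 3 and 4), so the achievable system throughput is 7.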