Distributed computing systems have evolved over decades to support various types of scientific applications. Computing paradigms have generally been categorized into High-Throughput Computing (HTC), which supports bags of tasks that are usually long-running; High-Performance Computing (HPC), which processes tightly coupled, communication-intensive tasks on dedicated clusters of workstations or supercomputers; and data-intensive computing, which leverages distributed storage systems and parallel processing frameworks. Many-Task Computing (MTC) aims to bridge the gap between traditional HTC and HPC by building efficient middleware systems that employ lightweight task dispatching mechanisms, minimize data movement, leverage data-aware scheduling, and propose next-generation exascale storage systems. This new computing paradigm has been driven by recently emerging applications that require millions or even billions of tasks to be processed with relatively short per-task execution times. In this paper, we investigate the concepts and technologies of MTC and propose guidelines for building an efficient and effective middleware system that fully supports MTC applications. Through our short survey of the challenges, systems, and applications of MTC, we argue that a next-generation distributed middleware system must effectively leverage distributed file systems, parallel processing frameworks, decentralized data/compute management systems, and dynamic load balancing techniques to solve the most challenging and complex scientific problems.