Although the capacity of persistent storage devices has grown rapidly in recent years, the bandwidth between memory and persistent storage remains a bottleneck. Loosely coupled data sharing applications running in a cluster environment may need access to an enormous number of files, so file access can limit overall performance. With the rapid development of servers and high-speed networks, much work has been done on distributed memory caches that minimize data requests to the centralized file system. These systems share a drawback: the nodes are statically coupled to form the distributed cache, which makes administration difficult in changing environments such as clusters. Current high performance computing resources support batch job submission through distributed resource management systems such as TORQUE, but how to use such a resource management system to set up a self-organizing distributed memory cache on demand has rarely been studied. In this paper, we design a framework for dynamically setting up a distributed memory cache for data sharing applications. Shared files are stored in the distributed memory cache, which can be accessed transparently and delivers data with high bandwidth. We describe the architecture of the framework and evaluate its performance in a use case scenario.