Parag Mhashilkar

Grid computing has become very popular in large, geographically distributed scientific communities with high computing demands, such as high energy physics. Computing resources are distributed over many independent sites with only a thin layer of Grid middleware shared between them. This deployment model has proven very convenient for computing resource …
The Advanced Networking Initiative (ANI) project from the Energy Sciences Network (ESnet) provides a 100 Gbps test bed, which offers the opportunity to evaluate applications and middleware used by scientific experiments. This test bed is a prototype of a 100 Gbps wide-area network backbone that links several Department of Energy (DOE) national laboratories …
Grid computing has enabled scientific communities to effectively share computing resources distributed over many independent sites. Several such communities, or Virtual Organizations (VOs), in the Open Science Grid and the European Grid Infrastructure use the GlideinWMS system to run complex application workflows. GlideinWMS is a pilot-based workload …
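A minimal sketch of the pilot ("glidein") pattern that systems such as GlideinWMS build on may help here: a pilot job lands on a batch slot, validates it, and then pulls real user jobs from a central queue rather than having them pushed to the site. This is an illustration of the general pattern only, not GlideinWMS internals; the queue, fetch_next_job, and run_payload names are hypothetical.

```python
# Toy pilot loop: pull user payloads from a (stand-in) central queue and run them
# in the slot the pilot already occupies. All names here are illustrative.
import subprocess
import time

CENTRAL_QUEUE = ["echo job-1", "echo job-2"]  # stand-in for the VO's central job queue
IDLE_LIMIT = 3                                # pilot exits after this many empty polls

def fetch_next_job():
    """Pull the next user payload from the stand-in central queue."""
    return CENTRAL_QUEUE.pop(0) if CENTRAL_QUEUE else None

def run_payload(cmd):
    """Run one user job where the pilot is already running."""
    return subprocess.run(cmd, shell=True, check=False).returncode

def pilot_main():
    idle_polls = 0
    while idle_polls < IDLE_LIMIT:
        job = fetch_next_job()
        if job is None:
            idle_polls += 1
            time.sleep(1)      # back off while the queue is empty
            continue
        idle_polls = 0
        print("payload exit code:", run_payload(job))

if __name__ == "__main__":
    pilot_main()
```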
The SAM-Grid job management component does not currently regulate the flow of jobs to the execution sites. When multiple jobs are submitted to the same execution site, they all enter the gateway node at roughly the same time. Because job preparation at the gateway is a CPU-intensive activity, we have observed load values as high as 20 at the gateway …
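Since the abstract is cut off before the paper's actual remedy, the sketch below shows only the general idea of throttling: cap how many jobs may be in the CPU-intensive preparation phase on a gateway at once so that a burst of submissions cannot drive the load that high. The cap value and the prepare_job function are illustrative, not SAM-Grid code.

```python
# Per-gateway throttle: at most MAX_CONCURRENT_PREPARATIONS jobs prepare at once.
import threading
import time

MAX_CONCURRENT_PREPARATIONS = 4                      # illustrative cap per gateway node
gateway_slots = threading.Semaphore(MAX_CONCURRENT_PREPARATIONS)

def prepare_job(job_id):
    """Stand-in for the CPU-intensive job preparation done on the gateway."""
    time.sleep(0.5)
    print(f"job {job_id} prepared")

def submit(job_id):
    with gateway_slots:                              # block until a preparation slot frees up
        prepare_job(job_id)

threads = [threading.Thread(target=submit, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```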
• Collaborate with the Open Science Grid (OSG) Network Area on the deployment of perfSONAR at 100 OSG facilities
• Aggregate and display data through the OSG Dashboard for end-to-end, hop-by-hop paths across network domains
• R&D on 100G for production use by CMS and the FNAL high-capacity, high-throughput storage facility
• Ensure that the stack of software and …
Exascale science translates to big data. In the case of the Large Hadron Collider (LHC), the data is not only immense but also globally distributed. Fermilab is host to the LHC Compact Muon Solenoid (CMS) experiment's US Tier-1 Center, the largest of the LHC Tier-1s. The Laboratory must deal with both scaling and wide-area distribution challenges in …
The SciDAC Center for Enabling Distributed Petascale Science (CEDPS) seeks to accelerate DOE research by eliminating barriers to reliable and performant wide-area data movement. A major focus of CEDPS R&D has been the development of Globus Online, a hosted data movement service to which users can hand off responsibility for a range of data movement tasks.
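As a rough illustration of what "handing off" a transfer to a hosted service looks like, the sketch below uses the current globus-sdk Python package (which postdates the Globus Online work described above). It assumes transfer_client is an already-authorized globus_sdk.TransferClient; the endpoint UUIDs and paths are placeholders.

```python
# Describe a recursive directory transfer and submit it; the hosted service then
# retries failures and reports progress asynchronously on the returned task.
import globus_sdk

SRC_ENDPOINT = "ddb59aef-6d04-11e5-ba46-22000b92c6ec"   # placeholder UUID
DST_ENDPOINT = "ddb59af0-6d04-11e5-ba46-22000b92c6ec"   # placeholder UUID

def submit_bulk_transfer(transfer_client: globus_sdk.TransferClient) -> str:
    task = globus_sdk.TransferData(
        transfer_client,
        SRC_ENDPOINT,
        DST_ENDPOINT,
        label="CEDPS-style bulk data movement",
        sync_level="checksum",           # let the service skip files already in place
    )
    task.add_item("/data/run2011/", "/archive/run2011/", recursive=True)
    result = transfer_client.submit_transfer(task)
    return result["task_id"]
```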
The Open Science Grid (OSG) offers access to around one hundred compute elements (CEs) and storage elements (SEs) via standard Grid interfaces. The Resource Selection Service (ReSS) is a push-based workload management system that is integrated with the OSG information systems and resources. ReSS integrates standard Grid tools such as Condor as a brokering …
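The following is a toy, pure-Python stand-in for the ClassAd-style matchmaking that Condor-based brokering (as used by ReSS) performs: each site publishes an advertisement describing its resources, each job states its requirements, and the broker pairs them. The attribute names and ranking rule are invented for illustration and are not the actual ReSS or Condor implementation.

```python
# Toy matchmaking: filter site ads by the job's requirements, then rank by free CPUs.
site_ads = [
    {"Name": "FNAL_CE_1", "FreeCPUs": 120, "OS": "SL5", "SupportedVO": ["cms", "dzero"]},
    {"Name": "UNL_CE_2",  "FreeCPUs": 8,   "OS": "SL6", "SupportedVO": ["cms"]},
]

job_ad = {"VO": "dzero", "MinCPUs": 16, "RequiredOS": "SL5"}

def matches(site, job):
    """Evaluate the job's requirements against a single site ad."""
    return (
        job["VO"] in site["SupportedVO"]
        and site["FreeCPUs"] >= job["MinCPUs"]
        and site["OS"] == job["RequiredOS"]
    )

def broker(job, sites):
    """Pick the matching site with the most free CPUs (a stand-in for Condor ranking)."""
    candidates = [s for s in sites if matches(s, job)]
    return max(candidates, key=lambda s: s["FreeCPUs"], default=None)

print(broker(job_ad, site_ads))   # -> the FNAL_CE_1 ad
```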