Large data sets are replicated at more than one site to make them more available to the nodes in a grid. Downloading a dataset from these replicated locations faces practical difficulties due to network traffic, congestion, frequently changing server performance, and so on. To speed up the download, complex server selection techniques, …
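The abstract above is truncated before it names its selection technique, so the following is only an illustrative sketch of one common idea: probe each replica's throughput, then split the file so faster servers download proportionally larger shares. The server names and bandwidth figures are hypothetical.

```python
# Sketch: bandwidth-proportional partitioning of a replicated file.
# A real grid client would measure throughput over the network; here the
# probe results are hard-coded for illustration.

def partition_by_bandwidth(file_size, bandwidths):
    """Assign each replica a byte count proportional to its bandwidth."""
    total = sum(bandwidths.values())
    shares = {}
    assigned = 0
    servers = list(bandwidths)
    for server in servers[:-1]:
        share = file_size * bandwidths[server] // total
        shares[server] = share
        assigned += share
    # Give the remainder to the last server so the shares sum exactly.
    shares[servers[-1]] = file_size - assigned
    return shares

# Hypothetical probe results (MB/s) for three replica sites.
probes = {"replica-a": 40, "replica-b": 10, "replica-c": 50}
plan = partition_by_bandwidth(1000, probes)  # plan for a 1000 MB file
```

Static splits like this degrade when server performance changes mid-transfer, which is exactly the difficulty the abstract mentions; dynamic schemes reassign blocks as measurements change.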
Seven compounds were extracted and purified from the roots of Michelia compressa var. lanyuensis: liriodenine, (-)-N-acetylanonaine, pressalanine A, p-dihydroxybenzaldehyde, 3,4-dihydroxybenzoic acid, (-)-bornesitol, and β-sitostenone. These compounds were screened for anti-proliferative and anti-tyrosinase activities in B16F10 cells. …
Standardization is used to ensure that the variables in a similarity calculation make an equal contribution to the computed similarity value. This paper compares seven methods that have been suggested previously for the standardization of integer-valued or real-valued data, comparing the results with those for unstandardized data. Sets of …
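The truncated abstract does not list its seven methods, so the snippet below only illustrates the underlying motivation with one widely used method, z-score standardization (autoscaling): without it, a variable with a large numeric range dominates a distance-based similarity. The variable names and values are hypothetical.

```python
import math

def zscore(column):
    """Standardize a list of numbers to zero mean and unit variance."""
    mean = sum(column) / len(column)
    sd = math.sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return [(x - mean) / sd for x in column]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two hypothetical variables on very different scales.
mw   = [180.0, 250.0, 310.0, 410.0]   # range of roughly 230
logp = [1.2, 1.3, 1.1, 1.4]           # range of roughly 0.3

# Raw distance between items 0 and 1 is dominated by the first variable.
raw = euclidean([mw[0], logp[0]], [mw[1], logp[1]])

# After column-wise standardization, both variables contribute comparably.
mw_s, logp_s = zscore(mw), zscore(logp)
std = euclidean([mw_s[0], logp_s[0]], [mw_s[1], logp_s[1]])
```

Here `raw` is almost entirely the 70-unit difference in the first variable, while `std` weighs both variables on the same footing, which is the "equal contribution" property the abstract refers to.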
In data grid environments, datasets are usually replicated to many servers to improve access efficiency. Since these files are usually huge, transmitting and accessing them efficiently between servers and grid users is an important issue. In this paper, we present an economy-based parallel file transfer technique using P2P co-allocation …
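The abstract is cut off before describing its technique, so the simulation below sketches only the general co-allocation idea it builds on: cut the file into equal blocks and let each replica pull the next unassigned block as soon as it finishes its current one, so faster servers naturally transfer more. The peer names and speeds are hypothetical, and the paper's economy-based pricing is not modeled.

```python
import heapq

def coallocate(num_blocks, speeds):
    """Simulate dynamic block pulls; return per-peer counts and finish time.

    speeds maps peer name -> blocks transferred per unit of time.
    """
    # Min-heap of (time when the peer becomes free, peer name).
    heap = [(0.0, name) for name in speeds]
    heapq.heapify(heap)
    counts = {name: 0 for name in speeds}
    finish = 0.0
    for _ in range(num_blocks):
        free_at, name = heapq.heappop(heap)
        done = free_at + 1.0 / speeds[name]  # time to move one block
        counts[name] += 1
        finish = max(finish, done)
        heapq.heappush(heap, (done, name))
    return counts, finish

# A peer four times faster than the other ends up moving four times the blocks.
counts, finish = coallocate(20, {"peer-a": 4.0, "peer-b": 1.0})
```

Unlike a static bandwidth-proportional split, this greedy assignment adapts automatically when a peer slows down mid-transfer, which matters given the fluctuating server performance these grid environments exhibit.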