A new algorithm, the Partial Correlation Statistic (PCS) algorithm, is presented for structure learning under linear structural equation models. The PCS algorithm can handle continuous data following an arbitrary distribution generated by a linear structural equation model, rather than only a Gaussian distribution. This paper makes two specific contributions. First, for arbitrarily distributed datasets generated by linear structural equation models, the partial correlation statistic is proved to follow a Student's t-distribution when the sample size is sufficiently large. Second, the PCS algorithm combines hypothesis testing of the partial correlation statistic with local learning to select potential neighbors of the target node, which significantly reduces the search space and yields good time performance. The PCS algorithm does not need to tune an optimal partial correlation threshold through a large number of experiments. In particular, the PCS algorithm redefines relevance from statistical theory and measures the relevance of variables based on the <inline-formula><tex-math notation="LaTeX">$p$</tex-math><alternatives> <inline-graphic xlink:type="simple" xlink:href="yang-ieq1-2578315.gif"/></alternatives></inline-formula>-value. The effectiveness of the algorithm is compared with current state-of-the-art methods on seven networks. A simulation shows that the PCS algorithm outperforms existing algorithms in terms of both accuracy and time performance on average.
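As a minimal sketch of the kind of test the abstract describes, the snippet below computes a sample partial correlation by residualizing two variables on a conditioning set, forms the standard Student's t statistic t = r * sqrt((n - k - 2) / (1 - r^2)) with n - k - 2 degrees of freedom, and converts it to a two-sided p-value. The function name and interface are illustrative, not the paper's actual implementation:

```python
import numpy as np
from scipy import stats

def partial_corr_pvalue(x, y, z):
    """Illustrative sketch (not the paper's code): test
    H0: pcor(X, Y | Z) = 0 via the Student's t statistic
    t = r * sqrt((n - k - 2) / (1 - r^2)).

    x, y : 1-D arrays of length n
    z    : 2-D array of shape (n, k) holding the conditioning set
    """
    n, k = z.shape
    # Residualize x and y on z (ordinary least squares with intercept).
    zc = np.column_stack([np.ones(n), z])
    rx = x - zc @ np.linalg.lstsq(zc, x, rcond=None)[0]
    ry = y - zc @ np.linalg.lstsq(zc, y, rcond=None)[0]
    # Sample partial correlation is the correlation of the residuals.
    r = np.corrcoef(rx, ry)[0, 1]
    df = n - k - 2
    t = r * np.sqrt(df / (1.0 - r ** 2))
    # Two-sided p-value from the Student's t distribution.
    p = 2.0 * stats.t.sf(abs(t), df)
    return r, p
```

A small p-value rejects conditional independence, so the variable is kept as a potential neighbor; this is the p-value-based relevance decision the abstract refers to, which avoids tuning a raw partial-correlation threshold.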