Best Practices in Population Modeling Should Always Be Evolving


Pfizer scientists present the population PK guidance that they recently developed and implemented.1 We welcome this initiative, which serves multiple purposes: (i) it provides Pfizer with feedback on whether its practices are in line with those of the scientific community; (ii) it allows other organizations to review and update their own practices or to develop their own guidance; and (iii) it offers developers of new methodologies and software an example of how present methodologies are being implemented in a large drug development organization.

In what follows, we restrict our comments to the practices mentioned in the main article.1 The purpose of this commentary is not to discuss point by point all the recommendations in the article, which is written as the Pfizer recommendations. Indeed, statistical methods for model evaluation and model building, including covariate selection, are diverse and always evolving. We would like to point out that written recommendations should not be rigid. They should evolve with the development of new methods; otherwise, they can be counterproductive, especially in a rapidly developing science like pharmacometrics. Recommendations should also be adapted to the purpose of model building: for instance, a model developed in the context of hypothesis testing may be approached differently from one developed strictly for prediction.

We want to mention some disagreements, which does not mean that we endorse all the other recommendations. In the evaluation of model adequacy, the likelihood ratio test is suggested as a diagnostic tool. However, the test only allows discrimination between two rival models, with no indication that either of them is adequate. Inspection of individual fits is not suggested, although it may be informative at an early stage of model building.
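To make this limitation concrete, the sketch below applies the likelihood ratio test to two hypothetical nested models. The objective function values are made up for illustration, not taken from any real analysis. The test can say which of the two rival models is preferred, but it says nothing about whether either model describes the data adequately.

```python
# Hypothetical objective function values (-2 * log-likelihood) for two
# nested models; these numbers are illustrative only.
ofv_base = 1250.4      # e.g., a base structural model
ofv_extended = 1243.1  # e.g., the base model plus one extra parameter

# Likelihood ratio statistic: the drop in -2LL when adding the parameter
lrt = ofv_base - ofv_extended

# Chi-square critical value for 1 degree of freedom at alpha = 0.05
critical = 3.84

# The test only ranks the two rival models against each other; a
# "significant" result does not imply that either model is adequate.
prefer_extended = lrt > critical
print(prefer_extended)  # True: the extra parameter is retained, yet both
                        # models could still describe the data poorly
```

Note that the conclusion is purely relative: even when `prefer_extended` is true, both models may misfit the data, which is why absolute diagnostics such as inspection of individual fits remain informative.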
Although the authors point out that the guideline is not tool specific, it is mainly NONMEM oriented, and some of the terms of the Pop PK Workflow are indeed NONMEM jargon (for instance, “inclusion of $COV”). This does not help to bring new scientists into the field or to communicate with statisticians;2 specifying “computation of parameter imprecision” would be more general.

For a complete discussion of the guidance, a commentary is not the right forum; indeed, internally at Pfizer a wiki was used. Rather, an electronic discussion board, made available by the ISoP Knowledge Center (http://www.go-isop.org/), will be organized for the purpose of discussing this and, potentially, other guidances to be made publicly available. We encourage those who want to contribute to the discussion about good practices in population PK model building and evaluation to participate in the discussions that will take place there. These discussions are likely to benefit Pfizer by identifying areas of the best practices where changes may be justified, while support, and the absence of justified critique, in other areas may provide additional support for the practices adopted.

In the development or updating of internal guidances by other organizations, it needs to be recognized, as the Pfizer authors do, that the present guidance is conditioned on the software in use as well as on the experience of the organization. It would be unwise to base guidance on practices untested in the environment where they are to be used. This is reflected in the practices suggested by Pfizer, which appear to be based on internal experience in addition to the present literature. In other organizations, partly different methods for model building are likely to have been used, and these can form the basis for adopting other practices while new methods are being explored.
Areas where other organizations may well have different experiences include random effects model building, covariate model building, and model diagnostics. By building on existing practices and by advocating the use of established methodologies, a “best” practice might constrain progress and the development of new approaches, especially as considerable effort will go into each analysis simply to adhere to the guidance. This may leave little room for initiatives to develop new practices within the routine analyses of an organization where best practices are established. As a consequence, the development and propagation of new, “good” examples may slow down. The absence of any mention of external evaluation or of cross-validation methods is an example that further highlights the need for best-practices documents to be flexible and evolving.

However, we contend that other mechanisms are likely to oppose this trend and lead to a more rapid adoption of new, better practices. Within an organization, the process of developing guidances may identify areas where agreement on best practice is scarce. This may be an impetus for more focused exploration of alternative existing and new methodologies, leading to a better understanding. Revision of a best-practices document, with associated training, is likely to achieve faster uptake of better methodologies than typical dissemination of new methodologies, or training alone, where the methodologies proposed

CPT: Pharmacometrics & Systems Pharmacology (2013) 2, e52; doi:10.1038/psp.2013.37; published online 3 July 2013


@article{Karlsson2013BestPI,
  title   = {Best Practices in Population Modeling Should Always Be Evolving},
  author  = {M. O. Karlsson and France Mentr{\'e}},
  journal = {CPT: Pharmacometrics \& Systems Pharmacology},
  volume  = {2},
  pages   = {e52},
  doi     = {10.1038/psp.2013.37},
  year    = {2013}
}