Maximising the global use of HIV surveillance data through the development and sharing of analytical tools.


To improve health, care and prevention activities must be focused on real problems and real need. Rational decisions about health strategies and interventions should be based on reliable and timely knowledge of the distribution of disease, which is only available through good surveillance. For a disease like AIDS, convincing statistics are necessary to estimate the extent of the spread of HIV and the associated demographic, social, and economic costs. However, HIV surveillance faces particular problems: the long latent period means that disease reflects historic rather than current spread; the infection is concentrated where resources for surveillance are limited; and there are biases in who comes forward for testing, whether women tested anonymously at antenatal clinics or those seeking a diagnosis through voluntary counselling and testing. These challenges have driven the development of surveillance methods, and of the theoretical tools to interpret surveillance data, that are grounded in an understanding of these problems and use the best available data and models to provide timely and practical information for users.

The devastating costs of the disease, and the initial alarm followed by limited spread in many industrialised countries, conspire to generate scepticism among the public, politicians, and professionals alike about the scale of the HIV pandemic. Against such scepticism, convincing estimates can have a powerful and timely advocacy effect. To be convincing, however, these estimates need a sound empirical basis and transparent, well accepted methods. This is the only antidote to the common tendency to overestimate the spread and consequences of a disease in order to generate more resources for the response to the epidemic. Although overestimation may have public health benefits in the short run, its long term effects will undoubtedly be counterproductive.
The availability of a reliable, sensitive, specific, safe, and inexpensive test for HIV infection has been crucial to understanding the epidemiology of the virus. The establishment of HIV surveillance systems has informed us of the spread of HIV in a manner unparalleled for other diseases. In high income countries, universal reporting of some diseases and disease registries have provided rich sources of comprehensive data. In developing countries, however, limited resources often mean that surveillance is viewed as a luxury, and surveillance systems that rely on reporting of AIDS cases have yielded little. Many countries have therefore set up expanded surveillance systems that bring together data on prevalence of infection and disease, as well as on risk factors for infection. The understanding that is vital for mobilising and directing resources and for informing prevention and care programmes must be based on the analysis of local epidemiological data, thereby maximising the investment in surveillance systems.

In this issue a collection of papers describes some of the tools generated by researchers to assist this analysis. A balance has to be struck between scientific rigour and universal applicability. To those versed in a particular specialism, the compromises and simplifications necessary may appear unwarranted. For a global epidemiological exercise, however, methods need to be readily applicable and understandable in very diverse circumstances. Only as universal methods are adopted and owned by often overstretched national epidemiologists does their worth become apparent; over time, experience and training allow for the development and application of more sophisticated and precise methods. To maximise the use of HIV surveillance data, the Joint United Nations Programme on HIV/AIDS (UNAIDS) and the World Health Organization (WHO), in collaboration with researchers from a range of organisations, have co-ordinated the development of universally applicable methods.
Between April and September 2003, in a series of 12 regional workshops, 261 national epidemiologists from 127 countries were trained in the use of the tools appropriate to their level of epidemic. All of the "tools" described in this issue use mathematical models to analyse epidemiological data. Mathematical models provide a framework for the analysis of data. They can then be used to generate predictions, test hypotheses, explore indirect consequences, and create future scenarios. Such scenarios can incorporate proposed interventions and evaluate their potential for benefit or harm. Mathematical models offer a precise way of capturing our assumptions about data, and range from very simple models that attempt to capture the essence of a system to very complex ones that attempt to incorporate all relevant (and often irrelevant) detail. It is only by progressing from the simple to the complex in model development that we can hope to understand the changes in model behaviour associated with extra levels of complexity. Ideally, a model should be suited to its function and include the necessary level of complexity. Model validity then relates to the ability of the model to generate appropriate answers to the questions posed, and can be assessed through the understanding the model generates, the fit of the model to currently available data, and, in retrospect, whether model projections agree with subsequent observation.

However, such model fits to data should be treated with caution. In fitting a mathematical model to available data we make two very important assumptions. Firstly, that the data to which model output is compared provide a good description of the underlying epidemic. Surveillance data often contain systematic biases that cannot be accounted for in statistical measures of uncertainty; these biases are particularly worrying if they change over time.
Secondly, the degrees of freedom required to estimate all the relevant parameters in a model often exceed the degrees of freedom in the available data. This is sometimes addressed by restricting the number of parameters estimated through comparison with outcome data, as is the case with the UNAIDS Estimation and Projection Package (EPP),8 where only four parameter values are estimated from seroprevalence data and the others are derived from other sources. Even then, the available data on parts of the epidemic curve often make it impossible to estimate them with any certainty.
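The fitting strategy described above can be pictured with a toy sketch: hypothetical seroprevalence data, a simple logistic prevalence curve (not the actual EPP model or code), and one quantity (the plateau prevalence) fixed from an external source so that only two parameters are estimated from the surveillance series. All data values and parameter names here are illustrative assumptions.

```python
import math

# Hypothetical antenatal clinic seroprevalence series (year, fraction HIV+)
data = [(1990, 0.01), (1992, 0.03), (1994, 0.08),
        (1996, 0.15), (1998, 0.20), (2000, 0.22)]

# Plateau prevalence fixed from an external source (e.g. a population survey),
# leaving only two free parameters to estimate from the surveillance data
P_MAX = 0.23

def prevalence(t, r, t0):
    """Simple logistic epidemic curve: growth rate r, midpoint year t0."""
    return P_MAX / (1.0 + math.exp(-r * (t - t0)))

def fit_error(r, t0):
    """Sum of squared deviations between model and observed prevalence."""
    return sum((prevalence(t, r, t0) - p) ** 2 for t, p in data)

# Crude grid search standing in for the least-squares or maximum-likelihood
# routines that real tools use; with only two free parameters even a short
# prevalence series can pin down the fit
r_hat, t0_hat = min(
    ((r / 100, 1990 + t0 / 4)
     for r in range(10, 150)      # growth rate 0.10 to 1.49 per year
     for t0 in range(0, 45)),     # midpoint year 1990.0 to 2001.0
    key=lambda params: fit_error(*params),
)
print(f"fitted growth rate {r_hat:.2f}/year, epidemic midpoint {t0_hat:.2f}")
```

Note how this sketch also illustrates the first caution above: if the clinic data systematically overstate prevalence, the fitted curve simply inherits that bias, and no statistical uncertainty measure derived from the fit itself would reveal it.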


Garnett GP, Grassly NC, Boerma JT, Ghys PD. Maximising the global use of HIV surveillance data through the development and sharing of analytical tools. Sexually Transmitted Infections 2004;80(Suppl 1):i1–4.