OBJECTIVE The aim of this study was to demonstrate that it is possible to identify which decisions in initial regional medical command and control (IRMCC) need to be improved. DESIGN This was a prospective, observational study conducted during nine similar educational programs on regional and hospital medical command and control in major incidents and disasters. Eighteen management groups were evaluated during 18 standardized simulation exercises. MAIN OUTCOME MEASURE More detailed, quantitative methods for systematic evaluation within disaster medicine have been called for. The hypothesis was that measurable performance indicators can yield comparable results and identify weak and strong areas of performance in disaster management education and training. METHODS Each exercise was evaluated with a set of 11 measurable performance indicators for IRMCC. Each indicator was scored 0, 1, or 2 according to the performance of the management group. RESULTS The average total score for IRMCC was 14.05 of 22. The two highest-scoring performance indicators, No. 1 "declaring major incident" and No. 2 "deciding on level of preparedness for staff," differed significantly from the two lowest-scoring indicators, No. 7 "first information to media" and No. 8 "formulate general guidelines for response." CONCLUSION The study demonstrated that decisions such as "formulating general guidelines for response" and "first information to media" are areas of initial medical command and control that need improvement. This method can serve as a quality-control tool in disaster management education programs.