This section presents the experiences of users with selected evaluation tools.
Users assessed their experience with the tools using a standardized approach. A scoring scheme with ten criteria was developed, and answers were scored from 1 to 4, where 1 = not satisfactory at all, 2 = needs major improvements, 3 = almost perfect but minor improvements needed, and 4 = completely satisfactory. Each numeric score (1-4) had to be accompanied by a comment explaining it. The ten criteria were:
- Meets evaluation needs/requirements
- Overall appearance
- Generation of actionable evaluation outputs
- Allows evaluation of OH-aspects
- Workability in terms of required data (1: very complex, 4: simple)
- Workability in terms of required people to include (1: many, 4: few)
- Workability in terms of analysis to be done (1: difficult, 4: simple)
- Time taken for application of tool (1: > 2 months, 2: 1-2 months, 3: 1 week – 1 month, 4: < 1 week).
In addition, users answered four questions using a SWOT-like approach: 1) things that I liked or that the tool covers well, 2) things that I struggled with, 3) things people should be aware of when using this tool, and 4) things that this tool does not cover or does not cover well.
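As an illustration of how scores collected under this scheme could be summarized, the sketch below averages the 1-4 scores given for one criterion across users. All data, names, and the `mean_score` helper are hypothetical and do not come from the study:

```python
# Minimal sketch (illustrative data only): each user scores a criterion
# from 1-4 and attaches an explanatory comment, as the scoring scheme
# described above requires.
from statistics import mean

# One dict per user: criterion -> (score, comment); all entries hypothetical.
responses = [
    {"Overall appearance": (3, "Almost perfect; minor layout issues")},
    {"Overall appearance": (4, "Completely satisfactory")},
]

def mean_score(criterion, responses):
    """Average the 1-4 scores given for one criterion across users."""
    scores = [r[criterion][0] for r in responses if criterion in r]
    return mean(scores) if scores else None

print(mean_score("Overall appearance", responses))  # → 3.5
```

A per-criterion average like this, read alongside the free-text comments, is one simple way to compare tools across the ten criteria.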
Please note that these reports are subjective and do not constitute an endorsement of any tool.
Using the ISSEP framework for the evaluation of the impacts on the decision-making of the Canadian Integrated Program for Antimicrobial Resistance Surveillance.
Cécile Aenishaenslin, Michèle Vernet, Jane Parmley, André Ravel, Barbara Häsler.
Evaluating integrated surveillance of antimicrobial resistance: experiences from the use of four evaluation tools.
Liza Rosenbaum Nielsen, Lis Alban, Johanne Ellis-Iversen, Marianne Sandberg.
Evaluating integrated surveillance of Belgium’s national action plan on antimicrobial resistance: experiences from the use of the NEOH and the FAO PMP-AMR evaluation tools.
Maria-Eleni Filippitzi, Ilias Chantziaras, Nicolas Antoine-Moussiaux.