By Olga Ioannidou, Ludwig Maximilian University of Munich, Germany
Is climate change real? Is nuclear power safe? Should we trust genetically modified food? These are questions that you, and oftentimes even scientists, cannot answer with a simple "Yes" or "No". However, these scientific issues have an impact on society, and as citizens we often need to make decisions about them.
This is why topics such as climate change and GMOs have recently been introduced into science education. Students are often asked to argue about controversial scientific issues in classroom discussions, weighing their pros and cons. This type of argumentation is termed "socio-scientific argumentation", and it is often included in national curricula as an educational goal. However, teachers remain sceptical about introducing such topics into their classrooms. One issue that contributes to the problem is that there is neither a clear definition of the concept nor clear guidelines on how it can be measured.
This problem motivated my research team and me to look more closely at the definitions and measurements that researchers have used for socio-scientific argumentation (SSA). Our goal was to reveal general and specific characteristics of SSA and to present possible ways in which the quality of SSA can be reliably measured. For this purpose, we conducted an integrative literature review of studies that presented measurement tools (e.g. scoring rubrics, tests and questionnaires) for assessing students' and teachers' socio-scientific argumentation. The results of this review showed that, at a conceptual level, researchers define SSA as ill-structured, without clear-cut solutions. In addition, while SSA is conceptually treated as a part of scientific literacy, more emphasis is placed on argumentation based on moral reasoning than on reasoning based on knowledge and scientific evidence. Regarding the measurement of SSA, researchers use a variety of tools. However, the majority of the studies focus on the structure of the arguments, despite previous criticism of the use of structural models for assessing students' argumentation.
With this review, we aspire to start a discussion about the nature of SSA and adequate ways of measuring it. Some points of this discussion could be: Is socio-scientific argumentation a special form of argumentation with distinctive characteristics (e.g. compared to scientific argumentation)? Which indicators can teachers and researchers use to assess the quality of SSA? Can we use the same measurement tools across different topics?
Olga studied Primary Education at Aristotle University of Thessaloniki. In 2016, she gained her Master's degree in 'Research on Teaching and Learning' at the Technical University of Munich. Since 2016, she has been a PhD candidate in the REASON international programme at Ludwig Maximilian University of Munich. She is also a research associate at the Technical University of Munich.