Data Practices and Cultures: Their Impact on Educational Quality
A new Webinar in the Series “Data Cultures in Higher Education”
REGISTRATION OPEN!
The question of educational quality in higher education has been a matter of concern for all of us over the last 50 years, particularly since we understood that, when talking about quality, we did not all have the same image in mind.
In the immense mass of bibliography and educational policy recommendation documents, we can intuit messages about educational quality such as: “superiority or excellence (of certain practices, educational models or materials)”; “aligning the needs of the demand (students and the labor market) with those of the offer (university courses)”; “clearly determining the properties inherent to contents and didactic methodologies, which allow judging their value”; “generating metric systems to analyze quality and adapting services according to standards”; “reaching all those who need a university education and offering a service contextualized in a glocal way”. These definitions highlight what the classic text of Harvey and Green had already pointed out (Harvey & Green, 1993): quality can be defined in various ways, depending on the objectives of an organization.
I came across this text at the beginning of the 2010s when, working with Patrizia Ghislandi, an Italian expert on eLearning quality, I had the opportunity to reflect on the need to think about quality as a complex, multi-perspective and multi-level process: a process that requires continuous adjustments by stakeholders. It was also in those years that I met Nan Yang, whose work exploring the problem of educational quality in higher education has been evolving successfully. In her recent book “eLearning for Quality Teaching in Higher Education: Teachers’ Perceptions, Practices and Interventions”, Nan reflects on fundamental aspects: how teachers perceive quality and how, in defining quality, they guide their practices, interventions and didactic innovation. Her research developed between the first half of the 2010s and the end of the decade. In that interim, we both saw rising concern about quality in higher education institutions, beginning with the entry into a new era of quality analysis: the metrification of the attributes that define quality and, with it, the spread of university rankings.
Materiality of data in HEIs: rankings and the evaluation of teaching quality
I could add little about the problem of university rankings that has not already been addressed by the research of Albert Sangrà. In two of his recent articles, “Collecting data for feeding the online dimension of university rankings: A feasibility test” and “Rankings meet distance education: defining relevant criteria and indicators for online universities“, he discusses the conundrum of generating indicators to uncover a frequently hidden, little-considered dimension of higher education: online learning. And why is it hidden? Precisely because of a culture of educational quality in which face-to-face teaching has always prevailed, with a well-established prejudice against online education as an unreliable substitute for it. Many COVID-19 discussions, and the strong insistence on returning to full presence despite the difficulties of maintaining it during the pandemic, are a reminder of this entrenched resistance. As a result, the indicators of educational quality in HEIs do not represent, or give relevant weight to, the effective use of online education methodologies.
In a much more local example, with Patrizia Ghislandi and Albert himself we reflected on the “Streetlamp Paradox“: we analyse what our evaluation instruments allow us to see; that is, we look for (and find) within the area illuminated by the lamp. Everything that our instruments do not “illuminate” remains unknown territory. And yet, as we discussed in that work, it is often what we do not see or measure that shapes the perception of the undergraduates’ quality experience.
These examples make one issue very visible: the metrification of quality in rankings and in the evaluation of teaching, correlative to the datafication processes of society and university institutions, involves all the aspects we have been analyzing in this Webinar Series. We have already talked about how learning analytics are built and the problems that entails for teachers and students; we have also discussed the biases in building algorithms to activate recommendations or services in the social sciences. In other words, data require assemblages that rely on a particular approach to measurement, which in turn is generated within an institutional culture. The latter imposes values and perspectives on what is good, on what we can define as “quality”. Hence, it imposes the conditions of data extraction, elaboration and final representation, which is in turn associated with performance (rankings). This is what Luci Pangrazio calls “data materiality“ in her research work.
We then have a materiality of data (data conceived as an objective instrument) which has a series of implications for stakeholders. In the case of rankings, these entail, for example, recommendations that guide the selection of universities by young people from high socio-economic backgrounds. They also promote headhunting and talent recruitment (of researchers and university teachers) and, with it, more research funds. All of this can create a vicious circle with implications for the “prestige” of universities.
The importance of critical data literacy by all stakeholders in HEIs
These phenomena require reflection not only by experts. Since all the stakeholder groups of a university institution, from students and teachers to administrative and management personnel, engage in these data practices on a daily basis, we cannot avoid considering the relevance of promoting their critical data literacy, which is also linked to Quality Literacy (an aspect that we also investigated with Patrizia, building on the work of Ulf Ehlers).
Well, this has been precisely the point made by Nan: through a recent research work, she has explored the data literacy of the various stakeholders, wondering (and asking) what they need to know and how they could learn it in order to actively engage in the construction of educational quality.
Nan introduces her work by explaining that most data in higher education have not been transformed into actionable insights for quality enhancement. However, this is not only, or above all, a technical problem. Data are socially produced by stakeholders, whose motivations and understanding influence data practices. As a result, stakeholders’ data literacy influences the effectiveness of using data for student success, a complex concept embedded in a broader vision of quality in higher education.
In fact, with most studies focusing on students’ data literacy, the data literacy of the wider range of stakeholders who support educational quality remains an open issue.

Nan Yang has explored educational quality from several perspectives and in several contexts, from large lectures to entire educational systems.
In this webinar, she will explain why a complex, multi-perspective vision of data literacy within higher education institutions can, through a holistic view, make a difference in student success.
We will discuss some of the instruments she uses in her research in this regard, including a data literacy competency matrix used to identify the specific data literacy competencies that stakeholders should focus on to promote student success.