Towards Automatic Evaluation of Learning Object Metadata Quality

Authors: Erik Duval, Xavier Ochoa

Tags: 2006, conceptual modeling

Thanks to recent developments in automatic generation of metadata and interoperability between repositories, the production, management, and consumption of learning object metadata are vastly surpassing the human capacity to review or process these metadata. However, we need to make sure that the presence of some low-quality metadata does not compromise the performance of services that rely on that information. Consequently, there is a need for automatic assessment of the quality of metadata, so that tools or users can be alerted about low-quality instances. In this paper, we present several quality metrics for learning object metadata. We applied these metrics to a sample of records from a real repository and compared the results with the quality assessment given to the same records by a group of human reviewers. Through correlation and regression analysis, we found that one of the metrics, the text information content, could be used as a predictor of the human evaluation. While this metric is not a definitive measurement of the “real” quality of the metadata record, we present several ways in which it can be used. We also propose new research into other quality dimensions of learning object metadata.
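The paper's exact definition of the text information content metric is not reproduced in this summary. As a rough illustration only, the sketch below approximates it as the summed self-information of the words in a record's free-text fields, then correlates the resulting scores with hypothetical human ratings. The field names, the word-probability model, and the toy data are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: score metadata records by an approximate "text
# information content" and check how well the score tracks human ratings.
# Assumption: information content is the summed self-information
# (-log2 p(word)) of words in a record's free-text fields; the paper's
# actual metric may be defined differently.

import math
from collections import Counter

FREE_TEXT_FIELDS = ("title", "description")  # assumed field names

def build_word_probabilities(records):
    """Estimate word probabilities from all free-text fields in the corpus."""
    counts = Counter()
    for record in records:
        for field in FREE_TEXT_FIELDS:
            counts.update(record.get(field, "").lower().split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def text_information_content(record, probabilities):
    """Sum of word self-information over the record's free-text fields."""
    score = 0.0
    for field in FREE_TEXT_FIELDS:
        for word in record.get(field, "").lower().split():
            p = probabilities.get(word)
            if p:
                score += -math.log2(p)
    return score

def pearson(xs, ys):
    """Plain Pearson correlation, to compare metric scores with human ratings."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy usage: records would come from a repository, ratings from reviewers.
records = [
    {"title": "Intro to Databases",
     "description": "Slides covering the relational model, keys, and basic SQL queries."},
    {"title": "Photosynthesis animation",
     "description": "Interactive animation of the light and dark reactions."},
    {"title": "test", "description": "test"},
]
human_ratings = [4.0, 4.5, 1.0]  # illustrative values only

probs = build_word_probabilities(records)
scores = [text_information_content(r, probs) for r in records]
print(pearson(scores, human_ratings))
```

In this toy setup, records whose free-text fields carry more (and less generic) text receive higher scores, mirroring the paper's finding that text information content correlates with reviewers' quality judgments.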

Read the full paper here: https://link.springer.com/chapter/10.1007/11908883_44