Authors: David W. Embley, Stephen W. Liddle
Tags: 2013, conceptual modeling
Big data is characterized by volume, variety, velocity, and veracity. We should expect conceptual modeling to provide some answers since its historical perspective has always been about structuring information—making its volume searchable, harnessing its variety uniformly, mitigating its velocity with automation, and checking its veracity with application constraints. We provide perspectives about how conceptual modeling can “come to the rescue” for many big-data applications by handling volume and velocity with automation, by inter-conceptual-model transformations for mitigating variety, and by conceptualized constraint checking for increasing veracity.

Read the full paper here: https://link.springer.com/chapter/10.1007/978-3-642-41924-9_1