So, Aloha models are not written in terms of Instances, Tensors, or DataModels. Instead, models are written generically, and different semantics implementations give meaning to the features extracted from the arbitrary input types on which the models operate. While these differences may not sound significant on their own, together they produce a number of advantages. The most notable is probably the way input features make their way to the models. Typically, when interacting with an API, data must be translated into a format the objects being called can understand. By tying a model interface to an input type specified inside the library, we force the caller to convert its data to that input type before the model can use the data to make a prediction. There are ways to ease the pain of this ETL process, but as we've seen many times, transforming data can be slow, error-prone, and ultimately, unnecessary altogether. Data is almost always in a format other than the one required for learning or prediction: because data in its natural form typically has a graph-like structure, and many machine learning algorithms operate on vector spaces, some transformation is usually required. The question is who should perform it.
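To make the idea concrete, here is a minimal sketch of the pattern, not Aloha's actual API; the trait, class, and method names (Semantics, LinearModel, featureExtractor) are illustrative assumptions. The point it demonstrates is that the model is generic in its input type, and a semantics implementation is what knows how to pull features out of the caller's own data type.

```scala
// Illustrative sketch only; names are hypothetical, not Aloha's API.

// A semantics implementation knows how to extract a named feature
// from some arbitrary, caller-supplied input type A.
trait Semantics[A] {
  def featureExtractor(spec: String): A => Double
}

// The model is written generically against A; it never sees Instances,
// Tensors, or any other library-specific data type.
final class LinearModel[A](semantics: Semantics[A],
                           weights: Map[String, Double]) {
  // Resolve each weighted feature to an extractor once, up front.
  private val extractors: Map[String, A => Double] =
    weights.keys.map(k => k -> semantics.featureExtractor(k)).toMap

  def predict(input: A): Double =
    weights.map { case (name, w) => w * extractors(name)(input) }.sum
}

// A semantics for a plain case class: the caller's natural data format.
final case class User(age: Int, visitsPerWeek: Double)

object UserSemantics extends Semantics[User] {
  def featureExtractor(spec: String): User => Double = spec match {
    case "age"           => (u: User) => u.age.toDouble
    case "visitsPerWeek" => (u: User) => u.visitsPerWeek
    case other           => throw new IllegalArgumentException(s"unknown feature: $other")
  }
}

object Example extends App {
  val model = new LinearModel[User](UserSemantics,
    Map("age" -> 0.01, "visitsPerWeek" -> 0.5))

  // No ETL step: the model consumes the caller's own type directly.
  println(model.predict(User(age = 30, visitsPerWeek = 3.0)))
}
```

In this sketch the caller never converts its data into an intermediate representation; the cost of understanding the input type is borne once, by the semantics implementation, rather than at every call site.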