- Objective 1: To propose and empirically validate a neuro-cognitive model of the multiple, mutually interacting time scales that contribute to human perception of gesture qualities and action prediction, assuming an embodied cognitive experience of gestural qualities (in time and through time).
- Objective 2: To develop computational models, grounded in the neuro-cognitive model (Obj. 1), for the automated detection, measurement, and prediction of movement qualities at both the individual and the group level across different time scales.
- Objective 3: To analyse music performers' movements synchronised at different temporal scales in ecological music performance, in order to inspire the design of the computational models (Obj. 2) and their applications in the use-case scenarios (Obj. 7), and to develop and validate a movement sonification framework that enhances the perception and communication of movement across different temporal scales.
- Objective 4: To design controlled and ecological experiments to iteratively refine, validate, and evaluate the proposed conceptual framework and computational models, leveraging several test-beds that afford the possibility of addressing prediction in different scenarios. Benefiting from an interdisciplinary research consortium, these scenarios will range from action prediction in a controlled laboratory setting, to prediction in dyadic human-human and human-robot interaction, to prediction in small-group interaction.
EnTimeMent's measurable objectives can be summarised as follows: