cff-version: 1.2.0
message: "If you use this software, please cite it using the metadata from this file."
abstract: >-
  This repository contains the complete code and experiment outputs for
  Chapter 4 of M. Tsfasman's PhD thesis "Towards predicting memory in
  multimodal group interactions". It provides Python scripts and notebooks
  for preprocessing multimodal data, training and evaluating machine
  learning models that predict memorable moments in conversations from
  features such as eye gaze and speaker activity, and performing feature
  ablation studies to assess the importance of each input. The repository
  includes session-based cross-validation, hyper-parameter optimization,
  and tools for result visualization. It is designed for reproducible
  research, supports both local and cluster-based execution, and is an
  extended, methodologically improved version of a published ICMI 2022
  paper*.

  *M. Tsfasman, K. Fenech, M. Tarvirdians, A. Lorincz, C. Jonker, and
  C. Oertel, "Towards creating a conversational memory for long-term
  meeting support: predicting memorable moments in multi-party
  conversations through eye-gaze," in Proc. International Conference on
  Multimodal Interaction (ICMI), pp. 94–104, 2022.
authors:
  - family-names: Tsfasman
    given-names: Maria
    orcid: "https://orcid.org/0000-0001-5582-7636"
title: >-
  Code and results data underlying Chapter 4 of the PhD thesis: "Towards
  predicting memory in multimodal group interactions"
version: 1
identifiers:
  - type: doi
    value: 10.4121/1ac2b163-f9f5-4df0-9485-dc80ee3b632f.v1
license: CC-BY-NC-4.0
date-released: 2025-07-25