BIBL

== Working method and objectives ==

=== Subject assignments ===

Here is a shared table that presents the different subjects of study: [https://lite.framacalc.org/9qt3-siia-bibl_sujets_etude_biblio_2021 list of studies].

Students are invited to indicate their choice(s) in this table.

=== General guidelines ===

Here are some guidelines for writing the bibliographic study and for its oral presentation: instructions biblio (revised Nov. 2020).

=== Documents to study ===

As with any engineering technique or scientific process, a bibliographic study, also called a literature review, must be carried out methodically and must provide the elements needed to assess its soundness and relevance. Even though the motivations for carrying out such an exercise may vary, the broad outlines of the methodology remain the same.

Here are some documents to read before and while carrying out your study:

* [https://www.enib.fr/~chevaill/documents/master/siia_bibl/Safety_and_Usability_of_Speech_Interfaces_for_In-V.pdf Safety and Usability of Speech Interfaces for In-Vehicle Tasks while Driving: A Brief Literature Review], Adriana Barón and Paul Green, Tech. Report, The University of Michigan Transportation Research Institute, 2006.

== Open subjects (not linked to an internship, or linked to an internship not yet assigned) ==

=== How can telepresence systems support collaborative dynamics in large interactive spaces? ===
* Teacher: [mailto:cedric.fleury@imt-atlantique.fr Cédric Fleury]
* Subject related to an internship: no.

Videoconferencing and telepresence have long been a way to enhance communication among remote users. They improve turn-taking, mutual understanding, and negotiation of common ground by supporting non-verbal cues such as eye-gaze direction, facial expressions, gestures, and body language [3, 6, 10]. They are also an effective solution to avoid the "Uncanny Valley" effect [7] that can be encountered when using avatars.

However, such systems are often limited to basic setups in which each user must sit in front of a computer equipped with a camera. Other systems, such as Multiview [9] or MMSpace [8], handle groups, but only support group-to-group conversations. This leads to awkward situations in which colleagues in the same building stay in their own offices to attend a videoconference meeting instead of attending together, or participants are forced to have side conversations via chat. More recent work investigates dynamic setups that allow users to move within the system and interact with shared content. t-Rooms [5] displays remote users on circular screens around a tabletop. CamRay [1] handles video communication between two users interacting on remote wall-sized displays. GazeLens [4] integrates a remote user into group collaboration around physical artifacts on a table. Nevertheless, such systems do not support the different moments of a collaboration, such as tightly coupled and loose collaboration, subgroup collaboration, and spontaneous or side discussions. Supporting such dynamics is a major challenge for the next generation of telepresence systems.

[1] I. Avellino, C. Fleury, W. Mackay and M. Beaudouin-Lafon. “CamRay: Camera Arrays Support Remote Collaboration on Wall-Sized Displays”. Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’17). 2017.

[3] E. A. Isaacs and J. C. Tang. “What Video Can and Can’t Do for Collaboration: A Case Study.” Proceedings of the ACM International Conference on Multimedia (MULTIMEDIA’93). 1993.

[4] K.-D. Le, I. Avellino, C. Fleury, M. Fjeld, A. Kunz. “GazeLens: Guiding Attention to Improve Gaze Interpretation in Hub-Satellite Collaboration”. Proceedings of the Conference on Human-Computer Interaction (INTERACT’19). 2019.

[5] P. K. Luff, N. Yamashita, H. Kuzuoka, and C. Heath. “Flexible Ecologies And Incongruent Locations.” Proceedings of the Conference on Human Factors in Computing Systems (CHI ’15). 2015.

[6] A. F. Monk and C. Gale. “A Look Is Worth a Thousand Words: Full Gaze Awareness in Video-Mediated Conversation.” In: Discourse Processes 33.3, 2002, pp. 257–278.

[7] M. Mori, K. F. MacDorman and N. Kageki, “The Uncanny Valley [From the Field]”, IEEE Robotics & Automation Magazine, vol. 19, no. 2, pp. 98-100, 2012.

[8] K. Otsuka, “MMSpace: Kinetically-augmented telepresence for small group-to-group conversations”. Proceedings of 2016 IEEE Virtual Reality (VR’16). 2016.

[9] A. Sellen, B. Buxton, and J. Arnott. “Using Spatial Cues to Improve Videoconferencing.” Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI ’92). 1992.

[10] E. S. Veinott, J. Olson, G. M. Olson, and X. Fu. “Video Helps Remote Work: Speakers Who Need to Negotiate Common Ground Benefit from Seeing Each Other.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’99). 1999.

=== How can the physical space surrounding users be represented in remote AR collaboration? ===
* Teachers: [mailto:cedric.fleury@imt-atlantique.fr Cédric Fleury] and [mailto:thierry.duval@imt-atlantique.fr Thierry Duval]
* Subject related to an internship: yes ([[Stages|Hybrid Collaborative across Heterogeneous Devices]])
* '''Internship not yet assigned'''

Augmented Reality (AR) is becoming a very popular technology for supporting remote collaboration, as it enables users to share virtual content with distant collaborators. However, sharing the physical spaces surrounding the users is still a major challenge. Each user involved in a collaborative AR situation enters the shared environment with part of their own environment [4, 9]. For example, this space can be shared in several ways for two remote users [3]: (i) in an equitable mode (i.e., half from user 1 and half from user 2) [5], (ii) in a host-guest situation where the host imposes the shape of the augmented environment on the guest [7, 8], or (iii) in a mixed environment specifically designed for the collaborative task [6]. Whatever the configuration, the question of how users perceive and use this shared environment arises [2]. The sketch below illustrates these three sharing modes.
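As an illustration only, the three sharing modes can be sketched as box operations on each user's free physical space, assuming each space has already been approximated by an axis-aligned box in a common reference frame. The simplifications (boxes, and reading the equitable mode as the aligned overlap of the two rooms in the spirit of [5]) are ours, not a method taken from the cited systems.

<syntaxhighlight lang="python">
from typing import Optional
import numpy as np

# A user's free physical space, approximated as an axis-aligned box
# (min corner, max corner) in a shared reference frame.
Box = tuple[np.ndarray, np.ndarray]

def shared_space(user1: Box, user2: Box, mode: str,
                 designed: Optional[Box] = None) -> Box:
    """Return the shared interaction volume under the three modes above."""
    (a_min, a_max), (b_min, b_max) = user1, user2
    if mode == "equitable":
        # Consensus-reality reading [5]: once the two rooms are aligned,
        # keep only the volume that is free for both users.
        lo, hi = np.maximum(a_min, b_min), np.minimum(a_max, b_max)
        if np.any(lo >= hi):
            raise ValueError("no common free volume after alignment")
        return lo, hi
    if mode == "host_guest":
        # The host's room dictates the shape of the augmented environment
        # [7, 8]; here user1 is taken to be the host.
        return user1
    if mode == "mixed":
        # A task-specific environment designed independently of both rooms [6].
        if designed is None:
            raise ValueError("mixed mode needs a designed environment")
        return designed
    raise ValueError(f"unknown mode: {mode}")

# Toy usage: two 4 m x 3 m x 2.5 m rooms, offset by one meter along x.
room1 = (np.zeros(3), np.array([4.0, 3.0, 2.5]))
room2 = (np.array([1.0, 0.0, 0.0]), np.array([5.0, 3.0, 2.5]))
print(shared_space(room1, room2, "equitable"))  # corners (1,0,0) and (4,3,2.5)
</syntaxhighlight>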

[1] H. H. Clark, and S. E. Brennan. "Grounding in communication". In: L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). American Psychological Association. 1991.

[2] S. R. Fussell, R. E. Kraut, and J. Siegel. “Coordination of communication: effects of shared visual context on collaborative work”. Proceedings of the 2000 ACM conference on Computer supported cooperative work (CSCW '00). 2000.

[3] B. T. Kumaravel, F. Anderson, G. Fitzmaurice, B. Hartmann, and T. Grossman. "Loki: Facilitating Remote Instruction of Physical Tasks Using Bi-Directional Mixed-Reality Telepresence". Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '19), 2019.

[4] P. Ladwig and C. Geiger. “A Literature Review on Collaboration in Mixed Reality”. International Conference on Remote Engineering and Virtual Instrumentation (REV). 2018.

[5] N. H. Lehment, D. Merget and G. Rigoll. "Creating automatically aligned consensus realities for AR videoconferencing". IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2014.

[6] T. Mahmood, W. Fulmer, N. Mungoli, J. Huang and A. Lu. "Improving Information Sharing and Collaborative Analysis for Remote GeoSpatial Visualization Using Mixed Reality". IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2019.

[7] O. Oda, C. Elvezio, M. Sukan, S. Feiner, and B. Tversky. "Virtual Replicas for Remote Assistance in Virtual and Augmented Reality". Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15), 2015.

[8] S. Orts-Escolano, C. Rhemann, S. Fanello, W. Chang, A. Kowdle, Y. Degtyarev, et al. “Holoportation: Virtual 3D Teleportation in Real-time”. Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). 2016.

[9] M. Sereno, X. Wang, L. Besançon, M. J. McGuffin and T. Isenberg, "Collaborative Work in Augmented Reality: A Survey". IEEE Transactions on Visualization and Computer Graphics. 2020.

=== Principles and implementation of Transformer-type Machine Learning architectures ===
* Teacher: [mailto:pierre.deloor@enib.fr Pierre De Loor]
* Subject related to an internship: no.

[[Media:BIBL-2021_Deloor_SujetBiblioTransformer.pdf|Subject summary]]
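Since the subject summary above is only available as a PDF, here is a minimal NumPy sketch of scaled dot-product attention, the core operation that Transformer architectures stack into layers; the toy dimensions and random weights are assumptions made for this illustration, not material from the summary.

<syntaxhighlight lang="python">
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stabilized
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: each output row is a weighted mix of the
    rows of V, weighted by the similarity between queries and keys."""
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k)) @ V

# Toy usage: 4 tokens embedded in 8 dimensions, one attention head.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                       # token embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(X @ Wq, X @ Wk, X @ Wv)           # shape (4, 8)
</syntaxhighlight>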

=== Generative adversarial networks (GANs) ===
* Teacher: [mailto:pierre.deloor@enib.fr Pierre De Loor]
* Subject related to an internship: no.

[[Media:Generative_Adversial_Networks_2022.pdf|Subject summary]]

=== A reactive gaze-behavior model for an idle virtual agent, based on visual and acoustic signals ===
* Teachers: [mailto:elisabetta.bevacqua@enib.fr Elisabetta Bevacqua] and [mailto:desmeulles@enib.fr Gireg Desmeulles]
* Subject related to an internship: yes ([[Stages]])
* '''Internship not yet assigned'''

=== Affective computing: myography and virtual reality ===
* Teacher: [mailto:augereau@enib.fr Olivier Augereau]
* Subject related to an internship: yes ([[Stages]])
* '''Internship not yet assigned'''

=== Using League of Legends data to study complexity in human decision-making ===
* Teacher: [mailto:augereau@enib.fr Olivier Augereau]
* Subject related to an internship: yes ([[Stages]])
* '''Internship not yet assigned'''

=== How can robot swarms be simulated efficiently and realistically? ===
* Teacher: [mailto:jeremy.riviere@univ-brest.fr Jérémy Rivière]
* Subject related to an internship: no.

The study of robot swarms concerns systems made up of many robots or drones (flying, rolling, etc.) that coordinate autonomously, based on local control rules driven by each robot's perceptions and current state. The behavior of these robots is most often designed using a simulation tool. This bibliographic study focuses on the existing simulation platforms for robot swarms. The objective is to produce as exhaustive a census of these platforms as possible, to synthesize it, and to categorize the platforms according to criteria to be defined: efficiency, realism, programming language, license, portability, etc. A minimal example of such local control rules is sketched below.
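To make the notion of local control rules concrete, here is a small illustrative Python sketch, not taken from any of the platforms cited below: each simulated robot updates its position using only the neighbors it perceives within a fixed radius. The parameters (PERCEPTION_RADIUS, SEPARATION, STEP) and the cohesion/repulsion rule are assumptions chosen for the example.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 10.0, size=(20, 2))  # 20 robots on a 2D plane
PERCEPTION_RADIUS = 2.0   # assumed sensing range of each robot
SEPARATION = 0.5          # assumed minimum comfortable distance
STEP = 0.05               # assumed update gain

def step(positions: np.ndarray) -> np.ndarray:
    """One synchronous update: each robot reacts only to what it perceives."""
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        offsets = positions - p
        dists = np.linalg.norm(offsets, axis=1)
        mask = (dists > 0) & (dists < PERCEPTION_RADIUS)  # local perception only
        if not mask.any():
            continue
        cohesion = offsets[mask].mean(axis=0)             # move toward local center
        too_close = mask & (dists < SEPARATION)
        repulsion = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
        new_positions[i] = p + STEP * (cohesion + repulsion)
    return new_positions

for _ in range(100):
    positions = step(positions)
</syntaxhighlight>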

[1] Yihan Zhang, Lyon Zhang, Hanlin Wang, Fabián E. Bustamante, and Michael Rubenstein. 2020. SwarmTalk - Towards Benchmark Software Suites for Swarm Robotics Platforms. In Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS '20). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1638–1646.

[2] Schranz, M., Umlauft, M., Sende, M., & Elmenreich, W. (2020). Swarm Robotic Behaviors and Current Applications. Frontiers in robotics and AI, 7, 36. https://doi.org/10.3389/frobt.2020.00036

[3] Webots, https://cyberbotics.com/

Keywords: Multi-robot System, Swarm Robotics, Simulation Platform, Robot Simulator

=== How can multicellular systems be simulated efficiently and realistically? ===
* Teacher: [mailto:pascal.ballet@univ-brest.fr Pascal Ballet]
* Subject related to an internship: no.

This bibliographic study focuses on the existing platforms for simulating living cells with multi-agent or related approaches (Cellular Potts models, cellular automata). The objective is to produce as exhaustive a census of these platforms as possible, to synthesize it, and to categorize them according to criteria to be defined: efficiency, realism, programming language, license, portability, etc. A toy illustration of the Cellular Potts dynamics underlying several of these platforms is sketched below.
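As a toy illustration of the Metropolis dynamics behind Cellular Potts simulators such as CompuCell3D [4] or Morpheus [2], here is a self-contained Python sketch; the lattice size, the purely adhesion-based energy, and the temperature are deliberately minimal assumptions, not any platform's actual model or API.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
GRID = rng.integers(0, 3, size=(32, 32))   # 3 cell identities on a small lattice
TEMPERATURE = 1.0                          # assumed Boltzmann temperature
NEIGHBORS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def adhesion_energy(grid: np.ndarray) -> float:
    """Boundary energy: +1 for every horizontally or vertically adjacent
    pair of sites that belong to different cells."""
    return float((grid[1:, :] != grid[:-1, :]).sum()
                 + (grid[:, 1:] != grid[:, :-1]).sum())

def potts_step(grid: np.ndarray) -> None:
    """One Metropolis update: a random site tries to copy the identity of a
    random neighbor; the copy is kept if it lowers the energy, and is
    otherwise accepted with Boltzmann probability exp(-delta/T)."""
    h, w = grid.shape
    y, x = rng.integers(h), rng.integers(w)
    dy, dx = NEIGHBORS[rng.integers(4)]
    ny, nx = (y + dy) % h, (x + dx) % w     # periodic boundaries for simplicity
    if grid[ny, nx] == grid[y, x]:
        return
    old, before = grid[y, x], adhesion_energy(grid)
    grid[y, x] = grid[ny, nx]
    delta = adhesion_energy(grid) - before  # O(N) here; real simulators update locally
    if delta > 0 and rng.random() >= np.exp(-delta / TEMPERATURE):
        grid[y, x] = old                    # reject the copy attempt

for _ in range(10_000):
    potts_step(GRID)
</syntaxhighlight>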

[1] Seunghwa Kang, Simon Kahan, Jason McDermott, Nicholas Flann, Ilya Shmulevich, Biocellion: accelerating computer simulation of multicellular biological system models, Bioinformatics, Volume 30, Issue 21, 1 November 2014, Pages 3101–3108, https://doi.org/10.1093/bioinformatics/btu498

[2] Starruß, J., De Back, W., Brusch, L., & Deutsch, A. (2014). Morpheus: a user-friendly modeling environment for multiscale and multicellular systems biology. Bioinformatics, 30(9), 1331-1332.

[3] Morpheus, https://morpheus.gitlab.io/

[4] Swat, M. H., Thomas, G. L., Belmonte, J. M., Shirinifard, A., Hmeljak, D., & Glazier, J. A. (2012). Multi-scale modeling of tissues using CompuCell3D. In Methods in cell biology (Vol. 110, pp. 325-366). Academic Press.

[5] Ballet, P. (2018). SimCells, an advanced software for multicellular modeling. Application to tumoral and blood vessel co-development.

[6] Centyllion, https://centyllion.com/fr/

Keywords: Cellular Potts Model, Multi-agent, Multi-cellular simulator.

=== AI for machine translation of humor and wordplay ===
* Teacher: [mailto:bosser@enib.fr Anne-Gwenn Bosser]
* Subject related to an internship: no.

[[Media:JOKER_-_Sujets_bibliographie_2022-23.pdf|Subject summary]]

== Subjects included in assigned internships ==

=== Techniques for localized data representation in Augmented Reality ===
* Teachers: [mailto:etienne.peillard@imt-atlantique.fr Etienne Peillard] and [mailto:Aymeric.Henard@univ-brest.fr Aymeric Henard]
* Subject related to an internship: yes ([[Stages|Visualisation immersive et localisée de données en Réalité Augmentée]])
* '''Internship assigned'''

Augmented reality allows virtual elements to be superimposed on a real-world environment and associated with it. It enables, for example, displaying temperature data in a room to visually identify cold spots, or displaying a robot's speed and trajectory to understand its movement. However, the design space is twofold: the data to be displayed can be of various types (discrete or continuous; 1D, 2D, 3D, or even 4D), and there are numerous ways to display them. Furthermore, due to the limitations of augmented reality displays, some techniques may not be suitable or may cause display issues, particularly when visualizations become distant or overlap. This research topic aims to review the techniques that allow data to be displayed in AR in a co-localized manner, identifying their benefits and drawbacks as reported in the scientific literature. The sketch below makes the overlap issue concrete.
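To make the overlap issue concrete, here is a small illustrative Python sketch with made-up screen-space rectangles and a deliberately naive push-down decluttering strategy; it is our own toy example, not a technique from the references below.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical screen-space rectangles (x, y, width, height) of AR labels
# after projecting their 3D anchors into the current view.
labels = np.array([
    [100.0, 120.0, 80.0, 30.0],
    [150.0, 130.0, 80.0, 30.0],
    [400.0, 300.0, 80.0, 30.0],
])

def overlaps(a: np.ndarray, b: np.ndarray) -> bool:
    """Axis-aligned rectangle intersection test."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2]
            and a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def declutter(labels: np.ndarray, dy: float = 5.0, max_iter: int = 100) -> np.ndarray:
    """Naively push later labels downward until no two overlap."""
    out = labels.copy()
    for _ in range(max_iter):
        moved = False
        for i in range(len(out)):
            for j in range(i + 1, len(out)):
                if overlaps(out[i], out[j]):
                    out[j, 1] += dy     # move the later label down a little
                    moved = True
        if not moved:
            break
    return out

print(declutter(labels))  # the second label ends up below the first
</syntaxhighlight>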

[1] Olshannikova, Ekaterina; Ometov, Aleksandr; Koucheryavy, Yevgeni; Olsson, Thomas: Visualizing Big Data with augmented and virtual reality: challenges and research agenda. In: Journal of Big Data, Vol. 2, SpringerOpen (2015), No. 1, pp. 1–27.

[2] Hedley, Nicholas R.; Billinghurst, Mark; Postner, Lori; May, Richard; Kato, Hirokazu: Explorations in the use of augmented reality for geographic visualization. In: Presence: Teleoperators and Virtual Environments, Vol. 11 (2002), No. 2, pp. 119–133.

[3] Olshannikova, Ekaterina; Ometov, Aleksandr; Koucheryavy, Yevgeni: Towards big data visualization for augmented reality. In: Proceedings of the 16th IEEE Conference on Business Informatics (CBI 2014), Vol. 2, IEEE (2014), pp. 33–37. ISBN 9781479957781.

[4] Miranda, Brunelli P.; Queiroz, Vinicius F.; Araújo, Tiago D.O.; Santos, Carlos G.R.; Meiguins, Bianchi S.: A low-cost multi-user augmented reality application for data visualization. In: Multimedia Tools and Applications, Vol. 81, Springer (2022), No. 11, pp. 14773–14801.

[5] Martins, Nuno Cid; Marques, Bernardo; Alves, João; Araújo, Tiago; Dias, Paulo; Santos, Beatriz Sousa: Augmented reality situated visualization in decision-making. In: Multimedia Tools and Applications, Vol. 81, Springer (2022), No. 11, pp. 14749–14772.

=== How can collaboration in mixed reality benefit from the use of heterogeneous devices? ===
* Teachers: [mailto:cedric.fleury@imt-atlantique.fr Cédric Fleury] and [mailto:etienne.peillard@imt-atlantique.fr Etienne Peillard]
* Subject related to an internship: yes ([[Stages|Perception of Shared Spaces in Collaborative Augmented Reality]])
* '''Internship assigned'''

The massive development of display technologies makes a wide range of new devices, such as mobile phones, AR/VR headsets, and large displays, available to the general public. These devices offer many opportunities for co-located and remote collaboration on physical and digital content. Some can handle groups of co-located users [10, 13], while others enable remote users to connect in various situations [3, 6, 11, 14]. For example, some previous systems allow users to use a mobile device to interact with a co-located partner wearing a VR headset [4, 7]. Other systems enable users in VR to guide a remote collaborator using an AR headset [1, 8, 9, 12].

[1] H. Bai, P. Sasikumar, J. Yang, and M. Billinghurst. "A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing". Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’20), 2020.

[2] H. H. Clark, and S. E. Brennan. "Grounding in communication". In: L. B. Resnick, J. M. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). American Psychological Association. 1991.

[3] C. Fleury, T. Duval, V. Gouranton, A. Steed. "Evaluation of Remote Collaborative Manipulation for Scientific Data Analysis", ACM Symposium on Virtual Reality Software and Technology (VRST’12), 2012.

[4] J. Gugenheimer, E. Stemasov, J. Frommel, and E. Rukzio. "ShareVR: Enabling Co-Located Experiences for Virtual Reality between HMD and Non-HMD Users". Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), 2017.

[5] J. Hollan and S. Stornetta. "Beyond being there". In: Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI’92), 1992.

[6] B. T. Kumaravel, F. Anderson, G. Fitzmaurice, B. Hartmann, and T. Grossman. "Loki: Facilitating Remote Instruction of Physical Tasks Using Bi-Directional Mixed-Reality Telepresence". Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '19), 2019.

[7] B. T. Kumaravel, C. Nguyen, S. DiVerdi, and B. Hartmann. "TransceiVR: Bridging Asymmetrical Communication Between VR Users and External Collaborators". Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '20), 2020.

[8] M. Le Chénéchal, T. Duval, J. Royan, V. Gouranton, and B. Arnaldi. “Vishnu: Virtual Immersive Support for HelpiNg Users - An Interaction Paradigm for Remote Collaborative Maintenance in Mixed Reality”. Proceedings of 3DCVE 2016 (IEEE VR 2016 International Workshop on 3D Collaborative Virtual Environments). 2016.

[9] M. Le Chénéchal, T. Duval, V. Gouranton, J. Royan, and B. Arnaldi. “The Stretchable Arms for Collaborative Remote Guiding”. Proceedings of ICAT-EGVE 2015, Eurographics. 2015.

[10] C. Liu, O. Chapuis, M. Beaudouin-Lafon, and E. Lecolinet. “Shared Interaction on a Wall-Sized Display in a Data Manipulation Task.” In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. CHI ’16.

[11] P. Mohr, S. Mori, T. Langlotz, B. H. Thomas, D. Schmalstieg, and D. Kalkofen. "Mixed Reality Light Fields for Interactive Remote Assistance". Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI’20), 2020.

[12] O. Oda, C. Elvezio, M. Sukan, S. Feiner, and B. Tversky. "Virtual Replicas for Remote Assistance in Virtual and Augmented Reality". Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15), 2015.

[13] Y. Okuya, O. Gladin, N. Ladévèze, C. Fleury, P. Bourdot. "Investigating Collaborative Exploration of Design Alternatives on a Wall-Sized Display", ACM Conference on Human Factors in Computing Systems (CHI’20), 2020.

[14] H. Xia, S. Herscher, K. Perlin, and D. Wigdor. "Spacetime: Enabling Fluid Individual and Collaborative Editing in Virtual Reality". Proceedings of the ACM Symposium on User Interface Software and Technology (UIST ’18), 2018.