AMIT KUMAR PANDEY
 Ph.D. (Robotics and AI)
Socially Intelligent Robots and Societal Applications, Human Robot Interaction, Cognitive Architecture, Learning

 
akpandey@aldebaran-robotics.com
amit.kr.pandey@gmail.com
 
 
43, rue du Colonel Pierre Avia
75015 Paris
France
 
Head Principal Scientist (Chief Scientist)
Scientific Coordinator - Collaborative Projects
 
SoftBank Robotics (formerly Aldebaran Robotics)
Paris, France
 
 
Book/Monograph
Journal/Book Chapter/Editorial
Conference/Workshop

Book/Monograph

Towards Socially Intelligent Robots in Human-Centered Environment
Springer Tracts in Advanced Robotics (STAR). (Under publication)
Amit Kumar Pandey

Abstract

Journal/Book Chapter/Editorial

Developmental Social Robotics: An Applied Perspective
International Journal of Social Robotics, August 2015, Volume 7, Issue 4, pp 417-420

   
For robots to coexist with us in harmony and be our companions, they should be able to explicitly reason about humans, their presence, the social and human-centered environment, and socio-cultural norms, so as to behave in a socially expected and accepted manner. To develop such capabilities, we can identify, from psychology, child development and human behavioral research, some of the key ingredients, such as the abilities to distinguish between self and others and to reason about affordance, perspective taking, shared spaces, social signals, emotions, theory of mind and social situations, and the capability to develop social intelligence through the process of social learning. Researchers across the world are working to equip robots with some of these aspects from diverse perspectives and are developing various interesting and innovative applications. This special issue is intended to reflect some of those high-quality research works, results and potential applications.
    

@article{Amit_Kumar_Pandey_DevSoR_IJSR_2015,
year={2015},
issn={1875-4791},
journal={International Journal of Social Robotics},
volume={7},
number={4},
doi={10.1007/s12369-015-0312-0},
title={Developmental Social Robotics: An Applied Perspective},
url={http://dx.doi.org/10.1007/s12369-015-0312-0},
publisher={Springer Netherlands},
author={Pandey, Amit Kumar and Alami, Rachid and Kawamura, Kazuhiko},
pages={417-420},
language={English}
}

Abstract
Amit Kumar Pandey, Rachid Alami and Kazuhiko Kawamura
Ingredients and a Framework of Dexterous Manipulation Skills for Robots in Human Centered Environment and HRI
Journal of the Robotics Society of Japan, Volume 32, No. 4 (RSJ 2014)
Amit Kumar Pandey and Rachid Alami

Planning manipulation tasks in a human-centered environment requires reasoning beyond the stability of grasp and placement, by incorporating important aspects of the human's perspective. In this paper, we identify some basic constraints that a human-centered dexterous manipulation task planner should take into account. We then present some key system components of such a task planner, followed by a generalized framework for task planning. We also show one instantiation of the framework and present results with different robots, in different situations, for a set of different types of tasks, such as Show, Hide, Give and Make Accessible an object to a human.

@article{Amit_Kumar_Pandey_2014_32_347,
title={Ingredients and a Framework of Dexterous Manipulation Skills for Robots in Human Centered Environment and HRI},
author={Amit Kumar Pandey and Rachid Alami},
journal={Journal of the Robotics Society of Japan},
volume={32},
number={4},
pages={347-353},
year={2014},
doi={10.7210/jrsj.32.347}
}

Abstract
Towards Human-Level Semantics Understanding of Human-Centered Object Manipulation Tasks for HRI: Reasoning About Effect, Ability, Effort and Perspective Taking 
International Journal of Social Robotics, Volume 6(4), pages 593-620 (IJSR 2014)

In its lifetime, a robot should be able to autonomously understand the semantics of different tasks to effectively perform them in different situations. In this context, it is important to distinguish the meaning (in terms of the desired effect) of a task from the means to achieve that task. Our focus is on those tasks in which one agent is required to perform a task for another agent, such as give, show, hide, make-accessible, etc. In this paper, we identify that a high-level human-centered combined reasoning, based on perspective taking, effort and ability analyses, is the key to understanding the semantics of such tasks. By combining these aspects, the robot infers sets of hierarchies of facts, which serve for analyzing the effect of a task. We adapt the explanation-based learning approach, enabling task understanding from the very first demonstration and continuous refinement with new demonstrations. We argue that such symbolic-level understanding of a task, which is not bound to the trajectory, kinematic structure or shape of the robot, facilitates generalization to novel situations as well as eases the transfer of acquired knowledge among heterogeneous robots. Further, the knowledge of tasks at such a human-understandable level of abstraction will enrich natural human-robot interaction.

   
  
@article{Amit_Kumar_Pandey_Task_Semantics_Learning_IJSR_2014,
year={2014},
issn={1875-4791},
journal={International Journal of Social Robotics},
volume={6},
number={4},
title={Towards Human-Level Semantics Understanding of Human-Centered Object Manipulation Tasks for HRI: Reasoning About Effect, Ability, Effort and Perspective Taking},
publisher={Springer Netherlands},
keywords={Perspective taking; Ability analysis; Effort analysis; Emulation learning; Effect understanding; Social learning; Explanation based learning; Task semantics},
author={Pandey, Amit Kumar and Alami, Rachid},
pages={593-620},
language={English}
}
   
    

Abstract
Amit Kumar Pandey and Rachid Alami
Lovotics, the Uncanny Valley and the Grand Challenges
Journal of Lovotics, Volume 1(1), 2014

   
@article{Amit_Kumar_Pandey_Lovotics_2014,
year={2014},
journal={Journal of Lovotics},
volume={1},
number={1},
title={Lovotics, the Uncanny Valley and the Grand Challenges},
publisher={OMICS},
author={Pandey, Amit Kumar and Alami, Rachid},
pages={1-3},
language={English}
}
    

Lovotics, a relatively new direction of robotics research, aims to bring love, affection and friendship between the human and the robot. In this paper, we discuss the key aspects and raise some basic questions that must be addressed for designing a 'lovotics robot', which is expected to be capable of stimulating a mutual love-like bond between the human and the robot. We must also be careful not to fall into the uncanny valley.

Amit Kumar Pandey
Abstract
Towards Task-Aware Proactive-Sociable Robot based on Multi-State Perspective-Taking 
International Journal of Social Robotics, Volume 5, Issue 2, Page 215 - 236 (IJSR-2013)

Robots are expected to cooperate with humans in day-to-day interaction. One aspect of such cooperation is behaving proactively. In this paper we will enable our robots, equipped with visuo-spatial perspective-taking capabilities, to behave proactively based on reasoning ‘where’ its human partner might perform a particular task with different effort levels. For this, the robot analyzes the agents’ abilities not only from the current state but also from a set of different states the agent might attain.
Depending on the task and the situation, the robot exhibits different types of proactive behaviors, such as reaching out, suggesting a solution and providing clues by head movement, for two different tasks performed by the human partner: give and make accessible. These proactive behaviors are intended to be informative to reduce confusion of the human partner, to communicate the robot’s ability and intention and to guide the partner for better cooperation.
We have validated the behaviors by user studies, which suggest that such proactive behaviors reduce the ‘confusion’ and ‘effort’ of the users. Further, the participants reported the robot to be more ‘supportive and aware’ compared to the situations where the robot was non-proactive.
Such proactive behaviors could enrich multi-modal interaction and cooperation capabilities of the robot as well as help in developing more complex socially expected and accepted behaviors in the human centered environment.

   
@article{Amit_Kumar_Pandey_Proactive_Social_Robot_IJSR_2014,
year={2013},
issn={1875-4791},
journal={International Journal of Social Robotics},
volume={5},
number={2},
doi={10.1007/s12369-013-0181-3},
title={Towards a Task-Aware Proactive Sociable Robot Based on Multi-state Perspective-Taking},
url={http://dx.doi.org/10.1007/s12369-013-0181-3},
publisher={Springer Netherlands},
keywords={Proactive robot; Human-robot interaction; Social robot; Multi-state perspective taking},
author={Pandey, Amit Kumar and Ali, Muhammad and Alami, Rachid},
pages={215-236},
language={English}
}
    

Abstract
Amit Kumar Pandey, Muhammad Ali and Rachid Alami
Human-Aware Robot Navigation: A Survey 
Robotics and Autonomous Systems, Volume 61, Issue 12, (2013), Pages 1726-1743 (RAS-2013)

Navigation is a basic skill for autonomous robots. In recent years, human-robot interaction has become an important research field that spans all of the robot's capabilities, including perception, reasoning, learning, manipulation and navigation. For navigation, the presence of humans requires novel approaches that take into account the constraints of human comfort as well as social rules. Besides these constraints, putting robots among humans opens new interaction possibilities for robots, also for navigation tasks such as robot guides. This paper provides a survey of existing approaches to human-aware navigation and offers a general classification scheme for the presented methods.

   
@article{Amit_Kumar_Pandey_Human_Aware_Navigation_Survey_RAS_2013,
title = "Human-aware robot navigation: A survey ",
author = "Thibault Kruse and Amit Kumar Pandey and Rachid Alami and Alexandra Kirsch",
journal = "Robotics and Autonomous Systems ",
volume = "61",
number = "12",
pages = "1726 - 1743",
year = "2013",
note = "",
issn = "0921-8890",
doi = "http://dx.doi.org/10.1016/j.robot.2013.05.007",
url = "http://www.sciencedirect.com/science/article/pii/S0921889013001048",
keywords = {Autonomous robot, Human-aware, Human-centered environment, Navigation, Survey},
}
    

Abstract
Thibault Kruse, Amit Kumar Pandey, Rachid Alami, Alexandra Kirsch
Mightability: A Multi-state Visuo-spatial Reasoning for Human-Robot Interaction
Experimental Robotics, Springer Tracts in Advanced Robotics, Khatib, Oussama; Kumar, Vijay; Sukhatme, Gaurav (Eds.), Volume 79 (2014), Pages 49-63

We, the Humans, are capable of estimating various abilities of ourselves and of the person we are interacting with. Visibility and reachability are two such abilities. Studies in neuroscience and psychology suggest that from the age of 12-15 months children start to understand the occlusion of others' line-of-sight, and from the age of 3 years they start to develop the ability, termed perceived reachability, for self and for others. As such capabilities evolve in children, they start showing intuitive and proactive behavior by perceiving various abilities of the human partner.
Inspired by such studies, which suggest that visuo-spatial perception plays an important role in Human-Human interaction, we propose to equip our robot to perceive various types of abilities of the agents in the workspace. The robot perceives such abilities not only from the current state of the agent but also by virtually putting an agent into various achievable states, such as turning left or standing up. As the robot estimates what an agent might be able to ‘see’ and ‘reach’ if it were in a particular state, we term such analyses Mightability Analyses. Currently the robot performs such Mightability analyses at two levels: cells in a 3D grid and objects in the space, which we term Mightability Maps (MM) and Object-Oriented Mightabilities (OOM) respectively.
We have shown the applications of Mightability analyses in performing various co-operative tasks, like showing and making an object accessible to the human, as well as competitive tasks, like hiding and putting away an object from the human. Such Mightability analyses equip the robot for higher-level learning and decisional capabilities, and could facilitate better verbalized interaction and proactive behavior.
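To make the two-level analysis concrete, here is a minimal sketch, with hypothetical names and deliberately simplified geometry (straight-line visibility, spherical reach), of computing a grid-level Mightability Map over a set of achievable virtual states:

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class AgentState:
    name: str        # e.g. "current", "stand_up", "lean_forward"
    eye: tuple       # viewpoint (x, y, z) in this state
    shoulder: tuple  # reach origin (x, y, z) in this state
    reach: float     # arm reach radius in this state

def dist(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def mightability_map(states, grid_cells, is_occluded):
    # Grid-level analysis: for every cell, record in which virtual
    # states the agent might 'see' and/or 'reach' it.
    return {cell: {s.name: {"see": not is_occluded(s.eye, cell),
                            "reach": dist(s.shoulder, cell) <= s.reach}
                   for s in states}
            for cell in grid_cells}

# Toy usage: a coarse 1 m^3 grid, no occlusions, two virtual states.
cells = [(x / 2, y / 2, z / 2) for x, y, z in product(range(3), repeat=3)]
states = [AgentState("current", (0, 0, 0.6), (0, 0, 0.5), 0.7),
          AgentState("stand_up", (0, 0, 1.6), (0, 0, 1.4), 0.9)]
mm = mightability_map(states, cells, is_occluded=lambda eye, cell: False)
print(mm[(0.5, 0.5, 0.5)])

The object-level analysis (OOM) would aggregate the same per-state tests over object poses instead of grid cells.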

   
@incollection{Amit_Kumar_Pandey_Mightability_STAR_2014,
year={2014},
isbn={978-3-642-28571-4},
booktitle={Experimental Robotics},
volume={79},
series={Springer Tracts in Advanced Robotics},
editor={Khatib, Oussama and Kumar, Vijay and Sukhatme, Gaurav},
doi={10.1007/978-3-642-28572-1_4},
title={Mightability: A Multi-state Visuo-spatial Reasoning for Human-Robot Interaction},
url={http://dx.doi.org/10.1007/978-3-642-28572-1_4},
publisher={Springer Berlin Heidelberg},
author={Pandey, Amit Kumar and Alami, Rachid},
pages={49-63},
language={English}
}
    

Abstract
Amit Kumar Pandey and Rachid Alami
Towards Grounding Human-Robot Interaction
Bridges between the Methodological and Practical Work of the Robotics and Cognitive Systems Communities - From Sensors to Concepts, Springer Publishing, 2012. (Under publication)

<<< To be Added >>

   
@inbook{lemaignan_2012_Grounding_HRI,
 chapter = {Towards Grounding Human-Robot Interaction},
title = {Bridges between the Methodological and Practical Work of the Robotics and Cognitive Systems Communities - From Sensors to Concepts},
publisher = {Springer Publishing},
year = {2012},
editor = {Amirat, T. and Chibani, A. and Zarri, G. P.},
author = {Lemaignan, S. and Alami, R. and Pandey, A. K. and Warnier, M. and Guitton, J.},
series = {Intelligent Systems Reference Library},
note = {To be published}
}
    

Abstract
Severin Lemaignan, Rachid Alami, Amit Kumar Pandey, Matthieu Warnier and J. Guitton
Link Graph and Feature Chain Based Robust Online SLAM for Fully Autonomous Mobile Robot Navigation System Using Sonar Sensors
Recent Progress in Robotics: Viable Robotic Service to Human, Lecture Notes in Control and Information Sciences, Volume 370 (2008), pp 113-131

Local localization of a fully autonomous mobile robot in a partial map is an important aspect from the viewpoint of accurate map building and safe path planning. The problem of correcting the location of a robot in a partial map worsens when sonar sensors are used. When a mobile robot is exploring the environment autonomously, it is rare to get consistent pairs of features or readings from two different positions using sonar sensors. So approaches that rely on matching readings or features are prone to fail without exhaustive mathematical modeling of the sonar and the environment. This paper introduces a link-graph based, robust, two-step feature-chain based localization for achieving online SLAM (Simultaneous Localization And Mapping) using sonar data only. Instead of relying completely on feature-to-feature or point-to-point matching, our approach finds possible associations between features to localize. The link-graph based approach removes many false associations, enhancing the SLAM process. We also map features onto the Occupancy Grid (OG) framework, taking advantage of its dense representation of the world. Combining features with the OG overcomes many of its limitations, such as the independence assumption between cells, and provides better modeling of the sonar, yielding more accurate maps.
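To illustrate the link-graph intuition, the sketch below keeps only mutually consistent candidate associations, under the simplifying assumptions that features are 2D points and that a correct pair of associations preserves inter-feature distance; the greedy filtering is an illustration, not the paper's exact algorithm.

def consistent(a, b, tol=0.1):
    # Two associations (obs_point, map_point) are mutually consistent if
    # they preserve the inter-feature distance (rigid-body assumption).
    (p1, q1), (p2, q2) = a, b
    d_obs = ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5
    d_map = ((q1[0] - q2[0]) ** 2 + (q1[1] - q2[1]) ** 2) ** 0.5
    return abs(d_obs - d_map) < tol

def filter_associations(candidates):
    # Greedy link-graph style filtering: score each candidate by how many
    # others it agrees with, then keep a mutually consistent subset.
    scores = {c: sum(consistent(c, o) for o in candidates if o is not c)
              for c in candidates}
    kept = []
    for c in sorted(candidates, key=scores.get, reverse=True):
        if all(consistent(c, k) for k in kept):
            kept.append(c)
    return kept

# Each candidate pairs an observed feature with a map feature; the last
# association is false and gets filtered out.
obs_to_map = [((0.0, 0.0), (0.05, 0.0)),
              ((1.0, 0.0), (1.02, 0.0)),
              ((2.0, 1.0), (5.00, 5.0))]
print(filter_associations(obs_to_map))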

@incollection{Amit_Kumar_Pandey_SLAM_LNCIS_2008,
year={2008},
isbn={978-3-540-76728-2},
booktitle={Recent Progress in Robotics: Viable Robotic Service to Human},
volume={370},
series={Lecture Notes in Control and Information Sciences},
editor={Lee, Sukhan and Suh, IlHong and Kim, MunSang},
doi={10.1007/978-3-540-76729-9_10},
title={Link Graph and Feature Chain Based Robust Online SLAM for Fully Autonomous Mobile Robot Navigation System Using Sonar Sensors},
url={http://dx.doi.org/10.1007/978-3-540-76729-9_10},
publisher={Springer Berlin Heidelberg},
keywords={SLAM; Sonar; Autonomous Mobile Robot; Feature Chain; Link Graph},
author={Pandey, Amit Kumar and Krishna, K. Madhava},
pages={113-131},
language={English}
}

Abstract
Amit Kumar Pandey and K. Madhava Krishna

Conference/Workshop

When a Social Robot might Learn to Support Potentially Immoral Behaviors in the name of Privacy
The Dilemma of Privacy vs. Ethics for a Socially Intelligent Robot
Privacy-Sensitive Robotics 2017, HRI 2017 


Robots are becoming commonplace. They are also becoming capable of learning. The combination of these, from one perspective, might also be problematic. What if someone teaches a robot some ‘bad’ things? As a precautionary measure, a robot could be pre-programmed not to learn a list of ‘bad’ things. But on the other side, robots will have to be programmed to support the privacy of the people. What if someone uses the ‘privacy’ channel to teach ‘bad’ things, even as bad as making the robot part of supporting potentially unethical and immoral behaviors? This paper illustrates such possibilities through a simple human-robot interaction based robot learning system. The aim is to proactively draw the attention of the community towards such possible future threats and how to address them scientifically. The presented system is part of an ongoing study about how people expect a social robot to behave when there is a dilemma of Privacy vs. Moral, Social and Ethical accountability.

Abstract
Amit Kumar Pandey, Rodolphe Gelin, Martina Ruocco, Marco Monforte, and Bruno Siciliano
A Human-Robot Competition: Towards Evaluating Robots’ Reasoning Abilities for HRI
The Eighth International Conference on Social Robotics (ICSR 2016)

 
@Inbook{Pandey2016,
author="Pandey, Amit Kumar
and de Silva, Lavindra
and Alami, Rachid",
editor="Agah, Arvin
and Cabibihan, John-John
and Howard, Ayanna M.
and Salichs, Miguel A.
and He, Hongsheng",
title="A Human-Robot Competition: Towards Evaluating Robots' Reasoning Abilities for HRI",
bookTitle="Social Robotics: 8th International Conference, ICSR 2016, Kansas City, MO, USA, November 1-3, 2016 Proceedings",
year="2016",
publisher="Springer International Publishing",
address="Cham",
pages="138--147",
isbn="978-3-319-47437-3",
doi="10.1007/978-3-319-47437-3_14",
url="http://dx.doi.org/10.1007/978-3-319-47437-3_14"

    
For effective Human-Robot Interaction (HRI), a robot should be human and human-environment aware. Perspective taking, effort analysis and affordance analysis are some of the core components in such human-centered reasoning. This paper is concerned with the need for benchmarking scenarios to assess the resultant intelligence, when such reasoning blocks function together. Despite the various competitions involving robots, there is a lack of approaches considering the human in their scenarios and in the reasoning processes, especially those targeting HRI. We present a game that is centered upon a human-robot competition, and motivate how our scenario, and the idea of a robot and a human competing, can serve as a benchmark test for both human-aware reasoning as well as inter-robot social intelligence. Based on subjective feedback from participants, we also provide some pointers and ingredients for evaluation matrices.
  

Abstract
Amit Kumar Pandey, Lavindra De Silva, and Rachid Alami
A Novel Concept of Human-Robot Competition for HRI Reasoning: Where Does It Point?
ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016)(LBR)

@inproceedings{Pandey:2016:NCH:2906831.2906942,
author = {Pandey, Amit Kumar and de Silva, Lavindra and Alami, Rachid},
title = {A Novel Concept of Human-Robot Competition for Evaluating a Robot's Reasoning Capabilities in HRI},
booktitle = {The Eleventh ACM/IEEE International Conference on Human Robot Interaction},
series = {HRI '16},
year = {2016},
isbn = {978-1-4673-8370-7},
location = {Christchurch, New Zealand},
pages = {491--492},
numpages = {2},
url = {http://dl.acm.org/citation.cfm?id=2906831.2906942},
acmid = {2906942},
publisher = {IEEE Press},
address = {Piscataway, NJ, USA},
keywords = {affordance, affordance graph, ai, cognitive robotics, evaluation and benchmarking, hri, human robot interaction, perspective taking, robot competition, social robotics, socially intelligent robots},
}

For intelligent Human-Robot Interaction (HRI), a robot should be equipped with some core reasoning capabilities such as perspective taking, effort analysis, and affordance analysis. This paper starts to explore how a robot equipped with such reasoning abilities could be evaluated. To this end, inspired by the Turing test, we design a game involving a human-robot competition scenario. Interestingly, the participants' subjective feedback, which tended to compare the robot's abilities with their own, points toward potential criteria for developing benchmark scenarios and evaluation matrices.

Abstract
Amit Kumar Pandey, Lavindra De Silva, and Rachid Alami
Towards Evaluating Human Robot Dialog based Affordance Learning and the Challenges
7th International Workshop on Spoken Dialogue Systems (IWSDS 2016)

Under construction

Abstract
Amit Kumar Pandey,  Coline Le Dantec, and Rodolphe Gelin
Human Robot Interaction can Boost Robot's Affordance Learning: A Proof of Concept
International Conference on Advanced Robotics (ICAR 2015), pp 642-648

Affordance, being one of the key building blocks behind how we interact with the environment, is also studied widely in robotics from different perspectives, for navigation, for task planning, etc. However, such study has mostly focused on affordances of individual objects and on robot-environment interaction, and such affordances have mostly been perceived through vision and physical interaction. In a human-centered environment, for a robot to be socially intelligent and exhibit more natural interaction behavior, it should also be able to learn affordances through day-to-day verbal interaction, and from the perspective of what the presence of a specific set of objects affords. In this paper, we present the novel idea of verbal interaction based multi-object affordance learning and a framework to achieve it. Further, an instantiation of the framework on a real robot within an office context is analyzed. Some potential future works and applications, such as fusing with activity patterns and interaction grounding, are briefly discussed.
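A rough sketch of what the framework's learned knowledge could look like as a data structure: each verbal statement maps a set of co-present objects to an afforded activity, and a query matches stored sets against the objects currently perceived. All names here are hypothetical.

from collections import defaultdict

class MultiObjectAffordances:
    # Learned mapping from a *set* of co-present objects to afforded
    # activities, accumulated from verbal statements.
    def __init__(self):
        self.store = defaultdict(set)

    def learn(self, objects, activity):
        # e.g. from "one can write a letter where there are a pen and paper"
        self.store[frozenset(objects)].add(activity)

    def query(self, visible_objects):
        # Everything afforded by some subset of the visible objects.
        seen = set(visible_objects)
        return {act for objs, acts in self.store.items()
                if objs <= seen for act in acts}

m = MultiObjectAffordances()
m.learn({"pen", "paper"}, "write")
m.learn({"keyboard", "monitor", "mouse"}, "work_on_computer")
print(m.query({"pen", "paper", "cup"}))  # {'write'}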

   
@INPROCEEDINGS{Amit_Kumar_Pandey_Affordance_Learning_HRI_ICAR_2015,
author={Pandey, Amit Kumar and Gelin, Rodolphe},
booktitle={ International Conference on Advanced Robotics (ICAR)},
title={Human robot interaction can boost robot's affordance learning: A proof of concept},
year={2015},
pages={642-648},
keywords={human-robot interaction;humanoid robots;intelligent robots;legged locomotion;path planning;activity pattern;day-to-day verbal interaction;human centered environment;human robot interaction;natural interaction behavior;robot affordance learning;robot environment interaction;robot navigation;robot task planning;socially intelligent robot;verbal interaction based multiobject affordance learning;Cognition;Data mining;Databases;Keyboards;Mice;Monitoring;Robots},
doi={10.1109/ICAR.2015.7251524},
month={July},}    

Abstract
Amit Kumar Pandey and Rodolphe Gelin
L2TOR - Second Language Tutoring using Social Robots
International WS on Educational Robots, International Conference on Social Robotics (WONDER, ICSR 2015)
Tony Belpaeme, James Kennedy, Paul Baxter, Paul Vogt, Emiel E.J. Krahmer, Stefan Kopp, Kirsten Bergmann, Paul Leseman, Aylin C. Küntay, Tilbe Göksun, Amit K. Pandey, Rodolphe Gelin, Petra Koudelkova, Tommy Deblieck

   
This paper introduces a research effort to develop and evaluate social robots for second language tutoring in early childhood. The L2TOR project will capitalise on recent observations in which social robots have been shown to have marked benefits over screen-based technologies in education, both in terms of learning outcomes and motivation. As language acquisition benefits from early, personalised and interactive tutoring, current language tutoring delivery is often ill-equipped to deal with this: classroom resources are at present inadequate to offer one-to-one tutoring with (near) native speakers in educational and home contexts. L2TOR will address this by furthering the science and technology of language tutoring robots. This document describes the main research strands and expected outcomes of the project.
    

   
@inproceedings{Belpaeme_L2TOR_WONDER_ICSR_2015,
author = { Belpaeme, Tony and Kennedy, James  and Baxter, Paul  and Vogt, Paul and Krahmer, Emiel E.J.  and Kopp, Stefan and Bergmann, Kirsten and Leseman, Paul  and Küntay, Aylin C.  and Göksun, Tilbe  and Pandey, Amit K. and Gelin, Rodolphe  and Koudelkova, Petra and Deblieck, Tommy},
title = { L2TOR - Second Language Tutoring using Social Robots},
booktitle = {WONDER Workshop, 2015 International Conference on Social Robotics},
year = {2015}}
    

Abstract
A New Approach to Combined Symbolic-Geometric Backtracking in the Context of Human-Robot Interaction
IEEE International Conference on Robotics and Automation  (ICRA 2014) 
Lavindra De Silva, Mamoun Gharbi, Amit Kumar Pandey and Rachid Alami

   
Bridging the gap between symbolic and geometric planning has received much attention in recent years. An important issue in some of the works that combine the two approaches is finding the right balance between backtracking at the symbolic level versus at the geometric planning level. We present in this work a new approach to interleaved backtracking, where the symbolic planner backtracks to try alternative action branches that naturally map to different geometric solutions. This eliminates the need to “protect” certain symbolic conditions when backtracking at the geometric level, and addresses a completeness issue in our previous approach to interleaved backtracking. We discuss a concrete, non-trivial symbolic-geometric planning example in the context of Human-Robot Interaction, a full implementation of the combined planning technique, and an evaluation of performance as well as the effect of increasing the symbolic-action branching factor.
    

   
@INPROCEEDINGS{Lavindra_Sym_Geo_HRI_ICRA_2014,
author={de Silva, L. and Gharbi, M. and Pandey, A.K. and Alami, R.},
booktitle={Robotics and Automation (ICRA), 2014 IEEE International Conference on},
title={A new approach to combined symbolic-geometric backtracking in the context of human-robot interaction},
year={2014},
pages={3757-3763},
keywords={geometry;human-robot interaction;manipulators;mobile robots;path planning;combined symbolic-geometric backtracking;geometric planning level;human-robot interaction;symbolic planning level;symbolic-action branching factor;Abstracts;Applicators;Context;Libraries;Planning;Robots;Trajectory},
doi={10.1109/ICRA.2014.6907403},
month={May},}
    

Abstract
Romeo2 Project: Humanoid Robot Assistant and Companion for Everyday Life: I. Situation Assessment for Social Intelligence
International workshop on Artificial Intelligence and Cognition, Torino, Italy (AIC 2014)
Amit Kumar Pandey, Rodolphe Gelin, Rachid Alami, Renaud Viry, Axel Buendia, Roland Meertens, Mohamed Chetouani, Laurence Devillers, Marie Tahon, David Filliat, Yves Grenier, Mounira Maazaoui, Abderrahmane Kheddar, Frederic Lerasle, and Laurent Fitte Duval 

   
For a socially intelligent robot, different levels of situation assessment are required, ranging from basic processing of sensor input to high-level analysis of semantics and intention. However, the attempt to combine them all prompts new research challenges and the need for a coherent framework and architecture.
This paper presents the situation assessment aspect of Romeo2, a unique project aiming to bring multi-modal and multi-layered perception to a single system and targeting a unified theoretical and functional framework for a robot companion for everyday life. It also discusses some of the innovation potential that the combination of these various perception abilities adds to the robot's socio-cognitive capabilities.
    

   
@inproceedings{Amit_Kumar_Pandey_Romeo2_AIC2014,
  TITLE = {{Romeo2 Project: Humanoid Robot Assistant and Companion for Everyday Life: I. Situation Assessment for Social Intelligence}},
  AUTHOR = {Pandey, Amit K. and Gelin, Rodolphe and Alami, Rachid and Viry, Renaud and Buendia, Axel and Meertens, Roland and Chetouani, Mohamed and Devillers, Laurence and Tahon, Marie and Filliat, David and Grenier, Yves and Maazaoui, Mounira and Kheddar, Abderrahmane and Lerasle, Fr{\'e}d{\'e}ric and Fitte-Duval, Laurent},
  BOOKTITLE = {{International Workshop on Artificial Intelligence and Cognition, 2nd Edition}},
PUBLISHER = {{CEUR Workshop Proceedings (CEUR-WS.org)}}, VOLUME = {1315}, PAGES = {140-147}, YEAR = {2014}, KEYWORDS = {Situation Assessment ; Socially Intelligent Robot ; Human Robot Interaction ; Robot Companion},
  PDF = { http://ceur-ws.org/Vol-1315/paper12.pdf},
}
    

Abstract
Affordance Graph: A Framework to Encode Effort-based Affordances for day-to-day HRI 
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013)
Amit Kumar Pandey and Rachid Alami

Analyzing affordances has its roots in the socio-cognitive development of primates. Knowing what the environment, including other agents, can offer in terms of action capabilities is important for our day-to-day interaction and cooperation. In this paper, we will merge two complementary aspects of affordances: from the agent-object perspective, what an agent can afford to do with an object, and from the agent-agent perspective, what an agent can afford to do for another agent, and present the unified notion of the Affordance Graph. The graph will encode affordances for a variety of tasks: take, give, pick, put on, put into, show, hide, make accessible, etc. Another novelty will be to incorporate the aspects of effort and perspective-taking in constructing such a graph. Hence, the Affordance Graph will tell about the action capabilities of manipulating objects among the agents and across the places, along with information about the required levels of effort and the potential places. We will also demonstrate some interesting applications.
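To make the encoding concrete, here is a minimal sketch of such a graph with an assumed four-level effort hierarchy: each edge goes from an agent to a target (an object or another agent) and carries a task, the minimal required effort and, optionally, a place. Names and structure are illustrative, not the paper's implementation.

from collections import defaultdict

# Hypothetical effort hierarchy, ordered from least to most demanding.
EFFORTS = ["no_effort", "arm_effort", "torso_effort", "whole_body_effort"]

class AffordanceGraph:
    # Toy unified affordance graph: edges from an agent to an object
    # (agent-object affordance) or to another agent (agent-agent
    # affordance), labeled with a task and the minimal required effort.
    def __init__(self):
        self.edges = defaultdict(list)

    def add(self, agent, target, task, effort, place=None):
        assert effort in EFFORTS
        self.edges[agent].append((target, task, effort, place))

    def affords(self, agent, task, max_effort):
        # Targets the agent can serve with this task within the effort budget.
        limit = EFFORTS.index(max_effort)
        return [(target, e, p) for (target, tk, e, p) in self.edges[agent]
                if tk == task and EFFORTS.index(e) <= limit]

g = AffordanceGraph()
g.add("robot", "bottle", "pick", "arm_effort")
g.add("robot", "human", "give", "torso_effort", place="table_edge")
print(g.affords("robot", "give", max_effort="whole_body_effort"))
# [('human', 'torso_effort', 'table_edge')]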

   
@INPROCEEDINGS{Amit_Kumar_Pandey_Affordance_Graph_IROS_2013,
author={Pandey, Amit Kumar and Alami, Rachid},
booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Affordance graph: A framework to encode perspective taking and effort based affordances for day-to-day human-robot interaction},
year={2013},
pages={2180-2187},
keywords={graph theory;human-robot interaction;affordance graph;agent-agent perspective;agent-object perspective;day-to-day human-robot interaction;effort based affordances;socio-cognitive development;Collision avoidance;Containers;Grippers;Human-robot interaction;Robot sensing systems},
doi={10.1109/IROS.2013.6696661},
ISSN={2153-0858},
month={Nov},}
    

Abstract
An Interface for Interleaved Symbolic-Geometric Planning and Backtracking
 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2013)
Lavindra de Silva, Amit Kumar Pandey and Rachid Alami

While symbolic planners work with an abstract representation of the real world, allowing plans to be constructed relatively quickly, geometric planning - although more computationally complex - is essential for building symbolic plans that actually work in the real world. To combine the two types of systems, we present in this paper a meaningful interface, and insights into a methodology for developing interwoven symbolic-geometric domains. We concretely present this “link” between the two approaches with algorithms and data structures that amount to an intermediate layer that coordinates symbolic-geometric planning. Since both planners are capable of “backtracking” at their own levels, we also investigate the issue of how to interleave their backtracking, which we do in the context of the algorithms that form the link. Finally, we present a prototype implementation of the combined system on a PR2 robot.
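The interleaving of the two backtracking levels can be pictured with a small sketch: candidate symbolic plans form one choice point, and each action's geometric refinements form another, explored depth-first, so that a geometric failure late in a plan first backtracks over earlier geometric alternatives and only then over symbolic alternatives. This is a toy illustration under simplified assumptions (hypothetical action names and refinement function), not the paper's actual interface.

def refine(actions, geom_alternatives, partial=()):
    # Depth-first interleaving: for each symbolic action, try its
    # geometric alternatives; if none works for the remaining actions,
    # control returns here and the next alternative is tried
    # (geometric backtracking).
    if not actions:
        return list(partial)
    head, *tail = actions
    for geo in geom_alternatives(head, partial):      # geometric choice point
        result = refine(tail, geom_alternatives, partial + ((head, geo),))
        if result is not None:
            return result
    return None

def plan(symbolic_plans, geom_alternatives):
    for sym in symbolic_plans:                        # symbolic choice point
        result = refine(sym, geom_alternatives)
        if result is not None:
            return result
    return None

# Toy domain: 'grasp' has two grasps, but only 'top' leaves the object
# placeable, so the planner must backtrack over the first grasp.
def geom_alternatives(action, partial):
    if action == "grasp":
        return ["side", "top"]
    if action == "place":
        return ["on_table"] if dict(partial).get("grasp") == "top" else []
    return []

print(plan([["grasp", "place"]], geom_alternatives))
# [('grasp', 'top'), ('place', 'on_table')]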

@INPROCEEDINGS{6696358, 
author={de Silva, L. and Pandey, A.K. and Alami, R.}, 
booktitle={Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on}, 
title={An interface for interleaved symbolic-geometric planning and backtracking}, 
year={2013}, 
pages={232-239}, 
keywords={computational geometry;data structures;robots;PR2 robot;backtracking;data structures;interleaved symbolic-geometric planning;Abstracts;Compounds;Grippers;Libraries;Planning;Robots;Trajectory}, 
doi={10.1109/IROS.2013.6696358}, 
ISSN={2153-0858}, 
month={Nov},}

Abstract
 On robot decisional abilities for human-robot joint action
5th Joint Action Meeting (JAM 2013)
Aurélie Clodic, Severin Lemaignan, Amit Kumar Pandey, Lavindra de Silva, Mathieu Warnier and Rachid Alami

   
Integrative approach for a robot that acts in interaction with humans
    

   
@INPROCEEDINGS{Clodic_JAM_2013,
author={Clodic, Aurélie and Lemaignan, Severin  and Pandey, Amit Kumar and de Silva, Lavindra and Warnier, Mathieu and Alami, Rachid },
booktitle={ 5th Joint Action Meeting (JAM 2013) },
title={On robot decisional abilities for human-robot joint action},
year={2013}}
    

Abstract
Bottom up development of a robot's basic socio-cognitive abilities for joint action
5th Joint Action Meeting (JAM 2013)
Amit Kumar Pandey, Aurélie Clodic, Lavindra de Silva, Severin Lemaignan, Mathieu Warnier and Rachid Alami 

   
Inspired by child development and behavioral research, we identify and equip our robots with basic yet key socio-cognitive capabilities for joint action:
 
1. Perspective Taking: Reasoning about abilities to reach and see some place or object from others’ perspective. These are central for deciding the “what”, “where” and “how” aspects of joint action.
2. Affordance and Effort Analysis: Reasoning about “what” an agent can afford to do with an object and for other agents, and with “which” effort levels. These are important for planning joint actions.
3. State Analysis: Analyzing the current physical state of an agent, e.g. whether holding something or free, looking around, or focusing on something. These are important for executing and monitoring a joint action.
4. Planning Basic Joint Tasks: Planning day-to-day tasks, e.g. giving, showing, or hiding some object, by taking into account how to grasp the object so that the other agent can take it, how to hold/place it so that the other agent can recognize it. These are important for the success of the joint action.
5. Proactivity for Joint Tasks: For common tasks like ‘give’ or ‘make accessible’, it helps if the receiver agent proactively reaches out to take, or suggest where to put. We found that such proactive behaviors reduce the effort and confusion of the human partner in the joint action.
We claim that, altogether, these greatly elevate the robot's collaborative and joint task planning and execution capabilities towards being socially acceptable.
    

   
@INPROCEEDINGS{Amit_Kumar_Pandey_Robot_Joint_Action_2013,
author={Pandey, Amit Kumar and Clodic, Aurélie and de Silva, Lavindra and Lemaignan, Severin and Warnier, Mathieu and Alami, Rachid },
booktitle={ 5th Joint Action Meeting (JAM 2013) },
title={Bottom up development of a robot's basic socio-cognitive abilities for joint action},
year={2013}}    

Abstract
Towards combining HTN planning and geometric task planning
RSS workshop on Combined Robot Motion Planning and AI Planning for Practical Applications (RSS 2013)

In this paper we present an interface between a symbolic planner and a geometric task planner, which differs from a standard trajectory planner in that it is able to perform geometric reasoning on abstract entities---tasks. We believe that this approach facilitates a more principled interface to symbolic planning, while also leaving more room for the geometric planner to make independent decisions. We show how the two planners could be interfaced, and how their planning and backtracking could be interleaved. We also provide insights into a methodology for using the combined system, and experimental results to use as a benchmark for future extensions to both the combined system and the geometric task planner.

   
@article{DBLP:journals/corr/SilvaPGA13,
  author    = {Lavindra de Silva and
               Amit Kumar Pandey and
               Mamoun Gharbi and
               Rachid Alami},
  title     = {Towards Combining {HTN} Planning and Geometric Task Planning},
  journal   = {CoRR},
  volume    = {abs/1307.1482},
  year      = {2013},
  url       = {http://arxiv.org/abs/1307.1482},
  timestamp = {Thu, 15 Aug 2013 15:30:54 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/journals/corr/SilvaPGA13},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}   

Abstract
Lavindra de Silva, Amit Kumar Pandey, Mamoun Gharbi and Rachid Alami
Taskability Graph: Towards Analyzing Effort based Agent-Agent Affordances
21st IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man 2012)

Affordance analysis, i.e. what something or someone can afford or offer, is an important aspect of day-to-day interaction and decision-making. In this paper, we will enrich the notion of affordance by incorporating agent-agent affordance: what an agent affords for another agent in terms of a task. Further, we will present an effort hierarchy and derive the concept of the Taskability Graph, which encodes what all agents could do for all other agents, with which levels of mutual effort and at which places. This makes the robot more aware of agents' abilities and facilitates the development of better interaction and decision-making capabilities. We will discuss the potential application in effort-based shared cooperative planning.

   
@INPROCEEDINGS{Amit_Kumar_Pandey_Taskability_Graph_RoMan_2012,
author={Pandey, A.K. and Alami, R.},
booktitle={RO-MAN, 2012 IEEE},
title={Taskability Graph: Towards analyzing effort based agent-agent affordances},
year={2012},
pages={791-796},
keywords={cognition;decision making;graph theory;human-robot interaction;intelligent robots;decision making;effort-based agent-agent affordance analysis;effort-based shared cooperative planning;mutual-effort levels;robot interaction;taskability graph;Cognition;Human-robot interaction;Humans;Planning;Real-time systems;Robots;Torso},
doi={10.1109/ROMAN.2012.6343848},
ISSN={1944-9445},
month={Sept},}
    

Abstract
Amit Kumar Pandey and Rachid Alami
Visuo-Spatial Ability, Effort and Affordance Analyses: Towards Practical Realization of Building Blocks for Robot’s Complex Socio-Cognitive Behaviors
8th International Cognitive Robotics WS, AAAI-2012 (AAAI-CogRob 2012)
Amit Kumar Pandey and Rachid Alami

For the long-term co-existence of robots with us in complete harmony, they will be expected to show socio-cognitive behaviors. In this paper, taking inspiration from child development research and human behavioral psychology, we will identify basic but key capabilities: perceiving abilities, effort and affordances. Further, we will present the concepts that fuse these components to perform multi-effort ability and affordance analysis. We will show instantiations of these capabilities on a real robot and discuss their potential applications for more complex socio-cognitive behavior.

   
@inproceedings{Amit_Kumar_Pandey_CogRob_AAAI_2012,
     author = {Amit Kumar Pandey and Rachid Alami},
     title = {Visuo-Spatial Ability, Effort and Affordance Analyses: Towards Building Blocks for Robot's Complex Socio-Cognitive Behaviors},
     conference = {AAAI Workshops},
     year = {2012},
     keywords = {Human Robot Interaction; Perspective Taking; Mightability Analysis; Affordance; Social Robot;},
url = {http://www.aaai.org/ocs/index.php/WS/AAAIW12/paper/view/5270}
}
    

Abstract
Towards Planning Human-Robot Interactive Manipulation Tasks: Task Dependent and Human Oriented Autonomous Selection of Grasp and Placement
IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2012)

In a typical Human-Robot Interaction (HRI) scenario, the robot needs to perform various tasks for the human, and hence should take into account human-oriented constraints. In this context, it is not sufficient for the robot to select the grasp and placement of an object from the stability point of view only. Motivated by human behavioral psychology, in this paper we emphasize the mutually dependent nature of grasp and placement selection, which is further constrained by the task, the environment and the human's perspective. We will explore essential human-oriented constraints on grasp and placement selection and present a framework to incorporate them in synthesizing the key configurations for planning basic interactive manipulation tasks.
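As an illustration of this mutual dependence, the sketch below filters candidate (grasp, placement) pairs jointly against task constraints, since a grasp can be ruled out only in combination with the task and the placement; the labels and predicates are hypothetical.

from itertools import product

def select_key_configs(grasps, placements, task, feasible_pair):
    # Grasp and placement are chosen *jointly*, never independently.
    return [(g, p) for g, p in product(grasps, placements)
            if feasible_pair(g, p, task)]

def feasible_pair(grasp, placement, task):
    if task == "show" and grasp == "over_label":
        return False  # this grasp would hide what the human must see
    if task == "give" and grasp == "handle":
        return False  # the human needs the handle free to take the object
    return placement != "occluded_corner"  # human's perspective constraint

print(select_key_configs(["over_label", "side", "handle"],
                         ["table_center", "occluded_corner"],
                         "give", feasible_pair))
# [('over_label', 'table_center'), ('side', 'table_center')]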

   
@INPROCEEDINGS{Amit_Kumar_Pandey_Robot_HRI_Task_Planning_BioRob_2012,
author={Pandey, A.K. and Saut, J.-P. and Sidobre, D. and Alami, R.},
booktitle={4th IEEE RAS EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob)},
title={Towards planning Human-Robot Interactive manipulation tasks: Task dependent and human oriented autonomous selection of grasp and placement},
year={2012},
pages={1371-1376},
month={June},
keywords={human-robot interaction;manipulators;HRI;grasp selection;human behavioral psychology;human oriented autonomous selection;human oriented constraints;human-robot interaction;human-robot interactive manipulation tasks;placement selection;task dependent;Collision avoidance;Humans;Planning;Robots;Shape;Trajectory;Wrist},
doi={10.1109/BioRob.2012.6290776},
ISSN={2155-1774},}
 
    

Abstract
Amit Kumar Pandey, Jean-Philippe Saut, Daniel Sidobre and Rachid Alami
Towards Task Understanding through Multi-State Visuo-Spatial Perspective Taking for Human-Robot Interaction
International Joint Conference on Artificial Intelligence-Workshop on Agents Learning Interactively from Human Teachers (ALIHT, IJCAI 2011).

   
For a lifelong learning robot, in the context of task understanding, it is important to distinguish the ‘meaning’ of a task from the ‘means’ to achieve it.
In this paper we select a set of tasks in a typical Human-Robot Interaction scenario, such as show, hide, make accessible, etc., and illustrate that visuo-spatial perspective taking can be effectively used to understand such tasks' semantics in terms of their ‘effect’. The idea is that, for understanding the ‘effects’, the robot analyzes the reachability and visibility of an agent not only from the current state of the agent but also from a set of virtual states, which the agent might attain with different levels of effort from his/its current state.
We show that such symbolic understanding of tasks can be generalized to new situations or spatial arrangements, and facilitates the 'transfer of understanding' among heterogeneous robots. The robot begins to understand the semantics of the task from the first demonstration and continuously refines its understanding with further examples.
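One way to picture this effect-level understanding: the 'meaning' of a demonstrated task is the change in ability facts about the target agent, independent of the trajectory used. A minimal sketch with hypothetical predicates (visibility and reachability of an object for an agent):

def ability_facts(world, obj, agent):
    # Flatten the boolean abilities into a set of symbolic facts.
    return {f"{ability}({obj}, {agent})"
            for ability, holds in world[(obj, agent)].items() if holds}

# Hypothetical pre/post states of a 'show the cup to the human' demo.
before = {("cup", "human"): {"sees": False, "reaches": False}}
after = {("cup", "human"): {"sees": True, "reaches": False}}

gained = ability_facts(after, "cup", "human") - ability_facts(before, "cup", "human")
lost = ability_facts(before, "cup", "human") - ability_facts(after, "cup", "human")
print("effect of 'show':", {"gained": gained, "lost": lost})
# effect of 'show': {'gained': {'sees(cup, human)'}, 'lost': set()}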
    

   
@inproceedings{Amit_Kumar_Pandey_Robot_Learning_ALIHT_IJCAI_2011,
  title={Towards task understanding through multi-state visuo-spatial perspective taking for human-robot interaction},
  author={Pandey, Amit Kumar and Alami, Rachid},
  booktitle={IJCAI workshop on agents learning interactively from human teachers (ALIHT-IJCAI)},
  year={2011}
}
    

Abstract
Amit Kumar Pandey and Rachid Alami
Towards Multi-State Visuo-Spatial Reasoning based Proactive Human-Robot Interaction
15th International Conference on Advanced Robotics (ICAR 2011)
(finalist for the best student paper award)

Robots are expected to co-operate with humans in day-to-day interaction. One aspect of such co-operation is behaving proactively. In this paper, our robot exploits visuo-spatial perspective-taking of the human partner, not only from his current state but also from a set of different states he might attain from his current state. Such rich information helps the robot better predict `where' the human can perform a particular task and how the robot could support it. We have tested the system on two different robots for the tasks of giving and making an object accessible to the robot by the human partner. Our robots, equipped with such multi-state visuo-spatial perspective-taking capabilities, show different proactive behaviors depending upon the task and situation, such as proactively reaching out, and to a correct place, when the human has to give an object to the robot. Preliminary results of user studies show that such proactive behaviors reduce the human's `confusion', and the robot seems more `aware' of the task and the human.

   
@INPROCEEDINGS{Amit_Kumar_Pandey_Proactive_Robot_ICAR_2011,
author={Pandey, A.K. and Ali, M. and Warnier, M. and Alami, R.},
booktitle={15th International Conference on Advanced Robotics (ICAR)},
title={Towards multi-state visuo-spatial reasoning based proactive human-robot interaction},
year={2011},
pages={143-149},
keywords={human-robot interaction;inference mechanisms;multistate visuo-spatial reasoning;proactive behaviors;proactive human-robot interaction;Cognition;Collision avoidance;Humans;Joints;Robot kinematics;Robot sensing systems},
doi={10.1109/ICAR.2011.6088642},
month={June},}
    

Abstract
Amit Kumar Pandey, Muhammad Ali, Matthieu Warnier and Rachid Alami
Mightability: Multi-State Visuo-Spatial Reasoning for Human-Robot Interaction
12th International Symposium on Experimental Robotics (ISER 2010)

   
@inproceedings{DBLP:conf/iser/PandeyA10,
  author    = {Amit Kumar Pandey and
               Rachid Alami},
  title     = {Mightability: {A} Multi-state Visuo-spatial Reasoning for Human-Robot
               Interaction},
  booktitle = {Experimental Robotics - The 12th International Symposium on Experimental
               Robotics, {ISER} 2010, December 18-21, 2010, New Delhi and Agra, India},
  pages     = {49--63},
  year      = {2010},
  crossref  = {DBLP:conf/iser/2010},
  url       = {http://dx.doi.org/10.1007/978-3-642-28572-1_4},
  doi       = {10.1007/978-3-642-28572-1_4},
  timestamp = {Wed, 21 Aug 2013 21:13:03 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/conf/iser/PandeyA10},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}
    

We, the Humans, are capable of estimating various abilities of ourselves and of the person we are interacting with. Visibility and reachability are two such abilities. Studies in neuroscience and psychology suggest that from the age of 12-15 months children start to understand the occlusion of others' line-of-sight, and from the age of 3 years they start to develop the ability, termed perceived reachability, for self and for others. As such capabilities evolve in children, they start showing intuitive and proactive behavior by perceiving various abilities of the human partner.
Inspired by such studies, which suggest that visuo-spatial perception plays an important role in Human-Human interaction, we propose to equip our robot to perceive various types of abilities of the agents in the workspace. The robot perceives such abilities not only from the current state of the agent but also by virtually putting an agent into various achievable states, such as turning left or standing up. As the robot estimates what an agent might be able to ‘see’ and ‘reach’ if it were in a particular state, we term such analyses Mightability Analyses. Currently the robot performs such Mightability analyses at two levels: cells in a 3D grid and objects in the space, which we term Mightability Maps (MM) and Object-Oriented Mightabilities (OOM) respectively.
We have shown the applications of Mightability analyses in performing various co-operative tasks, like showing and making an object accessible to the human, as well as competitive tasks, like hiding and putting away an object from the human. Such Mightability analyses equip the robot for higher-level learning and decisional capabilities, and could facilitate better verbalized interaction and proactive behavior.

Abstract
Amit Kumar Pandey and Rachid Alami
Mightability Maps: A Perceptual Level Decisional Framework for Co-operative and Competitive Human-Robot Interaction
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010)

   
@INPROCEEDINGS{Amit_Kumar_Pandey_Mightability_Maps_IROS_2010,
author={Pandey, Amit K. and Alami, Rachid},
booktitle={ IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Mightability maps: A perceptual level decisional framework for co-operative and competitive human-robot interaction},
year={2010},
pages={5842-5848},
keywords={human-robot interaction;reachability analysis;co-operative human-robot interaction;competitive human-robot interaction;human-human interaction;perceptual level decisional framework;weighted mightability maps},
doi={10.1109/IROS.2010.5651503},
ISSN={2153-0858},
month={Oct},}
    

Interestingly, humans are able to maintain rough estimations of the visibility, reachability and other capabilities not only of themselves but also of the person they are interacting with. Studies in neuroscience and psychology suggest that from the age of 12-15 months children start to understand the occlusion of others' line-of-sight, and from the age of 3 years they start to develop the ability, termed perceived reachability, for self and for others. As such capabilities evolve in children, they start showing intuitive and proactive behavior by perceiving various abilities of the human partner. Inspired by such studies, which suggest that visuo-spatial perception plays an important role in Human-Human interaction, we propose to equip our robot with the capabilities to maintain various types of reachability and visibility information about itself and about the human partner in the shared workspace. Since these analyses are essentially perceived by performing a virtual action on the agent and roughly estimating what that agent might be able to 'see' and 'reach' in 3D space, we term these representations Mightability Maps. By applying various set operations on Weighted Mightability Maps, the robot can perceive a set of candidate solutions in real time for various tasks. We show its application in exhibiting two different robot behaviors: co-operative and competitive. These maps are also quick to compute and can help in developing higher-level decisional capabilities in the robot.
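The set-operation idea can be shown in a few lines. Assuming each (agent, state) map is reduced to a set of grid cells, co-operative and competitive placements fall out of intersections and differences; this is a toy sketch of the principle, not the weighted implementation:

def show_candidates(robot_reach, human_see):
    # Co-operative 'show': cells the robot can reach AND the human might see.
    return robot_reach & human_see

def hide_candidates(robot_reach, human_see):
    # Competitive 'hide': reachable cells the human might NOT see.
    return robot_reach - human_see

robot_reach = {(1, 1, 1), (1, 2, 1), (2, 2, 1)}
human_see = {(1, 2, 1), (2, 2, 1), (3, 3, 1)}
print(show_candidates(robot_reach, human_see))  # {(1, 2, 1), (2, 2, 1)}
print(hide_candidates(robot_reach, human_see))  # {(1, 1, 1)}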

Abstract
Amit Kumar Pandey and Rachid Alami
A Framework towards a Socially Aware Mobile Robot Motion in Human-Centered Dynamic Environment
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010)

   
@INPROCEEDINGS{Amit_Kumar_Pandey_Socially_Aware_Navigation_IROS_2010,
author={Pandey, Amit K. and Alami, Rachid},
booktitle={ IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={A framework towards a socially aware Mobile Robot motion in Human-Centered dynamic environment},
year={2010},
pages={5855-5860},
keywords={computational geometry;human-robot interaction;inference mechanisms;mobile robots;path planning;social aspects of automation;Voronoi diagram;high level reasoning;human centered dynamic environment;human proximity guideline;motion behavior;socially aware mobile robot},
doi={10.1109/IROS.2010.5649688},
ISSN={2153-0858},
month={Oct},}
    

For a mobile robot to navigate in a human-centered environment without giving an alien-like impression by its motion, it should be able to reason about various criteria ranging from clearance, environment structure, unknown objects, social conventions and proximity constraints to the presence of an individual or a group of people. The robot should also neither be over-reactive nor a simple wait-and-move machine. We have adapted a Voronoi diagram based approach for the analysis of local clearance and environment structure. We also propose to treat humans differently from other obstacles: the robot constructs different sets of regions around a human and iteratively converges to a set of points (milestones), using social conventions, human proximity guidelines and clearance constraints, to generate and modify its path smoothly. Once equipped with such capabilities, the robot is able to do higher-level reasoning for dynamic and selective adaptation of social conventions depending upon the environment segment. It also makes the robot aware of its own motion behavior.
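As a toy illustration of converging to such milestones, the sketch below scores candidate points by an assumed proxemics-style cost around the human plus remaining distance to the goal; the distance bands and weights are hypothetical, not the paper's values.

import math

def proximity_cost(d):
    # Assumed proxemics bands (illustrative numbers only).
    if d < 0.45:
        return float("inf")   # intimate zone: forbidden
    if d < 1.2:
        return 10.0           # personal zone: strongly penalized
    if d < 3.6:
        return 1.0            # social zone: mild penalty
    return 0.0                # public zone: free

def best_milestone(candidates, human, goal):
    # Pick the candidate that balances social distance and progress.
    return min(candidates, key=lambda p: proximity_cost(math.dist(p, human))
                                         + math.dist(p, goal))

candidates = [(1.0, 0.0), (2.0, 0.5), (0.3, 0.2)]
print(best_milestone(candidates, human=(0.0, 0.0), goal=(4.0, 0.0)))
# (2.0, 0.5): far enough from the human, closest to the goal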

Abstract
Amit Kumar Pandey and Rachid Alami
Interleaving Symbolic and Geometric Reasoning for a Robotic Assistant
Combining Action and Motion Planning WS, ICAPS (CAMP-ICAPS 2010)

   
@inproceedings{alili2010interleaving,
  title={Interleaving symbolic and geometric reasoning for a robotic assistant},
  author={Alili, Samir and Pandey, A Kumar and Sisbot, E Akin and Alami, Rachid},
  booktitle={ICAPS Workshop on Combining Action and Motion Planning},
  year={2010}
}
    

   
   
It is now well known that, while symbolic task planners have been drastically improved to solve more and more complex symbolic problems, the difficulty of successfully applying such planners to robotics problems still remains. Indeed, in such planners, actions such as “navigate” or “grasp” use abstracted applicability situations that might result in finding plans that cannot be refined at the geometric level. This is due to the gap between the representation they are based on and the physical environment (see the pioneering paper (Lozano-Perez, Jones, and Mazer 1987)).
In this paper, we extend this approach and apply it to the challenging context of human-robot cooperative manipulation. We propose a scheme that is still based on the coalition of a symbolic planner (Alili et al. 2009) and a geometric planner (Pandey and Alami 2010; Sisbot, Marin Urias, and Alami 2007; Marin Urias, Sisbot, and Alami 2008) but which provides a more elaborate interaction between the two planning environments.
    
    

Abstract
Samir Alili, Amit Kumar Pandey, E. Akin Sisbot and Rachid Alami
Robot, tell me what you know about...?: Expressing robot's knowledge through interaction
Interactive Communication for Autonomous Intelligent Robots WS, ICRA (ICAIR-ICRA 2010)

   
@inproceedings{ros2010robot,
  title={Robot, tell me what you know about...?: Expressing robot’s knowledge through interaction},
  author={Ros, Raquel and Sisbot, E Akin and Lemaignan, S{\'e}verin and Pandey, Amit and Alami, Rachid},
  booktitle={Proceedings of the ICRA 2010 Workshop on Interactive Communication for Autonomous Intelligent Robots (ICAIR)},
  pages={26--29},
  year={2010}
}
    

   
Explicitly showing the robot’s knowledge about the states of the world and the agents’ capabilities in such states is essential in human robot interaction. This way, the human partner can better understand the robot’s intentions and beliefs in order to provide missing information that may eventually improve the interaction. We present our current approach for modeling the robot’s knowledge from a symbolic point of view based on an ontology. This knowledge is fed by two sources: direct interaction with the human, and geometric reasoning. We present an interactive task scenario where we exploit the robot’s knowledge to interact with the human while showing its internal geometric reasoning when possible.
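As a rough illustration of such a symbolic knowledge base, the sketch below stores subject-predicate-object facts fed by the two sources mentioned (dialogue and geometric reasoning) and answers a "tell me what you know about X" query; the triples and the query function are illustrative assumptions, not the actual ontology-based system.

# Hypothetical triple store standing in for the ontology-backed one.
facts = {
    ("cup", "isOn", "table"),          # asserted by geometric reasoning
    ("cup", "belongsTo", "max"),       # asserted through dialogue
    ("table", "isIn", "living_room"),
}

def tell_me_about(entity):
    # Return every statement in which the entity appears as subject or object.
    return sorted(f for f in facts if entity in (f[0], f[2]))

for subj, pred, obj in tell_me_about("cup"):
    print(subj, pred, obj)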
    

Abstract
Raquel Ros, E. Akin Sisbot, Severin Lemaignan, Amit Kumar Pandey and Rachid Alami
A Framework for Adapting Social Conventions in a Mobile Robot Motion in Human-Centered Environment
14th International Conference on Advanced Robotics (ICAR 2009)

@INPROCEEDINGS{5174708, 
author={A. K. Pandey and R. Alami}, 
booktitle={2009 International Conference on Advanced Robotics}, 
title={A framework for adapting social conventions in a mobile robot motion in human-centered environment}, 
year={2009}, 
pages={1-8}, 
keywords={convergence of numerical methods;human-robot interaction;iterative methods;mobile robots;path planning;convergence;human proximity rule;human-centered environment;iterative method;mobile robot;motion planning;path planning;social convention;task oriented rule;Human robot interaction;Humanoid robots;Large-scale systems;Mobile robots;Motion control;Motion planning;Path planning;Safety;Strategic planning;Trajectory}, 
month={June},}

Interestingly, in different situations, a human not only plans differently for approaching, accompanying, passing by and avoiding another person, but also smoothly maintains an appropriate distance. For a mobile robot this is not trivial at all, especially while also maintaining its goal. In this paper we present a generic framework of mobile robot path planning for adapting social rules at different states of execution, which, apart from assuring safety, also respects the comfort and expectations of the human, and conveys the robot's intention to the human well in advance. In our approach, to treat the human explicitly, the robot constructs different sets of regions around the human and iteratively converges to a set of points (milestones), using social rules, human proximity rules and task-oriented rules, to generate a smooth path. We have compared our results with the case when the robot is purely reactive.

Abstract
Amit Kumar Pandey and Rachid Alami
A Step towards a Sociable Robot Guide which Monitors and Adapts to the Person’s Activities
14th International Conference on Advanced Robotics (ICAR 2009)

@INPROCEEDINGS{5174706, 
author={A. K. Pandey and R. Alami}, 
booktitle={2009 International Conference on Advanced Robotics}, 
title={A step towards a sociable robot guide which monitors and adapts to the person's activities}, 
year={2009}, 
pages={1-8}, 
keywords={intelligent robots;mobile robots;path planning;self-adjusting systems;service robots;adaptive sociable mobile robot guide;goal oriented re-engagement;human desire;human diverse behavior;human will;path planning;person activity monitoring;service robot;Crops;Humanoid robots;Humans;Mobile robots;Monitoring;Robot sensing systems;Switches}, 
month={June},}

This paper presents a framework for a mobile robot guide that provides the human with the flexibility to decide upon the way he wants to be guided. During the guiding process, at any instant, the exploratory nature of the human, social behavior and the individual's desires contribute to the position of the person with respect to the robot. For a robot to behave socially, it should not expect that the human will always follow the exact trajectory of the robot or will always maintain a fixed distance from the robot. Depending upon his will and desire, the human can choose either to accompany or to follow the robot during the guiding. To give more privilege to the human, the robot should also not expect that the human will always support the guiding. The human may temporarily suspend the joint commitment of the guiding process because of other interesting tasks, and may completely abandon the path expected by the robot. Sometimes the structure of the environment can also enforce separation or hide the human. As a ‘social’ guide, the robot should not only tolerate the human's diverse behaviors but also try to adapt its path to support the human's activity, as well as influence the human's path towards the goal. It should take appropriate decisions about when, where and how to deviate, as very frequent or unnatural maneuvers of the robot will make the human feel uncomfortable. In this paper we present a framework for monitoring and adapting to the human's commitment to the joint task, and carrying out appropriate, goal-oriented re-engagement attempts, if required, from the viewpoint of guiding.

Abstract
Amit Kumar Pandey and Rachid Alami
Towards Shared Attention through Geometric Reasoning for Human Robot Interaction
IEEE-RAS International Conference on Humanoid Robots (Humanoids 2009)

Under construction

Abstract
Luis F. Marin-Urias, Emrah Akin Sisbot, Amit Kumar Pandey, Riichiro Tadakuma and Rachid Alami
Towards a Sociable Robot Guide which Respects and Supports the Human Activity
IEEE Conference on Automation Science and Engineering (IEEE- CASE 2009)

Under construction

Abstract
Amit Kumar Pandey and Rachid Alami
On Measurement Models for Line Segments and Point Based SLAM
14th International Conference on Advanced Robotics (ICAR 2009)

Under construction

Abstract
Satish Pedduri, Gururaj Kosuru, K Madhava Krishna and Amit Kumar Pandey
Localizing from multiple hypotheses states minimizing expected path lengths for mobile robots
CLAWAR 2008

Under construction

Abstract
Hemanth Korapatti, S Subhash, K Madhava Krishna and Amit Kumar Pandey
Line Feature Association Technique for Feature Chain based SLAM using Sonar Only Sensors
 IEEE - ISMCR 2008

Under construction

Abstract
Amit Kumar Pandey and K Madhava Krishna
Link Graph and Feature Chain based Robust Online SLAM for Fully Autonomous Mobile Robot Navigation System using Sonar Sensors
International Conference on Advanced Robotics (ICAR 2007)

Under construction

Abstract
Amit Kumar Pandey and K Madhava Krishna
Feature Chain based Occupancy Grid SLAM for Robots Equipped with Sonar Sensors
IEEE-International Conference on Integration of Knowledge Intensive Multi-Agent Systems (KIMAS 2007)

Under construction

Abstract
Amit Kumar Pandey, K Madhava Krishna, and Henry Hexmoor
Feature Based Occupancy Grid Maps for Sonar Based Safe Mapping
International Joint Conference on Artificial Intelligence (IJCAI 2007)

@inproceedings{pandey2007feature,
title={Feature Based Occupancy Grid Maps for Sonar Based Safe-Mapping},
author={Pandey, Amit Kumar and Krishna, K Madhava and Nath, Mainak},
booktitle={IJCAI},
pages={2172},
year={2007}
}

      
This paper presents a methodology for integrating features within the occupancy grid (OG) framework. OG maps provide a dense representation of the environment. In particular, they give information for every range measurement projected onto a grid. However, independence assumptions between cells during updates, as well as not considering sonar models, lead to inconsistent maps, which may also lead the robot to take decisions that are unsafe or that introduce an unnecessary overhead of run-time collision-avoidance behaviors. Feature-based maps provide a more consistent representation by implicitly considering correlation between cells, but they are sparse, owing to the sparseness of features in a typical environment. This paper provides a method for integrating feature-based representations within the standard Bayesian framework of OG and provides a dense, more accurate and safer representation than standard OG methods.
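To recall the underlying machinery, here is a standard Bayesian occupancy-grid update in log-odds form, extended with a simplistic, assumed feature-based override: a cell supported by a confirmed line feature is updated with a more confident inverse sensor model. The probabilities are made up; the sketch only illustrates the direction of the integration, not the paper's exact method.

import math

def logit(p):
    return math.log(p / (1 - p))

def update_cell(l_prev, p_occ_given_z):
    # Standard log-odds occupancy update: l_new = l_prev + logit(P(occ|z)).
    return l_prev + logit(p_occ_given_z)

l = 0.0  # prior log-odds, i.e. P(occupied) = 0.5
for cell_on_confirmed_feature in [False, True, True]:
    # Sonar alone is weakly informative (0.6); a cell lying on a
    # confirmed line feature gets a sharper model (0.9).
    l = update_cell(l, 0.9 if cell_on_confirmed_feature else 0.6)

print(f"P(occupied) = {1 / (1 + math.exp(-l)):.3f}")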
   

Abstract
Amit Kumar Pandey, K Madhava Krishna, and Mainak Nath