Found: 32
Discrimination against robots: Discussing the ethics of social interactions and who is harmed
Barfield J.K.
Q3
Walter de Gruyter
Paladyn, 2023, citations: 5,
Open access,
PDF, doi.org, Abstract
Abstract This article discusses the topic of ethics and policy for human interaction with robots. The term “robot ethics” (or roboethics) is generally concerned with ethical problems that may occur when humans and robots interact in social situations or when robots make decisions which could impact human well-being. For example, whether robots pose a threat to humans in warfare, the use of robots as caregivers, or the use of robots which make decisions which could impact historically disadvantaged populations. In each case, the focus of the discussion is predominantly on how to design robots that act ethically toward humans (some refer to this issue as “machine ethics”). Alternatively, robot ethics could refer to the ethics associated with human behavior toward robots, especially as robots become active members of society. It is this latter and relatively unexplored view of robot ethics that this article focuses on, and specifically whether robots will be the subject of discriminatory and biased responses from humans based on the robot’s perceived race, gender, or ethnicity. If so, what issues are implicated, and how might society respond? From past research, preliminary evidence suggests that acts of discrimination which may be directed against people may also be expressed toward robots experienced in social contexts; therefore, discrimination against robots as a function of their physical design and behavior is an important and timely topic of discussion for robot ethics, human–robot interaction, and the design of social robots.
Automated argument adjudication to solve ethical problems in multi-agent environments
Bringsjord S., Govindarajulu N.S., Giancola M.
Q3
Walter de Gruyter
Paladyn, 2021, citations: 10,
Open access,
PDF, doi.org, Abstract
Abstract Suppose an artificial agent a_adj, as time unfolds, (i) receives from multiple artificial agents (which may, in turn, themselves have received from yet other such agents…) propositional content, and (ii) must solve an ethical problem on the basis of what it has received. How should a_adj adjudicate what it has received in order to produce such a solution? We consider an environment infused with logicist artificial agents a_1, a_2, …, a_n that sense and report their findings to “adjudicator” agents who must solve ethical problems. (Many if not most of these agents may be robots.) In such an environment, inconsistency is a virtual guarantee: a_adj may, for instance, receive a report from a_1 that proposition φ holds, then from a_2 that ¬φ holds, and then from a_3 that neither φ nor ¬φ should be believed, but rather ψ instead, at some level of likelihood. We further assume that agents receiving such incompatible reports will nonetheless sometimes simply need, before long, to make decisions on the basis of these reports, in order to try to solve ethical problems. We provide a solution to such a quandary: AI capable of adjudicating competing reports from subsidiary agents through time, and delivering to humans a rational, ethically correct (relative to underlying ethical principles) recommendation based upon such adjudication. To illuminate our solution, we anchor it to a particular scenario.
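The quandary the abstract describes — one agent reporting φ, another ¬φ, a third favoring ψ at some likelihood — can be illustrated with a toy likelihood-weighted resolver. The function names, the tilde-for-negation encoding, and the scoring rule here are my own simplifications; the paper's actual adjudicator reasons over formal arguments, not numeric scores.

```python
from collections import defaultdict

def negate(claim):
    """'phi' <-> '~phi' (tilde marks negation in this toy encoding)."""
    return claim[1:] if claim.startswith("~") else "~" + claim

def adjudicate(reports):
    """Resolve competing agent reports by total reported likelihood.
    Each report is (agent, claim, likelihood); the verdict for each
    proposition is whichever of claim/negation scores strictly higher."""
    scores = defaultdict(float)
    for _agent, claim, likelihood in reports:
        scores[claim] += likelihood
    verdicts = {}
    for claim, score in scores.items():
        if score > scores.get(negate(claim), 0.0):
            verdicts[claim.lstrip("~")] = claim
    return verdicts

# a1 reports phi, a2 reports ~phi, a3 reports psi, with likelihoods
reports = [("a1", "phi", 0.6), ("a2", "~phi", 0.8), ("a3", "psi", 0.7)]
print(adjudicate(reports))  # {'phi': '~phi', 'psi': 'psi'}
```

Note that a tie between φ and ¬φ yields no verdict at all, which loosely mirrors the abstract's point that an adjudicator must sometimes decide under unresolved inconsistency.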
Are robots perceived as good decision makers? A study investigating trust and preference of robotic and human linesman-referees in football
Das K., Wang Y., Green K.E.
Q3
Walter de Gruyter
Paladyn, 2021, citations: 1,
Open access,
PDF, doi.org, Abstract
Abstract Increasingly, robots are decision makers in manufacturing, finance, medicine, and other areas, but the technology may not be trusted enough, for reasons such as gaps between expectation and competency, challenges in explainable AI, and users’ level of exposure to the technology. To investigate trust issues between users and robots, the authors employed the case of robots serving as referees in football (or “soccer,” as it is known in the US) games. More specifically, we present a study on how the appearance of a human and three robotic linesmen (as presented in a study by Malle et al.) impacts fans’ trust in and preference for them. Our online study with 104 participants finds a positive correlation between “Trust” and “Preference” for the humanoid and human linesmen, but not for the “AI” and “mechanical” linesmen. Although no significant trust differences were observed across the types of linesmen, participants do prefer the human linesman to the mechanical and humanoid linesmen. Our qualitative study further validated these quantitative findings by probing possible reasons for people’s preferences: when the appearance of a linesman is not humanlike, people focus less on trust and more on other reasons for their preference, such as efficiency, stability, and minimal robot design. These findings provide important insights for the design of trustworthy decision-making robots, which are increasingly integrated into more and more aspects of our everyday lives.
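The trust–preference correlation this study reports is, at its core, a Pearson correlation over paired ratings. A minimal self-contained version, assuming per-participant trust and preference scores for one linesman type (the data below is made up for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired ratings, e.g., trust vs.
    preference scores given to one linesman type."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical ratings from four participants
trust = [3, 5, 4, 2]
preference = [2, 5, 4, 1]
print(pearson_r(trust, preference))
```

A value near +1 would correspond to the positive coupling the authors found for the human and humanoid linesmen, and a value near 0 to its absence for the AI and mechanical ones.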
Overtrusting robots: Setting a research agenda to mitigate overtrust in automation
Aroyo A.M., de Bruyne J., Dheu O., Fosch-Villaronga E., Gudkov A., Hoch H., Jones S., Lutz C., Sætra H., Solberg M., Tamò-Larrieux A.
Q3
Walter de Gruyter
Paladyn, 2021, citations: 38,
Open access,
PDF, doi.org, Abstract
Abstract There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights from in-depth conversations at a multidisciplinary workshop on trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings that are situated in an eco-system perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in education and literacy matters. The article integrates diverse literature and provides grounds for a common understanding of overtrust in the context of HRI.
Special issue on robots and autism: Conceptualization, technology, and methodology
Baraka K., Beights R., Couto M., Radice M.
Q3
Walter de Gruyter
Paladyn, 2021, citations: 0,
Open access,
PDF, doi.org
Committing to interdependence: Implications from game theory for human–robot trust
Razin Y.S., Feigh K.M.
Q3
Walter de Gruyter
Paladyn, 2021, citations: 3,
Open access,
PDF, doi.org, Abstract
Abstract Human–robot interaction (HRI) and game theory have developed distinct theories of trust for over three decades in relative isolation from one another. HRI has focused on the underlying dimensions, layers, correlates, and antecedents of trust models, while game theory has concentrated on the psychology and strategies behind singular trust decisions. Both fields have grappled to understand over-trust and trust calibration, as well as how to measure trust expectations, risk, and vulnerability. This article presents initial steps in closing the gap between these fields. By using insights and experimental findings from interdependence theory and social psychology, this work starts by analyzing a large game theory competition data set to demonstrate that the strongest predictors for a wide variety of human–human trust interactions are the interdependence-derived variables for commitment and trust that we have developed. It then presents a second study with human subject results for more realistic trust scenarios, involving both human–human and human–machine trust. In both the competition data and our experimental data, we demonstrate that the interdependence metrics better capture social “overtrust” than either rational or normative psychological reasoning, as proposed by game theory. This work further explores how interdependence theory – with its focus on commitment, coercion, and cooperation – addresses many of the proposed underlying constructs and antecedents within human–robot trust, shedding new light on key similarities and differences that arise when robots replace humans in trust interactions.
Emotional musical prosody for the enhancement of trust: Audio design for robotic arm communication
Savery R., Zahray L., Weinberg G.
Q3
Walter de Gruyter
Paladyn, 2021, citations: 6,
Open access,
PDF, doi.org, Abstract
Abstract As robotic arms become prevalent in industry, it is crucial to improve levels of trust from human collaborators. Low levels of trust in human–robot interaction can reduce overall performance and prevent full robot utilization. We investigated the potential benefits of using emotional musical prosody (EMP) to allow the robot to respond emotionally to the user’s actions. We define EMP as musical phrases inspired by speech-based prosody used to display emotion. We tested participants’ responses to interacting with a virtual robot arm and a virtual humanoid that acted as a decision agent, helping participants select the next number in a sequence. We compared results from three versions of the application in a between-group experiment, where the robot presented different emotional reactions to the user’s input depending on whether the user agreed with the robot and whether the user’s choice was correct. One version used EMP audio phrases selected from our dataset of singer improvisations, the second version used audio consisting of a single pitch randomly assigned to each emotion, and the final version used no audio, only gestures. In each version, the robot reacted with emotional gestures. Participants completed a trust survey following the interaction, and we found that the reported trust ratings of the EMP group were significantly higher than those of both the single-pitch and no-audio groups for the robotic arm. Our audio system made no significant difference in any metric when used on the humanoid robot, implying that audio needs to be designed separately for each platform.
Design guidelines for human–robot interaction with assistive robot manipulation systems
Wilkinson A., Gonzales M., Hoey P., Kontak D., Wang D., Torname N., Laderoute S., Han Z., Allspaw J., Platt R., Yanco H.
Q3
Walter de Gruyter
Paladyn, 2021, citations: 7,
Open access,
PDF, doi.org, Abstract
Abstract The design of user interfaces (UIs) for assistive robot systems can be improved through the use of a set of design guidelines presented in this article. As an example, the article presents two different UI designs for an assistive manipulation robot system. We explore the design considerations from these two contrasting UIs. The first is referred to as the graphical user interface (GUI), which the user operates entirely through a touchscreen as a representation of the state of the art. The second is a type of novel UI referred to as the tangible user interface (TUI). The TUI makes use of devices in the real world, such as laser pointers and a projector–camera system that enables augmented reality. Each of these interfaces is designed to allow the system to be operated by an untrained user in an open environment such as a grocery store. Our goal is for these guidelines to aid researchers in the design of human–robot interaction for assistive robot systems, particularly when designing multiple interaction methods for direct comparison.
A study on an applied behavior analysis-based robot-mediated listening comprehension intervention for ASD
Louie W.G., Korneder J., Abbas I., Pawluk C.
Q3
Walter de Gruyter
Paladyn, 2020, citations: 15,
Open access,
PDF, doi.org, Abstract
Abstract Autism spectrum disorder (ASD) is a lifelong developmental condition that affects an individual’s ability to communicate and relate to others. Despite such challenges, early intervention during childhood development has been shown to have positive long-term benefits for individuals with ASD. Namely, early childhood development of communicative speech skills has been shown to improve future literacy and academic achievement. However, the delivery of such interventions is often time-consuming. Socially assistive robots (SARs) are a potential strategic technology that could help support intervention delivery for children with ASD and increase the number of individuals that healthcare professionals can positively affect. For SARs to be effectively integrated in real-world treatment for individuals with ASD, they should follow current evidence-based practices used by therapists such as Applied Behavior Analysis (ABA). In this work, we present a study that investigates the efficacy of applying well-known ABA techniques to a robot-mediated listening comprehension intervention delivered to children with ASD at a university-based ABA clinic. The interventions were delivered in place of human therapists to teach study participants a new skill as a part of their overall treatment plan. All the children participating in the intervention improved in the skill being taught by the robot and enjoyed interacting with the robot, as evidenced by high occurrences of positive affect as well as engagement during the sessions. One of the three participants has also reached mastery of the skill via the robot-mediated interventions.
Sex robot technology and the Narrative Policy Framework (NPF): A relationship in the making?
Mainenti D.C.
Q3
Walter de Gruyter
Paladyn, 2020, citations: 3,
Open access,
PDF, doi.org, Abstract
Abstract The use of sex robots is expected to become widespread in the coming decades, not only for hedonistic purposes but also for therapy, to keep the elderly company in care homes, for education, and to help couples in long-distance relationships. As new technological artifacts are introduced to society, they play a role in shaping societal norms and belief systems while also creating tensions between various approaches and relationships, resulting in a range of policy-making proposals that bring into question the traditional disciplinary boundaries between the technical and the social. The Narrative Policy Framework attempts to position policy studies so as to better describe, explain, and predict a wide variety of processes and outcomes in a political world increasingly burdened by uncertain reporting, capitalistic marketing, and persuasive narratives. Through content analysis, this study identifies coalitions in the scientific community, based on results gathered from Scopus, to develop insights into the manner in which liberal, utilitarian, and conservative influences alike are shaping narrative elements and content both in favor of and against sex robot technology.
Social Security and robotization: Possible ways to finance human reskilling and promote employment
Díaz A., Grau Ruiz M.A.
Q3
Walter de Gruyter
Paladyn, 2020, citations: 4,
Open access,
PDF, doi.org, Abstract
Abstract This contribution aims to open the discussion on how to balance the opportunities and the risks posed by the increased robotization of the economy. It particularly addresses concerns related to the sustainability of current Social Security systems. Given the rapid pace of skill depreciation, it is urgent to incentivize workers’ training and human employment. Some ways already used in the past to finance similar goals are reviewed here in order to show possible solutions to be adapted in the near future within the European Union, in line with the guidelines given by several international institutions.
Learning to be human with sociable robots
Weiss D.M.
Q3
Walter de Gruyter
Paladyn, 2020, citations: 7,
Open access,
PDF, doi.org, Abstract
Abstract This essay examines the debate over the status of sociable robots and relational artifacts through the prism of our relationship to television. In their work on human-technology relations, Cynthia Breazeal and Sherry Turkle have staked out starkly different assessments. Breazeal’s work on sociable robots suggests that these technological artifacts will be human helpmates and sociable companions. Sherry Turkle argues that such relational artifacts seduce us into simulated relationships with technological others that largely serve to exploit our emotional vulnerabilities and undermine authentic human relationships. Drawing on an analysis of the television as our first relational artifact and on the AMC television show Humans, this essay argues that in order to intervene in this debate we need a multimediated theory of technology that situates our technical artifacts in the domestic realm and examines their impact on those populations especially impacted by such technologies, including women, children, and the elderly. It is only then that we will be able to take the full measure of the impact of such sociable technologies on our being human.
Interactive robots with model-based ‘autism-like’ behaviors
Baraka K., Melo F.S., Veloso M.
Q3
Walter de Gruyter
Paladyn, 2019, citations: 8,
Open access,
PDF, doi.org, Abstract
Abstract Due to their predictability, controllability, and simple social abilities, robots are starting to be used in diverse ways to assist individuals with Autism Spectrum Disorder (ASD). In this work, we investigate an alternative and novel research direction for using robots in relation to ASD, through programming a humanoid robot to exhibit behaviors similar to those observed in children with ASD. We designed 16 ‘autism-like’ behaviors of different severities on a NAO robot, based on ADOS-2, the gold standard for ASD diagnosis. Our behaviors span four dimensions, verbal and non-verbal, and correspond to a spectrum of typical ASD responses to 3 different stimulus families inspired by standard diagnostic tasks. We integrated these behaviors in an autonomous agent running on the robot, with which humans can continuously interact through predefined stimuli. Through user-controllable features, we allow for 256 unique customizations of the robot’s behavioral profile. We evaluated the validity of our interactive robot both in video-based and ‘in situ’ studies with 3 therapists. We also present subjective evaluations on the potential benefits of such robots to complement existing therapist training, as well as to enable novel tasks for ASD therapy.
User expectations of privacy in robot assisted therapy
Henkel Z., Baugus K., Bethel C.L., May D.C.
Q3
Walter de Gruyter
Paladyn, 2019, citations: 7,
Open access,
PDF, doi.org, Abstract
Abstract This article describes ethical issues related to the design and use of social robots in sensitive contexts like psychological interventions and provides insights from one user design study and two controlled experiments with adults and children. User expectations regarding privacy with a therapeutic robotic dog, Therabot, gathered from a 16 participant design study are presented. Furthermore, results from 142 forensic interviews about bullying experiences conducted with children (ages 8 to 17) using three different social robots (Nao, Female RoboKind, Male RoboKind) and humans (female and male) as forensic interviewers are examined to provide insights into child beliefs about privacy and social judgment in sensitive interactions with social robots. The data collected indicates that adult participants felt a therapeutic robotic dog would be most useful for children in comparison to other age groups, and should include privacy safeguards. Data obtained from children after a forensic interview about their bullying experiences shows that they perceive social robots as providing significantly more socially protective factors than adult humans. These findings provide insight into how children perceive social robots and illustrate the need for careful consideration when designing social robots that will be used in sensitive contexts with vulnerable users like children.
Doing autoethnography of social robots: Ethnographic reflexivity in HRI
Chun B.
Q3
Walter de Gruyter
Paladyn, 2019, citations: 11,
Open access,
PDF, doi.org, Abstract
Abstract Originating in anthropology, ethnographic reflexivity refers to ethnographers’ understanding and articulation of their own intervention in participants’ activities as an innate study opportunity that affects the quality of ethnographic data. Despite its methodological discordance with scientific methods that minimize researchers’ effects on the data, the validity and effectiveness of reflexive ethnography have recently been claimed in technology studies. Inspired by this shift, I suggest potential ways of incorporating ethnographic reflexivity into studies of human-robot social interaction, including ethnographic participant observation, collaborative autoethnography, and hybrid autoethnography. I presume such approaches would facilitate roboticists’ access to the human conditions in which robots’ daily operation occurs. A primary aim here is to fill the field’s current methodological gap between the need to better examine robots’ social functioning and the lack of insights from ethnography, a prominent socio-technical method. Supplementary goals are to yield a nuanced understanding of ethnography in HRI and to encourage the embrace of reflexive ethnographies for future innovations.
More than just friends: in-home use and design recommendations for sensing socially assistive robots (SARs) by older adults with depression
Randall N., Bennett C.C., Šabanović S., Nagata S., Eldridge L., Collins S., Piatt J.A.
Q3
Walter de Gruyter
Paladyn, 2019, citations: 44,
Open access,
PDF, doi.org, Abstract
Abstract As healthcare turns its focus to preventative community-based interventions, there is increasing interest in using in-home technology to support this goal. This study evaluates the design and use of socially assistive robots (SARs) and sensors as in-home therapeutic support for older adults with depression. The seal-like SAR Paro, along with onboard and wearable sensors, was placed in the homes of 10 older adults diagnosed with clinical depression for one month. Design workshops were conducted before and after the in-home implementation with participating older adults and clinical care staff members. Workshops showed older adults and clinicians saw several potential uses for robots and sensors to support in-home depression care. Long-term in-home use of the robot allowed researchers and participants to situate desired robot features in specific practices and experiences of daily life, and some user requests for functionality changed due to extended use. Sensor data showed that participants’ attitudes toward and intention to use the robot were strongly correlated with particular circadian patterns (afternoon and evening) of robot use. Sensor data also showed that those without pets interacted with Paro significantly more than those with pets, and survey data showed they had more positive attitudes toward the SAR. Companionship, while a desired capability, emerged as insufficient to engage many older adults in long-term use of SARs in their home.
Time to compile: A performance installation as human-robot interaction study examining self-evaluation and perceived control
Cuan C., Berl E., LaViers A.
Q3
Walter de Gruyter
Paladyn, 2019, citations: 9,
Open access,
PDF, doi.org, Abstract
Abstract Embodied art installations embed interactive elements within theatrical contexts and allow participating audience members to experience art in an active, kinesthetic manner. These experiences can exemplify, probe, or question how humans think about objects, each other, and themselves. This paper presents work using installations to explore human perceptions of robot and human capabilities. The paper documents an installation, developed over several months and activated at distinct venues, where user studies were conducted in parallel to a robotic art installation. A set of best practices for successful collection of data over the course of these trials is developed. Results of the studies are presented, giving insight into human opinions of a variety of natural and artificial systems. In particular, after experiencing the art installation, participants were more likely to attribute action of distinct system elements to non-human entities. Post treatment survey responses revealed a direct relationship between predicted difficulty and perceived success. Qualitative responses give insight into viewers’ experiences watching human performers alongside technologies. This work lays a framework for measuring human perceptions of humanoid systems – and factors that influence the perception of whether a natural or artificial agent is controlling a given movement behavior – inside robotic art installations.
“I’ll take care of you,” said the robot
Fosch-Villaronga E., Albo-Canals J.
Q3
Walter de Gruyter
Paladyn, 2019, citations: 26,
Open access,
PDF, doi.org, Abstract
Abstract The insertion of robotic and artificially intelligent (AI) systems in therapeutic settings is accelerating. In this paper, we investigate the legal and ethical challenges of the growing inclusion of social robots in therapy. Typical examples of such systems are Kaspar, Hookie, Pleo, Tito, Robota, Nao, Leka, and Keepon. Although recent studies support the adoption of robotic technologies for therapy and education, these technological developments interact socially with children, the elderly, or the disabled, and may raise concerns ranging from physical to cognitive safety, including data protection. Research in other fields also suggests that technology has a profound and alarming impact on us and our human nature. This article brings all these findings into the debate on whether the adoption of therapeutic AI and robot technologies is adequate, not only to raise awareness of the possible impacts of this technology but also to help steer the development and use of AI and robot technologies in therapeutic settings in the appropriate direction. Our contribution seeks to provide a thoughtful analysis of some issues concerning the use and development of social robots in therapy, in the hope that this can inform the policy debate and set the scene for further research.
Measuring human perceptions of expressivity in natural and artificial systems through the live performance piece Time to compile
Cuan C., Berl E., LaViers A.
Q3
Walter de Gruyter
Paladyn, 2019, citations: 1,
Open access,
PDF, doi.org, Abstract
Abstract Live performance is a vehicle where theatrical devices are used to exemplify, probe, or question how humans think about objects, each other, and themselves. This paper presents work using this vehicle to explore human perceptions of robot and human capabilities. The paper documents four performances at three distinct venues where user studies were conducted in parallel to live performance. A set of best practices for successful collection of data in this manner over the course of these trials is developed. Then, results of the studies are presented, giving insight into human opinions of a variety of natural and artificial systems. In particular, participants are asked to rate the expressivity of 12 distinct systems, displayed on stage, as well as themselves. The results show trends ranking objects lowest, then robots, then humans, then self, highest. Moreover, objects involved in the show were generally rated higher after the performance. Qualitative responses give further insight into how viewers experienced watching human performers alongside elements of technology. This work lays a framework for measuring human perceptions of robotic systems – and factors that influence this perception – inside live performance, and suggests that, through the lens of expressivity, systems of a similar type are rated similarly by audience members.
GenEth: a general ethical dilemma analyzer
Anderson M., Anderson S.L.
Q3
Walter de Gruyter
Paladyn, 2018, citations: 45,
Open access,
PDF, doi.org, Abstract
Abstract We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which intelligent autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of this behavior. To provide assistance in discovering ethical principles, we have developed GenEth, a general ethical dilemma analyzer that, through a dialog with ethicists, uses inductive logic programming to codify ethical principles in any given domain. GenEth has been used to codify principles in a number of domains pertinent to the behavior of autonomous systems, and these principles have been verified using an Ethical Turing Test, a test devised to compare the judgments of codified principles with those of ethicists.
Liability for Autonomous and Artificially Intelligent Robots
Barfield W.
Q3
Walter de Gruyter
Paladyn, 2018, citations: 30,
Open access,
Review, PDF, doi.org, Abstract
Abstract Against the backdrop of increasingly intelligent machines, important issues of law have been raised by the use of robots that operate autonomously, free of human supervisory control. In particular, when autonomous robots damage property or injure humans, it may be difficult to determine who is at fault and therefore liable under current legal schemes. This paper reviews product liability and negligence tort law, which may be used to allocate liability for robots that damage property or cause injury. Further, the paper concludes with a discussion of different approaches to allocating liability in an age of increasingly intelligent and autonomous robots directed by sophisticated algorithms and analytical and computational techniques.
Context-aware robot navigation using interactively built semantic maps
Cosgun A., Christensen H.I.
Q3
Walter de Gruyter
Paladyn, 2018, citations: 13,
Open access,
PDF, doi.org, Abstract
Abstract We discuss the process of building semantic maps, how to interactively label entities in them, and how to use them to enable context-aware navigation behaviors in human environments. We utilize planar surfaces, such as walls and tables, and static objects, such as door signs, as features for our semantic mapping approach. Users can interactively annotate these features by having the robot follow them, entering the label through a mobile app, and performing a pointing gesture toward the landmark of interest. Our gesture-based approach can reliably estimate which object is being pointed at and can detect ambiguous gestures with probabilistic modeling. Our person-following method attempts to maximize future utility by searching over future actions, assuming a constant-velocity model for the human. We describe a method to extract metric goals from a semantic map landmark and to plan a human-aware path that takes into account the personal spaces of people. Finally, we demonstrate context awareness for person following in two scenarios: interactive labeling and door passing. We believe that future navigation approaches and service robotics applications can be made more effective by further exploiting the structure of human environments.
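The constant-velocity assumption for person following that the abstract mentions can be sketched as follows. All function names, parameters, and the cost rule are my own illustration, not the authors' implementation, which also reasons about utility and personal space along a planned path.

```python
import math

def predict_positions(pos, vel, dt, horizon):
    """Constant-velocity forecast of the person's 2-D position,
    one point per time step out to the horizon."""
    return [(pos[0] + vel[0] * dt * k, pos[1] + vel[1] * dt * k)
            for k in range(1, horizon + 1)]

def best_follow_goal(robot, person, vel, dt=0.5, horizon=4, keep=1.0):
    """Pick the predicted person position, backed off by a social
    standoff distance `keep`, that the robot can reach most cheaply."""
    goals = []
    for px, py in predict_positions(person, vel, dt, horizon):
        dx, dy = px - robot[0], py - robot[1]
        d = math.hypot(dx, dy)
        if d <= keep:
            continue  # this goal would intrude on personal space
        gx, gy = px - keep * dx / d, py - keep * dy / d
        cost = (gx - robot[0]) ** 2 + (gy - robot[1]) ** 2
        goals.append((cost, (gx, gy)))
    return min(goals)[1] if goals else tuple(robot)
```

For a person at (2, 0) walking at 1 m/s along x, the robot at the origin would head for a point one meter behind the person's nearest predicted position, respecting the standoff distance.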
Real-time gaze estimation via pupil center tracking
Cazzato D., Dominio F., Manduchi R., Castro S.M.
Q3
Walter de Gruyter
Paladyn, 2018, цитирований: 10,
open access Open access ,
PDF, doi.org, Abstract
Abstract Automatic gaze estimation that does not rely on commercial, expensive eye-tracking hardware can enable several applications in the fields of human–computer interaction (HCI) and human behavior analysis. It is therefore not surprising that several related techniques and methods have been investigated in recent years. However, very few camera-based systems proposed in the literature are both real-time and robust. In this work, we propose a real-time gaze estimation system that requires no person-dependent calibration, can deal with illumination changes and head pose variations, and can work with a wide range of distances from the camera. Our solution is based on a 3-D appearance-based method that processes images from a built-in laptop camera. Real-time performance is obtained by combining head pose information with geometrical eye features to train a machine learning algorithm. Our method has been validated on a data set of images of users in natural environments and shows promising results. The possibility of a real-time implementation, combined with the good quality of gaze tracking, makes this system suitable for various HCI applications.
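The idea of combining head-pose information with geometrical eye features into a single input for a learned gaze estimator can be sketched as follows. This is an assumption-laden illustration, not the paper's actual pipeline; the feature layout and function name are hypothetical:

```python
import numpy as np

def make_feature_vector(head_pose, pupil_center, eye_corners):
    """Concatenate head-pose angles with a simple geometric eye feature.

    head_pose: (yaw, pitch, roll) in radians.
    pupil_center: 2-D pupil position in image coordinates.
    eye_corners: (inner, outer) 2-D eye-corner positions, used to
    normalize the pupil position so the feature is scale-invariant.
    """
    inner, outer = eye_corners
    eye_width = np.linalg.norm(outer - inner)
    rel_pupil = (pupil_center - inner) / eye_width  # pupil in eye-local units
    return np.concatenate([head_pose, rel_pupil])

# Example: 3 pose angles + 2 normalized pupil coordinates -> 5-D feature
fv = make_feature_vector(
    head_pose=np.array([0.1, -0.2, 0.0]),
    pupil_center=np.array([15.0, 2.0]),
    eye_corners=(np.array([0.0, 0.0]), np.array([30.0, 0.0])),
)
print(fv.shape)  # → (5,)
```

Such vectors would then serve as training inputs to a regressor that maps them to gaze directions.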
Corporantia: Is moral consciousness above individual brains/robots?
Santos-Lang C.C.
Q3
Walter de Gruyter
Paladyn, 2018, цитирований: 3,
open access Open access ,
PDF, doi.org, Abstract
Abstract This article calls out the common assumption that moral consciousness occurs at the level of individual brains and robots. It explores the alternative, evidence against the assumption, and provides a means to further test the assumption. It also discusses the consequences of making or abandoning this assumption, especially the consequences for the further evolution of robots.
A receptionist robot for Brazilian people: study on interaction involving illiterates
Trovato G., Ramos J.G., Azevedo H., Moroni A., Magossi S., Simmons R., Ishii H., Takanishi A.
Q3
Walter de Gruyter
Paladyn, 2017, цитирований: 15,
open access Open access ,
PDF, doi.org, Abstract
Abstract The receptionist job, which consists of providing useful directions to visitors in a public office, is one possible employment of social robots. The design and behaviour of robots intended for integration into human societies are crucial issues, and they depend on the culture and society in which the robot is to be deployed. We study the factors that could inform the design of a receptionist robot in Brazil, a country with a mix of ethnicities and considerable gaps in economic and educational level. This inequality results in the presence of functionally illiterate people, who are unable to use reading, writing, and numeracy skills. We invited Brazilian people, including a group of functionally illiterate subjects, to interact with two types of receptionists differing in physical appearance (virtual agent vs. mechanical robot) and in the sound of the voice (human-like vs. mechanical). Results gathered during the interactions indicate a preference for the agent and for the human-like voice, and a more intense reaction to stimuli among illiterate participants. These results provide useful indications to consider when designing a receptionist robot, as well as insights into the effect of illiteracy on the interaction.