Imitation in robotics is seen as a powerful means to reduce the complexity of robot programming. It allows users to instruct robots by simply showing them how to execute a given task. Through imitation, robots can learn from their environment and adapt to it just as human newborns do. In order to be useful as human companions, robots must act for a purpose by achieving goals and fulfilling human expectations. But what is the goal behind the surface of the demonstrated behavior? How can the observed regularities be extracted, encoded and reused? Answering these questions is indispensable for the development of cognitive agents capable of being human companions in everyday life. In this paper we present ConSCIS, a framework for robot teaching through observation and imitation inspired by recent findings in cognitive science, biology and neuroscience. In ConSCIS we regard imitation as the process of manipulating high-level symbols in order to achieve the goals and intentions hidden in the observation of a task. The architecture has been tested both in simulation and on an anthropomorphic robot platform.
In this paper we propose a novel approach to the texture analysis-synthesis problem, with the purpose of restoring missing zones in greyscale images. Bit-plane decomposition is used, and a dictionary is built from bit-block statistics for each plane. Gaps are reconstructed with a conditional stochastic process that propagates global texture features into the damaged area, using the information stored in the dictionary. Our restoration method is simple, easy and fast, with very good results on a large set of textured images. Results are compared with a state-of-the-art restoration algorithm.
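The bit-plane decomposition step the abstract refers to can be sketched in a few lines. This is an illustrative reconstruction under my own assumptions (8-bit images, NumPy), not the authors' code; the helper names are hypothetical:

```python
import numpy as np

def bit_planes(img):
    """Decompose an 8-bit greyscale image into its 8 binary bit-planes,
    ordered from least to most significant bit."""
    return [((img >> b) & 1).astype(np.uint8) for b in range(8)]

def reassemble(planes):
    """Reconstruct the original image from its bit-planes."""
    return sum(p.astype(np.uint8) << b for b, p in enumerate(planes))
```

Per-plane statistics of small bit-blocks taken from `bit_planes(img)` would then populate the dictionary used to drive the conditional reconstruction of the gaps.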
Imitation in robotics is seen as a powerful means to reduce the complexity of robot programming. It allows users to instruct robots by simply showing them how to execute a given task. Through imitation, robots can learn from their environment and adapt to it just as human newborns do. Despite the different facets of imitative behaviour observed in humans and higher primates, imitation in robotics has usually been implemented as a process of copying demonstrated actions onto the movement apparatus of the robot. While the results achieved are impressive, we believe that a shift towards a higher expression of imitation, namely the comprehension of human actions and the inference of their intentions, is needed. In order to be useful as human companions, robots must act for a purpose by achieving goals and fulfilling human expectations. In this paper we present ConSCIS (Conceptual Space based Cognitive Imitation System), an architecture for goal-level imitation in robotics where the focus is put on the final effects of actions on objects. The architecture tightly links low-level data with high-level knowledge, and integrates, in a unified framework, several aspects of imitation, such as perception, learning, knowledge representation, action generation and robot control. Some preliminary experimental results with an anthropomorphic arm/hand robotic system are shown.
This paper describes research on a “mental commitment robot”. These robots have a different target audience from industrial robots, one that is not so rigidly dependent on objective measures such as accuracy and speed. The main goal of this research is to explore a new area in robotics, with the emphasis on human-robot interaction. In previous research, we classified robots into four categories, which related to their appearance. We then introduced a robot cat and a robot seal, which we evaluated by interviewing a large group of people. The results showed that physical interaction improved their subjective evaluation of the robots. Moreover, a priori knowledge of a subject has a considerable influence on the subjective interpretation and evaluation of mental commitment robots. In this paper, we asked several groups of subjects to evaluate the seal robot known as “Paro” by answering questionnaires that were given out in exhibitions held in seven different countries: Japan, the U.K., Sweden, Italy, Korea, Brunei and the U.S. This paper reports the results of a statistical analysis of the evaluation data.
We describe a probabilistic reference disambiguation mechanism developed for a spoken dialogue system mounted on an autonomous robotic agent. Our mechanism processes referring expressions containing intrinsic features of objects (lexical item, colour and size) and locative expressions, which involve more than one concept. The intended objects are identified in the context of the output of a simulated scene analysis system, which returns the colour and size of the seen objects and a distribution for their type. The evaluation of our system shows high resolution performance across a range of spoken referring expressions and simulated vision accuracies.
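The kind of scoring the abstract describes — combining intrinsic features with a type distribution from simulated vision — can be illustrated with a minimal sketch. The object schema, the noise prior, and the independence assumption between features are my assumptions for illustration, not the paper's actual model:

```python
# Minimal sketch of probabilistic reference resolution over simulated
# vision output. Each object carries observed 'colour' and 'size' values
# and a 'type_dist' probability map, as the abstract describes.

def resolve_reference(objects, colour=None, size=None, type_=None):
    """Score each candidate object against a referring expression and
    return the most probable referent with its score."""
    best, best_p = None, 0.0
    for obj in objects:
        p = 1.0
        if colour is not None:
            # Hypothetical small noise prior for a mismatching feature
            p *= 1.0 if obj["colour"] == colour else 0.05
        if size is not None:
            p *= 1.0 if obj["size"] == size else 0.05
        if type_ is not None:
            p *= obj["type_dist"].get(type_, 0.0)
        if p > best_p:
            best, best_p = obj, p
    return best, best_p
```

Locative expressions, which relate more than one object, would require scoring pairs of candidates rather than single objects.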
Energy concentration of the S-transform in the time-frequency domain has been addressed in this paper by optimizing the width of the window function used. A new scheme is developed and referred to as a window width optimized S-transform. Two optimization schemes have been proposed, one for a constant window width, the other for a time-varying window width. The former is intended for signals with constant or slowly varying frequencies, while the latter can deal with signals with fast changing frequency components. The proposed scheme has been evaluated using a set of test signals. The results have indicated that the new scheme can provide much improved energy concentration in the time-frequency domain in comparison with the standard S-transform. It is also shown using the test signals that the proposed scheme can lead to higher energy concentration in comparison with other standard linear techniques, such as the short-time Fourier transform and its adaptive forms. Finally, the method has been demonstrated on engine knock signal analysis to show its effectiveness.
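As an illustrative sketch (not the authors' implementation), a discrete S-transform with an adjustable window-width factor can be written as follows; `k = 1` recovers the standard S-transform, and varying `k` mimics the constant-width optimization the abstract describes. The wrapped single Gaussian window is a simplification I am assuming here:

```python
import numpy as np

def s_transform(h, k=1.0):
    """Discrete S-transform of a real signal h with an adjustable
    window-width factor k (k = 1 gives the standard S-transform;
    smaller k narrows the Gaussian window in time)."""
    N = len(h)
    H = np.fft.fft(h)
    S = np.zeros((N // 2, N), dtype=complex)
    S[0] = np.mean(h)  # zero-frequency row: the signal mean
    for m in range(1, N // 2):
        p = np.arange(N)
        p[p > N // 2] -= N  # wrap frequency offsets to [-N/2, N/2)
        # Frequency-domain Gaussian window, width scaled by k
        G = np.exp(-2 * (np.pi * p * k / m) ** 2)
        S[m] = np.fft.ifft(np.roll(H, -m) * G)
    return S
```

For a pure sinusoid, the row at the signal's frequency bin carries the largest time-averaged magnitude, which is the energy-concentration property the window-width optimization seeks to sharpen.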