2020
Muhammad Ayaz Hussain; Muhammad Saif-ur-Rehman; Christian Klaes; Ioannis Iossifidis Comparison of Anomaly Detection between Statistical Method and Undercomplete Inproceedings IEEE International Congress on Big Data, pp. 32–38, Los Angeles, USA, 2020. @inproceedings{Hussain2020, title = {Comparison of Anomaly Detection between Statistical Method and Undercomplete}, author = {Muhammad Ayaz Hussain and Muhammad Saif-ur-Rehman and Christian Klaes and Ioannis Iossifidis}, doi = {10.1145/3404687.3404689}, year = {2020}, date = {2020-01-01}, booktitle = {IEEE International Congress on Big Data}, pages = {32--38}, address = {Los Angeles, USA}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
Muhammad Saif-ur-Rehman; Omair Ali; Susanne Dyck; Robin Lienkämper; Marita Metzler; Yaroslav Parpaley; Jörg Wellmer; Charles Liu; Brian Lee; Spencer Kellis; Richard A Andersen; Ioannis Iossifidis; Tobias Glasmachers; Christian Klaes SpikeDeep-Classifier: A deep-learning based fully automatic offline spike sorting algorithm Journal Article Journal of Neural Engineering, 2020. @article{10.1088/1741-2552/abc8d4, title = {SpikeDeep-Classifier: A deep-learning based fully automatic offline spike sorting algorithm}, author = {Muhammad Saif-ur-Rehman and Omair Ali and Susanne Dyck and Robin Lienkämper and Marita Metzler and Yaroslav Parpaley and Jörg Wellmer and Charles Liu and Brian Lee and Spencer Kellis and Richard A Andersen and Ioannis Iossifidis and Tobias Glasmachers and Christian Klaes}, url = {http://iopscience.iop.org/article/10.1088/1741-2552/abc8d4}, doi = {10.1088/1741-2552/abc8d4}, year = {2020}, date = {2020-01-01}, journal = {Journal of Neural Engineering}, abstract = {Objective. Advancements in electrode design have resulted in micro-electrode arrays with hundreds of channels for single cell recordings. In the resulting electrophysiological recordings, each implanted electrode can record spike activity (SA) of one or more neurons along with background activity (BA). The aim of this study is to isolate SA of each neural source. This process is called spike sorting or spike classification. Advanced spike sorting algorithms are time consuming because of the human intervention at various stages of the pipeline. Current approaches lack generalization because the values of hyperparameters are not fixed, even for multiple recording sessions of the same subject. In this study, a fully automatic spike sorting algorithm called “SpikeDeep-Classifier” is proposed. The values of hyperparameters remain fixed for all the evaluation data. Approach. The proposed approach is based on our previous study (SpikeDeeptector) and a novel background activity rejector (BAR), which are both supervised learning algorithms, and an unsupervised learning algorithm (K-means). SpikeDeeptector and BAR are used to extract meaningful channels and remove BA from the extracted meaningful channels, respectively. The process of clustering becomes straightforward once the BA is completely removed from the data. Then, K-means with a predefined maximum number of clusters is applied on the remaining data originating from neural sources only. Lastly, a similarity-based criterion and a threshold are used to keep distinct clusters and merge similar looking clusters. The proposed approach is called cluster accept or merge (CAOM) and it has only two hyperparameters (maximum number of clusters and similarity threshold) which are kept fixed for all the evaluation data after tuning. Main Results. We compared the results of our algorithm with ground-truth labels. The algorithm is evaluated on data of human patients and publicly available labeled non-human primates (NHPs) datasets. The average accuracy of BAR on datasets of human patients is 92.3%, which is further reduced to 88.03% after (K-means + CAOM). In addition, the average accuracy of BAR on a publicly available labeled dataset of NHPs is 95.40%, which reduces to 86.95% after (K-means + CAOM). Lastly, we compared the performance of the SpikeDeep-Classifier with two human experts, where SpikeDeep-Classifier has produced comparable results. Significance. The results demonstrate that “SpikeDeep-Classifier” possesses the ability to generalize well on a versatile dataset and henceforth provides a generalized and fully automated solution to offline spike sorting.}, keywords = {}, pubstate = {published}, tppubtype = {article} }
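The pipeline summarized in the abstract reduces, after channel selection and background-activity rejection, to K-means clustering with a fixed maximum number of clusters followed by a similarity-based "accept or merge" step (CAOM). The following is a minimal sketch of such a post-processing step; the cosine-similarity criterion on cluster centroids and all names are illustrative assumptions, not the authors' published implementation.

    # Illustrative sketch of a K-means + "cluster accept or merge" (CAOM) style step.
    # The cosine similarity between cluster centroids is an assumption made here for
    # illustration; it is not necessarily the similarity measure used in the paper.
    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_caom(waveforms, max_clusters=5, similarity_threshold=0.9):
        """waveforms: (n_spikes, n_samples) array of spike snippets from neural channels."""
        labels = KMeans(n_clusters=max_clusters, n_init=10).fit_predict(waveforms)
        centroids = np.array([waveforms[labels == k].mean(axis=0) for k in range(max_clusters)])
        parent = list(range(max_clusters))            # union-find over cluster indices
        def root(k):
            while parent[k] != k:
                k = parent[k]
            return k
        for i in range(max_clusters):
            for j in range(i + 1, max_clusters):
                ci, cj = centroids[i], centroids[j]
                sim = ci @ cj / (np.linalg.norm(ci) * np.linalg.norm(cj) + 1e-12)
                if sim > similarity_threshold:        # too similar: merge cluster j into i
                    parent[root(j)] = root(i)
        return np.array([root(k) for k in labels])    # merged cluster label per spike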
2019
Muhammad Saif-ur-Rehman; Robin Lienkämper; S Dyck; A Rayana; Y Parpaley; J Wellmer; C Liu; B Lee; S Kellis; D Manahan-Vaughan; O Güntürkün; R A Andersen; Ioannis Iossifidis; T Glasmachers; C Klaes Universal SpikeDeeptector Miscellaneous 2019. @misc{ur-reimann2019a, title = {Universal SpikeDeeptector}, author = {Muhammad Saif-ur-Rehman and Robin Lienkämper and S Dyck and A Rayana and Y Parpaley and J Wellmer and C Liu and B Lee and S Kellis and D Manahan-Vaughan and O Güntürkün and R A Andersen and Ioannis Iossifidis and T Glasmachers and C Klaes}, year = {2019}, date = {2019-01-01}, publisher = {SfN 2019}, abstract = {State-of-the-art microelectrode array technology enables simultaneous, large-scale single unit recordings from hundreds of channels. Identification of channels recording neural data, as opposed to noise, is the first step for all further analyses. Automating this process aims at minimizing the human involvement and time needed for manual curation. In our previous study, we introduced the “SpikeDeeptector” (SD), which enables us to automatically detect and track channels containing neural data from different human patients implanted with different types of microelectrodes across different brain areas. SD works on human data and to some extent on the data of non-human primates (NHPs). However, to make SD more versatile we proposed a more generalized method called the “Universal SpikeDeeptector” (USD), which is an extended version of SD. USD is intended to detect and track the channels containing neural data recorded from four different species (rats, ravens, NHPs and humans) using different kinds of microelectrodes and different recording sites. To our knowledge, there is no method that can simultaneously detect and track neural data of multiple species. To enable contextual learning, USD constructs a feature vector from a batch of waveforms. The constructed feature vectors are then fed into a deep-learning algorithm, which learns contextualized, temporal and spatial patterns. USD is a supervised learning method. Therefore, it requires labeled data for training. It is mainly trained on data from a single human tetraplegic patient, and a small but equal portion of data from the remaining three species. The trained model is then evaluated on a test dataset collected from several humans, NHPs, rats, and birds. The results show that USD performed consistently well across data collected from each species.}, keywords = {}, pubstate = {published}, tppubtype = {misc} }
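The abstract describes constructing a feature vector from a batch of waveforms and feeding it into a deep-learning classifier that separates neural channels from noise. Below is a minimal sketch of that idea; the batch size, waveform length, and the small convolutional network are illustrative assumptions, not the published SpikeDeeptector/USD architecture.

    # Illustrative sketch only: stack a batch of detected waveforms from one channel
    # into a 2-D "feature image" and classify it as neural vs. noise with a small CNN.
    # Batch size, waveform length and the network layout are assumptions, not the
    # architecture published by the authors.
    import torch
    import torch.nn as nn

    BATCH_OF_WAVEFORMS = 20    # waveforms per feature vector (assumed)
    SAMPLES_PER_WAVEFORM = 48  # samples per spike snippet (assumed)

    class ChannelClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(16 * (BATCH_OF_WAVEFORMS // 2) * (SAMPLES_PER_WAVEFORM // 2), 2),
            )
        def forward(self, x):            # x: (N, 1, 20, 48)
            return self.net(x)           # logits: neural vs. background/noise

    # Example: one feature "image" built from 20 consecutive waveforms of a channel.
    feature = torch.randn(1, 1, BATCH_OF_WAVEFORMS, SAMPLES_PER_WAVEFORM)
    logits = ChannelClassifier()(feature)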
Muhammad Saif-ur-Rehman; Robin Lienkämper; Yaroslav Parpaley; Jörg Wellmer; Charles Liu; Brian Lee; Spencer Kellis; Richard Andersen; Ioannis Iossifidis; Tobias Glasmachers; Christian Klaes SpikeDeeptector: a deep-learning based method for detection of neural spiking activity Journal Article Journal of Neural Engineering, 16 (5), pp. 056003, 2019. @article{Saif-ur-Rehman2019, title = {SpikeDeeptector: a deep-learning based method for detection of neural spiking activity}, author = {Muhammad Saif-ur-Rehman and Robin Lienkämper and Yaroslav Parpaley and Jörg Wellmer and Charles Liu and Brian Lee and Spencer Kellis and Richard Andersen and Ioannis Iossifidis and Tobias Glasmachers and Christian Klaes}, url = {https://iopscience.iop.org/article/10.1088/1741-2552/ab1e63/meta}, doi = {10.1088/1741-2552/ab1e63}, year = {2019}, date = {2019-01-01}, journal = {Journal of Neural Engineering}, volume = {16}, number = {5}, pages = {056003}, abstract = {Objective. In electrophysiology, microelectrodes are the primary source for recording neural data (single unit activity). These microelectrodes can be implanted individually or in the form of arrays containing dozens to hundreds of channels. Recordings of some channels contain neural activity, which are often contaminated with noise. Another fraction of channels does not record any neural data, but only noise. By noise, we mean physiological activities unrelated to spiking, including technical artifacts and neural activities of neurons that are too far away from the electrode to be usefully processed. For further analysis, an automatic identification and continuous tracking of channels containing neural data is of great significance for many applications, e.g. automated selection of neural channels during online and offline spike sorting. Automated spike detection and sorting is also critical for online decoding in brain–computer interface (BCI) applications, in which on...}, keywords = {}, pubstate = {published}, tppubtype = {article} }
2018
Muhammad Ayaz Hussain; Christian Klaes; Ioannis Iossifidis Toward a Model of Timed Arm Movement Based on Temporal Tuning of Neurons in Primary Motor (MI) and Posterior Parietal Cortex (PPC) Inproceedings BC18 : Computational Neuroscience & Neurotechnology Bernstein Conference 2018, BCCN, 2018. @inproceedings{bccn18, title = {Toward a Model of Timed Arm Movement Based on Temporal Tuning of Neurons in Primary Motor (MI) and Posterior Parietal Cortex (PPC)}, author = {Muhammad Ayaz Hussain and Christian Klaes and Ioannis Iossifidis}, year = {2018}, date = {2018-01-01}, booktitle = {BC18 : Computational Neuroscience & Neurotechnology Bernstein Conference 2018}, publisher = {BCCN}, abstract = {To study driver behavior we set up a lab with fixed base driving simulators. In order to compensate for the lack of physical feedback in this scenario, we aimed for another means of increasing the realism of our system. In the following, we propose an efficient method of head tracking and its integration in our driving simulation. Furthermore, we illuminate why this is a promising boost of the subject's immersion in the virtual world. Our idea for increasing the feeling of immersion is to give the subject feedback on head movements relative to the screen. A real driver sometimes moves his head in order to see something better or to look behind an occluding object. In addition to these intentional movements, a study conducted by Zirkovitz and Harris has revealed that drivers involuntarily tilt their heads when they go around corners in order to maximize the use of visual information available in the scene. Our system reflects the visual changes of any head movement and hence gives feedback on both involuntary and intentional motion. If, for example, subjects move to the left, they will see more from the right-hand side of the scene. If, on the other hand, they move upwards, a larger fraction of the engine hood will be visible. The same holds for the rear view mirror.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
2017
Ioannis Iossifidis; C Klaes Low dimensional representation of human arm movement for efficient neuroprosthetic control by individuals with tetraplegia Miscellaneous 2017. @misc{Iossifidis2017, title = {Low dimensional representation of human arm movement for efficient neuroprosthetic control by individuals with tetraplegia}, author = {Ioannis Iossifidis and C Klaes}, year = {2017}, date = {2017-01-01}, publisher = {SfN 2017}, abstract = {Over the last decades the generation mechanism and the representation of goal-directed movements have been a topic of intensive neurophysiological research. Investigations in the motor, premotor, and parietal areas led to the discovery that the direction of the hand's movement in space is encoded by populations of neurons in these areas, together with many other movement parameters. These distributions of population activation reflect how movements are prepared ahead of movement initiation, as revealed by activity induced by cues that precede the imperative signal (Georgopoulos, 1991). Inspired by those findings, a model based on dynamical systems was proposed both to model goal-directed trajectories in humans and to generate trajectories for redundant anthropomorphic robotic arms. The analysis of the attractor dynamics, based on the qualitative comparison with measurements of trajectories taken from arm movement experiments with humans (Grimme et al., 2012), created a framework able to reproduce and to generate naturalistic human-like arm trajectories (Iossifidis and Rano, 2013; Iossifidis, Schöner et al., 2006). The main idea of the methodology is to choose low-dimensional behavioral variables such that the goal task can be represented as attractor states of those variables. The movement is generated through a dynamical system with attractors and repellers on the behavioral space, at the goal and constraint positions respectively. When the motion of the robot evolves according to the dynamics of these systems, the behavioral variables are stabilized at their attractors. Movement is represented by the polar coordinates $\phi$, $\theta$ of the movement direction (heading direction) and the angular frequency $\omega$ of a Hopf oscillator, which generates the velocity profile of the arm movement. Therefore, the system dynamics are expressed in terms of these variables. The target and each obstacle induce vector fields over these variables such that states where the hand is moving closer to the target are attractive, while states where it is moving towards an obstacle are repellent. Contributions from different sources are weighted by different factors, e.g. in the vicinity of an obstacle, the contribution from that obstacle must dominate the behavior to guarantee constraint satisfaction (collision prevention). Based on three parameters the presented framework is able to generate temporally stabilized (timed) discrete movements, dealing with disturbances and maintaining an approximately constant movement time. In the current study we will implant two 96-channel intracortical microelectrode arrays in the primary motor and the posterior parietal cortex (PPC) of an individual with tetraplegia. In the training phase the parameters of the dynamical systems will be tuned and optimized by machine learning algorithms. Rather than directly controlling the arm movement and continuously adjusting parameters, the patient adjusts by his or her thoughts the three parameters of the dynamics, which remain almost constant during the movement. Only when the motion plan changes do the parameters have to be readjusted. The target-directed trajectory evolves from the attractor solution of the dynamical systems equations, which means that the trajectory is generated while the system is in a stable stationary state, a fixed-point attractor. The increased degree of assistance lowers the cognitive load of the patient and enables the accomplishment of the desired task without frustration. In addition, we aim to replace the robotic manipulator by an exoskeleton for the upper body, which will enable the patients to move their own limbs and would complete the development of a real neuroprosthetic device for everyday use.}, keywords = {}, pubstate = {published}, tppubtype = {misc} }
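The abstract describes the motion model only verbally: the heading direction is attracted by the target and repelled by obstacles, while a Hopf oscillator supplies the timed velocity profile. As a hedged sketch, the generic form such equations take in the attractor dynamics literature (the specific terms, weights and parameter values used by the authors are in the cited papers and are not reproduced here) is

    \dot{\phi} = -\lambda_{\mathrm{tar}} \sin(\phi - \psi_{\mathrm{tar}})
               + \sum_i \lambda_{\mathrm{obs},i}\,(\phi - \psi_{\mathrm{obs},i})\,
                 \exp\!\left( -\frac{(\phi - \psi_{\mathrm{obs},i})^2}{2\sigma_i^2} \right)

for the heading direction $\phi$ (and analogously for the elevation angle $\theta$), where $\psi_{\mathrm{tar}}$ and $\psi_{\mathrm{obs},i}$ are the directions toward the target and the obstacles, together with a Hopf oscillator in normal form,

    \dot{x} = \alpha\,(\mu - x^2 - y^2)\,x - \omega\,y, \qquad
    \dot{y} = \alpha\,(\mu - x^2 - y^2)\,y + \omega\,x,

whose stable limit cycle of radius $\sqrt{\mu}$ and angular frequency $\omega$ provides a smooth, approximately bell-shaped speed profile that keeps the movement time roughly constant.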
Ioannis Iossifidis; Muhammad Ayaz Hussain; Christian Klaes Temporal stabilized arm movement for efficient neuroprosthetic control by individuals with tetraplegia Miscellaneous 2017. @misc{Iossifidis2017a, title = {Temporal stabilized arm movement for efficient neuroprosthetic control by individuals with tetraplegia}, author = {Ioannis Iossifidis and Muhammad Ayaz Hussain and Christian Klaes}, year = {2017}, date = {2017-01-01}, publisher = {SfN 2017}, abstract = {The generation of discrete movements with distinct and stable time courses characterizes human movement and reflects the need to perform catching and interception tasks and timed action sequences, incorporating dynamically changing environmental constraints. Several lines of evidence suggest a neuronal mechanism for the initiation of movements, i.e. in the supplementary motor area (SMA) and the premotor cortex, and a movement planning mechanism generating velocity profiles that satisfy time constraints. In order to meet the requirements of online evolving trajectories, we propose a model based on dynamical systems which describes goal-directed trajectories in humans and generates trajectories for redundant anthropomorphic robotic arms. The current study aims to evaluate the temporal characteristics of primary motor and posterior parietal cortex in patients with tetraplegia by using an interception task implemented in virtual reality. The participants will be implanted with two 96-channel intracortical microelectrode arrays in the primary motor and posterior parietal cortex. In the training phase the participants will observe a robotic arm intercepting the bob of a pendulum at the lowest point of its trajectory (maximum velocity) - the end effector reaches the lowest point of the trajectory at the same time as the bob of the pendulum, performing a perfectly timed movement. The arm is positioned perpendicular to the oscillation plane exactly at the height of the interception point to generate a one-dimensional trajectory to the target. The time to contact between the robot's end effector and the bob of the pendulum is kept constant, and during the different sessions the distance between the end effector and the point of interception is gradually increased. In order to catch up and to reach in time, either the velocity formation or the initiation time of the movement has to be changed. Both effects will be investigated independently. For the decoding of movement-related information we introduce a framework exploiting a deep learning approach with convolutional neural networks.}, keywords = {}, pubstate = {published}, tppubtype = {misc} }
2014
Ioannis Iossifidis Simulated Framework for the Development and Evaluation of Redundant Robotic Systems Inproceedings International Conference on Pervasive and Embedded Computing and Communication Systems, 2014, PECCS2014, 2014. @inproceedings{Iossifidis2014a, title = {Simulated Framework for the Development and Evaluation of Redundant Robotic Systems}, author = {Ioannis Iossifidis}, year = {2014}, date = {2014-01-01}, booktitle = {International Conference on Pervasive and Embedded Computing and Communication Systems, 2014, PECCS2014}, abstract = {In the current work we present a simulated environment for the development and evaluation of multi-redundant open chain manipulators. The framework is implemented in Matlab and provides solutions for the kinematics and dynamics of an arbitrary open chain manipulator. For an anthropomorphic trunk-shoulder-arm configuration with in total nine degrees of freedom, a closed form solution of the inverse kinematics problem is derived. The attractor dynamics approach to motion generation was evaluated within this framework and the results are verified on the real anthropomorphic robotic assistant Cora.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
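The framework described above provides kinematics and dynamics for arbitrary open chain manipulators. As a minimal sketch of what such a kinematics core can look like (written in Python rather than the Matlab mentioned in the abstract, and using the standard Denavit-Hartenberg convention as an assumption, not necessarily the parameterization of the framework):

    # Minimal sketch of open-chain forward kinematics from standard DH parameters.
    import numpy as np

    def dh(a, alpha, d, theta):
        """Homogeneous transform for one joint (standard Denavit-Hartenberg)."""
        ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.,       sa,       ca,      d],
                         [0.,       0.,       0.,     1.]])

    def forward_kinematics(dh_table, joint_angles):
        """Chain the per-joint transforms; returns the end-effector pose as a 4x4 matrix."""
        T = np.eye(4)
        for (a, alpha, d), theta in zip(dh_table, joint_angles):
            T = T @ dh(a, alpha, d, theta)
        return T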
Ioannis Iossifidis Development of a Haptic Interface for Safe Human Robot Collaboration Inproceedings International Conference on Pervasive and Embedded Computing and Communication Systems, 2014, PECCS2014, 2014. @inproceedings{Iossifidis2014b, title = {Development of a Haptic Interface for Safe Human Robot Collaboration}, author = {Ioannis Iossifidis}, year = {2014}, date = {2014-01-01}, booktitle = {International Conference on Pervasive and Embedded Computing and Communication Systems, 2014, PECCS2014}, abstract = {In the context of the increasing number of collaborative workplaces in industrial environments, where humans and robots share the same workplace, safety and intuitive interaction are a prerequisite. This means that (1) the robot can have contact with its own body and the surrounding objects, (2) the motion of the robot can be corrected online by the human user just by touching its artificial skin, or (3) the action can be interrupted in dangerous situations. In the current work we introduce a haptic interface (artificial skin) which is utilized to cover the arms of an anthropomorphic robotic assistant. The touch-induced input of the artificial skin is interpreted and fed into the motor control algorithm to generate the desired motion and to avoid harm to human and machine.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
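One common way to feed touch input into arm motion control, consistent with the idea described above but not necessarily the authors' exact method, is to map a contact force measured on a link into a corrective joint-velocity contribution via the transpose of that link's Jacobian:

    # Hedged sketch: map a measured contact force on a robot link to a corrective
    # joint-velocity term via the Jacobian transpose. This is a generic admittance-style
    # scheme shown for illustration, not necessarily the controller of the paper.
    import numpy as np

    def touch_correction(link_jacobian, contact_force, gain=0.1):
        """link_jacobian: (3, n_joints) positional Jacobian at the contact point;
        contact_force: (3,) force estimated from the artificial skin, pointing away
        from the contact. Returns a joint-velocity contribution (n_joints,)."""
        return gain * link_jacobian.T @ contact_force

    # The contribution is added to the task-level joint velocity command, so that
    # pushing on the skin locally deflects the arm away from (or along) the contact.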
2013
Ioannis Iossifidis Utilizing Artificial Skin for Direct Physical Interaction Inproceedings Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2013), 2013. @inproceedings{Iossifidis2013d, title = {Utilizing Artificial Skin for Direct Physical Interaction}, author = {Ioannis Iossifidis}, year = {2013}, date = {2013-01-01}, booktitle = {Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2013)}, abstract = {Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without using a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. The low computational demands leave enough system resources for higher level function like forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate walking trajectories for a small humanoid robot, the Aldebaran Nao, that are goal-directed and avoid obstacles. The sensor information is a single camera in the head of the robot. The limited field of vision is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variable makes a computationally expensive scene representation or local map building unnecessary.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
I Iossifidis Utilizing artificial skin for direct physical interaction Inproceedings 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013, 2013. @inproceedings{Iossifidis2013c, title = {Utilizing artificial skin for direct physical interaction}, author = {I Iossifidis}, doi = {10.1109/ROBIO.2013.6739562}, year = {2013}, date = {2013-01-01}, booktitle = {2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013}, abstract = {Focusing on the development of flexible robots for industrial and household environments, we identify intuitive teaching as the key feature and direct physical interaction and guidance as the most important interface. In the current work we introduce a multi redundant robotic assistant equipped with a touch sensitive skin around the upper- and the forearm, in order to incorporate contact forces into the arm control. A context-sensitive interpretation of the contact forces is being used to guide the attention of the robot, to avoid obstacles and to move the robot arm directly by the human operator. © 2013 IEEE.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
Ioannis Iossifidis Motion Constraint Satisfaction by Means of Closed Form Solution for Redundant Robot Arms Inproceedings Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2013), 2013. @inproceedings{Iossifidis2013c, title = {Motion Constraint Satisfaction by Means of Closed Form Solution for Redundant Robot Arms}, author = {Ioannis Iossifidis}, year = {2013}, date = {2013-01-01}, booktitle = {Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2013)}, abstract = {Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without using a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. The low computational demands leave enough system resources for higher level function like forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate walking trajectories for a small humanoid robot, the Aldebaran Nao, that are goal-directed and avoid obstacles. The sensor information is a single camera in the head of the robot. The limited field of vision is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variable makes a computationally expensive scene representation or local map building unnecessary.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
Ioannis Iossifidis; Inaki Rano Modeling Human Arm Motion by Means of Attractor Dynamics Approach Inproceedings Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2013), 2013. @inproceedings{Iossifidis2013a, title = {Modeling Human Arm Motion by Means of Attractor Dynamics Approach}, author = {Ioannis Iossifidis and Inaki Rano}, year = {2013}, date = {2013-01-01}, booktitle = {Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2013)}, abstract = {Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without using a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. The low computational demands leave enough system resources for higher level function like forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate walking trajectories for a small humanoid robot, the Aldebaran Nao, that are goal-directed and avoid obstacles. The sensor information is a single camera in the head of the robot. The limited field of vision is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variable makes a computationally expensive scene representation or local map building unnecessary.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
Ioannis Iossifidis Motion constraint satisfaction by means of closed form solution for redundant robot arms Inproceedings 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013, pp. 2106–2111, 2013, ISBN: 978-1-4799-2744-9. @inproceedings{Iossifidis2013b, title = {Motion constraint satisfaction by means of closed form solution for redundant robot arms}, author = {Ioannis Iossifidis}, doi = {10.1109/ROBIO.2013.6739780}, isbn = {978-1-4799-2744-9}, year = {2013}, date = {2013-01-01}, booktitle = {2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013}, pages = {2106--2111}, abstract = {Generation of flexible goal-directed movement is the key skill of autonomous articulated robots. Critical points are still the accomplishment of reaching and grasping tasks while satisfying static and dynamically changing constraints given by the environment or caused by the human operator in a collaborative situation. This means that the motion planning dynamics has to incorporate multiple contributions of different qualities, which should be formulated in constraint-specific reference frames and then transformed into the frame of joint velocities, whereby the handling of the contributions to motion planning is determined by the solution of the inverse kinematics problem. In this work a closed form solution of the inverse kinematics problem for an eight degree of freedom arm is presented. The geometrical properties of the multi-redundant arm and the resulting free parameter which determines its null space motion are utilized to satisfy constraints of the desired motion. We implement this system on an eight DoF redundant manipulator and show its feasibility in a simulation. © 2013 IEEE.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
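For orientation, redundancy resolution is commonly written at the velocity level as a particular solution plus a homogeneous null-space term; the paper instead derives a closed-form solution in which a free geometric parameter selects the null-space motion. The generic velocity-level form, shown here only as background in standard robotics notation and not as the paper's derivation, is

    \dot{q} = J^{+}(q)\,\dot{x} + \left( I - J^{+}(q)\,J(q) \right) \dot{q}_0,

where $J^{+}$ is the Moore-Penrose pseudoinverse of the end-effector Jacobian and $\dot{q}_0$ is an arbitrary joint velocity projected onto the null space of $J$, i.e. onto self-motions that leave the end-effector pose unchanged and can therefore be used to satisfy additional constraints.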
Inaki Rano; Ioannis Iossifidis Modelling human arm motion through the attractor dynamics approach Inproceedings 2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013, pp. 2088–2093, 2013, ISBN: 978-1-4799-2744-9. @inproceedings{Rano2013, title = {Modelling human arm motion through the attractor dynamics approach}, author = {Inaki Rano and Ioannis Iossifidis}, doi = {10.1109/ROBIO.2013.6739777}, isbn = {978-1-4799-2744-9}, year = {2013}, date = {2013-01-01}, booktitle = {2013 IEEE International Conference on Robotics and Biomimetics, ROBIO 2013}, pages = {2088--2093}, abstract = {Movement generation in robotics is an old problem with many excellent solutions. Most of them, however, look for optimality according to some metric, but have no biological inspiration or cannot be used to imitate biological motion. To a human, these techniques behave in a non-naturalistic way. This poses a problem, for instance, in human-robot interaction and, in general, for a good acceptance of robots in society. The present work presents a new analysis of the attractor dynamics approach to movement generation used in an anthropomorphic robot arm. Our analysis points to the possibility of using this approach to generate human-like arm trajectories in robots. One key property of human trajectories in pick-and-place tasks is the planarity of the trajectory of the end effector in 3D space. We show that this feature is also displayed by the attractor dynamics approach and, therefore, it is a good candidate for the generation of naturalistic arm movements. © 2013 IEEE.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
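The planarity property discussed above can be quantified directly from recorded or simulated end-effector positions, for example via the smallest singular value of the centered trajectory. The following minimal check is an illustration of that idea, not the analysis pipeline of the paper.

    # Minimal planarity check for a 3-D end-effector trajectory: after centering,
    # the smallest singular value measures out-of-plane spread (0 = perfectly planar).
    import numpy as np

    def planarity(points):
        """points: (n, 3) array of end-effector positions along one movement."""
        centered = points - points.mean(axis=0)
        s = np.linalg.svd(centered, compute_uv=False)   # singular values, descending
        return s[2] / (s.sum() + 1e-12)                 # small ratio -> nearly planar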
2012
S Noth; J Edelbrunner; I Iossifidis An integrated architecture for the development and assessment of ADAS Inproceedings IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 2012, ISBN: 978-1-4673-3064-0. @inproceedings{Noth2012a, title = {An integrated architecture for the development and assessment of ADAS}, author = {S Noth and J Edelbrunner and I Iossifidis}, doi = {10.1109/ITSC.2012.6338805}, isbn = {978-1-4673-3064-0}, year = {2012}, date = {2012-01-01}, booktitle = {IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC}, abstract = {Advanced Driver Assistance Systems act, by definition, in natural, often poorly structured, environments and are supposed to closely interact with human operators. Both natural environments and human behaviour have no inherent metric and cannot be modelled/measured in the classical way physically plausibly behaving systems are described. © 2012 IEEE.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
Sebastian Noth; Johann Edelbrunner; Ioannis Iossifidis A Versatile Simulated Reality Framework: From Embedded Components to ADAS Inproceedings International Conference on Pervasive and Embedded Computing and Communication Systems, 2012, PECCS2012, 2012. @inproceedings{Noth2012b, title = {A Versatile Simulated Reality Framework: From Embedded Components to ADAS}, author = {Sebastian Noth and Johann Edelbrunner and Ioannis Iossifidis}, year = {2012}, date = {2012-01-01}, booktitle = {International Conference on Pervasive and Embedded Computing and Communication Systems, 2012, PECCS2012}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} }
Ioannis Iossifidis Sequence Generation for Grasping Tasks by Means of Dynamical Systems Conference BC12 : Computational Neuroscience & Neurotechnology Bernstein Conference & Neurex Annual Meeting 2012, 2012. @conference{Iossifidis2012, title = {Sequence Generation for Grasping Tasks by Means of Dynamical Systems}, author = {Ioannis Iossifidis}, year = {2012}, date = {2012-01-01}, booktitle = {BC12 : Computational Neuroscience & Neurotechnology Bernstein Conference & Neurex Annual Meeting 2012}, keywords = {}, pubstate = {published}, tppubtype = {conference} }
2011
S K U Zibner; C Faubel; I Iossifidis; G Schöner Dynamic neural fields as building blocks of a cortex-inspired architecture for robotic scene representation Journal Article IEEE Transactions on Autonomous Mental Development, 3 (1), 2011, ISSN: 1943-0604. @article{Zibner2011, title = {Dynamic neural fields as building blocks of a cortex-inspired architecture for robotic scene representation}, author = {S K U Zibner and C Faubel and I Iossifidis and G Schöner}, doi = {10.1109/TAMD.2011.2109714}, issn = {1943-0604}, year = {2011}, date = {2011-01-01}, journal = {IEEE Transactions on Autonomous Mental Development}, volume = {3}, number = {1}, abstract = {Based on the concepts of dynamic field theory (DFT), we present an architecture that autonomously generates scene representations by controlling gaze and attention, creating visual objects in the foreground, tracking objects, reading them into working memory, and taking into account their visibility. At the core of this architecture are three-dimensional dynamic neural fields (DNFs) that link feature to spatial information. These three-dimensional fields couple into lower dimensional fields, which provide the links to the sensory surface and to the motor systems. We discuss how DNFs can be used as building blocks for cognitive architectures, characterize the critical bifurcations in DNFs, as well as the possible coupling structures among DNFs. In a series of robotic experiments, we demonstrate how the DNF architecture provides the core functionalities of a scene representation. © 2011 IEEE.}, keywords = {Autonomous robotics, dynamic field theory (DFT), dynamical systems, embodied cognition, neural processing}, pubstate = {published}, tppubtype = {article} }
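The dynamic neural fields (DNFs) used as building blocks here follow, in dynamic field theory, the Amari field dynamics. Written generically (the specific kernels, field dimensionalities and coupling structures of the architecture are described in the paper and are not reproduced here):

    \tau\,\dot{u}(x,t) = -u(x,t) + h + S(x,t) + \int w(x - x')\,\sigma\!\left( u(x',t) \right) dx',

where $u(x,t)$ is the activation over the represented feature/space dimension $x$, $h<0$ the resting level, $S$ the external (sensory or inter-field) input, $w$ a local-excitation/surround-inhibition interaction kernel, and $\sigma$ a sigmoidal output nonlinearity; the self-stabilized activation peaks that form through the interaction term are the units of representation which the architecture couples across fields of different dimensionality.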
Stephan K U Zibner; Christian Faubel; Ioannis Iossifidis; Gregor Schöner Dynamic Neural Fields as Building Blocks for a Cortex-Inspired Architecture of Robotic Scene Representation Journal Article Autonomous Mental Development, IEEE Transactions on, 3 (1), 2011. @article{Zibnersubmitteda, title = {Dynamic Neural Fields as Building Blocks for a Cortex-Inspired Architecture of Robotic Scene Representation}, author = {Stephan K U Zibner and Christian Faubel and Ioannis Iossifidis and Gregor Schöner}, year = {2011}, date = {2011-01-01}, journal = {Autonomous Mental Development, IEEE Transactions on}, volume = {3}, number = {1}, keywords = {}, pubstate = {published}, tppubtype = {article} }
Urun Dogan; Johann Edelbrunner; Ioannis Iossifidis Autonomous Driving: A Comparison of Machine Learning Techniques by Means of the Prediction of Lane Change Behavior Inproceedings Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2011), 2011. BibTeX | Tags: digital simulation, driver information systems, driver model, drivers lane change behavior prediction, feed forward neural network, feedforward neural nets, lane change maneuvers, NISYS TRS, recurrent neural nets, recurrent neural network, support vector machines, traffic simulator @inproceedings{Dogan2011, title = {Autonomous Driving: A Comparison of Machine Learning Techniques by Means of the Prediction of Lane Change Behavior}, author = {Urun Dogan and Johann Edelbrunner and Ioannis Iossifidis}, year = {2011}, date = {2011-01-01}, booktitle = {Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2011)}, keywords = {digital simulation, driver information systems, driver model, drivers lane change behavior prediction, feed forward neural network, feedforward neural nets, lane change maneuvers, NISYS TRS, recurrent neural nets, recurrent neural network, support vector machines, traffic simulator}, pubstate = {published}, tppubtype = {inproceedings} } | |
Sebastian Noth; Ioannis Iossifidis Simulated reality environment for development and assessment of cognitive robotic systems Inproceedings Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2011), 2011. @inproceedings{Noth2011, title = {Simulated reality environment for development and assessment of cognitive robotic systems}, author = {Sebastian Noth and Ioannis Iossifidis}, year = {2011}, date = {2011-01-01}, booktitle = {Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2011)}, abstract = {Simulated reality environment incorporating humans and physically plausible behaving robots, providing natural interaction channels, with the option to link simulator to real perception and motion, is gaining importance for the development of cognitive, intuitive interacting and collaborating robotic systems. In the present work we introduce a head tracking system which is utilized to incorporate human ego motion in simulated environment improving immersion in the context of human-robot collaborative tasks.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Simulated reality environment incorporating humans and physically plausible behaving robots, providing natural interaction channels, with the option to link simulator to real perception and motion, is gaining importance for the development of cognitive, intuitive interacting and collaborating robotic systems. In the present work we introduce a head tracking system which is utilized to incorporate human ego motion in simulated environment improving immersion in the context of human-robot collaborative tasks. | |
Ioannis Iossifidis; Darius Malysiak; Hendrik Reimann Model-free local navigation for humanoid robots Inproceedings Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2011), 2011. @inproceedings{Iossifidis2011A, title = {Model-free local navigation for humanoid robots}, author = {Ioannis Iossifidis and Darius Malysiak and Hendrik Reimann}, year = {2011}, date = {2011-01-01}, booktitle = {Proc. IEEE/RSJ International Conference on Robotics and Biomimetics (RoBio2011)}, abstract = {Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without using a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. The low computational demands leave enough system resources for higher level function like forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate walking trajectories for a small humanoid robot, the Aldebaran Nao, that are goal-directed and avoid obstacles. The sensor information is a single camera in the head of the robot. The limited field of vision is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variable makes a computationally expensive scene representation or local map building unnecessary.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without using a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. The low computational demands leave enough system resources for higher level function like forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate walking trajectories for a small humanoid robot, the Aldebaran Nao, that are goal-directed and avoid obstacles. The sensor information is a single camera in the head of the robot. The limited field of vision is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variable makes a computationally expensive scene representation or local map building unnecessary. | |
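The attractor dynamics for local navigation described in the entry above can be illustrated with the standard heading-direction formulation: the target bearing contributes an attractor, each obstacle a range-limited repellor, and the robot simply relaxes to the resulting fixed point. The sketch below is a generic version under assumed gains and a made-up scene; it is not the Nao-specific system of the paper.

```python
import numpy as np

# Generic heading-direction dynamics for local navigation: an attractor at the
# target bearing, range-limited repellors at obstacle bearings. Gains, ranges
# and the scene are assumptions for illustration, not values from the paper.

def heading_rate(phi, psi_target, obstacles,
                 lam_tar=2.0, beta=8.0, d_range=0.5, sigma=0.4):
    """Returns d(phi)/dt for heading phi (radians)."""
    dphi = -lam_tar * np.sin(phi - psi_target)                 # target attractor
    for psi_obs, dist in obstacles:                            # (bearing, distance)
        lam_obs = beta * np.exp(-dist / d_range)               # closer -> stronger
        dphi += lam_obs * (phi - psi_obs) * np.exp(-(phi - psi_obs)**2 / (2 * sigma**2))
    return dphi

phi = 0.0                                                      # current heading
obstacles = [(0.15, 0.6)]                                      # one obstacle, slightly left
for _ in range(500):                                           # Euler relaxation
    phi += 0.01 * heading_rate(phi, psi_target=0.0, obstacles=obstacles)
print(f"relaxed heading: {phi:+.3f} rad (deflected away from the obstacle)")
```

In the full system the same heading variable also drives a forward-velocity dynamics, and on the Nao the limited camera field of view is compensated by head movements; none of that is modeled in this sketch.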
Hendrik Reimann; Ioannis Iossifidis; Gregor Schöner Autonomous movement generation for manipulators with multiple simultaneous constraints using the attractor dynamics approach Inproceedings 2011 IEEE International Conference on Robotics and Automation, ICRA2011, 2011. @inproceedings{Reimann2011, title = {Autonomous movement generation for manipulators with multiple simultaneous constraints using the attractor dynamics approach}, author = {Hendrik Reimann and Ioannis Iossifidis and Gregor Schöner}, year = {2011}, date = {2011-01-01}, booktitle = {2011 IEEE International Conference on Robotics and Automation, ICRA2011}, abstract = {The movement of autonomous agents in natural environments is restricted by potentially large numbers of constraints. To generate behavior that fulfills all given constraints simultaneously, the attractor dynamics approach to movement generation represents each constraint by a dynamical system with attractors or repellors at desired or undesired values of a relevant variable. These dynamical systems are transformed into vector fields over the control variables of a robotic agent that force the state of the whole system in directions beneficial to the satisfaction of the behavioral constraint. The attractor dynamics approach was recently successfully applied to the generation of manipulator motion trajectories avoiding collision with obstacles [1] and constraints on gripper orientation during reaching and grasping movements [2]. Continuing that body of work, this paper proposes a system which generates movements satisfying both obstacle avoidance and gripper orientation constraints simultaneously. As an extension, the additional constraint of avoiding hardware limits for joint angles is included. Properties of the resulting system are demonstrated by a systematic study generating movements with a large number of constraints in different scene setups. Specific characteristics are highlighted by several showcase example movements.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } The movement of autonomous agents in natural environments is restricted by potentially large numbers of constraints. To generate behavior that fulfills all given constraints simultaneously, the attractor dynamics approach to movement generation represents each constraint by a dynamical system with attractors or repellors at desired or undesired values of a relevant variable. These dynamical systems are transformed into vector fields over the control variables of a robotic agent that force the state of the whole system in directions beneficial to the satisfaction of the behavioral constraint. The attractor dynamics approach was recently successfully applied to the generation of manipulator motion trajectories avoiding collision with obstacles [1] and constraints on gripper orientation during reaching and grasping movements [2]. Continuing that body of work, this paper proposes a system which generates movements satisfying both obstacle avoidance and gripper orientation constraints simultaneously. As an extension, the additional constraint of avoiding hardware limits for joint angles is included. Properties of the resulting system are demonstrated by a systematic study generating movements with a large number of constraints in different scene setups. Specific characteristics are highlighted by several showcase example movements. | |
Ürün Dogan; Johann Edelbrunner; Ioannis Iossifidis Autonomous driving: A comparison of machine learning techniques by means of the prediction of lane change behavior Inproceedings 2011 IEEE International Conference on Robotics and Biomimetics, ROBIO 2011, pp. 1837–1843, 2011, ISSN: 01962892. Abstract | Links | BibTeX | Tags: @inproceedings{Dogan2011a, title = {Autonomous driving: A comparison of machine learning techniques by means of the prediction of lane change behavior}, author = {Ürün Dogan and Johann Edelbrunner and Ioannis Iossifidis}, doi = {10.1109/ROBIO.2011.6181557}, issn = {01962892}, year = {2011}, date = {2011-01-01}, booktitle = {2011 IEEE International Conference on Robotics and Biomimetics, ROBIO 2011}, pages = {1837--1843}, abstract = {In the presented work we compare machine learning techniques in the context of lane change behavior performed by humans in a semi-naturalistic simulated environment. We evaluate different learning approaches using differing feature combinations in order to identify appropriate feature, best feature combination, and the most appropriate machine learning technique for the described task. Based on the data acquired from human drivers in the traffic simulator NISYS TRS1, we trained a recurrent neural network, a feed forward neural network and a set of support vector machines. In the followed test drives the system was able to predict lane changes up to 1.5 sec in beforehand.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } In the presented work we compare machine learning techniques in the context of lane change behavior performed by humans in a semi-naturalistic simulated environment. We evaluate different learning approaches using differing feature combinations in order to identify appropriate feature, best feature combination, and the most appropriate machine learning technique for the described task. Based on the data acquired from human drivers in the traffic simulator NISYS TRS1, we trained a recurrent neural network, a feed forward neural network and a set of support vector machines. In the followed test drives the system was able to predict lane changes up to 1.5 sec in beforehand. | |
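The model comparison described in the entry above (a feed-forward network versus support vector machines on lane offset, headway, and time-to-contact features; the recurrent network is omitted here) can be prototyped along the following lines. The data below are synthetic stand-ins generated by a toy rule, not recordings from the NISYS TRS simulator, so the printed accuracies say nothing about the study's results.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Sketch of the comparison setup, on synthetic stand-in data. The feature layout
# (lane offset, headway distance, time-to-contact) mirrors the study; the values
# and labels are fabricated purely to make the pipeline runnable.

rng = np.random.default_rng(0)
n = 2000
lane_offset = rng.normal(0.0, 0.3, n)          # m from lane center
headway = rng.uniform(5.0, 80.0, n)            # m to lead vehicle
ttc = headway / rng.uniform(0.5, 8.0, n)       # crude time-to-contact proxy, s
X = np.column_stack([lane_offset, headway, ttc])
# toy rule: drifting toward the lane edge while closing in on the lead car
y = ((np.abs(lane_offset) > 0.25) & (ttc < 6.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {"SVM (RBF)": SVC(kernel="rbf", gamma="scale"),
          "feed-forward net": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, model.predict(X_te)):.2f}")
```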
D Malysiak; H Reimann; I Iossifidis Human like trajectories for humanoid robots Conference BC11 : Computational Neuroscience & Neurotechnology Bernstein Conference & Neurex Annual Meeting 2011, 2011. BibTeX | Tags: @conference{Malysiak2011, title = {Human like trajectories for humanoid robots}, author = {D Malysiak and H Reimann and I Iossifidis}, year = {2011}, date = {2011-01-01}, booktitle = {BC11 : Computational Neuroscience & Neurotechnology Bernstein Conference & Neurex Annual Meeting 2011}, keywords = {}, pubstate = {published}, tppubtype = {conference} } | |
S Noth; I Iossifidis Benefits of ego motion feedback for interactive experiments in virtual reality scenarios Conference BC11 : Computational Neuroscience & Neurotechnology Bernstein Conference & Neurex Annual Meeting 2011, 2011. BibTeX | Tags: @conference{Noth2011a, title = {Benefits of ego motion feedback for interactive experiments in virtual reality scenarios}, author = {S Noth and I Iossifidis}, year = {2011}, date = {2011-01-01}, booktitle = {BC11 : Computational Neuroscience & Neurotechnology Bernstein Conference & Neurex Annual Meeting 2011}, keywords = {}, pubstate = {published}, tppubtype = {conference} } | |
2010 |
|
Hendrik Reimann; Ioannis Iossifidis; Gregor Schoner; Gregor Schöner Integrating orientation constraints into the attractor dynamics approach for autonomous manipulation Inproceedings 2010 10th IEEE-RAS International Conference on Humanoid Robots, pp. 294–301, IEEE, 2010, ISBN: 978-1-4244-8688-5. Abstract | Links | BibTeX | Tags: @inproceedings{Reimann2010a, title = {Integrating orientation constraints into the attractor dynamics approach for autonomous manipulation}, author = {Hendrik Reimann and Ioannis Iossifidis and Gregor Schoner and Gregor Schöner}, url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5686349}, doi = {10.1109/ICHR.2010.5686349}, isbn = {978-1-4244-8688-5}, year = {2010}, date = {2010-12-01}, booktitle = {2010 10th IEEE-RAS International Conference on Humanoid Robots}, pages = {294--301}, publisher = {IEEE}, abstract = {When autonomous robots generate behavior in complex environments they must satisfy multiple different constraints such as moving toward a target, avoidance of obstacles, or alignment of the gripper with a particular orientation. It is often convenient to represent each type of constraint in a specific reference frame, so that the satisfaction of all constraints requires transformation into a shared base frame. In the attractor dynamics approach, behavior is generated as an attractor solution of a dynamical system that is formulated in such a base frame to enable control. Each constraint contributes an attractive (for targets) or repulsive (for obstacles) component to the vector field. Here we show how these dynamic contributions can be formulated in different reference frames suited to each constraint and then be transformed and integrated within the base frame. Building on earlier work, we show how the orientation of the gripper can be integrated with other constraints on the movement of the manipulator. We also show, how an attractor dynamics of “neural” activation variables can be designed that activates and deactivates the different contributions to the vector field over time to generate a sequence of component movements. As a demonstration, we treat a manipulation task in which grasping oblong cylindrical objects is decomposed into an ensemble of separate constraints that are integrated and resolved using the attractor dynamics approach. The system is implemented on the small humanoid robot Nao, and illustrated in two exemplary movement tasks.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } When autonomous robots generate behavior in complex environments they must satisfy multiple different constraints such as moving toward a target, avoidance of obstacles, or alignment of the gripper with a particular orientation. It is often convenient to represent each type of constraint in a specific reference frame, so that the satisfaction of all constraints requires transformation into a shared base frame. In the attractor dynamics approach, behavior is generated as an attractor solution of a dynamical system that is formulated in such a base frame to enable control. Each constraint contributes an attractive (for targets) or repulsive (for obstacles) component to the vector field. Here we show how these dynamic contributions can be formulated in different reference frames suited to each constraint and then be transformed and integrated within the base frame. Building on earlier work, we show how the orientation of the gripper can be integrated with other constraints on the movement of the manipulator. 
We also show, how an attractor dynamics of “neural” activation variables can be designed that activates and deactivates the different contributions to the vector field over time to generate a sequence of component movements. As a demonstration, we treat a manipulation task in which grasping oblong cylindrical objects is decomposed into an ensemble of separate constraints that are integrated and resolved using the attractor dynamics approach. The system is implemented on the small humanoid robot Nao, and illustrated in two exemplary movement tasks. | |
H Reimann; I Iossifidis; G Schöner Generating collision free reaching movements for redundant manipulators using dynamical systems Inproceedings 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5372–5379, IEEE, 2010, ISBN: 978-1-4244-6674-0. Abstract | Links | BibTeX | Tags: @inproceedings{Reimann2010b, title = {Generating collision free reaching movements for redundant manipulators using dynamical systems}, author = {H Reimann and I Iossifidis and G Schöner}, url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5650603}, doi = {10.1109/IROS.2010.5650603}, isbn = {978-1-4244-6674-0}, year = {2010}, date = {2010-10-01}, booktitle = {2010 IEEE/RSJ International Conference on Intelligent Robots and Systems}, pages = {5372--5379}, publisher = {IEEE}, abstract = {For autonomous robots to manipulate objects in unknown environments, they must be able to move their arms without colliding with nearby objects, other agents or humans. The simultaneous avoidance of multiple obstacles in real time by all link segments of a manipulator is still a hard task both in practice and in theory. We present a systematic scheme for the generation of collision free movements for redundant manipulators in scenes with arbitrarily many obstacles. Based on the dynamical systems approach to robotics, constraints are formulated as contributions to a dynamical system that erect attractors for targets and repellors for obstacles. These contributions are formulated in terms of variables relevant to each constraint and then transformed into vector fields over the manipulator joint velocity vector as an embedding space in which all constraints are simultaneously observed. We demonstrate the feasibility of the approach by implementing it on a real anthropomorphic 8-degrees-of-freedom redundant manipulator. In addition, performance is characterized by detecting failures in a systematic simulation experiment in randomized scenes with varying numbers of obstacles.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } For autonomous robots to manipulate objects in unknown environments, they must be able to move their arms without colliding with nearby objects, other agents or humans. The simultaneous avoidance of multiple obstacles in real time by all link segments of a manipulator is still a hard task both in practice and in theory. We present a systematic scheme for the generation of collision free movements for redundant manipulators in scenes with arbitrarily many obstacles. Based on the dynamical systems approach to robotics, constraints are formulated as contributions to a dynamical system that erect attractors for targets and repellors for obstacles. These contributions are formulated in terms of variables relevant to each constraint and then transformed into vector fields over the manipulator joint velocity vector as an embedding space in which all constraints are simultaneously observed. We demonstrate the feasibility of the approach by implementing it on a real anthropomorphic 8-degrees-of-freedom redundant manipulator. In addition, performance is characterized by detecting failures in a systematic simulation experiment in randomized scenes with varying numbers of obstacles. | |
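A core step in the scheme above is transforming constraint contributions defined in task space into a vector field over the joint velocities. One standard way to realize that step, shown below for a planar 3-link arm reaching a target with no obstacles, is the Jacobian pseudoinverse; the paper's exact embedding and the 8-degrees-of-freedom arm it uses are not reproduced, and geometry and gains here are assumptions.

```python
import numpy as np

# Mapping a task-space attractor velocity into joint velocities for a planar
# 3-link arm via the Jacobian pseudoinverse. A generic sketch of this one step,
# under assumed link lengths and gains; not the paper's formulation and without
# the obstacle and joint-limit contributions.

LENGTHS = np.array([0.3, 0.3, 0.2])           # link lengths (m), assumed

def fkine(q):
    th = np.cumsum(q)                         # absolute link angles
    return np.array([np.sum(LENGTHS * np.cos(th)), np.sum(LENGTHS * np.sin(th))])

def jacobian(q):
    th = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(LENGTHS[i:] * np.sin(th[i:]))
        J[1, i] = np.sum(LENGTHS[i:] * np.cos(th[i:]))
    return J

q = np.array([0.3, 0.5, 0.4])                 # initial joint configuration (rad)
target = np.array([0.45, 0.35])               # reachable end-effector target (m)
for _ in range(300):
    xdot = 2.0 * (target - fkine(q))          # task-space attractor toward the target
    q += 0.02 * np.linalg.pinv(jacobian(q)) @ xdot   # least-norm joint velocities
print("residual end-effector error [m]:", np.round(fkine(q) - target, 4))
```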
S K U Zibner; C Faubel; I Iossifidis; G Schöner; J P Spencer Scenes and tracking with dynamic neural fields: How to update a robotic scene representation Inproceedings 2010 IEEE 9th International Conference on Development and Learning, ICDL-2010 - Conference Program, 2010, ISBN: 9781424469024. Abstract | Links | BibTeX | Tags: @inproceedings{Zibner2010, title = {Scenes and tracking with dynamic neural fields: How to update a robotic scene representation}, author = {S K U Zibner and C Faubel and I Iossifidis and G Schöner and J P Spencer}, doi = {10.1109/DEVLRN.2010.5578837}, isbn = {9781424469024}, year = {2010}, date = {2010-01-01}, booktitle = {2010 IEEE 9th International Conference on Development and Learning, ICDL-2010 - Conference Program}, abstract = {We present an architecture based on the Dynamic Field Theory for the problem of scene representation. At the core of this architecture are three-dimensional neural fields linking feature to spatial information. These three-dimensional fields are coupled to lower-dimensional fields that provide both a close link to the sensory surface and a close link to motor behavior. We highlight the updating mechanism of this architecture, both when a single object is selected and followed by the robot's head in smooth pursuit and in multi-item tracking when several items move simultaneously. © 2010 IEEE.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } We present an architecture based on the Dynamic Field Theory for the problem of scene representation. At the core of this architecture are three-dimensional neural fields linking feature to spatial information. These three-dimensional fields are coupled to lower-dimensional fields that provide both a close link to the sensory surface and a close link to motor behavior. We highlight the updating mechanism of this architecture, both when a single object is selected and followed by the robot's head in smooth pursuit and in multi-item tracking when several items move simultaneously. © 2010 IEEE. | |
S K U Zibner; C Faubel; I Iossifidis; G Schöner Scene representation for anthropomorphic robots: A dynamic neural field approach Inproceedings Joint 41st International Symposium on Robotics and 6th German Conference on Robotics 2010, ISR/ROBOTIK 2010, 2010, ISBN: 9781617387197. @inproceedings{Zibner2010b, title = {Scene representation for anthropomorphic robots: A dynamic neural field approach}, author = {S K U Zibner and C Faubel and I Iossifidis and G Schöner}, isbn = {9781617387197}, year = {2010}, date = {2010-01-01}, booktitle = {Joint 41st International Symposium on Robotics and 6th German Conference on Robotics 2010, ISR/ROBOTIK 2010}, volume = {2}, abstract = {For autonomous robotic systems, the ability to represent a scene, to memorize and track objects and their associated features is a prerequisite for reasonable interactive behavior. In this paper, we present a biologically inspired architecture for scene representation that is based on Dynamic Field Theory. At the core of the architecture we make use of three-dimensional Dynamic Neural Fields for representing space-feature associations. These associations are built up autonomously in a sequential way and they are maintained and continuously updated. We demonstrate these capabilities in two experiments on an anthropomorphic robotic platform. In the first experiment we show the sequential scanning of a scene. The second experiment demonstrates the maintenance of associations for objects, which get out of view, and the correct update of the scene representation, if such objects are removed.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } For autonomous robotic systems, the ability to represent a scene, to memorize and track objects and their associated features is a prerequisite for reasonable interactive behavior. In this paper, we present a biologically inspired architecture for scene representation that is based on Dynamic Field Theory. At the core of the architecture we make use of three-dimensional Dynamic Neural Fields for representing space-feature associations. These associations are built up autonomously in a sequential way and they are maintained and continuously updated. We demonstrate these capabilities in two experiments on an anthropomorphic robotic platform. In the first experiment we show the sequential scanning of a scene. The second experiment demonstrates the maintenance of associations for objects, which get out of view, and the correct update of the scene representation, if such objects are removed. | |
Matthias Grimm; Ioannis Iossifidis Behavioral Organization for Mobile Robotic Systems: An Attractor Dynamics Approach Inproceedings ISR / ROBOTIK 2010, Munich, Germany, 2010. @inproceedings{Grimm2010, title = {Behavioral Organization for Mobile Robotic Systems: An Attractor Dynamics Approach}, author = {Matthias Grimm and Ioannis Iossifidis}, year = {2010}, date = {2010-01-01}, booktitle = {ISR / ROBOTIK 2010}, address = {Munich, Germany}, abstract = {Autonomous systems generate different behaviors based on the perceived environmental situation. The organization of a set of behaviors plays an important role in the field of autonomous robotics. The organization architecture must be flexible, so that behavioral changes are possible if the sensory information changes. Furthermore, behavioral organization must be stable, so that small changes in sensory information do not lead to oscillations. To achieve this, all behaviors, but also the underlying organization architecture, are based on continuous dynamical systems. They are characterized by a set of dynamical variables, also referred to as state variables. These variables represent the activation or deactivation of a particular behavior. Elementary behaviors are dependent on the sensor input in a way, that changes of the sensorial information lead to qualitatively different behaviors. The so-called sensor context denotes whether a behavior is applicable in the current sensor situation or not. However, for complex systems consisting of many elementary behaviors, it is necessary to take logical conditions into account to generate a sequence of behaviors. Furthermore, some elementary behaviors can or even must run in parallel, while others exclude each other. This internal information requires knowledge about the logical interaction of the behaviors and is stored within binary matrices. This makes the overall organization structure very flexible and easy to extend. We present the architecture using the example of approaching and passing a door. The robot has to navigate from one room to another while simultaneously avoiding obstacles in its pathway.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Autonomous systems generate different behaviors based on the perceived environmental situation. The organization of a set of behaviors plays an important role in the field of autonomous robotics. The organization architecture must be flexible, so that behavioral changes are possible if the sensory information changes. Furthermore, behavioral organization must be stable, so that small changes in sensory information do not lead to oscillations. To achieve this, all behaviors, but also the underlying organization architecture, are based on continuous dynamical systems. They are characterized by a set of dynamical variables, also referred to as state variables. These variables represent the activation or deactivation of a particular behavior. Elementary behaviors are dependent on the sensor input in a way, that changes of the sensorial information lead to qualitatively different behaviors. The so-called sensor context denotes whether a behavior is applicable in the current sensor situation or not. However, for complex systems consisting of many elementary behaviors, it is necessary to take logical conditions into account to generate a sequence of behaviors. Furthermore, some elementary behaviors can or even must run in parallel, while others exclude each other. 
This internal information requires knowledge about the logical interaction of the behaviors and is stored within binary matrices. This makes the overall organization structure very flexible and easy to extend. We present the architecture using the example of approaching and passing a door. The robot has to navigate from one room to another while simultaneously avoiding obstacles in its pathway. | |
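The activation and deactivation of elementary behaviors described in the entry above can be illustrated with competitive neural dynamics: each behavior has an activation variable driven by its sensor context and inhibited by incompatible behaviors, so that exactly one of two mutually exclusive behaviors wins and switching is hysteretic rather than oscillatory. The sketch below is a generic two-node competition with assumed parameters, not the paper's binary-matrix architecture.

```python
import numpy as np

# Competitive activation dynamics for two mutually exclusive behaviors
# (e.g. "approach door" vs. "avoid obstacle"): each node is excited by its
# sensor context and inhibits the other. A generic sketch with assumed
# parameters, not the binary-matrix formulation of the paper.

def sigmoid(u, beta=5.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

def relax(context, steps=3000, dt=0.01, tau=0.2, h=-2.0, c_self=3.0, c_inh=4.0):
    u = np.full(2, h)                         # activation of behavior 0 and 1
    for _ in range(steps):
        f = sigmoid(u)
        du = -u + h + context + c_self * f - c_inh * f[::-1]
        u += dt / tau * du
    return sigmoid(u)                         # ~1 means "behavior active"

print("context favors behavior 0:", np.round(relax(np.array([3.0, 2.0])), 2))
print("context favors behavior 1:", np.round(relax(np.array([2.0, 3.0])), 2))
```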
Stephan K U Zibner; Christian Faubel; John P Spencer; Ioannis Iossifidis; Gregor Schöner Scenes and Tracking with Dynamic Neural Fields: How to Update a Robotic Scene Representation Inproceedings Proc. Int. Conf. on Development and Learning (ICDL10), 2010. BibTeX | Tags: @inproceedings{Zibner2010c, title = {Scenes and Tracking with Dynamic Neural Fields: How to Update a Robotic Scene Representation}, author = {Stephan K U Zibner and Christian Faubel and John P Spencer and Ioannis Iossifidis and Gregor Schöner}, year = {2010}, date = {2010-01-01}, booktitle = {Proc. Int. Conf. on Development and Learning (ICDL10)}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } | |
Yulia Sandamirskaya; John Lipinski; Ioannis Iossifidis; G Schöner Natural human-robot interaction through spatial language: a dynamic neural fields approach Inproceedings Proc. 19th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2010), pp. 600–607, IEEE, 2010, ISSN: 1944-9445. @inproceedings{Sandamirskayasubmitted, title = {Natural human-robot interaction through spatial language: a dynamic neural fields approach}, author = {Yulia Sandamirskaya and John Lipinski and Ioannis Iossifidis and G Schöner}, url = {http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5598671}, issn = {1944-9445}, year = {2010}, date = {2010-01-01}, booktitle = {Proc. 19th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2010)}, pages = {600--607}, publisher = {IEEE}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } | |
M A Grimm; I Iossifidis Behavioral organization for mobile robotic systems: An attractor dynamics approach Inproceedings Joint 41st International Symposium on Robotics and 6th German Conference on Robotics 2010, ISR/ROBOTIK 2010, 2010, ISBN: 9781617387197. @inproceedings{Grimm2010a, title = {Behavioral organization for mobile robotic systems: An attractor dynamics approach}, author = {M A Grimm and I Iossifidis}, isbn = {9781617387197}, year = {2010}, date = {2010-01-01}, booktitle = {Joint 41st International Symposium on Robotics and 6th German Conference on Robotics 2010, ISR/ROBOTIK 2010}, volume = {1}, abstract = {In this paper we describe an architecture for behavioral organization based on dynamical systems. This architecture enables the generation of complex behavioral sequences, which is demonstrated using the example of approaching and passing a door. The behavioral sequence is generated by activating and deactivating the elementary behaviors dependent on sensory information and internal logical conditions. The architecture is demonstrated on a mobile KOALA robot and in simulation as well.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } In this paper we describe an architecture for behavioral organization based on dynamical systems. This architecture enables the generation of complex behavioral sequences, which is demonstrated using the example of approaching and passing a door. The behavioral sequence is generated by activating and deactivating the elementary behaviors dependent on sensory information and internal logical conditions. The architecture is demonstrated on a mobile KOALA robot and in simulation as well. | |
Stephan K U Zibner; Christian Faubel; Ioannis Iossifidis; Gregor Schöner Scene Representation with Dynamic Neural Fields: An Example of Complex Cognitive Architectures Based on Dynamic Neural Field Theory Inproceedings Proc. Int. Conf. on Development and Learning (ICDL10), 2010. BibTeX | Tags: @inproceedings{Zibnersubmittedb, title = {Scene Representation with Dynamic Neural Fields: An Example of Complex Cognitive Architectures Based on Dynamic Neural Field Theory}, author = {Stephan K U Zibner and Christian Faubel and Ioannis Iossifidis and Gregor Schöner}, year = {2010}, date = {2010-01-01}, booktitle = {Proc. Int. Conf. on Development and Learning (ICDL10)}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } | |
Stephan K U Zibner; Christian Faubel; Ioannis Iossifidis; Gregor Schöner Scene Representation Based on Dynamic Field Theory: From Human to Machine Journal Article Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience, 2010. @article{Zibner2010a, title = {Scene Representation Based on Dynamic Field Theory: From Human to Machine}, author = {Stephan K U Zibner and Christian Faubel and Ioannis Iossifidis and Gregor Schöner}, doi = {10.3389/conf.fncom.2010.51.00019}, year = {2010}, date = {2010-01-01}, journal = {Front. Comput. Neurosci. Conference Abstract: Bernstein Conference on Computational Neuroscience}, keywords = {}, pubstate = {published}, tppubtype = {article} } | |
Stephan S K U Zibner; Christian Faubel; Ioannis Iossifidis; Gregor Schöner Scene Representation for Anthropomorphic Robots: A Dynamic Neural Field Approach Inproceedings ISR / ROBOTIK 2010, VDE VERLAG GmbH, Munich, Germany, 2010. Abstract | Links | BibTeX | Tags: @inproceedings{Zibner2010ab, title = {Scene Representation for Anthropomorphic Robots: A Dynamic Neural Field Approach}, author = {Stephan S K U Zibner and Christian Faubel and Ioannis Iossifidis and Gregor Schöner}, url = {http://www.vde-verlag.de/proceedings-en/453273138.html}, year = {2010}, date = {2010-01-01}, booktitle = {ISR / ROBOTIK 2010}, number = {Isr}, publisher = {VDE VERLAG GmbH}, address = {Munich, Germany}, abstract = {An internal representation of a scene is essential to generate actions on scene objects. A stabilized storage of object location and features offers the flexibility to process queries phrased in human-based terms relating to objects, which may not be in the current camera view. Scene representation is therefore an internal representation of the surrounding world that is stabilized against head and body movement. It contains associated information about location and features of objects. Because objects and bodies move, scene representation is not a one-time process, but a constantly scene- adapting mechanism of scanning for, storing, updating, and deleting information. Our novel architecture incorporates the generation of autonomous scanning sequences on real-time camera images. The head can then be oriented towards a selected object and the color feature can be extracted. Object location and feature information are associatively stored in a three-dimensional Dynamic Neural Field. Changes in the scene, even for multiple objects, can be tracked simultaneously. The stored information is used to generate behavior for cued recall. Cues can be table regions, features, or object labels. The robot demonstrates a successful recall by centering its gaze on the stated object.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } An internal representation of a scene is essential to generate actions on scene objects. A stabilized storage of object location and features offers the flexibility to process queries phrased in human-based terms relating to objects, which may not be in the current camera view. Scene representation is therefore an internal representation of the surrounding world that is stabilized against head and body movement. It contains associated information about location and features of objects. Because objects and bodies move, scene representation is not a one-time process, but a constantly scene- adapting mechanism of scanning for, storing, updating, and deleting information. Our novel architecture incorporates the generation of autonomous scanning sequences on real-time camera images. The head can then be oriented towards a selected object and the color feature can be extracted. Object location and feature information are associatively stored in a three-dimensional Dynamic Neural Field. Changes in the scene, even for multiple objects, can be tracked simultaneously. The stored information is used to generate behavior for cued recall. Cues can be table regions, features, or object labels. The robot demonstrates a successful recall by centering its gaze on the stated object. | |
Sebastian Noth; Eva Schrowangen; Ioannis Iossifidis Using ego motion feedback to improve the immersion in virtual reality environments Inproceedings ISR / ROBOTIK 2010, Munich, Germany, 2010. @inproceedings{Noth2010, title = {Using ego motion feedback to improve the immersion in virtual reality environments}, author = {Sebastian Noth and Eva Schrowangen and Ioannis Iossifidis}, year = {2010}, date = {2010-01-01}, booktitle = {ISR / ROBOTIK 2010}, address = {Munich, Germany}, abstract = {To study driver behavior we set up a lab with fixed base driving simulators. In order to compensate for the lack of physical feedback in this scenario, we aimed for another means of increasing the realism of our system. In the following, we propose an efficient method of head tracking and its integration in our driving simulation. Furthermore, we illuminate why this is a promising boost of the subjects immersion in the virtual world. Our idea for increasing the feeling of immersion is to give the subject feedback on head movements relative to the screen. A real driver sometimes moves his head in order to see something better or to look behind an occluding object. In addition to these intentional movements, a study conducted by Zirkovitz and Harris has revealed that drivers involuntarily tilt their heads when they go around corners in order to maximize the use of visual information available in the scene. Our system reflects the visual changes of any head movement and hence gives feedback on both involuntary and intentional motion. If, for example, subjects move to the left, they will see more from the right-hand side of the scene. If, on the other hand, they move upwards, a larger fraction of the engine hood will be visible. The same holds for the rear view mirror.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } To study driver behavior we set up a lab with fixed base driving simulators. In order to compensate for the lack of physical feedback in this scenario, we aimed for another means of increasing the realism of our system. In the following, we propose an efficient method of head tracking and its integration in our driving simulation. Furthermore, we illuminate why this is a promising boost of the subjects immersion in the virtual world. Our idea for increasing the feeling of immersion is to give the subject feedback on head movements relative to the screen. A real driver sometimes moves his head in order to see something better or to look behind an occluding object. In addition to these intentional movements, a study conducted by Zirkovitz and Harris has revealed that drivers involuntarily tilt their heads when they go around corners in order to maximize the use of visual information available in the scene. Our system reflects the visual changes of any head movement and hence gives feedback on both involuntary and intentional motion. If, for example, subjects move to the left, they will see more from the right-hand side of the scene. If, on the other hand, they move upwards, a larger fraction of the engine hood will be visible. The same holds for the rear view mirror. | |
Stephan Zibner; Christian Faubel; Ioannis Iossifidis; Gregor Schöner; John P Spencer Scene and Tracking with Dynamic Neural Field Approach Inproceedings ISR / ROBOTIK 2010, Munich, Germany, 2010. @inproceedings{Zibneri, title = {Scene and Tracking with Dynamic Neural Field Approach}, author = {Stephan Zibner and Christian Faubel and Ioannis Iossifidis and Gregor Schöner and John P Spencer}, year = {2010}, date = {2010-01-01}, booktitle = {ISR / ROBOTIK 2010}, address = {Munich, Germany}, abstract = {An internal representation of a scene is essential to generate actions on scene objects. A stabilized storage of object location and features offers the flexibility to process queries phrased in human-based terms relating to objects, which may not be in the current camera view. Scene representation is therefore an internal representation of the surrounding world that is stabilized against head and body movement. It contains associated information about location and features of objects. Because objects and bodies move, scene representation is not a one-time process, but a constantly scene- adapting mechanism of scanning for, storing, updating, and deleting information. Our novel architecture incorporates the generation of autonomous scanning sequences on real-time camera images. The head can then be oriented towards a selected object and the color feature can be extracted. Object location and feature information are associatively stored in a three-dimensional Dynamic Neural Field. Changes in the scene, even for multiple objects, can be tracked simultaneously. The stored information is used to generate behavior for cued recall. Cues can be table regions, features, or object labels. The robot demonstrates a successful recall by centering its gaze on the stated object.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } An internal representation of a scene is essential to generate actions on scene objects. A stabilized storage of object location and features offers the flexibility to process queries phrased in human-based terms relating to objects, which may not be in the current camera view. Scene representation is therefore an internal representation of the surrounding world that is stabilized against head and body movement. It contains associated information about location and features of objects. Because objects and bodies move, scene representation is not a one-time process, but a constantly scene- adapting mechanism of scanning for, storing, updating, and deleting information. Our novel architecture incorporates the generation of autonomous scanning sequences on real-time camera images. The head can then be oriented towards a selected object and the color feature can be extracted. Object location and feature information are associatively stored in a three-dimensional Dynamic Neural Field. Changes in the scene, even for multiple objects, can be tracked simultaneously. The stored information is used to generate behavior for cued recall. Cues can be table regions, features, or object labels. The robot demonstrates a successful recall by centering its gaze on the stated object. | |
Hendrik Reimann; Ioannis Iossifidis; Gregor Schöner End-effector obstacle avoidance using multiple dynamic variables Inproceedings ISR / ROBOTIK 2010, Munich, Germany, 2010. @inproceedings{Reimannf, title = {End-effector obstacle avoidance using multiple dynamic variables}, author = {Hendrik Reimann and Ioannis Iossifidis and Gregor Schöner}, year = {2010}, date = {2010-01-01}, booktitle = {ISR / ROBOTIK 2010}, address = {Munich, Germany}, abstract = {The avoidance of obstacles is a crucial part of the generation of behavior for autonomos robotic agents. A standard method to produce trajectories to a given target that avoids a number of possibly mobile obstacles is the potential field approach introduced by Khatib, where an artificial potential field is constructed around target and obstacles, with the target acting as a global minimum and the obstacles as local maxima, the gradient of which is used to determine the (artificial) force acting on the robot at any moment. While the potential field approach has been used extensively for vehicle motion in a plane, applications for robotic manipulators suffer from a high level of complexity due to the formulation of constraints as forces necessitating the inclusion of dynamic properties of the manipulator into the system. We pursue a different solution to the problem of manipulator obstacle avoidance based on the dynamic approach to robotics, which states that all behavioral constraints for the generation of movement should be formulated as attractors or repellors of a dynamical systems. The problem of behavior design is thus separated from the control problem of how to realize the designed behavior, bringing the advantage of simplicity in the formulation of the former.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } The avoidance of obstacles is a crucial part of the generation of behavior for autonomos robotic agents. A standard method to produce trajectories to a given target that avoids a number of possibly mobile obstacles is the potential field approach introduced by Khatib, where an artificial potential field is constructed around target and obstacles, with the target acting as a global minimum and the obstacles as local maxima, the gradient of which is used to determine the (artificial) force acting on the robot at any moment. While the potential field approach has been used extensively for vehicle motion in a plane, applications for robotic manipulators suffer from a high level of complexity due to the formulation of constraints as forces necessitating the inclusion of dynamic properties of the manipulator into the system. We pursue a different solution to the problem of manipulator obstacle avoidance based on the dynamic approach to robotics, which states that all behavioral constraints for the generation of movement should be formulated as attractors or repellors of a dynamical systems. The problem of behavior design is thus separated from the control problem of how to realize the designed behavior, bringing the advantage of simplicity in the formulation of the former. | |
2009 |
|
M Tuma; I Iossifidis; G Schöner Temporal stabilization of discrete movement in variable environments: An attractor dynamics approach Inproceedings 2009 IEEE International Conference on Robotics and Automation, pp. 863–868, IEEE, 2009, ISBN: 978-1-4244-2788-8. Abstract | Links | BibTeX | Tags: @inproceedings{Tuma2009, title = {Temporal stabilization of discrete movement in variable environments: An attractor dynamics approach}, author = {M Tuma and I Iossifidis and G Schöner}, url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5152562}, doi = {10.1109/ROBOT.2009.5152562}, isbn = {978-1-4244-2788-8}, year = {2009}, date = {2009-05-01}, booktitle = {2009 IEEE International Conference on Robotics and Automation}, pages = {863--868}, publisher = {IEEE}, abstract = {The ability to generate discrete movement with distinct and stable time courses is important for interaction scenarios both between different robots and with human partners, for catching and interception tasks, and for timed action sequences. In dynamic environments, where trajectories are evolving online, this is not a trivial task. The dynamical systems approach to robotics provides a framework for robust incorporation of fluctuating sensor information, but control of movement time is usually restricted to rhythmic motion and realized through stable limit cycles. The present work uses a Hopf oscillator to produce discrete motion and formulates an online adaptation rule to stabilize total movement time against a wide range of disturbances. This is integrated into a dynamical systems framework for the sequencing of movement phases and for directional navigation, using 2D-planar motion as an example. The approach is demonstrated on a Khepera mobile unit in order to show its reliability even when depending on low-level sensor information.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } The ability to generate discrete movement with distinct and stable time courses is important for interaction scenarios both between different robots and with human partners, for catching and interception tasks, and for timed action sequences. In dynamic environments, where trajectories are evolving online, this is not a trivial task. The dynamical systems approach to robotics provides a framework for robust incorporation of fluctuating sensor information, but control of movement time is usually restricted to rhythmic motion and realized through stable limit cycles. The present work uses a Hopf oscillator to produce discrete motion and formulates an online adaptation rule to stabilize total movement time against a wide range of disturbances. This is integrated into a dynamical systems framework for the sequencing of movement phases and for directional navigation, using 2D-planar motion as an example. The approach is demonstrated on a Khepera mobile unit in order to show its reliability even when depending on low-level sensor information. | |
Matthias Tuma; Ioannis Iossifidis; Gregor Schöner Temporal Stabilization of Discrete Movement in Variable Environments: An Attractor Dynamics Approach Inproceedings Proc. IEEE International Conference on Robotics and Automation ICRA '09, pp. 863–868, Kobe, Japan, 2009. @inproceedings{Tuma2009b, title = {Temporal Stabilization of Discrete Movement in Variable Environments: An Attractor Dynamics Approach}, author = {Matthias Tuma and Ioannis Iossifidis and Gregor Schöner}, year = {2009}, date = {2009-01-01}, booktitle = {Proc. IEEE International Conference on Robotics and Automation ICRA '09}, pages = {863--868}, address = {Kobe, Japan}, abstract = {The ability to generate discrete movement with distinct and stable time courses is important for interaction scenarios both between different robots and with human partners, for catching and interception tasks, and for timed action sequences. In dynamic environments, where trajectories are evolving on-line, this is not a trivial task. The dynamical systems approach to robotics provides a framework for robust incorporation of fluctuating sensor information, but control of movement time is usually restricted to rhythmic motion and realized through stable limit cycles. The present work uses a Hopf oscillator to produce discrete motion and formulates an on-line adaptation rule to stabilize total movement time against a wide range of disturbances. This is integrated into a dynamical systems framework for the sequencing of movement phases and for directional navigation, using 2D-planar motion as an example. The approach is demonstrated on a Khepera mobile unit in order to show its reliability even when depending on low-level sensor information.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } The ability to generate discrete movement with distinct and stable time courses is important for interaction scenarios both between different robots and with human partners, for catching and interception tasks, and for timed action sequences. In dynamic environments, where trajectories are evolving on-line, this is not a trivial task. The dynamical systems approach to robotics provides a framework for robust incorporation of fluctuating sensor information, but control of movement time is usually restricted to rhythmic motion and realized through stable limit cycles. The present work uses a Hopf oscillator to produce discrete motion and formulates an on-line adaptation rule to stabilize total movement time against a wide range of disturbances. This is integrated into a dynamical systems framework for the sequencing of movement phases and for directional navigation, using 2D-planar motion as an example. The approach is demonstrated on a Khepera mobile unit in order to show its reliability even when depending on low-level sensor information. | |
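The timing mechanism described in the two entries above can be illustrated with the Hopf normal form: on its limit cycle the state rotates at constant angular speed, so one half revolution times a discrete movement of duration roughly pi/omega, and omega is the natural handle for the online adaptation of movement time. The adaptation rule itself is not reproduced here; parameter values are illustrative.

```python
import numpy as np

# Hopf oscillator in normal form. One half revolution of its limit cycle times
# a discrete movement of duration ~ pi/omega; x(t) ~ cos(omega*t) can drive a
# normalized movement coordinate. The online adaptation of omega against
# disturbances is not reproduced; parameter values are assumptions.

mu = 1.0                      # squared limit-cycle radius
omega = np.pi / 1.5           # angular speed -> intended movement time of 1.5 s
dt = 0.001
x, y, t = 1.0, 0.0, 0.0       # start on the limit cycle

while y >= 0.0:               # integrate one half cycle (until y crosses zero downward)
    r2 = x * x + y * y
    x, y = x + dt * ((mu - r2) * x - omega * y), y + dt * ((mu - r2) * y + omega * x)
    t += dt

print(f"half-cycle duration: {t:.3f} s (target 1.500 s)")
```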
Ioannis Iossifidis; Gregor Schöner Reaching while avoiding obstacles: a neuronally inspired attractor dynamics approach Inproceedings Bernstein Conference on Computational Neuroscience (BCCN 2009), 2009. @inproceedings{Iossifidis2009, title = {Reaching while avoiding obstacles: a neuronally inspired attractor dynamics approach}, author = {Ioannis Iossifidis and Gregor Schöner}, doi = {10.3389/conf.neuro.10.2009.14.007}, year = {2009}, date = {2009-01-01}, booktitle = {Bernstein Conference on Computational Neuroscience (BCCN 2009)}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } | |
2008 |
|
Ueruen Dogan; Hannes Edelbrunner; Ioannis Iossifidis Towards a Driver Model: Preliminary Study of Lane Change Behavior Inproceedings 2008 11th International IEEE Conference on Intelligent Transportation Systems, pp. 931–937, IEEE, 2008, ISBN: 978-1-4244-2111-4. Abstract | Links | BibTeX | Tags: @inproceedings{Dogan2008b, title = {Towards a Driver Model: Preliminary Study of Lane Change Behavior}, author = {Ueruen Dogan and Hannes Edelbrunner and Ioannis Iossifidis}, url = {http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4732700}, doi = {10.1109/ITSC.2008.4732700}, isbn = {978-1-4244-2111-4}, year = {2008}, date = {2008-10-01}, booktitle = {2008 11th International IEEE Conference on Intelligent Transportation Systems}, pages = {931--937}, publisher = {IEEE}, abstract = {The presented work formulates an framework in which early prediction of drivers lane change behavior is realized. We aim to build a representation of drivers lane change behavior in order to recognize and to predict driver's intentions as a first step towards a realistic driver model. In the test bed of the Institute of Neuroinformatik, based on the traffic simulator NISYS TRS 1, 10 individuals have driven in the experiments and they performed more then 150 lane change maneuvers. Lane-offset, distance to the front car and time to contact, were recorded. The acquired data was used to train - in parallel- a recurrent neural network, a feed forward neural network and a set of support vector machines. In the followed test drives the system was able of performing a lane change prediction time of 1.5 sec beforehand. The proposed approach describes a framework for lane-change detection and prediction, which will serve as a prerequisite for a successful driver model.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } The presented work formulates an framework in which early prediction of drivers lane change behavior is realized. We aim to build a representation of drivers lane change behavior in order to recognize and to predict driver's intentions as a first step towards a realistic driver model. In the test bed of the Institute of Neuroinformatik, based on the traffic simulator NISYS TRS 1, 10 individuals have driven in the experiments and they performed more then 150 lane change maneuvers. Lane-offset, distance to the front car and time to contact, were recorded. The acquired data was used to train - in parallel- a recurrent neural network, a feed forward neural network and a set of support vector machines. In the followed test drives the system was able of performing a lane change prediction time of 1.5 sec beforehand. The proposed approach describes a framework for lane-change detection and prediction, which will serve as a prerequisite for a successful driver model. | |
Urun Dogan; Johann Edelbrunner; Ioannis Iossifidis Towards a Driver Model: Preliminary Study of Lane Change Behavior Inproceedings Proc. 11th International IEEE Conference on Intelligent Transportation Systems ITSC 2008, pp. 931–937, Beijing, China, 2008. Abstract | Links | BibTeX | Tags: digital simulation, driver information systems, driver model, drivers lane change behavior prediction, feed forward neural network, feedforward neural nets, lane change maneuvers, NISYS TRS, recurrent neural nets, recurrent neural network, support vector machines, traffic simulator @inproceedings{Dogan2008, title = {Towards a Driver Model: Preliminary Study of Lane Change Behavior}, author = {Urun Dogan and Johann Edelbrunner and Ioannis Iossifidis}, doi = {10.1109/ITSC.2008.4732700}, year = {2008}, date = {2008-01-01}, booktitle = {Proc. 11th International IEEE Conference on Intelligent Transportation Systems ITSC 2008}, pages = {931--937}, address = {Beijing, China}, abstract = {The presented work formulates an framework in which early prediction of drivers lane change behavior is realized. We aim to build a representation of drivers lane change behavior in order to recognize and to predict driver's intentions as a first step towards a realistic driver model. In the test bed of the Institute of Neuroinformatik, based on the traffic simulator NISYS TRS 1, 10 individuals have driven in the experiments and they performed more then 150 lane change maneuvers. Lane-offset, distance to the front car and time to contact, were recorded. The acquired data was used to train - in parallel- a recurrent neural network, a feed forward neural network and a set of support vector machines. In the followed test drives the system was able of performing a lane change prediction time of 1.5 sec beforehand. The proposed approach describes a framework for lane-change detection and prediction, which will serve as a prerequisite for a successful driver model.}, keywords = {digital simulation, driver information systems, driver model, drivers lane change behavior prediction, feed forward neural network, feedforward neural nets, lane change maneuvers, NISYS TRS, recurrent neural nets, recurrent neural network, support vector machines, traffic simulator}, pubstate = {published}, tppubtype = {inproceedings} } The presented work formulates an framework in which early prediction of drivers lane change behavior is realized. We aim to build a representation of drivers lane change behavior in order to recognize and to predict driver's intentions as a first step towards a realistic driver model. In the test bed of the Institute of Neuroinformatik, based on the traffic simulator NISYS TRS 1, 10 individuals have driven in the experiments and they performed more then 150 lane change maneuvers. Lane-offset, distance to the front car and time to contact, were recorded. The acquired data was used to train - in parallel- a recurrent neural network, a feed forward neural network and a set of support vector machines. In the followed test drives the system was able of performing a lane change prediction time of 1.5 sec beforehand. The proposed approach describes a framework for lane-change detection and prediction, which will serve as a prerequisite for a successful driver model. | |
Gregor Schöner; Ioannis Iossifidis Auf gute Zusammenarbeit Journal Article Gehirn und Geist, Spektrum der Wissenschaft, 3, 2008. BibTeX | Tags: @article{Schoener2008, title = {Auf gute Zusammenarbeit}, author = {Gregor Schöner and Ioannis Iossifidis}, year = {2008}, date = {2008-01-01}, journal = {Gehirn und Geist, Spektrum der Wissenschaft}, volume = {3}, keywords = {}, pubstate = {published}, tppubtype = {article} } | |
Hendrik Reimann; Ioannis Iossifidis Mathematical and Simulation Framework for Arbitrary Open Chain Manipulators Technical Report Institut für Neuroinformatik, Ruhr-Universität Bochum (IRINI 2008-03), 2008. BibTeX | Tags: @techreport{Reimann2008, title = {Mathematical and Simulation Framework for Arbitrary Open Chain Manipulators}, author = {Hendrik Reimann and Ioannis Iossifidis}, year = {2008}, date = {2008-01-01}, number = {IRINI 2008-03}, institution = {Institut für Neuroinformatik, Ruhr-Universität Bochum}, keywords = {}, pubstate = {published}, tppubtype = {techreport} } | |
2006 |
|
Ioannis Iossifidis; Gregor Schöner Dynamical Systems Approach for the Autonomous Avoidance of Obstacles and Joint-limits for an Redundant Robot Arm Inproceedings 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 580–585, IEEE, 2006, ISBN: 1-4244-0258-1. Abstract | Links | BibTeX | Tags: anthropomorphic arm, attractor dynamics approach, autonomous obstacle avoidance, collision avoidance, dynamical systems approach, manipulator dynamics, primate central nervous system, redundant manipulators, redundant robot arm, telerobotics @inproceedings{Iossifidis2006a, title = {Dynamical Systems Approach for the Autonomous Avoidance of Obstacles and Joint-limits for an Redundant Robot Arm}, author = {Ioannis Iossifidis and Gregor Schöner}, doi = {10.1109/IROS.2006.282468}, isbn = {1-4244-0258-1}, year = {2006}, date = {2006-10-01}, booktitle = {2006 IEEE/RSJ International Conference on Intelligent Robots and Systems}, pages = {580--585}, publisher = {IEEE}, abstract = {We extend the attractor dynamics approach to generate goal-directed movement of a redundant, anthropomorphic arm while avoiding dynamic obstacles and respecting joint limits. To make the robot's movements human-like, we generate approximately straight-line trajectories by using two heading direction angles of the tool-point quite analogously to how movement is represented in the primate central nervous system. Two additional angles control the tool's spatial orientation so that it follows the tool-point's collision-free path. A fifth equation governs the redundancy angle, which controls the elevation of the elbow so as to avoid obstacles and respect joint limits. These variables make it possible to generate movement while sitting in an attractor (or, in the language of the potential field approach, in a minimum). We demonstrate the approach on an assistant robot, which interacts with human users in a shared workspace.}, keywords = {anthropomorphic arm, attractor dynamics approach, autonomous obstacle avoidance, collision avoidance, dynamical systems approach, manipulator dynamics, primate central nervous system, redundant manipulators, redundant robot arm, telerobotics}, pubstate = {published}, tppubtype = {inproceedings} } We extend the attractor dynamics approach to generate goal-directed movement of a redundant, anthropomorphic arm while avoiding dynamic obstacles and respecting joint limits. To make the robot's movements human-like, we generate approximately straight-line trajectories by using two heading direction angles of the tool-point quite analogously to how movement is represented in the primate central nervous system. Two additional angles control the tool's spatial orientation so that it follows the tool-point's collision-free path. A fifth equation governs the redundancy angle, which controls the elevation of the elbow so as to avoid obstacles and respect joint limits. These variables make it possible to generate movement while sitting in an attractor (or, in the language of the potential field approach, in a minimum). We demonstrate the approach on an assistant robot, which interacts with human users in a shared workspace. | |
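The abstract above casts reaching as attractor dynamics over a small set of heading and redundancy angles: the target direction forms an attractor, obstacle directions contribute repellors, and the movement sits in (or tracks) the resulting attractor. As a rough numerical illustration of this general form, the sketch below integrates a single heading-direction variable; the specific functional forms and all parameter values are assumptions in the spirit of the attractor dynamics literature, not the equations of this paper, which additionally governs orientation and elbow-elevation angles.

# Minimal sketch (assumed parameters): heading direction phi relaxes toward an
# attractor at the target direction while each obstacle adds a range-limited,
# distance-weighted repellor.
import numpy as np

def heading_rate(phi, psi_target, obstacles, lam_tar=2.0, lam_obs=4.0, sigma=0.4):
    """Time derivative of the heading direction phi (angles in radians).

    obstacles: list of (psi_obs, distance) pairs; closer obstacles repel more.
    """
    # Attractor contribution pulling phi toward the target direction.
    dphi = -lam_tar * np.sin(phi - psi_target)
    # Repellor contribution per obstacle, localized in angle, decaying with distance.
    for psi_obs, dist in obstacles:
        delta = phi - psi_obs
        dphi += lam_obs * delta * np.exp(-delta**2 / (2 * sigma**2)) * np.exp(-dist)
    return dphi

# Forward-Euler integration toward a target at 0.8 rad while an obstacle sits
# roughly in the initial movement direction.
phi, dt = 0.0, 0.01
obstacles = [(0.1, 0.5)]
for _ in range(1000):
    phi += dt * heading_rate(phi, psi_target=0.8, obstacles=obstacles)
print("final heading direction [rad]:", round(phi, 3))

Integrated forward in time, the heading settles near the target direction but is deflected away from the obstacle direction, which is the qualitative behavior the entry describes; the paper applies the same principle jointly to tool-point heading, tool orientation, and the redundancy angle of the elbow.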
Ioannis Iossifidis Dynamische Systeme zur Steuerung anthropomorpher Roboterarme in autonomen Robotersystemen PhD Thesis Faculty for Physics and Astronomy, Ruhr-University Bochum, 2006. Abstract | Links | BibTeX | Tags: Autonome Robotik, Bewegungsplanung, Dynamische Systeme, Kinematik, Selbstorganisation @phdthesis{Iossifidis2006c, title = {Dynamische Systeme zur Steuerung anthropomorpher Roboterarme in autonomen Robotersystemen}, author = {Ioannis Iossifidis}, url = {http://www.logos-verlag.de/cgi-bin/engbuchmid?isbn=1305&lng=deu&id=}, year = {2006}, date = {2006-01-01}, number = {ISBN: 3-8325-1305-1}, pages = {160}, publisher = {Logos Verlag Berlin}, address = {Bochum, Germany}, school = {Faculty for Physics and Astronomy, Ruhr-University Bochum}, abstract = {Das übergeordnete Forschungsgebiet, in das sich die vorliegende Arbeit einbettet, befasst sich mit der Erforschung von informationsverarbeitenden Prozessen im Gehirn und der Anwendung der resultierenden Erkenntnisse auf technische Systeme. In Analogie zu biologischen Systemen, deren Beschaffenheit aus den Anforderungen der Umwelt an ihr Verhalten resultiert, leitet sich die Anthropomorphie als Entwurfsprinzip für die Struktur der mit dem Menschen interagierenden robotischen Assistenzsysteme ab. Der Autor behandelt in der vorliegenden Arbeit das Problem der Erzeugung von Motorverhalten im dreidimensionalen Raum am Beispiel eines anthropomorphen Roboterarmes in einem anthropomorphen robotischen Assistenzsystem. Entwickelt wurde hierbei ein allgemeiner Ansatz, der die Konzepte der Erzeugung von Motorverhalten im 3D-Raum, der Voraussimulation dynamischer Systeme zur Systemdiagnose und zur Suche gewünschter Systemzustände sowie ein Konzept der Organisation von Verhalten enthält und vereinigt. Nichtlineare dynamische Systeme bilden das mathematische Fundament, die einheitliche, formale Sprache des Ansatzes, mit der sowohl das Motorverhalten des Roboters als auch dessen zeitkontinuierliche Teilsysteme rückgekoppelt werden.}, keywords = {Autonome Robotik, Bewegungsplanung, Dynamische Systeme, Kinematik, Selbstorganisation}, pubstate = {published}, tppubtype = {phdthesis} } The broader field of research in which this thesis is embedded is concerned with the study of information-processing in the brain and the application of the resulting insights to technical systems. In analogy to biological systems, whose make-up results from the demands the environment places on their behavior, anthropomorphism is derived as a design principle for the structure of robotic assistance systems that interact with humans. The author addresses the problem of generating motor behavior in three-dimensional space, using an anthropomorphic robot arm within an anthropomorphic robotic assistance system as the example. To this end, a general approach was developed that contains and unifies the concepts of generating motor behavior in 3D space, of pre-simulating dynamical systems for system diagnosis and for the search for desired system states, and a concept for the organization of behavior. Nonlinear dynamical systems form the mathematical foundation, the uniform formal language of the approach, through which both the robot's motor behavior and its continuous-time subsystems are coupled by feedback.