Bennedsen Bloom posted an update 1 year, 6 months ago
Recent decades have seen control-theoretic techniques widely leveraged across many areas, e.g., the design and analysis of computational models. A computational method can be modeled as a controller, and finding the equilibrium point of a dynamical system is equivalent to solving an algebraic equation. Thus, absorbing mature techniques from control theory and integrating them with neural dynamics models can lead to new achievements. This work advances this direction by applying control-theoretic techniques to construct new recurrent neural dynamics for solving a perturbed nonstationary quadratic program (QP) with time-varying parameters. Specifically, to overcome the limitations of existing continuous-time models in handling nonstationary problems, a discrete recurrent neural dynamics model is proposed that deals robustly with noise. This work shows how iterative computational methods for solving nonstationary QPs can be revisited, designed, and analyzed in a control framework. A modified Newton iteration model and an improved gradient-based neural dynamics are established by drawing on the structure of the presented recurrent neural dynamics; their chief advantage over traditional models is superior convergence and robustness. Numerical experiments demonstrate the merits of the proposed models in solving perturbed nonstationary QPs.

Animal welfare has become an increasingly important concern in sport. Learning horse-drawn carriage driving demands considerable time and effort from both the drivers and the horses, because the gestures needed to avoid harming the horses are difficult to acquire. This motivates the development of realistic simulation environments for future drivers. To this end, two haptic interface prototypes were designed, coupled with dedicated simulation software.
The first was based on a SPIDAR haptic device and implemented simple carriage behaviors. A user study demonstrated interest in such a simulator, which led to the design of a second prototype, built on a different architecture, to integrate more precise laws of horse behavior (such as mood) and allow subtler control of forces. An evaluation with driving learners revealed that the simulator not only produced sensations close to reality but also improved the interaction between the trainer and the learner.

Haptic information shapes our perception of object stiffness and regulates grip force. Introducing noise into sensory inputs can create uncertainty, yet a method of creating haptic uncertainty without distorting the haptic information has yet to be found. Toward this end, in this article, we investigated how varying haptic information between consecutive interactions with an elastic force field affects stiffness perception and grip force control. In a stiffness discrimination task, participants interacted with force fields multiple times. Low, medium, and high variability levels were created by drawing the stiffness applied in each consecutive interaction within a trial from normal distributions. Perceptual haptic uncertainty was created only by the medium variability level. Moreover, all the variability levels affected grip force control: the modulation of grip force with load force decreased with repeated interactions with the force field, whereas no change in the baseline grip force was observed. Additionally, we found that participants formed their perceived stiffness by computing a weighted average of the different stiffness levels applied by a given force field.
We conclude that the medium variability level can be effective in inducing uncertainty in both perception and action.

Instance segmentation is an important task in biomedical and biological image analysis. Owing to complicated background components, high variability in object appearance, numerous overlapping objects, and ambiguous object boundaries, this task remains challenging. Recently, deep learning methods have been widely employed to address these problems; they can be categorized as proposal-free or proposal-based. However, both kinds suffer from information loss, as they focus on either global-level semantic or local-level instance features. To tackle this issue, we present a Panoptic Feature Fusion Net (PFFNet) that unifies semantic and instance features. Specifically, PFFNet contains a residual attention feature fusion mechanism that incorporates the instance prediction into the semantic features, facilitating semantic contextual learning in the instance branch. A mask quality sub-branch is then designed to align each object's confidence score with the quality of its mask prediction. Furthermore, a consistency regularization mechanism is designed between the semantic segmentation tasks in the semantic and instance branches, for robust learning of both tasks. Extensive experiments demonstrate the effectiveness of the proposed PFFNet, which outperforms several state-of-the-art methods on various biomedical and biological datasets.

We address the task of jointly determining what a person is doing and where they are looking, based on the analysis of video captured by a head-worn camera. To facilitate our research, we first introduce the EGTEA Gaze+ dataset. Our dataset comes with videos, gaze tracking data, hand masks, and action annotations, thereby providing the most comprehensive benchmark for First Person Vision (FPV).
Moving beyond the dataset, we propose a novel deep model for joint gaze estimation and action recognition in FPV. Our method describes the participant's gaze as a probabilistic variable and models its distribution using stochastic units in a deep network. We then sample from these stochastic units to generate an attention map that guides the aggregation of visual features for action recognition. Evaluated on our EGTEA Gaze+ dataset, our method exceeds the state of the art by a significant margin. More importantly, we demonstrate that our model can be applied to the larger-scale FPV dataset EPIC-Kitchens even without using gaze, offering new state-of-the-art results on FPV action recognition.
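The gaze-as-attention idea in the last abstract can be illustrated in a few lines: treat gaze as a probabilistic variable, sample a spatial attention map from it, and use that map to pool visual features for the action classifier. The sketch below is a minimal NumPy illustration under stated assumptions; the Gumbel-softmax sampler, the feature-map shapes, and the function names are illustrative choices, not the authors' exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_attention(logits, temperature=0.5):
    """Sample a soft spatial attention map from gaze logits.
    Uses Gumbel-softmax sampling (an assumption here, standing in for
    the paper's stochastic units): add Gumbel(0, 1) noise to the logits,
    then take a temperature-scaled softmax over all spatial locations."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / temperature
    y = y - y.max()                      # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()                   # non-negative map that sums to 1

def attend_pool(features, attention):
    """Aggregate a (C, H, W) feature map with an (H, W) attention map
    into a C-dimensional vector by attention-weighted spatial summation."""
    return (features * attention[None]).sum(axis=(1, 2))

# Toy example: an 8-channel 7x7 feature map with gaze logits peaked
# near the centre, mimicking a fixation in the middle of the frame.
feats = rng.standard_normal((8, 7, 7))
logits = np.zeros((7, 7))
logits[3, 3] = 4.0                       # hypothetical gaze peak location
attn = sample_attention(logits)
vec = attend_pool(feats, attn)           # pooled feature vector, shape (8,)
```

Because the attention map is a proper distribution over locations, the pooled vector is a convex combination of the per-location feature columns, which is what lets a downstream classifier focus on the fixated region.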