Shogo Nagasaka

Continuous driving-behavioral data can be converted automatically into sequences of “drive topics” in natural language; for example, “gas pedal operating,” “high-speed cruise,” then “stopping and standing still with brakes on.” In regard to developing advanced driver-assistance systems (ADASs), various …
This paper presents an automatic translation method from time-series driving behavior into natural language with contextual information. Nowadays, various advanced driver-assistance systems (ADASs) have been developed to reduce the number of traffic accidents, and multiple ADASs are required to reduce accidents further. For such multiple ADASs, considering …
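The translation step described above can be illustrated with a minimal sketch: once driving behavior has been segmented into discrete chunks, each chunk label is mapped to a natural-language phrase via a template lookup. The chunk labels and phrases below are hypothetical placeholders, not the paper's actual vocabulary or method.

```python
# Hypothetical mapping from segmented driving-behavior chunk labels to
# natural-language phrases; labels and wording are illustrative only.
DRIVE_TOPIC_PHRASES = {
    "accel": "gas pedal operating",
    "cruise": "high-speed cruise",
    "brake_stop": "stopping and standing still with brakes on",
}

def verbalize(chunk_labels):
    """Join the phrase for each chunk label into one description."""
    return ", then ".join(
        DRIVE_TOPIC_PHRASES.get(label, "unknown behavior")
        for label in chunk_labels
    )

print(verbalize(["accel", "cruise", "brake_stop"]))
# → gas pedal operating, then high-speed cruise, then stopping and standing still with brakes on
```

In practice the labels would come from an unsupervised segmentation of the time-series data rather than being hand-assigned.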
In this paper, we propose a novel semiotic prediction method for driving behavior based on a double articulation structure. It has been reported that predicting driving behavior from multivariate time-series behavior data using machine-learning methods, e.g., hybrid dynamical systems, hidden Markov models, and Gaussian mixture models, is difficult because …
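One way to picture semiotic prediction over symbolized behavior is a simple bigram model: once driving behavior has been segmented into a sequence of discrete “words,” the most frequent successor of the current word is predicted. This is a deliberately simplified stand-in for the paper's method, shown only to convey the idea of predicting at the symbol level rather than on raw signals.

```python
from collections import Counter, defaultdict

def train_bigram(word_sequences):
    """Count successor frequencies for each driving 'word'."""
    counts = defaultdict(Counter)
    for seq in word_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, current_word):
    """Return the most frequent successor, or None if unseen."""
    if not counts[current_word]:
        return None
    return counts[current_word].most_common(1)[0][0]

# Illustrative symbolized drives (labels are hypothetical).
drives = [
    ["accel", "cruise", "brake_stop"],
    ["accel", "cruise", "cruise", "brake_stop"],
]
model = train_bigram(drives)
print(predict_next(model, "accel"))  # → cruise
```

A real system would use a richer sequence model, but the symbol-level framing is the same.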
In this paper, we propose an online algorithm for multimodal categorization based on the autonomously acquired multimodal information and partial words given by human users. For multimodal concept formation, multimodal latent Dirichlet allocation (MLDA) using Gibbs sampling is extended to an online version. We introduce a particle filter, which …
Humans develop their concept of an object by classifying it into a category, and acquire language by interacting with others at the same time. Thus, the meaning of a word can be learnt by connecting the recognized word and concept. We consider such an ability to be important in allowing robots to flexibly develop their knowledge of language and concepts. …
Various advanced driver assistance systems (ADASs) have recently been developed, such as Adaptive Cruise Control and Precrash Safety System. However, most ADASs can operate in only some driving situations because of the difficulty of recognizing contextual information. For closer cooperation between a driver and vehicle, the vehicle should recognize a wider …
This paper provides a novel summarization method for drive videos using driving behavior, such as driver maneuvers and vehicle reactions, recorded simultaneously alongside the video. We segmented the driving behavior into chunks in an unsupervised manner and summarized the drive videos using the chunks, i.e., the switching points of the chunks were emphasized …
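The switching-point idea above can be sketched in a few lines: given a per-frame sequence of chunk labels, a frame is kept for the summary when the label changes, optionally with a small window around each change. The window size and label names are assumptions for illustration, not parameters from the paper.

```python
def switching_points(chunk_labels):
    """Indices where the chunk label changes from the previous frame."""
    return [i for i in range(1, len(chunk_labels))
            if chunk_labels[i] != chunk_labels[i - 1]]

def summarize(frames, chunk_labels, window=1):
    """Keep frames within `window` of each switching point (hypothetical)."""
    keep = set()
    for i in switching_points(chunk_labels):
        for j in range(max(0, i - window), min(len(frames), i + window + 1)):
            keep.add(j)
    return [frames[j] for j in sorted(keep)]

labels = ["cruise", "cruise", "brake", "brake", "stop"]
print(switching_points(labels))            # → [2, 4]
print(summarize(list(range(5)), labels))   # → [1, 2, 3, 4]
```

Here `frames` is just a list of frame identifiers; in a real pipeline it would index into the recorded video.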
Time-series driving-behavioral data and image sequences captured with car-mounted video cameras can be annotated automatically in natural language, for example, “in a traffic jam,” “leading vehicle is a truck,” or “there are three or more lanes.” Various driving-support systems have recently been developed for safe …
An unsupervised learning method, called double articulation analyzer with temporal prediction (DAA-TP), is proposed on the basis of the original DAA model. The method will enable future advanced driving assistance systems to determine driving context and predict possible scenarios of driving behavior by segmenting and modeling incoming driving-behavior time …
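The double articulation structure mentioned throughout these abstracts (continuous signal → discrete “letters” → chunked “words”) can be conveyed with a toy two-level sketch: quantize a speed signal into letters by thresholding, then chunk repeated letters into words by run-length grouping. The actual DAA uses nonparametric Bayesian models for both levels; the thresholds and letter names below are purely illustrative assumptions.

```python
def to_letters(speeds, thresholds=(5.0, 40.0)):
    """Quantize speed samples (km/h) into discrete letters.
    Thresholds are illustrative, not the model's learned boundaries."""
    letters = []
    for v in speeds:
        if v < thresholds[0]:
            letters.append("s")   # standing still
        elif v < thresholds[1]:
            letters.append("l")   # low speed
        else:
            letters.append("h")   # high speed
    return letters

def to_words(letters):
    """Chunk consecutive identical letters into (letter, length) words."""
    words = []
    for ch in letters:
        if words and words[-1][0] == ch:
            words[-1] = (ch, words[-1][1] + 1)
        else:
            words.append((ch, 1))
    return words

speeds = [0.0, 2.0, 20.0, 30.0, 60.0, 55.0]
print(to_words(to_letters(speeds)))  # → [('s', 2), ('l', 2), ('h', 2)]
```

The resulting word sequence is the level at which context recognition and prediction operate in the DAA-TP framing.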