WorldWideScience

Sample records for single multimodal representation

  1. Joint sparse representation for robust multimodal biometrics recognition.

    Science.gov (United States)

    Shekhar, Sumit; Patel, Vishal M; Nasrabadi, Nasser M; Chellappa, Rama

    2014-01-01

Traditional biometric recognition systems rely on a single biometric signature for authentication. While the advantage of using multiple sources of information for establishing identity has been widely recognized, computational models for multimodal biometrics recognition have only recently received attention. We propose a multimodal sparse representation method, which represents the test data by a sparse linear combination of training data, while constraining the observations from different modalities of the test subject to share their sparse representations. Thus, we simultaneously take into account correlations as well as coupling information among biometric modalities. A multimodal quality measure is also proposed to weigh each modality as it gets fused. Furthermore, we kernelize the algorithm to handle nonlinearity in the data. The optimization problem is solved using an efficient alternating direction method. Various experiments show that the proposed method compares favorably with competing fusion-based methods.
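The shared sparse representation described here is commonly formalized with a joint-sparsity penalty; one standard formulation consistent with the abstract (the paper's exact regularizer and notation may differ) is:

```latex
\min_{C}\; \sum_{m=1}^{M} \bigl\| \mathbf{y}^{m} - D^{m} \mathbf{c}^{m} \bigr\|_{2}^{2}
\;+\; \lambda \sum_{k} \bigl\| C_{k,\cdot} \bigr\|_{2},
\qquad C = [\mathbf{c}^{1}, \dots, \mathbf{c}^{M}],
```

where $\mathbf{y}^{m}$ and $D^{m}$ are the test observation and training dictionary of modality $m$, and the row-wise $\ell_{1}/\ell_{2}$ penalty drives all modalities toward selecting the same training samples (rows of $C$), which is what couples the modalities.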

  2. A robust probabilistic collaborative representation based classification for multimodal biometrics

    Science.gov (United States)

    Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli

    2018-04-01

Most traditional biometric recognition systems perform recognition with a single biometric indicator. These systems suffer from noisy data, interclass variations, unacceptable error rates, forged identities, and so on. Because of these inherent problems, attempts to enhance the performance of unimodal biometric systems based on single features have limited prospects. Thus, multimodal biometrics is investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. For more recognizable features, the proposed method extracts block local binary pattern features for all modalities, and then combines them into a single framework. For better classification, it employs the robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves recognition accuracy compared to unimodal biometrics.
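The probabilistic collaborative representation step builds on classical collaborative representation classification (CRC): code the query over all training samples jointly with a ridge penalty, then assign the class with the smallest class-wise reconstruction residual. A minimal numpy sketch of this (non-robust, non-probabilistic) baseline, with toy data and an illustrative λ:

```python
import numpy as np

def crc_classify(X, labels, y, lam=0.01):
    """Collaborative representation classification (CRC sketch).

    X      : (d, n) training matrix, one column per sample
    labels : length-n array of class labels
    y      : (d,) query feature vector
    Returns the label whose class-specific reconstruction residual is smallest.
    """
    n = X.shape[1]
    # Ridge-regularized coding of y over ALL training samples jointly.
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # Residual when reconstructing y from class c's samples only.
        res = np.linalg.norm(y - X[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best

# Toy fused feature vectors: class 0 clusters near e1, class 1 near e2.
X = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9],
              [0.1, 0.0, 0.1, 0.0]])
labels = np.array([0, 0, 1, 1])
print(crc_classify(X, labels, np.array([0.95, 0.05, 0.05])))  # → 0
```

The robust probabilistic variant in the paper additionally weights observations; the residual-comparison structure is the same.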

  3. Multimodal representations in collaborative history learning

    NARCIS (Netherlands)

    Prangsma, M.E.

    2007-01-01

    This dissertation focuses on the question: How does making and connecting different types of multimodal representations affect the collaborative learning process and the acquisition of a chronological frame of reference in 12 to 14-year olds in pre vocational education? A chronological frame of

  4. 3D hierarchical spatial representation and memory of multimodal sensory data

    Science.gov (United States)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

    This paper describes an efficient method and system for representing, processing and understanding multi-modal sensory data. More specifically, it describes a computational method and system for how to process and remember multiple locations in multimodal sensory space (e.g., visual, auditory, somatosensory, etc.). The multimodal representation and memory is based on a biologically-inspired hierarchy of spatial representations implemented with novel analogues of real representations used in the human brain. The novelty of the work is in the computationally efficient and robust spatial representation of 3D locations in multimodal sensory space as well as an associated working memory for storage and recall of these representations at the desired level for goal-oriented action. We describe (1) A simple and efficient method for human-like hierarchical spatial representations of sensory data and how to associate, integrate and convert between these representations (head-centered coordinate system, body-centered coordinate, etc.); (2) a robust method for training and learning a mapping of points in multimodal sensory space (e.g., camera-visible object positions, location of auditory sources, etc.) to the above hierarchical spatial representations; and (3) a specification and implementation of a hierarchical spatial working memory based on the above for storage and recall at the desired level for goal-oriented action(s). This work is most useful for any machine or human-machine application that requires processing of multimodal sensory inputs, making sense of it from a spatial perspective (e.g., where is the sensory information coming from with respect to the machine and its parts) and then taking some goal-oriented action based on this spatial understanding. A multi-level spatial representation hierarchy means that heterogeneous sensory inputs (e.g., visual, auditory, somatosensory, etc.) can map onto the hierarchy at different levels. When controlling various machine
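The conversion between spatial frames the authors describe (head-centered, body-centered, etc.) is conventionally implemented with rigid homogeneous transforms; a minimal 2D sketch, with an entirely hypothetical head pose:

```python
import numpy as np

def make_transform(rotation_deg, offset):
    """2D rigid transform (rotation + translation) in homogeneous form."""
    th = np.radians(rotation_deg)
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, offset[0]],
                     [s,  c, offset[1]],
                     [0,  0, 1.0]])

# Hypothetical frames: the head sits 0.3 units above the body origin
# and is currently panned 90 degrees to the left.
head_to_body = make_transform(90.0, (0.0, 0.3))

def head_to_body_coords(p):
    """Map a head-centered sensed location (e.g. an auditory source) to the body frame."""
    x, y = p
    return (head_to_body @ np.array([x, y, 1.0]))[:2]

# A target 1 unit straight ahead of the head lies to the body's left, offset upward.
print(head_to_body_coords((1.0, 0.0)))  # ≈ [0.0, 1.3]
```

Chaining such transforms (sensor → head → body → world) gives the hierarchy of representations the abstract refers to.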

  5. Cultural Shifts, Multimodal Representations, and Assessment Practices: A Case Study

    Science.gov (United States)

    Curwood, Jen Scott

    2012-01-01

    Multimodal texts involve the presence, absence, and co-occurrence of alphabetic text with visual, audio, tactile, gestural, and spatial representations. This article explores how teachers' evaluation of students' multimodal work can be understood in terms of cognition and culture. When teachers apply a paradigm of assessment rooted in print-based…

  6. An analysis of science content and representations in introductory college physics textbooks and multimodal learning resources

    Science.gov (United States)

    Donnelly, Suzanne M.

    This study features a comparative descriptive analysis of the physics content and representations surrounding the first law of thermodynamics as presented in four widely used introductory college physics textbooks representing each of four physics textbook categories (calculus-based, algebra/trigonometry-based, conceptual, and technical/applied). Introducing and employing a newly developed theoretical framework, multimodal generative learning theory (MGLT), an analysis of the multimodal characteristics of textbook and multimedia representations of physics principles was conducted. The modal affordances of textbook representations were identified, characterized, and compared across the four physics textbook categories in the context of their support of problem-solving. Keywords: college science, science textbooks, multimodal learning theory, thermodynamics, representations

  7. Multimodal representations of gender in young children's popular culture

    Directory of Open Access Journals (Sweden)

    Fredrik Lindstrand

    2016-12-01

This article poses questions regarding learning and representation in relation to young children's popular culture. Focusing on gender, the article builds on multimodal, social semiotic analyses of two different media texts related to a specific brand and shows how gender and gender differences are represented multimodally in separate media contexts and in the interplay between different media. The results show that most of the semiotic resources employed in the different texts contribute in congruent ways to the representation of girls as either different from or inferior to boys. At the same time, however, excerpts from an encounter with a young girl who engages with characters from the brand in her role play are used as an example of how children actively make meaning and find strategies that subvert the repressive ideologies manifested in their everyday popular culture.

  8. Learning of Multimodal Representations With Random Walks on the Click Graph.

    Science.gov (United States)

    Wu, Fei; Lu, Xinyan; Song, Jun; Yan, Shuicheng; Zhang, Zhongfei Mark; Rui, Yong; Zhuang, Yueting

    2016-02-01

In multimedia information retrieval, most classic approaches tend to represent different modalities of media in the same feature space. With the click data collected from users' searching behavior, existing approaches take either one-to-one paired data (text-image pairs) or ranking examples (text-query-image and/or image-query-text ranking lists) as training examples, which do not make full use of the click data, particularly the implicit connections among the data objects. In this paper, we treat the click data as a large click graph, in which vertices are images/text queries and edges indicate the clicks between an image and a query. We consider learning a multimodal representation from the perspective of encoding the explicit/implicit relevance relationships between the vertices in the click graph. By minimizing both the truncated random walk loss and the distance between the learned representations of vertices and their corresponding deep neural network outputs, the proposed model, named multimodal random walk neural network (MRW-NN), can be applied not only to learn robust representations of the existing multimodal data in the click graph, but also to deal with unseen queries and images to support cross-modal retrieval. We evaluate the latent representation learned by MRW-NN on a public large-scale click log dataset, Clickture, and further show that MRW-NN achieves much better cross-modal retrieval performance on unseen queries/images than other state-of-the-art methods.
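A hedged sketch of the truncated random-walk step on a click graph (the graph, walk lengths, and vertex names are illustrative; the MRW-NN loss and network are not reproduced here):

```python
import random

# Toy bipartite click graph: queries <-> images, edges = recorded clicks.
click_graph = {
    "q:red car":    ["img:car1", "img:car2"],
    "q:sports car": ["img:car2"],
    "img:car1":     ["q:red car"],
    "img:car2":     ["q:red car", "q:sports car"],
}

def truncated_walks(graph, walks_per_node=4, walk_len=5, seed=0):
    """Generate short random walks; vertices co-occurring in a walk become
    training pairs that encode implicit relevance between queries and images."""
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

walks = truncated_walks(click_graph)
# Every consecutive pair in a walk follows a click edge.
print(len(walks), walks[0][:3])
```

In the actual model these walk co-occurrences define the truncated random walk loss that the learned vertex representations minimize.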

  9. Contemporary Multi-Modal Historical Representations and the Teaching of Disciplinary Understandings in History

    Science.gov (United States)

    Donnelly, Debra J.

    2018-01-01

    Traditional privileging of the printed text has been considerably eroded by rapid technological advancement and in Australia, as elsewhere, many History teaching programs feature an array of multi-modal historical representations. Research suggests that engagement with the visual and multi-modal constructs has the potential to enrich the pedagogy…

  10. PCANet-Based Structural Representation for Nonrigid Multimodal Medical Image Registration

    Directory of Open Access Journals (Sweden)

    Xingxing Zhu

    2018-05-01

Nonrigid multimodal image registration remains a challenging task in medical image processing and analysis. Structural representation (SR) based registration methods have attracted much attention recently. However, the existing SR methods cannot provide satisfactory registration accuracy due to the utilization of hand-designed features for structural representation. To address this problem, a structural representation method based on an improved version of the simple deep learning network named PCANet is proposed for medical image registration. In the proposed method, PCANet is first trained on numerous medical images to learn convolution kernels for the network. Then, a pair of input medical images to be registered is processed by the learned PCANet. The features extracted by various layers in the PCANet are fused to produce multilevel features. The structural representation images are constructed for the two input images based on nonlinear transformation of these multilevel features. The Euclidean distance between the structural representation images is calculated and used as the similarity metric. The objective function defined by the similarity metric is optimized by the L-BFGS method to obtain the parameters of the free-form deformation (FFD) model. Extensive experiments on simulated and real multimodal image datasets show that, compared with state-of-the-art registration methods such as the modality-independent neighborhood descriptor (MIND), normalized mutual information (NMI), the Weber local descriptor (WLD), and the sum of squared differences on entropy images (ESSD), the proposed method provides better registration performance in terms of target registration error (TRE) and subjective human vision.

  11. Fourier transform in multimode systems in the Bargmann representation

    International Nuclear Information System (INIS)

    Lei, C; Vourdas, A

    2007-01-01

A Fourier transform in a multimode system is studied using the Bargmann representation. The growth of a Bargmann function is shown to be related to the second-order correlation of the corresponding state. Both the total growth and the total second-order correlation remain unchanged under the Fourier transform. Examples with coherent states, squeezed states and Mittag-Leffler states are discussed.

  12. Learning Multimodal Deep Representations for Crowd Anomaly Event Detection

    Directory of Open Access Journals (Sweden)

    Shaonian Huang

    2018-01-01

Anomaly event detection in crowd scenes is extremely important; however, the majority of existing studies merely use hand-crafted features to detect anomalies. In this study, a novel unsupervised deep learning framework is proposed to detect anomaly events in crowded scenes. Specifically, low-level visual features, energy features, and motion map features are simultaneously extracted based on spatiotemporal energy measurements. Three convolutional restricted Boltzmann machines are trained to model the mid-level feature representation of normal patterns. Then a multimodal fusion scheme is utilized to learn the deep representation of crowd patterns. Based on the learned deep representation, a one-class support vector machine model is used to detect anomaly events. The proposed method is evaluated using two available public datasets and compared with state-of-the-art methods. The experimental results show its competitive performance for anomaly event detection in video surveillance.
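As an illustration of the final detection stage, here is a minimal stand-in detector on fused feature vectors. Note that it uses a Gaussian model (squared Mahalanobis distance) instead of the paper's one-class SVM, and all features are synthetic:

```python
import numpy as np

def fit_detector(normal_feats):
    """Fit a Gaussian model of normal crowd patterns (a stand-in for the
    one-class SVM used in the paper; features here are illustrative)."""
    mu = normal_feats.mean(axis=0)
    cov = np.cov(normal_feats, rowvar=False) + 1e-6 * np.eye(normal_feats.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, prec):
    """Squared Mahalanobis distance: larger = more anomalous."""
    d = x - mu
    return float(d @ prec @ d)

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 3))  # toy fused deep-feature vectors
mu, prec = fit_detector(normal)

# A pattern far from the normal cluster scores much higher than a typical one.
print(anomaly_score(np.zeros(3), mu, prec),
      anomaly_score(np.full(3, 6.0), mu, prec))
```

In the paper the score thresholding is replaced by the decision function of a one-class SVM trained on the learned deep representations.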

  13. Alcohol sensor based on single-mode-multimode-single-mode fiber structure

    Science.gov (United States)

    Mefina Yulias, R.; Hatta, A. M.; Sekartedjo, Sekartedjo

    2016-11-01

An alcohol sensor based on a single-mode–multimode–single-mode (SMS) fiber structure is proposed to sense alcohol concentration in alcohol-water mixtures. The proposed sensor uses refractive index sensing as its sensing principle. The fabricated SMS fiber structure had a multimode section length of 40 m. With an input power of -6 dBm at a wavelength of 1550 nm, the proposed sensor showed a good response, with a sensitivity of 1.983 dB per % v/v over a measurement range of 0–5% v/v and a measurement span of 0.5% v/v.

  14. Developing a ‘big picture’: Effects of collaborative construction of multimodal representations in history

    NARCIS (Netherlands)

    Prangsma, M.E.; van Boxtel, C.A.M.; Kanselaar, G.

    2008-01-01

    Many pupils have difficulties with the abstract verbal information in history lessons. In this study we assessed the value of active construction of multimodal representations of historical phenomena. In an experimental study we compared the learning outcomes of pupils who co-constructed textual

  15. User-based representation of time-resolved multimodal public transportation networks.

    Science.gov (United States)

    Alessandretti, Laura; Karsai, Márton; Gauvin, Laetitia

    2016-07-01

Multimodal transportation systems, with several coexisting services like bus, tram and metro, can be represented as time-resolved multilayer networks where the different transportation modes connecting the same set of nodes are associated with distinct network layers. Their quantitative description became possible recently due to openly accessible datasets describing the geo-localized transportation dynamics of large urban areas. These advancements call for novel analytics that combine earlier established methods and exploit the inherent complexity of the data. Here, we provide a novel user-based representation of public transportation systems, which combines representations that account for the presence of multiple lines and reduce the effect of spatial embeddedness, while considering the total travel time, its variability across the schedule, and the number of transfers necessary. After adjusting earlier techniques to the novel representation framework, we analyse the public transportation systems of several French municipal areas and identify hidden patterns of privileged connections. Furthermore, we study their efficiency as compared to the commuting flow. The proposed representation could help to enhance the resilience of local transportation systems and to provide better design policies for future developments.
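The (stop, mode) state space underlying such a multilayer representation can be sketched with a small Dijkstra search in which changing layers at a stop costs a transfer penalty; the network, travel times, and penalty below are invented for illustration:

```python
import heapq

# Multilayer edges: (stop, mode) -> list of (next_stop, mode, travel_minutes).
edges = {
    ("A", "bus"):   [("B", "bus", 10)],
    ("B", "bus"):   [("C", "bus", 12)],
    ("A", "metro"): [("C", "metro", 15)],
}
TRANSFER = 4  # minutes to switch layers (change mode) at a stop

def fastest(start_stop, goal_stop, modes=("bus", "metro")):
    """Multilayer Dijkstra: states are (stop, mode); switching modes costs TRANSFER."""
    pq = [(0, start_stop, m) for m in modes]  # boarding at the start is free here
    heapq.heapify(pq)
    best = {}
    while pq:
        t, stop, mode = heapq.heappop(pq)
        if stop == goal_stop:
            return t
        if best.get((stop, mode), float("inf")) <= t:
            continue
        best[(stop, mode)] = t
        for nxt, nmode, dt in edges.get((stop, mode), []):
            heapq.heappush(pq, (t + dt, nxt, nmode))
        for other in modes:  # transfer within the same stop
            if other != mode:
                heapq.heappush(pq, (t + TRANSFER, stop, other))
    return None

print(fastest("A", "C"))  # metro direct (15) beats bus 10 + 12
```

A user-based representation would additionally fold schedule variability into the edge weights rather than using fixed travel times.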

  16. Single versus multimodality training basic laparoscopic skills

    NARCIS (Netherlands)

    Brinkman, W.M.; Havermans, S.Y.; Buzink, S.N.; Botden, S.M.B.I.; Jakimowicz, J.J.; Schoot, B.C.

    2012-01-01

    Introduction - Even though literature provides compelling evidence of the value of simulators for training of basic laparoscopic skills, the best way to incorporate them into a surgical curriculum is unclear. This study compares the training outcome of single modality training with multimodality

  17. Modality prediction of biomedical literature images using multimodal feature representation

    Directory of Open Access Journals (Sweden)

    Pelka, Obioma

    2016-08-01

This paper presents the modelling approaches performed to automatically predict the modality of images found in biomedical literature. Various state-of-the-art visual features such as Bag-of-Keypoints computed with dense SIFT descriptors, texture features and Joint Composite Descriptors were used for visual image representation. Text representation was obtained by vector quantisation on a Bag-of-Words dictionary generated using attribute importance derived from a χ²-test. By computing the principal components separately on each feature, dimension reduction as well as computational load reduction was achieved. Various multiple feature fusions were adopted to supplement visual image information with corresponding text information. The improvement obtained when using multimodal features vs. visual or text features was detected, analysed and evaluated. Random Forest models with 100 to 500 deep trees grown by resampling, a multi-class linear-kernel SVM with C=0.05 and a late fusion of the two classifiers were used for modality prediction. A Random Forest classifier achieved a higher accuracy, and Bag-of-Keypoints computed with dense SIFT descriptors proved to be a better approach than with Lowe's SIFT.
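The late fusion of the two classifiers can be sketched as a weighted average of their class-probability vectors; the classes, scores, and weight below are illustrative:

```python
import numpy as np

def late_fusion(prob_a, prob_b, w_a=0.5):
    """Weighted late fusion of two classifiers' class-probability vectors.

    Returns the fused probability vector and the index of the winning class.
    """
    fused = w_a * np.asarray(prob_a) + (1.0 - w_a) * np.asarray(prob_b)
    return fused, int(np.argmax(fused))

# Hypothetical example: the visual model favours 'microscopy', the text model
# favours 'x-ray'; fusion resolves the disagreement by combined confidence.
classes = ["microscopy", "x-ray", "ultrasound"]
fused, idx = late_fusion([0.7, 0.2, 0.1], [0.2, 0.5, 0.3])
print(classes[idx], fused)  # microscopy wins: 0.45 vs 0.35 vs 0.2
```

More elaborate schemes weight each classifier by validation accuracy instead of using a fixed 50/50 split.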

  18. Single-pulse CARS based multimodal nonlinear optical microscope for bioimaging.

    Science.gov (United States)

    Kumar, Sunil; Kamali, Tschackad; Levitte, Jonathan M; Katz, Ori; Hermann, Boris; Werkmeister, Rene; Považay, Boris; Drexler, Wolfgang; Unterhuber, Angelika; Silberberg, Yaron

    2015-05-18

Noninvasive label-free imaging of biological systems raises demand not only for high-speed three-dimensional prescreening of morphology over a wide field of view but also for extracting the microscopic functional and molecular details within. Capitalizing on the unique advantages brought out by different nonlinear optical effects, a multimodal nonlinear optical microscope can be a powerful tool for bioimaging. Bringing together the intensity-dependent contrast mechanisms via second harmonic generation, third harmonic generation and four-wave mixing for structure-sensitive imaging, and the single-beam/single-pulse coherent anti-Stokes Raman scattering technique for chemically sensitive imaging in the fingerprint region, we have developed a simple and nearly alignment-free multimodal nonlinear optical microscope that is based on a single wide-band Ti:Sapphire femtosecond pulse laser source. Successful imaging tests have been realized on two exemplary biological samples, a canine femur bone and collagen fibrils harvested from a rat tail. Since the ultra-broadband femtosecond laser is a suitable source for performing high-resolution optical coherence tomography, a wide-field optical coherence tomography arm can easily be incorporated into the presented multimodal microscope, making it a versatile optical imaging tool for noninvasive label-free bioimaging.

  19. Multimodal Hyper-connectivity Networks for MCI Classification.

    Science.gov (United States)

    Li, Yang; Gao, Xinqiang; Jie, Biao; Yap, Pew-Thian; Kim, Min-Jeong; Wee, Chong-Yaw; Shen, Dinggang

    2017-09-01

A hyper-connectivity network is a network in which an edge can connect more than two nodes, and it can be naturally denoted using a hyper-graph. Hyper-connectivity brain networks, based on either structural or functional interactions among brain regions, have been used for brain disease diagnosis. However, the conventional hyper-connectivity network is constructed solely from single-modality data, ignoring potential complementary information conveyed by other modalities. The integration of complementary information from multiple modalities has been shown to provide a more comprehensive representation of brain disruptions. In this paper, a novel multimodal hyper-network modelling method is proposed for improving the diagnostic accuracy of mild cognitive impairment (MCI). Specifically, we first constructed a multimodal hyper-connectivity network by simultaneously considering information from diffusion tensor imaging and resting-state functional magnetic resonance imaging data. We then extracted different types of network features from the hyper-connectivity network, and further exploited a manifold-regularized multi-task feature selection method to jointly select the most discriminative features. Our proposed multimodal hyper-connectivity network demonstrated better MCI classification performance than the conventional single-modality-based hyper-connectivity networks.

  20. Multimodal semantic quantity representations: further evidence from Korean Sign Language

    Directory of Open Access Journals (Sweden)

Frank Domahs

    2012-01-01

Korean deaf signers performed a number comparison task on pairs of Arabic digits. In their RT profiles, the expected magnitude effect was systematically modified by properties of number signs in Korean Sign Language in a culture-specific way (not observed in hearing and deaf Germans or hearing Chinese). We conclude that finger-based quantity representations are automatically activated even in simple tasks with symbolic input, although this may be irrelevant and even detrimental for task performance. These finger-based numerical representations are accessed in addition to another, more basic quantity system, which is evidenced by the magnitude effect. In sum, these results are inconsistent with models assuming only one single amodal representation of numerical quantity.

  1. Efficient generation of sum-of-products representations of high-dimensional potential energy surfaces based on multimode expansions

    Science.gov (United States)

    Ziegler, Benjamin; Rauhut, Guntram

    2016-03-01

    The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
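On a direct-product grid the linear least-squares fit factorizes over modes, because (B1 ⊗ B2)ᵀ(B1 ⊗ B2) = (B1ᵀB1) ⊗ (B2ᵀB2); a toy two-mode numpy sketch of this structure (the grid, basis, and surface are illustrative choices, not the paper's setup):

```python
import numpy as np

# Toy 2-mode PES on a direct-product grid; fit V as a sum of products of
# one-dimensional polynomial basis functions (degree < 3 per mode).
q1 = np.linspace(-1, 1, 8)
q2 = np.linspace(-1, 1, 9)
V = (0.5 * q1[:, None] ** 2 + 0.3 * q2[None, :] ** 2
     + 0.1 * q1[:, None] * q2[None, :])      # separable test surface

B1 = np.vander(q1, 3, increasing=True)        # (8, 3) basis [1, q, q^2] in mode 1
B2 = np.vander(q2, 3, increasing=True)        # (9, 3) basis in mode 2

# The normal equations factorize mode-by-mode, so the sum-of-products
# coefficient matrix C comes from small per-mode systems rather than one
# large Kronecker-product design matrix.
G1 = np.linalg.inv(B1.T @ B1)
G2 = np.linalg.inv(B2.T @ B2)
C = G1 @ B1.T @ V @ B2 @ G2                   # (3, 3) coefficients

V_fit = B1 @ C @ B2.T
print(np.max(np.abs(V - V_fit)))              # exact up to round-off
```

The same factorization carries over to more modes via repeated Kronecker products, which is the scaling advantage the abstract describes.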

  2. Modeling decision-making in single- and multi-modal medical images

    Science.gov (United States)

    Canosa, R. L.; Baum, K. G.

    2009-02-01

This research introduces a mode-specific model of visual saliency that can be used to highlight likely lesion locations and potential errors (false positives and false negatives) in single-mode PET and MRI images and multi-modal fused PET/MRI images. Fused-modality digital images are a relatively recent technological improvement in medical imaging; therefore, a novel component of this research is to characterize the perceptual response to these fused images. Three different fusion techniques were compared to single-mode displays in terms of observer error rates using synthetic human brain images generated from an anthropomorphic phantom. An eye-tracking experiment was performed with naïve (non-radiologist) observers who viewed the single- and multi-modal images. The eye-tracking data allowed the errors to be classified into four categories: false positives, search errors (false negatives never fixated), recognition errors (false negatives fixated for less than 350 milliseconds), and decision errors (false negatives fixated for more than 350 milliseconds). A saliency model consisting of a set of differentially weighted low-level feature maps is derived from the known error and ground-truth locations extracted from a subset of the test images for each modality. The saliency model shows that lesion and error locations attract visual attention according to low-level image features such as color, luminance, and texture.
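The weighted feature-map combination at the core of such a saliency model can be sketched as follows (the maps, weights, and peak locations are hypothetical):

```python
import numpy as np

def saliency(feature_maps, weights):
    """Combine same-shaped per-feature maps into one saliency map.

    Each map is min-max normalized so no single feature's scale dominates,
    then the maps are summed with feature-specific weights.
    """
    total = np.zeros_like(feature_maps[0], dtype=float)
    for fmap, w in zip(feature_maps, weights):
        fmap = fmap.astype(float)
        rng = fmap.max() - fmap.min()
        if rng > 0:
            fmap = (fmap - fmap.min()) / rng
        total += w * fmap
    return total

# Toy 4x4 maps: luminance peaks at row 1, col 2; texture peaks at row 3, col 0.
lum = np.zeros((4, 4)); lum[1, 2] = 5.0
tex = np.zeros((4, 4)); tex[3, 0] = 2.0
s = saliency([lum, tex], weights=[0.7, 0.3])
peak = np.unravel_index(np.argmax(s), s.shape)
print(peak)  # the higher-weighted luminance peak (row 1, col 2) wins
```

In the study the weights are fit per modality from the observed error and ground-truth locations, rather than chosen by hand as here.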

  3. A multimodal parallel architecture: A cognitive framework for multimodal interactions.

    Science.gov (United States)

    Cohn, Neil

    2016-01-01

Human communication is naturally multimodal, and substantial focus has examined the semantic correspondences in speech-gesture and text-image relationships. However, visual narratives, like those in comics, provide an interesting challenge to multimodal communication because the words and/or images can guide the overall meaning, and both modalities can appear in complicated "grammatical" sequences: sentences use a syntactic structure and sequential images use a narrative structure. These dual structures create complexity beyond those typically addressed by theories of multimodality where only a single form uses combinatorial structure, and also pose challenges for models of the linguistic system that focus on single modalities. This paper outlines a broad theoretical framework for multimodal interactions by expanding on Jackendoff's (2002) parallel architecture for language. Multimodal interactions are characterized in terms of their component cognitive structures: whether a particular modality (verbal, bodily, visual) is present, whether it uses a grammatical structure (syntax, narrative), and whether it "dominates" the semantics of the overall expression. Altogether, this approach integrates multimodal interactions into an existing framework of language and cognition, and characterizes interactions between varying complexity in the verbal, bodily, and graphic domains. The resulting theoretical model presents an expanded consideration of the boundaries of the "linguistic" system and its involvement in multimodal interactions, with a framework that can benefit research on corpus analyses, experimentation, and the educational benefits of multimodality.

  4. Integration of sparse multi-modality representation and geometrical constraint for isointense infant brain segmentation.

    Science.gov (United States)

    Wang, Li; Shi, Feng; Li, Gang; Lin, Weili; Gilmore, John H; Shen, Dinggang

    2013-01-01

    Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination process. During the first year of life, the signal contrast between white matter (WM) and gray matter (GM) in MR images undergoes inverse changes. In particular, the inversion of WM/GM signal contrast appears around 6-8 months of age, where brain tissues appear isointense and hence exhibit extremely low tissue contrast, posing significant challenges for automated segmentation. In this paper, we propose a novel segmentation method to address the above-mentioned challenge based on the sparse representation of the complementary tissue distribution information from T1, T2 and diffusion-weighted images. Specifically, we first derive an initial segmentation from a library of aligned multi-modality images with ground-truth segmentations by using sparse representation in a patch-based fashion. The segmentation is further refined by the integration of the geometrical constraint information. The proposed method was evaluated on 22 6-month-old training subjects using leave-one-out cross-validation, as well as 10 additional infant testing subjects, showing superior results in comparison to other state-of-the-art methods.

  5. Mediating multimodal environmental knowledge across animation techniques

    DEFF Research Database (Denmark)

    Maier, Carmen Daniela

    2011-01-01

The growing awareness of and concern about present environmental problems generates a proliferation of new forms of environmental discourses that are mediated in various ways. This chapter explores issues related to the ways in which environmental knowledge is multimodally communicated…://www.sustainlane.com/. The multimodal discourse analysis is meant to reveal how the selection and representation of environmental knowledge about social actors, social actions, resources, time and space are influenced by animation techniques. Furthermore, in the context of this multimodal discourse analysis, their influence upon…

  6. Reading Multimodal Texts for Learning – a Model for Cultivating Multimodal Literacy

    Directory of Open Access Journals (Sweden)

    Kristina Danielsson

    2016-08-01

The re-conceptualisation of texts over the last 20 years, as well as the development of a multimodal understanding of communication and representation of knowledge, has profound consequences for the reading and understanding of multimodal texts, not least in educational contexts. However, if teachers and students are given tools to "unwrap" multimodal texts, they can develop a deeper understanding of texts, information structures, and the textual organisation of knowledge. This article presents a model for working with multimodal texts in education with the intention to highlight mutual multimodal text analysis in relation to the subject content. Examples are taken from a Singaporean science textbook as well as a Chilean science textbook, in order to demonstrate that the framework is versatile and applicable across different cultural contexts. The model takes into account the following aspects of texts: the general structure, how different semiotic resources operate, the ways in which different resources are combined (including coherence), the use of figurative language, and explicit/implicit values. Since learning operates on different dimensions – such as social and affective dimensions besides the cognitive ones – our inclusion of figurative language and values as components for textual analysis is a contribution to multimodal text analysis for learning.

  7. Multimodal sequence learning.

    Science.gov (United States)

    Kemény, Ferenc; Meier, Beat

    2016-02-01

While sequence learning research models complex phenomena, previous studies have mostly focused on unimodal sequences. The goal of the current experiment is to put implicit sequence learning into a multimodal context: to test whether it can operate across different modalities. We used the Task Sequence Learning paradigm to test whether sequence learning varies across modalities, and whether participants are able to learn multimodal sequences. Our results show that implicit sequence learning is very similar regardless of the source modality. However, the presence of correlated task and response sequences was required for learning to take place. The experiment provides new evidence for implicit sequence learning of abstract conceptual representations. In general, the results suggest that correlated sequences are necessary for implicit sequence learning to occur. Moreover, they show that elements from different modalities can be automatically integrated into one unitary multimodal sequence.

  8. Multimodal Representations: A Fifth-Grade Teacher Influences Students' Design and Production

    Science.gov (United States)

    Shanahan, Lynn E.

    2013-01-01

    The purpose of this interpretive case study is to explore--through a close analysis of one fifth-grade class project--teacher's scaffolding and students' use of visual and linguistic modes when composing multimodally. Using Kress and van Leeuwen's multimodal theory of communication as a framework, this case study examines why teachers, whose…

  9. Unified double- and single-sided homogeneous Green's function representations

    Science.gov (United States)

    Wapenaar, Kees; van der Neut, Joost; Slob, Evert

    2016-06-01

    In wave theory, the homogeneous Green's function consists of the impulse response to a point source, minus its time-reversal. It can be represented by a closed boundary integral. In many practical situations, the closed boundary integral needs to be approximated by an open boundary integral because the medium of interest is often accessible from one side only. The inherent approximations are acceptable as long as the effects of multiple scattering are negligible. However, in case of strongly inhomogeneous media, the effects of multiple scattering can be severe. We derive double- and single-sided homogeneous Green's function representations. The single-sided representation applies to situations where the medium can be accessed from one side only. It correctly handles multiple scattering. It employs a focusing function instead of the backward propagating Green's function in the classical (double-sided) representation. When reflection measurements are available at the accessible boundary of the medium, the focusing function can be retrieved from these measurements. Throughout the paper, we use a unified notation which applies to acoustic, quantum-mechanical, electromagnetic and elastodynamic waves. We foresee many interesting applications of the unified single-sided homogeneous Green's function representation in holographic imaging and inverse scattering, time-reversed wave field propagation and interferometric Green's function retrieval.
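
    For orientation, the definition in the first sentence can be written out explicitly (standard notation, stated here for the reader rather than quoted from the paper): with G(x, x_s, t) the causal impulse response,

```latex
G_h(\mathbf{x},\mathbf{x}_s,t) = G(\mathbf{x},\mathbf{x}_s,t) - G(\mathbf{x},\mathbf{x}_s,-t)
\quad\Longleftrightarrow\quad
G_h(\mathbf{x},\mathbf{x}_s,\omega) = G(\mathbf{x},\mathbf{x}_s,\omega) - G^{*}(\mathbf{x},\mathbf{x}_s,\omega)
= 2j\,\Im\{G(\mathbf{x},\mathbf{x}_s,\omega)\},
```

    since time-reversal in the time domain corresponds to complex conjugation in the frequency domain. The cancellation of the source singularity between the two terms is what admits a boundary-integral (closed or, here, single-sided) representation of G_h.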

  10. A Novel Multimodal Biometrics Recognition Model Based on Stacked ELM and CCA Methods

    Directory of Open Access Journals (Sweden)

    Jucheng Yang

    2018-04-01

    Multimodal biometrics combine a variety of biological features to have a significant impact on identification performance, which is a newly developed trend in biometrics identification technology. This study proposes a novel multimodal biometrics recognition model based on the stacked extreme learning machines (ELMs) and canonical correlation analysis (CCA) methods. The model, which has a symmetric structure, is found to have high potential for multimodal biometrics. The model works as follows. First, it learns the hidden-layer representation of biological images using extreme learning machines layer by layer. Second, the canonical correlation analysis method is applied to map the representation to a feature space, which is used to reconstruct the multimodal image feature representation. Third, the reconstructed features are used as the input of a classifier for supervised training and output. To verify the validity and efficiency of the method, we adopt it for new hybrid datasets obtained from typical face image datasets and finger-vein image datasets. Our experimental results demonstrate that our model performs better than traditional methods.

  11. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  12. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  13. Unified double- and single-sided homogeneous Green’s function representations

    Science.gov (United States)

    van der Neut, Joost; Slob, Evert

    2016-01-01

    In wave theory, the homogeneous Green’s function consists of the impulse response to a point source, minus its time-reversal. It can be represented by a closed boundary integral. In many practical situations, the closed boundary integral needs to be approximated by an open boundary integral because the medium of interest is often accessible from one side only. The inherent approximations are acceptable as long as the effects of multiple scattering are negligible. However, in case of strongly inhomogeneous media, the effects of multiple scattering can be severe. We derive double- and single-sided homogeneous Green’s function representations. The single-sided representation applies to situations where the medium can be accessed from one side only. It correctly handles multiple scattering. It employs a focusing function instead of the backward propagating Green’s function in the classical (double-sided) representation. When reflection measurements are available at the accessible boundary of the medium, the focusing function can be retrieved from these measurements. Throughout the paper, we use a unified notation which applies to acoustic, quantum-mechanical, electromagnetic and elastodynamic waves. We foresee many interesting applications of the unified single-sided homogeneous Green’s function representation in holographic imaging and inverse scattering, time-reversed wave field propagation and interferometric Green’s function retrieval. PMID:27436983

  14. Multimode-singlemode-multimode fiber sensor for alcohol sensing application

    Science.gov (United States)

    Rofi'ah, Iftihatur; Hatta, A. M.; Sekartedjo, Sekartedjo

    2016-11-01

    Alcohol is a volatile, flammable liquid, soluble in both polar and non-polar substances, that is used in several industrial sectors. Among the alcohol detection methods now in wide use is the optical fiber sensor. In this paper, a fiber-optic sensor based on a Multimode-Singlemode-Multimode (MSM) structure is used to detect alcohol solutions in the concentration range 0-3%. The working principle of the sensor exploits modal interference between the core modes and the cladding modes, which makes the sensor sensitive to environmental changes. The results showed that the sensor characteristic is not affected by the length of the single-mode fiber (SMF); a sensor with a 5 mm single-mode section can sense alcohol with a sensitivity of 0.107 dB/v%.

  15. Predicting standard-dose PET image from low-dose PET and multimodal MR images using mapping-based sparse representation

    International Nuclear Information System (INIS)

    Wang, Yan; Zhou, Jiliu; Zhang, Pei; An, Le; Ma, Guangkai; Kang, Jiayin; Shi, Feng; Shen, Dinggang; Wu, Xi; Lalush, David S; Lin, Weili

    2016-01-01

    Positron emission tomography (PET) has been widely used in clinical diagnosis for diseases and disorders. To obtain high-quality PET images requires a standard-dose radionuclide (tracer) injection into the human body, which inevitably increases the risk of radiation exposure. One possible solution to this problem is to predict the standard-dose PET image from its low-dose counterpart and its corresponding multimodal magnetic resonance (MR) images. Inspired by the success of patch-based sparse representation (SR) in super-resolution image reconstruction, we propose a mapping-based SR (m-SR) framework for standard-dose PET image prediction. Compared with the conventional patch-based SR, our method uses a mapping strategy to ensure that the sparse coefficients, estimated from the multimodal MR images and low-dose PET image, can be applied directly to the prediction of the standard-dose PET image. As the mapping between multimodal MR images (or low-dose PET image) and standard-dose PET images can be particularly complex, one step of mapping is often insufficient. To this end, an incremental refinement framework is therefore proposed. Specifically, the predicted standard-dose PET image is further mapped to the target standard-dose PET image, and then the SR is performed again to predict a new standard-dose PET image. This procedure can be repeated over multiple iterations to progressively refine the prediction. Also, a patch-selection-based dictionary construction method is further used to speed up the prediction process. The proposed method is validated on a human brain dataset. The experimental results show that our method can outperform benchmark methods in both qualitative and quantitative measures. (paper)
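
    The core mapping idea can be sketched with coupled dictionaries: sparse codes are estimated against a low-dose dictionary and then reused, unchanged, with the paired standard-dose dictionary. The synthetic atoms, patch size, and OMP solver below are illustrative assumptions, not the paper's actual m-SR implementation (which adds the incremental refinement and patch selection steps):

```python
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(1)

# Hypothetical coupled dictionaries built from registered training patches:
# each low-dose atom is paired with the standard-dose atom of the same patch
# (multimodal MR features could be stacked onto the low-dose side likewise).
n_atoms, patch_dim = 60, 64                       # e.g. 8x8 patches
D_std = rng.standard_normal((n_atoms, patch_dim))
D_std /= np.linalg.norm(D_std, axis=1, keepdims=True)
D_low = D_std + 0.05 * rng.standard_normal(D_std.shape)
D_low /= np.linalg.norm(D_low, axis=1, keepdims=True)

# A low-dose test patch that is a sparse combination of low-dose atoms.
true_codes = np.zeros(n_atoms)
true_codes[[3, 17, 42]] = [1.0, -0.5, 0.8]
patch_low = true_codes @ D_low

# The mapping step: estimate sparse codes against the low-dose dictionary,
# then apply those same codes to the standard-dose dictionary.
codes = sparse_encode(patch_low[None, :], D_low,
                      algorithm="omp", n_nonzero_coefs=5)
patch_std_pred = codes @ D_std                    # predicted standard-dose patch
```

    In the full framework this per-patch prediction would be repeated over the image and then refined iteratively, as the abstract describes.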

  16. Watt-level widely tunable single-mode emission by injection-locking of a multimode Fabry-Perot quantum cascade laser

    Science.gov (United States)

    Chevalier, Paul; Piccardo, Marco; Anand, Sajant; Mejia, Enrique A.; Wang, Yongrui; Mansuripur, Tobias S.; Xie, Feng; Lascola, Kevin; Belyanin, Alexey; Capasso, Federico

    2018-02-01

    Free-running Fabry-Perot lasers normally operate in a single-mode regime until the pumping current is increased beyond the single-mode instability threshold, above which they evolve into a multimode state. As a result of this instability, the single-mode operation of these lasers is typically constrained to a few percent of their output power range, an undesired limitation in spectroscopy applications. In order to expand the span of single-mode operation, we use an optical injection seed generated by an external-cavity single-mode laser source to force the Fabry-Perot quantum cascade laser into a single-mode state in the high current range, where it would otherwise operate in a multimode regime. Utilizing this approach, we achieve single-mode emission at room temperature with a tuning range of 36 cm⁻¹ and stable continuous-wave output power exceeding 1 W at 4.5 μm. Far-field measurements show that a single transverse mode is emitted up to the highest optical power, indicating that the beam properties of the seeded Fabry-Perot laser remain unchanged as compared to free-running operation.

  17. A Multimodal Communication Aid for Global Aphasia Patients

    DEFF Research Database (Denmark)

    Pedersen, Jakob Schou; Dalsgaard, Paul; Lindberg, Børge

    2004-01-01

    This paper presents the basic rationale behind the development and testing of a multimodal communication aid especially designed for people suffering from global aphasia, and thus having severe expressive difficulties. The principle of the aid is to trigger patient associations by presenting...... various multimodal representations of communicative expressions. The aid can in this way be seen as a conceptual continuation of previous research within the field of communication aids based on uni-modal (pictorial) representations of communicative expressions. As patients suffering from global aphasia...... expressions can be used to support patients with global aphasia in communicating by means of short sentences with their surroundings. Only a limited evaluation is carried out, and as such no statistically significant results are obtained. The tests however indicate that the aid is capable of supporting...

  18. The semantic representation of event information depends on the cue modality: an instance of meaning-based retrieval.

    Science.gov (United States)

    Karlsson, Kristina; Sikström, Sverker; Willander, Johan

    2013-01-01

    The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.

  19. The semantic representation of event information depends on the cue modality: an instance of meaning-based retrieval.

    Directory of Open Access Journals (Sweden)

    Kristina Karlsson

    The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.

  20. Drug-related webpages classification based on multi-modal local decision fusion

    Science.gov (United States)

    Hu, Ruiguang; Su, Xiaojing; Liu, Yanxin

    2018-03-01

    In this paper, multi-modal local decision fusion is used for drug-related webpages classification. First, meaningful text is extracted through HTML parsing, and effective images are chosen by the FOCARSS algorithm. Second, six SVM classifiers are trained for six kinds of drug-taking instruments, which are represented by PHOG. One SVM classifier is trained for cannabis, which is represented by the mid-level feature of the BOW model. For each instance in a webpage, seven SVMs give seven labels for its image, and another seven labels are given by searching the names of drug-taking instruments and cannabis in its related text. Concatenating the seven image labels and the seven text labels generates the representation of the instances in a webpage. Last, Multi-Instance Learning is used to classify those drug-related webpages. Experimental results demonstrate that the classification accuracy of multi-instance learning with multi-modal local decision fusion is much higher than that of single-modal classification.
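
    A minimal sketch of the final stage, under the assumption (not spelled out in the abstract) that a simple max-pooling reduction stands in for the paper's Multi-Instance Learning step; the bag generator and the 14-dimensional binary instance vectors are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Each instance in a webpage is a 14-dim binary vector: the 7 labels from
# the image SVMs concatenated with the 7 labels from the text search.
def make_bag(drug_related):
    n = int(rng.integers(2, 6))            # instances per webpage
    p = 0.6 if drug_related else 0.05      # chance a local decision fires
    return rng.binomial(1, p, size=(n, 14))

bags = [make_bag(k % 2 == 1) for k in range(40)]
labels = np.array([k % 2 for k in range(40)])

# A simple multi-instance reduction: represent each bag (webpage) by
# max-pooling its instances, then train an ordinary classifier on the
# pooled vectors.
X = np.vstack([bag.max(axis=0) for bag in bags])
clf = LogisticRegression().fit(X, labels)
```

    The point of the fusion is visible here: a webpage is flagged if any of its local image or text decisions fire consistently, which a per-instance single-modal classifier would miss.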

  1. 107.5 Gb/s 850 nm multi- and single-mode VCSEL transmission over 10 and 100 m of multi-mode fiber

    DEFF Research Database (Denmark)

    Puerta Ramírez, Rafael; Agustin, M.; Chorchos, L.

    2016-01-01

    First time successful 107.5 Gb/s MultiCAP 850 nm OM4 MMF transmissions over 10 m with multi-mode VCSEL and up to 100 m with single-mode VCSEL are demonstrated, with BER below 7% overhead FEC limit measured for each case.

  2. Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI

    Science.gov (United States)

    Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant

    2014-03-01

    Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow for mapping of intensities into a space or representation such that the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow for very different looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. 
In this work, SERg is implemented using Demons
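
    Steps 1-3 of the pipeline can be sketched as follows; the particular texture maps, component counts, and neighborhood size are illustrative choices (not the paper's), and the final Demons registration step is omitted:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.manifold import SpectralEmbedding

rng = np.random.default_rng(3)

# Stand-in for one modality: a small image and a stack of per-pixel
# statistical texture features (step 1).
img = rng.random((32, 32))
feats = np.stack([
    img,
    np.abs(np.gradient(img, axis=0)),
    np.abs(np.gradient(img, axis=1)),
    (img - img.mean()) ** 2,
], axis=-1).reshape(-1, 4)            # (n_pixels, n_texture_features)

# Step 2: ICA removes redundancy among the texture maps.
ica = FastICA(n_components=3, random_state=0)
indep = ica.fit_transform(feats)

# Step 3: spectral embedding of the independent components; the leading
# eigenvectors form the alternate representation fed to registration.
emb = SpectralEmbedding(n_components=2, n_neighbors=10)
rep = emb.fit_transform(indep)        # (n_pixels, 2)
```

    Running the same pipeline on both the template and target modalities yields embedded images that look more alike, which is what lets a standard registration algorithm align them.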

  3. Naming Block Structures: A Multimodal Approach

    Science.gov (United States)

    Cohen, Lynn; Uhry, Joanna

    2011-01-01

    This study describes symbolic representation in block play in a culturally diverse suburban preschool classroom. Block play is "multimodal" and can allow children to experiment with materials to represent the world in many forms of literacy. Combined qualitative and quantitative data from seventy-seven block structures were collected and analyzed.…

  4. Towards a universal representation for audio information retrieval and analysis

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Troelsgaard, Rasmus; Larsen, Jan

    2013-01-01

    A fundamental and general representation of audio and music which integrates multi-modal data sources is important for both application and basic research purposes. In this paper we address this challenge by proposing a multi-modal version of the Latent Dirichlet Allocation model which provides a...

  5. Multimodality Registration without a Dedicated Multimodality Scanner

    Directory of Open Access Journals (Sweden)

    Bradley J. Beattie

    2007-03-01

    Multimodality scanners that allow the acquisition of both functional and structural image sets on a single system have recently become available for animal research use. Although the resultant registered functional/structural image sets can greatly enhance the interpretability of the functional data, the cost of multimodality systems can be prohibitive, and they are often limited to two modalities, which generally do not include magnetic resonance imaging. Using a thin plastic wrap to immobilize and fix a mouse or other small animal atop a removable bed, we are able to calculate registrations between all combinations of four different small animal imaging scanners (positron emission tomography, single-photon emission computed tomography, magnetic resonance, and computed tomography [CT]) at our disposal, effectively equivalent to a quadruple-modality scanner. A comparison of serially acquired CT images, with intervening acquisitions on other scanners, demonstrates the ability of the proposed procedures to maintain the rigidity of an anesthetized mouse during transport between scanners. Movement of the bony structures of the mouse was estimated to be 0.62 mm. Soft tissue movement was predominantly the result of the filling (or emptying) of the urinary bladder and thus largely constrained to this region. Phantom studies estimate the registration errors for all registration types to be less than 0.5 mm. Functional images using tracers targeted to known structures verify the accuracy of the functional to structural registrations. The procedures are easy to perform and produce robust and accurate results that rival those of dedicated multimodality scanners, but with more flexible registration combinations and while avoiding the expense and redundancy of multimodality systems.

  6. Approaching Athenian Graffiti as a Multimodal Genre with GIS Application

    OpenAIRE

    Stampoulidis, Georgios

    2017-01-01

    Graffiti as an ever-changing form of urban art and visual communication is naturally multimodal, focusing on text–image relations (Bateman 2014; Forceville 2008; Kress 2006), which owe their existence mainly to the sociocultural and historical knowledge of the represented world of our experience – Husserlian Lebenswelt [Lifeworld] (Sonesson 2008; 2015). These relations constitute an interesting challenge to multimodal interpretations, because both verbal and/or pictorial representations can i...

  7. A single-sided representation for the homogeneous Green's function of a unified scalar wave equation.

    Science.gov (United States)

    Wapenaar, Kees

    2017-06-01

    A unified scalar wave equation is formulated, which covers three-dimensional (3D) acoustic waves, 2D horizontally-polarised shear waves, 2D transverse-electric EM waves, 2D transverse-magnetic EM waves, 3D quantum-mechanical waves and 2D flexural waves. The homogeneous Green's function of this wave equation is a combination of the causal Green's function and its time-reversal, such that their singularities at the source position cancel each other. A classical representation expresses this homogeneous Green's function as a closed boundary integral. This representation finds applications in holographic imaging, time-reversed wave propagation and Green's function retrieval by cross correlation. The main drawback of the classical representation in those applications is that it requires access to a closed boundary around the medium of interest, whereas in many practical situations the medium can be accessed from one side only. Therefore, a single-sided representation is derived for the homogeneous Green's function of the unified scalar wave equation. Like the classical representation, this single-sided representation fully accounts for multiple scattering. The single-sided representation has the same applications as the classical representation, but unlike the classical representation it is applicable in situations where the medium of interest is accessible from one side only.

  8. Recent developments in multimodality fluorescence imaging probes

    Directory of Open Access Journals (Sweden)

    Jianhong Zhao

    2018-05-01

    Multimodality optical imaging probes have emerged as powerful tools that improve detection sensitivity and accuracy, important in disease diagnosis and treatment. In this review, we focus on recent developments in optical fluorescence imaging (OFI) probe integration with other imaging modalities such as X-ray computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), single-photon emission computed tomography (SPECT), and photoacoustic imaging (PAI). The imaging technologies are briefly described in order to introduce the strengths and limitations of each technique and the need for further multimodality optical imaging probe development. The emphasis of this account is placed on how design strategies are currently implemented to afford physicochemically and biologically compatible multimodality optical fluorescence imaging probes. We also present studies that overcame intrinsic disadvantages of each imaging technique by a multimodality approach with improved detection sensitivity and accuracy. KEY WORDS: Optical imaging, Fluorescence, Multimodality, Near-infrared fluorescence, Nanoprobe, Computed tomography, Magnetic resonance imaging, Positron emission tomography, Single-photon emission computed tomography, Photoacoustic imaging

  9. Inorganic Nanoparticles for Multimodal Molecular Imaging

    Directory of Open Access Journals (Sweden)

    Magdalena Swierczewska

    2011-01-01

    Multimodal molecular imaging can offer a synergistic improvement of diagnostic ability over a single imaging modality. Recent development of hybrid imaging systems has profoundly impacted the pool of available multimodal imaging probes. In particular, much interest has been focused on biocompatible, inorganic nanoparticle-based multimodal probes. Inorganic nanoparticles offer exceptional advantages to the field of multimodal imaging owing to their unique characteristics, such as nanometer dimensions, tunable imaging properties, and multifunctionality. Nanoparticles mainly based on iron oxide, quantum dots, gold, and silica have been applied to various imaging modalities to characterize and image specific biologic processes on a molecular level. A combination of nanoparticles and other materials such as biomolecules, polymers, and radiometals continue to increase functionality for in vivo multimodal imaging and therapeutic agents. In this review, we discuss the unique concepts, characteristics, and applications of the various multimodal imaging probes based on inorganic nanoparticles.

  10. Single-mode operation of a coiled multimode fiber amplifier

    International Nuclear Information System (INIS)

    Koplow, Jeffrey P.; Kliner, Dahv A. V.; Goldberg, Lew

    2000-01-01

    We report a new approach to obtaining single-transverse-mode operation of a multimode fiber amplifier in which the gain fiber is coiled to induce significant bend loss for all but the lowest-order mode. We demonstrated this method by constructing a coiled amplifier using Yb-doped, double-clad fiber with a core diameter of 25 μm and a numerical aperture of ∼0.1 (V≅7.4). When the amplifier was operated as an amplified-spontaneous-emission source, the output beam had an M² value of 1.09±0.09; when seeded at 1064 nm, the slope efficiency was similar to that of an uncoiled amplifier. This technique will permit scaling of pulsed fiber lasers and amplifiers to significantly higher pulse energies and peak powers and cw fiber sources to higher average powers while maintaining excellent beam quality. (c) 2000 Optical Society of America
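
    The quoted normalized frequency V≅7.4 is consistent with the standard V-number formula, using the core radius a = 12.5 μm (half the 25 μm diameter) and the 1064 nm seed wavelength mentioned in the abstract:

```latex
V = \frac{2\pi a}{\lambda}\,\mathrm{NA}
  \approx \frac{2\pi \times 12.5\,\mu\mathrm{m} \times 0.1}{1.064\,\mu\mathrm{m}}
  \approx 7.4
```

    Since V is well above the single-mode cutoff of 2.405, the straight core guides several transverse modes, which is why coiling-induced bend loss is needed to suppress all but the fundamental one.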

  11. Multimodal label-free microscopy

    Directory of Open Access Journals (Sweden)

    Nicolas Pavillon

    2014-09-01

    This paper reviews the different multimodal applications based on a large extent of label-free imaging modalities, ranging from linear to nonlinear optics, while also including spectroscopic measurements. We put specific emphasis on multimodal measurements going across the usual boundaries between imaging modalities, whereas most multimodal platforms combine techniques based on similar light interactions or similar hardware implementations. In this review, we limit the scope to focus on applications for biology such as live cells or tissues, since by their nature of being alive or fragile, we are often not free to take liberties with the image acquisition times and are forced to gather the maximum amount of information possible at one time. For such samples, imaging by a given label-free method usually presents a challenge in obtaining sufficient optical signal or is limited in terms of the types of observable targets. Multimodal imaging is then particularly attractive for these samples in order to maximize the amount of measured information. While multimodal imaging is always useful in the sense of acquiring additional information from additional modes, at times it is possible to attain information that could not be discovered using any single mode alone, which is the essence of the progress that is possible using a multimodal approach.

  12. Bidirectional Joint Representation Learning with Symmetrical Deep Neural Networks for Multimodal and Crossmodal Applications

    OpenAIRE

    Vukotic , Vedran; Raymond , Christian; Gravier , Guillaume

    2016-01-01

    Common approaches to problems involving multiple modalities (classification, retrieval, hyperlinking, etc.) are early fusion of the initial modalities and crossmodal translation from one modality to the other. Recently, deep neural networks, especially deep autoencoders, have proven promising both for crossmodal translation and for early fusion via multimodal embedding. In this work, we propose a flexible cross-modal deep neural network architecture for multimodal and ...

  13. Using digital technologies to enhance chemistry students' understanding and representational skills

    DEFF Research Database (Denmark)

    Hilton, Annette

    Chemistry students need to understand chemistry on molecular, symbolic and macroscopic levels. Students find it difficult to use representations on these three levels to interpret and explain data. One approach is to encourage students to use writing-to-learn strategies in inquiry settings...... to present and interpret their laboratory results. This paper describes findings from a study on the effects on students’ learning outcomes of creating multimodal texts to report on laboratory inquiries. The study involved two senior secondary school chemistry classes (n = 22, n = 27). Both classes completed...... representations to make explanations on the molecular level. Student interviews and classroom video-recordings suggested that using digital resources to create multimodal texts promoted knowledge transformation and hence deeper reflection on the meaning of data and representations. The study has implications...

  14. Flavor unifying schemes with a single fermionic representation

    International Nuclear Information System (INIS)

    Davidson, A.; Wali, K.C.

    1980-05-01

    If quarks and leptons are indeed elementary, it is natural that they belong to a single representation of a unifying group G. It is shown that such a requirement, which is inconsistent with G = SU(N), can be satisfied within the semi-simple group G = SU(N) x SU(N). Furthermore, N = 7 emerges as the unique solution, accompanied by a fermionic set that exhibits a natural generation structure.

  15. Diffusion Maps for Multimodal Registration

    Directory of Open Access Journals (Sweden)

    Gemma Piella

    2014-06-01

    Multimodal image registration is a difficult task due to the significant intensity variations between the images. A common approach is to use sophisticated similarity measures, such as mutual information, that are robust to those intensity variations. However, these similarity measures are computationally expensive and, moreover, often fail to capture the geometry and the associated dynamics linked with the images. Another approach is to transform the images into a common space where modalities can be directly compared. Within this approach, we propose to register multimodal images by using diffusion maps to describe the geometric and spectral properties of the data. Through diffusion maps, the multimodal data is transformed into a new set of canonical coordinates that reflect its geometry uniformly across modalities, so that meaningful correspondences can be established between them. Images in this new representation can then be registered using a simple Euclidean distance as the similarity measure. Registration accuracy was evaluated on both real and simulated brain images with known ground truth for both rigid and non-rigid registration. Results showed that the proposed approach achieved higher accuracy than the conventional approach using mutual information.
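    The diffusion-map construction the abstract relies on can be sketched in a few lines of NumPy: build a Gaussian affinity kernel over the samples, row-normalize it into a Markov matrix, and use its leading non-trivial eigenvectors (scaled by their eigenvalues) as the new coordinates in which plain Euclidean distance becomes meaningful. This is a generic illustration, not the authors' implementation; the kernel width `sigma` and the number of coordinates are placeholder choices.

    ```python
    import numpy as np

    def diffusion_map(X, sigma=1.0, n_coords=2, t=1):
        """Embed samples X (n x d) into diffusion coordinates."""
        # Pairwise squared distances -> Gaussian affinity kernel
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / (2 * sigma ** 2))
        # Row-normalize into a Markov transition matrix
        P = K / K.sum(axis=1, keepdims=True)
        # Eigendecompose; the leading eigenvector is trivial (constant)
        vals, vecs = np.linalg.eig(P)
        order = np.argsort(-vals.real)
        vals, vecs = vals.real[order], vecs.real[:, order]
        # Diffusion coordinates: eigenvectors scaled by eigenvalues**t
        return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1] ** t

    rng = np.random.default_rng(0)
    X = rng.normal(size=(30, 3))
    coords = diffusion_map(X)
    print(coords.shape)  # (30, 2)
    ```

    Two modalities embedded this way can then be matched by nearest neighbours in the shared coordinate space.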

  16. Mirror representations innate versus determined by experience: a viewpoint from learning theory.

    Science.gov (United States)

    Giese, Martin A

    2014-04-01

    From the viewpoint of pattern recognition and computational learning, mirror neurons form an interesting multimodal representation that links action perception and planning. While it seems unlikely that all details of such representations are specified by the genetic code, robust learning of such complex representations likely requires an appropriate interplay between plasticity, generalization, and anatomical constraints of the underlying neural architecture.

  17. Online multi-modal robust non-negative dictionary learning for visual tracking.

    Science.gov (United States)

    Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang

    2015-01-01

    Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to guarantee their respective non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking, both quantitatively and qualitatively.
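    The multiplicative update rules mentioned in the abstract are the standard device for enforcing non-negativity in dictionary learning: because each update multiplies an entry by a non-negative ratio, signs never flip. The sketch below shows only that core (a plain non-negative factorization of one modality); the paper's particle-filter tracking, multi-modal coupling, online updates and M-estimation are omitted.

    ```python
    import numpy as np

    def nn_dictionary_learning(X, n_atoms=5, n_iter=200, eps=1e-9):
        """Factorize non-negative data X ~ D @ H with multiplicative updates.

        Minimizes ||X - D H||_F^2 while keeping D, H >= 0 -- the same
        update style used to enforce non-negativity in OMRNDL.
        """
        rng = np.random.default_rng(0)
        n, m = X.shape
        D = rng.random((n, n_atoms))   # dictionary (templates)
        H = rng.random((n_atoms, m))   # representation coefficients
        for _ in range(n_iter):
            # Multiplicative updates never change the sign of an entry,
            # so non-negativity is preserved automatically.
            H *= (D.T @ X) / (D.T @ D @ H + eps)
            D *= (X @ H.T) / (D @ H @ H.T + eps)
        return D, H

    X = np.abs(np.random.default_rng(1).normal(size=(20, 40)))
    D, H = nn_dictionary_learning(X)
    err = np.linalg.norm(X - D @ H) / np.linalg.norm(X)
    print(D.min() >= 0, H.min() >= 0, round(err, 3))
    ```

    In the tracking setting, the columns of `H` would play the role of the particles' shared representation coefficients.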

  18. A Single Session of rTMS Enhances Small-Worldness in Writer’s Cramp: Evidence from Simultaneous EEG-fMRI Multi-Modal Brain Graph

    Directory of Open Access Journals (Sweden)

    Rose D. Bharath

    2017-09-01

    Background and Purpose: Repetitive transcranial magnetic stimulation (rTMS) induces widespread changes in brain connectivity. As the network topology differences induced by a single session of rTMS are less well known, we undertook this study to ascertain whether the network alterations had a small-world morphology, using multi-modal graph theory analysis of simultaneous EEG-fMRI. Method: Simultaneous EEG-fMRI was acquired in duplicate before (R1) and after (R2) a single session of rTMS in 14 patients with Writer's Cramp (WC). Whole-brain neuronal and hemodynamic network connectivity were explored using graph theory measures, and the clustering coefficient, path length and small-world index were calculated for EEG and resting-state fMRI (rsfMRI). Multi-modal graph theory analysis was used to evaluate the correlation of EEG and fMRI clustering coefficients. Result: A single session of rTMS was found to increase the clustering coefficient and small-worldness significantly in both EEG and fMRI (p < 0.05). Multi-modal graph theory analysis revealed significant modulations in the fronto-parietal regions immediately after rTMS. The rsfMRI revealed additional modulations in several deep brain regions including the cerebellum, insula and medial frontal lobe. Conclusion: Multi-modal graph theory analysis of simultaneous EEG-fMRI can supplement motor physiology methods in understanding the neurobiology of rTMS in vivo. Coinciding evidence from EEG and rsfMRI reports a small-world morphology for the acute-phase network hyper-connectivity, indicating that the changes ensuing from low-frequency rTMS are probably not "noise".

  19. A Single Rod Multi-modality Multi-interface Level Sensor Using an AC Current Source

    Directory of Open Access Journals (Sweden)

    Abdulgader Hwili

    2008-05-01

    Crude oil separation is an important process in the oil industry. To make efficient use of separators, it is important to know their internal behaviour and to measure the levels of the multiple interfaces between different materials, such as gas-foam, foam-oil, oil-emulsion, emulsion-water and water-solids. A single-rod multi-modality multi-interface level sensor is presented, which has an AC current-source modality and electromagnetic modalities. Some key issues have been addressed, including the effect of salt content and temperature, i.e. conductivity, on the measurement.

  20. Multimodal integration in statistical learning

    DEFF Research Database (Denmark)

    Mitchell, Aaron; Christiansen, Morten Hyllekvist; Weiss, Dan

    2014-01-01

    Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally...... facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.

  1. Modelling multimodal expression of emotion in a virtual agent.

    Science.gov (United States)

    Pelachaud, Catherine

    2009-12-12

    Over the past few years we have been developing an expressive embodied conversational agent system. In particular, we have developed a model of multimodal behaviours that includes dynamism and complex facial expressions. The first feature refers to the qualitative execution of behaviours. Our model is based on perceptual studies and encompasses several parameters that modulate multimodal behaviours. The second feature, the model of complex expressions, follows a componential approach where a new expression is obtained by combining facial areas of other expressions. Lately we have been working on adding temporal dynamism to expressions. So far they have been designed statically, typically at their apex. Only full-blown expressions could be modelled. To overcome this limitation, we have defined a representation scheme that describes the temporal evolution of the expression of an emotion. It is no longer represented by a static definition but by a temporally ordered sequence of multimodal signals.

  2. Investigating multimodal communication in virtual meetings

    DEFF Research Database (Denmark)

    Persson, John Stouby; Mathiassen, Lars

    2014-01-01

    To manage distributed work, organizations increasingly rely on virtual meetings based on multimodal, synchronous communication technologies. However, despite technological advances, it is still challenging to coordinate knowledge through these meetings with spatial and cultural separation. Against...... recordings of their oral exchanges and video recordings of their shared dynamic representation of the project's status and plans, our analysis reveals how their interrelating of visual and verbal communication acts enabled effective communication and coordination. In conclusion, we offer theoretical propositions that explain how interrelating of verbal and visual acts based on shared dynamic representations enables communication repairs during virtual meetings. We argue the proposed framework provides researchers with a novel and practical approach to investigate the complex data involved in virtual...

  3. Reduced multimodal integration of memory features following continuous theta burst stimulation of angular gyrus.

    Science.gov (United States)

    Yazar, Yasemin; Bergström, Zara M; Simons, Jon S

    Lesions of the angular gyrus (AnG) region of human parietal cortex do not cause amnesia, but appear to be associated with a reduction in the ability to consciously experience the reliving of previous events. We used continuous theta burst stimulation to test the hypothesis that the cognitive mechanism implicated in this memory deficit might be the integration of retrieved sensory event features into a coherent multimodal memory representation. Healthy volunteers received stimulation to AnG or a vertex control site after studying stimuli that each comprised a visual object embedded in a scene, with the name of the object presented auditorily. Participants were then asked to make memory judgments about the studied stimuli that involved recollection of single event features (visual or auditory), or required integration of event features within the same modality, or across modalities. Participants' ability to retrieve context features from across multiple modalities was significantly reduced after AnG stimulation compared to stimulation of the vertex. This effect was observed only for the integration of cross-modal context features, not for integration of features within the same modality, and could not be accounted for by task difficulty, as performance was matched across integration conditions following vertex stimulation. These results support the hypothesis that AnG is necessary for the multimodal integration of distributed cortical episodic features into a unified conscious representation that enables the experience of remembering. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Cell type discovery using single-cell transcriptomics: implications for ontological representation.

    Science.gov (United States)

    Aevermann, Brian D; Novotny, Mark; Bakken, Trygve; Miller, Jeremy A; Diehl, Alexander D; Osumi-Sutherland, David; Lasken, Roger S; Lein, Ed S; Scheuermann, Richard H

    2018-05-01

    Cells are the fundamental functional units of multicellular organisms, with different cell types playing distinct physiological roles in the body. The recent advent of single-cell transcriptional profiling using RNA sequencing is producing 'big data', enabling the identification of novel human cell types at an unprecedented rate. In this review, we summarize recent work characterizing cell types in the human central nervous and immune systems using single-cell and single-nuclei RNA sequencing, and discuss the implications that these discoveries are having on the representation of cell types in the reference Cell Ontology (CL). We propose a method, based on random forest machine learning, for identifying sets of necessary and sufficient marker genes, which can be used to assemble consistent and reproducible cell type definitions for incorporation into the CL. The representation of defined cell type classes and their relationships in the CL using this strategy will make the cell type classes being identified by high-throughput/high-content technologies findable, accessible, interoperable and reusable (FAIR), allowing the CL to serve as a reference knowledgebase of information about the role that distinct cellular phenotypes play in human health and disease.

  5. Photoacoustic-Based Multimodal Nanoprobes: from Constructing to Biological Applications.

    Science.gov (United States)

    Gao, Duyang; Yuan, Zhen

    2017-01-01

    Multimodal nanoprobes have attracted intensive attention since they can integrate various imaging modalities to obtain the complementary merits of single modalities. Meanwhile, interest in laser-induced photoacoustic imaging is growing rapidly due to its unique advantages in visualizing tissue structure and function with high spatial resolution and satisfactory imaging depth. In this review, we summarize multimodal nanoprobes involving photoacoustic imaging. In particular, we focus on methods of constructing multimodal nanoprobes, which we divide into two types. The first, a "one for all" concept, exploits the intrinsic properties of the elements in a single particle. The second, an "all in one" concept, integrates different functional blocks in one particle. We then briefly introduce applications of the multifunctional nanoprobes for in vivo imaging and imaging-guided tumor therapy. Finally, we discuss the advantages and disadvantages of the present methods of constructing multimodal nanoprobes and share our viewpoints on this area.

  6. Motivating Students' Research Skills and Interests through a Multimodal, Multigenre Research Project

    Science.gov (United States)

    Bailey, Nancy M.; Carroll, Kristen M.

    2010-01-01

    The authors investigate how innovative research assignments based on students' personal interests can help them want to develop their research skills. They find that multimodal communication and representation, including film, written scripts, comic strips, music, and photography, encourage students to carefully select information from the…

  7. Applications of Elpasolites as a Multimode Radiation Sensor

    Science.gov (United States)

    Guckes, Amber

    This study consists of both computational and experimental investigations. The computational results enabled detector design selections and confirmed experimental results. The experimental results determined that the CLYC scintillation detector can be applied as a functional and field-deployable multimode radiation sensor. The computational study utilized the MCNP6 code to investigate the response of CLYC to various incident radiations and to determine the feasibility of its application as a handheld multimode sensor and as a single-scintillator collimated directional detection system. These simulations include:
    • Characterization of the response of the CLYC scintillator to gamma rays and neutrons;
    • Study of the isotopic enrichment of 7Li versus 6Li in the CLYC for optimal detection of both thermal neutrons and fast neutrons;
    • Analysis of collimator designs to determine the optimal collimator for the single-CLYC-sensor directional detection system to assay gamma rays and neutrons;
    • Simulations of a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system with the optimized collimator, to determine the feasibility of detecting nuclear materials that could be encountered during field operations. These nuclear materials include depleted uranium, natural uranium, low-enriched uranium, highly-enriched uranium, reactor-grade plutonium, and weapons-grade plutonium.
    The experimental study includes the design, construction, and testing of both a handheld CLYC multimode sensor and a single CLYC scintillator collimated directional detection system. Both were designed in the Inventor CAD software, based on the results of the computational study, to optimize performance. The handheld CLYC multimode sensor is modular, scalable, low-power, and optimized for high count rates. Commercial-off-the-shelf components were used where possible in order to optimize size, increase robustness, and minimize cost. The handheld CLYC multimode

  8. Exploring Middle School Students' Representational Competence in Science: Development and Verification of a Framework for Learning with Visual Representations

    Science.gov (United States)

    Tippett, Christine Diane

    Scientific knowledge is constructed and communicated through a range of forms in addition to verbal language. Maps, graphs, charts, diagrams, formulae, models, and drawings are just some of the ways in which science concepts can be represented. Representational competence---an aspect of visual literacy that focuses on the ability to interpret, transform, and produce visual representations---is a key component of science literacy and an essential part of science reading and writing. To date, however, most research has examined learning from representations rather than learning with representations. This dissertation consisted of three distinct projects that were related by a common focus on learning from visual representations as an important aspect of scientific literacy. The first project was the development of an exploratory framework that is proposed for use in investigations of students constructing and interpreting multimedia texts. The exploratory framework, which integrates cognition, metacognition, semiotics, and systemic functional linguistics, could eventually result in a model that might be used to guide classroom practice, leading to improved visual literacy, better comprehension of science concepts, and enhanced science literacy because it emphasizes distinct aspects of learning with representations that can be addressed though explicit instruction. The second project was a metasynthesis of the research that was previously conducted as part of the Explicit Literacy Instruction Embedded in Middle School Science project (Pacific CRYSTAL, http://www.educ.uvic.ca/pacificcrystal). Five overarching themes emerged from this case-to-case synthesis: the engaging and effective nature of multimedia genres, opportunities for differentiated instruction using multimodal strategies, opportunities for assessment, an emphasis on visual representations, and the robustness of some multimodal literacy strategies across content areas. The third project was a mixed

  9. From 'Virgin Births' to 'Octomom': Representations of Single Motherhood via Sperm Donation in the UK News.

    Science.gov (United States)

    Zadeh, S; Foster, J

    2016-01-01

    The use of sperm donation by single women has provoked public, professional and political debate. Newspapers serve as a critical means of both broadcasting this debate and effecting a representation of this user group within the public sphere. This study uses the theory of social representations to examine how single motherhood by sperm donation has been represented in the UK news over time. The study sampled news coverage on this topic in eight British newspapers during three 4-year periods between the years 1988 and 2012. The dataset of news reports (n = 406) was analysed using a qualitative approach. Findings indicated that UK media reports of single women using donor sperm are underpinned by conventional categories of the 'personal', the 'traditional' and the 'natural' that when paired with their corollaries produce a representation of this user group as the social 'other'. The amount of coverage on this topic over time was found to vary according to the political orientation of different media sources. Using key concepts from social representations theory, this article discusses the relationship between themata and anchoring in the maintenance of representations of the social 'other' in mass mediated communication. Findings are explained in relation to theoretical conceptions of the mass media and its position within the public sphere. It is argued that the use of personal narratives in news reports of single mothers by sperm donation may have significant implications for public understandings of this social group. © 2016 The Authors. Journal of Community & Applied Social Psychology published by John Wiley & Sons Ltd.

  10. Effects of Multimodal Information on Learning Performance and Judgment of Learning

    Science.gov (United States)

    Chen, Gongxiang; Fu, Xiaolan

    2003-01-01

    Two experiments were conducted to investigate the effects of multimodal information on learning performance and judgment of learning (JOL). Experiment 1 examined the effects of representation type (word-only versus word-plus-picture) and presentation channel (visual-only versus visual-plus-auditory) on recall and immediate-JOL in fixed-rate…

  11. Quantum-field theories as representations of a single $^\\ast$-algebra

    OpenAIRE

    Raab, Andreas

    2013-01-01

    We show that many well-known quantum field theories emerge as representations of a single $^\\ast$-algebra. These include free quantum field theories in flat and curved space-times, lattice quantum field theories, Wightman quantum field theories, and string theories. We prove that such theories can be approximated on lattices, and we give a rigorous definition of the continuum limit of lattice quantum field theories.

  12. Investigation of single-mode and multi-mode hydromagnetic Rayleigh-Taylor instability in planar geometry

    International Nuclear Information System (INIS)

    Roderick, N.F.; Cochrane, K.; Douglas, M.R.

    1998-01-01

    Previous investigations carried out to study various methods of seeding the hydromagnetic Rayleigh-Taylor instability in magnetohydrodynamic simulations showed features similar to those seen in hydrodynamic calculations. For periodic single-mode initiations, the results showed the appearance of harmonics as the single modes became nonlinear. For periodic multi-mode initiations, new modes developed that indicated the presence of mode coupling. The MHD simulations used parameters of the high-velocity, large-radius z-pinch experiments performed in the Z-accelerator at Sandia National Laboratories. The cylindrical convergent geometry and variable acceleration of these configurations made comparison with analytic theory, developed for planar geometry with constant acceleration, difficult. A set of calculations in planar geometry, using constant current to produce acceleration and parameters characteristic of the cylindrical implosions, has been performed to allow a better comparison. Results of these calculations, comparison with analytic theory, and comparison with the cylindrical-configuration calculations will be discussed.

  13. Near field intensity pattern at the output of silica-based graded-index multimode fibers under selective excitation with a single-mode fiber

    NARCIS (Netherlands)

    Tsekrekos, C.P.; Smink, R.W.; Hon, de B.P.; Tijhuis, A.G.; Koonen, A.M.J.

    2007-01-01

    Selective excitation of graded-index multimode fibers (GI-MMFs) with a single-mode fiber (SMF) has gained increased interest for telecommunication applications. It has been proposed as a way to enhance the transmission bandwidth of GI-MMF links and/or create parallel communication channels.

  14. Operative and economic evaluation of a 'Laser Printer Multimodality' System

    International Nuclear Information System (INIS)

    Battaglia, G.; Moscatelli, G.; Maroldi, R.; Chiesa, A.

    1991-01-01

    The increasing application of digital techniques to diagnostic imaging is causing significant changes in several related activities, such as the reproduction of digital images on film. In the Department of Diagnostic Imaging of the University of Brescia, about 70% of all images are produced by digital techniques; at present, most of these images are reproduced on film with a Multimodality System interfacing CT, MR, DSA, and DR units with a single laser printer. Our analysis evaluates the operative and economic aspects of image reproduction by comparing the 'single cassette' multiformat camera and the Laser Printer Multimodality System. Our results point out the advantages obtained by reproducing images with a Laser Printer Multimodality System: outstanding quality, reproduction of multiple originals, and a marked reduction in the time needed for both image archiving and film handling. The Laser Printer Multimodality System allows over 5 hours/day to be saved - that is to say, the working day of an operator, who can thus be shifted to other functions. The important economic aspect of the reproduction of digital images on film proves the Laser Printer Multimodality System to have some advantage over cameras.

  15. The Work of Comics Collaborations: Considerations of Multimodal Composition for Writing Scholarship and Pedagogy

    Science.gov (United States)

    Scanlon, Molly J.

    2015-01-01

    Though multimodality is increasingly incorporated into our pedagogies and scholarship, explorations of collaborative multimodal composition are lacking. Existing literature on collaborative writing focuses predominantly on texts either composed in singular modes or by a single author, neglecting the ways in which multimodal texts are composed…

  16. Acoustic multimode interference and self-imaging phenomena realized in multimodal phononic crystal waveguides

    International Nuclear Information System (INIS)

    Zou, Qiushun; Yu, Tianbao; Liu, Jiangtao; Wang, Tongbiao; Liao, Qinghua; Liu, Nianhua

    2015-01-01

    We report an acoustic multimode interference effect and self-imaging phenomena in an acoustic multimode waveguide system consisting of M parallel phononic crystal waveguides (M-PnCWs). Results show that the self-imaging principle remains applicable for acoustic waveguides just as it does for optical multimode waveguides. To obtain the dispersions and the replicas of the input acoustic waves produced along the propagation direction, we applied the finite element method to M-PnCWs, which support M guided modes within the target frequency range. The simulation results show that single images (including direct and mirrored images) and N-fold images (where N is an integer) are identified along the propagation direction, with asymmetric and symmetric incidence discussed separately. The simulated positions of the replicas agree well with the calculated values that are theoretically determined by the self-imaging conditions based on guided-mode propagation analysis. Moreover, potential applications of this self-imaging effect for acoustic wavelength de-multiplexing and beam splitting in the acoustic field are also presented.
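    In the optical multimode-interference theory on which guided-mode propagation analysis is based, self-image positions follow from the beat length of the two lowest-order modes: N-fold images appear at multiples of 3Lπ/N for general interference and 3Lπ/(4N) for symmetric excitation. A hedged numeric sketch, with made-up propagation constants (the acoustic case in the paper is the analogue of this):

    ```python
    import math

    def self_image_positions(beta0, beta1, n_fold, count=3, symmetric=False):
        """Axial positions of the first `count` N-fold self-images in a
        multimode waveguide, per standard guided-mode propagation analysis
        (optical MMI theory; illustrative only)."""
        L_pi = math.pi / (beta0 - beta1)   # beat length of the two lowest modes
        period = 3 * L_pi / (4 * n_fold) if symmetric else 3 * L_pi / n_fold
        return [p * period for p in range(1, count + 1)]

    # Hypothetical propagation constants (rad/m) for the two lowest modes
    print(self_image_positions(beta0=7200.0, beta1=7100.0, n_fold=2))
    ```

    With these placeholder constants the beat length is π/100 m, so the first two-fold images sit at multiples of 3π/200 m.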

  17. Sustainable Multi-Modal Sensing by a Single Sensor Utilizing the Passivity of an Elastic Actuator

    Directory of Open Access Journals (Sweden)

    Takashi Takuma

    2014-05-01

    When a robot equipped with compliant joints driven by elastic actuators contacts an object and its joints are deformed, multi-modal information, including the magnitude and direction of the applied force and the deformation of the joint, is used to enhance the performance of the robot, for example in dexterous manipulation. In conventional approaches, several types of sensors used to obtain the multi-modal information are attached at the point of contact where the force is applied and at the joint. However, this approach is not sustainable for daily use in robots, i.e., not durable or robust, because the sensors can be damaged by the application of excessive force and worn by repeated contacts. Further, multiple types of sensors are required to measure such physical values, which adds to the complexity of the robot's device system. In our approach, a single type of sensor is used, located at a point distant from the contact point and the joint, and the information is obtained indirectly by measuring certain physical parameters that are influenced by the applied force and the joint deformation. In this study, we employ the McKibben pneumatic actuator, whose inner pressure changes passively when a force is applied to the actuator. We derive the relationships between the information and the pressures of a two-degrees-of-freedom (2-DoF) joint mechanism driven by four pneumatic actuators. Experimental results show that the multi-modal information can be obtained by using the set of pressures measured before and after the force is applied. Further, we apply our principle to obtain the stiffness values of contacting objects, which can subsequently be categorized by using the aforementioned relationships.
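    The indirect-sensing idea, recovering force and deformation from pressure readings taken away from the contact point, can be framed as a calibration problem. The sketch below is hypothetical (a linear pressure-to-state map fitted by least squares over recorded trials), not the relationships actually derived in the paper:

    ```python
    import numpy as np

    # Hypothetical calibration: learn a linear map from the four actuator
    # pressure changes (before/after contact) to the applied force components
    # and joint deformation, in the spirit of the paper's indirect sensing.
    rng = np.random.default_rng(0)
    A_true = rng.normal(size=(3, 4))                     # unknown pressure-to-state map
    P = rng.normal(size=(50, 4))                         # 50 trials of pressure deltas
    Y = P @ A_true.T + 0.01 * rng.normal(size=(50, 3))   # [Fx, Fy, deflection]

    # Least-squares calibration from the recorded trials
    A_est, *_ = np.linalg.lstsq(P, Y, rcond=None)

    # Estimate the state for a new pressure reading
    p_new = rng.normal(size=4)
    state = p_new @ A_est
    print(state.shape)  # (3,)
    ```

    A real device would likely need a nonlinear map, but the before/after pressure-set idea carries over unchanged.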

  18. The dynamics of multimodal integration: The averaging diffusion model.

    Science.gov (United States)

    Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; McClelland, James L

    2017-12-01

    We combine extant theories of evidence accumulation and multimodal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process in which noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of the evidence samples, and use it as a basis for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
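    The defining feature of the Averaging Diffusion Model, a decision variable that is the running mean rather than the running sum of evidence samples, can be shown with a toy simulation. This is an illustrative sketch only: the drift, noise, threshold and warm-up values are arbitrary, and the authors' fitted model is more elaborate.

    ```python
    import numpy as np

    def averaging_diffusion_trial(drift, noise=1.0, threshold=0.5,
                                  max_steps=10_000, rng=None):
        """One simulated trial where the decision variable is the *mean*
        of the evidence samples, not their sum. Returns (choice, n_samples)."""
        if rng is None:
            rng = np.random.default_rng()
        total = 0.0
        for n in range(1, max_steps + 1):
            total += drift + noise * rng.standard_normal()
            mean = total / n                    # averaging, not accumulation
            if abs(mean) >= threshold and n >= 5:   # small warm-up period
                return (1 if mean > 0 else -1), n
        return 0, max_steps                     # no decision reached

    rng = np.random.default_rng(0)
    choice, n = averaging_diffusion_trial(drift=0.8, rng=rng)
    print(choice, n)
    ```

    Unlike a summed accumulator, the averaged variable converges toward the drift rate, so a threshold above the drift may never be reached; that difference is what makes the averaging formulation a distinct model.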

  19. Study on multimodal transport route under low carbon background

    Science.gov (United States)

    Liu, Lele; Liu, Jie

    2018-06-01

    Low-carbon environmental protection is a worldwide focus of attention, and researchers continually study carbon emissions from both production and daily life. However, there is little literature, domestic or international, on multimodal transportation based on carbon emissions. This paper first introduces the theory of multimodal transportation and analyzes multimodal transport models that do and do not consider carbon emissions. On this basis, a multi-objective 0-1 programming model minimizing both total transportation cost and total carbon emissions is proposed. Weights are applied within the ideal point method to transform the multi-objective program into a single-objective function. The optimal trade-off between carbon emissions and transportation cost under different weights is determined by the single-objective function with variable weights. Based on the model and algorithm, an example is given and the results are analyzed.
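    The weighted ideal-point scalarization described above can be illustrated on a toy instance: two legs, three transport modes per leg (a 0-1 choice per leg), with entirely made-up cost and emission figures. Each route's objective vector is scored by its weighted squared distance to the ideal point, collapsing the two objectives into one.

    ```python
    import itertools
    import numpy as np

    # Made-up per-leg figures: rows = legs, columns = modes (rail, road, water)
    cost = np.array([[60, 40, 90],
                     [50, 35, 70]])
    co2  = np.array([[20, 45, 10],
                     [15, 40,  8]])

    def ideal_point_route(w_cost, w_co2):
        routes = list(itertools.product(range(3), repeat=2))   # one mode per leg
        totals = np.array([(cost[[0, 1], list(r)].sum(),
                            co2[[0, 1], list(r)].sum()) for r in routes])
        ideal = totals.min(axis=0)        # best achievable value per objective
        # Weighted squared distance to the ideal point -> single objective
        d = w_cost * (totals[:, 0] - ideal[0]) ** 2 + \
            w_co2  * (totals[:, 1] - ideal[1]) ** 2
        return routes[int(np.argmin(d))]

    print(ideal_point_route(w_cost=0.8, w_co2=0.2))  # → (1, 0): cost-heavy pick
    print(ideal_point_route(w_cost=0.2, w_co2=0.8))  # → (0, 0): emission-heavy pick
    ```

    Varying the weights traces out the trade-off between cost and emissions, which is exactly how the paper explores solutions under different weightings.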

  20. Mental Representations in Musical Processing and their Role in Action-Perception Loops

    Directory of Open Access Journals (Sweden)

    Rebecca S. Schaefer

    2015-05-01

    Full Text Available I address the diverging usage of the term "imagery" by delineating different types of imagery, each of which is supported by multimodal mental representations that are informed and modulated by the body and its in- and outputs, and that in turn modulate and inform perception and action through predictive processing. These multimodal representations, viewed here as mental models, underlie our individual perceptual experience of music, which is constructed in the listener as it is perceived and interpreted. While tracking incoming auditory information, mental representations of music unfold on multiple levels as we listen, from regularities detected across notes to the structure of entire pieces of music, generating predictions for different musical aspects. These predictions lead to specific percepts and behavioral outputs, illustrating a tight coupling of cognition, perception and action. This coupling and the prominence of predictive mechanisms in music processing are described in the context of the broader role of predictive processing in cognitive function, which is well suited to account for the role of mental models in musical perception and action. As a proxy for mental representations, investigating the cerebral correlates of constructive imagination may offer an experimentally tractable approach to clarifying how mental models of music are implemented in the brain.

  1. Could a Multimodal Dictionary Serve as a Learning Tool? An Examination of the Impact of Technologically Enhanced Visual Glosses on L2 Text Comprehension

    Science.gov (United States)

    Sato, Takeshi

    2016-01-01

    This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it…

  2. Impact of Professional Learning on Teachers' Representational Strategies and Students' Cognitive Engagement with Molecular Genetics Concepts

    Science.gov (United States)

    Nichols, Kim

    2018-01-01

    A variety of practices and specialised representational systems are required to understand, communicate and construct molecular genetics knowledge. This study describes teachers' use of multimodal representations of molecular genetics concepts and how their strategies and choice of resources were interpreted, understood and used by students to…

  3. Optical sensor in planar configuration based on multimode interference

    Science.gov (United States)

    Blahut, Marek

    2017-08-01

    In this paper, a numerical analysis of optical sensors based on multimode interference in a planar one-dimensional step-index configuration is presented. The structure consists of single-mode input and output waveguides and a multimode waveguide that guides only a few modes. The material parameters discussed refer to an SU8 polymer waveguide on a SiO2 substrate. The optical system described is designed for the analysis of biological substances.

  4. Adaptive multimodal interaction in mobile augmented reality: A conceptual framework

    Science.gov (United States)

    Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A'isyah Ahmad

    2017-10-01

    Augmented Reality (AR) has recently become an emerging technology in many mobile applications. Mobile AR is defined as a medium for displaying information merged with the real-world environment, mapped to augmented reality surroundings in a single view. There are four main types of mobile augmented reality interfaces, one of which is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture, gaze, and head and body movements) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal interfaces in mobile augmented reality. The main goal of this study is to propose a conceptual framework that illustrates the adaptive multimodal interface in mobile augmented reality. We reviewed several frameworks proposed in the fields of multimodal interfaces, adaptive interfaces and augmented reality, analyzed the components of these frameworks, and assessed which can be applied on mobile devices. Our framework can be used as a guide for designers and developers building mobile AR applications with adaptive multimodal interfaces.

  5. Multimodal Speaker Diarization.

    Science.gov (United States)

    Noulas, A; Englebienne, G; Krose, B J A

    2012-01-01

    We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.

  6. Optimal Face-Iris Multimodal Fusion Scheme

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2016-06-01

    Full Text Available Multimodal biometric systems are considered a way to minimize the limitations raised by single traits. This paper proposes new schemes based on score level, feature level and decision level fusion to efficiently fuse face and iris modalities. Log-Gabor transformation is applied as the feature extraction method on face and iris modalities. At each level of fusion, different schemes are proposed to improve the recognition performance and, finally, a combination of schemes at different fusion levels constructs an optimized and robust scheme. In this study, CASIA Iris Distance database is used to examine the robustness of all unimodal and multimodal schemes. In addition, Backtracking Search Algorithm (BSA, a novel population-based iterative evolutionary algorithm, is applied to improve the recognition accuracy of schemes by reducing the number of features and selecting the optimized weights for feature level and score level fusion, respectively. Experimental results on verification rates demonstrate a significant improvement of proposed fusion schemes over unimodal and multimodal fusion methods.
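
    Score-level fusion, one of the three fusion levels compared above, can be sketched as a weighted sum of min-max-normalized matcher scores. The weights, score ranges, and sample scores below are hypothetical, and the paper's actual weights come from the BSA optimizer rather than being fixed by hand:

```python
def min_max_norm(score, lo, hi):
    """Map a raw matcher score into [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse_scores(face_score, iris_score, w_face=0.4, w_iris=0.6,
                face_range=(0.0, 100.0), iris_range=(0.0, 1.0)):
    """Weighted-sum score-level fusion of two matchers. The weights and
    score ranges here are illustrative placeholders only."""
    f = min_max_norm(face_score, *face_range)
    i = min_max_norm(iris_score, *iris_range)
    return w_face * f + w_iris * i

# Hypothetical raw scores for a genuine match and an impostor attempt.
genuine = fuse_scores(face_score=85.0, iris_score=0.92)
impostor = fuse_scores(face_score=30.0, iris_score=0.25)
```

    Normalization is essential here: without it, a matcher with a larger raw score range would dominate the sum regardless of the chosen weights.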

  7. From ‘Virgin Births’ to ‘Octomom’: Representations of Single Motherhood via Sperm Donation in the UK News

    Science.gov (United States)

    Foster, J.

    2016-01-01

    Abstract The use of sperm donation by single women has provoked public, professional and political debate. Newspapers serve as a critical means of both broadcasting this debate and effecting a representation of this user group within the public sphere. This study uses the theory of social representations to examine how single motherhood by sperm donation has been represented in the UK news over time. The study sampled news coverage on this topic in eight British newspapers during three 4‐year periods between the years 1988 and 2012. The dataset of news reports (n = 406) was analysed using a qualitative approach. Findings indicated that UK media reports of single women using donor sperm are underpinned by conventional categories of the ‘personal’, the ‘traditional’ and the ‘natural’ that when paired with their corollaries produce a representation of this user group as the social ‘other’. The amount of coverage on this topic over time was found to vary according to the political orientation of different media sources. Using key concepts from social representations theory, this article discusses the relationship between themata and anchoring in the maintenance of representations of the social ‘other’ in mass mediated communication. Findings are explained in relation to theoretical conceptions of the mass media and its position within the public sphere. It is argued that the use of personal narratives in news reports of single mothers by sperm donation may have significant implications for public understandings of this social group. © 2016 The Authors. Journal of Community & Applied Social Psychology published by John Wiley & Sons Ltd. PMID:27867283

  8. Multi-modal locomotion: from animal to application

    International Nuclear Information System (INIS)

    Lock, R J; Burgess, S C; Vaidyanathan, R

    2014-01-01

    The majority of robotic vehicles found today are bound to operations within a single medium (i.e. land, air or water). This is very rarely the case when considering locomotive capabilities in natural systems. Utility for small robots often reflects the exact same problem domain as small animals, hence providing numerous avenues for biological inspiration. This paper begins to investigate the various modes of locomotion adopted by different genus groups in multiple media as an initial attempt to determine the compromises in ability the animals accept when achieving multi-modal locomotion. A review of current biologically inspired multi-modal robots is also presented. The primary aim of this research is to lay the foundation for a generation of vehicles capable of multi-modal locomotion, allowing ambulatory abilities in more than one medium and surpassing current capabilities. By identifying and understanding when natural systems use specific locomotion mechanisms, when they opt for disparate mechanisms for each mode of locomotion rather than a synergized singular mechanism, and how this affects their capability in each medium, similar combinations can be used as inspiration for future multi-modal biologically inspired robotic platforms. (topical review)

  9. Severe, multimodal stress exposure induces PTSD-like characteristics in a mouse model of single prolonged stress.

    Science.gov (United States)

    Perrine, Shane A; Eagle, Andrew L; George, Sophie A; Mulo, Kostika; Kohler, Robert J; Gerard, Justin; Harutyunyan, Arman; Hool, Steven M; Susick, Laura L; Schneider, Brandy L; Ghoddoussi, Farhad; Galloway, Matthew P; Liberzon, Israel; Conti, Alana C

    2016-04-15

    Appropriate animal models of posttraumatic stress disorder (PTSD) are needed because human studies remain limited in their ability to probe the underlying neurobiology of PTSD. Although the single prolonged stress (SPS) model is an established rat model of PTSD, the development of a similarly-validated mouse model emphasizes the benefits and cross-species utility of rodent PTSD models and offers unique methodological advantages over the rat. Therefore, the aims of this study were to develop and describe a SPS model for mice and to provide data that support current mechanisms relevant to PTSD. The mouse single prolonged stress (mSPS) paradigm involves exposing C57Bl/6 mice to a series of severe, multimodal stressors, including 2 h restraint, 10 min group forced swim, exposure to soiled rat bedding scent, and exposure to ether until unconsciousness. Following a 7-day undisturbed period, mice were tested for cue-induced fear behavior, effects of paroxetine on cue-induced fear behavior, extinction retention of a previously extinguished fear memory, dexamethasone suppression of the corticosterone (CORT) response, dorsal hippocampal glucocorticoid receptor protein and mRNA expression, and prefrontal cortex glutamate levels. Exposure to mSPS enhanced cue-induced fear, which was attenuated by oral paroxetine treatment. mSPS also disrupted extinction retention, enhanced suppression of the stress-induced CORT response, increased mRNA expression of dorsal hippocampal glucocorticoid receptors and decreased prefrontal cortex glutamate levels. These data suggest that the mSPS model is a translationally-relevant model for future PTSD research with strong face, construct, and predictive validity. In summary, mSPS models characteristics relevant to PTSD, and this severe, multimodal stress modifies fear learning in mice in a way that coincides with changes in the hypothalamo-pituitary-adrenal (HPA) axis, brain glucocorticoid systems, and glutamatergic signaling in the prefrontal cortex.

  10. An Efficient Human Identification through MultiModal Biometric System

    Directory of Open Access Journals (Sweden)

    K. Meena

    Full Text Available Human identification is essential for the proper functioning of society. Human identification through multimodal biometrics is becoming an emerging trend, one reason being improved recognition accuracy. Unimodal biometric systems are affected by various problems such as noisy sensor data, non-universality, lack of individuality, lack of an invariant representation and susceptibility to circumvention. A unimodal system has limited accuracy. Hence, multimodal biometric systems, which combine more than one biometric feature at different levels, are proposed in order to enhance the performance of the system. A supervisor module combines the different opinions or decisions delivered by each subsystem and then makes a final decision. In this paper, a multimodal biometric authentication is proposed by combining face, iris and finger features. Biometric features are extracted by the Local Derivative Ternary Pattern (LDTP) in the Contourlet domain, and an extensive evaluation of LDTP is done using a Support Vector Machine and a Nearest Neighborhood Classifier. The experimental evaluations are performed on a public dataset, demonstrating the accuracy of the proposed system compared with existing systems. It is observed that the combination of face, fingerprint and iris gives better performance in terms of accuracy, False Acceptance Rate and False Rejection Rate with minimum computation time.

  11. Computational investigation of single mode vs multimode Rayleigh–Taylor seeding in Z-pinch implosions

    International Nuclear Information System (INIS)

    Douglas, M.R.; Deeney, C.; Roderick, N.F.

    1998-01-01

    A series of two-dimensional magnetohydrodynamic calculations have been carried out to investigate single and multimode growth and mode coupling for magnetically-driven Rayleigh–Taylor instabilities in Z pinches. Wavelengths ranging from 5.0 mm down to 1.25 mm were considered. Such wavelengths are comparable to those observed at stagnation using a random density "seeding" method. The calculations show that wavelengths resolved by less than 10 cells exhibit an artificial decrease in initial Fourier spectrum amplitudes and a reduction in the corresponding amplitude growth. Single mode evolution exhibits linear exponential growth and the development of higher harmonics as the mode transitions into the nonlinear phase. The mode growth continues to exponentiate but at a slower rate than determined by linear hydrodynamic theory. In the two and three mode case, there is clear evidence of mode coupling and inverse cascade. In addition, distinct modal patterns are observed late in the implosion, resulting from finite shell thickness and magnetic field effects. copyright 1998 American Institute of Physics.
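
    The wavelength dependence referred to by "linear hydrodynamic theory" follows the classical Rayleigh-Taylor growth rate, gamma = sqrt(A g k) with k = 2*pi/lambda. The acceleration and Atwood number below are placeholder values, not parameters from these MHD simulations:

```python
import math

def rt_growth_rate(wavelength_m, accel=1.0e12, atwood=1.0):
    """Classical linear Rayleigh-Taylor growth rate gamma = sqrt(A*g*k),
    with wavenumber k = 2*pi/wavelength. Acceleration and Atwood number
    are illustrative placeholders."""
    k = 2.0 * math.pi / wavelength_m
    return math.sqrt(atwood * accel * k)

# Shorter wavelengths grow faster in linear theory: quartering the
# wavelength (5.0 mm -> 1.25 mm) doubles the growth rate.
g_5mm = rt_growth_rate(5.0e-3)
g_125mm = rt_growth_rate(1.25e-3)
```

    This sqrt(k) scaling is why under-resolving the short-wavelength modes, as the abstract notes for wavelengths covered by fewer than 10 cells, artificially suppresses exactly the fastest-growing part of the spectrum.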

  12. Analyzing Multimode Wireless Sensor Networks Using the Network Calculus

    Directory of Open Access Journals (Sweden)

    Xi Jin

    2015-01-01

    Full Text Available The network calculus is a powerful tool to analyze the performance of wireless sensor networks. But the original network calculus can only model single-mode wireless sensor networks. In this paper, we combine the original network calculus with the multimode model to analyze the maximum delay bound of the flow of interest in a multimode wireless sensor network. There are two combined methods, A-MM and N-MM. The method A-MM models the whole network as a multimode component, and the method N-MM models each node as a multimode component. We prove that the maximum delay bound computed by the method A-MM is tighter than or equal to that computed by the method N-MM. Experiments show that our proposed methods can significantly decrease the analytical delay bound compared with the separate flow analysis method. For a large-scale wireless sensor network with 32 thousand sensor nodes, our proposed methods can decrease the analytical delay bound by about 70%.
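
    The kind of delay bound the network calculus produces can be shown with the textbook single-mode case: a token-bucket arrival curve crossing a rate-latency service curve. This is the standard horizontal-deviation bound, not the A-MM/N-MM multimode computation itself, and the numbers are illustrative:

```python
def delay_bound(b, r, R, T):
    """Worst-case delay for a token-bucket arrival curve alpha(t) = b + r*t
    served by a rate-latency service curve beta(t) = R * max(t - T, 0).
    The bound is the horizontal deviation h(alpha, beta) = T + b/R,
    valid only when the sustained rate satisfies r <= R."""
    if r > R:
        raise ValueError("flow rate exceeds service rate; delay is unbounded")
    return T + b / R

# Illustrative numbers: 2000-bit burst, 0.5 Mbit/s flow, 1 Mbit/s server
# with 1 ms latency.
d = delay_bound(b=2000.0, r=5e5, R=1e6, T=0.001)
```

    The multimode methods in the paper tighten such bounds further by letting the arrival and service curves switch with the network's operating mode instead of assuming the single worst-case mode throughout.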

  13. A Multimode Equivalent Network Approach for the Analysis of a 'Realistic' Finite Array of Open Ended Waveguides

    NARCIS (Netherlands)

    Neto, A.; Bolt, R.; Gerini, G.; Schmitt, D.

    2003-01-01

    In this contribution we present a theoretical model for the analysis of finite arrays of open-ended waveguides mounted on finite mounting platforms or having radome coverages. This model is based on a Multimode Equivalent Network (MEN) [1] representation of the radiating waveguides complete with

  14. Sustained Spatial Attention in Touch: Modality-Specific and Multimodal Mechanisms

    Directory of Open Access Journals (Sweden)

    Chiara F. Sambo

    2011-01-01

    Full Text Available Sustained attention to a body location results in enhanced processing of tactile stimuli presented at that location compared to another unattended location. In this paper, we review studies investigating the neural correlates of sustained spatial attention in touch. These studies consistently show that activity within modality-specific somatosensory areas (SI and SII) is modulated by sustained tactile-spatial attention. Recent evidence suggests that these somatosensory areas may be recruited as part of a larger cortical network, also including higher-level multimodal regions involved in spatial selection across modalities. We discuss, in turn, the following multimodal effects in sustained tactile-spatial attention tasks. First, cross-modal attentional links between touch and vision, reflected in enhanced processing of task-irrelevant visual stimuli at tactually attended locations, are mediated by common (multimodal) representations of external space. Second, vision of the body modulates activity underlying sustained tactile-spatial attention, facilitating attentional modulation of tactile processing in between-hand selection tasks (when hands are sufficiently far apart) and impairing attentional modulation in within-hand selection tasks. Finally, body posture influences mechanisms of sustained tactile-spatial attention, relying, at least partly, on remapping of tactile stimuli in external, visually defined, spatial coordinates. Taken together, the findings reviewed in this paper indicate that sustained spatial attention in touch is subserved by both modality-specific and multimodal mechanisms. The interplay between these mechanisms allows flexible and efficient spatial selection within and across sensory modalities.

  15. Multimodal imaging analysis of single-photon emission computed tomography and magnetic resonance tomography for improving diagnosis of Parkinson's disease

    International Nuclear Information System (INIS)

    Barthel, H.; Georgi, P.; Slomka, P.; Dannenberg, C.; Kahn, T.

    2000-01-01

    Parkinson's disease (PD) is characterized by a degeneration of nigrostriatal dopaminergic neurons, which can be imaged with 123I-labeled 2β-carbomethoxy-3β-(4-iodophenyl)tropane ([123I]β-CIT) and single-photon emission computed tomography (SPECT). However, the quality of the region of interest (ROI) technique used for quantitative analysis of SPECT data is compromised by limited anatomical information in the images. We investigated whether the diagnosis of PD can be improved by combining the use of SPECT images with morphological image data from magnetic resonance imaging (MRI)/computed tomography (CT). We examined 27 patients (8 men, 19 women; aged 55±13 years) with PD (Hoehn and Yahr stage 2.1±0.8) by high-resolution [123I]β-CIT SPECT (185-200 MBq, Ceraspect camera). SPECT images were analyzed both by a unimodal technique (ROIs defined directly within the SPECT studies) and a multimodal technique (ROIs defined within individual MRI/CT studies and transferred to the corresponding interactively coregistered SPECT studies). [123I]β-CIT binding ratios (cerebellum as reference), which were obtained for heads of caudate nuclei (CA), putamina (PU), and global striatal structures, were compared with clinical parameters. Differences between contra- and ipsilateral (related to symptom dominance) striatal [123I]β-CIT binding ratios proved to be larger in the multimodal ROI technique than in the unimodal approach (e.g., for PU: 1.2*** vs. 0.7**). Binding ratios obtained by the unimodal ROI technique were significantly correlated with those of the multimodal technique (e.g., for CA: y=0.97x+2.8; r=0.70; P com subscore (r=-0.49* vs. -0.32). These results show that the impact of [123I]β-CIT SPECT for diagnosing PD is affected by the method used to analyze the SPECT images. The described multimodal approach, which is based on coregistration of SPECT and morphological imaging data, leads to improved determination of the degree of this dopaminergic disorder.

  16. Spectroelectrochemical Sensing Based on Multimode Selectivity simultaneously Achievable in a Single Device. 11. Design and Evaluation of a Small Portable Sensor for the Determination of Ferrocyanide in Hanford Waste Samples

    International Nuclear Information System (INIS)

    Stegemiller, Michael L.; Heineman, William R.; Seliskar, Carl J.; Ridgway, Thomas H.; Bryan, Samuel A.; Hubler, Timothy L.; Sell, Richard L.

    2003-01-01

    Spectroelectrochemical sensing based on multimode selectivity simultaneously achievable in a single device. 11. Design and evaluation of a small portable sensor for the determination of ferrocyanide in Hanford waste samples

  17. A single-column model intercomparison on the stratocumulus representation in present-day and future climate

    NARCIS (Netherlands)

    Dal Gesso, S.; Van der Dussen, J.J.; Siebesma, A.P.; De Roode, S.R.; Boutle, I.A.; Kamae, Y.; Roehrig, R.; Vial, J.

    2015-01-01

    Six Single-Column Model (SCM) versions of climate models are evaluated on the basis of their representation of the dependence of the stratocumulus-topped boundary layer regime on the free tropospheric thermodynamic conditions. The study includes two idealized experiments corresponding to the

  18. Reconfigurable optical interconnection network for multimode optical fiber sensor arrays

    Science.gov (United States)

    Chen, R. T.; Robinson, D.; Lu, H.; Wang, M. R.; Jannson, T.; Baumbick, R.

    1992-01-01

    A single-source, single-detector architecture has been developed to implement a reconfigurable optical interconnection network for multimode optical fiber sensor arrays. The network was realized by integrating LiNbO3 electrooptic (EO) gratings working in the Raman-Nath regime and a massive fan-out waveguide hologram (WH) working in the Bragg regime onto a multimode glass waveguide. The glass waveguide utilized the whole substrate as a guiding medium. A 1-to-59 massive waveguide fan-out was demonstrated using a WH operating at 514 nm. A measured diffraction efficiency of 59 percent was experimentally confirmed. Reconfigurability of the interconnection was carried out by generating an EO grating through an externally applied electric field. Unlike conventional single-mode integrated optical devices, the guided mode demonstrated has an azimuthal symmetry in mode profile which is the same as that of a fiber mode.

  19. Multimodal neural correlates of cognitive control in the Human Connectome Project.

    Science.gov (United States)

    Lerman-Sinkoff, Dov B; Sui, Jing; Rachakonda, Srinivas; Kandala, Sridhar; Calhoun, Vince D; Barch, Deanna M

    2017-12-01

    Cognitive control is a construct that refers to the set of functions that enable decision-making and task performance through the representation of task states, goals, and rules. The neural correlates of cognitive control have been studied in humans using a wide variety of neuroimaging modalities, including structural MRI, resting-state fMRI, and task-based fMRI. The results from each of these modalities independently have implicated the involvement of a number of brain regions in cognitive control, including dorsal prefrontal cortex, and frontal parietal and cingulo-opercular brain networks. However, it is not clear how the results from a single modality relate to results in other modalities. Recent developments in multimodal image analysis methods provide an avenue for answering such questions and could yield more integrated models of the neural correlates of cognitive control. In this study, we used multiset canonical correlation analysis with joint independent component analysis (mCCA + jICA) to identify multimodal patterns of variation related to cognitive control. We used two independent cohorts of participants from the Human Connectome Project, each of which had data from four imaging modalities. We replicated the findings from the first cohort in the second cohort using both independent and predictive analyses. The independent analyses identified a component in each cohort that was highly similar to the other and significantly correlated with cognitive control performance. The replication by prediction analyses identified two independent components that were significantly correlated with cognitive control performance in the first cohort and significantly predictive of performance in the second cohort. These components identified positive relationships across the modalities in neural regions related to both dynamic and stable aspects of task control, including regions in both the frontal-parietal and cingulo-opercular networks, as well as regions
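
    The idea behind the canonical-correlation step of mCCA can be illustrated on toy two-dimensional "modalities": find one projection direction per modality such that the projected scores are maximally correlated. The grid-search sketch below is a deliberately simplified stand-in for one canonical component, not the mCCA + jICA pipeline used in the study, and the synthetic data are invented:

```python
import math
import random

def pearson(u, v):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

def project(rows, theta):
    """Project 2-D rows onto the unit direction at angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [c * a + s * b for a, b in rows]

def best_projection_pair(X, Y, steps=60):
    """Grid-search a pair of projection angles maximizing the absolute
    correlation of the projected scores (a toy one-component CCA)."""
    angles = [math.pi * i / steps for i in range(steps)]
    proj_x = [project(X, t) for t in angles]
    proj_y = [project(Y, t) for t in angles]
    return max(abs(pearson(px, py)) for px in proj_x for py in proj_y)

# Two synthetic "modalities" sharing a latent signal z (illustrative data).
rng = random.Random(1)
z = [rng.gauss(0, 1) for _ in range(200)]
X = [(zi + rng.gauss(0, 0.1), rng.gauss(0, 1)) for zi in z]
Y = [(zi + rng.gauss(0, 0.1), rng.gauss(0, 1)) for zi in z]
r_best = best_projection_pair(X, Y)
```

    Real CCA solves this maximization in closed form via an eigenproblem, and mCCA extends it jointly across more than two modality sets; the toy search recovers the shared latent direction because both modalities carry the same signal z in their first coordinate.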

  20. Quantitative multimodality imaging in cancer research and therapy.

    Science.gov (United States)

    Yankeelov, Thomas E; Abramson, Richard G; Quarles, C Chad

    2014-11-01

    Advances in hardware and software have enabled the realization of clinically feasible, quantitative multimodality imaging of tissue pathophysiology. Earlier efforts relating to multimodality imaging of cancer have focused on the integration of anatomical and functional characteristics, such as PET-CT and single-photon emission CT (SPECT-CT), whereas more-recent advances and applications have involved the integration of multiple quantitative, functional measurements (for example, multiple PET tracers, varied MRI contrast mechanisms, and PET-MRI), thereby providing a more-comprehensive characterization of the tumour phenotype. The enormous amount of complementary quantitative data generated by such studies is beginning to offer unique insights into opportunities to optimize care for individual patients. Although important technical optimization and improved biological interpretation of multimodality imaging findings are needed, this approach can already be applied informatively in clinical trials of cancer therapeutics using existing tools. These concepts are discussed herein.

  1. Integrative Data Analysis of Multi-Platform Cancer Data with a Multimodal Deep Learning Approach.

    Science.gov (United States)

    Liang, Muxuan; Li, Zhizhong; Chen, Ting; Zeng, Jianyang

    2015-01-01

    Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for
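
    The contrastive divergence (CD) learning rule named above can be illustrated with a single tiny restricted Boltzmann machine trained by one-step CD (CD-1). This stdlib-only sketch uses made-up binary data, dimensions and hyperparameters; a real multimodal DBN stacks such layers per modality and fuses them with a joint latent layer:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_rbm_cd1(data, n_hidden=2, epochs=200, lr=0.1, seed=0):
    """Train a tiny binary RBM with one-step contrastive divergence (CD-1):
    positive phase statistics from the data, negative phase from a single
    Gibbs reconstruction step."""
    rng = random.Random(seed)
    n_visible = len(data[0])
    W = [[rng.gauss(0, 0.1) for _ in range(n_hidden)] for _ in range(n_visible)]
    b_v = [0.0] * n_visible
    b_h = [0.0] * n_hidden
    def h_probs(v):
        return [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(n_visible)))
                for j in range(n_hidden)]
    def v_probs(h):
        return [sigmoid(b_v[i] + sum(h[j] * W[i][j] for j in range(n_hidden)))
                for i in range(n_visible)]
    for _ in range(epochs):
        for v0 in data:
            ph0 = h_probs(v0)
            h0 = [1 if rng.random() < p else 0 for p in ph0]
            v1 = [1 if rng.random() < p else 0 for p in v_probs(h0)]
            ph1 = h_probs(v1)
            # CD-1 update: <v h>_data minus <v h>_reconstruction.
            for i in range(n_visible):
                for j in range(n_hidden):
                    W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
                b_v[i] += lr * (v0[i] - v1[i])
            for j in range(n_hidden):
                b_h[j] += lr * (ph0[j] - ph1[j])
    return W, h_probs, v_probs

# Two binary "expression patterns" standing in for one data modality.
data = [[1, 1, 0, 0], [0, 0, 1, 1]]
W, h_probs, v_probs = train_rbm_cd1(data)
recon = v_probs(h_probs([1, 1, 0, 0]))
```

    The hidden activations produced by each trained modality-specific stack are what the joint layer of the multimodal DBN then fuses into a unified representation.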

  2. Investigating the Strain, Temperature and Humidity Sensitivity of a Multimode Graded-Index Perfluorinated Polymer Optical Fiber with Bragg Grating.

    Science.gov (United States)

    Zheng, Yulong; Bremer, Kort; Roth, Bernhard

    2018-05-05

    In this work we investigate the strain, temperature and humidity sensitivity of a Fiber Bragg Grating (FBG) inscribed in a near-infrared low-loss multimode perfluorinated polymer optical fiber based on cyclic transparent optical polymer (CYTOP). For this purpose, FBGs were inscribed into the multimode CYTOP fiber with a core diameter of 50 µm by using a krypton fluoride (KrF) excimer laser and the phase mask method. The evolution of the reflection spectrum of the FBG detected with a multimode interrogation technique revealed a single reflection peak with a full width at half maximum (FWHM) bandwidth of about 9 nm. Furthermore, the spectral envelope of the single FBG reflection peak can be optimized depending on the KrF excimer laser irradiation time. A linear shift of the Bragg wavelength due to applied strain, temperature and humidity was measured. Furthermore, depending on the irradiation time of the KrF excimer laser, both the failure strain and the strain sensitivity of the multimode fiber with FBG can be controlled. The inherently low light attenuation in the near-infrared wavelength range (telecommunication window) of the multimode CYTOP fiber and the single FBG reflection peak obtained with the multimode interrogation set-up will allow for new applications in the area of telecommunication and optical sensing.
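
    The reflection peak that shifts linearly with strain, temperature and humidity is set by the first-order Bragg condition, lambda_B = 2 * n_eff * Lambda. The effective index and grating period below are illustrative values for a CYTOP-like fiber (n of roughly 1.34), not the paper's actual parameters:

```python
def bragg_wavelength(n_eff, period_nm):
    """First-order Bragg condition: lambda_B = 2 * n_eff * Lambda,
    where Lambda is the grating period. Inputs here are illustrative
    values for a CYTOP-like polymer fiber, not measured parameters."""
    return 2.0 * n_eff * period_nm

# A ~580 nm period at n_eff ~ 1.34 puts the peak near the 1550 nm
# telecommunication window mentioned in the abstract.
lam_nm = bragg_wavelength(1.34, 580.0)
```

    Any effect that perturbs n_eff or the period, such as thermo-optic change, strain-induced elongation, or humidity-driven swelling of the polymer, shifts lambda_B proportionally, which is why all three sensitivities appear as linear wavelength shifts.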

  3. Semiconductor laser using multimode interference principle

    Science.gov (United States)

    Gong, Zisu; Yin, Rui; Ji, Wei; Wu, Chonghao

    2018-01-01

    A multimode interference (MMI) structure is introduced into a semiconductor laser for optical communication systems to realize higher output power and better temperature tolerance. Using the beam propagation method (BPM), a multimode interference laser diode (MMI-LD) was designed and fabricated in InGaAsP/InP-based material. As a comparison, a conventional semiconductor laser using a straight single-mode waveguide was fabricated on the same wafer. With a low injection current (about 230 mA), the output power of the implemented MMI-LD is up to 2.296 mW, about four times higher than that of the conventional semiconductor laser. The implemented MMI-LD exhibits stable output at a wavelength of 1.52 μm and better temperature tolerance as the temperature varies from 283.15 K to 293.15 K.

  4. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval

    Science.gov (United States)

    Bonnici, Heidi M.; Richter, Franziska R.; Yazar, Yasemin

    2016-01-01

    G) contribute to the retrieval of episodic and semantic memories. Our multivariate pattern classifier could distinguish episodic memory representations in AnG according to whether they were multimodal (audio-visual) or unimodal (auditory or visual) in nature, whereas statistically equivalent AnG activity was observed during retrieval of unimodal and multimodal semantic memories. Classification accuracy during episodic retrieval scaled with the trial-by-trial vividness with which participants experienced their recollections. Therefore, the findings offer new insights into the integrative processes subserved by AnG and how its function may contribute to our subjective experience of remembering. PMID:27194327

  5. Multimodal Feature Integration in the Angular Gyrus during Episodic and Semantic Retrieval.

    Science.gov (United States)

    Bonnici, Heidi M; Richter, Franziska R; Yazar, Yasemin; Simons, Jon S

    2016-05-18

    episodic and semantic memories. Our multivariate pattern classifier could distinguish episodic memory representations in AnG according to whether they were multimodal (audio-visual) or unimodal (auditory or visual) in nature, whereas statistically equivalent AnG activity was observed during retrieval of unimodal and multimodal semantic memories. Classification accuracy during episodic retrieval scaled with the trial-by-trial vividness with which participants experienced their recollections. Therefore, the findings offer new insights into the integrative processes subserved by AnG and how its function may contribute to our subjective experience of remembering. Copyright © 2016 Bonnici, Richter, et al.

  6. Multimodality and Ambient Intelligence

    NARCIS (Netherlands)

    Nijholt, Antinus; Verhaegh, W.; Aarts, E.; Korst, J.

    2004-01-01

    In this chapter we discuss multimodal interface technology. We present examples of multimodal interfaces and show problems and opportunities. Fusion of modalities is discussed, and some roadmap discussions on research in multimodality are summarized. This chapter also discusses future developments.

  7. Multimodal Imaging of Human Brain Activity: Rational, Biophysical Aspects and Modes of Integration

    Science.gov (United States)

    Blinowska, Katarzyna; Müller-Putz, Gernot; Kaiser, Vera; Astolfi, Laura; Vanderperren, Katrien; Van Huffel, Sabine; Lemieux, Louis

    2009-01-01

    Until relatively recently the vast majority of imaging and electrophysiological studies of human brain activity have relied on single-modality measurements usually correlated with readily observable or experimentally modified behavioural or brain state patterns. Multi-modal imaging is the concept of bringing together observations or measurements from different instruments. We discuss the aims of multi-modal imaging and the ways in which it can be accomplished using representative applications. Given the importance of haemodynamic and electrophysiological signals in current multi-modal imaging applications, we also review some of the basic physiology relevant to understanding their relationship. PMID:19547657

  8. Visual analytics for multimodal social network analysis: a design study with social scientists.

    Science.gov (United States)

    Ghani, Sohaib; Kwon, Bum Chul; Lee, Seungyoon; Yi, Ji Soo; Elmqvist, Niklas

    2013-12-01

    Social network analysis (SNA) is becoming increasingly concerned not only with actors and their relations, but also with distinguishing between different types of such entities. For example, social scientists may want to investigate asymmetric relations in organizations with strict chains of command, or incorporate non-actors such as conferences and projects when analyzing coauthorship patterns. Multimodal social networks are those where actors and relations belong to different types, or modes, and multimodal social network analysis (mSNA) is accordingly SNA for such networks. In this paper, we present a design study that we conducted with several social scientist collaborators on how to support mSNA using visual analytics tools. Based on an open-ended, formative design process, we devised a visual representation called parallel node-link bands (PNLBs) that splits modes into separate bands and renders connections between adjacent ones, similar to the list view in Jigsaw. We then used the tool in a qualitative evaluation involving five social scientists whose feedback informed a second design phase that incorporated additional network metrics. Finally, we conducted a second qualitative evaluation with our social scientist collaborators that provided further insights on the utility of the PNLBs representation and the potential of visual analytics for mSNA.
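
    For readers unfamiliar with the term, a multimodal (here, two-mode) network simply keeps typed node sets with edges running across modes. A hypothetical sketch in plain data structures (not the PNLBs visualization itself):

```python
# Minimal two-mode network sketch: one actor mode, one non-actor mode,
# with edges only across modes. All data are hypothetical.
actors = {"alice", "bob", "carol"}
papers = {"p1", "p2"}  # a non-actor mode, e.g. coauthored papers

edges = {("alice", "p1"), ("bob", "p1"), ("alice", "p2"), ("carol", "p2")}

def degree(node):
    """Number of cross-mode links touching a node."""
    return sum(node in edge for edge in edges)

def members(paper):
    """Actors connected to a given non-actor node (e.g. coauthors)."""
    return {actor for actor, p in edges if p == paper}
```

    Analyses such as coauthorship then become queries across modes, e.g. two actors are "co-members" when `members` of some paper contains both.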

  9. Reflection effects in multimode fiber systems utilizing laser transmitters

    Science.gov (United States)

    Bates, Harry E.

    1991-11-01

    A number of optical communication lines are now in use at NASA-Kennedy for the transmission of voice, computer data, and video signals. At present, all of these channels use a single carrier wavelength centered near 1300 or 1550 nm. Engineering tests in the past have given indications of the growth of systematic and random noise in the RF spectrum of a fiber network as the number of connector pairs is increased. This noise seems to occur when a laser transmitter is used instead of an LED. It has been suggested that the noise is caused by back reflections created at connector fiber interfaces. Experiments were performed to explore the effect of reflection on the transmitting laser under conditions of reflective feedback. This effort included computer integration of some of the instrumentation in the fiber optic lab using the LabVIEW software recently acquired by the lab group. The main goal was to interface the Anritsu optical and RF spectrum analyzers to the Macintosh II computer so that laser spectra and network RF spectra could be acquired simultaneously and rapidly in a form convenient for analysis. Both single-mode and multimode fiber is installed at Kennedy. Since most of the installed fiber is multimode, this effort concentrated on multimode systems.

  10. Log-Gabor Energy Based Multimodal Medical Image Fusion in NSCT Domain

    Directory of Open Access Journals (Sweden)

    Yong Yang

    2014-01-01

    Full Text Available Multimodal medical image fusion is a powerful tool in clinical applications such as noninvasive diagnosis, image-guided radiotherapy, and treatment planning. In this paper, a novel nonsubsampled contourlet transform (NSCT) based method for multimodal medical image fusion is presented, which is approximately shift invariant and can effectively suppress pseudo-Gibbs phenomena. The source medical images are first transformed by the NSCT, and the low- and high-frequency components are then fused. Phase congruency, which provides a contrast- and brightness-invariant representation, is applied to fuse the low-frequency coefficients, whereas the Log-Gabor energy, which can efficiently identify the frequency coefficients belonging to clear and detailed parts, is employed to fuse the high-frequency coefficients. The proposed fusion method has been compared with image fusion methods based on the discrete wavelet transform (DWT), the fast discrete curvelet transform (FDCT), and the dual-tree complex wavelet transform (DTCWT), as well as with other NSCT-based methods. Visual and quantitative experimental results indicate that the proposed fusion method obtains more effective and accurate fusion results for multimodal medical images than the other algorithms. Further, the applicability of the proposed method is demonstrated by a clinical example on a woman affected with a recurrent tumor.
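
    A heavily simplified stand-in for this pipeline (a box-blur low/high split instead of NSCT, plain averaging instead of phase congruency, and a max-absolute rule instead of Log-Gabor energy) still shows the shape of such a fusion rule:

```python
import numpy as np

def box_blur(img, k=3):
    """Crude low-pass: mean over a k x k neighborhood (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(img_a, img_b):
    """Toy two-image fusion: average the low-pass parts, keep the
    stronger (max-absolute) high-pass detail at each pixel."""
    low_a, low_b = box_blur(img_a), box_blur(img_b)
    high_a, high_b = img_a - low_a, img_b - low_b
    low = 0.5 * (low_a + low_b)
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    return low + high
```

    The paper's contribution lies precisely in replacing these naive rules with phase congruency and Log-Gabor energy inside a shift-invariant NSCT decomposition.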

  11. Multimodos e múltiplas Representações, aprendizagem significativa e subjetividade: três referenciais conciliáveis da educação científica Multimodal and multiple representation, significant learning and subjectivity: three reconcilable scientific education frameworks

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Laburú

    2011-01-01

    Full Text Available A linha de pesquisa em multimodos e múltiplas representações vem atualmente sendo inspiradora de ações instrucionais na educação científica. Partindo dos fundamentos que justificam um encaminhamento didático à luz dessas referências, este trabalho procura mostrar que há compatibilidade dos seus fundamentos com a teoria da aprendizagem significativa de Ausubel e com as questões levantadas pelas pesquisas que indicam a necessidade de se considerar a subjetividade dos alunos presentes numa sala de aula. Essencialmente, procuramos argumentar que a promoção de um ensino por meio de multimodos e múltiplas representações é consistente com o ambiente plural das subjetividades existentes numa sala de aula e com uma aprendizagem significativa. The line of research on multimodal and multiple representations has recently been inspiring instructional practice in science education. Starting from the foundations that justify a didactic approach in the light of these references, this work seeks to show that those foundations are compatible with Ausubel's theory of meaningful learning and with the issues raised by research indicating the need to consider the subjective diversity of the students present in a classroom. Essentially, we argue that promoting teaching through multimodal and multiple representations is consistent with the plural atmosphere of subjectivities existing in a classroom and with meaningful learning.

  12. Spectral decomposition of single-tone-driven quantum phase modulation

    International Nuclear Information System (INIS)

    Capmany, Jose; Fernandez-Pousa, Carlos R

    2011-01-01

    Electro-optic phase modulators driven by a single radio-frequency tone Ω can be described at the quantum level as scattering devices where input single-mode radiation undergoes energy changes in multiples of ℏΩ. In this paper, we study the spectral representation of the unitary, multimode scattering operator describing these devices. The eigenvalue equation, phase modulation being a process preserving the photon number, is solved at each subspace with definite number of photons. In the one-photon subspace F₁, the problem is equivalent to the computation of the continuous spectrum of the Susskind-Glogower cosine operator of the harmonic oscillator. Using this analogy, the spectral decomposition in F₁ is constructed and shown to be equivalent to the usual Fock-space representation. The result is then generalized to arbitrary N-photon subspaces, where eigenvectors are symmetrized combinations of N one-photon eigenvectors and the continuous spectrum spans the entire unit circle. Approximate normalizable one-photon eigenstates are constructed in terms of London phase states truncated to optical bands. Finally, we show that synchronous ultrashort pulse trains represent classical field configurations with the same structure as these approximate eigenstates, and that they can be considered as approximate eigenvectors of the classical formulation of phase modulation.

  13. Spectral decomposition of single-tone-driven quantum phase modulation

    Energy Technology Data Exchange (ETDEWEB)

    Capmany, Jose [ITEAM Research Institute, Univ. Politecnica de Valencia, 46022 Valencia (Spain); Fernandez-Pousa, Carlos R, E-mail: c.pousa@umh.es [Signal Theory and Communications, Department of Physics and Computer Science, Univ. Miguel Hernandez, 03202 Elche (Spain)

    2011-02-14

    Electro-optic phase modulators driven by a single radio-frequency tone Ω can be described at the quantum level as scattering devices where input single-mode radiation undergoes energy changes in multiples of ℏΩ. In this paper, we study the spectral representation of the unitary, multimode scattering operator describing these devices. The eigenvalue equation, phase modulation being a process preserving the photon number, is solved at each subspace with definite number of photons. In the one-photon subspace F₁, the problem is equivalent to the computation of the continuous spectrum of the Susskind-Glogower cosine operator of the harmonic oscillator. Using this analogy, the spectral decomposition in F₁ is constructed and shown to be equivalent to the usual Fock-space representation. The result is then generalized to arbitrary N-photon subspaces, where eigenvectors are symmetrized combinations of N one-photon eigenvectors and the continuous spectrum spans the entire unit circle. Approximate normalizable one-photon eigenstates are constructed in terms of London phase states truncated to optical bands. Finally, we show that synchronous ultrashort pulse trains represent classical field configurations with the same structure as these approximate eigenstates, and that they can be considered as approximate eigenvectors of the classical formulation of phase modulation.
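
    The energy changes in multiples of ℏΩ mirror the classical sideband picture: a single-tone phase-modulated field decomposes via the Jacobi-Anger expansion,

```latex
e^{\,i m \sin(\Omega t)} \;=\; \sum_{n=-\infty}^{\infty} J_n(m)\, e^{\,i n \Omega t},
```

    so, in the classical description, a carrier at ω acquires sidebands at offsets nΩ weighted by the Bessel functions J_n of the modulation index m.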

  14. FDTD simulation of microwave sintering of ceramics in multimode cavities

    Energy Technology Data Exchange (ETDEWEB)

    Iskander, M.F.; Smith, R.L.; Andrade, A.O.M.; Walsh, L.M. (Univ. of Utah, Salt Lake City, UT (United States). Dept. of Electrical Engineering); Kimrey, H. Jr. (Oak Ridge National Lab., TN (United States))

    1994-05-01

    At present, various aspects of the sintering process such as preparation of sample sizes and shapes, types of insulations, and the desirability of including a process stimulus such as SiC rods are considered forms of art and highly dependent on human expertise. The simulation of realistic sintering experiments in a multimode cavity may provide an improved understanding of critical parameters involved and allow for the development of guidelines towards the optimization of the sintering process. In this paper, the authors utilize the FDTD technique to model various geometrical arrangements and material compatibility aspects in multimode microwave cavities and to simulate realistic sintering experiments. The FDTD procedure starts with the simulation of a field distribution in multimode microwave cavities that resembles a set of measured data using liquid crystal sheets. Also included in the simulation is the waveguide feed as well as a ceramic loading plate placed at the base of the cavity. The FDTD simulation thus provides realistic representation of a typical sintering experiment. Aspects that have been successfully simulated include the effects of various types of insulation, the role of SiC rods on the uniformity of the resulting microwave fields, and the possible shielding effects that may result from excessive use of SiC. These results as well as others showing the electromagnetic fields and power-deposition patterns in multiple ceramic samples are presented.
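
    The cavity models above are three-dimensional, but the core FDTD scheme (leapfrog updates of interleaved E and H fields, with the relative permittivity raised inside the ceramic load) can be sketched in one dimension; all numbers below are hypothetical:

```python
import numpy as np

# 1-D FDTD sketch: leapfrog updates of interleaved E and H fields.
# The eps_r slab stands in for a ceramic sample; values are hypothetical.
nz, nt = 200, 300
ez = np.zeros(nz)            # electric field on integer grid points
hy = np.zeros(nz - 1)        # magnetic field on half-integer points
eps_r = np.ones(nz)
eps_r[120:140] = 8.0         # "ceramic" slab with higher permittivity
S = 0.5                      # Courant number (stable for S <= 1 in 1-D)

for t in range(nt):
    hy += S * np.diff(ez)                        # update H from curl of E
    ez[1:-1] += (S / eps_r[1:-1]) * np.diff(hy)  # update E from curl of H
    ez[40] += np.exp(-((t - 30.0) / 8.0) ** 2)   # soft Gaussian source
```

    The endpoints of `ez` are held at zero, which corresponds to perfectly conducting cavity walls; the realistic simulations add the waveguide feed, the loading plate, and lossy materials on top of this basic update loop.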

  15. Multimodal profusion in the literacies of the Massive Open Online Course

    Directory of Open Access Journals (Sweden)

    Jeremy Knox

    2014-01-01

    Full Text Available This paper takes a view of digital literacy, which moves beyond a focus on technical methods and skills in an attempt to maintain a broader approach that encompasses a critical view of the learning subject. In doing this, we consider socio-materialism and its relation to aspects of literacy theory. We anchor the discussion in a consideration of the ‘E-learning and Digital Cultures’ Coursera MOOC, which provided a tangible setting for theorising some of the practices of digital literacy differently. The profusion of multimodal artefacts produced in response to this course constituted a complex series of socio-material entanglements, in which human beings and technologies each played a constituent part. Two specific digital artefacts are analysed according to these terms. We conclude that socio-material multimodality constitutes a different way of thinking about digital literacy: not as representational practices, but rather as multifaceted and relational enactments of knowledge, specific to particular contexts and moments.

  16. Analysis and synthesis of multi-qubit, multi-mode quantum devices

    Energy Technology Data Exchange (ETDEWEB)

    Solgun, Firat

    2015-03-27

    In this thesis we propose new methods in multi-qubit multi-mode circuit quantum electrodynamics (circuit-QED) architectures. First we describe a direct parity measurement method for three qubits, which can be realized in 2D circuit-QED with a possible extension to four qubits in a 3D circuit-QED setup for the implementation of the surface code. In Chapter 3 we show how to derive Hamiltonians and compute relaxation rates of the multi-mode superconducting microwave circuits consisting of single Josephson junctions using an exact impedance synthesis technique (the Brune synthesis) and applying previous formalisms for lumped element circuit quantization. In the rest of the thesis we extend our method to multi-junction (multi-qubit) multi-mode circuits through the use of state-space descriptions which allows us to quantize any multiport microwave superconducting circuit with a reciprocal lossy impedance response.

  17. A data fusion environment for multimodal and multi-informational neuronavigation.

    Science.gov (United States)

    Jannin, P; Fleig, O J; Seigneuret, E; Grova, C; Morandi, X; Scarabin, J M

    2000-01-01

    Part of the planning and performance of neurosurgery consists of determining target areas, areas to be avoided, landmark areas, and trajectories, all of which are components of the surgical script. Nowadays, neurosurgeons have access to multimodal medical imaging to support the definition of the surgical script. The purpose of this paper is to present a software environment developed by the authors that allows full multimodal and multi-informational planning as well as neuronavigation for epilepsy and tumor surgery. We have developed a data fusion environment dedicated to neuronavigation around the Surgical Microscope Neuronavigator system (Carl Zeiss, Oberkochen, Germany). This environment includes registration, segmentation, 3D visualization, and interaction-applied tools. It provides the neuronavigation system with the multimodal information involved in the definition of the surgical script: lesional areas, sulci, ventricles segmented from magnetic resonance imaging (MRI), vessels segmented from magnetic resonance angiography (MRA), functional areas from magneto-encephalography (MEG), and functional magnetic resonance imaging (fMRI) for somatosensory, motor, or language activation. These data are considered to be relevant for the performance of the surgical procedure. The definition of each entity results from the same procedure: registration to the anatomical MRI data set (defined as the reference data set), segmentation, fused 3D display, selection of the relevant entities for the surgical step, encoding in 3D surface-based representation, and storage of the 3D surfaces in a file recognized by the neuronavigation software (STP 3.4, Leibinger; Freiburg, Germany). Multimodal neuronavigation is illustrated with two clinical cases for which multimodal information was introduced into the neuronavigation system. Lesional areas were used to define and follow the surgical path, sulci and vessels helped identify the anatomical environment of the surgical field, and

  18. Representaciones sociales a futuro en la publicidad / Future social representations in advertising

    Directory of Open Access Journals (Sweden)

    Lucía Hellín

    2010-08-01

    Full Text Available RESUMEN: En este artículo analizamos las representaciones sociales a futuro construidas en la publicidad. Más específicamente, cómo el modo de presentar u omitir a los participantes y contextos produce el borramiento en el plano simbólico de las relaciones sociales que sustentan el modo de vida implicado en los mensajes publicitarios. Para el análisis tomaremos la noción de análisis multimodal propuesta por Kress y Van Leeuwen (2001). Las representaciones resultantes, para persuadirnos, acentúan con fuerza los aspectos individuales borrando la inscripción social de toda acción individual. ABSTRACT: This article's purpose is the study of future social representations built in graphic advertising; more specifically, how the way participants and contexts are shown, or omitted, produces the erasure in the symbolic plane of the social relations that ground the lifestyle implied by those messages. We will use the multimodal analysis framework developed by Kress & Van Leeuwen (2001). In order to persuade us, the resulting representations heavily stress individual aspects, erasing the social inscription of any individual action.

  19. Multimodality therapy of local regional esophageal cancer.

    Science.gov (United States)

    Kelsen, David P

    2005-12-01

    Recent trials regarding the use of multimodality therapy for patients with cancers of the esophagus and gastroesophageal junction have not conclusively shown benefit. Regimens containing cisplatin and fluorouracil administered preoperatively appear to be tolerable and do not increase operative morbidity or mortality when compared with surgery alone. Yet clinical trials have not clearly shown that such regimens improve outcome as measured by survival. Likewise, trials of postoperative chemoradiation have not reported a significant improvement in median or overall survival. The reasons for the lack of clinical benefit from multimodality therapy are not completely understood, but improvements in systemic therapy will probably be necessary before disease-free or overall survival improves substantially. Some new single agents such as the taxanes (docetaxel or paclitaxel) and the camptothecin analog irinotecan have shown modest activity for palliative therapy.

  20. Facilitating Multiple Intelligences Through Multimodal Learning Analytics

    Directory of Open Access Journals (Sweden)

    Ayesha PERVEEN

    2018-01-01

    Full Text Available This paper develops a theoretical framework for employing learning analytics in online education to trace the learning variations of online students, treating them as possessing multiple intelligences in the sense of Howard Gardner's 1983 theory. The study first emphasizes the need for online education systems to accommodate students' multiple intelligences, and then suggests a framework in which an advanced form of learning analytics, multimodal learning analytics, traces and facilitates multiple intelligences while students are engaged in online ubiquitous learning. As multimodal learning analytics is still an evolving area, it poses many challenges for technologists, educationists, and organizational managers. Learning analytics makes machines meet humans; educationists with expertise in learning theories can therefore help technologists devise the latest technological methods for multimodal learning analytics, and organizational managers can implement them to improve online education. A careful instructional design, based on a deep understanding of students' learning abilities, is thus required to develop teaching plans and technological means for monitoring students' learning paths. In this way, learning analytics can support an adaptive instructional design based on quick analysis of the data gathered. Based on that analysis, academicians can critically reflect on the quick or delayed implementation of the existing instructional design in view of students' cognitive abilities, or on single- versus double-loop learning design. The researcher concludes that online education is multimodal in nature, has the capacity to endorse multiliteracies, and that multiple intelligences can therefore be tracked and facilitated through multimodal learning analytics in an online mode. However, online teachers' training both in technological implementations and

  1. Entanglement purification of multi-mode quantum states

    International Nuclear Information System (INIS)

    Clausen, J; Knoell, L; Welsch, D-G

    2003-01-01

    An iterative random procedure is considered allowing entanglement purification of a class of multi-mode quantum states. In certain cases, complete purification may be achieved using only a single signal state preparation. A physical implementation based on beam splitter arrays and non-linear elements is suggested. The influence of loss is analysed in the example of purification of entangled N-mode coherent states.

  2. A single-sided homogeneous Green's function representation for holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval

    Science.gov (United States)

    Wapenaar, Kees; Thorbecke, Jan; van der Neut, Joost

    2016-04-01

    Green's theorem plays a fundamental role in a diverse range of wavefield imaging applications, such as holographic imaging, inverse scattering, time-reversal acoustics and interferometric Green's function retrieval. In many of those applications, the homogeneous Green's function (i.e. the Green's function of the wave equation without a singularity on the right-hand side) is represented by a closed boundary integral. In practical applications, sources and/or receivers are usually present only on an open surface, which implies that a significant part of the closed boundary integral is by necessity ignored. Here we derive a homogeneous Green's function representation for the common situation that sources and/or receivers are present on an open surface only. We modify the integrand in such a way that it vanishes on the part of the boundary where no sources and receivers are present. As a consequence, the remaining integral along the open surface is an accurate single-sided representation of the homogeneous Green's function. This single-sided representation accounts for all orders of multiple scattering. The new representation significantly improves the aforementioned wavefield imaging applications, particularly in situations where the first-order scattering approximation breaks down.
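
    Schematically (standard notation, not the paper's modified single-sided integrand), the homogeneous Green's function is

```latex
G_h(\mathbf{x}_A,\mathbf{x}_B,\omega)
  \;=\; G(\mathbf{x}_A,\mathbf{x}_B,\omega) + G^{*}(\mathbf{x}_A,\mathbf{x}_B,\omega)
  \;=\; 2\,\mathrm{Re}\,G(\mathbf{x}_A,\mathbf{x}_B,\omega),
```

    which satisfies the wave equation without the singular source term. Classically, G_h is represented by a closed boundary integral over ∂D of products of G and G*; it is this integrand that the authors modify so that the integral over only the open, accessible part of ∂D remains accurate.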

  3. Multimodal correlation and intraoperative matching of virtual models in neurosurgery

    Science.gov (United States)

    Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo

    1994-01-01

    The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools, and the correlation of the patient's virtual models with the patient himself are examples, taken from the biomedical field, of a single underlying problem: determining the relationship that links representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, the surface matching method causes the patient minimum discomfort, and the errors it introduces are compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been measured, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.
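
    Surface matching ultimately estimates the rigid transform relating two representations of the same object. When point correspondences are known, the closed-form least-squares solution (the Kabsch/SVD method, shown here as a simplified stand-in for the iterative surface matching actually used) is:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform: find R, t with R @ P + t ~= Q.
    P, Q are 3 x N arrays of corresponding points (Kabsch/SVD method)."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

    In practice the correspondences between, say, a segmented MRI skull surface and laser-scanned points are unknown, so methods of this kind are wrapped in an iterative closest-point loop.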

  4. Learning multimodal dictionaries.

    Science.gov (United States)

    Monaci, Gianluca; Jost, Philippe; Vandergheynst, Pierre; Mailhé, Boris; Lesage, Sylvain; Gribonval, Rémi

    2007-09-01

    Real-world phenomena involve complex interactions between multiple signal modalities. As a consequence, humans are used to integrating, at each instant, perceptions from all their senses in order to enrich their understanding of the surrounding world. This paradigm can also be extremely useful in many signal processing and computer vision problems involving mutually related signals. The simultaneous processing of multimodal data can, in fact, reveal information that is otherwise hidden when considering the signals independently. However, in natural multimodal signals, the statistical dependencies between modalities are in general not obvious. Learning fundamental multimodal patterns could offer deep insight into the structure of such signals. In this paper, we present a novel model of multimodal signals based on their sparse decomposition over a dictionary of multimodal structures. An algorithm for iteratively learning multimodal generating functions that can be shifted to all positions in the signal is proposed as well. The learning is defined in such a way that it can be accomplished by iteratively solving a generalized eigenvector problem, which makes the algorithm fast, flexible, and free of user-defined parameters. The proposed algorithm is applied to audiovisual sequences and is able to discover underlying structures in the data. The detection of such audio-video patterns in audiovisual clips makes it possible to localize the sound source on the video effectively in the presence of substantial acoustic and visual distractors, outperforming state-of-the-art audiovisual localization algorithms.
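
    Leaving the multimodal dictionary learning step itself aside, the sparse decomposition over a fixed unit-norm dictionary can be illustrated with plain matching pursuit:

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy sparse decomposition of x over the unit-norm columns of D.
    Returns the coefficient vector and the final residual."""
    residual = x.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        j = np.argmax(np.abs(D.T @ residual))   # best-correlated atom
        a = D[:, j] @ residual                  # projection coefficient
        coef[j] += a
        residual -= a * D[:, j]
    return coef, residual
```

    The paper extends this picture by making each atom a joint audio-video generating function that can be shifted across the signal.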

  5. Coherent multimoded dielectric wakefield accelerators

    International Nuclear Information System (INIS)

    Power, J.

    1998-01-01

    There has recently been a study of the potential uses of multimode dielectric structures for wakefield acceleration [1]. This technique is based on adjusting the wakefield modes of the structure to constructively interfere at certain delays with respect to the drive bunch, thus providing an accelerating gradient enhancement over single mode devices. In this report we examine and attempt to clarify the issues raised by this work in the light of the present state of the art in wakefield acceleration

  6. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    2010-01-01

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal' communication, which should not be confused with the term ‘multimedia'. While multimedia...... on their teaching and learning situations. The choices they make involve e-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very...

  7. Quantitative multi-modal NDT data analysis

    International Nuclear Information System (INIS)

    Heideklang, René; Shokouhi, Parisa

    2014-01-01

    A single NDT technique is often not adequate to provide assessments of the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task that involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of eddy current, GMR and thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor in detection specificity, while retaining the same level of sensitivity.
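
    As a toy illustration of high-level (decision-level) fusion, and not the study's actual scheme: per-sensor binary detection maps can be combined by majority vote, which suppresses isolated single-sensor false calls while keeping defects seen by most modalities:

```python
import numpy as np

def majority_fuse(*detection_maps):
    """Decision-level fusion: flag a location defective only when more
    than half of the sensors flag it. Inputs are equal-shape boolean arrays."""
    stack = np.stack([np.asarray(d, bool) for d in detection_maps])
    return stack.sum(axis=0) > stack.shape[0] / 2
```

    This is why redundancy across modalities raises specificity: a false call by one sensor alone never survives the vote.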

  8. The Boy Factor: Can Single-Gender Classes Reduce the Over-Representation of Boys in Special Education?

    Science.gov (United States)

    Piechura-Couture, Kathy; Heins, Elizabeth; Tichenor, Mercedes

    2013-01-01

    Since the early 1990s, numerous studies have concluded that there is an over-representation of males and minorities in special education. This paper examines whether a different educational format, such as single-gender education, can improve boys' behavior and thus reduce the number of special education referrals. The rationale for…

  9. A Robust Multimodal Biometric Authentication Scheme with Voice and Face Recognition

    International Nuclear Information System (INIS)

    Kasban, H.

    2017-01-01

    This paper proposes a multimodal biometric scheme for human authentication based on fusion of voice and face recognition. For voice recognition, three categories of features (statistical coefficients, cepstral coefficients and voice timbre) are used and compared. The voice identification modality is carried out using a Gaussian Mixture Model (GMM). For face recognition, three recognition methods (Eigenface, Linear Discriminant Analysis (LDA), and Gabor filter) are used and compared. The combination of the voice and face biometric systems into a single multimodal biometric system is performed using feature fusion and score fusion. This study shows that the best results are obtained using all the features (cepstral coefficients, statistical coefficients and voice timbre features) for voice recognition, the LDA face recognition method, and score fusion for the multimodal biometric system.
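    Score fusion, as mentioned in the record, typically normalizes the per-modality matcher scores to a common range before combining them with a weighted sum. The following minimal sketch is illustrative only; the scores, score scales and weight are invented and not taken from the paper.

```python
def minmax_normalize(scores):
    """Map raw matcher scores to [0, 1] so that voice and face scores
    become comparable before fusion."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(voice_scores, face_scores, w_voice=0.5):
    """Weighted-sum score fusion: one combined score per enrolled subject."""
    v = minmax_normalize(voice_scores)
    f = minmax_normalize(face_scores)
    return [w_voice * a + (1 - w_voice) * b for a, b in zip(v, f)]

# Hypothetical match scores for three enrolled subjects
voice = [12.0, 45.0, 30.0]   # e.g. GMM log-likelihoods (illustrative scale)
face  = [0.2, 0.9, 0.4]      # e.g. face-matcher similarities (illustrative)

fused = fuse_scores(voice, face)
best = fused.index(max(fused))
print(best)  # subject 1 has the highest combined score
```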

  10. Discrimination of skin diseases using the multimodal imaging approach

    Science.gov (United States)

    Vogler, N.; Heuke, S.; Akimov, D.; Latka, I.; Kluschke, F.; Röwert-Huber, H.-J.; Lademann, J.; Dietzek, B.; Popp, J.

    2012-06-01

    Optical microspectroscopic tools reveal great potential for dermatologic diagnostics in the day-to-day clinical routine. To enhance the diagnostic value of individual nonlinear optical imaging modalities such as coherent anti-Stokes Raman scattering (CARS), second harmonic generation (SHG) or two-photon excited fluorescence (TPF), the approach of multimodal imaging has recently been developed. Here, we present an application of nonlinear optical multimodal imaging with Raman-scattering microscopy to study sizable human-tissue cross-sections. The samples investigated contain both healthy tissue and various skin tumors. This contribution details the rich information content which can be obtained from the multimodal approach: CARS microscopy, which - in contrast to spontaneous Raman-scattering microscopy - is not hampered by single-photon excited fluorescence, is used to monitor the lipid and protein distribution in the samples, while SHG imaging selectively highlights the distribution of collagen structures within the tissue, because SHG is only generated in structures that lack inversion symmetry. Finally, TPF reveals the distribution of autofluorophores in the tissue. The combination of these techniques, i.e. multimodal imaging, allows for recording chemical images of large-area samples and is - as this contribution will highlight - of high clinical diagnostic value.

  11. A neural network model of semantic memory linking feature-based object representation and words.

    Science.gov (United States)

    Cuppini, C; Magosso, E; Ursino, M

    2009-06-01

    Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and among the lexical area and the feature areas are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words and segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).
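    The time-dependent Hebbian training of word-feature synapses that the model describes can be caricatured in a few lines: co-activation strengthens synapses, and a partial feature input can afterwards still retrieve the associated word. This toy sketch is not the authors' network (it has no oscillators or gamma-band segmentation); it only illustrates the Hebbian association idea, with invented sizes and learning rate.

```python
import numpy as np

n_words, n_features = 2, 4
W = np.zeros((n_words, n_features))   # word <- feature synapses

def hebbian_step(W, features, word, lr=0.5):
    """Simple Hebbian rule: strengthen synapses between co-active
    feature units and the active word unit."""
    W[word] += lr * features
    return W

# "Training": object A (features 0,1) with word 0; object B (features 2,3) with word 1
for _ in range(3):
    W = hebbian_step(W, np.array([1., 1., 0., 0.]), word=0)
    W = hebbian_step(W, np.array([0., 0., 1., 1.]), word=1)

# "Retrieval" from incomplete information: only feature 0 is active
partial = np.array([1., 0., 0., 0.])
activation = W @ partial
print(int(activation.argmax()))  # word 0 is recovered from partial features
```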

  12. Parametric Representation of the Speaker's Lips for Multimodal Sign Language and Speech Recognition

    Science.gov (United States)

    Ryumin, D.; Karpov, A. A.

    2017-05-01

    In this article, we propose a new method for parametric representation of the human lips region. The functional diagram of the method is described, and implementation details with an explanation of its key stages and features are given. The results of automatic detection of the regions of interest are illustrated. The processing speed of the method on several computers with different performance levels is reported. This universal method allows applying a parametric representation of the speaker's lips to the tasks of biometrics, computer vision, machine learning, and automatic recognition of faces, elements of sign languages, and audio-visual speech, including lip-reading.

  13. The multimodal treatment of eating disorders

    OpenAIRE

    HALMI, KATHERINE A.

    2005-01-01

    The treatment of eating disorders is based on a multimodal model, recognizing that these disorders do not have a single cause or a predictable course. The treatment strategy is determined by the severity of illness and the specific eating disorder diagnosis. For the treatment of anorexia nervosa, the key elements are medical management, behavioral therapy, cognitive therapy and family therapy, while pharmacotherapy is at best an adjunct to other therapies. In bulimia nervosa...

  14. Dual CARS and SHG image acquisition scheme that combines single central fiber and multimode fiber bundle to collect and differentiate backward and forward generated photons

    Science.gov (United States)

    Weng, Sheng; Chen, Xu; Xu, Xiaoyun; Wong, Kelvin K.; Wong, Stephen T. C.

    2016-01-01

    In coherent anti-Stokes Raman scattering (CARS) and second harmonic generation (SHG) imaging, backward and forward generated photons exhibit different image patterns and thus capture salient intrinsic information of tissues from different perspectives. However, they are often mixed in collection using traditional image acquisition methods and thus are hard to interpret. We developed a multimodal scheme using a single central fiber and multimode fiber bundle to simultaneously collect and differentiate images formed by these two types of photons and evaluated the scheme in an endomicroscopy prototype. The ratio of these photons collected was calculated for the characterization of tissue regions with strong or weak epi-photon generation while different image patterns of these photons at different tissue depths were revealed. This scheme provides a new approach to extract and integrate information captured by backward and forward generated photons in dual CARS/SHG imaging synergistically for biomedical applications. PMID:27375938

  15. A novel multimodal chromatography based single step purification process for efficient manufacturing of an E. coli based biotherapeutic protein product.

    Science.gov (United States)

    Bhambure, Rahul; Gupta, Darpan; Rathore, Anurag S

    2013-11-01

    Methionine-oxidized, reduced and fMet forms of a native recombinant protein product are often the critical product variants associated with proteins expressed as bacterial inclusion bodies in E. coli. Such product variants differ from the native protein in their structural and functional aspects, and may lead to loss of biological activity and immunogenic responses in patients. This investigation focuses on the evaluation of multimodal chromatography for selective removal of these product variants, using recombinant human granulocyte colony stimulating factor (GCSF) as the model protein. Unique selectivity in the separation of closely related product variants was obtained using combined pH- and salt-based elution gradients in hydrophobic charge induction chromatography. Simultaneous removal of process-related impurities was also achieved in the flow-through, leading to a single-step purification process for GCSF. Results indicate that product recovery of up to 90.0% can be obtained with purity levels greater than 99.0%. Binding the target protein at pH …, selective separation of the product variants using the combined pH- and salt-based elution gradient, and removal of the host cell impurities in the flow-through are the key novel features of the developed multimodal chromatographic purification step. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Critical Analysis of Multimodal Discourse

    DEFF Research Database (Denmark)

    van Leeuwen, Theo

    2013-01-01

    This is an encyclopaedia article which defines the fields of critical discourse analysis and multimodality studies, argues that within critical discourse analysis more attention should be paid to multimodality, and within multimodality studies to critical analysis, and ends by reviewing a few examples of re...

  17. Multimodal mechanisms of food creaminess sensation.

    Science.gov (United States)

    Chen, Jianshe; Eaton, Louise

    2012-12-01

    In this work, the sensory creaminess of a set of four viscosity-matched fluid foods (single cream, evaporated milk, corn starch solution, and corn starch solution containing long chain free fatty acids) was tested by a panel of 16 assessors via controlled sensation mechanisms of smell only, taste only, taste and tactile, and integrated multimodal. It was found that all sensation channels were able to discriminate between creamy and non-creamy foods, but only the multimodal method gave creaminess ratings in agreement with the samples' fat content. Results from this study show that the presence of long chain free fatty acids has no influence on creaminess perception. It is certain that food creaminess is not a primary sensory property but an integrated sensory perception (or sensory experience) derived from combined sensations of visual, olfactory, gustatory, and tactile cues. Creamy colour, milky flavour, and smooth texture are probably the most important sensory features of food creaminess.

  18. Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals.

    Science.gov (United States)

    Verma, Gyanendra K; Tiwary, Uma Shanker

    2014-11-15

    The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions and (ii) to recognize and predict emotion from the measured physiological signals using a multiresolution approach. The multimodal physiological signals are: Electroencephalogram (EEG) (32 channels) and peripheral (8 channels: Galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We have discussed the theories of emotion modeling based on (i) basic emotions, (ii) cognitive appraisal and physiological response approach and (iii) the dimensional approach, and proposed a three-continuous-dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for SVM, MLP, KNN and MMC classifiers respectively. The best accuracy is for 'Depressing' with 85.46% using SVM. The 32 EEG channels are considered as independent modes, and features from each channel are considered with equal importance. Some of the channel data may be correlated, but they may also contain supplementary information. In comparison with the results given by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach. Copyright © 2013 Elsevier Inc. All rights reserved.
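    The multiresolution analysis the authors apply via the Discrete Wavelet Transform can be illustrated with the simplest wavelet, the Haar transform: each level splits a channel into approximation (low-frequency) and detail (high-frequency) coefficients, whose energies can serve as per-channel features. This is an illustrative sketch only, not the paper's feature pipeline; the wavelet choice, number of levels and feature set are invented.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: split a signal
    into approximation (low-frequency) and detail (high-frequency)
    coefficients."""
    x = np.asarray(signal, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)
    detail = (even - odd) / np.sqrt(2)
    return approx, detail

def channel_features(signal, levels=2):
    """Per-channel feature vector: energy of the detail coefficients at
    each level plus energy of the final approximation."""
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(float(np.sum(d ** 2)))
    feats.append(float(np.sum(a ** 2)))
    return feats

x = [1.0, 1.0, 2.0, 2.0, 4.0, 4.0, 8.0, 8.0]
print(channel_features(x, levels=2))
```

    Because the Haar transform is orthonormal, the feature energies sum to the total signal energy, which is a handy sanity check.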

  19. Quantum measure of nonclassical light

    International Nuclear Information System (INIS)

    Kim, Ki Sik

    2003-01-01

    Nonclassical light and its properties are reviewed in the phase space representation. A quantitative measure of nonclassicality for the single-mode case is introduced, and its physical significance is discussed in terms of environmental effects on nonclassicality. The quantitative measure is defined and used to classify the different nonclassical properties. The nonclassical measure is also extended to the multi-mode case. One of the distinctive features of multi-mode nonclassical light is entanglement, which is not possessed by single-mode light, and the multi-mode nonclassical measure may reflect the content of entanglement. The multi-mode nonclassical measure is calculated for superposition through a beam splitter and compared with the single-mode nonclassical measure.

  20. Hearing and Seeing Tone through Color: An Efficacy Study of Web-Based, Multimodal Chinese Tone Perception Training

    Science.gov (United States)

    Godfroid, Aline; Lin, Chin-Hsi; Ryu, Catherine

    2017-01-01

    Multimodal approaches have been shown to be effective for many learning tasks. In this study, we compared the effectiveness of five multimodal methods for second language (L2) Mandarin tone perception training: three single-cue methods (number, pitch contour, color) and two dual-cue methods (color and number, color and pitch contour). A total of…

  1. Amputation and prosthesis implantation shape body and peripersonal space representations

    OpenAIRE

    Canzoneri, Elisa; Marzolla, Marilena; Amoresano, Amedeo; Verni, Gennaro; Serino, Andrea

    2013-01-01

    Little is known about whether and how multimodal representations of the body (BRs) and of the space around the body (Peripersonal Space, PPS) adapt to amputation and prosthesis implantation. In order to investigate this issue, we tested BR in a group of upper limb amputees by means of a tactile distance perception task and PPS by means of an audio-tactile interaction task. Subjects performed the tasks with stimulation either on the healthy limb or the stump of the amputated limb, while wearin...

  2. The Multimodal Possibilities of Online Instructions

    DEFF Research Database (Denmark)

    Kampf, Constance

    2006-01-01

    The WWW simplifies the process of delivering online instructions through multimodal channels because of the ease of use for voice, video, pictures, and text modes of communication built into it. Given that instructions are being produced in multimodal format for the WWW, how do multi-modal analy...

  3. Toward multimodal signal detection of adverse drug reactions.

    Science.gov (United States)

    Harpaz, Rave; DuMouchel, William; Schuemie, Martijn; Bodenreider, Olivier; Friedman, Carol; Horvitz, Eric; Ripple, Anna; Sorbello, Alfred; White, Ryen W; Winnenburg, Rainer; Shah, Nigam H

    2017-12-01

    Improving mechanisms to detect adverse drug reactions (ADRs) is key to strengthening post-marketing drug safety surveillance. Signal detection is presently unimodal, relying on a single information source. Multimodal signal detection is based on jointly analyzing multiple information sources. Building on and expanding the work done in prior studies, the aim of this article is to advance research on multimodal signal detection, explore its potential benefits, and propose methods for its construction and evaluation. Four data sources are investigated: FDA's adverse event reporting system, insurance claims, the MEDLINE citation database, and the logs of major Web search engines. Published methods are used to generate and combine signals from each data source. Two distinct reference benchmarks, corresponding to well-established and recently labeled ADRs respectively, are used to evaluate the performance of multimodal signal detection in terms of area under the ROC curve (AUC) and lead time to detection, the latter relative to labeling revision dates. Limited to our reference benchmarks, multimodal signal detection provides AUC improvements ranging from 0.04 to 0.09 based on a widely used evaluation benchmark, and a comparative added lead time of 7-22 months relative to labeling revision dates from a time-indexed benchmark. The results support the notion that utilizing and jointly analyzing multiple data sources may lead to improved signal detection. Given certain data and benchmark limitations, the early stage of development, and the complexity of ADRs, it is currently not possible to make definitive statements about the ultimate utility of the concept. Continued development of multimodal signal detection requires a deeper understanding of the data sources used, additional benchmarks, and further research on methods to generate and synthesize signals. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Multimodal treatment in children and adolescents with attention-deficit/hyperactivity disorder: a 6-month follow-up.

    Science.gov (United States)

    Duric, Nezla S; Assmus, Jørg; Gundersen, Doris; Duric Golos, Alisa; Elgen, Irene B

    2017-07-01

    Different treatment approaches aimed at reducing attention-deficit/hyperactivity disorder (ADHD) core symptoms are available. However, factors such as intolerance, side-effects, lack of efficacy, the high cost of new technologies, and placebo effects have spurred increasing interest in alternative or complementary treatment. The aim of this study is to explore the efficacy of multimodal treatment consisting of standard stimulant medication (methylphenidate) and neurofeedback (NF) in combination, and to compare it with each single treatment at 6-month follow-up in children and adolescents with ADHD. This randomized controlled trial with 6-month follow-up comprised three treatment arms: multimodal treatment (NF + MED), MED alone, and NF alone. A total of 130 children/adolescents with ADHD participated, and 62% completed the study. ADHD core symptoms were recorded pre-/post-treatment using parents' and teachers' forms taken from Barkley's Defiant Children: A Clinician's Manual for Assessment and Parent Training, and a self-report questionnaire. Significant improvements in ADHD core symptoms were reported 6 months after treatment completion by parents, teachers, and participants in all three groups, with marked improvement in inattention in all groups. However, no significant improvements in hyperactivity or academic performance were reported by teachers or self-reported by children/adolescents, respectively, in the three groups. Changes obtained with multimodal treatment at 6-month follow-up were comparable to those with single medication treatment, as reported by all participants. Multimodal treatment using combined stimulant medication and NF showed 6-month efficacy in ADHD treatment. More research is needed to explore whether multimodal treatment is suitable for children and adolescents with ADHD who show a poor response to single medication treatment, and for those who want to reduce the use of stimulant medication.

  5. Single-Mode VCSELs

    Science.gov (United States)

    Larsson, Anders; Gustavsson, Johan S.

    The only active transverse mode in a truly single-mode VCSEL is the fundamental mode with a near Gaussian field distribution. A single-mode VCSEL produces a light beam of higher spectral purity, higher degree of coherence and lower divergence than a multimode VCSEL and the beam can be more precisely shaped and focused to a smaller spot. Such beam properties are required in many applications. In this chapter, after discussing applications of single-mode VCSELs, we introduce the basics of fields and modes in VCSELs and review designs implemented for single-mode emission from VCSELs in different materials and at different wavelengths. This includes VCSELs that are inherently single-mode as well as inherently multimode VCSELs where higher-order modes are suppressed by mode selective gain or loss. In each case we present the current state-of-the-art and discuss pros and cons. At the end, a specific example with experimental results is provided and, as a summary, the most promising designs based on current technologies are identified.

  6. Multimodality imaging techniques.

    Science.gov (United States)

    Martí-Bonmatí, Luis; Sopena, Ramón; Bartumeus, Paula; Sopena, Pablo

    2010-01-01

    In multimodality imaging, the need to combine morphofunctional information can be approached either by acquiring images at different times (asynchronously) and fusing them through digital image manipulation techniques, or by acquiring images simultaneously (synchronously) and merging them automatically. The asynchronous post-processing solution presents various constraints, mainly due to the different positioning of the patient in the two scans acquired at different times on separate machines. The best consistency in time and space is obtained by synchronous image acquisition. There are many multimodal technologies in molecular imaging. In this review we focus on the multimodality imaging techniques more commonly used in the field of diagnostic imaging (SPECT-CT, PET-CT) and on new developments (such as PET-MR). Technological innovations and the development of new tracers and smart probes are the key points that will condition the future of multimodality imaging and of diagnostic imaging professionals. Although SPECT-CT and PET-CT are standard in most clinical scenarios, MR imaging has some advantages, providing excellent soft-tissue contrast and multidimensional functional, structural and morphological information. The next frontier is to develop efficient detectors and electronics systems capable of detecting two modality signals at the same time. Not only PET-MR but also MR-US or optic-PET will be introduced in clinical scenarios. Moreover, MR diffusion-weighted imaging, pharmacokinetic imaging, spectroscopy and functional BOLD imaging will merge with PET tracers to further establish molecular imaging as a relevant medical discipline. Multimodality imaging techniques will play a leading role in relevant clinical applications. The development of new diagnostic imaging research areas, mainly in the fields of oncology, cardiology and neuropsychiatry, will impact the way medicine is performed today. Both clinical and experimental multimodality studies, in

  7. Could a multimodal dictionary serve as a learning tool? An examination of the impact of technologically enhanced visual glosses on L2 text comprehension

    Directory of Open Access Journals (Sweden)

    Takeshi Sato

    2016-09-01

    This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it facilitates incidental word retention. This study explores other potentials of multimodal L2 vocabulary learning: explicit learning with a multimodal dictionary could enhance not only word retention, but also text comprehension; the dictionary could serve not only as a reference tool, but also as a learning tool; and technology-enhanced visual glosses could facilitate deeper text comprehension. To verify these claims, this study investigates the multimodal representations’ effects on Japanese students learning L2 locative prepositions by developing two online dictionaries, one with static pictures and one with animations. The findings show the advantage of such dictionaries in explicit learning; however, no significant differences are found between the two types of visual glosses, either in the vocabulary or in the listening tests. This study confirms the effectiveness of multimodal L2 materials, but also emphasizes the need for further research into making the technologically enhanced materials more effective.

  8. Multimodal emotional state recognition using sequence-dependent deep hierarchical features.

    Science.gov (United States)

    Barros, Pablo; Jirak, Doreen; Weber, Cornelius; Wermter, Stefan

    2015-12-01

    Emotional state recognition has become an important topic for human-robot interaction in the past years. By determining emotion expressions, robots can identify important variables of human behavior and use these to communicate in a more human-like fashion and thereby extend the interaction possibilities. Human emotions are multimodal and spontaneous, which makes them hard for robots to recognize. Each modality has its own restrictions and constraints which, together with the non-structured behavior of spontaneous expressions, create several difficulties for the approaches present in the literature, which are based on several explicit feature extraction techniques and manual modality fusion. Our model uses a hierarchical feature representation to deal with spontaneous emotions, and learns how to integrate multiple modalities for non-verbal emotion recognition, making it suitable to be used in an HRI scenario. Our experiments show that a significant improvement of recognition accuracy is achieved when we use hierarchical features and multimodal information, and our model improves the accuracy of state-of-the-art approaches from 82.5% reported in the literature to 91.3% for a benchmark dataset on spontaneous emotion expressions. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. The Uses of Literacy in Studying Computer Games: Comparing Students' Oral and Visual Representations of Games

    Science.gov (United States)

    Pelletier, Caroline

    2005-01-01

    This paper compares the oral and visual representations which 12 to 13-year-old students produced in studying computer games as part of an English and Media course. It presents the arguments for studying multimodal texts as part of a literacy curriculum and then provides an overview of the games course devised by teachers and researchers. The…

  10. Multimodal Dispersion of Nanoparticles: A Comprehensive Evaluation of Size Distribution with 9 Size Measurement Methods.

    Science.gov (United States)

    Varenne, Fanny; Makky, Ali; Gaucher-Delmas, Mireille; Violleau, Frédéric; Vauthier, Christine

    2016-05-01

    Evaluation of the particle size distribution (PSD) of a multimodal dispersion of nanoparticles is a difficult task due to the inherent limitations of size measurement methods. The present work reports the evaluation of the PSD of a dispersion of poly(isobutylcyanoacrylate) nanoparticles decorated with dextran, known to be multimodal and developed as a nanomedicine. The nine methods used were classified as batch methods, i.e. Static Light Scattering (SLS) and Dynamic Light Scattering (DLS); single-particle methods, i.e. Electron Microscopy (EM), Atomic Force Microscopy (AFM), Tunable Resistive Pulse Sensing (TRPS) and Nanoparticle Tracking Analysis (NTA); and a separative method, i.e. Asymmetrical Flow Field-Flow Fractionation coupled with DLS (AsFlFFF). The multimodal dispersion was identified using AFM, TRPS and NTA, and the results were consistent with those provided by the method based on a separation step prior to on-line size measurements. None of the light-scattering batch methods could reveal the complexity of the PSD of the dispersion. Differences between the PSDs obtained from the size measurement methods tested suggest that studying the PSD of a multimodal dispersion requires analyzing samples by at least one single-particle size measurement method, or by a method that uses a separation step prior to the PSD measurement.
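    A toy calculation illustrates why batch light-scattering methods can hide a small-particle mode: in the Rayleigh regime, scattered intensity grows roughly as the sixth power of the particle diameter, so an intensity-weighted mean is dominated by a minority of large particles, whereas single-particle methods count each particle once. The sizes and counts below are invented, not the paper's data.

```python
# Toy bimodal dispersion: many small particles plus a few large ones (nm)
sizes = [50] * 900 + [200] * 100

# Number-weighted mean, as a counting (single-particle) method would see it
number_mean = sum(sizes) / len(sizes)

# In the Rayleigh regime, scattered intensity grows roughly as d**6, so a
# batch light-scattering method reports an intensity-weighted average:
weights = [d ** 6 for d in sizes]
intensity_mean = sum(d * w for d, w in zip(sizes, weights)) / sum(weights)

print(number_mean, intensity_mean)  # the large mode dominates the second value
```

    Even though 90% of the particles are 50 nm, the intensity-weighted mean sits close to 200 nm, which is why the record stresses single-particle or separative methods for multimodal dispersions.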

  11. Conditional generation of arbitrary multimode entangled states of light with linear optics

    International Nuclear Information System (INIS)

    Fiurasek, J.; Massar, S.; Cerf, N. J.

    2003-01-01

    We propose a universal scheme for the probabilistic generation of an arbitrary multimode entangled state of light with finite expansion in the Fock basis. The suggested setup involves passive linear optics, single-photon sources, strong coherent laser beams, and photodetectors with single-photon resolution. The efficiency of this setup may be greatly enhanced if, in addition, a quantum memory is available.

  12. Multimodal fluorescence imaging spectroscopy

    NARCIS (Netherlands)

    Stopel, Martijn H W; Blum, Christian; Subramaniam, Vinod; Engelborghs, Yves; Visser, Anthonie J.W.G.

    2014-01-01

    Multimodal fluorescence imaging is a versatile method with a wide application range, from biological studies to materials science. Typical observables in multimodal fluorescence imaging are intensity, lifetime, and excitation and emission spectra, which are recorded at chosen locations on the sample.

  13. Ideology and Orientalism in American and Cuban news media : Representation of the Chinese government in foreign media during the Umbrella Revolution

    OpenAIRE

    Aleñá Naval, Gerard

    2017-01-01

    This study examines the representation of the Chinese government in foreign media during the Umbrella Revolution in 2014. It analyzes The New York Times and Granma using Critical Discourse Analysis along with Multimodal Critical Discourse Analysis in order to reveal underlying ideology and Orientalism in their news discourse. Thus, this study aims to understand how their representation of the Chinese government is influenced by the ideology of their countries. In t...

  14. Multimodality in organization studies

    DEFF Research Database (Denmark)

    Van Leeuwen, Theo

    2017-01-01

    This afterword reviews the chapters in this volume and reflects on the synergies between organization and management studies and multimodality studies that emerge from the volume. These include the combination of strong sociological theorizing and detailed multimodal analysis, a focus on material...

  15. Multimodality image registration with software: state-of-the-art

    International Nuclear Information System (INIS)

    Slomka, Piotr J.; Baum, Richard P.

    2009-01-01

    Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans "paved the way" for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)
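    The rigid registration the record mentions amounts to estimating a rotation and translation that best align corresponding points. One standard way to do this is the least-squares Kabsch/Procrustes solution, sketched here on synthetic 2-D landmarks; the example is illustrative only and is not the algorithm of any specific registration package discussed in the record.

```python
import numpy as np

def rigid_register(src, dst):
    """Estimate the 2-D rotation R and translation t that best map the
    source landmarks onto the destination ones in the least-squares
    sense (Kabsch/Procrustes solution), as used in rigid registration."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Synthetic example: rotate landmarks by 90 degrees and shift them
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dst = src @ R_true.T + np.array([2.0, 3.0])

R, t = rigid_register(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [2.0, 3.0]))  # True True
```

    Nonlinear (deformable) registration, as needed for whole-body imaging, generalizes this by allowing spatially varying transforms and is considerably harder to validate, as the record notes.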

  16. Functional Wigner representation of quantum dynamics of Bose-Einstein condensate

    Energy Technology Data Exchange (ETDEWEB)

    Opanchuk, B.; Drummond, P. D. [Centre for Atom Optics and Ultrafast Spectroscopy, Swinburne University of Technology, Hawthorn VIC 3122 (Australia)

    2013-04-15

    We develop a method of simulating the full quantum field dynamics of multi-mode multi-component Bose-Einstein condensates in a trap. We use the truncated Wigner representation to obtain a probabilistic theory that can be sampled. This method produces c-number stochastic equations which may be solved using conventional stochastic methods. The technique is valid for large mode occupation numbers. We give a detailed derivation of methods of functional Wigner representation appropriate for quantum fields. Our approach describes spatial evolution of spinor components and properly accounts for nonlinear losses. Such techniques are applicable to calculating the leading quantum corrections, including effects such as quantum squeezing, entanglement, EPR correlations, and interactions with engineered nonlinear reservoirs. By using a consistent expansion in the inverse density, we are able to explain an inconsistency in the nonlinear loss equations found by earlier authors.
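A minimal single-mode sketch of the truncated Wigner recipe described above (the paper treats full multi-component quantum fields): coherent-state samples carry half a vacuum quantum of noise in each quadrature, symmetric-ordered moments are recovered as ensemble averages, and the c-number drift equation, here a lossless Kerr term with an assumed nonlinearity chi, is integrated trajectory by trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truncated-Wigner sampling of a coherent state |alpha0>:
# W(alpha) ~ exp(-2|alpha - alpha0|^2), i.e. variance 1/4 per quadrature.
alpha0 = 2.0 + 0.0j
n_traj = 200_000
noise = (rng.normal(size=n_traj) + 1j * rng.normal(size=n_traj)) / 2
alpha = alpha0 + noise

# Symmetric ordering: <|alpha|^2>_W = <n> + 1/2.
n_est = np.mean(np.abs(alpha) ** 2) - 0.5
print(n_est)  # close to |alpha0|^2 = 4

# Kerr-type c-number drift d(alpha)/dt = -i*chi*(|alpha|^2 - 1)*alpha:
# each trajectory only rotates in phase, so |alpha|^2 is conserved.
chi, dt = 0.1, 0.01
a = alpha.copy()
for _ in range(100):
    a = a * np.exp(-1j * chi * (np.abs(a) ** 2 - 1) * dt)
assert np.allclose(np.abs(a) ** 2, np.abs(alpha) ** 2)
```

The nonlinear-loss terms and the inverse-density expansion that the paper analyses add stochastic terms on top of this deterministic drift; they are omitted here.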

  17. Practical multimodal care for cancer cachexia.

    Science.gov (United States)

    Maddocks, Matthew; Hopkinson, Jane; Conibear, John; Reeves, Annie; Shaw, Clare; Fearon, Ken C H

    2016-12-01

Cancer cachexia is common and reduces function, treatment tolerability and quality of life. Given its multifaceted pathophysiology, a multimodal approach to cachexia management is advocated, but can be difficult to realise in practice. We use a case-based approach to highlight practical approaches to the multimodal management of cachexia for patients across the cancer trajectory. Four cases with lung cancer spanning surgical resection, radical chemoradiotherapy, palliative chemotherapy and no anticancer treatment are presented. We propose multimodal care approaches that incorporate nutritional support, exercise, and anti-inflammatory agents, on a background of personalized oncology care and family-centred education. Collectively, the cases reveal that multimodal care is part of everyone's remit, often focuses on supported self-management, and demands buy-in from the patient and their family. Once operationalized, multimodal care approaches can be tested pragmatically, including alongside emerging pharmacological cachexia treatments. We demonstrate that multimodal care for cancer cachexia can be achieved using simple treatments and without a dedicated team of specialists. The sharing of advice between health professionals can help build collective confidence and expertise, moving towards a position in which every team member feels they can contribute towards multimodal care.

  18. Implementation and flight-test of a multi-mode rotorcraft flight-control system for single-pilot use in poor visibility

    Science.gov (United States)

    Hindson, William S.

    1987-01-01

A flight investigation was conducted to evaluate a multi-mode flight control system designed according to the most recent recommendations for handling qualities criteria for new military helicopters. The modes and capabilities included in the system are those considered necessary to permit divided-attention (single-pilot) low-speed and hover operations near the ground in poor visibility conditions. Design features included mode-selection and mode-blending logic, the use of an automatic position-hold mode that employed precision measurements of aircraft position, and a hover display which permitted manually controlled hover flight tasks in simulated instrument conditions. Pilot evaluations of the system were conducted using a multi-segment evaluation task. Pilot comments concerning the use of the system are provided, and flight-test data are presented to show system performance.

  19. A Multimodal Discourse Analysis of Advertisements-Based on Visual Grammar

    Directory of Open Access Journals (Sweden)

    Fang Guo

    2017-03-01

In addition to words, symbols, colors, sculptures, photographs, music, etc. are frequently employed by participants to express themselves in communication. Advertising is closely related to sounds, colors, picture animations and other symbols. This paper aims to present how semiotic resources act together to realize the real business purpose, reflecting the unique significance of multimodal discourse analysis. Based on Visual Grammar, this paper analyzes the 2014 Brazil World Cup advertisements from the perspectives of representational meaning, interactive meaning and compositional meaning. The research aims to prove that the different modes within an advertisement depend on each other in an interdependent relationship, and that these relationships play different roles in different contexts.

  20. The expert surgical assistant. An intelligent virtual environment with multimodal input.

    Science.gov (United States)

    Billinghurst, M; Savage, J; Oppenheimer, P; Edmond, C

    1996-01-01

    Virtual Reality has made computer interfaces more intuitive but not more intelligent. This paper shows how an expert system can be coupled with multimodal input in a virtual environment to provide an intelligent simulation tool or surgical assistant. This is accomplished in three steps. First, voice and gestural input is interpreted and represented in a common semantic form. Second, a rule-based expert system is used to infer context and user actions from this semantic representation. Finally, the inferred user actions are matched against steps in a surgical procedure to monitor the user's progress and provide automatic feedback. In addition, the system can respond immediately to multimodal commands for navigational assistance and/or identification of critical anatomical structures. To show how these methods are used we present a prototype sinus surgery interface. The approach described here may easily be extended to a wide variety of medical and non-medical training applications by making simple changes to the expert system database and virtual environment models. Successful implementation of an expert system in both simulated and real surgery has enormous potential for the surgeon both in training and clinical practice.
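The three-step pipeline of this record (voice and gesture mapped to a common semantic frame, rule-based inference of the user action, and matching against procedure steps) can be illustrated with a toy dictionary-based sketch. The rules, gestures, action names and procedure steps below are invented for illustration and are not taken from the paper.

```python
# Toy pipeline: (1) voice + gesture -> common semantic frame,
# (2) rules infer the user action, (3) match against procedure steps.
PROCEDURE = ["identify_landmark", "incise_mucosa", "remove_tissue"]

RULES = [
    # (spoken keyword, gesture) -> inferred action (all names hypothetical)
    (("show", "point"), "identify_landmark"),
    (("cut", "slice"), "incise_mucosa"),
    (("remove", "grab"), "remove_tissue"),
]

def infer_action(utterance, gesture):
    frame = {"words": set(utterance.lower().split()), "gesture": gesture}
    for (keyword, g), action in RULES:
        if keyword in frame["words"] and frame["gesture"] == g:
            return action
    return None

def monitor(events):
    # Track progress through the procedure and emit feedback per event.
    step, feedback = 0, []
    for utterance, gesture in events:
        action = infer_action(utterance, gesture)
        if action == PROCEDURE[step]:
            feedback.append(f"ok: {action}")
            step += 1
        else:
            feedback.append(f"warning: expected {PROCEDURE[step]}")
    return feedback

events = [("show me the sinus", "point"),
          ("cut here", "slice"),
          ("remove the polyp", "grab")]
print(monitor(events))
```

A production expert system would use a proper inference engine and a richer semantic representation; the point here is only the shape of the multimodal frame and the step-matching feedback loop.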

  1. Entanglement and Wigner Function Negativity of Multimode Non-Gaussian States

    Science.gov (United States)

    Walschaers, Mattia; Fabre, Claude; Parigi, Valentina; Treps, Nicolas

    2017-11-01

    Non-Gaussian operations are essential to exploit the quantum advantages in optical continuous variable quantum information protocols. We focus on mode-selective photon addition and subtraction as experimentally promising processes to create multimode non-Gaussian states. Our approach is based on correlation functions, as is common in quantum statistical mechanics and condensed matter physics, mixed with quantum optics tools. We formulate an analytical expression of the Wigner function after the subtraction or addition of a single photon, for arbitrarily many modes. It is used to demonstrate entanglement properties specific to non-Gaussian states and also leads to a practical and elegant condition for Wigner function negativity. Finally, we analyze the potential of photon addition and subtraction for an experimentally generated multimode Gaussian state.
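The simplest instance of the photon addition discussed above, adding one photon to the vacuum, yields the one-photon Fock state, whose Wigner function is the textbook example of the negativity the paper characterises. A small numeric check (hbar = 1 convention, vacuum W_0 = exp(-(x^2+p^2))/pi):

```python
import numpy as np

def wigner_fock1(x, p):
    # Wigner function of the one-photon Fock state |1>.
    r2 = x**2 + p**2
    return (2.0 * r2 - 1.0) * np.exp(-r2) / np.pi

xs = np.linspace(-3.0, 3.0, 201)
dx = xs[1] - xs[0]
X, P = np.meshgrid(xs, xs)
W = wigner_fock1(X, P)

print(round(W.min(), 4))                    # -0.3183, i.e. -1/pi at the origin
norm = float(W.sum()) * dx * dx             # Riemann-sum normalisation check
print(round(norm, 2))                       # ~1.0
```

The multimode photon-added and photon-subtracted Gaussian states of the paper require the correlation-function machinery it develops; this single-mode case only shows what "Wigner negativity" means concretely.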

  2. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

The high costs of developing and maintaining multimodal grammars for integrating and understanding input in multimodal interfaces motivate the investigation of novel algorithmic solutions for automating grammar generation and updating. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates a multimodal grammar able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results show the acceptable performance of the proposed algorithm, which has a very high probability of parsing valid sentences.
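The generate-then-improve scheme described above can be sketched with a deliberately tiny example: start from one flat production per positive sample, apply a single prefix-merging operator, and accept the change only if the description length (here simply the total number of symbols in the grammar, a crude MDL stand-in) decreases. The operator and the metric are simplified guesses at the paper's approach, not its actual algorithm.

```python
# Toy CFG inference from positive samples with an MDL acceptance test.

def description_length(grammar):
    # Crude MDL proxy: total number of right-hand-side symbols.
    return sum(len(rhs) for rhss in grammar.values() for rhs in rhss)

def infer(samples):
    # Start: one flat production per sample.
    grammar = {"S": [tuple(s.split()) for s in samples]}
    rhss = grammar["S"]
    # Merge operator: factor a shared first symbol into a new rule X.
    for f in {rhs[0] for rhs in rhss}:
        group = [rhs for rhs in rhss if rhs[0] == f and len(rhs) > 1]
        if len(group) > 1:
            candidate = {
                "S": [rhs for rhs in rhss if rhs not in group] + [(f, "X")],
                "X": [rhs[1:] for rhs in group],
            }
            if description_length(candidate) < description_length(grammar):
                grammar = candidate  # keep only DL-decreasing changes
    return grammar

samples = ["point click", "point drag", "point hold", "speak"]
g = infer(samples)
print(g)
```

With the three "point …" samples sharing a prefix, the merge shrinks the grammar from 7 symbols to 6, so it is accepted; with only two shared samples it would be rejected, which is how the MDL test curbs over-generalization.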

  3. Palmprint and face multi-modal biometric recognition based on SDA-GSVD and its kernelization.

    Science.gov (United States)

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in calculation: using PCA preprocessing, and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA to each single modality before performing SDA. In KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and that KSDA-GSVD achieves the best recognition performance.
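The core SDA idea, maximise scatter between subclasses of different persons while minimising scatter within each subclass, can be sketched on synthetic 2-D data. This toy uses a pseudo-inverse eigenproblem in place of the paper's PCA/GSVD machinery, and invented Gaussian "modalities" in place of palmprint and face features.

```python
import numpy as np

rng = np.random.default_rng(1)

def sda_direction(subclasses, labels):
    # Between-subclass scatter Sb over pairs from DIFFERENT persons,
    # within-subclass scatter Sw; leading eigenvector of pinv(Sw) @ Sb.
    d = subclasses[0].shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for Xi, yi in zip(subclasses, labels):
        mi = Xi.mean(axis=0)
        Sw += (Xi - mi).T @ (Xi - mi)
        for Xj, yj in zip(subclasses, labels):
            if yi != yj:
                mj = Xj.mean(axis=0)
                Sb += np.outer(mi - mj, mi - mj)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    return np.real(vecs[:, np.argmax(np.real(vals))])

# Person A: two modality subclasses; person B likewise (all synthetic).
A1 = rng.normal([0.0, 0.0], 0.3, (50, 2))
A2 = rng.normal([0.5, 0.0], 0.3, (50, 2))
B1 = rng.normal([4.0, 4.0], 0.3, (50, 2))
B2 = rng.normal([4.5, 4.0], 0.3, (50, 2))

w = sda_direction([A1, A2, B1, B2], ["A", "A", "B", "B"])
pa = np.concatenate([A1 @ w, A2 @ w])
pb = np.concatenate([B1 @ w, B2 @ w])
print(abs(pa.mean() - pb.mean()) > 4 * (pa.std() + pb.std()))  # well separated
```

The projection keeps each person's two modalities close together while pushing the two persons far apart, which is exactly the subclass-aware objective the abstract states.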

  4. Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization

    Science.gov (United States)

    Jing, Xiao-Yuan; Li, Sheng; Li, Wen-Qian; Yao, Yong-Fang; Lan, Chao; Lu, Jia-Sen; Yang, Jing-Yu

    2012-01-01

When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in calculation: using PCA preprocessing, and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA to each single modality before performing SDA. In KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and that KSDA-GSVD achieves the best recognition performance. PMID:22778600

  5. Atypical right hemisphere specialization for object representations in an adolescent with specific language impairment

    Directory of Open Access Journals (Sweden)

    Timothy T. Brown

    2014-02-01

Individuals with a diagnosis of specific language impairment (SLI) show abnormal spoken language occurring alongside normal nonverbal abilities. Behaviorally, people with SLI exhibit diverse profiles of impairment involving phonological, grammatical, syntactic, and semantic aspects of language. In this study, we used a multimodal neuroimaging technique called anatomically constrained magnetoencephalography (aMEG) to measure the dynamic functional brain organization of an adolescent with SLI. Using single-subject statistical maps of cortical activity, we compared this patient to a sibling and to a cohort of typically developing subjects during the performance of tasks designed to evoke semantic representations of concrete objects. Localized, real-time patterns of brain activity within the language-impaired patient showed marked differences from the typical functional organization, with significant engagement of right hemisphere heteromodal cortical regions generally homotopic to the left hemisphere areas that usually show the greatest activity for such tasks. Functional neuroanatomical differences were evident at early sensoriperceptual processing stages and continued through later cognitive stages, observed specifically at latencies typically associated with semantic encoding operations. Our findings provide temporally precise, real-time evidence for an atypical right hemisphere specialization for the representation of concrete entities, independent of verbal motor demands. More broadly, our results demonstrate the feasibility and potential utility of using aMEG to characterize individual patient differences in the dynamic functional organization of the brain.

  6. Optical Splitters Based on Self-Imaging Effect in Multi-Mode Waveguide Made by Ion Exchange in Glass

    Directory of Open Access Journals (Sweden)

    O. Barkman

    2013-04-01

Design and modeling of single-mode optical multi-mode interference structures with a graded refractive index is reported. Several samples of planar optical channel waveguides were obtained by one-step thermal Ag+-Na+ and K+-Na+ ion exchange in molten salt on GIL49 glass substrate and on a new special optical glass for ion exchange technology. Waveguide properties were measured by optical mode spectroscopy. The obtained data were used for further design and modeling of a single-mode channel waveguide and subsequently for the design of a 1×3 multimode interference power splitter in order to improve simulation accuracy. Designs were developed utilizing the finite-difference beam propagation method.
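The self-imaging behind such splitters has a compact design rule for step-index guides (Soldano-Pennings style); the graded-index ion-exchanged guides of the paper still need BPM simulation on top of it. The index, wavelength and width below are illustrative assumptions, not the paper's values.

```python
# MMI self-imaging design rule (step-index approximation).
n_r = 1.52          # effective index (assumed, GIL49-like glass)
lam = 1.55e-6       # free-space wavelength [m]
W_e = 30e-6         # effective MMI section width [m] (assumed)

# Beat length of the two lowest-order modes.
L_pi = 4 * n_r * W_e**2 / (3 * lam)

# 1xN splitter with symmetric (centre-fed) excitation: first N-fold
# image at 3*L_pi / (4*N).
N = 3
L_mmi = 3 * L_pi / (4 * N)

print(f"beat length L_pi = {L_pi * 1e6:.1f} um")
print(f"1x{N} MMI length  = {L_mmi * 1e6:.1f} um")
```

For these numbers the beat length comes out near 1.18 mm and the 1×3 section near 294 um, which is the kind of starting geometry a BPM run would then refine for the graded profile.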

  7. Analysis of the Ballot Shuffling Attack on Irish ballot counting for Proportional Representation by Single Transferable Vote (PR-STV)

    DEFF Research Database (Denmark)

    Cochran, Dermot Robert

    2015-01-01

    The current Irish legislation for counting of ballots does not fully comply with the true meaning of proportional representation by single transferable vote. This is due to the way in which second and subsequent transfers are handled, the legislative requirement to only count the last set of ball...
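The arithmetic at the heart of PR-STV counting can be shown in a few lines: the Droop quota, and the uniform fractional value at which a winner's surplus transfers. This models only textbook STV; the Irish rule the paper analyses, selecting only the last set of ballots received for transfer, is precisely what differs from this uniform treatment, and its legal subtleties are not modelled here.

```python
from fractions import Fraction

def droop_quota(valid_votes, seats):
    # Smallest whole number of votes that only `seats` candidates can reach.
    return valid_votes // (seats + 1) + 1

def surplus_transfer_value(winner_votes, quota):
    # Textbook STV: EVERY ballot held by the winner transfers at the
    # same fractional value surplus / total.
    surplus = winner_votes - quota
    return Fraction(surplus, winner_votes)

quota = droop_quota(1000, 3)                 # 1000 valid votes, 3 seats
print(quota)                                 # 251
print(surplus_transfer_value(400, quota))    # 149/400
```

Using `Fraction` keeps the transfer values exact, which matters when arguing about compliance: rounding rules are one of the places real counting legislation diverges from the idealised method.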

  8. Preferential loss of dorsal-hippocampus synapses underlies memory impairments provoked by short, multimodal stress.

    Science.gov (United States)

    Maras, P M; Molet, J; Chen, Y; Rice, C; Ji, S G; Solodkin, A; Baram, T Z

    2014-07-01

    The cognitive effects of stress are profound, yet it is unknown if the consequences of concurrent multiple stresses on learning and memory differ from those of a single stress of equal intensity and duration. We compared the effects on hippocampus-dependent memory of concurrent, hours-long light, loud noise, jostling and restraint (multimodal stress) with those of restraint or of loud noise alone. We then examined if differences in memory impairment following these two stress types might derive from their differential impact on hippocampal synapses, distinguishing dorsal and ventral hippocampus. Mice exposed to hours-long restraint or loud noise were modestly or minimally impaired in novel object recognition, whereas similar-duration multimodal stress provoked severe deficits. Differences in memory were not explained by differences in plasma corticosterone levels or numbers of Fos-labeled neurons in stress-sensitive hypothalamic neurons. However, although synapses in hippocampal CA3 were impacted by both restraint and multimodal stress, multimodal stress alone reduced synapse numbers severely in dorsal CA1, a region crucial for hippocampus-dependent memory. Ventral CA1 synapses were not significantly affected by either stress modality. Probing the basis of the preferential loss of dorsal synapses after multimodal stress, we found differential patterns of neuronal activation by the two stress types. Cross-correlation matrices, reflecting functional connectivity among activated regions, demonstrated that multimodal stress reduced hippocampal correlations with septum and thalamus and increased correlations with amygdala and BST. Thus, despite similar effects on plasma corticosterone and on hypothalamic stress-sensitive cells, multimodal and restraint stress differ in their activation of brain networks and in their impact on hippocampal synapses. Both of these processes might contribute to amplified memory impairments following short, multimodal stress.

  9. Preferential loss of dorsal-hippocampus synapses underlies memory impairments provoked by short, multimodal stress

    Science.gov (United States)

    Maras, P M; Molet, J; Chen, Y; Rice, C; Ji, S G; Solodkin, A; Baram, T Z

    2014-01-01

    The cognitive effects of stress are profound, yet it is unknown if the consequences of concurrent multiple stresses on learning and memory differ from those of a single stress of equal intensity and duration. We compared the effects on hippocampus-dependent memory of concurrent, hours-long light, loud noise, jostling and restraint (multimodal stress) with those of restraint or of loud noise alone. We then examined if differences in memory impairment following these two stress types might derive from their differential impact on hippocampal synapses, distinguishing dorsal and ventral hippocampus. Mice exposed to hours-long restraint or loud noise were modestly or minimally impaired in novel object recognition, whereas similar-duration multimodal stress provoked severe deficits. Differences in memory were not explained by differences in plasma corticosterone levels or numbers of Fos-labeled neurons in stress-sensitive hypothalamic neurons. However, although synapses in hippocampal CA3 were impacted by both restraint and multimodal stress, multimodal stress alone reduced synapse numbers severely in dorsal CA1, a region crucial for hippocampus-dependent memory. Ventral CA1 synapses were not significantly affected by either stress modality. Probing the basis of the preferential loss of dorsal synapses after multimodal stress, we found differential patterns of neuronal activation by the two stress types. Cross-correlation matrices, reflecting functional connectivity among activated regions, demonstrated that multimodal stress reduced hippocampal correlations with septum and thalamus and increased correlations with amygdala and BST. Thus, despite similar effects on plasma corticosterone and on hypothalamic stress-sensitive cells, multimodal and restraint stress differ in their activation of brain networks and in their impact on hippocampal synapses. Both of these processes might contribute to amplified memory impairments following short, multimodal stress. 
PMID:24589888

  10. New developments in multimodal clinical multiphoton tomography

    Science.gov (United States)

    König, Karsten

    2011-03-01

80 years ago, the PhD student Maria Goeppert predicted two-photon effects in her thesis in Goettingen, Germany. It took 30 years to prove her theory, and another three decades to realize the first two-photon microscope. With the beginning of this millennium, the first clinical multiphoton tomographs started operation in research institutions, hospitals, and in the cosmetic industry. The multiphoton tomograph MPTflex™ with its miniaturized flexible scan head won the 2010 Prism Award in the Life Sciences category. Multiphoton tomographs, with their superior submicron spatial resolution, can be upgraded to 5D imaging tools by adding spectral time-correlated single photon counting units. Furthermore, multimodal hybrid tomographs provide chemical fingerprinting and fast wide-field imaging. The world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph in spring 2010. In particular, nonfluorescent lipids and water as well as fluorescent mitochondrial NAD(P)H, fluorescent elastin, keratin, and melanin as well as SHG-active collagen have been imaged in patients with dermatological disorders. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution imaging tools such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and in several European countries for early diagnosis of skin cancer (malignant melanoma), optimization of treatment strategies (wound healing, dermatitis), and cosmetic research, including long-term biosafety tests of ZnO sunscreen nanoparticles and the measurement of the stimulated biosynthesis of collagen by anti-ageing products.

  11. Multi-focus beam shaping of high power multimode lasers

    Science.gov (United States)

    Laskin, Alexander; Volpp, Joerg; Laskin, Vadim; Ostrun, Aleksei

    2017-08-01

Beam shaping of powerful multimode fiber lasers, fiber-coupled solid-state and diode lasers is of great importance for improvements of industrial laser applications. Welding and cladding with millimetre-scale working spots benefit from "inverse-Gauss" intensity profiles; the performance of thick metal sheet cutting and deep penetration welding can be enhanced by distributing the laser energy along the optical axis, since more efficient usage of laser energy, higher edge quality and reduction of the heat-affected zone can be achieved. Building beam shaping optics for multimode lasers encounters physical limitations due to the low spatial coherence of multimode fiber-coupled lasers, resulting in large Beam Parameter Products (BPP) or M² values. The laser radiation emerging from a multimode fiber is a mixture of wavefronts. The fiber end can be considered a light source whose optical properties are intermediate between a Lambertian source and a single-mode laser beam. Imaging of the fiber end, using a collimator and a focusing objective, is a robust and widely used beam delivery approach. Beam shaping solutions are suggested in the form of optics combining fiber end imaging and geometrical separation of focused spots either perpendicular to or along the optical axis. Thus, the energy of high power lasers is distributed among multiple foci. In order to provide reliable operation with multi-kW lasers and avoid damage, the optics are designed as refractive elements with smooth optical surfaces. The paper presents descriptions of multi-focus optics as well as examples of intensity profile measurements of beam caustics and application results.
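The BPP bookkeeping mentioned above is worth making concrete: BPP = w0 x theta (waist radius times far-field half-angle) is conserved by ideal imaging optics, so a fiber-end image can be relocated among multiple foci but never made "better" than the fiber delivers. The fiber parameters below are illustrative assumptions, not the paper's.

```python
import math

lam = 1.07e-6            # Yb fiber laser wavelength [m]
core_radius = 50e-6      # 100 um multimode delivery fiber (assumed)
half_angle = 0.1         # far-field divergence half-angle from fiber NA [rad]

bpp = core_radius * half_angle          # beam parameter product [m*rad]
m2 = math.pi * bpp / lam                # M^2 relative to a Gaussian beam

print(f"BPP = {bpp * 1e6 * 1e3:.1f} mm*mrad")  # 5.0 mm*mrad
print(f"M^2 = {m2:.1f}")                       # ~14.7
```

An M² near 15 is why the paper's multi-focus approach distributes energy geometrically rather than trying to sharpen the focus: diffraction-limited shaping tricks that work for single-mode beams are unavailable at this coherence level.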

  12. Multimodality image registration with software: state-of-the-art

    Energy Technology Data Exchange (ETDEWEB)

    Slomka, Piotr J. [Cedars-Sinai Medical Center, AIM Program/Department of Imaging, Los Angeles, CA (United States); University of California, David Geffen School of Medicine, Los Angeles, CA (United States); Baum, Richard P. [Center for PET, Department of Nuclear Medicine, Bad Berka (Germany)

    2009-03-15

Multimodality image integration of functional and anatomical data can be performed by means of dedicated hybrid imaging systems or by software image co-registration techniques. Hybrid positron emission tomography (PET)/computed tomography (CT) systems have found wide acceptance in oncological imaging, while software registration techniques have a significant role in patient-specific, cost-effective, and radiation dose-effective application of integrated imaging. Software techniques allow accurate (2-3 mm) rigid image registration of brain PET with CT and MRI. Nonlinear techniques are used in whole-body image registration, and recent developments allow for significantly accelerated computing times. Nonlinear software registration of PET with CT or MRI is required for multimodality radiation planning. Difficulties remain in the validation of nonlinear registration of soft tissue organs. The utilization of software-based multimodality image integration in a clinical environment is sometimes hindered by the lack of appropriate picture archiving and communication systems (PACS) infrastructure needed to efficiently and automatically integrate all available images into one common database. In cardiology applications, multimodality PET/single photon emission computed tomography and coronary CT angiography imaging is typically not required unless the results of one of the tests are equivocal. Software image registration is likely to be used in a complementary fashion with hybrid PET/CT or PET/magnetic resonance imaging systems. Software registration of stand-alone scans "paved the way" for the clinical application of hybrid scanners, demonstrating practical benefits of image integration before the hybrid dual-modality devices were available. (orig.)

  13. Multimodal Aspects of Corporate Social Responsibility Communication

    Directory of Open Access Journals (Sweden)

    Carmen Daniela Maier

    2014-12-01

This article addresses how the multimodal persuasive strategies of corporate social responsibility communication can highlight a company's commitment to gender empowerment and environmental protection while simultaneously advertising its products. Drawing on an interdisciplinary methodological framework related to CSR communication, multimodal discourse analysis and gender theory, the article proposes a multimodal analysis model through which it is possible to map and explain the multimodal persuasive strategies employed by the Coca-Cola company in their community-related films. By examining the semiotic modes' interconnectivity and functional differentiation, this analytical endeavour expands the existing research work, as the usual textual focus is extended to a multimodal one.

  14. Human Behavior Analysis by Means of Multimodal Context Mining

    Directory of Open Access Journals (Sweden)

    Oresti Banos

    2016-08-01

There is sufficient evidence proving the impact that negative lifestyle choices have on people's health and wellness. Changing unhealthy behaviours requires raising people's self-awareness and also providing healthcare experts with a thorough and continuous description of the user's conduct. Several monitoring techniques have been proposed in the past to track users' behaviour; however, these approaches are either subjective and prone to misreporting, such as questionnaires, or only focus on a specific component of context, such as activity counters. This work presents an innovative multimodal context mining framework to inspect and infer human behaviour in a more holistic fashion. The proposed approach extends beyond the state of the art, since it does not explore a sole type of context, but combines diverse levels of context in an integral manner. Namely, low-level contexts, including activities, emotions and locations, are identified from heterogeneous sensory data through machine learning techniques. Low-level contexts are combined using ontological mechanisms to derive a more abstract representation of the user's context, here referred to as high-level context. An initial implementation of the proposed framework supporting real-time context identification is also presented. The developed system is evaluated for various realistic scenarios making use of a novel multimodal context open dataset and data on-the-go, demonstrating prominent context-aware capabilities at both low and high levels.

  15. Palmprint and Face Multi-Modal Biometric Recognition Based on SDA-GSVD and Its Kernelization

    Directory of Open Access Journals (Sweden)

    Jing-Yu Yang

    2012-04-01

When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person's overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in calculation: using PCA preprocessing, and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA to each single modality before performing SDA. In KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and that KSDA-GSVD achieves the best recognition performance.

  16. Recognition of emotions using multimodal physiological signals and an ensemble deep learning model.

    Science.gov (United States)

    Yin, Zhong; Zhao, Mengyuan; Wang, Yongxiong; Yang, Jingdong; Zhang, Jianhua

    2017-03-01

Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers suffer from two drawbacks: the expertise required to determine the model structure, and the oversimplified combination of multimodal feature abstractions. In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified based on a physiological-data-driven approach. Each SAE consists of three hidden layers to filter the unwanted noise in the physiological features and derive stable feature representations. An additional deep model is used to achieve the SAE ensembles. The physiological features are split into several subsets according to different feature extraction approaches, with each subset separately encoded by an SAE. The derived SAE abstractions are combined according to the physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%. The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers has been demonstrated under different sizes of the available physiological instances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
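The structural idea, split features into per-modality subsets, encode each subset separately, then fuse the encodings for a final classifier, can be sketched without any deep-learning framework. Below, a linear PCA projection stands in for each stacked autoencoder (the real model is nonlinear and deep), and the feature names and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_encoder(X, k):
    # Linear stand-in for one SAE: project onto the top-k principal axes.
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return lambda Z: (Z - mean) @ Vt[:k].T

n = 300
eeg = rng.normal(size=(n, 32))      # e.g. EEG band-power features (assumed)
ecg = rng.normal(size=(n, 8))       # e.g. heart-rate features (assumed)

# One encoder per modality subset, then concatenate the encodings:
# this fused vector is what a downstream fusion network would consume.
encoders = [pca_encoder(eeg, 4), pca_encoder(ecg, 2)]
fused = np.hstack([enc(X) for enc, X in zip(encoders, [eeg, ecg])])
print(fused.shape)  # (300, 6)
```

Swapping each `pca_encoder` for a trained three-hidden-layer autoencoder, and the concatenation for the paper's adjacent-graph fusion network, recovers the MESAE shape.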

  17. Multimodal Processes Rescheduling

    DEFF Research Database (Denmark)

    Bocewicz, Grzegorz; Banaszak, Zbigniew A.; Nielsen, Peter

    2013-01-01

    Cyclic scheduling problems concerning multimodal processes are usually observed in FMSs producing multi-type parts, where the Automated Guided Vehicles System (AGVS) plays the role of a material handling system. Schedulability analysis of concurrently flowing cyclic processes (SCCP) executed in the…

  18. Women and political representation.

    Science.gov (United States)

    Rathod, P B

    1999-01-01

    A remarkable progress in women's participation in politics throughout the world was witnessed in the final decade of the 20th century. According to the Inter-Parliamentary Union report, there were only eight countries with no women in their legislatures in 1998. The number of women ministers at the cabinet level worldwide doubled in a decade, and the number of countries without any women ministers dropped from 93 to 48 during 1987-96. However, this progress is far from satisfactory. Political representation of women, minorities, and other social groups is still inadequate. This may be due to a complex combination of socioeconomic, cultural, and institutional factors. The view that women's political participation increases with social and economic development is supported by data from the Nordic countries, where there are higher proportions of women legislators than in less developed countries. While better levels of socioeconomic development, having a women-friendly political culture, and higher literacy are considered favorable factors for women's increased political representation, adopting one of the proportional representation systems (such as a party-list system, a single transferable vote system, or a mixed proportional system with multi-member constituencies) is the single factor most responsible for the higher representation of women.

  19. Multimodal Resources in Transnational Adoption

    DEFF Research Database (Denmark)

    Raudaskoski, Pirkko Liisa

    The paper discusses an empirical analysis which highlights the multimodal nature of identity construction. A documentary on transnational adoption provides real-life incidents as research material. The incidents involve, or give rise to, various kinds of multimodal resources and participants…

  20. Dopaminergic neurons encode a distributed, asymmetric representation of temperature in Drosophila.

    Science.gov (United States)

    Tomchik, Seth M

    2013-01-30

    Dopaminergic circuits modulate a wide variety of innate and learned behaviors in animals, including olfactory associative learning, arousal, and temperature-preference behavior. It is not known whether distinct or overlapping sets of dopaminergic neurons modulate these behaviors. Here, I have functionally characterized the dopaminergic circuits innervating the Drosophila mushroom body with in vivo calcium imaging and conditional silencing of genetically defined subsets of neurons. Distinct subsets of PPL1 dopaminergic neurons innervating the vertical lobes of the mushroom body responded to decreases in temperature, but not increases, with rapidly adapting bursts of activity. PAM neurons innervating the horizontal lobes did not respond to temperature shifts. Ablation of the antennae and maxillary palps reduced, but did not eliminate, the responses. Genetic silencing of dopaminergic neurons innervating the vertical mushroom body lobes substantially reduced behavioral cold avoidance, but silencing smaller subsets of these neurons had no effect. These data demonstrate that overlapping dopaminergic circuits encode a broadly distributed, asymmetric representation of temperature that overlays regions implicated previously in learning, memory, and forgetting. Thus, diverse behaviors engage overlapping sets of dopaminergic neurons that encode multimodal stimuli and innervate a single anatomical target, the mushroom body.

  1. Multimodal neuromonitoring in pediatric cardiac anesthesia

    Directory of Open Access Journals (Sweden)

    Alexander J. C. Mittnacht

    2014-01-01

    Full Text Available Despite significant improvements in overall outcome, neurological injury remains a feared complication following pediatric congenital heart surgery (CHS). Only if adverse events are detected early enough can effective actions be initiated to prevent potentially serious injury. The multifactorial etiology of neurological injury in CHS patients makes it unlikely that any single monitoring modality will be effective in capturing all possible threats. Improving current technologies, developing new ones, and combining them according to the concept of multimodal monitoring may allow for early detection and possible intervention, with the goal of further improving neurological outcome in children undergoing CHS.

  2. Multimodal Diversity of Postmodernist Fiction Text

    Directory of Open Access Journals (Sweden)

    U. I. Tykha

    2016-12-01

    Full Text Available The article is devoted to the analysis of structural and functional manifestations of multimodal diversity in postmodernist fiction texts. Multimodality is defined as the coexistence of more than one semiotic mode within a certain context. Multimodal texts feature a diversity of semiotic modes in the communication and development of their narrative. Such experimental texts subvert conventional patterns by introducing various semiotic resources – verbal or non-verbal.

  3. Experiments in Multimodal Information Presentation

    NARCIS (Netherlands)

    van Hooijdonk, Charlotte; Bosma, W.E.; Krahmer, Emiel; Maes, Alfons; Theune, Mariet; van den Bosch, Antal; Bouma, Gosse

    In this chapter we describe three experiments investigating multimodal information presentation in the context of a medical QA system. In Experiment 1, we wanted to know how non-experts design (multimodal) answers to medical questions, distinguishing between what questions and how questions. In

  4. Modeling multimodal human-computer interaction

    NARCIS (Netherlands)

    Obrenovic, Z.; Starcevic, D.

    2004-01-01

    Incorporating the well-known Unified Modeling Language into a generic modeling framework makes research on multimodal human-computer interaction accessible to a wide range off software engineers. Multimodal interaction is part of everyday human discourse: We speak, move, gesture, and shift our gaze

  5. All-fiber multimode interference micro-displacement sensor

    International Nuclear Information System (INIS)

    Antonio-Lopez, J E; LiKamWa, P; Sanchez-Mondragon, J J; May-Arrioja, D A

    2013-01-01

    We report an all-fiber micro-displacement sensor based on multimode interference (MMI) effects. The micro-displacement sensor consists of a segment of No-Core multimode fiber (MMF) with one end spliced to a segment of single mode fiber (SMF) which acts as the input. The other end of the MMF and another SMF are inserted into a capillary ferrule filled with index matching liquid. Since the refractive index of the liquid is higher than that of the ferrule, a liquid MMF with a diameter of 125 µm is formed between the fibers inside the ferrule. When the fibers are separated this effectively increases the length of the MMF. Since the peak wavelength response of MMI devices is very sensitive to changes in the MMF's length, this can be used to detect micro-displacements. By measuring spectral changes we have obtained a sensing range of 3 mm with a sensitivity of 25 nm/mm and a resolution of 20 µm. The sensor can also be used to monitor small displacements by using a single wavelength to interrogate the transmission of the MMI device close to the resonance peak. Under this latter regime we were able to obtain a sensitivity of 7000 mV/mm and a sensing range of 100 µm, with a resolution up to 1 µm. The simplicity and versatility of the sensor make it very suitable for many diverse applications. (paper)
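
    The quoted ~25 nm/mm sensitivity can be sanity-checked against the standard MMI self-imaging condition, L = p·n·D²/λ with p = 4 for the direct image, which gives |dλ/dL| = λ/L. The effective index and operating wavelength below are assumptions for a silica no-core fiber near 1550 nm, not values from the abstract.

    ```python
    n_mmf = 1.444   # effective index of the no-core fiber (assumed)
    D = 125e-6      # fiber diameter [m], from the abstract
    wl = 1550e-9    # operating wavelength [m] (assumed)

    L = 4 * n_mmf * D**2 / wl   # self-image length [m]
    sensitivity = wl / L        # |d(lambda)/dL|, dimensionless (m per m)

    print(round(L * 1e3, 1), "mm")                      # ~58.2 mm
    print(round(sensitivity * 1e6, 1), "nm/mm")         # ~26.6 nm/mm
    ```

    Under these assumed values the estimate lands close to the reported 25 nm/mm, which supports the scaling argument in the abstract.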

  6. Multimodal Discourse Analysis of the Movie "Argo"

    Science.gov (United States)

    Bo, Xu

    2018-01-01

    Based on multimodal discourse theory, this paper makes a multimodal discourse analysis of some shots in the movie "Argo" from the perspective of context of culture, context of situation and meaning of image. Results show that this movie constructs multimodal discourse through particular context, language and image, and successfully…

  7. Single-mode glass waveguide technology for optical interchip communication on board level

    Science.gov (United States)

    Brusberg, Lars; Neitz, Marcel; Schröder, Henning

    2012-01-01

    The large bandwidth demand in long-distance telecom networks has led to single-mode fiber interconnects as a result of low dispersion, low loss and dense wavelength-multiplexing possibilities. In contrast, multi-mode interconnects are suitable for much shorter lengths, up to 300 meters, and are promising for optical links between racks and on board level. Active optical cables based on multi-mode fiber links are on the market, and research into multi-mode waveguide integration on board level is still ongoing. Compared to multi-mode, a single-mode waveguide has much greater integration potential, because its core diameter is around 20% of that of a multi-mode waveguide while it offers much larger bandwidth. However, coupling light into single-mode waveguides is much more challenging because of tighter coupling tolerances. Together with silicon photonics technology, a single-mode waveguide technology on board level is the straightforward development goal for integrating chip-to-chip optical interconnects. Such a hybrid packaging platform providing 3D optical single-mode links bridges the gap between novel photonic integrated circuits and the glass-fiber-based long-distance telecom networks. In the following we introduce our 3D photonic packaging approach based on thin glass substrates with planar integrated optical single-mode waveguides for fiber-to-chip and chip-to-chip interconnects. This novel packaging approach merges micro-system packaging and glass integrated optics. It consists of a thin glass substrate with planar integrated single-mode waveguide circuits, optical mirrors and lenses, providing an integration platform for photonic IC assembly and optical fiber interconnects. Thin glass is commercially available in panel and wafer formats and exhibits excellent optical and high-frequency properties, which makes it well suited for microsystem packaging. The paper presents recent results in single-mode waveguide technology on wafer level and waveguide characterization. 
Furthermore the integration in a

  8. Multimodal exemplification: The expansion of meaning in electronic ...

    African Journals Online (AJOL)

    Functional Multimodal Discourse Analysis (SF-MDA) and argues for improving their exemplification multimodally. Multimodal devices, if well coordinated, can help optimize e-dictionary examples in informativity, diversity, dynamicity and ...

  9. Improving treatment planning accuracy through multimodality imaging

    International Nuclear Information System (INIS)

    Sailer, Scott L.; Rosenman, Julian G.; Soltys, Mitchel; Cullip, Tim J.; Chen, Jun

    1996-01-01

    Purpose: In clinical practice, physicians are constantly comparing multiple images taken at various times during the patient's treatment course. One goal of such a comparison is to accurately define the gross tumor volume (GTV). The introduction of three-dimensional treatment planning has greatly enhanced the ability to define the GTV, but there are times when the GTV is not visible on the treatment-planning computed tomography (CT) scan. We have modified our treatment-planning software to allow for interactive display of multiple, registered images that enhance the physician's ability to accurately determine the GTV. Methods and Materials: Images are registered using interactive tools developed at the University of North Carolina at Chapel Hill (UNC). Automated methods are also available. Images registered with the treatment-planning CT scan are digitized from film. After a physician has approved the registration, the registered images are made available to the treatment-planning software. Structures and volumes of interest are contoured on all images. In the beam's eye view, wire loop representations of these structures can be visualized from all image types simultaneously. Each registered image can be seamlessly viewed during the treatment-planning process, and all contours from all image types can be seen on any registered image. A beam may, therefore, be designed based on any contour. Results: Nineteen patients have been planned and treated using multimodality imaging from November 1993 through August 1994. All registered images were digitized from film, and many were from outside institutions. Brain has been the most common site (12), but the techniques of registration and image display have also been used for the thorax (4), abdomen (2), and extremity (1). The registered image has been a magnetic resonance (MR) scan in 15 cases and a diagnostic CT scan in 5 cases. 
In one case, sequential MRs, one before treatment and another after 30 Gy, were used to plan

  10. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.

    Science.gov (United States)

    Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D

    2016-02-01

    The Big Data era in biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performance of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
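
    The contrast between majority vote and stacked generalization can be sketched on toy data. The base learners below are trivial threshold rules on synthetic modality scores, and the meta-learner is a least-squares combiner fitted on held-out predictions; this is an illustration of the two fusion schemes, not the paper's pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    y = rng.integers(0, 2, size=200)  # toy binary labels (e.g., grade)

    # Each "modality" yields a noisy score correlated with the label.
    scores = [y + rng.normal(scale=s, size=y.size) for s in (0.6, 0.9, 0.9)]
    preds = np.array([(s > 0.5).astype(int) for s in scores])  # base learners

    # Majority vote: unweighted combination of the three base predictions.
    vote = (preds.sum(axis=0) >= 2).astype(int)

    # Stacked generalization: fit per-modality weights on held-out predictions,
    # then apply the learned combiner to the test half.
    train, test = slice(0, 100), slice(100, 200)
    A = np.c_[preds[:, train].T.astype(float), np.ones(100)]
    w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    stacked = (np.c_[preds[:, test].T, np.ones(100)] @ w > 0.5).astype(int)

    print("vote acc: ", (vote[test] == y[test]).mean())
    print("stack acc:", (stacked == y[test]).mean())
    ```

    The learned weights `w` also expose how much each modality contributes to the final decision, mirroring the interpretability point made in the abstract.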

  11. Shared Representations and the Translation Process

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Carl, Michael

    2015-01-01

    The purpose of the present chapter is to investigate automated processing during translation. We provide evidence from a translation priming study which suggests that translation involves activation of shared lexico-semantic and syntactical representations, i.e., the activation of features of both source and target language items which share one single cognitive representation. We argue that activation of shared representations facilitates automated processing. The chapter revises the literal translation hypothesis and the monitor model (Ivir 1981; Toury 1995; Tirkkonen-Condit 2005), and re…

  12. Shared Representations and the Translation Process

    DEFF Research Database (Denmark)

    Schaeffer, Moritz; Carl, Michael

    2013-01-01

    The purpose of the present paper is to investigate automated processing during translation. We provide evidence from a translation priming study which suggests that translation involves activation of shared lexico-semantic and syntactical representations, i.e., the activation of features of both source and target language items which share one single cognitive representation. We argue that activation of shared representations facilitates automated processing. The paper revises the literal translation hypothesis and the monitor model (Ivir 1981; Toury 1995; Tirkkonen-Condit 2005), and re…

  13. A robo-pigeon based on an innovative multi-mode telestimulation system.

    Science.gov (United States)

    Yang, Junqing; Huai, Ruituo; Wang, Hui; Lv, Changzhi; Su, Xuecheng

    2015-01-01

    In this paper, we describe a new multi-mode telestimulation system for brain microstimulation for the navigation of a robo-pigeon, a new type of bio-robot based on Brain-Computer Interface (BCI) techniques. The multi-mode telestimulation system overcomes neuronal adaptation, a key shortcoming of the previous single-mode stimulation, by using non-steady TTL biphasic pulses produced by randomly alternating pulse modes. To improve efficiency, a new behavior model ("virtual fear") is proposed and applied to the robo-pigeon. Unlike the previous "virtual reward" model, the "virtual fear" behavior model does not require special training. The performance and effectiveness of the system in alleviating neuronal adaptation were verified by a robo-pigeon navigation test, simultaneously confirming the practicality of the "virtual fear" behavioral model.

  14. Representation of Aloneness in Forever Alone Guy Comic Strips

    Directory of Open Access Journals (Sweden)

    Pricillia Chandra

    2017-01-01

    Full Text Available This study aims to discuss the representation of aloneness in the Forever Alone Guy comic strips. The purpose of this research is to find out how the meaning of aloneness is constructed in the representation of the Forever Alone Guy through the theory of representation described by Stuart Hall (1997, 2013). In the theory suggested by Hall, two ways of creating representation are described: through language/signs and through mental representation. Mental representation is the only approach used in this research, because the analysis focuses on the stigmas attached to the concept of aloneness. The analysis shows that the construction of meaning is done through attaching clusters of negative stigmas to three entities: single, alone and lonely. Thus, through the analysis, it can be concluded that the dominant meaning, which represents being single and alone as the ‘imperfect’ condition, plays an important role in the construction of the meaning.

  15. Passively Q-switched dual-wavelength thulium-doped fiber laser based on a multimode interference filter and a semiconductor saturable absorber

    Science.gov (United States)

    Wang, M.; Huang, Y. J.; Ruan, S. C.

    2018-04-01

    In this paper, we have demonstrated a theta-cavity passively Q-switched dual-wavelength fiber laser based on a multimode interference filter and a semiconductor saturable absorber. Relying on the properties of the fiber theta cavity, the laser can operate unidirectionally without an optical isolator. A semiconductor saturable absorber played the role of passive Q-switch, while a section of single-mode-multimode-single-mode fiber structure served as a multimode interference filter and was used for selecting the lasing wavelengths. By suitably manipulating the polarization controller, stable dual-wavelength Q-switched operation was obtained at ~1946.8 nm and ~1983.8 nm with a maximum output power and minimum pulse duration of ~47 mW and ~762.5 ns, respectively. The pulse repetition rate can be tuned from ~20.2 kHz to ~79.7 kHz by increasing the pump power from ~2.12 W to ~5.4 W.

  16. Differential modal delay measurements in a graded-index multimode fibre waveguide, using a single-mode fibre for mode selection

    International Nuclear Information System (INIS)

    Sunak, H.R.D.; Soares, S.M.

    1981-01-01

    Differential modal delay (DMD) measurements in graded-index multimode optical fibre waveguides, which are very promising for many types of communication system, were carried out. These DMD measurements give a direct indication of the deviation of the refractive index profile from the optimum value at a given wavelength. For the first time, by using a single-mode fibre, a few guided modes in the graded-index fibre were selected, in two different ways: launching a few modes at the input end, or selecting a few modes at the output end. By doing so, important features of propagation in the fibre were revealed, especially the intermodal coupling that may exist. The importance of this determination of intermodal coupling or mode mixing, particularly when many fibres are joined together in a link, and the merits of DMD measurements in general and their importance for the production of high-bandwidth graded-index fibres are discussed. (Author) [pt

  17. Fiber Optic Pressure Sensor using Multimode Interference

    International Nuclear Information System (INIS)

    Ruiz-Perez, V I; Sanchez-Mondragon, J J; Basurto-Pensado, M A; LiKamWa, P; May-Arrioja, D A

    2011-01-01

    Based on the theory of multimode interference (MMI) and self-image formation, we developed a novel intrinsic optical fiber pressure sensor. The sensing element consists of a section of multimode fiber (MMF) without cladding spliced between two single mode fibers (SMF). The MMI pressure sensor is based on the intensity changes that occur in the transmitted light when the effective refractive index of the MMF is changed. Basically, a thick layer of Polydimethylsiloxane (PDMS) is placed in direct contact with the MMF section, such that the contact area between the PDMS and the fiber will change proportionally with the applied pressure, which results in a variation of the transmitted light intensity. Using this configuration, a good correlation between the measured intensity variations and the applied pressure is obtained. The sensitivity of the sensor is 3 μV/psi, for a range of 0-60 psi, and the maximum resolution of our system is 0.25 psi. Good repeatability is also observed with a standard deviation of 0.0019. The key feature of the proposed pressure sensor is its low fabrication cost, since the cost of the MMF is minimal.

  18. Fiber Optic Pressure Sensor using Multimode Interference

    Energy Technology Data Exchange (ETDEWEB)

    Ruiz-Perez, V I; Sanchez-Mondragon, J J [INAOE, Apartado Postal 51 y 216, Puebla 72000 (Mexico); Basurto-Pensado, M A [CIICAp, Universidad Autonoma del Estado de Morelos (Mexico); LiKamWa, P [CREOL, University of Central Florida, Orlando, FL 32816 (United States); May-Arrioja, D A, E-mail: iruiz@inaoep.mx, E-mail: mbasurto@uaem.mx, E-mail: delta_dirac@hotmail.com, E-mail: daniel_may_arrioja@hotmail.com [UAT Reynosa Rodhe, Universidad Autonoma de Tamaulipas (Mexico)

    2011-01-01

    Based on the theory of multimode interference (MMI) and self-image formation, we developed a novel intrinsic optical fiber pressure sensor. The sensing element consists of a section of multimode fiber (MMF) without cladding spliced between two single mode fibers (SMF). The MMI pressure sensor is based on the intensity changes that occur in the transmitted light when the effective refractive index of the MMF is changed. Basically, a thick layer of Polydimethylsiloxane (PDMS) is placed in direct contact with the MMF section, such that the contact area between the PDMS and the fiber will change proportionally with the applied pressure, which results in a variation of the transmitted light intensity. Using this configuration, a good correlation between the measured intensity variations and the applied pressure is obtained. The sensitivity of the sensor is 3 μV/psi, for a range of 0-60 psi, and the maximum resolution of our system is 0.25 psi. Good repeatability is also observed with a standard deviation of 0.0019. The key feature of the proposed pressure sensor is its low fabrication cost, since the cost of the MMF is minimal.

  19. Nineteen-port photonic lantern with multimode delivery fiber

    DEFF Research Database (Denmark)

    Noordegraaf, Danny; Skovgaard, Peter M. W.; Sandberg, Rasmus Kousholt

    2012-01-01

    We demonstrate efficient multimode (MM) to single-mode (SM) conversion in a 19-port photonic lantern with a 50 μm core MM delivery fiber. The photonic lantern can be used within the field of astrophotonics for coupling MM starlight to an ensemble of SM fibers in order to perform fiber-Bragg-grati… The coupling loss from a 50 μm core MM fiber to an ensemble of 19 SM fibers and back to a 50 μm core MM fiber is below 1.1 dB.
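
    As a quick unit conversion (a hedged back-of-envelope check, not a figure from the paper), the quoted round-trip loss of under 1.1 dB corresponds to preserving roughly 78% of the light through the MM-to-19xSM-to-MM path:

    ```python
    loss_db = 1.1                       # quoted upper bound on round-trip loss
    throughput = 10 ** (-loss_db / 10)  # dB -> linear power fraction
    print(round(throughput, 3))         # ~0.776
    ```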

  20. "Look at what I am saying": Multimodal science teaching

    Science.gov (United States)

    Pozzer-Ardenghi, Lilian

    Language constitutes the dominant representational mode in science teaching, and lectures are still the most prevalent of the teaching methods in school science. In this dissertation, I investigate lectures from a multimodal and communicative perspective to better understand how teaching as a cultural-historical and social activity unfolds; that is, I am concerned with teaching as a communicative event, where a variety of signs (or semiotic resources), expressed in diverse modalities (or modes of communication), are produced and reproduced while the teacher articulates very specific conceptual meanings for the students. Within a trans-disciplinary approach that merges theoretical and methodical frameworks of social and cultural studies of human activity and interaction, communicative and gesture studies, linguistics, semiotics, pragmatics, and studies on teaching and learning science, I investigate teaching as a communicative, dynamic, multimodal, and social activity. My research questions include: What are the resources produced and reproduced in the classroom when the teacher is lecturing? How do these resources interact with each other? What meanings do they carry, and how are these associated to achieve the coherence necessary to accomplish the communication of complex and abstract scientific concepts, not only within one lecture, but also within an entire unit of the curriculum encompassing various lectures? My results show that, when lecturing, the communication of scientific concepts occurs along trajectories driven by the dialectical relation among the various semiotic resources a lecturer makes available that together constitute a unit: the idea. Speech, gestures, and other nonverbal resources are but one-sided expressions of a higher-order communicative meaning unit. 
The iterable nature of the signs produced and reproduced during science lectures permits, supports, and encourages the repetition, variation, and translation of ideas, themes, and languages and

  1. Reference resolution in multi-modal interaction: Preliminary observations

    NARCIS (Netherlands)

    González González, G.R.; Nijholt, Antinus

    2002-01-01

    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply

  2. Reference Resolution in Multi-modal Interaction: Position paper

    NARCIS (Netherlands)

    Fernando, T.; Nijholt, Antinus

    2002-01-01

    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity to spend more research on reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can

  3. Using "Slowmation" to Enable Preservice Primary Teachers to Create Multimodal Representations of Science Concepts

    Science.gov (United States)

    Hoban, Garry; Nielsen, Wendy

    2012-01-01

    Research has identified the value of students constructing their own representations of science concepts using modes such as writing, diagrams, 2-D and 3-D models, images or speech to communicate meaning. "Slowmation" (abbreviated from "Slow Animation") is a simplified way for students, such as preservice teachers, to make a narrated animation…

  4. Multimodal pain management after arthroscopic surgery

    DEFF Research Database (Denmark)

    Rasmussen, Sten

    Multimodal Pain Management after Arthroscopic Surgery By Sten Rasmussen, M.D. The thesis is based on four randomized controlled trials. The main hypothesis was that multimodal pain treatment provides faster recovery after arthroscopic surgery. NSAID was tested against placebo after knee arthroscopy...

  5. Additive and polynomial representations

    CERN Document Server

    Krantz, David H; Suppes, Patrick

    1971-01-01

    Additive and Polynomial Representations deals with major representation theorems in which the qualitative structure is reflected as some polynomial function of one or more numerical functions defined on the basic entities. Examples are additive expressions of a single measure (such as the probability of disjoint events being the sum of their probabilities), and additive expressions of two measures (such as the logarithm of momentum being the sum of log mass and log velocity terms). The book describes the three basic procedures of fundamental measurement as the mathematical pivot, as the utiliz

  6. Multimodality, creativity and children's meaning-making: Drawings ...

    African Journals Online (AJOL)

    Multimodality, creativity and children's meaning-making: Drawings, writings, imaginings. ... Framed by social semiotic theories of communication, multimodal ... to create imaginary worlds and express meanings according to their interests.

  7. A multimodal communication program for aphasia during inpatient rehabilitation: A case study.

    Science.gov (United States)

    Wallace, Sarah E; Purdy, Mary; Skidmore, Elizabeth

    2014-01-01

    Communication is essential for successful rehabilitation, yet few aphasia treatments have been investigated during the acute stroke phase. Alternative modality use, including gesturing, writing, or drawing, has been shown to increase communicative effectiveness in people with chronic aphasia. Instruction in alternative modality use during acute stroke may increase patient communication and participation, therefore resulting in fewer adverse situations and improved rehabilitation outcomes. The study purpose was to explore a multimodal communication program for aphasia (MCPA) implemented during acute stroke rehabilitation. MCPA aims to improve communication modality production, and to facilitate switching among modalities to resolve communication breakdowns. Two adults with severe aphasia completed MCPA beginning at 2 and 3 weeks post-onset of a single left-hemisphere stroke. Probes completed during each session allowed for evaluation of modality production and modality switching accuracy. Participants completed MCPA (10 and 14 treatment sessions, respectively) and their performance on probes suggested increased accuracy in the production of various alternate communication modalities. However, increased switching to an alternate modality was noted for only one participant. Further investigation of multimodal treatment during inpatient rehabilitation is warranted. In particular, comparisons between multimodal and standard treatments would help determine appropriate interventions for this setting.

  8. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech, classify gestures, and process natural language in real-time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  9. Multiple foci of spatial attention in multimodal working memory.

    Science.gov (United States)

    Katus, Tobias; Eimer, Martin

    2016-11-15

    The maintenance of sensory information in working memory (WM) is mediated by the attentional activation of stimulus representations that are stored in perceptual brain regions. Using event-related potentials (ERPs), we measured tactile and visual contralateral delay activity (tCDA/CDA components) in a bimodal WM task to concurrently track the attention-based maintenance of information stored in anatomically segregated (somatosensory and visual) brain areas. Participants received tactile and visual sample stimuli on both sides, and in different blocks, memorized these samples on the same side or on opposite sides. After a retention delay, memory was unpredictably tested for touch or vision. In the same side blocks, tCDA and CDA components simultaneously emerged over the same hemisphere, contralateral to the memorized tactile/visual sample set. In opposite side blocks, these two components emerged over different hemispheres, but had the same sizes and onset latencies as in the same side condition. Our results reveal distinct foci of tactile and visual spatial attention that were concurrently maintained on task-relevant stimulus representations in WM. The independence of spatially-specific biasing mechanisms for tactile and visual WM content suggests that multimodal information is stored in distributed perceptual brain areas that are activated through modality-specific processes that can operate simultaneously and largely independently of each other. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Towards an intelligent framework for multimodal affective data analysis.

    Science.gov (United States)

    Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin

    2015-03-01

    An increasingly large amount of multimodal content is posted on social media websites such as YouTube and Facebook every day. In order to cope with the growth of such multimodal data, there is an urgent need to develop an intelligent multi-modal analysis framework that can effectively extract information from multiple modalities. In this paper, we propose a novel multimodal information extraction agent, which infers and aggregates the semantic and affective information associated with user-generated multimodal data in contexts such as e-learning, e-health, automatic video content tagging and human-computer interaction. In particular, the developed intelligent agent adopts an ensemble feature extraction approach by exploiting the joint use of tri-modal (text, audio and video) features to enhance the multimodal information extraction process. In preliminary experiments using the eNTERFACE dataset, our proposed multi-modal system is shown to achieve an accuracy of 87.95%, outperforming the best state-of-the-art system by more than 10%, or in relative terms, a 56% reduction in error rate. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. (Re-)Examination of Multimodal Augmented Reality

    NARCIS (Netherlands)

    Rosa, N.E.; Werkhoven, P.J.; Hürst, W.O.

    2016-01-01

    The majority of augmented reality (AR) research has been concerned with visual perception, however the move towards multimodality is imminent. At the same time, there is no clear vision of what multimodal AR is. The purpose of this position paper is to consider possible ways of examining AR other

  12. Score level fusion scheme based on adaptive local Gabor features for face-iris-fingerprint multimodal biometric

    Science.gov (United States)

    He, Fei; Liu, Yuanning; Zhu, Xiaodong; Huang, Chun; Han, Ye; Chen, Ying

    2014-05-01

    A multimodal biometric system has been considered a promising technique to overcome the defects of unimodal biometric systems. We have introduced a fusion scheme to gain a better understanding and fusion method for a face-iris-fingerprint multimodal biometric system. In our case, we use particle swarm optimization to train a set of adaptive Gabor filters in order to achieve the proper Gabor basic functions for each modality. For a closer analysis of texture information, two different local Gabor features for each modality are produced by the corresponding Gabor coefficients. Next, all matching scores of the two Gabor features for each modality are projected to a single scalar score via a trained support vector regression model for a final decision. A large-scale dataset is formed to validate the proposed scheme using the Facial Recognition Technology database-fafb and CASIA-V3-Interval together with FVC2004-DB2a datasets. The experimental results demonstrate that, in addition to achieving powerful local Gabor features for each modality and better recognition performance through the fusion strategy, our architecture also outperforms some state-of-the-art individual methods and other fusion approaches for face-iris-fingerprint multimodal biometric systems.
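
    As a rough illustration of the final fusion step described above, the following sketch maps two per-modality matching scores to a single scalar via support vector regression. The data, kernel choice, and score values are invented for illustration; the paper trains its model on real Gabor-feature matching scores.

```python
import numpy as np
from sklearn.svm import SVR

# Toy matching scores from two local Gabor features of one modality:
# genuine pairs tend to score high, impostor pairs low (hypothetical data).
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, size=(50, 2))
impostor = rng.normal(0.3, 0.1, size=(50, 2))
X = np.vstack([genuine, impostor])          # two scores per comparison
y = np.r_[np.ones(50), np.zeros(50)]        # 1 = genuine, 0 = impostor

# Regress a single scalar fusion score from the two feature scores.
fuser = SVR(kernel="rbf", C=1.0).fit(X, y)

fused = fuser.predict([[0.85, 0.78], [0.25, 0.35]])
print(fused)  # the first comparison should receive the higher fused score
```

    A threshold on the fused score would then yield the accept/reject decision.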

  13. Exploring Multi-Modal and Structured Representation Learning for Visual Image and Video Understanding

    OpenAIRE

    Xu, Dan

    2018-01-01

    With the explosive growth of visual data, it has become particularly important to develop intelligent visual understanding techniques for dealing with large amounts of data. Many efforts have been made in recent years to build highly effective and large-scale visual processing algorithms and systems. One of the core aspects in the research line is how to learn robust representations to better describe the data. In this thesis we study the problem of visual image and video understanding and specifi...

  14. Multimodal integration of anatomy and physiology classes: How instructors utilize multimodal teaching in their classrooms

    Science.gov (United States)

    McGraw, Gerald M., Jr.

    Multimodality is the theory of communication as it applies to social and educational semiotics (making meaning through the use of multiple signs and symbols). The term multimodality describes a communication methodology that includes multiple textual, aural, and visual applications (modes) that are woven together to create what is referred to as an artifact. Multimodal teaching methodology attempts to create a deeper meaning to course content by activating the higher cognitive areas of the student's brain, creating a more sustained retention of the information (Murray, 2009). The introduction of multimodality educational methodologies as a means to more optimally engage students has been documented within educational literature. However, studies analyzing the distribution and penetration into basic sciences, more specifically anatomy and physiology, have not been forthcoming. This study used a quantitative survey design to determine the degree to which instructors integrated multimodality teaching practices into their course curricula. The instrument used for the study was designed by the researcher based on evidence found in the literature and sent to members of three associations/societies for anatomy and physiology instructors: the Human Anatomy and Physiology Society; the iTeach Anatomy & Physiology Collaborate; and the American Physiology Society. Respondents totaled 182 instructor members from two- and four-year, private and public colleges; the three organizations collectively have over 13,500 members in over 925 higher learning institutions nationwide. The study concluded that the expansion of multimodal methodologies into anatomy and physiology classrooms is at the beginning of the process and that there is ample opportunity for expansion. Instructors continue to use lecture as their primary means of interaction with students. Email is still the major form of out-of-class communication for full-time instructors. 
Instructors with

  15. Multi-mode energy management strategy for fuel cell electric vehicles based on driving pattern identification using learning vector quantization neural network algorithm

    Science.gov (United States)

    Song, Ke; Li, Feiqiang; Hu, Xiao; He, Lin; Niu, Wenxu; Lu, Sihao; Zhang, Tong

    2018-06-01

    The development of fuel cell electric vehicles can, to a certain extent, alleviate worldwide energy and environmental issues. Because a single energy management strategy cannot meet the complex road conditions of an actual vehicle, this article proposes a multi-mode energy management strategy for electric vehicles with a fuel cell range extender based on driving condition recognition technology, which comprises a pattern recognizer and a multi-mode energy management controller. This paper introduces a learning vector quantization (LVQ) neural network to design the driving pattern recognizer according to a vehicle's driving information. This multi-mode strategy can automatically switch to the genetic-algorithm-optimized thermostat strategy under specific driving conditions in light of the differences in condition recognition results. Simulation experiments were carried out after the model's validity was verified using a dynamometer test bench. Simulation results show that the proposed strategy can obtain better economic performance than the single-mode thermostat strategy under dynamic driving conditions.
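
    A minimal LVQ1 implementation gives a feel for how such a driving-pattern recognizer can work. The two features (mean speed, mean absolute acceleration) and the class clusters below are hypothetical; the paper's network, feature set, and training data are more elaborate.

```python
import numpy as np

def lvq1_train(X, y, n_epochs=30, lr=0.05, seed=0):
    """Minimal LVQ1: one prototype per class, nudged toward same-class
    samples and away from different-class samples."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos = np.array([X[y == c][rng.integers((y == c).sum())] for c in classes])
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = int(np.argmin(d))                        # winning prototype
            sign = 1.0 if classes[w] == y[i] else -1.0
            protos[w] += sign * lr * (X[i] - protos[w])
    return protos, classes

def lvq1_predict(protos, classes, X):
    d = np.linalg.norm(X[:, None, :] - protos[None], axis=2)
    return classes[np.argmin(d, axis=1)]

# Hypothetical driving-pattern features: [mean speed (km/h), mean |accel|].
urban = np.random.default_rng(1).normal([25, 1.2], [5, 0.2], (40, 2))
highway = np.random.default_rng(2).normal([95, 0.4], [8, 0.1], (40, 2))
X = np.vstack([urban, highway])
y = np.r_[np.zeros(40), np.ones(40)]   # 0 = urban, 1 = highway

protos, classes = lvq1_train(X, y)
pred = lvq1_predict(protos, classes, np.array([[30.0, 1.1], [100.0, 0.3]]))
print(pred)  # the slow sample classifies as urban, the fast one as highway
```

    The recognized pattern would then index into the matching energy management mode.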

  16. Multimodal Sensing Interface for Haptic Interaction

    Directory of Open Access Journals (Sweden)

    Carlos Diaz

    2017-01-01

    Full Text Available This paper investigates the integration of a multimodal sensing system for exploring the limits of vibrotactile haptic feedback when interacting with 3D representations of real objects. In this study, the spatial locations of the objects are mapped to the work volume of the user using a Kinect sensor. The position of the user's hand is obtained using marker-based visual processing. The depth information is used to build a vibrotactile map on a haptic glove enhanced with vibration motors. The users can perceive the location and dimension of remote objects by moving their hand inside a scanning region. A marker detection camera provides the location and orientation of the user's hand (glove) to map the corresponding tactile message. A preliminary study was conducted to explore how different users perceive such haptic experiences. Factors such as total number of objects detected, object separation resolution, and dimension-based and shape-based discrimination were evaluated. The preliminary results showed that the localization and counting of objects can be attained with a high degree of success. The users were able to classify groups of objects of different dimensions based on the perceived haptic feedback.
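
    The depth-to-vibration mapping can be sketched in a few lines. The depth range and the linear intensity law below are assumptions for illustration, not the parameters used in the study.

```python
def depth_to_duty_cycle(depth_mm, near=500.0, far=2000.0):
    """Map a Kinect-style depth reading (mm) to a vibration motor duty
    cycle in [0, 1]: nearer objects vibrate harder, out-of-range is off.
    Range limits are illustrative, not taken from the paper."""
    if depth_mm <= 0 or depth_mm > far:
        return 0.0                       # no object sensed in range
    d = max(depth_mm, near)
    return round(1.0 - (d - near) / (far - near), 3)

print(depth_to_duty_cycle(500))    # closest -> full intensity 1.0
print(depth_to_duty_cycle(2000))   # farthest -> 0.0
print(depth_to_duty_cycle(1250))   # midpoint -> 0.5
```

    One such value per motor, indexed by hand position, yields the vibrotactile map.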

  17. Multimodale trafiknet i GIS (Multimodal Traffic Network in GIS)

    DEFF Research Database (Denmark)

    Kronbak, Jacob; Brems, Camilla Riff

    1996-01-01

    The report introduces the use of multi-modal traffic networks within a Geographical Information System (GIS). The necessary theory of modelling multi-modal traffic networks is reviewed and applied to the ARC/INFO GIS by an explorative example.

  18. Multimodal biometric method that combines veins, prints, and shape of a finger

    Science.gov (United States)

    Kang, Byung Jun; Park, Kang Ryoung; Yoo, Jang-Hee; Kim, Jeong Nyeo

    2011-01-01

    Multimodal biometrics provides high recognition accuracy and population coverage by using various biometric features. A single finger contains finger veins, fingerprints, and finger geometry features; by using multimodal biometrics, information on these multiple features can be simultaneously obtained in a short time and their fusion can outperform the use of a single feature. This paper proposes a new finger recognition method based on the score-level fusion of finger veins, fingerprints, and finger geometry features. This research is novel in the following four ways. First, the performances of the finger-vein and fingerprint recognition are improved by using a method based on a local derivative pattern. Second, the accuracy of the finger geometry recognition is greatly increased by combining a Fourier descriptor with principal component analysis. Third, a fuzzy score normalization method is introduced; its performance is better than the conventional Z-score normalization method. Fourth, finger-vein, fingerprint, and finger geometry recognitions are combined by using three support vector machines and a weighted SUM rule. Experimental results showed that the equal error rate of the proposed method was 0.254%, which was lower than those of the other methods.
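
    The score-level fusion step can be illustrated with conventional Z-score normalization and a weighted SUM rule. The scores and weights below are invented, and the sketch deliberately omits the paper's fuzzy normalization and support vector machine stages.

```python
import numpy as np

def z_norm(scores):
    """Conventional Z-score normalization of one matcher's raw scores."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def weighted_sum_fusion(score_lists, weights):
    """Score-level fusion: normalize each matcher's scores, then combine
    with a weighted SUM rule (weights here are illustrative, not the
    trained SVM outputs used in the paper)."""
    normed = np.vstack([z_norm(s) for s in score_lists])
    w = np.asarray(weights, dtype=float)
    return w @ normed / w.sum()

vein   = [0.91, 0.40, 0.85, 0.30]   # hypothetical per-probe match scores
prints = [0.88, 0.35, 0.80, 0.45]
shape  = [0.70, 0.50, 0.65, 0.55]
fused = weighted_sum_fusion([vein, prints, shape], weights=[0.5, 0.35, 0.15])
print(fused)  # genuine probes (indices 0, 2) rank above impostors (1, 3)
```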

  19. Learning Document Semantic Representation with Hybrid Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Yan Yan

    2015-01-01

    it is also an effective way to remove noise from the different document representation types; the DBN can extract deeper abstractions of the document, enabling the model to learn a sufficient semantic representation. At the same time, we explore different input strategies for distributed semantic representation. Experimental results show that our model, which uses word embeddings instead of single words, achieves better performance.

  20. MINERVA - a multi-modal radiation treatment planning system

    Energy Technology Data Exchange (ETDEWEB)

    Wemple, C.A. E-mail: cew@enel.gov; Wessol, D.E.; Nigg, D.W.; Cogliati, J.J.; Milvich, M.L.; Frederickson, C.; Perkins, M.; Harkin, G.J

    2004-11-01

    Researchers at the Idaho National Engineering and Environmental Laboratory and Montana State University have undertaken development of MINERVA, a patient-centric, multi-modal, radiation treatment planning system. This system can be used for planning and analyzing several radiotherapy modalities, either singly or combined, using common modality independent image and geometry construction and dose reporting and guiding. It employs an integrated, lightweight plugin architecture to accommodate multi-modal treatment planning using standard interface components. The MINERVA design also facilitates the future integration of improved planning technologies. The code is being developed with the Java Virtual Machine for interoperability. A full computation path has been established for molecular targeted radiotherapy treatment planning, with the associated transport plugin developed by researchers at the Lawrence Livermore National Laboratory. Development of the neutron transport plugin module is proceeding rapidly, with completion expected later this year. Future development efforts will include development of deformable registration methods, improved segmentation methods for patient model definition, and three-dimensional visualization of the patient images, geometry, and dose data. Transport and source plugins will be created for additional treatment modalities, including brachytherapy, external beam proton radiotherapy, and the EGSnrc/BEAMnrc codes for external beam photon and electron radiotherapy.

  1. Training of Perceptual Motor Skills in Multimodal Virtual Environments

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Full Text Available Multimodal, immersive, virtual reality (VR) techniques open new perspectives for perceptual-motor skill trainers. They also introduce new risks and dangers. This paper describes the benefits and pitfalls of multimodal training and the cognitive building blocks of multimodal VR training simulators.

  2. Multimodal processes scheduling in mesh-like network environment

    Directory of Open Access Journals (Sweden)

    Bocewicz Grzegorz

    2015-06-01

    Full Text Available Multimodal process planning and scheduling play a pivotal role in many different domains, including city networks, multimodal transportation systems, and computer and telecommunication networks. A multimodal process can be seen as a process partially processed by locally executed cyclic processes. In that context, the concept of a Mesh-like Multimodal Transportation Network (MMTN), in which several isomorphic subnetworks interact with each other via distinguished subsets of common shared intermodal transport interchange facilities (such as a railway station, bus station, or bus/tram stop) so as to provide a variety of demand-responsive passenger transportation services, is examined. Consider a mesh-like layout of a passenger transport network equipped with different lines, including buses, trams, metro, trains, etc., where passenger flows are treated as multimodal processes. The goal is to provide a declarative model enabling one to state a constraint satisfaction problem aimed at scheduling multimodal transportation processes encompassing passenger flow itineraries. The main objective is then to provide conditions guaranteeing the solvability of particular transport line scheduling, i.e., guaranteeing the right match-up of locally acting cyclic bus, tram, metro, and train schedules to given passenger flow itineraries.
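
    A toy version of such a constraint satisfaction problem can be stated and solved by enumeration: choose cyclic start offsets for two lines so that the transfer wait at a shared interchange stays within bounds. The cycle length and wait bounds are assumptions for illustration only.

```python
from itertools import product

# Toy declarative model: two cyclic lines meet at a shared interchange.
# Decision variables are the lines' start offsets (minutes within a
# common cycle); the constraint bounds the passenger transfer wait.
CYCLE = 20                     # both lines repeat every 20 min (assumed)
MIN_WAIT, MAX_WAIT = 2, 6      # acceptable bus -> tram transfer window

def transfer_wait(bus_offset, tram_offset):
    """Wait at the interchange for the next tram after a bus arrival."""
    return (tram_offset - bus_offset) % CYCLE

solutions = [(b, t) for b, t in product(range(CYCLE), repeat=2)
             if MIN_WAIT <= transfer_wait(b, t) <= MAX_WAIT]
print(len(solutions))        # number of feasible offset pairs
print((0, 4) in solutions)   # tram 4 min after the bus is feasible
```

    Real instances replace the brute-force enumeration with a constraint solver over many lines and itineraries.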

  3. Multifocus confocal Raman microspectroscopy for fast multimode vibrational imaging of living cells.

    Science.gov (United States)

    Okuno, Masanari; Hamaguchi, Hiro-o

    2010-12-15

    We have developed a multifocus confocal Raman microspectroscopic system for the fast multimode vibrational imaging of living cells. It consists of an inverted microscope equipped with a microlens array, a pinhole array, a fiber bundle, and a multichannel Raman spectrometer. Forty-eight Raman spectra from 48 foci under the microscope are simultaneously obtained by using multifocus excitation and image-compression techniques. The multifocus confocal configuration suppresses the background generated from the cover glass and the cell culturing medium so that high-contrast images are obtainable with a short accumulation time. The system enables us to obtain multimode (10 different vibrational modes) vibrational images of living cells in tens of seconds with only 1 mW laser power at one focal point. This image acquisition time is more than 10 times faster than that in conventional single-focus Raman microspectroscopy.

  4. MULTIMODAL ANALGESIA AFTER TOTAL HIP ARTHROPLASTY

    Directory of Open Access Journals (Sweden)

    I. G. Mukutsa

    2012-01-01

    Full Text Available Purpose - to assess the effect of multimodal analgesia on the early rehabilitation of patients after hip replacement. Materials and methods. A prospective single-centre randomized study included 32 patients. Patients of the 1st group received paracetamol, ketorolac and tramadol; patients of the 2nd group, intravenous ketorolac; and patients of the 3rd group, etoricoxib and gabapentin. Patients of the 2nd and 3rd groups underwent epidural analgesia with ropivacaine. Multimodal analgesia was carried out for 48 hours after the surgery. Pain intensity was assessed with the VAS (visual analogue scale), and the neuropathic pain component with the DN4 questionnaire. Times were recorded for the first and second verticalization of patients using distance walkers, and the distance covered within 2 minutes was measured. Results. Pain intensity of more than 50 mm on the VAS during movement at least once within 48 hours after the surgery occurred in 9% of the 1st group, 22% of the 2nd group, and 8% of the 3rd group. The number of patients with a neuropathic pain component decreased from 25% to 3% (p ≤ 0.05). The first verticalization was performed 10 ± 8 hours after the surgery, the second 21 ± 8 hours later. Two-minute walk distances were 5 ± 3 and 8 ± 4 m, respectively. More frequent adverse events were noted in patients of the 1st group compared to patients of the 2nd and 3rd groups during the first (91%, 33% and 25%, p ≤ 0.05) and the second verticalization (70%, 25% and 17%, p ≤ 0.05). Multimodal analgesia allows successful activation of patients after hip replacement within the first day after the surgery. Patients of the 3rd group showed a tendency toward the optimal combination of efficacy and safety of analgesic therapy.

  5. Manifold regularized multitask feature learning for multimodality disease classification.

    Science.gov (United States)

    Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang

    2015-02-01

    Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods have typically been used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. Furthermore, we also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from AD neuroimaging initiative database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis. © 2014 Wiley Periodicals, Inc.
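
    The group-sparsity regularizer at the heart of such multitask feature selection can be illustrated by the proximal operator of the l2,1 norm, which zeroes entire feature rows so that the same features are selected across all modalities. This is a generic sketch, not the paper's full solver (which adds the manifold Laplacian term and multikernel SVM fusion).

```python
import numpy as np

def prox_l21(W, lam):
    """Proximal operator of the l2,1 group-sparsity norm: shrink each
    row of W (one feature across all modalities/tasks) toward zero,
    zeroing whole rows so identical features are kept in every task."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

W = np.array([[0.90, 1.10],    # feature weighted strongly in both tasks
              [0.05, 0.08],    # weak feature -> whole row driven to zero
              [0.70, 0.60]])
W_sparse = prox_l21(W, lam=0.2)
print(W_sparse)  # row 1 is exactly zero; rows 0 and 2 are mildly shrunk
```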

  6. Multimode approximation for {sup 238}U photofission at intermediate energies

    Energy Technology Data Exchange (ETDEWEB)

    Demekhina, N. A., E-mail: demekhina@lnr.jinr.ru [Yerevan Physics Institute (Armenia); Karapetyan, G. S. [Yerevan State University (Armenia)

    2008-01-15

    The yields of products originating from {sup 238}U photofission are measured at the bremsstrahlung endpoint energies of 50 and 3500 MeV. Charge and mass distributions of fission fragments are obtained. Symmetric and asymmetric channels in {sup 238}U photofission are singled out on the basis of the model of multimode fission. This decomposition makes it possible to estimate the contributions of various fission components and to calculate the fissilities of {sup 238}U in the photon-energy regions under study.

  7. A Multimodal Communication Program for Aphasia during Inpatient Rehabilitation: A Case Study

    Science.gov (United States)

    Wallace, Sarah E.; Purdy, Mary; Skidmore, Elizabeth

    2014-01-01

    BACKGROUND Communication is essential for successful rehabilitation, yet few aphasia treatments have been investigated during the acute stroke phase. Alternative modality use including gesturing, writing, or drawing has been shown to increase communicative effectiveness in people with chronic aphasia. Instruction in alternative modality use during acute stroke may increase patient communication and participation, thereby resulting in fewer adverse situations and improved rehabilitation outcomes. OBJECTIVE The study purpose was to explore a multimodal communication program for aphasia (MCPA) implemented during acute stroke rehabilitation. MCPA aims to improve communication modality production, and to facilitate switching among modalities to resolve communication breakdowns. METHODS Two adults with severe aphasia completed MCPA beginning at 2 and 3 weeks post onset of a single left-hemisphere stroke. Probes completed during each session allowed for evaluation of modality production and modality switching accuracy. RESULTS Participants completed MCPA (10 and 14 treatment sessions, respectively) and their performance on probes suggested increased accuracy in the production of various alternate communication modalities. However, increased switching to an alternate modality was noted for only one participant. CONCLUSIONS Further investigation of multimodal treatment during inpatient rehabilitation is warranted. In particular, comparisons between multimodal and standard treatments would help determine appropriate interventions for this setting. PMID:25227547

  8. Associative learning changes cross-modal representations in the gustatory cortex.

    Science.gov (United States)

    Vincis, Roberto; Fontanini, Alfredo

    2016-08-30

    A growing body of literature has demonstrated that primary sensory cortices are not exclusively unimodal, but can respond to stimuli of different sensory modalities. However, several questions concerning the neural representation of cross-modal stimuli remain open. Indeed, it is poorly understood if cross-modal stimuli evoke unique or overlapping representations in a primary sensory cortex and whether learning can modulate these representations. Here we recorded single unit responses to auditory, visual, somatosensory, and olfactory stimuli in the gustatory cortex (GC) of alert rats before and after associative learning. We found that, in untrained rats, the majority of GC neurons were modulated by a single modality. Upon learning, both prevalence of cross-modal responsive neurons and their breadth of tuning increased, leading to a greater overlap of representations. Altogether, our results show that the gustatory cortex represents cross-modal stimuli according to their sensory identity, and that learning changes the overlap of cross-modal representations.

  9. Polarization Characterization of a Multi-Moded Feed Structure

    Data.gov (United States)

    National Aeronautics and Space Administration — The Polarization Characterization of a Multi-Moded Feed Structure project characterizes the polarization response of a multi-moded feed horn as an innovative...

  10. Filter. Remix. Make.: Cultivating Adaptability through Multimodality

    Science.gov (United States)

    Dusenberry, Lisa; Hutter, Liz; Robinson, Joy

    2015-01-01

    This article establishes traits of adaptable communicators in the 21st century, explains why adaptability should be a goal of technical communication educators, and shows how multimodal pedagogy supports adaptability. Three examples of scalable, multimodal assignments (infographics, research interviews, and software demonstrations) that evidence…

  11. Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data

    Science.gov (United States)

    Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.

    2018-05-01

    In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data on both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.

  12. Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox

    Directory of Open Access Journals (Sweden)

    Andre Santos Ribeiro

    2015-07-01

    Full Text Available Aim. In recent years, connectivity studies using neuroimaging data have increased the understanding of the organization of large-scale structural and functional brain networks. However, data analysis is time consuming as rigorous procedures must be assured, from structuring data and pre-processing to modality-specific data procedures. Until now, no single toolbox has been able to perform such investigations on truly multimodal image data from beginning to end, including the combination of different connectivity analyses. Thus, we have developed the Multimodal Imaging Brain Connectivity Analysis (MIBCA) toolbox with the goal of diminishing time waste in data processing and allowing an innovative and comprehensive approach to brain connectivity. Materials and Methods. The MIBCA toolbox is a fully automated all-in-one connectivity toolbox that offers pre-processing, connectivity and graph theoretical analyses of multimodal image data such as diffusion-weighted imaging, functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). It was developed in the MATLAB environment and pipelines well-known neuroimaging software such as Freesurfer, SPM, FSL, and Diffusion Toolkit. It further implements routines for the construction of structural, functional and effective or combined connectivity matrices, as well as routines for the extraction and calculation of imaging and graph-theory metrics, the latter also using functions from the Brain Connectivity Toolbox. Finally, the toolbox performs group statistical analysis and enables data visualization in the form of matrices, 3D brain graphs and connectograms. In this paper the MIBCA toolbox is presented by illustrating its capabilities using multimodal image data from a group of 35 healthy subjects (19–73 years old) with volumetric T1-weighted, diffusion tensor imaging, and resting state fMRI data, and 10 subjects with 18F-altanserin PET data. Results. It was observed both a high inter
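
    The kind of connectivity-matrix and graph-metric computation such a toolbox automates can be sketched on toy data: threshold a correlation matrix into a binary adjacency matrix, then read off simple graph measures. The node count, threshold, and data below are purely illustrative.

```python
import numpy as np

# Toy functional connectivity: correlate synthetic regional time series,
# threshold into a binary undirected graph, then compute simple
# graph-theory metrics (degree and density).
rng = np.random.default_rng(3)
ts = rng.normal(size=(6, 120))            # 6 regions x 120 time points
corr = np.corrcoef(ts)                    # 6 x 6 correlation matrix
adj = (np.abs(corr) > 0.2).astype(int)
np.fill_diagonal(adj, 0)                  # no self-connections

degree = adj.sum(axis=0)                  # connections per region
n = adj.shape[0]
density = adj.sum() / (n * (n - 1))       # fraction of possible edges
print(degree, round(density, 3))
```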

  13. The Weyl approach to the representation theory of reflection equation algebra

    International Nuclear Information System (INIS)

    Saponov, P A

    2004-01-01

    The present paper deals with the representation theory of the reflection equation algebra connected to a Hecke-type R-matrix. Up to some reasonable additional conditions, the R-matrix is arbitrary (not necessarily originating from quantum groups). We suggest a universal method for constructing finite-dimensional irreducible representations in the framework of the Weyl approach, well known in the representation theory of classical Lie groups and algebras. With this method a series of irreducible modules is constructed. The modules are parametrized by Young diagrams. The spectrum of the central elements s_k = Tr_q L^k is calculated in the single-row and single-column representations. A rule for the decomposition of the tensor product of modules into a direct sum of irreducible components is also suggested.

  14. Towards New Mappings between Emotion Representation Models

    Directory of Open Access Journals (Sweden)

    Agnieszka Landowska

    2018-02-01

    Full Text Available There are several models for representing emotions in affect-aware applications, and available emotion recognition solutions provide results using diverse emotion models. As multimodal fusion is beneficial in terms of both accuracy and reliability of emotion recognition, one of the challenges is mapping between the models of affect representation. This paper addresses this issue by: proposing a procedure to elaborate new mappings, recommending a set of metrics for evaluation of the mapping accuracy, and delivering new mapping matrices for estimating the dimensions of a Pleasure-Arousal-Dominance model from Ekman’s six basic emotions. The results are based on an analysis using three datasets that were constructed based on affect-annotated lexicons. The new mappings were obtained with linear regression learning methods. The proposed mappings showed better results on the datasets in comparison with the state-of-the-art matrix. The procedure, as well as the proposed metrics, might be used, not only in evaluation of the mappings between representation models, but also in comparison of emotion recognition and annotation results. Moreover, the datasets are published along with the paper and new mappings might be created and evaluated using the proposed methods. The study results might be interesting for both researchers and developers, who aim to extend their software solutions with affect recognition techniques.
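
    The mapping idea can be sketched with ordinary linear regression from six Ekman intensities to three Pleasure-Arousal-Dominance dimensions. All numeric values below are invented for illustration; the paper learns its matrices from affect-annotated lexicon datasets.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training pairs: intensities of Ekman's six basic emotions
# (anger, disgust, fear, joy, sadness, surprise) annotated together with
# PAD coordinates in [-1, 1].
ekman = np.array([[0.9, 0.1, 0.1, 0.0, 0.1, 0.1],   # mostly anger
                  [0.0, 0.0, 0.1, 0.9, 0.0, 0.2],   # mostly joy
                  [0.1, 0.0, 0.9, 0.0, 0.2, 0.3],   # mostly fear
                  [0.0, 0.1, 0.0, 0.1, 0.9, 0.0]])  # mostly sadness
pad = np.array([[-0.5,  0.6,  0.3],   # anger: unpleasant, aroused, dominant
                [ 0.8,  0.5,  0.4],   # joy
                [-0.6,  0.7, -0.4],   # fear
                [-0.7, -0.3, -0.3]])  # sadness

# Fit the linear mapping; its coefficients play the role of a mapping matrix.
mapping = LinearRegression().fit(ekman, pad)
pad_pred = mapping.predict([[0.0, 0.0, 0.0, 1.0, 0.0, 0.0]])[0]
print(pad_pred)  # estimated (pleasure, arousal, dominance) for a pure-joy input
```

    Evaluation would then compare such estimates against held-out annotations using the metrics the paper recommends.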

  15. Generating Cognitive Dissonance in Student Interviews through Multiple Representations

    Science.gov (United States)

    Linenberger, Kimberly J.; Bretz, Stacey Lowery

    2012-01-01

    This study explores what students understand about enzyme-substrate interactions, using multiple representations of the phenomenon. In this paper we describe our use of the 3 Phase-Single Interview Technique with multiple representations to generate cognitive dissonance within students in order to uncover misconceptions of enzyme-substrate…

  16. Self-identification with another person's face: the time relevant role of multimodal brain areas in the enfacement illusion.

    Science.gov (United States)

    Bufalari, Ilaria; Porciello, Giuseppina; Sperduti, Marco; Minio-Paluello, Ilaria

    2015-04-01

    The illusory subjective experience of looking at one's own face while in fact looking at another person's face can surprisingly be induced by simple synchronized visuotactile stimulation of the two faces. A recent study (Apps MA, Tajadura-Jiménez A, Sereno M, Blanke O, Tsakiris M. Cereb Cortex. First published August 20, 2013; doi:10.1093/cercor/bht199) investigated for the first time the role of visual unimodal and temporoparietal multimodal brain areas in the enfacement illusion and suggested a model in which multisensory mechanisms are crucial to construct and update self-face representation. Copyright © 2015 the American Physiological Society.

  17. Integration of geospatial multi-mode transportation Systems in Kuala Lumpur

    Science.gov (United States)

    Ismail, M. A.; Said, M. N.

    2014-06-01

    Public transportation serves people with mobility and accessibility to workplaces, health facilities, community resources, and recreational areas across the country. Development in the application of Geographical Information Systems (GIS) to transportation problems represents one of the most important areas of GIS-technology today. To show the importance of GIS network analysis, this paper highlights the determination of the optimal path between two or more destinations based on multi-mode concepts. The abstract connector is introduced in this research as an approach to integrate urban public transportation in Kuala Lumpur, Malaysia including facilities such as Light Rapid Transit (LRT), Keretapi Tanah Melayu (KTM) Komuter, Express Rail Link (ERL), KL Monorail, road driving as well as pedestrian modes into a single intelligent data model. To assist such analysis, ArcGIS's Network Analyst functions are used whereby the final output includes the total distance, total travelled time, directional maps produced to find the quickest, shortest paths, and closest facilities based on either time or distance impedance for multi-mode route analysis.
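    The "abstract connector" idea above can be sketched as a graph problem: each node is a (place, mode) pair, connector edges model transfers between modes at the same place, and a shortest-path search over time impedance yields the quickest multi-mode route. The network below is hypothetical, not Kuala Lumpur data.

```python
import heapq

# Multi-mode routing sketch: nodes are (place, mode) pairs; "connector"
# edges model entering, transferring between, or exiting transit modes.
# All place names and travel times (minutes) are invented for illustration.

def dijkstra(graph, start, goal):
    """Least-time path over a dict {node: [(neighbor, minutes), ...]}."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

graph = {
    ("A", "walk"): [(("A", "LRT"), 4)],            # connector: enter LRT
    ("A", "LRT"): [(("B", "LRT"), 12)],            # LRT ride A -> B
    ("B", "LRT"): [(("B", "monorail"), 5)],        # connector: transfer
    ("B", "monorail"): [(("C", "monorail"), 9)],   # monorail ride B -> C
    ("C", "monorail"): [(("C", "walk"), 2)],       # connector: exit
}

cost, path = dijkstra(graph, ("A", "walk"), ("C", "walk"))
```

Swapping the minute weights for distances gives the shortest (rather than quickest) route, mirroring the time/distance impedance choice in the Network Analyst workflow.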

  18. Integration of geospatial multi-mode transportation Systems in Kuala Lumpur

    International Nuclear Information System (INIS)

    Ismail, M A; Said, M N

    2014-01-01

    Public transportation serves people with mobility and accessibility to workplaces, health facilities, community resources, and recreational areas across the country. Development in the application of Geographical Information Systems (GIS) to transportation problems represents one of the most important areas of GIS-technology today. To show the importance of GIS network analysis, this paper highlights the determination of the optimal path between two or more destinations based on multi-mode concepts. The abstract connector is introduced in this research as an approach to integrate urban public transportation in Kuala Lumpur, Malaysia including facilities such as Light Rapid Transit (LRT), Keretapi Tanah Melayu (KTM) Komuter, Express Rail Link (ERL), KL Monorail, road driving as well as pedestrian modes into a single intelligent data model. To assist such analysis, ArcGIS's Network Analyst functions are used whereby the final output includes the total distance, total travelled time, directional maps produced to find the quickest, shortest paths, and closest facilities based on either time or distance impedance for multi-mode route analysis

  19. A representation independent propagator. Pt. 1. Compact Lie groups

    International Nuclear Information System (INIS)

    Tome, W.A.

    1995-01-01

    Conventional path integral expressions for propagators are representation dependent. Rather than having to adapt each propagator to the representation in question, it is shown that for compact Lie groups it is possible to introduce a propagator that is representation independent. For a given set of kinematical variables this propagator is a single function, independent of any particular choice of fiducial vector, which nonetheless correctly propagates each element of the coherent state representation associated with these kinematical variables. Although the configuration space is in general curved, the lattice phase-space path integral for the representation independent propagator nevertheless has the form appropriate to flat space. To illustrate the general theory a representation independent propagator is explicitly constructed for the Lie group SU(2). (orig.)

  20. Label-free evaluation of hepatic microvesicular steatosis with multimodal coherent anti-Stokes Raman scattering microscopy.

    Directory of Open Access Journals (Sweden)

    Thuc T Le

    Full Text Available Hepatic microvesicular steatosis is a hallmark of drug-induced hepatotoxicity and early-stage fatty liver disease. Current histopathology techniques are inadequate for the clinical evaluation of hepatic microvesicular steatosis. In this paper, we explore the use of multimodal coherent anti-Stokes Raman scattering (CARS) microscopy for the detection and characterization of hepatic microvesicular steatosis. We show that CARS microscopy is more sensitive than Oil Red O histology for the detection of microvesicular steatosis. Computer-assisted analysis of liver lipid level based on CARS signal intensity is consistent with triglyceride measurement using a standard biochemical assay. Most importantly, in a single measurement procedure on unprocessed and unstained liver tissues, multimodal CARS imaging provides a wealth of critical information including the detection of microvesicular steatosis and quantitation of liver lipid content, number and size of lipid droplets, and lipid unsaturation and packing order of lipid droplets. Such information can only be assessed by multiple different methods on processed and stained liver tissues or tissue extracts using current standard analytical techniques. Multimodal CARS microscopy also permits label-free identification of lipid-rich non-parenchymal cells. In addition, label-free and non-perturbative CARS imaging allows rapid screening of mitochondrial toxin-induced microvesicular steatosis in primary hepatocyte cultures. With its sensitivity and versatility, multimodal CARS microscopy should be a powerful tool for the clinical evaluation of hepatic microvesicular steatosis.

  1. Multimodal Pedagogies for Teacher Education in TESOL

    Science.gov (United States)

    Yi, Youngjoo; Angay-Crowder, Tuba

    2016-01-01

    As a growing number of English language learners (ELLs) engage in digital and multimodal literacy practices in their daily lives, teachers are starting to incorporate multimodal approaches into their instruction. However, anecdotal and empirical evidence shows that teachers often feel unprepared for integrating such practices into their curricula…

  2. Histopathology in 3D: From three-dimensional reconstruction to multi-stain and multi-modal analysis

    Directory of Open Access Journals (Sweden)

    Derek Magee

    2015-01-01

    Full Text Available Light microscopy applied to the domain of histopathology has traditionally been a two-dimensional imaging modality. Several authors, including the authors of this work, have extended the use of digital microscopy to three dimensions by stacking digital images of serial sections using image-based registration. In this paper, we give an overview of our approach, and of extensions to the approach to register multi-modal data sets such as sets of interleaved histopathology sections with different stains, and sets of histopathology images to radiology volumes with very different appearance. Our approach involves transforming dissimilar images into a multi-channel representation derived from co-occurrence statistics between roughly aligned images.

  3. Upper Mantle Shear Wave Structure Beneath North America From Multi-mode Surface Wave Tomography

    Science.gov (United States)

    Yoshizawa, K.; Ekström, G.

    2008-12-01

    The upper mantle structure beneath the North American continent has been investigated from measurements of multi-mode phase speeds of Love and Rayleigh waves. To estimate fundamental-mode and higher-mode phase speeds of surface waves from a single seismogram at regional distances, we have employed a method of nonlinear waveform fitting based on a direct model-parameter search using the neighbourhood algorithm (Yoshizawa & Kennett, 2002). The method of the waveform analysis has been fully automated by employing empirical quantitative measures for evaluating the accuracy/reliability of estimated multi-mode phase dispersion curves, and thus it is helpful in processing the dramatically increasing numbers of seismic data from the latest regional networks such as USArray. As a first step toward modeling the regional anisotropic shear-wave velocity structure of the North American upper mantle with extended vertical resolution, we have applied the method to long-period three-component records of seismic stations in North America, which mostly comprise the GSN and US regional networks as well as the permanent and transportable USArray stations distributed by the IRIS DMC. Preliminary multi-mode phase-speed models show large-scale patterns of isotropic heterogeneity, such as a strong velocity contrast between the western and central/eastern United States, which are consistent with the recent global and regional models (e.g., Marone, et al. 2007; Nettles & Dziewonski, 2008). We will also discuss radial anisotropy of shear wave speed beneath North America from multi-mode dispersion measurements of Love and Rayleigh waves.

  4. Implications of Multimodal Learning Models for foreign language teaching and learning

    Directory of Open Access Journals (Sweden)

    Miguel Farías

    2011-04-01

    Full Text Available This literature review article approaches the topic of information and communications technologies from the perspective of their impact on the language learning process, with particular emphasis on the most appropriate designs of multimodal texts as informed by models of multimodal learning. The first part contextualizes multimodality within the fields of discourse studies, the psychology of learning and CALL; the second deals with multimodal conceptions of reading and writing by discussing hypertextuality and literacy. A final section outlines the possible implications of multimodal learning models for foreign language teaching and learning.

  5. The effectiveness of multi modal representation text books to improve student's scientific literacy of senior high school students

    Science.gov (United States)

    Zakiya, Hanifah; Sinaga, Parlindungan; Hamidah, Ida

    2017-05-01

    Field studies have shown that students' scientific literacy is still low. One root of the problem lies in the textbooks used in learning, which are not oriented toward the components of scientific literacy. This study focused on the effectiveness of textbooks designed to build scientific literacy through multimodal representation. The textbook development method used was the Design Representational Approach Learning to Write (DRALW). The textbook design, applied to the topic of "Kinetic Theory of Gases," was implemented with grade XI senior high school students. Effectiveness was determined from the effect size and the normalized gain, while the hypothesis was tested using an independent t-test. The results showed that textbooks developed using multimodal representation can improve students' scientific literacy skills. Based on the effect size, the textbooks developed with multimodal representation were found effective in improving students' scientific literacy, and the improvement occurred in all competences and knowledge domains of scientific literacy. The hypothesis test showed a significant difference in scientific literacy between the class that used textbooks with multimodal representation and the class that used the regular textbook used in schools.
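    The evaluation described above rests on two standard quantities: the normalized gain between pre- and post-test scores, and an independent t statistic comparing the two classes. A minimal sketch, with hypothetical scores:

```python
import math

# Sketch of the study's evaluation measures: Hake's normalized gain and an
# independent (Welch's) t statistic. All scores below are invented.

def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: (post - pre) / (max - pre)."""
    return (post - pre) / (max_score - pre)

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

g = normalized_gain(pre=40.0, post=70.0)  # medium gain by Hake's bands

# Hypothetical per-student gains for the treatment and control classes.
treatment = [0.55, 0.60, 0.48, 0.65]
control = [0.30, 0.35, 0.28, 0.33]
t = welch_t(treatment, control)
```

The t value would then be compared against the appropriate critical value (or converted to a p-value) to decide significance, as in the study's hypothesis test.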

  6. Label-free imaging of arterial cells and extracellular matrix using a multimodal CARS microscope

    Science.gov (United States)

    Wang, Han-Wei; Le, Thuc T.; Cheng, Ji-Xin

    2008-04-01

    A multimodal nonlinear optical imaging system that integrates coherent anti-Stokes Raman scattering (CARS), sum-frequency generation (SFG), and two-photon excitation fluorescence (TPEF) on the same platform was developed and applied to visualize single cells and extracellular matrix in fresh carotid arteries. CARS signals arising from CH2-rich membranes allowed visualization of endothelial cells and smooth muscle cells of the arterial wall. Additionally, CARS microscopy allowed vibrational imaging of elastin and collagen fibrils which are also rich in CH2 bonds. The extracellular matrix organization was further confirmed by TPEF signals arising from elastin's autofluorescence and SFG signals arising from collagen fibrils' non-centrosymmetric structure. Label-free imaging of significant components of arterial tissues suggests the potential application of multimodal nonlinear optical microscopy to monitor onset and progression of arterial diseases.

  7. Performance Evaluation of Multimodal Multifeature Authentication System Using KNN Classification

    Directory of Open Access Journals (Sweden)

    Gayathri Rajagopal

    2015-01-01

    Full Text Available This research proposes a multimodal multifeature biometric system for human recognition using two traits, that is, palmprint and iris. The purpose of this research is to analyse the integration of a multimodal and multifeature biometric system using feature-level fusion to achieve better performance. The main aim of the proposed system is to increase the recognition accuracy using feature-level fusion. The features at the feature level are raw biometric data which contain rich information when compared to decision-level and matching-score-level fusion. Hence information fused at the feature level is expected to yield improved recognition accuracy. However, information fused at the feature level suffers from the curse of dimensionality; here PCA (principal component analysis) is used to reduce the dimensionality of the feature sets, as they are high dimensional. The proposed multimodal results were compared with other multimodal and monomodal approaches. Among these comparisons, the multimodal multifeature palmprint-iris fusion offers significant improvements in the accuracy of the suggested multimodal biometric system. The proposed algorithm is tested using a virtual multimodal database created from the UPOL iris database and the PolyU palmprint database.
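    Feature-level fusion as described here amounts to concatenating the raw feature vectors of the two traits before classification. A minimal sketch follows, assuming palmprint and iris features are already extracted; the PCA reduction step is omitted, a 1-NN decision stands in for the paper's KNN classifier, and all numbers are toy values.

```python
# Feature-level fusion sketch: concatenate per-trait feature vectors and
# classify by nearest neighbour. Hypothetical toy features throughout;
# the paper's PCA dimensionality reduction is not included.

def fuse(palm, iris):
    """Feature-level fusion: concatenate the raw feature vectors."""
    return palm + iris

def nearest_neighbor(query, gallery):
    """Return the identity whose fused template is closest (Euclidean)."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    return min(gallery, key=lambda ident: dist(query, gallery[ident]))

gallery = {
    "user1": fuse([0.1, 0.9], [0.2, 0.8, 0.1]),
    "user2": fuse([0.8, 0.2], [0.7, 0.1, 0.9]),
}
probe = fuse([0.15, 0.85], [0.25, 0.75, 0.15])  # noisy sample of user1
match = nearest_neighbor(probe, gallery)
```

Because fusion happens before distance computation, a weak iris sample can still be rescued by a strong palmprint sample in the same fused vector, which is the motivation for fusing at the feature level rather than at the score level.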

  8. Rubber hand illusion, empathy, and schizotypal experiences in terms of self-other representations.

    Science.gov (United States)

    Asai, Tomohisa; Mao, Zhu; Sugimori, Eriko; Tanno, Yoshihiko

    2011-12-01

    When participants observed a rubber hand being touched, their sense of touch was activated (rubber hand illusion: RHI). While this illusion might be caused by multi-modal integration, it may also be related to empathic function, which enables us to simulate the observed information. We examined individual differences in the RHI, including empathic and schizotypal personality traits, as previous research had suggested that schizophrenic patients would be more subject to the RHI. The results indicated that people who experience a stronger RHI might simultaneously have stronger empathic and schizotypal personality traits. We discussed these relationships in terms of self-other representations. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Percorsi linguistici e semiotici: Critical Multimodal Analysis of Digital Discourse

    Directory of Open Access Journals (Sweden)

    edited by Ilaria Moschini

    2014-12-01

    Full Text Available The language section of LEA - edited by Ilaria Moschini - is dedicated to the Critical Multimodal Analysis of Digital Discourse, an approach that encompasses the linguistic and semiotic detailed investigation of texts within a socio-cultural perspective. It features an interview with Professor Theo van Leeuwen by Ilaria Moschini and four essays: “Retwitting, reposting, repinning; reshaping identities online: Towards a social semiotic multimodal analysis of digital remediation” by Elisabetta Adami; “Multimodal aspects of corporate social responsibility communication” by Carmen Daniela Maier; “Pervasive Technologies and the Paradoxes of Multimodal Digital Communication” by Sandra Petroni and “Can the powerless speak? Linguistic and multimodal corporate media manipulation in digital environments: the case of Malala Yousafzai” by Maria Grazia Sindoni. 

  10. A single parameter representation of hygroscopic growth and cloud condensation nucleus activity – Part 2: Including solubility

    Directory of Open Access Journals (Sweden)

    M. D. Petters

    2008-10-01

    Full Text Available The ability of a particle to serve as a cloud condensation nucleus in the atmosphere is determined by its size, hygroscopicity and its solubility in water. Usually size and hygroscopicity alone are sufficient to predict CCN activity. Single-parameter representations of hygroscopicity have been shown to successfully model complex, multicomponent particle types. Under the assumption of either complete solubility or complete insolubility of a component, it is not necessary to explicitly include that component's solubility in the single-parameter framework. This is not the case if sparingly soluble materials are present. In this work we explicitly account for solubility by modifying the single-parameter equations. We demonstrate that sensitivity to the actual value of solubility emerges only in the regime of 2×10⁻¹ to 5×10⁻⁴, where the solubility values are expressed as volume of solute per unit volume of water present in a saturated solution. Compounds that do not fall inside this sparingly soluble envelope can be adequately modeled assuming they are either infinitely soluble in water or completely insoluble.
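    In the single-parameter (kappa-Köhler) framework referenced above, water activity relates to the hygroscopic growth factor through a_w = (GF³ − 1)/(GF³ − 1 + κ). A minimal sketch inverting that relation is given below; note it is the fully-soluble form, without the solubility modification this paper introduces, and the input values are illustrative.

```python
# Single-parameter hygroscopicity sketch (fully-soluble form only; the
# paper's modification for sparingly soluble material is NOT included).
# GF is the diameter growth factor D/D_dry; a_w is water activity.

def kappa_from_growth(gf, aw):
    """Invert a_w = (GF^3 - 1) / (GF^3 - 1 + kappa) to recover kappa."""
    v = gf ** 3 - 1.0          # volume of water per unit dry volume
    return v * (1.0 - aw) / aw

def growth_from_kappa(kappa, aw):
    """Forward relation: growth factor for a given kappa and water activity."""
    return (1.0 + kappa * aw / (1.0 - aw)) ** (1.0 / 3.0)

# Illustrative values: a growth factor of 1.5 at 90% relative humidity.
k = kappa_from_growth(gf=1.5, aw=0.90)
```

For a sparingly soluble compound, only the dissolved fraction of the dry volume would contribute water uptake, which is where the modified equations of this work depart from the simple inversion above.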

  11. Use of analyte-modulated modal power distribution in multimode optical fibers for simultaneous single-wavelength evanescent-wave refractometry and spectrometry.

    Science.gov (United States)

    Potyrailo, R A; Ruddy, V P; Hieftje, G M

    1999-11-01

    A new method is described for the simultaneous determination of absorbance and refractive index of a sample medium. The method is based on measurement of the analyte-modulated modal power distribution (MPD) in a multimode waveguide. In turn, the MPD is quantified by the far-field spatial pattern and intensity of light, i.e., the Fraunhofer diffraction pattern (registered on a CCD camera), that emerges from a multimode optical fiber. Operationally, light that is sent down the fiber interacts with the surrounding analyte-containing medium by means of the evanescent wave at the fiber boundary. The light flux in the propagating beam and the internal reflection angles within the fiber are both affected by optical absorption connected with the analyte and by the refractive index of the analyte-containing medium. In turn, these angles are reflected in the angular divergence of the beam as it leaves the fiber. As a result, the Fraunhofer diffraction pattern of that beam yields two parameters that can, together, be used to deduce refractive index and absorbance. This MPD-based detection offers important advantages over traditional evanescent-wave detection strategies which rely on recording only the total transmitted optical power or its lost fraction. First, simultaneous determination of sample refractive index and absorbance is possible at a single probe wavelength. Second, the sensitivity of refractometric and absorption measurements can be controlled simply, either by adjusting the distance between the end face of the fiber and the CCD detector or by monitoring selected modal groups at the fiber output. As a demonstration of these capabilities, several weakly absorbing solutions were examined, with refractive indices in the range from 1.3330 to 1.4553 and with absorption coefficients in the range 0-16 cm⁻¹.
The new detection strategy is likely to be important in applications in which sample coloration varies and when it is necessary to compensate for variations in the

  12. Implementation of multimode release criteria and dose standard alternatives

    International Nuclear Information System (INIS)

    Klett, R.

    1993-01-01

    The current standard that regulates the disposal of high-level radioactive wastes (HLW) and transuranic (TRU) wastes evaluates the cumulative risk of all repositories with a single derived set of generic release limits. This paper reviews the technical basis, attributes, and deficiencies of the present approach and two alternative modifications and extensions. The alternatives are the multimode release limits applied at the point of release and a dose standard alternative suggested at the first Electric Power Research Institute (EPRI) waste disposal workshop. Methods of developing and applying the alternatives are presented and some suggestions are given for incorporating them in the standards

  13. Fiber-Optic Vibration Sensor Based on Multimode Fiber

    Directory of Open Access Journals (Sweden)

    I. Lujo

    2008-06-01

    Full Text Available The purpose of this paper is to present a fiber-optic vibration sensor based on monitoring the mode distribution in a multimode optical fiber. Detection of vibrations and their parameters is possible through observation of the output speckle pattern from the multimode optical fiber. A working experimental model has been built in which all components used are widely available and cheap: a CCD camera (a simple web-cam), a multimode laser in the visible range as a light source, a length of multimode optical fiber, and a computer for signal processing. Measurements have shown good agreement with the actual frequency of vibrations, and promising results were achieved with the amplitude measurements, although they require some adaptation of the experimental model. The proposed sensor is cheap and lightweight and therefore presents an interesting alternative for monitoring large smart structures.

  14. ADHD, Multimodal Treatment, and Longitudinal Outcome: Evidence, Paradox, and Challenge.

    Science.gov (United States)

    Hinshaw, Stephen P; Arnold, L Eugene

    2015-01-01

    Given major increases in the diagnosis of attention-deficit hyperactivity disorder (ADHD) and in rates of medication for this condition, we carefully examine evidence for effects of single versus multimodal (i.e., combined medication and psychosocial/behavioral) interventions for ADHD. Our primary data source is the Multimodal Treatment Study of Children with ADHD (MTA), a 14-month, randomized clinical trial in which intensive behavioral, medication, and multimodal treatment arms were contrasted with one another and with community intervention (treatment-as-usual), regarding outcome domains of ADHD symptoms, comorbidities, and core functional impairments. Although initial reports emphasized the superiority of well-monitored medication for symptomatic improvement, reanalyses and reappraisals have highlighted (a) the superiority of combination treatment for composite outcomes and for domains of functional impairment (e.g., academic achievement, social skills, parenting practices); (b) the importance of considering moderator and mediator processes underlying differential patterns of outcome, including comorbid subgroups and improvements in family discipline style during the intervention period; (c) the emergence of side effects (e.g., mild growth suppression) in youth treated with long-term medication; and (d) the diminution of medication's initial superiority once the randomly assigned treatment phase turned into naturalistic follow-up. The key paradox is that whereas ADHD clearly responds to medication and behavioral treatment in the short term, evidence for long-term effectiveness remains elusive. We close with discussion of future directions and a call for greater understanding of relevant developmental processes in the attempt to promote optimal, generalized, and lasting treatments for this important and impairing neurodevelopmental disorder.

  15. Video genre classification using multimodal features

    Science.gov (United States)

    Jin, Sung Ho; Bae, Tae Meon; Choo, Jin Ho; Ro, Yong Man

    2003-12-01

    We propose a video genre classification method using multimodal features. The proposed method is applied for the preprocessing of automatic video summarization or the retrieval and classification of broadcasting video contents. Through a statistical analysis of low-level and middle-level audio-visual features in video, the proposed method can achieve good performance in classifying several broadcasting genres such as cartoon, drama, music video, news, and sports. In this paper, we adopt MPEG-7 audio-visual descriptors as multimodal features of video contents and evaluate the performance of the classification by feeding the features into a decision tree-based classifier which is trained by CART. The experimental results show that the proposed method can recognize several broadcasting video genres with a high accuracy and the classification performance with multimodal features is superior to the one with unimodal features in the genre classification.
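    The classifier in this record is a decision tree trained by CART, whose core operation is choosing the feature/threshold split that minimizes weighted Gini impurity over the audio-visual features. A minimal sketch of that split selection follows; the two features and the tiny labeled set are invented for illustration.

```python
# Core of CART training: pick the (feature, threshold) split minimizing
# weighted Gini impurity. The features stand in for low/mid-level
# audio-visual descriptors; the toy dataset is hypothetical.

def gini(labels):
    """Gini impurity of a list of class labels."""
    total = len(labels)
    return 1.0 - sum((labels.count(c) / total) ** 2 for c in set(labels))

def best_split(samples, labels):
    """Return (feature_index, threshold) minimizing weighted Gini impurity."""
    best, best_score = (None, None), float("inf")
    n = len(samples)
    for f in range(len(samples[0])):
        for thr in sorted({s[f] for s in samples}):
            left = [y for s, y in zip(samples, labels) if s[f] <= thr]
            right = [y for s, y in zip(samples, labels) if s[f] > thr]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best_score:
                best_score, best = score, (f, thr)
    return best

# feature 0: average shot length (s); feature 1: audio activity (arbitrary)
X = [(0.5, 0.9), (0.7, 0.8), (5.0, 0.2), (6.0, 0.3)]
y = ["music_video", "music_video", "drama", "drama"]
feature, threshold = best_split(X, y)
```

A full CART tree applies this split selection recursively to each resulting partition until the leaves are pure or a stopping rule fires; here short shot length alone already separates the two toy genres.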

  16. Evaluation of registration strategies for multi-modality images of rat brain slices

    International Nuclear Information System (INIS)

    Palm, Christoph; Vieten, Andrea; Salber, Dagmar; Pietrzyk, Uwe

    2009-01-01

    In neuroscience, small-animal studies frequently involve dealing with series of images from multiple modalities such as histology and autoradiography. The consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, consistency between 2D slices without cross validation using an inherent 3D modality is frequently presumed to be close to the true morphology due to the smooth appearance of the contours of anatomical structures. However, in multi-modality stacks consistency is difficult to assess. In this work, consistency is defined in terms of smoothness of neighboring slices within a single modality and between different modalities. Registration bias denotes the distortion of the registered stack in comparison to the true 3D morphology and shape. Based on these metrics, different restacking strategies of multi-modality rat brain slices are experimentally evaluated. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. However, different registration strategies yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.

  17. Concepts for space nuclear multi-mode reactors

    International Nuclear Information System (INIS)

    Myrabo, L.; Botts, T.E.; Powell, J.R.

    1983-01-01

    A number of nuclear multi-mode reactor power plants are conceptualized for use with solid core, fixed particle bed and rotating particle bed reactors. Multi-mode systems generate high peak electrical power in the open cycle mode, with MHD generator or turbogenerator converters and cryogenically stored coolants. Low level stationkeeping power and auxiliary reactor cooling (i.e., for the removal of reactor afterheat) are provided in a closed cycle mode. Depending on reactor design, heat transfer to the low power converters can be accomplished by heat pipes, liquid metal coolants or high pressure gas coolants. Candidate low power conversion cycles include Brayton turbogenerator, Rankine turbogenerator, thermoelectric and thermionic approaches. A methodology is suggested for estimating the system mass of multi-mode nuclear power plants as a function of peak electric power level and required mission run time. The masses of closed cycle nuclear and open cycle chemical power systems are briefly examined to identify the regime of superiority for nuclear multi-mode systems. Key research and technology issues for such power plants are also identified

  18. The Representation of Polysemy: MEG Evidence

    OpenAIRE

    Pylkkänen, Liina; Llinás, Rodolfo; Murphy, Gregory L.

    2006-01-01

    Most words in natural language are polysemous; i.e., they can be used in more than one way. For example, paper can be used to refer to a substance made out of wood pulp or to a daily publication printed on that substance. Even though virtually every sentence contains polysemy, there is little agreement as to how polysemy is represented in the mental lexicon. Do different uses of polysemous words involve access to a single representation or do our minds store distinct representations for each ...

  19. Effective Fusion of Multi-Modal Remote Sensing Data in a Fully Convolutional Network for Semantic Labeling

    Directory of Open Access Journals (Sweden)

    Wenkai Zhang

    2017-12-01

    Full Text Available In recent years, Fully Convolutional Networks (FCN) have led to a great improvement of semantic labeling for various applications including multi-modal remote sensing data. Although different fusion strategies have been reported for multi-modal data, there is no in-depth study of the reasons for performance limits. For example, it is unclear why an early fusion of multi-modal data in an FCN does not lead to a satisfying result. In this paper, we investigate the contribution of individual layers inside an FCN and propose an effective fusion strategy for the semantic labeling of color or infrared imagery together with elevation (e.g., Digital Surface Models). The sensitivity and contribution of layers concerning classes and multi-modal data are quantified by recall and descent rate of recall in a multi-resolution model. The contribution of different modalities to the pixel-wise prediction is analyzed, explaining the reason for the poor performance caused by the plain concatenation of different modalities. Finally, based on the analysis, an optimized scheme for the fusion of layers with image and elevation information into a single FCN model is derived. Experiments are performed on the ISPRS Vaihingen 2D Semantic Labeling dataset (infrared and RGB imagery as well as elevation) and the Potsdam dataset (RGB imagery and elevation). Comprehensive evaluations demonstrate the potential of the proposed approach.
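    The layer-sensitivity analysis above is built on per-class recall computed from a pixel-wise confusion matrix. A minimal sketch of that metric follows; the class names and counts are invented, and the paper's "descent rate of recall" across model resolutions would be computed by differencing these recalls between resolutions.

```python
# Per-class recall from a pixel-wise confusion matrix, the base metric of
# the paper's layer-sensitivity analysis. Counts below are hypothetical.

def per_class_recall(confusion):
    """confusion[i][j] = number of pixels of true class i predicted as j."""
    recalls = []
    for i, row in enumerate(confusion):
        total = sum(row)
        recalls.append(row[i] / total if total else 0.0)
    return recalls

# rows/cols: building, tree, car (invented pixel counts)
cm = [
    [90,  5,  5],
    [10, 80, 10],
    [25, 25, 50],
]
recalls = per_class_recall(cm)
```

Tracking how each class's recall falls as resolution decreases then indicates which layers (and which modality's features) carry the information for that class.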

  20. Multimode optical fibers: steady state mode exciter.

    Science.gov (United States)

    Ikeda, M; Sugimura, A; Ikegami, T

    1976-09-01

    The steady state mode power distribution of the multimode graded index fiber was measured. A simple and effective steady state mode exciter was fabricated by an etching technique. Its insertion loss was 0.5 dB for an injection laser. Deviation in transmission characteristics of multimode graded index fibers can be avoided by using the steady state mode exciter.

  1. Superparamagnetic nanoparticles for enhanced magnetic resonance and multimodal imaging

    Science.gov (United States)

    Sikma, Elise Ann Schultz

    Magnetic resonance imaging (MRI) is a powerful tool for noninvasive tomographic imaging of biological systems with high spatial and temporal resolution. Superparamagnetic (SPM) nanoparticles have emerged as highly effective MR contrast agents due to their biocompatibility, ease of surface modification and magnetic properties. Conventional nanoparticle contrast agents suffer from difficult synthetic reproducibility, polydisperse sizes and weak magnetism. Numerous synthetic techniques and nanoparticle formulations have been developed to overcome these barriers. However, there are still major limitations in the development of new nanoparticle-based probes for MR and multimodal imaging including low signal amplification and absence of biochemical reporters. To address these issues, a set of multimodal (T2/optical) and dual contrast (T1/T2) nanoparticle probes has been developed. Their unique magnetic properties and imaging capabilities were thoroughly explored. An enzyme-activatable contrast agent is currently being developed as an innovative means for early in vivo detection of cancer at the cellular level. Multimodal probes function by combining the strengths of multiple imaging techniques into a single agent. Co-registration of data obtained by multiple imaging modalities validates the data, enhancing its quality and reliability. A series of T2/optical probes were successfully synthesized by attachment of a fluorescent dye to the surface of different types of nanoparticles. The multimodal nanoparticles generated sufficient MR and fluorescence signal to image transplanted islets in vivo. Dual contrast T1/T2 imaging probes were designed to overcome disadvantages inherent in the individual T1 and T2 components. A class of T1/T2 agents was developed consisting of a gadolinium (III) complex (DTPA chelate or DO3A macrocycle) conjugated to a biocompatible silica-coated metal oxide nanoparticle through a disulfide linker. The disulfide linker has the ability to be reduced...

  2. Multimodal lung cancer screening using the ITALUNG biomarker panel and low dose computed tomography. Results of the ITALUNG biomarker study.

    Science.gov (United States)

    Carozzi, Francesca Maria; Bisanzi, Simonetta; Carrozzi, Laura; Falaschi, Fabio; Lopes Pegna, Andrea; Mascalchi, Mario; Picozzi, Giulia; Peluso, Marco; Sani, Cristina; Greco, Luana; Ocello, Cristina; Paci, Eugenio

    2017-07-01

    Asymptomatic high-risk subjects, randomized to the intervention arm of the ITALUNG trial (1,406 screened for lung cancer), were enrolled in the ITALUNG biomarker study (n = 1,356), in which samples of blood and sputum were analyzed for plasma DNA quantification (cut-off 5 ng/ml), loss of heterozygosity and microsatellite instability. The ITALUNG biomarker panel (IBP) was considered positive if at least one of the two biomarkers included in the panel was positive. Subjects with and without a lung cancer diagnosis at the end of the screening cycle with LDCT (n = 517) were evaluated. Of 18 baseline screen-detected lung cancer cases, 17 were IBP positive (94%). Repeat screen-detected lung cancer cases numbered 18, and 12 of them were positive at the baseline IBP test (66%). Interval cancer cases (2-year) and biomarker tests after follow-up of a suspect non-calcific nodule were investigated. The single-test versus multimodal screening measures of accuracy were compared in a simulation within the screened ITALUNG intervention arm, considering screen-detected and interval cancer cases. Sensitivity was 90% at baseline screening. Specificity was 71 and 61% for LDCT and IBP as a baseline single test, respectively, and improved to 89% with multimodal, combined screening. The positive predictive value was 4.3% for LDCT at baseline and 10.6% for multimodal screening. Multimodal screening could improve screening efficiency at baseline, and strategies for future implementation are discussed. If IBP were used as the primary screening test, the LDCT burden might decrease by about 60%. © 2017 UICC.
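The accuracy figures quoted above follow from the standard confusion-matrix definitions. A minimal sketch (the 17-of-18 baseline count is taken from the abstract; the counts in the assertions below are otherwise illustrative):

```python
def sensitivity(tp, fn):
    # True-positive rate: detected cancers / all cancers.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: correct negatives / all non-cancers.
    return tn / (tn + fp)

def ppv(tp, fp):
    # Positive predictive value: true cancers among positive tests.
    return tp / (tp + fp)

# Baseline IBP: 17 of 18 screen-detected cancers were panel-positive.
print(round(100 * sensitivity(17, 1)))  # 94
```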

  3. Acute cognitive dysfunction after hip fracture: frequency and risk factors in an optimized, multimodal, rehabilitation program

    DEFF Research Database (Denmark)

    Bitsch, Martin; Foss, Nicolai Bang; Kristensen, Billy Bjarne

    2006-01-01

    BACKGROUND: Patients undergoing hip fracture surgery often experience acute post-operative cognitive dysfunction (APOCD). The pathogenesis of APOCD is probably multifactorial, and no single intervention has been successful in its prevention. No studies have investigated the incidence of APOCD after hip fracture surgery in an optimized, multimodal, peri-operative rehabilitation regimen. METHODS: One hundred unselected hip fracture patients treated in a well-defined, optimized, multimodal, peri-operative rehabilitation regimen were included. Patients were tested upon admission and on the second, fourth and seventh post-operative days with the Mini Mental State Examination (MMSE) score. RESULTS: Thirty-two per cent of patients developed a significant post-operative cognitive decline, which was associated with several pre-fracture patient characteristics, including age and cognitive function...

  4. Multiparameter-dependent spontaneous emission in PbSe quantum dot-doped liquid-core multi-mode fiber

    International Nuclear Information System (INIS)

    Zhang, Lei; Zhang, Yu; Wu, Hua; Zhang, Tieqiang; Gu, Pengfei; Chu, Hairong; Cui, Tian; Wang, Yiding; Zhang, Hanzhuang; Zhao, Jun; Yu, William W.

    2013-01-01

    A theoretical model was established in this paper to analyze the properties of 3.50 and 4.39 nm PbSe quantum dot-doped liquid-core multi-mode fiber. This model is applicable to both single- and multi-mode fiber. The three-level-system-based light-propagation equations and rate equations were used to calculate the guided spontaneous emission spectra. To account for the multiple modes in the fiber, the normalized intensity distribution of the transverse modes was improved and simplified. Detailed calculated results were thus obtained and explained using the above-mentioned model. The redshift of the peak position and the evolution of the emission power were observed and analyzed considering the influence of the fiber length, fiber diameter, doping concentration, and the pump power. The redshift increased with increases in fiber length, fiber diameter, and doping concentration. The optimal fiber length, fiber diameter, and doping concentration were analyzed and confirmed, and the related spontaneous emission power was obtained. Besides, the normalized emission intensity increased nearly linearly with pump power. The calculated results fitted the experimental data well.

  5. Deterministic multimode photonic device for quantum-information processing

    DEFF Research Database (Denmark)

    Nielsen, Anne E. B.; Mølmer, Klaus

    2010-01-01

    We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...

  6. Multimodal network design for sustainable household plastic recycling

    NARCIS (Netherlands)

    Bing Xiaoyun, Xiaoyun; Groot, J.J.; Bloemhof, J.M.; Vorst, van der J.G.A.J.

    2013-01-01

    Purpose – This research studies a plastic recycling system from a reverse logistics angle and investigates the potential benefits of a multimodality strategy to the network design of plastic recycling. This research aims to quantify the impact of multimodality on the network, to provide decision

  7. Multimodal warnings to enhance risk communication and safety

    NARCIS (Netherlands)

    Haas, E.C.; Erp, J.B.F. van

    2014-01-01

    Multimodal warnings incorporate audio and/or skin-based (tactile) cues to supplement or replace visual cues in environments where the user’s visual perception is busy, impaired, or nonexistent. This paper describes characteristics of audio, tactile, and multimodal warning displays and their role in

  8. Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning.

    Science.gov (United States)

    Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien

    2015-08-01

    Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measurement allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement when compared to a semi-automated single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
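The Dice similarity coefficient used for evaluation above has a compact definition. A small NumPy sketch on toy binary masks (the masks themselves are illustrative):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks.
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# |A| = 3, |B| = 3, |A ∩ B| = 2  →  Dice = 4/6 ≈ 0.667
print(round(dice(a, b), 3))  # 0.667
```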

  9. Single Photon Emission Computed Tomography/Positron Emission Tomography Imaging and Targeted Radionuclide Therapy of Melanoma: New Multimodal Fluorinated and Iodinated Radiotracers

    International Nuclear Information System (INIS)

    Maisonial, A.; Papon, J.; Bayle, M.; Vidal, A.; Auzeloux, Ph.; Rbah, L.; Bonnet-Duquennoy, M.; Miot-Noirault, E.; Galmier, M.J.; Borel, M.; Madelmont, J.C.; Moins, N.; Chezal, J.M.; Kuhnast, B.; Boisgard, R.; Dolle, F.; Tavitian, B.; Boisgard, R.; Tavitian, B.; Askienazy, S.

    2011-01-01

    This study reports a series of 14 new iodinated and fluorinated compounds offering potential for both early imaging (123I, 124I, 18F) and systemic treatment (131I) of melanoma. The biodistribution of each 125I-labeled tracer was evaluated in a model of melanoma B16F0-bearing mice, using in vivo serial γ-scintigraphic imaging. Among this series, [125I]56 emerged as the most promising compound in terms of specific tumoral uptake and in vivo kinetic profile. To validate our multimodality concept, the radiosynthesis of [18F]56 was then optimized and this radiotracer has been successfully investigated for in vivo PET imaging of melanoma in B16F0- and B16F10-bearing mouse models. The therapeutic efficacy of [131I]56 was then evaluated in mice bearing subcutaneous B16F0 melanoma, and a significant slowdown in tumoral growth was demonstrated. These data support further development of 56 for PET imaging (18F, 124I) and targeted radionuclide therapy (131I) of melanoma using a single chemical structure. (authors)

  10. Multi-channel EEG-based sleep stage classification with joint collaborative representation and multiple kernel learning.

    Science.gov (United States)

    Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui

    2015-10-30

    Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation plays a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm, and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn feature representations from multi-channel EEG signals. Multi-view JCR and JSR features are then integrated, and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; while with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, while JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
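Collaborative representation used as a classifier codes a test sample over the whole training dictionary with an l2 penalty and assigns the class with the smallest reconstruction residual. A minimal single-view NumPy sketch follows (the dictionary, labels and λ are illustrative, and the paper's joint multi-channel model is not reproduced here):

```python
import numpy as np

def crc_classify(D, labels, y, lam=0.01):
    # Code y over the full dictionary D (columns = training samples):
    # alpha = argmin ||y - D a||^2 + lam ||a||^2, solved in closed form.
    n = D.shape[1]
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    # Assign the class whose own columns best reconstruct y.
    best, best_res = None, np.inf
    for c in sorted(set(labels)):
        mask = np.array([l == c for l in labels])
        res = np.linalg.norm(y - D[:, mask] @ alpha[mask])
        if res < best_res:
            best, best_res = c, res
    return best

# Two toy classes along nearly orthogonal directions.
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = [0, 0, 1, 1]
print(crc_classify(D, labels, np.array([1.0, 0.05])))  # 0
```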

  11. All-optical universal logic gates on nonlinear multimode interference coupler using tunable input intensity

    Science.gov (United States)

    Tajaldini, Mehdi; Jafri, Mohd Zubir Mat

    2015-04-01

    The theory behind the Nonlinear Modal Propagation Analysis (NMPA) method has revealed significant features of nonlinear multimode interference (MMI) couplers with compact dimensions when launched near the threshold of nonlinearity. Moreover, NMPA makes it possible to study nonlinear MMI through modal interference and to explore the phenomena that arise from the nature of the multimode region. A proposed all-optical switch based on NMPA has demonstrated the capability to realize all-optical gates. All-optical gates have attracted increasing attention due to their practical utility in all-optical signal processing networks and systems. Nonlinear multimode interference devices could serve as universal all-optical gates due to the significant features that NMPA reveals in them. In this paper, we present a novel ultra-compact MMI coupler based on the NMPA method operating at lower intensity than previous reports, both as a novel design method and as a potential application for optical NAND and NOR universal gates on a single structure for Boolean logic signal processing devices, and we optimize their application by studying the contrast ratio between the ON and OFF states as a function of output width. We have applied NMPA to several applications, with miniaturization at low nonlinear intensities as the main purpose.
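The ON/OFF contrast ratio studied above is conventionally quoted in decibels; a one-line sketch of that conversion (the power values are illustrative, not from the paper):

```python
import math

def contrast_ratio_db(p_on, p_off):
    # Conventional logarithmic contrast ratio between ON and OFF output powers.
    return 10 * math.log10(p_on / p_off)

# A 100:1 ON/OFF power ratio corresponds to 20 dB of contrast.
print(contrast_ratio_db(100.0, 1.0))  # 20.0
```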

  12. New Three-Mode Squeezing Operators Gained via Tripartite Entangled State Representation

    International Nuclear Information System (INIS)

    Jiang Nianquan; Fan Hongyi

    2008-01-01

    We show that the Agarwal-Simon representation of single-mode squeezed states can be generalized to find new form of three-mode squeezed states. We use the tripartite entangled state representations |p,y,z> and |x,u,v> to realize this goal.

  13. Multimodal Scaffolding in the Secondary English Classroom Curriculum

    Science.gov (United States)

    Boche, Benjamin; Henning, Megan

    2015-01-01

    This article examines the topic of multimodal scaffolding in the secondary English classroom curriculum through the viewpoint of one teacher's experiences. With technology becoming more commonplace and readily available in the English classroom, we must pinpoint specific and tangible ways to help teachers use and teach multimodalities in their…

  14. Role of interbranch pumping on the quantum-statistical behavior of multi-mode magnons in ferromagnetic nanowires

    Science.gov (United States)

    Haghshenasfard, Zahra; Cottam, M. G.

    2018-01-01

    Theoretical studies are reported for the quantum-statistical properties of microwave-driven multi-mode magnon systems as represented by ferromagnetic nanowires with a stripe geometry. Effects of both the exchange and the dipole-dipole interactions, as well as a Zeeman term for an external applied field, are included in the magnetic Hamiltonian. The model also contains the time-dependent nonlinear effects due to parallel pumping with an electromagnetic field. Using a coherent magnon state representation in terms of creation and annihilation operators, we investigate the effects of parallel pumping on the temporal evolution of various nonclassical properties of the system. A focus is on the interbranch mixing produced by the pumping field when there are three or more modes. In particular, the occupation magnon number and the multi-mode cross correlations between magnon modes are studied. Manipulation of the collapse and revival phenomena of the average magnon occupation number and the control of the cross correlation between the magnon modes are demonstrated through tuning of the parallel pumping field amplitude and appropriate choices for the coherent magnon states. The cross correlations are a direct consequence of the interbranch pumping effects and do not appear in the corresponding one- or two-mode magnon systems.

  15. Multimodal sensorimotor system in unicellular zoospores of a fungus.

    Science.gov (United States)

    Swafford, Andrew J M; Oakley, Todd H

    2018-01-19

    Complex sensory systems often underlie critical behaviors, including avoiding predators and locating prey, mates and shelter. Multisensory systems that control motor behavior even appear in unicellular eukaryotes, such as Chlamydomonas, which are important laboratory models for sensory biology. However, we know of no unicellular opisthokonts that control motor behavior using a multimodal sensory system. Therefore, existing single-celled models for multimodal sensorimotor integration are very distantly related to animals. Here, we describe a multisensory system that controls the motor function of unicellular fungal zoospores. We found that zoospores of Allomyces arbusculus exhibit both phototaxis and chemotaxis. Furthermore, we report that closely related Allomyces species respond to either the chemical or the light stimuli presented in this study, not both, and likely do not share this multisensory system. This diversity of sensory systems within Allomyces provides a rare example of a comparative framework that can be used to examine the evolution of sensory systems following the gain/loss of available sensory modalities. The tractability of Allomyces and related fungi as laboratory organisms will facilitate detailed mechanistic investigations into the genetic underpinnings of novel photosensory systems, and how multisensory systems may have functioned in early opisthokonts before multicellularity allowed for the evolution of specialized cell types. © 2018. Published by The Company of Biologists Ltd.

  16. Multimodal coaching and its application to workplace, life and health coaching

    OpenAIRE

    Stephen Palmer

    2012-01-01

    This article highlights how the multimodal approach (Lazarus, 1989) has been adapted to the field of coaching and coaching psychology. It covers the basic theories underpinning the multimodal approach and illustrates the link between the theory and practice. Key multimodal strategies are covered, including modality profiles, structural profiles, tracking and bridging.

  17. The semiotic construction of masculinity and affect: A multimodal analysis of media texts

    Directory of Open Access Journals (Sweden)

    Sônia Maria de Oliveira Pimenta

    2013-07-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2013n64p173 The aim of this paper is to observe changes in the semiotic construction of masculine identities as a dynamic flux of social representations mediated by the multimodal aspects of texts (sensory modality, salience, behaviour and point of view). The study compares previous research data from a magazine article of 2003 and its cover page to four adverts of the 2005 edition and three recent adverts published in the 2008 edition of the same magazine, so as to perceive how they position readers ideologically in order to (1) detect how masculinity is discursively represented in its heterogeneity, connected ideologically with power relations, vanity and emotions, and (2) define their identities as consumers of goods and services.

  18. 3C-SiC microdisk mechanical resonators with multimode resonances at radio frequencies

    Science.gov (United States)

    Lee, Jaesung; Zamani, Hamidrera; Rajgopal, Srihari; Zorman, Christian A.; X-L Feng, Philip

    2017-07-01

    We report on the design, modeling, fabrication and measurement of single-crystal 3C-silicon carbide (SiC) microdisk mechanical resonators with multimode resonances operating at radio frequencies (RF). These microdisk resonators (center-clamped on a vertical stem pedestal) offer multiple flexural-mode resonances with frequencies dependent on both disk and anchor dimensions. The resonators are made using a novel fabrication method comprised of focused ion beam nanomachining and hydrofluoric:nitric:acetic (HNA) acid etching. Resonance peaks (in the frequency spectrum) are detected through laser-interferometry measurements. Resonators with different dimensions are tested, and multimode resonances, mode splitting, and energy dissipation (in the form of quality-factor measurements) are investigated. Further, we demonstrate a feedback oscillator based on a passive 3C-SiC resonator. This investigation provides important guidelines for microdisk resonator development, ranging from an analytical prediction of the frequency scaling law to fabrication, suggesting RF microdisk resonators can be good candidates for future sensing applications in harsh environments.

  19. Online probabilistic operational safety assessment of multi-mode engineering systems using Bayesian methods

    International Nuclear Information System (INIS)

    Lin, Yufei; Chen, Maoyin; Zhou, Donghua

    2013-01-01

    In the past decades, engineering systems have become more and more complex and generally work in different operational modes. Since an incipient fault can lead to dangerous accidents, it is crucial to develop strategies for online operational safety assessment. However, the existing online assessment methods for multi-mode engineering systems commonly assume that samples are independent, which does not hold in practical cases. This paper proposes a probabilistic framework for online operational safety assessment of multi-mode engineering systems with sample dependency. To begin with, a Gaussian mixture model (GMM) is used to characterize multiple operating modes. Then, based on the definition of the safety index (SI), the SI for one single mode is calculated. At last, a Bayesian method is presented to calculate the posterior probabilities of belonging to each operating mode with sample dependency. The proposed assessment strategy is applied in two examples: one is an aircraft gas turbine, the other an industrial dryer. Both examples illustrate the efficiency of the proposed method.
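The mode-membership step of such a framework can be illustrated with a scalar GMM: each sample's likelihood under every mode is combined with the prior, and chaining the update so the previous posterior becomes the next prior gives a simple recursive scheme in the spirit of the Bayesian treatment of sample dependency. The mixture parameters below are illustrative, not from the paper:

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # Scalar Gaussian density.
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mode_posteriors(x, weights, mus, variances):
    # Posterior P(mode k | x) for a single sample under a scalar GMM.
    likes = np.array([w * gaussian_pdf(x, m, v)
                      for w, m, v in zip(weights, mus, variances)])
    return likes / likes.sum()

def recursive_posterior(xs, prior, mus, variances):
    # Chain the Bayes update over a sample sequence: the previous
    # posterior becomes the prior for the next sample.
    p = np.array(prior, dtype=float)
    for x in xs:
        likes = np.array([gaussian_pdf(x, m, v)
                          for m, v in zip(mus, variances)])
        p = p * likes
        p /= p.sum()
    return p

# Two operating modes centered at 0 and 5; samples near 0 pick out mode 0.
p = recursive_posterior([0.1, -0.2, 0.3], [0.5, 0.5], [0.0, 5.0], [1.0, 1.0])
print(p[0] > 0.99)  # True
```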

  20. Robustness of multimodal processes itineraries

    DEFF Research Database (Denmark)

    Bocewicz, G.; Banaszak, Z.; Nielsen, Izabela Ewa

    2013-01-01

    This paper concerns multimodal transport systems (MTS) represented by supernetworks in which several unimodal networks are connected by transfer links, and focuses on the scheduling problems encountered in these systems. Assuming unimodal networks are modeled as cyclic lines, i.e. the routes..., itineraries for an assumed origin-destination (O-D) trip are sought. Since the itinerary planning problem constitutes a common routing and scheduling decision faced by travelers, the main question regards itinerary replanning, and particularly a method aimed at prototyping mode sequences and path selections. The declarative model... of the multimodal-processes-driven itinerary planning problem is our main contribution. Illustrative examples providing alternative itineraries in some cases of MTS malfunction are presented....

  1. Multimode model for projective photon-counting measurements

    International Nuclear Information System (INIS)

    Tualle-Brouri, Rosa; Ourjoumtsev, Alexei; Dantan, Aurelien; Grangier, Philippe; Wubs, Martijn; Soerensen, Anders S.

    2009-01-01

    We present a general model to account for the multimode nature of the quantum electromagnetic field in projective photon-counting measurements. We focus on photon-subtraction experiments, where non-Gaussian states are produced conditionally. These are useful states for continuous-variable quantum-information processing. We present a general method called mode reduction that reduces the multimode model to an effective two-mode problem. We apply this method to a multimode model describing broadband parametric down-conversion, thereby improving the analysis of existing experimental results. The main improvement is that spatial and frequency filters before the photon detector are taken into account explicitly. We find excellent agreement with previously published experimental results, using fewer free parameters than before, and discuss the implications of our analysis for the optimized production of states with negative Wigner functions.

  2. Multimodal coaching and its application to workplace, life and health coaching

    Directory of Open Access Journals (Sweden)

    Stephen Palmer

    2012-10-01

    Full Text Available This article highlights how the multimodal approach (Lazarus, 1989) has been adapted to the field of coaching and coaching psychology. It covers the basic theories underpinning the multimodal approach and illustrates the link between the theory and practice. Key multimodal strategies are covered, including modality profiles, structural profiles, tracking and bridging.

  3. Experiencia de enseñanza multimodal en una clase de idiomas [Experience of multimodal teaching in a language classroom]

    Directory of Open Access Journals (Sweden)

    María Martínez Lirola

    2013-12-01

    Full Text Available Our society is becoming more and more technological and multimodal and, consequently, teaching has to adapt to the new times. This article analyses the way in which the subject English Language IV of the degree in English Studies at the University of Alicante combines the development of the five skills (listening, speaking, reading, writing and interacting), evaluated through a portfolio, with multimodality in teaching practices and in each of the activities that make up the portfolio. The results of a survey administered at the end of the 2011-2012 academic year point out the main competences that university students develop thanks to multimodal teaching and the importance of tutorials in this kind of teaching.

  4. Multimodality

    DEFF Research Database (Denmark)

    Buhl, Mie

    In this paper, I address an ongoing discussion in Danish E-learning research about how to take advantage of the fact that digital media facilitate other communication forms than text, so-called ‘multimodal’ communication, which should not be confused with the term ‘multimedia’. While multimedia...... and learning situations. The choices they make involve E-learning resources like videos, social platforms and mobile devices, not just as digital artefacts we interact with, but the entire practice of using digital media. In a life-long learning perspective, multimodality is potentially very useful...

  5. Multimodal freight investment criteria.

    Science.gov (United States)

    2010-07-01

    Literature was reviewed on multi-modal investment criteria for freight projects, examining measures and techniques for quantifying project benefits and costs, as well as ways to describe the economic importance of freight transportation. : A limited ...

  6. Quantum teleportation of nonclassical wave packets: An effective multimode theory

    Energy Technology Data Exchange (ETDEWEB)

    Benichi, Hugo; Takeda, Shuntaro; Lee, Noriyuki; Furusawa, Akira [Department of Applied Physics, University of Tokyo, Tokyo (Japan)

    2011-07-15

    We develop a simple and efficient theoretical model to understand the quantum properties of broadband continuous variable quantum teleportation. We show that, if stated properly, the problem of multimode teleportation can be simplified to teleportation of a single effective mode that describes the input state temporal characteristic. Using that model, we show how the finite bandwidth of squeezing and external noise in the classical channel affect the output teleported quantum field. We choose an approach that is especially relevant for the case of non-Gaussian nonclassical quantum states and we finally back-test our model with recent experimental results.

  7. Multimodal follow-up questions to multimodal answers in a QA system

    NARCIS (Netherlands)

    van Schooten, B.W.; op den Akker, Hendrikus J.A.

    2007-01-01

    We are developing a dialogue manager (DM) for a multimodal interactive Question Answering (QA) system. Our QA system presents answers using text and pictures, and the user may pose follow-up questions using text or speech, while indicating screen elements with the mouse. We developed a corpus of

  8. Effects of a multimodal exercise program on balance, functional mobility and fall risk in older adults with cognitive impairment: a randomized controlled single-blind study.

    Science.gov (United States)

    Kovács, E; Sztruhár Jónásné, I; Karóczi, C K; Korpos, A; Gondos, T

    2013-10-01

    Exercise programs have an important role in the prevention of falls, but to date there are conflicting findings about the effects of exercise programs on balance, functional performance and fall risk among cognitively impaired older adults. AIM: To investigate the effects of a multimodal exercise program on static and dynamic balance and risk of falls in older adults with mild or moderate cognitive impairment. A randomized controlled study. A long-term care institute. Cognitively impaired individuals aged over 60 years. Eighty-six participants were randomized to an exercise group receiving a multimodal exercise program for 12 months or a control group which did not participate in any exercise program. The Performance Oriented Mobility Assessment scale, Timed Up and Go test, and incidence of falls were measured at baseline, at 6 months and at 12 months. There was a significant improvement in balance-related items of the Performance Oriented Mobility Assessment scale in the exercise group both at 6 and 12 months (P…falls. Our results confirmed that a 12-month multimodal exercise program can improve balance in cognitively impaired older adults. Based on our results, the multimodal exercise program may be a promising fall-prevention exercise program for older adults with mild or moderate cognitive impairment, improving static balance, but more emphasis should presumably be put on the walking component of the exercise program and on environmental fall-risk assessment.

  9. Reliability-Based Decision Fusion in Multimodal Biometric Verification Systems

    Directory of Open Access Journals (Sweden)

    Kryszczuk Krzysztof

    2007-01-01

    Full Text Available We present a methodology of reliability estimation in the multimodal biometric verification scenario. Reliability estimation has been shown to be an efficient and accurate way of predicting and correcting erroneous classification decisions in both unimodal (speech, face, online signature) and multimodal (speech and face) systems. While the initial research results indicate the high potential of the proposed methodology, the performance of reliability estimation in a multimodal setting has not been sufficiently studied or evaluated. In this paper, we demonstrate the advantages of using unimodal reliability information to perform an efficient biometric fusion of two modalities. We further show the presented method to be superior to state-of-the-art multimodal decision-level fusion schemes. The experimental evaluation presented in this paper is based on the popular benchmarking bimodal BANCA database.
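
    As a minimal sketch of the underlying idea (not the authors' actual estimator), reliability estimates can act as weights in a score-level fusion of two modalities, so that a reliable face score can outvote an unreliable speech score:

```python
import numpy as np

def fuse_decisions(scores, reliabilities):
    """Weight each modality's signed match score by its estimated
    reliability and accept if the fused score is positive.
    One entry per modality in both inputs."""
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()                     # normalize reliability weights
    fused = float(np.dot(w, scores))
    return fused, fused > 0.0

# A strong, reliable face score (+0.9, reliability 0.8) outweighs
# a weak, unreliable speech score (-0.2, reliability 0.3).
fused, accept = fuse_decisions(scores=[+0.9, -0.2],
                               reliabilities=[0.8, 0.3])
```

    The weighting rule and threshold here are assumptions for illustration; the paper's method estimates reliability from classifier behavior rather than taking it as given.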

  10. Single nucleotide polymorphism discovery in rainbow trout by deep sequencing of a reduced representation library

    Directory of Open Access Journals (Sweden)

    Salem Mohamed

    2009-11-01

    Full Text Available Abstract Background To enhance capabilities for genomic analyses in rainbow trout, such as genomic selection, a large suite of polymorphic markers that are amenable to high-throughput genotyping protocols must be identified. Expressed Sequence Tags (ESTs) have been used for single nucleotide polymorphism (SNP) discovery in salmonids. In those strategies, the salmonid semi-tetraploid genomes often led to assemblies of paralogous sequences and therefore resulted in a high rate of false positive SNP identification. Sequencing genomic DNA using primers identified from ESTs proved to be an effective but time-consuming methodology of SNP identification in rainbow trout, and is therefore not suitable for high-throughput SNP discovery. In this study, we employed a high-throughput strategy that used pyrosequencing technology to generate data from a reduced representation library constructed with genomic DNA pooled from 96 unrelated rainbow trout that represent the National Center for Cool and Cold Water Aquaculture (NCCCWA) broodstock population. Results The reduced representation library consisted of 440 bp fragments resulting from complete digestion with the restriction enzyme HaeIII; sequencing produced 2,000,000 reads providing an average 6-fold coverage of the estimated 150,000 unique genomic restriction fragments (300,000 fragment ends). Three independent data analyses identified 22,022 to 47,128 putative SNPs on 13,140 to 24,627 independent contigs. A set of 384 putative SNPs, randomly selected from the sets produced by the three analyses, was genotyped on individual fish to determine the validation rate of putative SNPs among analyses, distinguish apparent SNPs that actually represent paralogous loci in the tetraploid genome, examine Mendelian segregation, and place the validated SNPs on the rainbow trout linkage map. Approximately 48% (183) of the putative SNPs were validated; 167 markers were successfully incorporated into the rainbow trout linkage map. In

  11. Single nucleotide polymorphism discovery in rainbow trout by deep sequencing of a reduced representation library.

    Science.gov (United States)

    Sánchez, Cecilia Castaño; Smith, Timothy P L; Wiedmann, Ralph T; Vallejo, Roger L; Salem, Mohamed; Yao, Jianbo; Rexroad, Caird E

    2009-11-25

    To enhance capabilities for genomic analyses in rainbow trout, such as genomic selection, a large suite of polymorphic markers that are amenable to high-throughput genotyping protocols must be identified. Expressed Sequence Tags (ESTs) have been used for single nucleotide polymorphism (SNP) discovery in salmonids. In those strategies, the salmonid semi-tetraploid genomes often led to assemblies of paralogous sequences and therefore resulted in a high rate of false positive SNP identification. Sequencing genomic DNA using primers identified from ESTs proved to be an effective but time-consuming methodology of SNP identification in rainbow trout, and is therefore not suitable for high-throughput SNP discovery. In this study, we employed a high-throughput strategy that used pyrosequencing technology to generate data from a reduced representation library constructed with genomic DNA pooled from 96 unrelated rainbow trout that represent the National Center for Cool and Cold Water Aquaculture (NCCCWA) broodstock population. The reduced representation library consisted of 440 bp fragments resulting from complete digestion with the restriction enzyme HaeIII; sequencing produced 2,000,000 reads providing an average 6-fold coverage of the estimated 150,000 unique genomic restriction fragments (300,000 fragment ends). Three independent data analyses identified 22,022 to 47,128 putative SNPs on 13,140 to 24,627 independent contigs. A set of 384 putative SNPs, randomly selected from the sets produced by the three analyses, was genotyped on individual fish to determine the validation rate of putative SNPs among analyses, distinguish apparent SNPs that actually represent paralogous loci in the tetraploid genome, examine Mendelian segregation, and place the validated SNPs on the rainbow trout linkage map. Approximately 48% (183) of the putative SNPs were validated; 167 markers were successfully incorporated into the rainbow trout linkage map. In addition, 2% of the sequences from the
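
    The stated coverage figure can be sanity-checked with simple arithmetic on the numbers quoted in the record:

```python
reads = 2_000_000
fragments = 150_000
fragment_ends = 2 * fragments     # each restriction fragment has two ends
coverage = reads / fragment_ends  # average reads per fragment end
```

    Two million reads over 300,000 fragment ends gives roughly 6.7 reads per end, consistent with the "average 6 fold coverage" reported.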

  12. Teleportation of continuous variable multimode Greenberger-Horne-Zeilinger entangled states

    International Nuclear Information System (INIS)

    He Guangqiang; Zhang Jingtao; Zeng Guihua

    2008-01-01

    Quantum teleportation protocols of continuous variable (CV) Greenberger-Horne-Zeilinger (GHZ) and Einstein-Podolsky-Rosen (EPR) entangled states are proposed, and are generalized to teleportation of arbitrary multimode GHZ entangled states described by Van Loock and Braunstein (2000 Phys. Rev. Lett. 84 3482). Each mode of a multimode entangled state is teleported using a CV EPR entangled pair and classical communication. The analytical expression of the fidelity for multimode Gaussian states, which evaluates the teleportation quality, is presented. The analytical results show that the fidelity is a function of both the squeezing parameter r, which characterizes the multimode entangled state to be teleported, and the channel parameter p, which characterizes the EPR pairs shared by Alice and Bob. The fidelity increases with increasing p, but decreases with increasing r, i.e., it is more difficult to teleport more perfect multimode entangled states. The entanglement degree of the teleported multimode entangled states increases with both increasing r and increasing p. In addition, it is proved that our teleportation protocol of EPR entangled states using parallel EPR pairs as quantum channels is the best case of the protocol using four-mode entangled states (Adhikari et al 2008 Phys. Rev. A 77 012337).

  13. Multimodal surveillance sensors, algorithms, and systems

    CERN Document Server

    Zhu, Zhigang

    2007-01-01

    From front-end sensors to systems and environmental issues, this practical resource guides you through the many facets of multimodal surveillance. The book examines thermal, vibration, video, and audio sensors in a broad context of civilian and military applications. This cutting-edge volume provides an in-depth treatment of data fusion algorithms that takes you to the core of multimodal surveillance, biometrics, and sentient computing. The book discusses people- and activity-related topics such as tracking people and vehicles and identifying individuals by their speech. Systems designers benefit from d

  14. Multimode waveguide speckle patterns for compressive sensing.

    Science.gov (United States)

    Valley, George C; Sefler, George A; Justin Shaw, T

    2016-06-01

    Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performance with smaller size, weight, and power than electronic CS or conventional Nyquist-rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with a performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit a robust performance with equal amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.
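
    The role of the measurement matrix can be illustrated with a plain software sketch in which a random sub-Gaussian matrix stands in for the measured speckle MM (an illustration only, not the authors' photonic system), and a k-sparse signal is recovered by orthogonal matching pursuit:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 4                       # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random stand-in for the speckle MM
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x                                  # compressive measurements
x_hat = omp(A, y, k)                       # should recover x with high probability
```

    With 64 measurements of a 4-sparse length-256 signal, recovery succeeds with overwhelming probability, which is the regime (above the phase transition) that the paper's speckle matrices are shown to reach.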

  15. Development of a Framework for Multimodal Research: Creation of a Bibliographic Database

    National Research Council Canada - National Science Library

    Coovert, Michael D; Gray, Ashley A; Elliott, Linda R; Redden, Elizabeth S

    2007-01-01

    .... The results of the overall effort, the multimodal framework and article tracking sheet, bibliographic database, and searchable multimodal database make substantial and valuable contributions to the accumulation and interpretation of multimodal research. References collected in this effort are listed in the appendix.

  16. Multimodal aspects of CSR communication related to gender empowerment and environmental protection

    DEFF Research Database (Denmark)

    Maier, Carmen Daniela

    Purpose – This paper explores how the multimodal persuasive strategies of CSR communication related to Coca-Cola’s “5 by 20” succeed in highlighting the company’s continuous commitment to gender empowerment and environmental protection. Launched in 2010, “5 by 20” is a program designed to empower 5... The paper proposes a multimodal analysis model through which it is possible to map and explain the multimodal persuasive strategies employed by the company in their CSR communication, as the usual textual focus is extended to a multimodal one. The paper is focused on the analysis of the video series that can be accessed at: http://www.coca-colacompany.com/stories/5by20. Based on a social... Shedding light on how the multimodal interplay contributes to communicating corporate commitment to gender empowerment and environmental protection, this model can also be employed to explore other areas of CSR communication multimodally.

  17. Mode-multiplexed transmission over conventional graded-index multimode fibers

    NARCIS (Netherlands)

    Ryf, R.; Fontaine, N.K.; Chen, H.; Guan, B.; Huang, B.; Esmaeelpour, M.; Gnauck, A.H.; Randel, S.; Yoo, S.J.B.; Koonen, A.M.J.; Shubochkin, R.; Sun, Yi; Lingle, R.

    2015-01-01

    We present experimental results for combined mode-multiplexed and wavelength multiplexed transmission over conventional graded-index multimode fibers. We use mode-selective photonic lanterns as mode couplers to precisely excite a subset of the modes of the multimode fiber and additionally to

  18. A Multimodal Database for Affect Recognition and Implicit Tagging

    NARCIS (Netherlands)

    Soleymani, Mohammad; Lichtenauer, Jeroen; Pun, Thierry; Pantic, Maja

    MAHNOB-HCI is a multimodal database recorded in response to affective stimuli with the goal of emotion recognition and implicit tagging research. A multimodal setup was arranged for synchronized recording of face videos, audio signals, eye gaze data, and peripheral/central nervous system

  19. Quantifying Quality Aspects of Multimodal Interactive Systems

    CERN Document Server

    Kühnel, Christine

    2012-01-01

    This book systematically addresses the quantification of quality aspects of multimodal interactive systems. The conceptual structure is based on a schematic view on human-computer interaction where the user interacts with the system and perceives it via input and output interfaces. Thus, aspects of multimodal interaction are analyzed first, followed by a discussion of the evaluation of output and input and concluding with a view on the evaluation of a complete system.

  20. Pedestrian detection from thermal images: A sparse representation based approach

    Science.gov (United States)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in the applications of advanced driver assistant systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex background, pedestrian detection is a challenging task for visual perception. Different from visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopted the histogram of sparse codes to represent image features and then detected pedestrians with the extracted features in a unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and individual dictionaries, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare with three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.
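
    The histogram-of-sparse-codes feature can be sketched with a deliberately crude 1-sparse coder. The paper's dictionaries are learned from training samples; the random dictionary and patches below are placeholders for illustration:

```python
import numpy as np

def sparse_code_histogram(patches, dictionary, n_nonzero=1):
    """Encode each patch by its most correlated dictionary atom(s)
    (a crude n-sparse code) and pool the codes into one histogram."""
    n_atoms = dictionary.shape[1]
    hist = np.zeros(n_atoms)
    for p in patches:
        corr = np.abs(dictionary.T @ p)
        for idx in np.argsort(corr)[-n_nonzero:]:
            hist[idx] += corr[idx]        # accumulate code magnitude per atom
    total = hist.sum()
    return hist / total if total > 0 else hist

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 32))             # 8x8 patches, 32 atoms (placeholder)
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
patches = rng.normal(size=(100, 64))      # placeholder patch data
h = sparse_code_histogram(patches, D)     # one normalized feature vector
```

    A real pipeline would learn `D` with a dictionary-learning algorithm and feed histograms like `h` to AdaBoost or an SVM, as in the record above.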

  1. Gastric Adenocarcinoma: A Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Humair S. Quadri

    2017-08-01

    Full Text Available Despite its declining incidence, gastric cancer (GC remains a leading cause of cancer-related deaths worldwide. A multimodal approach to GC is critical to ensure optimal patient outcomes. Pretherapy fine resolution contrast-enhanced cross-sectional imaging, endoscopic ultrasound and staging laparoscopy play an important role in patients with newly diagnosed ostensibly operable GC to avoid unnecessary non-therapeutic laparotomies. Currently, margin negative gastrectomy and adequate lymphadenectomy performed at high volume hospitals remain the backbone of GC treatment. Importantly, adequate GC surgery should be integrated in the setting of a multimodal treatment approach. Treatment for advanced GC continues to expand with the emergence of additional lines of systemic and targeted therapies.

  2. Kraus representation of a damped harmonic oscillator and its application

    International Nuclear Information System (INIS)

    Liu Yuxi; Oezdemir, Sahin K.; Miranowicz, Adam; Imoto, Nobuyuki

    2004-01-01

    By definition, the Kraus representation of a harmonic oscillator suffering from an environmental effect, modeled as amplitude damping or phase damping, is directly given by a simple operator algebra solution. As examples and applications, we first give a Kraus representation of a single qubit whose computational basis states are defined as the bosonic vacuum and single particle number states. We further discuss the environmental effect on qubits whose computational basis states are defined as the bosonic odd and even coherent states. The environmental effects on entangled qubits defined by the two different kinds of computational basis are compared with the use of fidelity.
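
    For the single-qubit amplitude-damping channel mentioned above, the standard Kraus operators can be checked numerically. The sketch below verifies the completeness relation and the decay of the excited state (the damping probability γ is an arbitrary illustrative value):

```python
import numpy as np

gamma = 0.3                                   # illustrative damping probability
E0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
E1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])

# Completeness relation: sum_k E_k^dagger E_k = I
completeness = E0.conj().T @ E0 + E1.conj().T @ E1

# Apply the channel to the excited state |1><1|: population gamma
# decays to |0><0|, the rest stays in |1><1|.
rho = np.array([[0.0, 0.0], [0.0, 1.0]])
rho_out = E0 @ rho @ E0.conj().T + E1 @ rho @ E1.conj().T
```

    These are the textbook amplitude-damping operators for a two-level system; the record's oscillator treatment generalizes the same operator-algebra construction.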

  3. New representation of water activity based on a single solute specific constant to parameterize the hygroscopic growth of aerosols in atmospheric models

    Directory of Open Access Journals (Sweden)

    S. Metzger

    2012-06-01

    Full Text Available Water activity is a key factor in aerosol thermodynamics and hygroscopic growth. We introduce a new representation of water activity (aw), which is empirically related to the solute molality (μs) through a single solute specific constant, νi. Our approach is widely applicable, considers the Kelvin effect and covers ideal solutions at high relative humidity (RH), including cloud condensation nuclei (CCN) activation. It also encompasses concentrated solutions with high ionic strength at low RH such as the relative humidity of deliquescence (RHD). The constant νi can thus be used to parameterize the aerosol hygroscopic growth over a wide range of particle sizes, from nanometer nucleation mode to micrometer coarse mode particles. In contrast to other aw representations, our νi factor corrects the solute molality both linearly and in the exponent form x · a^x. We present four representations of our basic aw parameterization at different levels of complexity for different aw ranges, e.g. up to 0.95, 0.98 or 1. νi is constant over the selected aw range, and in its most comprehensive form, the parameterization describes the entire aw range (0–1). In this work we focus on single solute solutions. νi can be pre-determined with a root-finding method from our water activity representation using an aw–μs data pair, e.g. at solute saturation using RHD and solubility measurements. Our aw and supersaturation (Köhler theory) results compare well with the thermodynamic reference model E-AIM for the key compounds NaCl and (NH4)2SO4 relevant for CCN modeling and calibration studies. Envisaged applications include regional and global atmospheric chemistry and
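
    The root-finding determination of νi from a single aw–μs data pair can be illustrated with bisection. The exponential model below is a placeholder functional form chosen only for the demonstration, not the paper's actual aw parameterization:

```python
import math

def solve_nu(aw_obs, mu_s, lo=1e-6, hi=50.0, tol=1e-12):
    """Bisection for the constant nu_i in an *illustrative* one-parameter
    water-activity model a_w = exp(-nu_i * mu_s). Placeholder form only."""
    f = lambda nu: math.exp(-nu * mu_s) - aw_obs
    assert f(lo) > 0 > f(hi)          # bracket: a_w decreases with nu_i
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Recover nu_i from one synthetic (a_w, mu_s) pair, e.g. at saturation (RHD).
nu_true = 0.037
mu_sat = 6.15                         # hypothetical saturation molality
aw_sat = math.exp(-nu_true * mu_sat)
nu = solve_nu(aw_sat, mu_sat)
```

    The same bracketing scheme works for any monotonic aw(μs; νi) form, which is why a single measured pair (e.g. RHD plus solubility) suffices to fix νi.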

  4. Multimodal Estimation of Distribution Algorithms.

    Science.gov (United States)

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate offspring at the niche level by alternately using these two distributions; this too can potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme conducted probabilistically around the seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
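
    The alternating Gaussian/Cauchy offspring generation can be sketched at the niche level as follows. This is a simplified illustration, not the authors' implementation; the alternation schedule and the scale parameter are assumptions:

```python
import numpy as np

def niche_offspring(seed, sigma, n, generation, rng):
    """Generate n offspring around a niche seed, alternating between a
    Gaussian (exploitation) and a heavy-tailed Cauchy (exploration)
    distribution across generations."""
    if generation % 2 == 0:
        return seed + rng.normal(scale=sigma, size=(n, seed.size))
    return seed + sigma * rng.standard_cauchy(size=(n, seed.size))

rng = np.random.default_rng(42)
seed = np.array([1.0, -2.0])              # seed of one niche
gauss_kids = niche_offspring(seed, 0.1, 5, generation=0, rng=rng)
cauchy_kids = niche_offspring(seed, 0.1, 5, generation=1, rng=rng)
```

    The Cauchy draws occasionally land far from the seed, which is exactly the long-jump exploration behavior the record describes; the Gaussian draws stay near the seed for local refinement.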

  5. Player/Avatar Body Relations in Multimodal Augmented Reality Games

    NARCIS (Netherlands)

    Rosa, N.E.

    2016-01-01

    Augmented reality research is finally moving towards multimodal experiences: more and more applications include not only visuals, but also audio and even haptics. The purpose of multimodality in these applications can be to increase realism or to increase the amount or quality of communicated

  6. Composition at Washington State University: Building a Multimodal Bricolage

    Science.gov (United States)

    Ericsson, Patricia; Hunter, Leeann Downing; Macklin, Tialitha Michelle; Edwards, Elizabeth Sue

    2016-01-01

    Multimodal pedagogy is increasingly accepted among composition scholars. However, putting such pedagogy into practice presents significant challenges. In this profile of Washington State University's first-year composition program, we suggest a multi-vocal and multi-theoretical approach to addressing the challenges of multimodal pedagogy. Patricia…

  7. Multifuel multimodal network design; Projeto de redes multicombustiveis multimodal

    Energy Technology Data Exchange (ETDEWEB)

    Lage, Carolina; Dias, Gustavo; Bahiense, Laura; Ferreira Filho, Virgilio J.M. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Programa de Engenharia de Producao

    2008-07-01

    The objective of the Multicommodity Multimodal Network Project is the development of modeling tools and methodologies for the optimal sizing of production networks and the multimodal distribution of multiple fuels and their inputs, considering investment and transportation costs. Given the inherently non-linear, combinatorial nature of the problem, the exact resolution of real instances by the complete model becomes computationally intractable. Thus, the resolution strategy should combine exact and heuristic methods applied to subdivisions of the original problem. This paper deals with one of these subdivisions, tackling the problem of modeling a network of pipelines to carry the production of ethanol away from the producing plants. The objective consists in defining the best network topology, minimizing investment and operational costs while meeting the total demand. To that end, the network was modeled as a tree, where the nodes are the centers of producing regions and the edges are the pipelines through which the ethanol produced by the plants must be transported. The problem also includes the decision on the optimal diameter of each pipeline and the optimal size of the pumps, in order to minimize pumping costs. (author)

  8. MULTIMODAL FEEDBACK PROVISION IN IMPROVING PRE-SERVICE TEACHERS’ COMPETENCE

    Directory of Open Access Journals (Sweden)

    Fazri Nur Yusuf

    2017-09-01

    Full Text Available The potential of feedback in English language teaching seems not to have been well explored, including its use to improve English pre-service teachers' competence. The present study investigates to what extent multimodal feedback can influence pre-service teachers' teaching, and which teaching aspects are influenced. Twenty-five pre-service teachers taking a Microteaching Course, supervised by a course advisor, served as respondents. The data were collected by teacher observation using a rating-scale form, self-appraisal, and interviews, and were analyzed using a correlated-samples t-test and the eight teaching components proposed by Brown (2001). The results showed that after the provision of multimodal feedback, pre-service teachers improved significantly in seven out of eight teaching aspects. The provision of multimodal feedback improved their teaching competence in preparation, elicitation of instructional objectives, mastery of instructional materials, use of media, and classroom management, including classroom language. However, the results do not indicate that they performed well on reflection and follow-up, for several reasons. In addition, the results show that multimodal feedback provision can improve pre-service teachers' pedagogical competence when the feedback is integrated with content, interpersonal relationship, and management.

  9. Multimode optical fiber

    Science.gov (United States)

    Bigot-Astruc, Marianne; Molin, Denis; Sillard, Pierre

    2014-11-04

    A depressed graded-index multimode optical fiber includes a central core, an inner depressed cladding, a depressed trench, an outer depressed cladding, and an outer cladding. The central core has an alpha-index profile. The depressed claddings limit the impact of leaky modes on optical-fiber performance characteristics (e.g., bandwidth, core size, and/or numerical aperture).

  10. The Interaction between Semantic Representation and Episodic Memory.

    Science.gov (United States)

    Fang, Jing; Rüther, Naima; Bellebaum, Christian; Wiskott, Laurenz; Cheng, Sen

    2018-02-01

    The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, which means that the representation is used to store input sequences that are of the same type as those that it was trained on, and inappropriate representations, which means that stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remember some aspects of episodes significantly more accurately if they had previously been familiarized with the objects occurring in the episode, as compared to episodes involving unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role for episodic memory.

  11. Multimodal versus Unimodal Instruction in a Complex Learning Context.

    Science.gov (United States)

    Gellevij, Mark; van der Meij, Hans; de Jong, Ton; Pieters, Jules

    2002-01-01

    Compared multimodal instruction with text and pictures with unimodal text-only instruction as 44 college students used a visual or textual manual to learn a complex software application. Results initially support dual coding theory and indicate that multimodal instruction led to better performance than unimodal instruction. (SLD)

  12. The Big Five: Addressing Recurrent Multimodal Learning Data Challenges

    NARCIS (Netherlands)

    Di Mitri, Daniele; Schneider, Jan; Specht, Marcus; Drachsler, Hendrik

    2018-01-01

    The analysis of multimodal data in learning is a growing field of research, which has led to the development of different analytics solutions. However, there is no standardised approach to handle multimodal data. In this paper, we describe and outline a solution for five recurrent challenges in

  13. The Stability of Multi-modal Traffic Network

    International Nuclear Information System (INIS)

    Han Linghui; Sun Huijun; Zhu Chengjuan; Jia Bin; Wu Jianjun

    2013-01-01

    There is an explicit and implicit assumption in multimodal traffic equilibrium models, namely that if the equilibrium exists, then it will also occur. This assumption is highly idealized; in fact, quite the contrary can happen, because in a multimodal traffic network, especially under mixed traffic conditions, the interaction among traffic modes is asymmetric, and this asymmetric interaction may destabilize the traffic system. In this paper, to study the stability of a multimodal traffic system, we present travel cost functions for mixed traffic conditions and for a traffic network with dedicated bus lanes. Based on a day-to-day dynamical model, we study the evolution of travelers' daily route choice in a multimodal traffic network using 10000 random initial values for different cases. The simulation results show that the asymmetric interaction between cars and buses in mixed traffic conditions can drive the traffic system to instability when traffic demand is large. We also study the effect of travelers' perception error on the stability of the multimodal traffic network. Although a larger perception error can alleviate the effect of the interaction between cars and buses and improve the stability of the traffic system in mixed traffic conditions, the traffic system still becomes unstable when the traffic demand exceeds a certain level. For all cases simulated in this study, with the same parameters, the traffic system with a dedicated bus lane is more stable with respect to traffic demand than the one under mixed traffic conditions. We also find that the network with a dedicated bus lane has a higher proportion of travelers choosing the bus than the mixed traffic network. It can thus be concluded that building dedicated bus lanes can improve the stability of the traffic system and attract more travelers to the bus, reducing traffic congestion. (general)
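
    The day-to-day dynamical model can be illustrated with a toy two-route version, where travelers shift toward yesterday's cheaper route via a logit choice and the flow adjusts smoothly from day to day. All cost coefficients and the adjustment rate below are assumptions for illustration, not the paper's specification:

```python
import math

def day_to_day(days=200, theta=0.5, demand=10.0):
    """Toy day-to-day dynamics on two parallel routes with linear costs:
    a logit split on yesterday's costs, blended with yesterday's flow."""
    flow1 = demand / 2.0               # initial even split
    history = []
    for _ in range(days):
        cost1 = 1.0 + 0.2 * flow1                 # route 1 travel cost
        cost2 = 2.0 + 0.1 * (demand - flow1)      # route 2 travel cost
        p1 = 1.0 / (1.0 + math.exp(theta * (cost1 - cost2)))
        flow1 = 0.9 * flow1 + 0.1 * demand * p1   # smoothed daily adjustment
        history.append(flow1)
    return history

flows = day_to_day()
```

    With these gentle parameters the map is a contraction and settles to a stochastic user equilibrium; the paper's point is that asymmetric car-bus interactions can break exactly this kind of convergence at high demand.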

  14. Hand Specific Representations in Language Comprehension

    Directory of Open Access Journals (Sweden)

    Claire Moody-Triantis

    2014-06-01

    Full Text Available Theories of embodied cognition argue that language comprehension involves sensory-motor re-enactments of the actions described. However, the degree of specificity of these re-enactments as well as the relationship between action and language remains a matter of debate. Here we investigate these issues by examining how hand-specific information (left or right hand) is recruited in language comprehension and action execution. An fMRI study tested right-handed participants in two separate tasks that were designed to be as similar as possible to increase the sensitivity of the comparison across tasks: an action execution go/no-go task where participants performed right or left hand actions, and a language task where participants read sentences describing the same left or right handed actions as in the execution task. We found that language-induced activity did not match the hand-specific patterns of activity found for action execution in primary somatosensory and motor cortex, but it overlapped with pre-motor and parietal regions associated with action planning. Within these pre-motor regions, both right hand actions and sentences elicited stronger activity than left hand actions and sentences (a dominant hand effect). Importantly, both dorsal and ventral sections of the left pre-central gyrus were recruited by both tasks, suggesting that different action features are recruited. These results suggest that (a) language comprehension elicits motor representations that are hand-specific and akin to multimodal action plans, rather than full action re-enactments; and (b) language comprehension and action execution share schematic hand-specific representations that are richer for the dominant hand, and thus linked to previous motor experience.

  15. Hand specific representations in language comprehension.

    Science.gov (United States)

    Moody-Triantis, Claire; Humphreys, Gina F; Gennari, Silvia P

    2014-01-01

    Theories of embodied cognition argue that language comprehension involves sensory-motor re-enactments of the actions described. However, the degree of specificity of these re-enactments as well as the relationship between action and language remains a matter of debate. Here we investigate these issues by examining how hand-specific information (left or right hand) is recruited in language comprehension and action execution. An fMRI study tested self-reported right-handed participants in two separate tasks that were designed to be as similar as possible to increase the sensitivity of the comparison across tasks: an action execution go/no-go task where participants performed right or left hand actions, and a language task where participants read sentences describing the same left or right handed actions as in the execution task. We found that language-induced activity did not match the hand-specific patterns of activity found for action execution in primary somatosensory and motor cortex, but it overlapped with pre-motor and parietal regions associated with action planning. Within these pre-motor regions, both right hand actions and sentences elicited stronger activity than left hand actions and sentences (a dominant hand effect). Importantly, both dorsal and ventral sections of the left pre-central gyrus were recruited by both tasks, suggesting that different action features are recruited. These results suggest that (a) language comprehension elicits motor representations that are hand-specific and akin to multimodal action plans, rather than full action re-enactments; and (b) language comprehension and action execution share schematic hand-specific representations that are richer for the dominant hand, and thus linked to previous motor experience.

  16. Coordinate Systems Integration for Craniofacial Database from Multimodal Devices

    Directory of Open Access Journals (Sweden)

    Deni Suwardhi

    2005-05-01

Full Text Available This study presents a data registration method for craniofacial spatial data of different modalities. The data consist of three-dimensional (3D) vector and raster data models and are stored in an object-relational database. The data capture devices are a laser scanner, CT (Computed Tomography) and CR (Close Range Photogrammetry). The objective of the registration is to transform the data from the various device coordinate systems into a single 3D Cartesian coordinate system. The standard error of the registration obtained from the multimodal imaging devices using a 3D affine transformation is in the range of 1-2 mm. This study is a step forward for storing the craniofacial spatial data in one reference system in a database.
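The registration step described above, estimating a 3D affine transformation between corresponding points measured by two devices, can be sketched as an ordinary least-squares fit. This is a generic illustration on synthetic points, not the authors' implementation; the names, transform values and noise level are invented (the noise is chosen to be of the same scale as the reported 1-2 mm error).

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Estimate A (3x3) and t (3,) such that dst ~ src @ A.T + t,
    by ordinary least squares on homogeneous coordinates."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])          # n x 4 homogeneous points
    # Solve X @ M = dst for M (4 x 3) in the least-squares sense
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M[:3].T, M[3]                            # A, t

# Synthetic check: recover a known affine map from noisy correspondences
rng = np.random.default_rng(0)
src = rng.uniform(-50, 50, (200, 3))                # e.g. laser-scanner points (mm)
A_true = np.array([[1.0, 0.02, 0.0],
                   [-0.02, 1.0, 0.01],
                   [0.0, 0.0, 0.98]])
t_true = np.array([5.0, -3.0, 12.0])
dst = src @ A_true.T + t_true + rng.normal(0, 0.5, src.shape)  # CT points + noise

A_est, t_est = fit_affine_3d(src, dst)
residual = dst - (src @ A_est.T + t_est)
rmse = np.sqrt((residual ** 2).sum(axis=1).mean())  # RMS residual in mm
print(round(rmse, 2))
```

With enough well-spread correspondences the fit averages out the per-point measurement noise, which is why the reported registration error can be as small as 1-2 mm.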

  17. PET-MRI and multimodal cancer imaging

    International Nuclear Information System (INIS)

    Wang Taisong; Zhao Jinhua; Song Jianhua

    2011-01-01

Multimodality imaging, specifically PET-CT, brought a new perspective to the field of clinical imaging. Clinical cases have shown that PET-CT has great value in clinical diagnosis and experimental research, but PET-CT still has some limitations. A major drawback is that CT provides only limited soft-tissue contrast and exposes the patient to a significant radiation dose. MRI overcomes these limitations: it has excellent soft-tissue contrast, high temporal and spatial resolution, and involves no radiation damage. Additionally, since MRI also provides functional information, PET-MRI points to a new direction for multimodality imaging in the future. (authors)

  18. Multimodal training between agents

    DEFF Research Database (Denmark)

    Rehm, Matthias

    2003-01-01

In the system Locator1, agents are treated as individual and autonomous subjects that are able to adapt to heterogeneous user groups. Applying multimodal information from their surroundings (visual and linguistic), they acquire the necessary concepts for a successful interaction. This approach has...

  19. On Curating Multimodal Sensory Data for Health and Wellness Platforms

    Directory of Open Access Journals (Sweden)

    Muhammad Bilal Amin

    2016-06-01

    Full Text Available In recent years, the focus of healthcare and wellness technologies has shown a significant shift towards personal vital signs devices. The technology has evolved from smartphone-based wellness applications to fitness bands and smartwatches. The novelty of these devices is the accumulation of activity data as their users go about their daily life routine. However, these implementations are device specific and lack the ability to incorporate multimodal data sources. Data accumulated in their usage does not offer rich contextual information that is adequate for providing a holistic view of a user’s lifelog. As a result, making decisions and generating recommendations based on this data are single dimensional. In this paper, we present our Data Curation Framework (DCF which is device independent and accumulates a user’s sensory data from multimodal data sources in real time. DCF curates the context of this accumulated data over the user’s lifelog. DCF provides rule-based anomaly detection over this context-rich lifelog in real time. To provide computation and persistence over the large volume of sensory data, DCF utilizes the distributed and ubiquitous environment of the cloud platform. DCF has been evaluated for its performance, correctness, ability to detect complex anomalies, and management support for a large volume of sensory data.

  20. On Curating Multimodal Sensory Data for Health and Wellness Platforms

    Science.gov (United States)

    Amin, Muhammad Bilal; Banos, Oresti; Khan, Wajahat Ali; Muhammad Bilal, Hafiz Syed; Gong, Jinhyuk; Bui, Dinh-Mao; Cho, Soung Ho; Hussain, Shujaat; Ali, Taqdir; Akhtar, Usman; Chung, Tae Choong; Lee, Sungyoung

    2016-01-01

    In recent years, the focus of healthcare and wellness technologies has shown a significant shift towards personal vital signs devices. The technology has evolved from smartphone-based wellness applications to fitness bands and smartwatches. The novelty of these devices is the accumulation of activity data as their users go about their daily life routine. However, these implementations are device specific and lack the ability to incorporate multimodal data sources. Data accumulated in their usage does not offer rich contextual information that is adequate for providing a holistic view of a user’s lifelog. As a result, making decisions and generating recommendations based on this data are single dimensional. In this paper, we present our Data Curation Framework (DCF) which is device independent and accumulates a user’s sensory data from multimodal data sources in real time. DCF curates the context of this accumulated data over the user’s lifelog. DCF provides rule-based anomaly detection over this context-rich lifelog in real time. To provide computation and persistence over the large volume of sensory data, DCF utilizes the distributed and ubiquitous environment of the cloud platform. DCF has been evaluated for its performance, correctness, ability to detect complex anomalies, and management support for a large volume of sensory data. PMID:27355955
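The rule-based anomaly detection that DCF performs over the curated lifelog can be illustrated with a minimal sketch. The sensors, context labels, rules and thresholds below are invented for illustration; the abstract does not specify DCF's actual rule language.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Reading:
    user: str
    sensor: str      # e.g. "heart_rate", "steps"
    value: float
    context: str     # curated context label, e.g. "sleeping", "running"

# Each rule is a (label, predicate) pair; thresholds are illustrative only.
RULES: List[tuple] = [
    ("high HR at rest", lambda r: r.sensor == "heart_rate"
                                  and r.context == "sleeping" and r.value > 100),
    ("implausible step burst", lambda r: r.sensor == "steps"
                                  and r.value > 300),   # steps per minute
]

def detect_anomalies(stream):
    """Yield (label, reading) for every reading matching a rule."""
    for r in stream:
        for label, pred in RULES:
            if pred(r):
                yield label, r

stream = [
    Reading("u1", "heart_rate", 62, "sleeping"),
    Reading("u1", "heart_rate", 118, "sleeping"),   # anomalous under rule 1
    Reading("u1", "steps", 90, "walking"),
]
anomalies = list(detect_anomalies(stream))
print([label for label, _ in anomalies])   # ['high HR at rest']
```

The point of the curated context is visible even in this toy: the same heart-rate value of 118 would be unremarkable in a "running" context and is only flagged because the lifelog says the user is asleep.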

  1. A Multimodal Discourse Analysis of Tmall's Double Eleven Advertisement

    Science.gov (United States)

    Hu, Chunyu; Luo, Mengxi

    2016-01-01

    From the 1990s, the multimodal turn in discourse studies makes multimodal discourse analysis a popular topic in linguistics and communication studies. An important approach to applying Systemic Functional Linguistics to non-verbal modes is Visual Grammar initially proposed by Kress and van Leeuwen (1996). Considering that commercial advertisement…

  2. Testing nonclassicality in multimode fields: A unified derivation of classical inequalities

    International Nuclear Information System (INIS)

    Miranowicz, Adam; Bartkowiak, Monika; Wang Xiaoguang; Liu Yuxi; Nori, Franco

    2010-01-01

We consider a way to generate operational inequalities to test nonclassicality (or quantumness) of multimode bosonic fields (or multiparty bosonic systems) that unifies the derivation of many known inequalities and allows new ones to be proposed. The nonclassicality criteria are based on Vogel's criterion corresponding to analyzing the positivity of multimode P functions or, equivalently, the positivity of matrices of expectation values of, e.g., creation and annihilation operators. We analyze not only monomials but also polynomial functions of such moments, which can sometimes enable simpler derivations of physically relevant inequalities. As an example, we derive various classical inequalities which can be violated only by nonclassical fields. In particular, we show how the criteria introduced here easily reduce to the well-known inequalities describing (a) multimode quadrature squeezing and its generalizations, including sum, difference, and principal squeezing; (b) two-mode one-time photon-number correlations, including sub-Poisson photon-number correlations and effects corresponding to violations of the Cauchy-Schwarz and Muirhead inequalities; (c) two-time single-mode photon-number correlations, including photon antibunching and hyperbunching; and (d) two- and three-mode quantum entanglement. Other simple inequalities for testing nonclassicality are also proposed. We have found some general relations between the nonclassicality and entanglement criteria, in particular those resulting from the Cauchy-Schwarz inequality. It is shown that some known entanglement inequalities can be derived as nonclassicality inequalities within our formalism, while some other known entanglement inequalities can be seen as sums of more than one inequality derived from the nonclassicality criterion. This approach enables a deeper analysis of the entanglement for a given nonclassicality.
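As a concrete instance of criterion (b) above, the well-known two-mode Cauchy-Schwarz inequality for photon-number correlations can be written, in its standard quantum-optics form (which may differ in normalization from the version derived in the paper):

```latex
\langle \hat{a}^{\dagger 2}\hat{a}^{2}\rangle \,
\langle \hat{b}^{\dagger 2}\hat{b}^{2}\rangle \;\ge\;
\bigl|\langle \hat{a}^{\dagger}\hat{a}\,\hat{b}^{\dagger}\hat{b}\rangle\bigr|^{2}
```

Every classical field (one with a positive P function) satisfies this bound; a state that violates it, such as a two-mode squeezed vacuum, is therefore necessarily nonclassical.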

  3. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    Science.gov (United States)

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes into account neighborhood information using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other often-used techniques demonstrates the accuracy and the robustness of this new Markovian segmentation scheme.
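The neighborhood smoothing an HMC model provides can be illustrated on a toy 1D chain of voxel intensities with Gaussian class emissions. The forward-backward recursion below is the textbook algorithm only, not the paper's full scheme (which adds partial volume, bias field correction and atlas priors), and all numbers are invented.

```python
import numpy as np

def forward_backward(obs, trans, means, var, prior):
    """Posterior P(state | all observations) for a Gaussian-emission HMM,
    with per-step normalization for numerical stability."""
    T, K = len(obs), len(means)
    emis = np.exp(-(obs[:, None] - means) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    alpha = np.zeros((T, K)); beta = np.ones((T, K))
    alpha[0] = prior * emis[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):                       # forward pass
        alpha[t] = emis[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = trans @ (emis[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)

# Two "tissue" classes along a 1D scan of voxel intensities (toy numbers)
trans = np.array([[0.95, 0.05], [0.05, 0.95]])   # neighbors favor the same class
means = np.array([30.0, 80.0]); var = 100.0
prior = np.array([0.5, 0.5])
obs = np.array([28.0, 31.0, 55.0, 78.0, 82.0, 79.0, 33.0])  # noisy voxel intensities
post = forward_backward(obs, trans, means, var, prior)
labels = post.argmax(axis=1)
print(labels.tolist())
```

Rather than thresholding each voxel independently, the chain lets unambiguous neighbors pull uncertain voxels toward the locally dominant class, which is the same idea the paper applies (in richer form) to a scan of the 3D volume.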

  4. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    Science.gov (United States)

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making the tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single T1 or T2 image, or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a type of deep model in which trainable filters and local neighborhood pooling operations are applied alternately on the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
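The core idea of feeding the modalities as input channels can be seen in a single multi-channel convolution, the basic operation such a CNN stacks into deep layers. This is a toy numpy sketch with random data standing in for T1/T2/FA patches, not the paper's network.

```python
import numpy as np

def conv2d_multichannel(x, w):
    """Valid-mode 2D convolution that sums over input channels.
    x: (C, H, W) multi-modality input; w: (C, kh, kw) one trainable filter."""
    C, H, W = x.shape
    _, kh, kw = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value mixes all modalities within a local window
            out[i, j] = (x[:, i:i + kh, j:j + kw] * w).sum()
    return out

# Three "modalities" (stand-ins for co-registered T1, T2 and FA patches)
rng = np.random.default_rng(1)
x = rng.normal(size=(3, 8, 8))
w = rng.normal(size=(3, 3, 3))
y = conv2d_multichannel(x, w)
print(y.shape)   # (6, 6)
```

Because the filter spans all channels, the learned weights can combine evidence from T1, T2 and FA at every spatial location, which is exactly why adding modalities can help where any single image is isointense.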

  5. A representation theorem for linear discrete-space systems

    Directory of Open Access Journals (Sweden)

    Sandberg Irwin W.

    1998-01-01

    Full Text Available The cornerstone of the theory of discrete-time single-input single-output linear systems is the idea that every such system has an input–output map H that can be represented by a convolution or the familiar generalization of a convolution. This thinking involves an oversight which is corrected in this note by adding an additional term to the representation.
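The classical representation in question is, in standard notation for a linear map H with kernel h (the note's corrected representation adds a further term, which is not reproduced here):

```latex
(Hx)(n) \;=\; \sum_{k=-\infty}^{\infty} h(n,k)\,x(k),
\qquad\text{reducing to}\qquad
(Hx)(n) \;=\; \sum_{k=-\infty}^{\infty} h(n-k)\,x(k)
```

in the shift-invariant case, where the sum is the familiar discrete convolution.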

  6. Modeling most likely pathways for smuggling radioactive and special nuclear materials on a worldwide multimodal transportation network

    Energy Technology Data Exchange (ETDEWEB)

    Saeger, Kevin J [Los Alamos National Laboratory; Cuellar, Leticia [Los Alamos National Laboratory

    2010-01-01

    Nuclear weapons proliferation is an existing and growing worldwide problem. To help with devising strategies and supporting decisions to interdict the transport of nuclear material, we developed the Pathway Analysis, Threat Response and Interdiction Options Tool (PATRIOT) that provides an analytical approach for evaluating the probability that an adversary smuggling radioactive or special nuclear material will be detected during transit. We incorporate a global, multi-modal transportation network, explicit representation of designed and serendipitous detection opportunities, and multiple threat devices, material types, and shielding levels. This paper presents the general structure of PATRIOT, and focuses on the theoretical framework used to model the reliabilities of all network components that are used to predict the most likely pathways to the target.
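The "most likely pathway" computation can be illustrated with a standard trick: if each network link has an independent probability of the shipment evading detection, the most likely undetected path maximizes the product of those probabilities, i.e. minimizes the sum of their negative logs, which is an ordinary shortest-path problem. This is a generic sketch with an invented toy network; PATRIOT's actual reliability models are far richer.

```python
import heapq, math

def most_likely_path(edges, start, goal):
    """edges: dict node -> list of (neighbor, p_evade).
    Returns (overall evasion probability, path) maximizing the product of
    per-link evasion probabilities, via Dijkstra on -log(p) edge weights."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v, p in edges.get(u, []):
            nd = d - math.log(p)          # product of probs <-> sum of -logs
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return math.exp(-dist[goal]), path[::-1]

# Toy network: per-link probability that the material evades detection
edges = {
    "origin":  [("port_A", 0.9), ("port_B", 0.6)],
    "port_A":  [("target", 0.5)],
    "port_B":  [("target", 0.95)],
}
p, path = most_likely_path(edges, "origin", "target")
print(round(p, 3), path)
```

Here the route through port_B wins (0.6 x 0.95 = 0.57) despite its weaker first leg, which is the kind of non-obvious pathway ranking such a tool is built to surface.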

  7. Agrupador de imágenes multimodal no supervisado

    OpenAIRE

    Pérez Hernando, Jesús

    2013-01-01

Master's thesis (Trabajo Fin de Máster) for the Computer Science Engineering program, implementing an unsupervised multimodal method for classifying unlabeled images without human intervention.

  8. Multimodal news framing effects

    NARCIS (Netherlands)

    Powell, T.E.

    2017-01-01

    Visuals in news media play a vital role in framing citizens’ political preferences. Yet, compared to the written word, visual images are undervalued in political communication research. Using framing theory, this thesis redresses the balance by studying the combined, or multimodal, effects of visual

  9. Conditions for the Effectiveness of Multiple Visual Representations in Enhancing STEM Learning

    Science.gov (United States)

    Rau, Martina A.

    2017-01-01

    Visual representations play a critical role in enhancing science, technology, engineering, and mathematics (STEM) learning. Educational psychology research shows that adding visual representations to text can enhance students' learning of content knowledge, compared to text-only. But should students learn with a single type of visual…

  10. vECTlab-A fully integrated multi-modality Monte Carlo simulation framework for the radiological imaging sciences

    International Nuclear Information System (INIS)

    Peter, Joerg; Semmler, Wolfhard

    2007-01-01

Alongside and in part motivated by recent advances in molecular diagnostics, the development of dual-modality instruments for patient and dedicated small animal imaging has gained attention by diverse research groups. The desire for such systems is high not only to link molecular or functional information with the anatomical structures, but also for detecting multiple molecular events simultaneously at shorter total acquisition times. While PET and SPECT have been integrated successfully with X-ray CT, the advance of optical imaging approaches (OT) and their integration into existing modalities carry a high application potential, particularly for imaging small animals. A multi-modality Monte Carlo (MC) simulation approach has been developed that is able to trace high-energy (keV) as well as optical (eV) photons concurrently within identical phantom representation models. We show that the two approaches involved for ray-tracing keV and eV photons can be integrated into a unique simulation framework which enables both photon classes to be propagated through various geometry models representing both phantoms and scanners. The main advantage of such an integrated framework for our specific application is the investigation of novel tomographic multi-modality instrumentation intended for in vivo small animal imaging through time-resolved MC simulation upon identical phantom geometries. Design examples are provided for recently proposed SPECT-OT and PET-OT imaging systems
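Whatever the photon class, the heart of such an MC ray-tracer is sampling free path lengths from the exponential attenuation law and choosing between absorption and scattering at each interaction. The toy 1D slab model below uses invented coefficients and a deliberately crude scattering rule; it checks the pure-absorber case against the Beer-Lambert law.

```python
import math, random

def transmitted_fraction(mu_total, mu_absorb, thickness, n=100_000, seed=7):
    """Fraction of photons crossing a 1D slab of given thickness (cm).
    mu_total/mu_absorb are total/absorption coefficients (1/cm); scattering
    just flips the direction at random (a crude toy model, not real physics)."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n):
        x, direction = 0.0, 1.0
        while True:
            # Free path length sampled from the exponential attenuation law
            x += direction * (-math.log(rng.random()) / mu_total)
            if x >= thickness:
                escaped += 1
                break
            if x < 0:
                break                                  # escaped backwards
            if rng.random() < mu_absorb / mu_total:
                break                                  # absorbed
            direction = rng.choice((-1.0, 1.0))        # toy 1D scatter
    return escaped / n

# Pure absorber: the MC estimate should reproduce Beer-Lambert exp(-mu * d)
frac = transmitted_fraction(mu_total=1.0, mu_absorb=1.0, thickness=2.0)
print(round(frac, 3), round(math.exp(-2.0), 3))
```

The same sampling loop works for keV and eV photons alike; only the interaction coefficients and scattering models differ, which is what makes a shared geometry framework for both photon classes practical.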

  11. Object recognition through a multi-mode fiber

    Science.gov (United States)

    Takagi, Ryosuke; Horisaki, Ryoichi; Tanida, Jun

    2017-04-01

We present a method of recognizing an object through a multi-mode fiber. A number of speckle patterns transmitted through a multi-mode fiber are provided to a classifier based on machine learning. We experimentally demonstrated binary classification of face and non-face targets based on the method. The measurement process of the experimental setup was random and nonlinear because a multi-mode fiber is a typical strongly scattering medium and no reference light was used in our setup. Comparisons between three supervised learning methods, support vector machine, adaptive boosting, and neural network, are also provided. All of those learning methods achieved high accuracy rates at about 90% for the classification. The approach presented here can realize a compact and smart optical sensor. It is practically useful for medical applications, such as endoscopy. Our study also indicated a promising utilization of artificial intelligence, which has rapidly progressed, for reducing optical and computational costs in optical sensing systems.
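The pipeline, speckle intensities in and a class label out, can be mimicked end to end with synthetic data and a simple linear classifier trained by gradient descent. This is a toy stand-in for the SVM, AdaBoost and neural-network classifiers compared in the paper; the "speckle" generator below is invented and far easier than real fiber output.

```python
import numpy as np

rng = np.random.default_rng(42)

def speckle(cls, n, d=64):
    """Synthetic 'speckle' features: exponentially distributed intensities
    whose mean pattern depends on the hidden class (a crude fiber stand-in)."""
    base = np.where(np.arange(d) % 2 == cls, 2.0, 1.0)   # class-dependent means
    return rng.exponential(base, size=(n, d))

X = np.vstack([speckle(0, 200), speckle(1, 200)])
y = np.array([0] * 200 + [1] * 200)
idx = rng.permutation(400)
X, y = X[idx], y[idx]
Xtr, Xte, ytr, yte = X[:300], X[300:], y[:300], y[300:]

# Logistic regression trained by batch gradient descent
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xtr @ w + b)))
    g = p - ytr
    w -= 0.01 * (Xtr.T @ g) / len(ytr)
    b -= 0.01 * g.mean()
acc = (((1 / (1 + np.exp(-(Xte @ w + b)))) > 0.5) == yte).mean()
print(round(acc, 2))
```

As in the paper, no physical model of the scattering is needed: the classifier only has to find a statistical regularity in the speckle, which is why reference-free, nonlinear measurements still suffice.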

  12. Preterm EEG: a multimodal neurophysiological protocol.

    Science.gov (United States)

    Stjerna, Susanna; Voipio, Juha; Metsäranta, Marjo; Kaila, Kai; Vanhatalo, Sampsa

    2012-02-18

Since its introduction in the early 1950s, electroencephalography (EEG) has been widely used in neonatal intensive care units (NICU) for the assessment and monitoring of brain function in preterm and term babies. The most common indications are the diagnosis of epileptic seizures, assessment of brain maturity, and recovery from hypoxic-ischemic events. EEG recording techniques and the understanding of neonatal EEG signals have dramatically improved, but these advances have been slow to penetrate clinical traditions. The aim of this presentation is to make theory and practice of advanced EEG recording available for neonatal units. In the theoretical part, we present animations to illustrate how a preterm brain gives rise to spontaneous and evoked EEG activities, both of which are unique to this developmental phase, as well as crucial for proper brain maturation. Recent animal work has shown that structural brain development is clearly reflected in early EEG activity. The most important structures in this regard are the growing long-range connections and the transient cortical structure, the subplate. Sensory stimuli in a preterm baby will generate responses that are seen at a single-trial level, and they have underpinnings in the subplate-cortex interaction. This brings neonatal EEG readily into a multimodal study, where EEG is not only recording cortical function but also testing subplate function via different sensory modalities. Finally, the introduction of clinically suitable dense-array EEG caps, as well as amplifiers capable of recording low frequencies, has disclosed a multitude of brain activities that have so far been overlooked. In the practical part of this video, we show how a multimodal, dense-array EEG study is performed in the neonatal intensive care unit on a preterm baby in the incubator. The video demonstrates preparation of the baby and incubator, application of the EEG cap, and performance of the sensory stimulations.

  13. Feasibility of space-division-multiplexed transmission of IEEE 802.11 n/ac-compliant wireless MIMO signals over OM3 multimode fiber

    NARCIS (Netherlands)

    Lei, Yi; Li, Jianqiang; Meng, Ziyi; Wu, Rui; Wan, Zhiquan; Fan, Yuting; Zhang, Wenjia; Yin, Feifei; Dai, Yitang; Xu, Kun

    2018-01-01

    In this paper, we have experimentally demonstrated the feasibility of space-division-multiplexed 3 × 3 multiple-input multiple-output (MIMO) transmission over a single OM3 multimode fiber (MMF) using commercial IEEE 802.11 n/ac access points. Throughput performance for different fiber length links

  14. Strategy development management of Multimodal Transport Network

    Directory of Open Access Journals (Sweden)

    Nesterova Natalia S.

    2016-01-01

Full Text Available The article gives a brief overview of works on the development of transport infrastructure for multimodal transportation and the integration of the Russian transport system into international transport corridors. The technology for controlling the strategy that changes the shape and capacity of the Multimodal Transport Network (MTN) is considered as part of the methodology for designing and developing the MTN. This technology allows strategic and operational management of the strategy implementation based on the use of the balanced scorecard.

  15. Amputation and prosthesis implantation shape body and peripersonal space representations.

    Science.gov (United States)

    Canzoneri, Elisa; Marzolla, Marilena; Amoresano, Amedeo; Verni, Gennaro; Serino, Andrea

    2013-10-03

    Little is known about whether and how multimodal representations of the body (BRs) and of the space around the body (Peripersonal Space, PPS) adapt to amputation and prosthesis implantation. In order to investigate this issue, we tested BR in a group of upper limb amputees by means of a tactile distance perception task and PPS by means of an audio-tactile interaction task. Subjects performed the tasks with stimulation either on the healthy limb or the stump of the amputated limb, while wearing or not wearing their prosthesis. When patients performed the tasks on the amputated limb, without the prosthesis, the perception of arm length shrank, with a concurrent shift of PPS boundaries towards the stump. Conversely, wearing the prosthesis increased the perceived length of the stump and extended the PPS boundaries so as to include the prosthetic hand, such that the prosthesis partially replaced the missing limb.

  16. Multimodal Behavior Therapy: Case Study of a High School Student.

    Science.gov (United States)

    Seligman, Linda

    1981-01-01

    A case study of a high school student concerned with weight problems illustrates multimodal behavior therapy and its use in a high school setting. Multimodal therapy allows the school counselor to maximize referral sources while emphasizing growth and actualization. (JAC)

  17. Statistical representation of a spray as a point process

    International Nuclear Information System (INIS)

    Subramaniam, S.

    2000-01-01

The statistical representation of a spray as a finite point process is investigated. One objective is to develop a better understanding of how single-point statistical information contained in descriptions such as the droplet distribution function (ddf) relates to the probability density functions (pdfs) associated with the droplets themselves. Single-point statistical information contained in the droplet distribution function (ddf) is shown to be related to a sequence of single surrogate-droplet pdfs, which are in general different from the physical single-droplet pdfs. It is shown that the ddf contains less information than the fundamental single-point statistical representation of the spray, which is also described. The analysis shows which events associated with the ensemble of spray droplets can be characterized by the ddf, and which cannot. The implications of these findings for the ddf approach to spray modeling are discussed. The results of this study also have important consequences for the initialization and evolution of direct numerical simulations (DNS) of multiphase flows, which are usually initialized on the basis of single-point statistics such as the droplet number density in physical space. If multiphase DNS are initialized in this way, this implies that even the initial representation contains certain implicit assumptions concerning the complete ensemble of realizations, which are invalid for general multiphase flows. Also the evolution of a DNS initialized in this manner is shown to be valid only if an as yet unproven commutation hypothesis holds true. Therefore, it is questionable to what extent DNS that are initialized in this manner constitute a direct simulation of the physical droplets. Implications of these findings for large eddy simulations of multiphase flows are also discussed. (c) 2000 American Institute of Physics

  18. Acting rehearsal in collaborative multimodal mixed reality environments

    OpenAIRE

    Steptoe, William; Normand, Jean-Marie; Oyekoya, Oyewole; Pece, Fabrizio; Giannopoulos, Elias; Tecchia, Franco; Steed, Anthony; Weyrich, Tim; Kautz, Jan; Slater, Mel

    2012-01-01

    This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which sp...

  19. Comparison of multi-modal early oral nutrition for the tolerance of oral nutrition with conventional care after major abdominal surgery: a prospective, randomized, single-blind trial.

    Science.gov (United States)

    Sun, Da-Li; Li, Wei-Ming; Li, Shu-Min; Cen, Yun-Yun; Xu, Qing-Wen; Li, Yi-Jun; Sun, Yan-Bo; Qi, Yu-Xing; Lin, Yue-Ying; Yang, Ting; Lu, Qi-Ping; Xu, Peng-Yuan

    2017-02-10

Early oral nutrition (EON) has been shown to improve recovery of gastrointestinal function, length of stay and mortality after abdominal surgery; however, early oral nutrition often fails during the first week after surgery. Here, a multi-modal early oral nutrition program is introduced to promote recovery of gastrointestinal function and tolerance of oral nutrition. Consecutive patients scheduled for abdominal surgery were randomized to the multimodal EON group or a group receiving conventional care. The primary endpoint was the time of first defecation. The secondary endpoints were outcomes and the cost-effectiveness ratio in treating infectious complications. The rate of infection-free patients was regarded as the index of effectiveness. One hundred seven patients were randomly assigned to groups. Baseline characteristics were similar for both groups. In intention-to-treat analysis, the success rate of oral nutrition during the first week after surgery in the multimodal EON group was 44 (83.0%) versus 31 (57.4%) in the conventional care group (P = 0.004). Time to first defecation, time to flatus, recovery time of bowel sounds, and prolonged postoperative ileus were all less in the multimodal EON group (P < 0.05). The cost-effectiveness ratio was also better in the multimodal early oral nutrition group (P < 0.05). The multimodal early oral nutrition program was an effective way to improve tolerance of oral nutrition during the first week after surgery, decrease the length of stay and improve cost-effectiveness after abdominal surgery. Registration number: ChiCTR-TRC-14004395. Registered 15 March 2014.

  20. MIDA - Optimizing control room performance through multi-modal design

    International Nuclear Information System (INIS)

    Ronan, A. M.

    2006-01-01

    Multi-modal interfaces can support the integration of humans with information processing systems and computational devices to maximize the unique qualities that comprise a complex system. In a dynamic environment, such as a nuclear power plant control room, multi-modal interfaces, if designed correctly, can provide complementary interaction between the human operator and the system which can improve overall performance while reducing human error. Developing such interfaces can be difficult for a designer without explicit knowledge of Human Factors Engineering principles. The Multi-modal Interface Design Advisor (MIDA) was developed as a support tool for system designers and developers. It provides design recommendations based upon a combination of Human Factors principles, a knowledge base of historical research, and current interface technologies. MIDA's primary objective is to optimize available multi-modal technologies within a human computer interface in order to balance operator workload with efficient operator performance. The purpose of this paper is to demonstrate MIDA and illustrate its value as a design evaluation tool within the nuclear power industry. (authors)

  1. [Multimodal pain therapy - implementation of process management - an attempt to consider management approaches].

    Science.gov (United States)

    Dunkel, Marion; Kramp, Melanie

    2012-07-01

The combination of medical and economical proceedings allows new perspectives in the illustration of medical workflows. Considering structural and developmental aspects, multimodal therapy programs show similarities with typical subjects of economic process systems. By pointing out the strategic appearance of the multimodal pain therapy concept, multimodal approaches can be described to some extent by using management approaches. For example, an economic process landscape can be used to represent the procedures of a multimodal pain therapy program. © Georg Thieme Verlag Stuttgart · New York.

  2. Pattern recognition of neurotransmitters using multimode sensing.

    Science.gov (United States)

    Stefan-van Staden, Raluca-Ioana; Moldoveanu, Iuliana; van Staden, Jacobus Frederick

    2014-05-30

Pattern recognition is essential in the chemical analysis of biological fluids, and reliable, sensitive methods for neurotransmitter analysis are needed. We therefore developed a method based on multimode sensing for the pattern recognition of the neurotransmitters dopamine, epinephrine and norepinephrine. Multimode sensing was performed using microsensors based on diamond paste modified with 5,10,15,20-tetraphenyl-21H,23H-porphyrine, hemin and protoporphyrin IX, in stochastic and differential pulse voltammetry modes. Optimized working conditions, a phosphate buffer solution of pH 3.01 with 0.1 mol/L KCl as supporting electrolyte, were determined using cyclic voltammetry and used in all measurements. The lowest limits of quantification were 10(-10) mol/L for dopamine and epinephrine, and 10(-11) mol/L for norepinephrine. The multimode microsensors were selective over ascorbic and uric acids, and the method facilitated reliable assay of neurotransmitters in urine samples; the pattern recognition therefore showed high reliability, allowing neurotransmitters in biological fluids to be determined at a lower level than with chromatographic methods. Sampling of the biological fluids requires only buffering (1:1, v/v) with a phosphate buffer of pH 3.01, while for chromatographic methods the sampling is laborious. According to the statistical evaluation of the results at the 99.00% confidence level, both modes can be used for pattern recognition and quantification of neurotransmitters with high reliability. The best multimode microsensor was the one based on diamond paste modified with protoporphyrin IX. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Content-based TV sports video retrieval using multimodal analysis

    Science.gov (United States)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

In this paper, we propose content-based video retrieval, that is, retrieval by semantic content. Because video data comprises multimodal information streams (visual, auditory and textual), we describe a strategy that uses multimodal analysis to automatically parse sports video. The paper first defines the basic structure of the sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. Experimental results on TV sports video of football games indicate that multimodal analysis is effective for video retrieval, allowing quick browsing of tree-like video clips or keyword input within a predefined domain.

  4. Validation of a multimodal travel simulator with travel information provision

    NARCIS (Netherlands)

    Chorus, C.G.; Molin, E.J.E.; Arentze, T.A.; Hoogendoorn, S.P.; Timmermans, H.J.P.; Wee, van G.P.

    2007-01-01

    This paper presents a computer based travel simulator for collecting data concerning the use of next-generation ATIS and their effects on traveler decision making in a multimodal travel environment. The tool distinguishes itself by presenting a completely abstract multimodal transport network, where

  5. Development of a hardware-based registration system for the multimodal medical images by USB cameras

    International Nuclear Information System (INIS)

    Iwata, Michiaki; Minato, Kotaro; Watabe, Hiroshi; Koshino, Kazuhiro; Yamamoto, Akihide; Iida, Hidehiro

    2009-01-01

There are several medical imaging scanners, and each modality visualizes a different aspect of the inside of the human body. Combining these images can improve diagnostic accuracy, and therefore several approaches to multimodal image registration have been implemented. One popular approach is to use hybrid scanners such as positron emission tomography (PET)/CT and single photon emission computed tomography (SPECT)/CT. However, these hybrid scanners are expensive and not widely available. We developed a multimodal image registration system using universal serial bus (USB) cameras, which is inexpensive and applicable to any combination of existing conventional imaging scanners. Multiple USB cameras determine the three-dimensional position of a patient during scanning. Using these positions and a rigid-body transformation, the acquired image is registered to a common coordinate system shared with the other scanner. For each scanner, a reference marker is attached to the gantry. Because the USB cameras observe the reference marker's position, their own locations can be arbitrary. To validate the system, we scanned a cardiac phantom in different positions with PET and MRI scanners. Using this system, the PET and MRI images were visually aligned, and good correlations between the PET and MRI images were obtained after registration. The results suggest that this system can be used inexpensively for multimodal image registration. (author)
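Registering an image into the common coordinate system comes down to applying a rigid-body transformation (rotation plus translation). A minimal sketch with a hypothetical rotation angle and offset; in the actual system these would be estimated from the camera-observed marker positions rather than assumed:

```python
import math

def rigid_transform(points, theta_z, translation):
    """Map 3-D points into a common coordinate frame by a rigid-body
    transform: rotation about the z-axis by theta_z, then translation."""
    c, s = math.cos(theta_z), math.sin(theta_z)
    tx, ty, tz = translation
    return [(c * x - s * y + tx, s * x + c * y + ty, z + tz)
            for x, y, z in points]

# Hypothetical marker positions seen in one scanner's frame; suppose the
# calibration found a 90-degree rotation and a shift of (2, 0, 1).
markers = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]
registered = rigid_transform(markers, math.pi / 2, (2.0, 0.0, 1.0))
```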

  6. Multimodal nanoparticle imaging agents: design and applications

    Science.gov (United States)

    Burke, Benjamin P.; Cawthorne, Christopher; Archibald, Stephen J.

    2017-10-01

    Molecular imaging, where the location of molecules or nanoscale constructs can be tracked in the body to report on disease or biochemical processes, is rapidly expanding to include combined modality or multimodal imaging. No single imaging technique can offer the optimum combination of properties (e.g. resolution, sensitivity, cost, availability). The rapid technological advances in hardware to scan patients, and software to process and fuse images, are pushing the boundaries of novel medical imaging approaches, and hand-in-hand with this is the requirement for advanced and specific multimodal imaging agents. These agents can be detected using a selection from radioisotope, magnetic resonance and optical imaging, among others. Nanoparticles offer great scope in this area as they lend themselves, via facile modification procedures, to act as multifunctional constructs. They have relevance as therapeutics and drug delivery agents that can be tracked by molecular imaging techniques with the particular development of applications in optically guided surgery and as radiosensitizers. There has been a huge amount of research work to produce nanoconstructs for imaging, and the parameters for successful clinical translation and validation of therapeutic applications are now becoming much better understood. It is an exciting time of progress for these agents as their potential is closer to being realized with translation into the clinic. The coming 5-10 years will be critical, as we will see if the predicted improvement in clinical outcomes becomes a reality. Some of the latest advances in combination modality agents are selected and the progression pathway to clinical trials analysed. This article is part of the themed issue 'Challenges for chemistry in molecular imaging'.

  7. OSM-ORIENTED METHOD OF MULTIMODAL ROUTE PLANNING

    Directory of Open Access Journals (Sweden)

    X. Li

    2015-07-01

Full Text Available With the increasing pervasiveness of basic transportation and information facilities, multimodal route planning is becoming more essential in communication and transportation, urban planning, logistics management, etc. This article describes an OSM-oriented method of multimodal route planning. Firstly, it introduces how to extract the required information from OSM data and build suitable network and storage models; then it analyses the cost criteria customarily adopted by most travellers; finally, a shortest path algorithm is used to compute the best route across multiple modes of transport.
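The final step above runs a shortest path algorithm over the combined network. A minimal sketch with a hypothetical toy network in which nodes are (place, mode) pairs and switching modes carries a transfer cost:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path over a multimodal graph whose nodes are (place, mode)
    pairs; edges between modes at the same place model transfers."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Hypothetical network (costs in minutes): walk A->B directly, or transfer
# to the bus at A, ride to B, and alight.
graph = {
    ("A", "walk"): [(("B", "walk"), 30.0), (("A", "bus"), 5.0)],
    ("A", "bus"):  [(("B", "bus"), 10.0)],
    ("B", "bus"):  [(("B", "walk"), 2.0)],
}
best = dijkstra(graph, ("A", "walk"), ("B", "walk"))
```

The bus route (5 + 10 + 2 = 17) beats the direct walk (30), so the planner picks it.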

  8. MINERVA: A multi-modality plug-in-based radiation therapy treatment planning system

    International Nuclear Information System (INIS)

    Wemple, C. A.; Wessol, D. E.; Nigg, D. W.; Cogliati, J. J.; Milvich, M.; Fredrickson, C. M.; Perkins, M.; Harkin, G. J.; Hartmann-Siantar, C. L.; Lehmann, J.; Flickinger, T.; Pletcher, D.; Yuan, A.; DeNardo, G. L.

    2005-01-01

    Researchers at the INEEL, MSU, LLNL and UCD have undertaken development of MINERVA, a patient-centric, multi-modal, radiation treatment planning system, which can be used for planning and analysing several radiotherapy modalities, either singly or combined, using common treatment planning tools. It employs an integrated, lightweight plug-in architecture to accommodate multi-modal treatment planning using standard interface components. The design also facilitates the future integration of improved planning technologies. The code is being developed with the Java programming language for inter-operability. The MINERVA design includes the image processing, model definition and data analysis modules with a central module to coordinate communication and data transfer. Dose calculation is performed by source and transport plug-in modules, which communicate either directly through the database or through MINERVA's openly published, extensible markup language (XML)-based application programmer's interface (API). All internal data are managed by a database management system and can be exported to other applications or new installations through the API data formats. A full computation path has been established for molecular-targeted radiotherapy treatment planning, with additional treatment modalities presently under development. (authors)

  9. Multimodal Biometric System Based on the Recognition of Face and Both Irises

    Directory of Open Access Journals (Sweden)

    Yeong Gon Kim

    2012-09-01

Full Text Available The performance of unimodal biometric systems (based on a single modality such as face or fingerprint) has to contend with various problems, such as illumination variation, skin condition, environmental conditions, and device variations. Therefore, multimodal biometric systems have been used to overcome the limitations of unimodal biometrics and provide high-accuracy recognition. In this paper, we propose a new multimodal biometric system based on score-level fusion of recognition of the face and both irises. Our study has the following novel features. First, the proposed device acquires images of the face and both irises simultaneously; it consists of a face camera, two iris cameras, near-infrared illuminators and cold mirrors. Second, fast and accurate iris detection is based on two circular edge detections, which are performed in the iris image on the basis of the size of the iris detected in the face image. Third, accuracy is enhanced by combining the scores for the face and both irises using a support vector machine. The experimental results show that the equal error rate for the proposed method is 0.131%, which is lower than that of face or iris recognition alone and of other fusion methods.
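The paper fuses the three match scores with a support vector machine. As a simpler illustration of score-level fusion, here is a weighted-sum combiner with assumed weights and threshold (all values hypothetical, not taken from the paper):

```python
def fuse_scores(face, left_iris, right_iris, weights=(0.4, 0.3, 0.3)):
    """Weighted-sum score-level fusion; a simple stand-in for the SVM
    combiner used in the paper. Scores are assumed normalized to [0, 1]."""
    return sum(w * s for w, s in zip(weights, (face, left_iris, right_iris)))

def accept(face, left_iris, right_iris, threshold=0.5):
    """Accept the claimed identity if the fused score clears the threshold."""
    return fuse_scores(face, left_iris, right_iris) >= threshold

# Strong iris matches can outweigh a mediocre face score.
decision = accept(0.4, 0.9, 0.8)
```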

  10. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    Science.gov (United States)

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

An atlas-based multimodal registration method for two-dimensional images with discrepancy structures is proposed in this paper. An atlas is utilized to complement the discrepant structure information in multimodal medical images. The scheme includes three steps: floating-image-to-atlas registration, atlas-to-reference-image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. Registration performance was measured by the squared sum of intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: schematic diagram of the atlas-based multimodal registration method.
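The evaluation metric above, the squared sum of intensity differences (SSD), can be sketched directly; a perfectly registered pair scores zero and any misalignment scores higher:

```python
def ssd(image_a, image_b):
    """Squared sum of intensity differences between two equally sized
    images given as nested lists; lower values mean better alignment."""
    return float(sum((a - b) ** 2
                     for row_a, row_b in zip(image_a, image_b)
                     for a, b in zip(row_a, row_b)))

# Tiny 2x2 example images.
reference = [[0, 1], [2, 3]]
aligned   = [[0, 1], [2, 3]]   # perfectly registered copy
shifted   = [[1, 0], [3, 2]]   # columns swapped, i.e. misaligned
```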

  11. A new piezoelectric energy harvesting design concept: multimodal energy harvesting skin.

    Science.gov (United States)

    Lee, Soobum; Youn, Byeng D

    2011-03-01

This paper presents an advanced design concept for piezoelectric energy harvesting (EH), referred to as multimodal EH skin. This design exploits multimodal vibration and enhances power harvesting efficiency. The multimodal EH skin is an extension of our previous work, EH skin, an innovative design paradigm for a piezoelectric energy harvester that combines a vibrating skin structure and an additional thin piezoelectric layer in one device. A computational (finite element) model of the multilayered assembly - the vibrating skin structure and piezoelectric layer - is constructed, and the optimal topology and/or shape of the piezoelectric layer is found for maximum power generation from multiple vibration modes. The design rationale for the multimodal EH skin has two steps: design of the piezoelectric material distribution and of the external resistors. In the material design step, the piezoelectric material is segmented by inflection lines from the multiple vibration modes of interest to minimize voltage cancellation; the inflection lines are detected using the voltage phase. In the external resistor design step, resistor values are found for each segment to maximize power output. The presented design concept, which can be applied to any engineering system with multimodal harmonic-vibrating skins, was applied to two case studies: an aircraft skin and a power transformer panel. The excellent performance of multimodal EH skin was demonstrated, with larger power generation than either EH skin without segmentation or unimodal EH skin.
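The segmentation-by-inflection-lines idea can be illustrated on a sampled mode shape. This sketch locates sign changes of the discrete curvature, which is a geometric stand-in for the voltage-phase detection the paper describes:

```python
def inflection_indices(mode_shape):
    """Indices i where the discrete curvature (second difference) of a
    sampled mode shape changes sign between i and i+1; segmenting the
    piezoelectric layer there minimizes voltage cancellation between
    regions of opposite strain."""
    curv = [mode_shape[i - 1] - 2 * mode_shape[i] + mode_shape[i + 1]
            for i in range(1, len(mode_shape) - 1)]
    return [i + 1 for i in range(len(curv) - 1)
            if curv[i] * curv[i + 1] < 0]

# A cubic has a single inflection; sampled off-grid so the curvature
# never lands exactly on zero.
shape = [x ** 3 for x in (-2.5, -1.5, -0.5, 0.5, 1.5, 2.5)]
cuts = inflection_indices(shape)  # boundary falls between samples 2 and 3
```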

  12. Increased discriminability of authenticity from multimodal laughter is driven by auditory information.

    Science.gov (United States)

    Lavan, Nadine; McGettigan, Carolyn

    2017-10-01

    We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.

  13. Modeling most likely pathways for smuggling radioactive and special nuclear materials on a worldwide multi-modal transportation network

    Energy Technology Data Exchange (ETDEWEB)

    Saeger, Kevin J [Los Alamos National Laboratory; Cuellar, Leticia [Los Alamos National Laboratory

    2010-10-28

Nuclear weapons proliferation is an existing and growing worldwide problem. To help devise strategies and support decisions to interdict the transport of nuclear material, we developed the Pathway Analysis, Threat Response and Interdiction Options Tool (PATRIOT), which provides an analytical approach for evaluating the probability that an adversary smuggling radioactive or special nuclear material will be detected during transit. We incorporate a global, multi-modal transportation network, explicit representation of designed and serendipitous detection opportunities, and multiple threat devices, material types, and shielding levels. This paper presents the general structure of PATRIOT, and focuses on the theoretical framework used to model the reliabilities of all network components, which is used to predict the most likely pathways to the target.
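Finding a most likely pathway over such a network reduces to a shortest-path problem: taking the negative log of each edge's probability turns a product to be maximized into a sum to be minimized, so any shortest-path routine applies. A sketch with hypothetical per-leg non-detection probabilities:

```python
import math

def path_cost(edge_probs):
    """-log cost of a pathway: minimizing the summed cost is equivalent
    to maximizing the product of per-edge non-detection probabilities."""
    return sum(-math.log(p) for p in edge_probs)

# Two candidate routes (probabilities are illustrative, not from PATRIOT).
route_a = [0.9, 0.8]          # product 0.72
route_b = [0.95, 0.95, 0.7]   # product ~0.63
preferred = min((route_a, route_b), key=path_cost)
```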

  14. Interactivity in Educational Apps for Young Children: A Multimodal Analysis

    Science.gov (United States)

    Blitz-Raith, Alexandra H.; Liu, Jianxin

    2017-01-01

    Interactivity is an important indicator of an educational app's reception. Since most educational apps are multimodal, it justifies a methodological initiative to understand meaningful involvement of multimodality in enacting and even amplifying interactivity in an educational app. Yet research so far has largely concentrated on algorithm…

  15. Multimodal Desktop Interaction: The Face –Object-Gesture–Voice Example

    DEFF Research Database (Denmark)

    Vidakis, Nikolas; Vlasopoulos, Anastasios; Kounalakis, Tsampikos

    2013-01-01

    This paper presents a natural user interface system based on multimodal human computer interaction, which operates as an intermediate module between the user and the operating system. The aim of this work is to demonstrate a multimodal system which gives users the ability to interact with desktop...

  16. Attention-deficit hyperactivity disorder, multimodal treatment, and longitudinal outcome: evidence, paradox, and challenge.

    Science.gov (United States)

    Hinshaw, Stephen P; Arnold, L Eugene

    2015-01-01

    Given major increases in the diagnosis of attention-deficit hyperactivity disorder (ADHD) and in rates of medication for this condition, we carefully examine evidence for effects of single versus multimodal (i.e., combined medication and psychosocial/behavioral) interventions for ADHD. Our primary data source is the Multimodal Treatment Study of Children with ADHD (MTA), a 14-month, randomized clinical trial in which intensive behavioral, medication, and multimodal treatment arms were contrasted with one another and with community intervention (treatment-as-usual), regarding outcome domains of ADHD symptoms, comorbidities, and core functional impairments. Although initial reports emphasized the superiority of well-monitored medication for symptomatic improvement, reanalyses and reappraisals have highlighted (1) the superiority of combination treatment for composite outcomes and for domains of functional impairment (e.g., academic achievement, social skills, parenting practices); (2) the importance of considering moderator and mediator processes underlying differential patterns of outcome, including comorbid subgroups and improvements in family discipline style during the intervention period; (3) the emergence of side effects (e.g., mild growth suppression) in youth treated with long-term medication; and (4) the diminution of medication's initial superiority once the randomly assigned treatment phase turned into naturalistic follow-up. The key paradox is that while ADHD clearly responds to medication and behavioral treatment in the short term, evidence for long-term effectiveness remains elusive. We close with discussion of future directions and a call for greater understanding of relevant developmental processes in the attempt to promote optimal, generalized, and lasting treatments for this important and impairing neurodevelopmental disorder. © 2014 John Wiley & Sons, Ltd.

  17. Internal representations of temporal statistics and feedback calibrate motor-sensory interval timing.

    Directory of Open Access Journals (Sweden)

    Luigi Acerbi

Full Text Available Humans have been shown to adapt to the temporal statistics of timing tasks so as to optimize the accuracy of their responses, in agreement with the predictions of Bayesian integration. This suggests that they build an internal representation both of the experimentally imposed distribution of time intervals (the prior) and of the error (the loss function). The responses of a Bayesian ideal observer depend crucially on these internal representations, which have previously been studied only for simple distributions. To study the nature of these representations, we asked subjects to reproduce time intervals drawn from underlying temporal distributions of varying complexity, from uniform to highly skewed or bimodal, while also varying the error mapping that determined the performance feedback. Interval reproduction times were affected by both the distribution and the feedback, in good agreement with a performance-optimizing Bayesian observer and actor model. Bayesian model comparison highlighted that subjects were integrating the provided feedback and represented the experimental distribution with a smoothed approximation. A nonparametric reconstruction of the subjective priors from the data shows that they are generally in agreement with the true distributions up to third-order moments, but with systematically heavier tails. In particular, higher-order statistical features (kurtosis, multimodality) seem much harder to acquire. Our findings suggest that humans have only minor constraints on learning lower-order statistical properties of unimodal (including peaked and skewed) distributions of time intervals under the guidance of corrective feedback, and that their behavior is well explained by Bayesian decision theory.
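In the simplest (Gaussian) case, the Bayesian observer's estimate is a precision-weighted average of the prior mean and the noisy measurement, which produces the pull toward the centre of the experimental distribution that such models predict. A sketch with assumed parameter values:

```python
def bayes_estimate(prior_mean, prior_var, measurement, noise_var):
    """Posterior mean for a Gaussian prior combined with a Gaussian-noise
    measurement: a precision-weighted average that shrinks responses
    toward the mean of the interval distribution."""
    w = prior_var / (prior_var + noise_var)  # weight given to the measurement
    return (1.0 - w) * prior_mean + w * measurement

# Prior centred on 600 ms; a noisy reading of 700 ms is shrunk toward it
# (all numbers hypothetical).
estimate = bayes_estimate(600.0, 100.0 ** 2, 700.0, 50.0 ** 2)
```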

  18. Multicomponent, peptide-targeted glycol chitosan nanoparticles containing ferrimagnetic iron oxide nanocubes for bladder cancer multimodal imaging

    Directory of Open Access Journals (Sweden)

    Key J

    2016-08-01

Full Text Available Jaehong Key,1,2 Deepika Dhawan,3 Christy L Cooper,3,4 Deborah W Knapp,3 Kwangmeyung Kim,5 Ick Chan Kwon,5 Kuiwon Choi,5 Kinam Park,1,6 Paolo Decuzzi,7–9 James F Leary1,3,4 1Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA; 2Department of Biomedical Engineering, Yonsei University, Wonju, Republic of Korea; 3School of Veterinary Medicine-Department of Basic Medical Sciences, Purdue University, West Lafayette, IN, USA; 4Birck Nanotechnology Center at Discovery Park, Purdue University, West Lafayette, IN, USA; 5Biomedical Research Center, Korea Institute of Science and Technology, Sungbook-Gu, Seoul, Republic of Korea; 6Department of Pharmaceutics, Purdue University, West Lafayette, IN, USA; 7Department of Translational Imaging, 8Department of Nanomedicine, Houston Methodist Research Institute, Houston, TX, USA; 9Laboratory of Nanotechnology for Precision Medicine, Fondazione Istituto Italiano di Tecnologia (IIT), Genova, Italy Abstract: While current imaging modalities, such as magnetic resonance imaging (MRI), computed tomography, and positron emission tomography, play an important role in detecting tumors in the body, no single imaging modality possesses all the functions needed for complete diagnostic imaging, such as spatial resolution, signal sensitivity, and tissue penetration depth. For this reason, multimodal imaging strategies have become promising tools for advanced biomedical research and cancer diagnostics and therapeutics. In designing multimodal nanoparticles, the physicochemical properties of the nanoparticles should be engineered so that they successfully accumulate at the tumor site and minimize nonspecific uptake by other organs. Finely altering the nano-scale properties can dramatically change the biodistribution and tumor accumulation of nanoparticles in the body. In this study, we engineered multimodal nanoparticles for both MRI, by using ferrimagnetic nanocubes (NCs), and near infrared fluorescence imaging

  19. Multimodality imaging of the postoperative shoulder

    Energy Technology Data Exchange (ETDEWEB)

    Woertler, Klaus [Technische Universitaet Muenchen, Department of Radiology, Munich (Germany)

    2007-12-15

    Multimodality imaging of the postoperative shoulder includes radiography, magnetic resonance (MR) imaging, MR arthrography, computed tomography (CT), CT arthrography, and ultrasound. Target-oriented evaluation of the postoperative shoulder necessitates familiarity with surgical techniques, their typical complications and sources of failure, knowledge of normal and abnormal postoperative findings, awareness of the advantages and weaknesses with the different radiologic techniques, and clinical information on current symptoms and function. This article reviews the most commonly used surgical procedures for treatment of anterior glenohumeral instability, lesions of the labral-bicipital complex, subacromial impingement, and rotator cuff lesions and highlights the significance of imaging findings with a view to detection of recurrent lesions and postoperative complications in a multimodality approach. (orig.)

  20. New Technologies, New Possibilities for the Arts and Multimodality in English Language Arts

    Science.gov (United States)

    Williams, Wendy R.

    2014-01-01

    This article discusses the arts, multimodality, and new technologies in English language arts. It then turns to the example of the illuminated text--a multimodal book report consisting of animated text, music, and images--to consider how art, multimodality, and technology can work together to support students' reading of literature and inspire…

  1. A group property for the coherent state representation of fermionic squeezing operators

    Science.gov (United States)

    Fan, Hong-yi; Li, Chao

    2004-06-01

For the two-mode fermionic squeezing operators we find that their coherent-state projection operator representation makes up a faithful representation, which is homomorphic to an SO(4) group, even though the fermionic coherent states are not mutually orthogonal. Thus the result of successively operating with many fermionic squeezing operators on a state is equivalent to a single operation. In the fermionic coherent state representation, the fermionic squeezing operators are mappings of orthogonal transformations in Grassmann-number pseudo-classical space.

  2. A group property for the coherent state representation of fermionic squeezing operators

    International Nuclear Information System (INIS)

    Fan Hongyi; Li Chao

    2004-01-01

For the two-mode fermionic squeezing operators we find that their coherent-state projection operator representation makes up a faithful representation, which is homomorphic to an SO(4) group, even though the fermionic coherent states are not mutually orthogonal. Thus the result of successively operating with many fermionic squeezing operators on a state is equivalent to a single operation. In the fermionic coherent state representation, the fermionic squeezing operators are mappings of orthogonal transformations in Grassmann-number pseudo-classical space.

  3. Object-based attention: strength of object representation and attentional guidance.

    Science.gov (United States)

    Shomstein, Sarah; Behrmann, Marlene

    2008-01-01

    Two or more features belonging to a single object are identified more quickly and more accurately than are features belonging to different objects--a finding attributed to sensory enhancement of all features belonging to an attended or selected object. However, several recent studies have suggested that this "single-object advantage" may be a product of probabilistic and configural strategic prioritizations rather than of object-based perceptual enhancement per se, challenging the underlying mechanism that is thought to give rise to object-based attention. In the present article, we further explore constraints on the mechanisms of object-based selection by examining the contribution of the strength of object representations to the single-object advantage. We manipulated factors such as exposure duration (i.e., preview time) and salience of configuration (i.e., objects). Varying preview time changes the magnitude of the object-based effect, so that if there is ample time to establish an object representation (i.e., preview time of 1,000 msec), then both probability and configuration (i.e., objects) guide attentional selection. If, however, insufficient time is provided to establish a robust object-based representation, then only probabilities guide attentional selection. Interestingly, at a short preview time of 200 msec, when the two objects were sufficiently different from each other (i.e., different colors), both configuration and probability guided attention selection. These results suggest that object-based effects can be explained both in terms of strength of object representations (established at longer exposure durations and by pictorial cues) and probabilistic contingencies in the visual environment.

  4. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    Science.gov (United States)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
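The decomposition can be sketched numerically: a multi-relaxation spectrum evaluated as the sum of N single-relaxation terms, each defined by a relaxation strength and a characteristic frequency. This uses a schematic Debye-type form; the paper derives the exact shape from the frequency-dependent effective specific heat:

```python
def relaxation_spectrum(freq, processes):
    """Multi-relaxation absorption spectrum as the sum of single-relaxation
    terms; each process is a (strength, characteristic_frequency) pair.
    A single term peaks at its characteristic frequency with value
    strength / 2."""
    total = 0.0
    for eps, f_r in processes:
        r = freq / f_r
        total += eps * r / (1.0 + r * r)
    return total

# Two hypothetical relaxation processes with well-separated frequencies;
# near 1e4 Hz the spectrum is dominated by the first process's peak.
procs = [(0.02, 1e4), (0.01, 1e6)]
peak1 = relaxation_spectrum(1e4, procs)
```

Reconstruction then amounts to fitting the (strength, frequency) pairs so this sum matches absorption and sound-speed measurements at 2N frequencies.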

  5. Multi-modal RGB–Depth–Thermal Human Body Segmentation

    DEFF Research Database (Denmark)

    Palmero, Cristina; Clapés, Albert; Bahnsen, Chris

    2016-01-01

This work addresses the problem of human body segmentation from multi-modal visual cues as a first stage of automatic human behavior analysis. We propose a novel RGB-Depth-Thermal dataset along with a multi-modal segmentation baseline. The several modalities are registered using a calibration... to other state-of-the-art methods, obtaining an overlap above 75% on the novel dataset when compared to the manually annotated ground-truth of human segmentations.

  6. Multimodal Strategies of Theorization

    DEFF Research Database (Denmark)

    Cartel, Melodie; Colombero, Sylvain; Boxenbaum, Eva

This paper examines the role of multimodal strategies in processes of theorization. Empirically, we investigate the theorization process of a highly disruptive innovation in the history of architecture: reinforced concrete. Relying on archival data from a dominant French architectural journal from... with well-known rhetorical strategies and develop a process model of theorization.

  7. Multimodality molecular imaging - from target description to clinical studies

    International Nuclear Information System (INIS)

    Schober, O.; Rahbar, K.; Riemann, B.

    2009-01-01

This highlight lecture was presented at the closing session of the Annual Congress of the European Association of Nuclear Medicine (EANM) in Munich on 15 October 2008. The Congress was a great success: there were more than 4,000 participants, and 1,597 abstracts were submitted. Of these, 1,387 were accepted for oral or poster presentation, a rejection rate of 14%. For this article a selection was made from the 100 of the 500 lectures that received the highest scores from the scientific review panel. This article outlines the major findings and trends at the EANM 2008, and is only a brief summary of the large number of outstanding abstracts presented. Among the great number of oral and poster presentations covering nearly all fields of nuclear medicine, some headlines must be singled out that highlight the development of nuclear medicine in the 21st century. This review focuses on the increasing impact of molecular and multimodality imaging in the field of nuclear medicine. In addition, the question may be asked as to whether the whole spectrum of nuclear medicine is nothing other than molecular imaging and therapy. Furthermore, molecular imaging will, and must, advance towards multimodality imaging. Against this background, the review is structured according to the individual steps of molecular imaging, i.e. from target description to clinical studies. The following topics are addressed: targets, radiochemistry and radiopharmacy, devices and computer science, animals and preclinical evaluations, and patients and clinical evaluations. (orig.)

  8. Comprehensive Context Recognizer Based on Multimodal Sensors in a Smartphone

    Directory of Open Access Journals (Sweden)

    Sungyoung Lee

    2012-09-01

    Full Text Available Recent developments in smartphones have increased the processing capabilities and equipped these devices with a number of built-in multimodal sensors, including accelerometers, gyroscopes, GPS interfaces, Wi-Fi access, and proximity sensors. Despite the fact that numerous studies have investigated the development of user-context aware applications using smartphones, these applications are currently only able to recognize simple contexts using a single type of sensor. Therefore, in this work, we introduce a comprehensive approach for context aware applications that utilizes the multimodal sensors in smartphones. The proposed system is not only able to recognize different kinds of contexts with high accuracy, but it is also able to optimize the power consumption since power-hungry sensors can be activated or deactivated at appropriate times. Additionally, the system is able to recognize activities wherever the smartphone is on a human’s body, even when the user is using the phone to make a phone call, manipulate applications, play games, or listen to music. Furthermore, we also present a novel feature selection algorithm for the accelerometer classification module. The proposed feature selection algorithm helps select good features and eliminates bad features, thereby improving the overall accuracy of the accelerometer classifier. Experimental results show that the proposed system can classify eight activities with an accuracy of 92.43%.
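The idea of keeping informative features and eliminating flat ones can be illustrated with a minimal variance-threshold selector (a simple stand-in; the paper's feature selection algorithm is more elaborate):

```python
def select_features(samples, threshold=0.01):
    """Keep the indices of features whose variance across the sample
    windows exceeds a threshold; near-constant features carry little
    information for the accelerometer classifier and are dropped."""
    n = len(samples)
    kept = []
    for j in range(len(samples[0])):
        col = [s[j] for s in samples]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        if var > threshold:
            kept.append(j)
    return kept

# Feature 0 is constant (uninformative); feature 1 varies with activity
# (values are hypothetical accelerometer statistics per window).
windows = [(1.0, 0.2), (1.0, 0.9), (1.0, 0.5)]
informative = select_features(windows)
```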

  9. Multimodal treatment for unresectable pancreatic cancer

    International Nuclear Information System (INIS)

    Katayama, Kanji; Iida, Atsushi; Fujita, Takashi; Kobayashi, Taizo; Shinmoto, Syuichi; Hirose, Kazuo; Yamaguchi, Akio; Yoshida, Masanori

    1998-01-01

    In order to improve prognosis and quality of life (QOL), multimodal treatment was performed for unresectable pancreatic cancer. Bypass surgery with intraoperative irradiation (IOR) was carried out for unresectable pancreatic cancer. After surgery, patients were treated with a combination of CDDP (25 mg) and MMC (4 mg) given intravenously, continuous intravenous infusion of 5-FU (250 mg over 24 hours), external radiation with high-voltage X-rays (1.5 Gy per fraction, 4 times a week, and 3 Gy per fraction during hyperthermia), and hyperthermia using the Thermotron RF-8. Six of the 13 patients who received hyperthermia at over 40°C achieved a partial response (PR); their survival periods were 22, 21, 19, 18, 11 and 8 months, and they were able to return to work. In all patients with pain, the symptom was abolished or reduced. Survival periods with the multimodal treatment were longer than those with bypass surgery alone or with resection of curability C. The multimodal treatment combining radiation, hyperthermia and surgery is more useful than pancreatectomy for the relief of pain and the improvement of QOL, and an improvement in prognosis can also be expected. Hyperthermia plays an important role in the effect of this treatment. (K.H.)

  10. Multimodal treatment for unresectable pancreatic cancer

    Energy Technology Data Exchange (ETDEWEB)

    Katayama, Kanji; Iida, Atsushi; Fujita, Takashi; Kobayashi, Taizo; Shinmoto, Syuichi; Hirose, Kazuo; Yamaguchi, Akio; Yoshida, Masanori [Fukui Medical School, Matsuoka (Japan)

    1998-07-01

    In order to improve prognosis and quality of life (QOL), multimodal treatment was performed for unresectable pancreatic cancer. Bypass surgery with intraoperative irradiation (IOR) was carried out for unresectable pancreatic cancer. After surgery, patients were treated with a combination of CDDP (25 mg) and MMC (4 mg) given intravenously, continuous intravenous infusion of 5-FU (250 mg over 24 hours), external radiation with high-voltage X-rays (1.5 Gy per fraction, 4 times a week, and 3 Gy per fraction during hyperthermia), and hyperthermia using the Thermotron RF-8. Six of the 13 patients who received hyperthermia at over 40°C achieved a partial response (PR); their survival periods were 22, 21, 19, 18, 11 and 8 months, and they were able to return to work. In all patients with pain, the symptom was abolished or reduced. Survival periods with the multimodal treatment were longer than those with bypass surgery alone or with resection of curability C. The multimodal treatment combining radiation, hyperthermia and surgery is more useful than pancreatectomy for the relief of pain and the improvement of QOL, and an improvement in prognosis can also be expected. Hyperthermia plays an important role in the effect of this treatment. (K.H.)

  11. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks

    Science.gov (United States)

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-01-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms, indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model and paralleled the emergence of neural scene size representations. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703
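
Multivariate pattern analysis of MEG data of the kind described above is typically run independently at each time point, yielding a decoding-accuracy time course. As a rough sketch of that general idea (not the authors' actual pipeline; the nearest-class-mean classifier and all data shapes are illustrative assumptions):

```python
import numpy as np

def timecourse_decoding(X_train, y_train, X_test, y_test):
    """Time-resolved decoding. X arrays have shape (trials, sensors,
    timepoints). At each time point, classify test trials by the nearest
    class-mean sensor pattern; return accuracy per time point."""
    n_times = X_train.shape[2]
    classes = np.unique(y_train)
    acc = np.zeros(n_times)
    for t in range(n_times):
        # one centroid (mean sensor pattern) per class at this time point
        centroids = np.stack([X_train[y_train == c, :, t].mean(axis=0)
                              for c in classes])
        # Euclidean distance of every test trial to every centroid
        d = np.linalg.norm(X_test[:, :, t][:, None, :] - centroids[None],
                           axis=2)
        pred = classes[d.argmin(axis=1)]
        acc[t] = (pred == y_test).mean()
    return acc
```

Above-chance accuracy emerging at a given latency (e.g. ~250 ms for scene size) is then taken as evidence that the brain carries that information at that time.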

  12. Oscillatory Activity in the Infant Brain and the Representation of Small Numbers.

    Science.gov (United States)

    Leung, Sumie; Mareschal, Denis; Rowsell, Renee; Simpson, David; Iaria, Leon; Grbic, Amanda; Kaufman, Jordy

    2016-01-01

    Gamma-band oscillatory activity (GBA) is an established neural signature of sustained occluded object representation in infants and adults. However, it is not yet known whether the magnitude of GBA in the infant brain reflects the quantity of occluded items held in memory. To examine this, we compared GBA of 6-8 month-old infants during occlusion periods after the representation of two objects vs. that of one object. We found that maintaining a representation of two objects during occlusion resulted in significantly greater GBA relative to maintaining a single object. Further, this enhancement was located in the right occipital region, which is consistent with previous object representation research in adults and infants. We conclude that enhanced GBA reflects neural processes underlying infants' representation of small numbers.

  13. Development of a micromirror-scanned multimodal CARS miniaturized microscope for the in vivo study of spinal cord disorders

    Science.gov (United States)

    Murugkar, Sangeeta; Smith, Brett; Naji, Majid; Brideau, Craig; Stys, Peter; Anis, Hanan

    2011-03-01

    We discuss the design and implementation of a novel multimodal coherent anti-Stokes Raman scattering (CARS) miniaturized microscope for imaging of injured and recovering spinal cords in a single living animal. We demonstrate for the first time, the use of a biaxial microelectromechanical system (MEMS) mirror for scanning and diffraction limited multiple lens miniaturized objective for exciting a CARS signal. The miniaturized microscope design includes light delivery using a large mode area photonic crystal fiber (PCF), and multimode fiber for collection of the nonlinear optical signal. The basic design concept, major engineering challenges, solutions, and preliminary results are presented. We demonstrate CARS and two photon excitation fluorescence microscopy in a benchtop setup with the miniaturized optics and MEMS scanning. The light source is based on a single femtosecond laser (pump beam) and a supercontinuum generated in a nonlinear PCF (Stokes beam). This is coupled using free space optics onto the surface of a resonantly driven two dimensional scanning MEMS mirror that scans the excitation light in a Lissajous pattern. The novel design of the miniaturized microscope is expected to provide significant new information on the pathogenesis of demyelinating diseases such as Multiple Sclerosis and Spinal Cord Injury.

  14. Multimodal quantitative phase and fluorescence imaging of cell apoptosis

    Science.gov (United States)

    Fu, Xinye; Zuo, Chao; Yan, Hao

    2017-06-01

    Fluorescence microscopy, utilizing fluorescence labeling, has the capability to observe intracellular changes that transmitted- and reflected-light microscopy techniques cannot resolve. However, the parts without fluorescence labeling are not imaged, so processes occurring simultaneously in those parts cannot be revealed. Moreover, fluorescence imaging is 2D, so information along the depth axis is missing, and the information in the labeled parts is therefore also incomplete. Quantitative phase imaging, on the other hand, is capable of imaging cells in 3D in real time through phase calculation. However, its resolution is limited by optical diffraction, and it cannot observe intracellular changes below 200 nanometers. In this work, fluorescence imaging and quantitative phase imaging are combined to build a multimodal imaging system. Such a system has the capability to simultaneously observe detailed intracellular phenomena and 3D cell morphology. In this study the proposed multimodal imaging system is used to observe cell behavior during apoptosis. The aim is to highlight the limitations of fluorescence microscopy and to point out the advantages of multimodal quantitative phase and fluorescence imaging. The proposed multimodal quantitative phase imaging could be further applied in cell-related biomedical research, such as tumor studies.

  15. Single and Multiple Object Tracking Using a Multi-Feature Joint Sparse Representation.

    Science.gov (United States)

    Hu, Weiming; Li, Wei; Zhang, Xiaoqin; Maybank, Stephen

    2015-04-01

    In this paper, we propose a tracking algorithm based on a multi-feature joint sparse representation. The templates for the sparse representation can include pixel values, textures, and edges. In the multi-feature joint optimization, noise or occlusion is dealt with using a set of trivial templates. A sparse weight constraint is introduced to dynamically select the relevant templates from the full set of templates. A variance ratio measure is adopted to adaptively adjust the weights of different features. The multi-feature template set is updated adaptively. We further propose an algorithm for tracking multi-objects with occlusion handling based on the multi-feature joint sparse reconstruction. The observation model based on sparse reconstruction automatically focuses on the visible parts of an occluded object by using the information in the trivial templates. The multi-object tracking is simplified into a joint Bayesian inference. The experimental results show the superiority of our algorithm over several state-of-the-art tracking algorithms.
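
Representing a test sample as a sparse linear combination of templates, as in the tracker above, is commonly cast as an l1-regularised least-squares problem. A minimal single-feature sketch (not the authors' multi-feature joint solver with trivial templates) using iterative soft-thresholding (ISTA):

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Approximately solve min_a 0.5*||x - D a||^2 + lam*||a||_1.
    D: (dim, n_templates) template dictionary, x: (dim,) observation."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)          # gradient of the quadratic term
        z = a - grad / L                  # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a
```

The l1 penalty drives most template coefficients to exactly zero, so the observation is explained by only a few relevant templates, which is what makes dynamic template selection possible.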

  16. A multimodal image sensor system for identifying water stress in grapevines

    Science.gov (United States)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The multi-modal sensor system was equipped with one 3CCD camera (three channels: R, G, and IR). The sensor can capture and analyze the grape canopy from its reflectance features and identify the different water stress levels. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in the near-infrared, green and red spectral bands. Based on analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate the conclusion.
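
With red and near-infrared channels available from a 3CCD camera, a standard per-pixel reflectance feature for canopy condition is a normalized difference index. A minimal sketch (the paper's actual features may differ, and the threshold below is invented purely for illustration):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, computed per pixel:
    (NIR - R) / (NIR + R). Values near 1 indicate dense healthy canopy."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

def stress_fraction(ndvi_map, threshold=0.4):
    """Fraction of pixels below a (hypothetical) NDVI stress threshold."""
    return float((ndvi_map < threshold).mean())
```

In practice the threshold would be calibrated against ground-truth water status measurements rather than fixed a priori.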

  17. Multiscale climate emulator of multimodal wave spectra: MUSCLE-spectra

    Science.gov (United States)

    Rueda, Ana; Hegermiller, Christie A.; Antolinez, Jose A. A.; Camus, Paula; Vitousek, Sean; Ruggiero, Peter; Barnard, Patrick L.; Erikson, Li H.; Tomás, Antonio; Mendez, Fernando J.

    2017-02-01

    Characterization of multimodal directional wave spectra is important for many offshore and coastal applications, such as marine forecasting, coastal hazard assessment, and design of offshore wave energy farms and coastal structures. However, the multivariate and multiscale nature of wave climate variability makes this complex problem tractable using computationally expensive numerical models. So far, the skill of statistical-downscaling model-based parametric (unimodal) wave conditions is limited in large ocean basins such as the Pacific. The recent availability of long-term directional spectral data from buoys and wave hindcast models allows for development of stochastic models that include multimodal sea-state parameters. This work introduces a statistical downscaling framework based on weather types to predict multimodal wave spectra (e.g., significant wave height, mean wave period, and mean wave direction from different storm systems, including sea and swells) from large-scale atmospheric pressure fields. For each weather type, variables of interest are modeled using the categorical distribution for the sea-state type, the Generalized Extreme Value (GEV) distribution for wave height and wave period, a multivariate Gaussian copula for the interdependence between variables, and a Markov chain model for the chronology of daily weather types. We apply the model to the southern California coast, where local seas and swells from both the Northern and Southern Hemispheres contribute to the multimodal wave spectrum. This work allows attribution of particular extreme multimodal wave events to specific atmospheric conditions, expanding knowledge of time-dependent, climate-driven offshore and coastal sea-state conditions that have a significant influence on local nearshore processes, coastal morphology, and flood hazards.
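
Two of the model components named above, the Markov chain over daily weather types and GEV-distributed sea-state variables, can be sketched as follows (the transition matrix and GEV parameters below are illustrative placeholders, not fitted values from the paper):

```python
import numpy as np

def sample_weather_types(P, n_days, start=0, rng=None):
    """Simulate a daily weather-type chronology from a row-stochastic
    transition matrix P (P[i, j] = prob. of moving from type i to j)."""
    if rng is None:
        rng = np.random.default_rng(0)
    states = [start]
    for _ in range(n_days - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

def sample_gev(mu, sigma, xi, size, rng=None):
    """Draw from a Generalized Extreme Value distribution by inverting
    its CDF (shape xi != 0): x = mu + sigma*((-ln U)^(-xi) - 1)/xi."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.uniform(size=size)
    return mu + sigma * ((-np.log(u)) ** (-xi) - 1.0) / xi
```

A full emulator would fit one set of GEV and copula parameters per weather type and then draw sea-state parameters conditional on the simulated daily chronology.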

  18. Advanced Multimodal Solutions for Information Presentation

    Science.gov (United States)

    Wenzel, Elizabeth M.; Godfroy-Cooper, Martine

    2018-01-01

    High-workload, fast-paced, and degraded sensory environments are the likeliest candidates to benefit from multimodal information presentation. For example, during EVA (Extra-Vehicular Activity) and telerobotic operations, the sensory restrictions associated with a space environment provide a major challenge to maintaining the situation awareness (SA) required for safe operations. Multimodal displays hold promise to enhance situation awareness and task performance by utilizing different sensory modalities and maximizing their effectiveness based on appropriate interaction between modalities. During EVA, the visual and auditory channels are likely to be the most utilized with tasks such as monitoring the visual environment, attending visual and auditory displays, and maintaining multichannel auditory communications. Previous studies have shown that compared to unimodal displays (spatial auditory or 2D visual), bimodal presentation of information can improve operator performance during simulated extravehicular activity on planetary surfaces for tasks as diverse as orientation, localization or docking, particularly when the visual environment is degraded or workload is increased. Tactile displays offer a third sensory channel that may both offload information processing effort and provide a means to capture attention when urgently required. For example, recent studies suggest that including tactile cues may result in increased orientation and alerting accuracy, improved task response time and decreased workload, as well as provide self-orientation cues in microgravity on the ISS (International Space Station). An important overall issue is that context-dependent factors like task complexity, sensory degradation, peripersonal vs. extrapersonal space operations, workload, experience level, and operator fatigue tend to vary greatly in complex real-world environments and it will be difficult to design a multimodal interface that performs well under all conditions.

  19. Obstacle traversal and self-righting of bio-inspired robots reveal the physics of multi-modal locomotion

    Science.gov (United States)

    Li, Chen; Fearing, Ronald; Full, Robert

    Most animals move in nature in a variety of locomotor modes. For example, to traverse obstacles like dense vegetation, cockroaches can climb over, push across, reorient their bodies to maneuver through slits, or even transition among these modes forming diverse locomotor pathways; if flipped over, they can also self-right using wings or legs to generate body pitch or roll. By contrast, most locomotion studies have focused on a single mode such as running, walking, or jumping, and robots are still far from capable of life-like, robust, multi-modal locomotion in the real world. Here, we present two recent studies using bio-inspired robots, together with new locomotion energy landscapes derived from locomotor-environment interaction physics, to begin to understand the physics of multi-modal locomotion. (1) Our experiment of a cockroach-inspired legged robot traversing grass-like beam obstacles reveals that, with a terradynamically "streamlined" rounded body like that of the insect, robot traversal becomes more probable by accessing locomotor pathways that overcome lower potential energy barriers. (2) Our experiment of a cockroach-inspired self-righting robot further suggests that body vibrations are crucial for exploring locomotion energy landscapes and reaching lower barrier pathways. Finally, we posit that our new framework of locomotion energy landscapes holds promise to better understand and predict multi-modal biological and robotic movement.

  20. The semiotics of typography in literary texts. A multimodal approach

    DEFF Research Database (Denmark)

    Nørgaard, Nina

    2009-01-01

    to multimodal discourse proposed, for instance, by Kress & Van Leeuwen (2001) and Baldry & Thibault (2006), and, more specifically, the multimodal approach to typography suggested by Van Leeuwen (2005b; 2006), in order to sketch out a methodological framework applicable to the description and analysis...... of the semiotic potential of typography in literary texts....

  1. A Multimodal Search Engine for Medical Imaging Studies.

    Science.gov (United States)

    Pinho, Eduardo; Godinho, Tiago; Valente, Frederico; Costa, Carlos

    2017-02-01

    The use of digital medical imaging systems in healthcare institutions has increased significantly, and the large amounts of data in these systems have led to the conception of powerful support tools: recent studies on content-based image retrieval (CBIR) and multimodal information retrieval in the field hold great potential in decision support, as well as for addressing multiple challenges in healthcare systems, such as computer-aided diagnosis (CAD). However, the subject is still under heavy research, and very few solutions have become part of Picture Archiving and Communication Systems (PACS) in hospitals and clinics. This paper proposes an extensible platform for multimodal medical image retrieval, integrated in an open-source PACS software with profile-based CBIR capabilities. In this article, we detail a technical approach to the problem by describing its main architecture and each sub-component, as well as the available web interfaces and the multimodal query techniques applied. Finally, we assess our implementation of the engine with computational performance benchmarks.
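
A common way to combine text-based and content-based (CBIR) scores in a multimodal retrieval engine of this kind is late fusion of the per-modality score lists. A minimal sketch (the platform's actual query techniques may differ; document ids and weights below are invented):

```python
def late_fusion(text_scores, image_scores, w_text=0.5):
    """Fuse two {doc_id: score} maps by min-max normalising each modality
    and taking a weighted sum; return doc ids ranked best-first."""
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero when all equal
        return {k: (v - lo) / span for k, v in scores.items()}
    t, i = norm(text_scores), norm(image_scores)
    docs = set(t) | set(i)
    fused = {d: w_text * t.get(d, 0.0) + (1 - w_text) * i.get(d, 0.0)
             for d in docs}
    return sorted(fused, key=fused.get, reverse=True)
```

Normalising per modality matters because text relevance scores and image-feature distances live on incomparable scales; the weight lets the engine favour one modality per query profile.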

  2. Multimodal designs for learning in contexts of diversity

    Directory of Open Access Journals (Sweden)

    Arlene Archer

    2014-12-01

    Full Text Available This paper aims to identify multimodal designs for learning in diverse and developing contexts, where access to resources remains vastly unequal. Using case studies from South African education, the paper explores ways of surfacing the range of students’ resources which are often not noticed or valued in formal educational settings. The studies showcased here demonstrate how ethnographic and textually-based approaches can be combined. Opening up the semiotic space of the classroom through multimodal designs for learning is important for finding innovative ways of addressing access, diversity, and past inequalities. This is of relevance not only to South Africa, but a range of global contexts. The paper argues that multimodal designs for learning can involve interrogating the relation between ‘tradition’ and ‘modernity’; harnessing students’ creative practices as resources for pedagogy; developing metalanguages for critical reflection; creating less regulated pedagogical spaces in order to enable useful teaching and learning practices.

  3. Study of internalization and viability of multimodal nanoparticles for labeling of human umbilical cord mesenchymal stem cells; Estudo de internalizacao e viabilidade de nanoparticulas multimodal para marcacao de celulas-tronco mesenquimais de cordao umbilical humano

    Energy Technology Data Exchange (ETDEWEB)

    Miyaki, Liza Aya Mabuchi [Faculdade de Enfermagem, Hospital Israelita Albert Einstein - HIAE, Sao Paulo, SP (Brazil); Sibov, Tatiana Tais; Pavon, Lorena Favaro; Mamani, Javier Bustamante; Gamarra, Lionel Fernel, E-mail: tatianats@einstein.br [Instituto do Cerebro - InCe, Hospital Israelita Albert Einstein - HIAE, Sao Paulo, SP (Brazil)

    2012-04-15

    Objective: To analyze multimodal magnetic nanoparticles-Rhodamine B in culture media for cell labeling, and to establish a protocol for detecting multimodal magnetic nanoparticles-Rhodamine B in labeled cells, evaluating their viability at concentrations of 10 µg Fe/mL and 100 µg Fe/mL. Methods: We analyzed the stability of multimodal magnetic nanoparticles-Rhodamine B in different culture media; labeled mesenchymal stem cells with multimodal magnetic nanoparticles-Rhodamine B; detected intracellular multimodal magnetic nanoparticles-Rhodamine B in mesenchymal stem cells; and assessed the viability of labeled cells by proliferation kinetics. Results: The stability analysis showed that multimodal magnetic nanoparticles-Rhodamine B had good stability in Dulbecco's Modified Eagle's-Low Glucose medium and RPMI 1640 medium. Labeling of mesenchymal stem cells with multimodal magnetic nanoparticles-Rhodamine B revealed the intracellular location of the nanoparticles, which appeared as blue granules co-localized with fluorescent clusters, characterizing the magnetic and fluorescent properties of multimodal magnetic nanoparticles-Rhodamine B. Conclusion: The stability of multimodal magnetic nanoparticles-Rhodamine B in Dulbecco's Modified Eagle's-Low Glucose medium and RPMI 1640 medium assured intracellular labeling of mesenchymal stem cells. This labeling did not affect the viability of the labeled mesenchymal stem cells, since they continued to proliferate for five days. (author)

  4. Baikov-Lee representations of cut Feynman integrals

    International Nuclear Information System (INIS)

    Harley, Mark; Moriello, Francesco; Schabinger, Robert M.

    2017-01-01

    We develop a general framework for the evaluation of d-dimensional cut Feynman integrals based on the Baikov-Lee representation of purely-virtual Feynman integrals. We implement the generalized Cutkosky cutting rule using Cauchy’s residue theorem and identify a set of constraints which determine the integration domain. The method applies equally well to Feynman integrals with a unitarity cut in a single kinematic channel and to maximally-cut Feynman integrals. Our cut Baikov-Lee representation reproduces the expected relation between cuts and discontinuities in a given kinematic channel and furthermore makes the dependence on the kinematic variables manifest from the beginning. By combining the Baikov-Lee representation of maximally-cut Feynman integrals and the properties of periods of algebraic curves, we are able to obtain complete solution sets for the homogeneous differential equations satisfied by Feynman integrals which go beyond multiple polylogarithms. We apply our formalism to the direct evaluation of a number of interesting cut Feynman integrals.

  5. Oscillatory activity in the infant brain and the representation of small numbers

    Directory of Open Access Journals (Sweden)

    Sumie Leung

    2016-02-01

    Full Text Available Gamma-band oscillatory activity (GBA) is an established neural signature of sustained occluded object representation in infants and adults. However, it is not yet known whether the magnitude of GBA in the infant brain reflects the quantity of occluded items held in memory. To examine this, we compared GBA of 6- to 8-month-old infants during occlusion periods after the representation of two objects versus that of one object. We found that maintaining a representation of two objects during occlusion resulted in significantly greater GBA relative to maintaining a single object. Further, this enhancement was located in the right occipital region, which is consistent with previous object representation research in adults and infants. We conclude that enhanced GBA reflects neural processes underlying infants’ representation of small numbers.

  6. Multimodality imaging of pulmonary infarction

    International Nuclear Information System (INIS)

    Bray, T.J.P.; Mortensen, K.H.; Gopalan, D.

    2014-01-01

    Highlights: • A plethora of pulmonary and systemic disorders, often associated with grave outcomes, may cause pulmonary infarction. • A stereotypical infarct is a peripheral wedge shaped pleurally based opacity but imaging findings can be highly variable. • Multimodality imaging is key to diagnosing the presence, aetiology and complications of pulmonary infarction. • Multimodality imaging of pulmonary infarction together with any ancillary features often guide to early targeted treatment. • CT remains the principal imaging modality with MRI increasingly used alongside nuclear medicine studies and ultrasound. - Abstract: The impact of absent pulmonary arterial and venous flow on the pulmonary parenchyma depends on a host of factors. These include location of the occlusive insult, the speed at which the occlusion develops and the ability of the normal dual arterial supply to compensate through increased bronchial arterial flow. Pulmonary infarction occurs when oxygenation is cut off secondary to sudden occlusion with lack of recruitment of the dual supply arterial system. Thromboembolic disease is the commonest cause of such an insult but a whole range of disease processes intrinsic and extrinsic to the pulmonary arterial and venous lumen may also result in infarcts. Recognition of the presence of infarction can be challenging as imaging manifestations often differ from the classically described wedge shaped defect and a number of weighty causes need consideration. This review highlights aetiologies and imaging appearances of pulmonary infarction, utilising cases to illustrate the essential role of a multimodality imaging approach in order to arrive at the appropriate diagnosis.

  7. Multimodality imaging of pulmonary infarction

    Energy Technology Data Exchange (ETDEWEB)

    Bray, T.J.P., E-mail: timothyjpbray@gmail.com [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom); Mortensen, K.H., E-mail: mortensen@doctors.org.uk [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom); University Department of Radiology, Addenbrookes Hospital, Cambridge University Hospitals NHS Foundation Trust, Hills Road, Box 318, Cambridge CB2 0QQ (United Kingdom); Gopalan, D., E-mail: deepa.gopalan@btopenworld.com [Department of Radiology, Papworth Hospital NHS Foundation Trust, Ermine Street, Papworth Everard, Cambridge CB23 3RE (United Kingdom)

    2014-12-15

    Highlights: • A plethora of pulmonary and systemic disorders, often associated with grave outcomes, may cause pulmonary infarction. • A stereotypical infarct is a peripheral wedge shaped pleurally based opacity but imaging findings can be highly variable. • Multimodality imaging is key to diagnosing the presence, aetiology and complications of pulmonary infarction. • Multimodality imaging of pulmonary infarction together with any ancillary features often guide to early targeted treatment. • CT remains the principal imaging modality with MRI increasingly used alongside nuclear medicine studies and ultrasound. - Abstract: The impact of absent pulmonary arterial and venous flow on the pulmonary parenchyma depends on a host of factors. These include location of the occlusive insult, the speed at which the occlusion develops and the ability of the normal dual arterial supply to compensate through increased bronchial arterial flow. Pulmonary infarction occurs when oxygenation is cut off secondary to sudden occlusion with lack of recruitment of the dual supply arterial system. Thromboembolic disease is the commonest cause of such an insult but a whole range of disease processes intrinsic and extrinsic to the pulmonary arterial and venous lumen may also result in infarcts. Recognition of the presence of infarction can be challenging as imaging manifestations often differ from the classically described wedge shaped defect and a number of weighty causes need consideration. This review highlights aetiologies and imaging appearances of pulmonary infarction, utilising cases to illustrate the essential role of a multimodality imaging approach in order to arrive at the appropriate diagnosis.

  8. Investigation of protein selectivity in multimodal chromatography using in silico designed Fab fragment variants.

    Science.gov (United States)

    Karkov, Hanne Sophie; Krogh, Berit Olsen; Woo, James; Parimal, Siddharth; Ahmadian, Haleh; Cramer, Steven M

    2015-11-01

    In this study, a unique set of antibody Fab fragments was designed in silico and produced to examine the relationship between protein surface properties and selectivity in multimodal chromatographic systems. We hypothesized that multimodal ligands containing both hydrophobic and charged moieties would interact strongly with protein surface regions where charged groups and hydrophobic patches were in close spatial proximity. Protein surface property characterization tools were employed to identify the potential multimodal ligand binding regions on the Fab fragment of a humanized antibody and to evaluate the impact of mutations on surface charge and hydrophobicity. Twenty Fab variants were generated by site-directed mutagenesis, recombinant expression, and affinity purification. Column gradient experiments were carried out with the Fab variants in multimodal, cation-exchange, and hydrophobic interaction chromatographic systems. The results clearly indicated that selectivity in the multimodal system was different from the other chromatographic modes examined. Column retention data for the reduced charge Fab variants identified a binding site comprising light chain CDR1 as the main electrostatic interaction site for the multimodal and cation-exchange ligands. Furthermore, the multimodal ligand binding was enhanced by additional hydrophobic contributions as evident from the results obtained with hydrophobic Fab variants. The use of in silico protein surface property analyses combined with molecular biology techniques, protein expression, and chromatographic evaluations represents a previously undescribed and powerful approach for investigating multimodal selectivity with complex biomolecules. © 2015 Wiley Periodicals, Inc.

  9. Multimodal network design and assessment

    NARCIS (Netherlands)

    Brands, Ties; Alkim, T.P.; van Eck, Gijs; van Arem, Bart; Arentze, T.

    2010-01-01

    A framework is proposed for the design of an optimal multimodal transport network for the Randstad area. This research framework consists of a multi-objective optimization heuristic and a fast network assessment module, which results in a set of Pareto optimal solutions.
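
The Pareto optimal set produced by such a multi-objective heuristic contains exactly the non-dominated solutions: those that no other solution beats on every objective. A minimal sketch of extracting that set, assuming all objectives are minimised:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective tuples
    (e.g. (travel_time, cost)), assuming every objective is minimised."""
    front = []
    for s in solutions:
        dominated = any(
            all(o <= so for o, so in zip(other, s)) and other != s
            for other in solutions
        )
        if not dominated:
            front.append(s)
    return front
```

This brute-force filter is O(n^2) in the number of candidate solutions; practical multi-objective optimizers maintain the front incrementally instead.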

  10. Reference Frame Fields based on Quantum Theory Representations of Real and Complex Numbers

    OpenAIRE

    Benioff, Paul

    2007-01-01

    A quantum theory representation of real (R) and complex (C) numbers is given that is based on states of single, finite strings of qukits for any base k > 1. Both unary representations and the possibility that qukits with k a prime number are elementary and the rest composite are discussed. Cauchy sequences of qukit string states are defined from the arithmetic properties. The representations of R and C, as equivalence classes of these sequences, differ from classical kit string state represe...

  11. A Pretargeted Approach for the Multimodal PET/NIRF Imaging of Colorectal Cancer.

    Science.gov (United States)

    Adumeau, Pierre; Carnazza, Kathryn E; Brand, Christian; Carlin, Sean D; Reiner, Thomas; Agnew, Brian J; Lewis, Jason S; Zeglis, Brian M

    2016-01-01

    The complementary nature of positron emission tomography (PET) and near-infrared fluorescence (NIRF) imaging makes the development of strategies for the multimodal PET/NIRF imaging of cancer a very enticing prospect. Indeed, in the context of colorectal cancer, a single multimodal PET/NIRF imaging agent could be used to stage the disease, identify candidates for surgical intervention, and facilitate the image-guided resection of the disease. While antibodies have proven to be highly effective vectors for the delivery of radioisotopes and fluorophores to malignant tissues, the use of radioimmunoconjugates labeled with long-lived nuclides such as 89Zr poses two important clinical complications: high radiation doses to the patient and the need for significant lag time between imaging and surgery. In vivo pretargeting strategies that decouple the targeting vector from the radioactivity at the time of injection have the potential to circumvent these issues by facilitating the use of positron-emitting radioisotopes with far shorter half-lives. Here, we report the synthesis, characterization, and in vivo validation of a pretargeted strategy for the multimodal PET and NIRF imaging of colorectal carcinoma. This approach is based on the rapid and bioorthogonal ligation between a trans-cyclooctene- and fluorophore-bearing immunoconjugate of the huA33 antibody (huA33-Dye800-TCO) and a 64Cu-labeled tetrazine radioligand (64Cu-Tz-SarAr). In vivo imaging experiments in mice bearing A33 antigen-expressing SW1222 colorectal cancer xenografts clearly demonstrate that this approach enables the non-invasive visualization of tumors and the image-guided resection of malignant tissue, all at only a fraction of the radiation dose created by a directly labeled radioimmunoconjugate.
Additional in vivo experiments in peritoneal and patient-derived xenograft models of colorectal carcinoma reinforce the efficacy of this methodology and underscore its potential as an innovative and useful

  12. Cycling in multimodal transport behaviours: Exploring modality styles in the Danish population

    DEFF Research Database (Denmark)

    Olafsson, Anton Stahl; Nielsen, Thomas Alexander Sick; Carstensen, Trine Agervig

    2016-01-01

    This paper explores how cycling forms part of multimodal transport behaviour, based on survey data on transport modes and travel purposes and the weekly frequency of out-of-home activities and travel mode use in a representative sample of adult Danes (n = 1957). The following five distinct multimodal travel segments or 'modality styles' are identified: 'education transport'; 'public-based transport'; 'limited transport'; 'bicycle-based transport'; and 'car-based transport'. Travel behaviour is predominantly multimodal, with few unimodal car-drivers being identified. Substantial cycling takes place in all modality styles … and small towns. Thus, the way in which travel modes relate to the urban environment and variations in modality styles must serve as the starting point for policies aiming to fulfil the potential of multimodal transport behaviour and promote cycling. (C) 2016 Elsevier Ltd. All rights reserved.

  13. Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    Directory of Open Access Journals (Sweden)

    Shujie Deng

    2014-10-01

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games and we provide a brief discussion of why cognitive processes involved in learning and training are enhanced under immersive virtual environments. We initially outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have already combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding in multimodal serious game production as well as exploring possible areas for new applications.

  14. Design, Performance and Optimization for Multimodal Radar Operation

    Directory of Open Access Journals (Sweden)

    Surendra S. Bhat

    2012-09-01

    This paper describes the underlying methodology behind an adaptive multimodal radar sensor that is capable of progressively optimizing its range resolution depending upon the target scattering features. It consists of a test-bed that enables the generation of linear frequency modulated waveforms of various bandwidths. This paper discusses a theoretical approach to optimizing the bandwidth used by the multimodal radar. It also discusses the various experimental results obtained from measurement. The resolution predicted from theory agrees quite well with that obtained from experiments for different target arrangements.
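The resolution–bandwidth relationship underlying this kind of optimization is the standard pulse-compression formula ΔR ≈ c/(2B). A minimal sketch of the textbook relation (generic radar theory, not the paper's test-bed code; the function name is an assumption):

```python
# Range resolution of an LFM (chirp) waveform: delta_R = c / (2 * B)
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Theoretical range resolution in meters for a given chirp bandwidth."""
    return C / (2.0 * bandwidth_hz)

for b in (50e6, 150e6, 500e6):
    # e.g. 150 MHz of bandwidth gives roughly 1 m resolution
    print(f"B = {b/1e6:.0f} MHz -> delta_R = {range_resolution(b):.2f} m")
```

Doubling the transmitted bandwidth halves ΔR, which is why the multimodal sensor widens its waveform bandwidth only when finer target detail is needed.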

  15. Spectral space-time coding for optical communications through a multimode fiber

    NARCIS (Netherlands)

    Alonso, A.; Berghmans, F.; Thienpont, H.; Danckaert, J.; Desmet, L.

    2001-01-01

    We propose a method for coding the mode structure of a multimode optical fiber by spectral coding mixed with space-time modulation. With this system we can improve the data carrying capacity of a multimode fiber for optical communications and optical interconnects, and encode and decode the

  16. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
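Stimulus reconstruction of the kind used in such studies is commonly posed as regularized linear decoding of the speech envelope from multichannel recordings. The sketch below uses synthetic data and plain ridge regression as a stand-in; the variable names, channel count, and noise level are assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a slowly varying "speech envelope" drives 8 response
# channels through a fixed linear mixing plus noise (a toy MEG stand-in).
T, C = 2000, 8
t = np.arange(T)
envelope = np.sin(2 * np.pi * t / 200) + 0.5 * np.sin(2 * np.pi * t / 57)
mixing = rng.normal(size=C)
response = np.outer(envelope, mixing) + 0.3 * rng.normal(size=(T, C))

# Ridge-regularized least squares: w minimizes ||R w - s||^2 + lam ||w||^2
lam = 1.0
w = np.linalg.solve(response.T @ response + lam * np.eye(C),
                    response.T @ envelope)
reconstruction = response @ w

# Reconstruction fidelity is scored by correlation with the true envelope
r = np.corrcoef(reconstruction, envelope)[0, 1]
print(f"reconstruction correlation: {r:.3f}")
```

In the actual experiments, the same score computed per cortical source region is what distinguishes scene-level from stream-level representations.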

  17. Patterns of Failure Following Multimodal Treatment for Medulloblastoma: Long-Term Follow-up Results at a Single Institution.

    Science.gov (United States)

    Lee, Dong Soo; Cho, Jaeho; Kim, Se Hoon; Kim, Dong-Seok; Shim, Kyu Won; Lyu, Chuhl Joo; Han, Jung Woo; Suh, Chang-Ok

    2015-10-01

    The purpose of this study is to investigate the long-term results and appropriateness of radiation therapy (RT) for medulloblastoma (MB) at a single institution. We analyzed the clinical outcomes of 106 patients with MB who received RT between January 1992 and October 2009. The median age was 7 years (range, 0 to 50 years), and the proportion of M0, M1, M2, and M3 stages was 60.4%, 8.5%, 4.7%, and 22.6%, respectively. The median total craniospinal irradiation (CSI) and posterior fossa tumor bed doses in the 102 patients (96.2%) treated with CSI were 36 Gy and 54 Gy, respectively. The median follow-up period in survivors was 132 months (range, 31 to 248 months). A gradual improvement in survival outcomes was observed, with 5-year overall survival rates of 61.5% in the 1990s increasing to 73.6% in the 2000s. A total of 29 recurrences (27.4%) developed at the following sites: five (17.2%) in the tumor bed; five (17.2%) in the posterior fossa other than the tumor bed; nine (31%) in the supratentorium; and six (20.7%) in the spinal subarachnoid space only. The four remaining patients showed recurrences at multiple sites. Among the 12 supratentorial recurrences, five occurred in the subfrontal areas. Although the frequency of posterior fossa/tumor bed recurrences was significantly higher among patients treated with subtotal resection, recurrences at other sites (other intracranial/spinal) were more common among patients treated with gross tumor removal (p=0.016). There was no case of spinal subarachnoid space relapse in the desmoplastic/extensive nodular histological subtypes. Long-term follow-up results and patterns of failure confirmed the importance of optimal RT dose and field arrangement. More tailored multimodal strategies and proper CSI technique may be the cornerstones for improving treatment outcomes in MB patients.

  18. Driver Education for New Multimodal Facilities

    Science.gov (United States)

    2016-05-24

    Local and state transportation agencies are redesigning roads to accommodate multimodal travel, including the addition of new configurations, infrastructures, and rules that may be unfamiliar to current drivers and other road users. Education and out...

  19. Advanced Contrast Agents for Multimodal Biomedical Imaging Based on Nanotechnology.

    Science.gov (United States)

    Calle, Daniel; Ballesteros, Paloma; Cerdán, Sebastián

    2018-01-01

    Clinical imaging modalities have reached a prominent role in medical diagnosis and patient management in recent decades. Different imaging methodologies, such as Positron Emission Tomography, Single Photon Emission Tomography, X-rays, or Magnetic Resonance Imaging, are in continuous evolution to satisfy the increasing demands of current medical diagnosis. Progress in these methodologies has been favored by the parallel development of increasingly more powerful contrast agents. These are molecules that enhance the intrinsic contrast of the images in the tissues where they accumulate, revealing noninvasively the presence of characteristic molecular targets or differential physiopathological microenvironments. The contrast agent field is currently moving to improve the performance of these molecules by incorporating the advantages that modern nanotechnology offers. These include, mainly, the possibility to combine imaging and therapeutic capabilities on the same theranostic platform, or to improve targeting efficiency in vivo by molecular engineering of the nanostructures. In this review, we provide an introduction to multimodal imaging methods in biomedicine, the sub-nanometric imaging agents previously used, and the development of advanced multimodal and theranostic imaging agents based on nanotechnology. We conclude by providing some illustrative examples from our own laboratories, including recent progress in theranostic formulations of magnetoliposomes containing ω-3 poly-unsaturated fatty acids to treat inflammatory diseases, and the use of stealth liposomes engineered with a pH-sensitive nanovalve to release their cargo specifically in the acidic extracellular pH microenvironment of tumors.

  20. [The value of multimodal imaging by single photon emission computed tomography associated to X ray computed tomography (SPECT-CT) in the management of differentiated thyroid carcinoma: about 156 cases].

    Science.gov (United States)

    Mhiri, Aida; El Bez, Intidhar; Slim, Ihsen; Meddeb, Imène; Yeddes, Imene; Ghezaiel, Mohamed; Gritli, Saïd; Ben Slimène, Mohamed Faouzi

    2013-10-01

    Single photon emission computed tomography combined with low-dose computed tomography (SPECT-CT) is a hybrid imaging modality integrating functional and anatomical data. The purpose of our study was to evaluate the contribution of SPECT-CT over traditional planar imaging in patients with differentiated thyroid carcinoma (DTC). A post-therapy 131I whole-body scan followed by SPECT-CT of the neck and thorax was performed in 156 patients with DTC. Among these 156 patients, followed for predominantly papillary carcinoma, the use of fused SPECT-CT imaging compared to conventional planar imaging allowed us to correct our therapeutic approach in 26.9% (42/156 patients), according to the therapeutic management protocols of our institute. SPECT-CT is a multimodal imaging modality providing better identification and more accurate anatomic localization of the foci of radioiodine uptake, with an impact on therapeutic management.

  1. Multimode-singlemode-multimode optical fiber sensor coated with novolac resin for detecting liquid phase alcohol

    Science.gov (United States)

    Marfu'ah, Amalia, Niza Rosyda; Hatta, Agus Muhamad; Pratama, Detak Yan

    2018-04-01

    An alcohol sensor based on a multimode-singlemode-multimode (MSM) optical fiber with novolac resin as the external medium is proposed and demonstrated experimentally. Novolac resin swells when it is exposed to alcohol. This effect causes a change in the polymer density, leading to a variation in its refractive index. The light transmitted through the sensor depends on the refractive index of the external medium. Based on the results, the alcohol sensor based on the MSM optical fiber structure using novolac resin has a higher sensitivity than the sensor without novolac resin in a mixture of alcohol and distilled water. With a singlemode fiber length of 5 mm, the sensor has a sensitivity of 0.028972 dBm per % V/V in a mixture of alcohol and distilled water, and of 0.005005 dBm per % V/V in a mixture of alcohol and a 10% w/w sugar solution.
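Sensitivities like these are the slopes of power-versus-concentration calibration lines. A minimal illustration of how such a slope is extracted (the calibration points below are fabricated for illustration, chosen to yield a slope near the reported 0.029 dBm per % V/V):

```python
import numpy as np

# Hypothetical calibration data: transmitted power (dBm) vs. alcohol
# concentration (% V/V). The slope of the fitted line is the sensitivity.
concentration = np.array([0.0, 10.0, 20.0, 30.0, 40.0])     # % V/V
power_dbm = np.array([-3.50, -3.21, -2.92, -2.63, -2.34])   # fabricated, linear

slope, intercept = np.polyfit(concentration, power_dbm, 1)
print(f"sensitivity ≈ {slope:.5f} dBm per % V/V")  # slope ≈ 0.029
```

A steeper calibration slope means a larger power change per unit concentration, i.e. a more sensitive sensor.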

  2. High-resolution multimodal clinical multiphoton tomography of skin

    Science.gov (United States)

    König, Karsten

    2011-03-01

    This review focuses on multimodal multiphoton tomography based on near-infrared femtosecond lasers. Clinical multiphoton tomographs for 3D high-resolution in vivo imaging were placed on the market several years ago. The second generation of this Prism Award-winning high-tech skin imaging tool (MPTflex) was introduced in 2010. The same year, the world's first clinical CARS studies were performed with a hybrid multimodal multiphoton tomograph. In particular, non-fluorescent lipids and water, as well as mitochondrial fluorescent NAD(P)H, fluorescent elastin, keratin, and melanin, and SHG-active collagen, have been imaged with submicron resolution in patients suffering from psoriasis. Further multimodal approaches include the combination of multiphoton tomographs with low-resolution wide-field systems such as ultrasound, optoacoustic, OCT, and dermoscopy systems. Multiphoton tomographs are currently employed in Australia, Japan, the US, and several European countries for early diagnosis of skin cancer, optimization of treatment strategies, and cosmetic research, including long-term testing of sunscreen nanoparticles as well as anti-aging products.

  3. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used in reconstruction to restore the missing information between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
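The two building blocks of this pipeline can be sketched in a few lines: cubic spline interpolation to reconstruct values between acquired slices, and a joint-histogram mutual information score as the registration similarity measure. This is a toy sketch (single-voxel column, random test images, hypothetical function names), not the authors' implementation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# --- Cubic spline interpolation between image slices (toy 1-voxel column) ---
z = np.array([0.0, 1.0, 2.0, 3.0])            # acquired slice positions
intensity = np.array([10.0, 14.0, 13.0, 9.0])  # voxel value on each slice
spline = CubicSpline(z, intensity)
mid = float(spline(1.5))  # reconstructed value for a missing slice at z = 1.5

# --- Mutual information between two images via a joint histogram ---
def mutual_information(a, b, bins=32):
    """MI in nats; higher means the intensity distributions are more dependent."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
# A perfectly aligned pair scores higher than an unrelated pair
assert mutual_information(img, img) > mutual_information(img, rng.random((64, 64)))
```

Registration then amounts to searching over spatial transforms for the one that maximizes this mutual information score between the PET and CT volumes.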

  4. Interactivity in Educational Apps for Young children: A Multimodal Analysis

    Directory of Open Access Journals (Sweden)

    Alexandra H. Blitz-Raith

    2017-11-01

    Interactivity is an important indicator of an educational app's reception. Since most educational apps are multimodal, this justifies a methodological initiative to understand the meaningful involvement of multimodality in enacting, and even amplifying, interactivity in an educational app. Yet research so far has largely concentrated on algorithm construction and user feedback rather than on multimodal interactions, especially from a social semiotics perspective. Drawing on social semiotics approaches, this article proposes a multimodal analytic framework to examine three layers of mode in engendering interaction; namely, multiplicity, function, and relationship. Using the analytic framework in an analysis of The Farm Adventure for Kids, a popular educational app for pre-school children, we found that still images are dominant proportionally and are central in the interactive process. We also found that tapping still images of animals on screen is the main action, with other screen actions deliberately excluded. Such findings suggest that aligning children's cognitive and physical capabilities to the use of mode becomes the primary consideration in educational app design, and that consistent attention to this alignment in mobilizing modes significantly affects an educational app's interactivity, and consequently its reception by young children.

  5. MODELING PARTICLE SIZE DISTRIBUTION IN HETEROGENEOUS POLYMERIZATION SYSTEMS USING MULTIMODAL LOGNORMAL FUNCTION

    Directory of Open Access Journals (Sweden)

    J. C. Ferrari

    This work evaluates the use of the multimodal lognormal function to describe Particle Size Distributions (PSD) of emulsion and suspension polymerization processes, including continuous reactions with particle re-nucleation leading to complex multimodal PSDs. A global optimization algorithm, namely Particle Swarm Optimization (PSO), was used for parameter estimation of the proposed model, minimizing the objective function defined by the mean squared errors. Statistical evaluation of the results indicated that the multimodal lognormal function could describe distinctive features of different types of PSDs with accuracy and consistency.
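The estimation step can be sketched as a global-best particle swarm minimizing the mean squared error between a multimodal lognormal model and a measured PSD. The code below is a minimal illustration on synthetic data; the bimodal target, parameter bounds, and swarm settings are assumptions, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)

def bimodal_lognormal(d, w1, mu1, s1, mu2, s2):
    """Weighted sum of two lognormal densities (the second weight is 1 - w1)."""
    def ln(d, mu, s):
        return np.exp(-(np.log(d) - mu) ** 2 / (2 * s ** 2)) / (d * s * np.sqrt(2 * np.pi))
    return w1 * ln(d, mu1, s1) + (1 - w1) * ln(d, mu2, s2)

# Synthetic PSD with two particle populations (diameters in arbitrary units)
d = np.linspace(0.05, 5.0, 200)
target = bimodal_lognormal(d, 0.4, np.log(0.2), 0.3, np.log(1.5), 0.25)

def mse(p):
    return np.mean((bimodal_lognormal(d, *p) - target) ** 2)

# Minimal global-best PSO over the 5 parameters (w1, mu1, s1, mu2, s2)
lo = np.array([0.0, np.log(0.05), 0.05, np.log(0.05), 0.05])
hi = np.array([1.0, np.log(5.0), 1.0, np.log(5.0), 1.0])
pos = rng.uniform(lo, hi, size=(40, 5))
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([mse(p) for p in pos])
g = pbest[pcost.argmin()]
for _ in range(200):
    r1, r2 = rng.random((2, 40, 5))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([mse(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    g = pbest[pcost.argmin()]
print(f"best MSE: {pcost.min():.2e}")
```

Because the personal bests only ever improve, the best MSE is non-increasing over iterations; with enough particles the swarm recovers the two lognormal modes without gradient information.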

  6. An evaluation framework for multimodal interaction determining quality aspects and modality choice

    CERN Document Server

    Wechsung, Ina

    2014-01-01

    This book presents (1) an exhaustive and empirically validated taxonomy of quality aspects of multimodal interaction as well as respective measurement methods, (2) a validated questionnaire specifically tailored to the evaluation of multimodal systems and covering most of the taxonomy's quality aspects, (3) insights on how the quality perceptions of multimodal systems relate to the quality perceptions of its individual components, (4) a set of empirically tested factors which influence modality choice, and (5) models regarding the relationship of the perceived quality of a modality and the actual usage of a modality.

  7. Multimoded rf delay line distribution system for the Next Linear Collider

    Directory of Open Access Journals (Sweden)

    S. G. Tantawi

    2002-03-01

    The delay line distribution system is an alternative to conventional pulse compression, which enhances the peak power of rf sources while matching the long pulse of those sources to the shorter filling time of accelerator structures. We present an implementation of this scheme that combines pairs of parallel delay lines of the system into single lines. The power of several sources is combined into a single waveguide delay line using a multimode launcher. The output mode of the launcher is determined by the phase coding of the input signals. The combined power is extracted from the delay line using mode-selective extractors, each of which extracts a single mode. Hence, the phase coding of the sources controls the output port of the combined power. The power is then fed to the local accelerator structures. We present a detailed design of such a system, including several implementation methods for the launchers, extractors, and ancillary high power rf components. The system is designed so that it can handle the 600 MW peak power required by the Next Linear Collider design while maintaining high efficiency.
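The phase-coding principle can be illustrated with an idealized two-input combiner: the relative phase of the sources selects which combined mode, and hence which output port, carries the power. This is a simplified 180-degree-hybrid textbook model, not the NLC launcher hardware:

```python
import numpy as np

def combine(a, b):
    """Idealized lossless combiner: two input amplitudes map to an 'even'
    and an 'odd' mode; returns the power in each mode."""
    even = (a + b) / np.sqrt(2)
    odd = (a - b) / np.sqrt(2)
    return abs(even) ** 2, abs(odd) ** 2

# Two unit-amplitude sources, in phase: all power goes to the even mode
p_even, p_odd = combine(1.0, 1.0)                  # ≈ (2, 0)

# Flip the relative phase by pi: all power switches to the odd mode
q_even, q_odd = combine(1.0, np.exp(1j * np.pi))   # ≈ (0, 2)

print(p_even, p_odd)
print(q_even, q_odd)
```

Extending this to more sources and more launcher modes gives the multimode phase coding of the paper: the drive phases form a code word that routes the summed power to the chosen extractor.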

  8. On the effects of multimodal information integration in multitasking.

    Science.gov (United States)

    Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian

    2017-07-07

    There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).

  9. Evaluation of selectivity in homologous multimodal chromatographic systems using in silico designed antibody fragment libraries.

    Science.gov (United States)

    Karkov, Hanne Sophie; Woo, James; Krogh, Berit Olsen; Ahmadian, Haleh; Cramer, Steven M

    2015-12-24

    This study describes the in silico design, surface property analyses, production and chromatographic evaluations of a diverse set of antibody Fab fragment variants. Based on previous findings, we hypothesized that the complementarity-determining regions (CDRs) constitute important binding sites for multimodal chromatographic ligands. Given that antibodies are highly diversified molecules and in particular the CDRs, we set out to examine the generality of this result. For this purpose, four different Fab fragments with different CDRs and/or framework regions of the variable domains were identified and related variants were designed in silico. The four Fab variant libraries were subsequently generated by site-directed mutagenesis and produced by recombinant expression and affinity purification to enable examination of their chromatographic retention behavior. The effects of geometric re-arrangement of the functional moieties on the multimodal resin ligands were also investigated with respect to Fab variant retention profiles by comparing two commercially available multimodal cation-exchange ligands, Capto MMC and Nuvia cPrime, and two novel multimodal ligand prototypes. Interestingly, the chromatographic data demonstrated distinct selectivity trends between the four Fab variant libraries. For three of the Fab libraries, the CDR regions appeared as major binding sites for all multimodal ligands. In contrast, the fourth Fab library displayed a distinctly different chromatographic behavior, where Nuvia cPrime and related multimodal ligand prototypes provided markedly improved selectivity over Capto MMC. Clearly, the results illustrate that the discriminating power of multimodal ligands differs between different Fab fragments. The results are promising indications that multimodal chromatography using the appropriate multimodal ligands can be employed in downstream bioprocessing for challenging selective separation of product related variants. 
Copyright © 2015 Elsevier B.V.

  10. EXPLICITATION AND ADDITION TECHNIQUES IN AUDIOVISUAL TRANSLATION: A MULTIMODAL APPROACH OF ENGLISHINDONESIAN SUBTITLES

    Directory of Open Access Journals (Sweden)

    Ichwan Suyudi

    2017-12-01

    In audiovisual translation, the multimodality of the audiovisual text is both a challenge and a resource for subtitlers. This paper illustrates how multiple modes provide information that helps subtitlers to gain a better understanding of meaning-making practices, which in turn influences their decision-making when translating a certain verbal text. Subtitlers may explicitate, add, and condense the texts based on the multiple modes as seen in the visual frames. Subtitlers have to consider the distribution and integration of the meanings of the multiple modes in order to create comprehensive equivalence between the source and target texts. Excerpts of visual frames in this paper are taken from the English films Forrest Gump (drama, 1996) and James Bond (thriller, 2010).

  11. Active Multimodal Sensor System for Target Recognition and Tracking.

    Science.gov (United States)

    Qu, Yufu; Zhang, Guirong; Zou, Zhaofan; Liu, Ziyue; Mao, Jiansen

    2017-06-28

    High accuracy target recognition and tracking systems using a single sensor or a passive multisensor set are susceptible to external interferences and exhibit environmental dependencies. These difficulties stem mainly from limitations to the available imaging frequency bands, and a general lack of coherent diversity of the available target-related data. This paper proposes an active multimodal sensor system for target recognition and tracking, consisting of a visible, an infrared, and a hyperspectral sensor. The system makes full use of its multisensor information collection abilities; furthermore, it can actively control different sensors to collect additional data, according to the needs of the real-time target recognition and tracking processes. This level of integration between hardware collection control and data processing is experimentally shown to effectively improve the accuracy and robustness of the target recognition and tracking system.

  12. Multimodal tuned dynamic absorber for split Stirling linear cryocooler

    Science.gov (United States)

    Veprik, A.; Tuito, A.

    2017-02-01

    Forthcoming low size, weight, power and price split Stirling linear cryocoolers may rely on electro-dynamically driven single-piston compressors and pneumatically driven expanders interconnected by the configurable transfer line. For compactness, compressor and expander units may be placed in a side-by-side manner, thus producing tonal vibration export comprising force and moment components. In vibration sensitive applications, this may result in excessive angular line of sight jitter and translational defocusing affecting the image quality. The authors present Multimodal Tuned Dynamic Absorber (MTDA), having one translational and two tilting modes essentially tuned to the driving frequency. The dynamic reactions (force and moment) produced by such a MTDA are simultaneously counterbalancing force and moment vibration export produced by the cryocooler. The authors reveal the design details, the method of fine modal tuning and outcomes of numerical simulation on attainable performance.

  13. Study of internalization and viability of multimodal nanoparticles for labeling of human umbilical cord mesenchymal stem cells

    International Nuclear Information System (INIS)

    Miyaki, Liza Aya Mabuchi; Sibov, Tatiana Tais; Pavon, Lorena Favaro; Mamani, Javier Bustamante; Gamarra, Lionel Fernel

    2012-01-01

    Objective: To analyze multimodal magnetic nanoparticles-Rhodamine B in culture media for cell labeling, and to study the detection of multimodal magnetic nanoparticles-Rhodamine B in labeled cells, evaluating their viability at concentrations of 10 μg Fe/mL and 100 μg Fe/mL. Methods: We performed an analysis of the stability of multimodal magnetic nanoparticles-Rhodamine B in different culture media; the labeling of mesenchymal stem cells with multimodal magnetic nanoparticles-Rhodamine B; the intracellular detection of multimodal magnetic nanoparticles-Rhodamine B in mesenchymal stem cells; and an assessment of the viability of the labeled cells by proliferation kinetics. Results: The stability analysis showed that multimodal magnetic nanoparticles-Rhodamine B had good stability in cultured Dulbecco's Modified Eagle's-Low Glucose medium and RPMI 1640 medium. Labeling of mesenchymal stem cells with multimodal magnetic nanoparticles-Rhodamine B revealed the intracellular location of the nanoparticles, which were shown as blue granules co-localized in fluorescent clusters, thus characterizing the magnetic and fluorescent properties of multimodal magnetic nanoparticles-Rhodamine B. Conclusion: The stability of multimodal magnetic nanoparticles-Rhodamine B found in cultured Dulbecco's Modified Eagle's-Low Glucose medium and RPMI 1640 medium assured intracellular labeling of mesenchymal stem cells. This cell labeling did not affect the viability of the labeled mesenchymal stem cells, since they continued to proliferate for five days. (author)

  14. Multimodal approaches to use mobile, digital devices in learning practies

    DEFF Research Database (Denmark)

    Buhl, Mie

    In this paper, I discuss the potential of multimodal approaches to enhance learning processes. I draw on a case based on Danish Master Courses in ICT and didactic designs, where multimodal approaches are at the center of students' practical design experience as well as of the generation of theoretical knowledge. The design of the master courses takes its starting point in the assumption that theoretical knowledge is generated from practical experiences. Thus, the organization of the students' learning processes revolves around practical multimodal experiences followed by iterative reflexive sessions… My discussion's theoretical … (… anthropology, psychology and sociology) and outlines the prospect of a trans-disciplinary learning mode. The learning mode reflects the current society, where knowledge production is a social collaborative process and knowledge is produced in formal as well as informal and non-formal contexts.

  15. Optimal resonant control of flexible structures

    DEFF Research Database (Denmark)

    Krenk, Steen; Høgsberg, Jan Becker

    2009-01-01

    When a resonant controller is introduced for a particular vibration mode in a structure, this mode splits into two. A design principle is developed for resonant control based on equal damping of these two modes. First the design principle is developed for control of a system with a single degree of freedom, and then it is extended to multi-mode structures. A root locus analysis of the controlled single-mode structure identifies the equal modal damping property as a condition on the linear and cubic terms of the characteristic equation. Particular solutions for filtered displacement feedback and filtered acceleration feedback are developed by combining the root locus analysis with optimal properties of the displacement amplification frequency curve. The results are then extended to multi-mode structures by including a quasi-static representation of the background modes in the equations...

  16. Multi-Modal Intelligent Traffic Signal Systems GPS

    Data.gov (United States)

    Department of Transportation — Data were collected during the Multi-Modal Intelligent Transportation Signal Systems (MMITSS) study. MMITSS is a next-generation traffic signal system that seeks to...

  17. Phase space density representations in fluid dynamics

    International Nuclear Information System (INIS)

    Ramshaw, J.D.

    1989-01-01

    Phase space density representations of inviscid fluid dynamics were recently discussed by Abarbanel and Rouhi. Here it is shown that such representations may be simply derived and interpreted by means of the Liouville equation corresponding to the dynamical system of ordinary differential equations that describes fluid particle trajectories. The Hamiltonian and Poisson bracket for the phase space density then emerge as immediate consequences of the corresponding structure of the dynamics. For barotropic fluids, this approach leads by direct construction to the formulation presented by Abarbanel and Rouhi. Extensions of this formulation to inhomogeneous incompressible fluids and to fluids in which the state equation involves an additional transported scalar variable are constructed by augmenting the single-particle dynamics and phase space to include the relevant additional variable
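
    The construction sketched in this abstract can be stated compactly. If fluid-particle trajectories obey an ordinary differential equation driven by the velocity field, the phase-space density satisfies the corresponding Liouville (continuity) equation; the schematic form below is a standard statement of that correspondence, not a reproduction of the paper's own equations:

```latex
\frac{d\mathbf{x}}{dt} = \mathbf{u}(\mathbf{x},t),
\qquad
\frac{\partial f}{\partial t} + \nabla \cdot \bigl( \mathbf{u}\, f \bigr) = 0 .
```

    The Hamiltonian and Poisson-bracket structure for f then follow from the structure of the trajectory dynamics, as the abstract notes.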

  18. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    Science.gov (United States)

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  19. Single-footprint retrievals for AIRS using a fast TwoSlab cloud-representation model and the SARTA all-sky infrared radiative transfer algorithm

    Science.gov (United States)

    DeSouza-Machado, Sergio; Larrabee Strow, L.; Tangborn, Andrew; Huang, Xianglei; Chen, Xiuhong; Liu, Xu; Wu, Wan; Yang, Qiguang

    2018-01-01

    One-dimensional variational retrievals of temperature and moisture fields from hyperspectral infrared (IR) satellite sounders use cloud-cleared radiances (CCRs) as their observation. These derived observations allow the use of clear-sky-only radiative transfer in the inversion for geophysical variables but at reduced spatial resolution compared to the native sounder observations. Cloud clearing can introduce various errors, although scenes with large errors can be identified and ignored. Information content studies show that, when using multilayer cloud liquid and ice profiles in infrared hyperspectral radiative transfer codes, there are typically only 2-4 degrees of freedom (DOFs) of cloud signal. This implies a simplified cloud representation is sufficient for some applications which need accurate radiative transfer. Here we describe a single-footprint retrieval approach for clear and cloudy conditions, which uses the thermodynamic and cloud fields from numerical weather prediction (NWP) models as a first guess, together with a simple cloud-representation model coupled to a fast scattering radiative transfer algorithm (RTA). The NWP model thermodynamic and cloud profiles are first co-located to the observations, after which the N-level cloud profiles are converted to two slab clouds (TwoSlab; typically one for ice and one for water clouds). From these, one run of our fast cloud-representation model allows an improvement of the a priori cloud state by comparing the observed and model-simulated radiances in the thermal window channels. The retrieval yield is over 90 %, while the degrees of freedom correlate with the observed window channel brightness temperature (BT) which itself depends on the cloud optical depth. The cloud-representation and scattering package is benchmarked against radiances computed using a maximum random overlap (RMO) cloud scheme. 
All-sky infrared radiances measured by NASA's Atmospheric Infrared Sounder (AIRS) and NWP thermodynamic and cloud
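
    The N-level-to-TwoSlab reduction described above can be sketched as follows. This is a minimal illustrative sketch only: the function name, the freezing-point partition rule, and the amount-weighted slab position are assumptions for illustration, not the SARTA/TwoSlab implementation.

```python
def two_slab(press_hPa, temp_K, cloud_wc):
    """Collapse an N-level cloud profile into two slabs (ice / liquid).

    Levels colder than the freezing point go to the ice slab, the rest
    to the liquid slab (an illustrative partition rule only).  Returns a
    dict mapping phase -> (total amount, amount-weighted mean pressure).
    """
    slabs = {}
    for phase, keep in (("ice", lambda t: t < 273.15),
                        ("liquid", lambda t: t >= 273.15)):
        amounts = [(w, p) for p, t, w in zip(press_hPa, temp_K, cloud_wc)
                   if keep(t) and w > 0.0]
        total = sum(w for w, _ in amounts)
        # Amount-weighted mean pressure stands in for the slab position.
        mean_p = sum(w * p for w, p in amounts) / total if total else None
        slabs[phase] = (total, mean_p)
    return slabs

# Hypothetical 4-level profile: one cold (ice) layer, two warm layers.
profile = two_slab(
    press_hPa=[200.0, 400.0, 600.0, 850.0],
    temp_K=[220.0, 250.0, 275.0, 288.0],
    cloud_wc=[0.0, 0.02, 0.01, 0.03],
)
```

    The resulting two slabs are what the fast cloud-representation model would then adjust against the observed window-channel radiances.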

  20. Single-footprint retrievals for AIRS using a fast TwoSlab cloud-representation model and the SARTA all-sky infrared radiative transfer algorithm

    Directory of Open Access Journals (Sweden)

    S. DeSouza-Machado

    2018-01-01

    Full Text Available One-dimensional variational retrievals of temperature and moisture fields from hyperspectral infrared (IR) satellite sounders use cloud-cleared radiances (CCRs) as their observation. These derived observations allow the use of clear-sky-only radiative transfer in the inversion for geophysical variables but at reduced spatial resolution compared to the native sounder observations. Cloud clearing can introduce various errors, although scenes with large errors can be identified and ignored. Information content studies show that, when using multilayer cloud liquid and ice profiles in infrared hyperspectral radiative transfer codes, there are typically only 2–4 degrees of freedom (DOFs) of cloud signal. This implies a simplified cloud representation is sufficient for some applications which need accurate radiative transfer. Here we describe a single-footprint retrieval approach for clear and cloudy conditions, which uses the thermodynamic and cloud fields from numerical weather prediction (NWP) models as a first guess, together with a simple cloud-representation model coupled to a fast scattering radiative transfer algorithm (RTA). The NWP model thermodynamic and cloud profiles are first co-located to the observations, after which the N-level cloud profiles are converted to two slab clouds (TwoSlab; typically one for ice and one for water clouds). From these, one run of our fast cloud-representation model allows an improvement of the a priori cloud state by comparing the observed and model-simulated radiances in the thermal window channels. The retrieval yield is over 90 %, while the degrees of freedom correlate with the observed window channel brightness temperature (BT), which itself depends on the cloud optical depth. The cloud-representation and scattering package is benchmarked against radiances computed using a maximum random overlap (RMO) cloud scheme. All-sky infrared radiances measured by NASA's Atmospheric Infrared Sounder (AIRS) and NWP

  1. Exploring Multimodal Registers in and Across Organizations and Institutional Fields

    DEFF Research Database (Denmark)

    Meyer, Renate; Jancsary, Dennis; Höllerer, Markus

    In this article, we develop a methodology that enables a systematic analysis of the structural aspects of multimodal discourse from larger amounts of data. While existing research in visual organization studies has provided interesting insights into the content and meaning(s) of visual and multimodal ...... Corporate Social Responsibility (CSR), a complex and multivocal management idea that touches a variety of topics and incorporates multiple levels of audience engagement with regard to a substantial diversity of audiences.

  2. States characterized by the irreducible single-row representations of the U(3) ⊃ SO(3) and U(4) ⊃ D^(3/2)[SO(3)] chains of groups

    International Nuclear Information System (INIS)

    Dumitrescu, T.S.

    1977-01-01

    A new method is applied in order to obtain the irreducible single-row representations of the groups under study. For the U(3) ⊃ SO(3) case, an explicit realization is also constructed. The method has the advantage of being simpler than the previously used ones. (author)

  3. Multimodal human-machine interaction for service robots in home-care environments

    OpenAIRE

    Goetze, Stefan; Fischer, S.; Moritz, Niko; Appell, Jens-E.; Wallhoff, Frank

    2012-01-01

    This contribution focuses on multimodal interaction techniques for a mobile communication and assistance system on a robot platform. The system comprises acoustic, visual and haptic input modalities. Feedback is given to the user by a graphical user interface and a speech synthesis system. In this way, multimodal and natural communication with the robot system is possible.

  4. The Roles of Representations in Building Design

    DEFF Research Database (Denmark)

    Harty, Chris; Tryggestad, Kjell

    2012-01-01

    ...what the (minimum) spatial requirements should be to allow effective care of patients. The first representation is a three-dimensional augmented-reality model of a single room for a new hospital in the UK, using a CAVE (Cave Automatic Virtual Environment) in which the room is reproduced virtually at one-to-one scale and can be explored or navigated using head-tracker technology and a joystick controller. The second is a physical mock-up of a single room for a Danish hospital, where actual medical procedures are simulated using real equipment and real people. Drawing on Latour's concepts of matters of concern...

  5. Multimodal optical coherence tomography and fluorescence lifetime imaging with interleaved excitation sources for simultaneous endogenous and exogenous fluorescence.

    Science.gov (United States)

    Shrestha, Sebina; Serafino, Michael J; Rico-Jimenez, Jesus; Park, Jesung; Chen, Xi; Zhaorigetu, Siqin; Walton, Brian L; Jo, Javier A; Applegate, Brian E

    2016-09-01

    Multimodal imaging probes a variety of tissue properties in a single image acquisition by merging complementary imaging technologies. Exploiting synergies among the data, algorithms can be developed that lead to better tissue characterization than could be accomplished by the constituent imaging modalities taken alone. The combination of optical coherence tomography (OCT) with fluorescence lifetime imaging microscopy (FLIM) provides access to detailed tissue morphology and local biochemistry. The optical system described here merges 1310 nm swept-source OCT with time-domain FLIM having excitation at 355 and 532 nm. The pulses from the 355 and 532 nm lasers have been interleaved to enable simultaneous acquisition of endogenous and exogenous fluorescence signals, respectively. The multimodal imaging system was validated using tissue phantoms. Nonspecific tagging with Alexa Fluor 532 in a Watanabe rabbit aorta and active tagging of the LOX-1 receptor in human coronary artery demonstrate the capacity of the system for simultaneous acquisition of OCT, endogenous FLIM, and exogenous FLIM in tissues.

  6. Nonlinear High-Energy Pulse Propagation in Graded-Index Multimode Optical Fibers for Mode-Locked Fiber Lasers

    Science.gov (United States)

    2014-12-23

    ... power kW at nm in a C-GIMF segment in the lowest-order mode; this pulse can be obtained from a typical titanium-sapphire mode-locked laser. A much... single- and multicore double-clad and PCF lasers. He was a Senior Research Scientist at Corning Inc. from 2005 to 2008. He is currently an Assistant... Nonlinear High-Energy Pulse Propagation in Graded-Index Multimode Optical Fibers for Mode-Locked Fiber Lasers 5a. CONTRACT NUMBER 5b. GRANT NUMBER FA9550-12-1

  7. Drusen Characterization with Multimodal Imaging

    Science.gov (United States)

    Spaide, Richard F.; Curcio, Christine A.

    2010-01-01

    Summary Multimodal imaging findings and histological demonstration of soft drusen, cuticular drusen, and subretinal drusenoid deposits provided information used to develop a model explaining their imaging characteristics. Purpose To characterize the known appearance of cuticular drusen, subretinal drusenoid deposits (reticular pseudodrusen), and soft drusen as revealed by multimodal fundus imaging; to create an explanatory model that accounts for these observations. Methods Reported color, fluorescein angiographic, autofluorescence, and spectral domain optical coherence tomography (SD-OCT) images of patients with cuticular drusen, soft drusen, and subretinal drusenoid deposits were reviewed, as were actual images from affected eyes. Representative histological sections were examined. The geometry, location, and imaging characteristics of these lesions were evaluated. A hypothesis based on the Beer-Lambert Law of light absorption was generated to fit these observations. Results Cuticular drusen appear as numerous uniform round yellow-white punctate accumulations under the retinal pigment epithelium (RPE). Soft drusen are larger yellow-white dome-shaped mounds of deposit under the RPE. Subretinal drusenoid deposits are polymorphous light-grey interconnected accumulations above the RPE. Based on the model, both cuticular and soft drusen appear yellow due to the removal of shorter wavelength light by a double pass through the RPE. Subretinal drusenoid deposits, which are located on the RPE, are not subjected to short wavelength attenuation and therefore are more prominent when viewed with blue light. The location and morphology of extracellular material in relationship to the RPE, and associated changes to RPE morphology and pigmentation, appeared to be primary determinants of druse appearance in different imaging modalities. 
Conclusion Although cuticular drusen, subretinal drusenoid deposits, and soft drusen are composed of common components, they are distinguishable
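
    The proposed color model rests on the Beer-Lambert law: light transiting the RPE twice is attenuated as I/I0 = exp(-μ·2d), and short (blue) wavelengths are absorbed more strongly than long (yellow) ones. The sketch below only illustrates that double-pass asymmetry; the attenuation coefficients and RPE thickness are hypothetical placeholders, not values from the study.

```python
import math

def double_pass_transmission(mu_per_um, rpe_thickness_um=4.0):
    """Beer-Lambert transmission after a double pass through the RPE:
    I/I0 = exp(-mu * 2d).  Coefficients and thickness are illustrative."""
    return math.exp(-mu_per_um * 2.0 * rpe_thickness_um)

# Hypothetical coefficients: the RPE absorbs blue far more than yellow.
t_blue = double_pass_transmission(mu_per_um=0.30)    # short wavelength
t_yellow = double_pass_transmission(mu_per_um=0.05)  # long wavelength
```

    With these assumed numbers, far less blue than yellow light survives the double pass, which is why sub-RPE deposits (soft and cuticular drusen) look yellow while deposits sitting above the RPE remain prominent in blue light.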

  8. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2015-09-01

    Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, optimizing from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by generating its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support, and it is obtained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, the sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality have good performance in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of the variation of the demand and supply on the multi-modal transportation organization. Therefore, this method can be further promoted in practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. The Pareto-frontier-based sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case.
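
    The core notion of the abstract, the Pareto frontier over (cost, time), can be sketched with a simple non-dominated filter. The route names and numbers below are hypothetical, and the paper's actual solution method (the normalized normal constraint method in Lingo) is not reproduced here:

```python
def pareto_frontier(routes):
    """Return the non-dominated (cost, time) routes.

    A route dominates another if it is no worse in both objectives and
    strictly better in at least one.
    """
    frontier = []
    for name, cost, time in routes:
        dominated = any(
            c2 <= cost and t2 <= time and (c2 < cost or t2 < time)
            for _, c2, t2 in routes
        )
        if not dominated:
            frontier.append((name, cost, time))
    return frontier

# Hypothetical multimodal routes: (name, cost, transit time in hours).
routes = [
    ("rail-road", 120, 40),
    ("road-only", 150, 30),
    ("rail-water", 100, 60),
    ("road-water", 160, 55),   # dominated by rail-road
]
front = pareto_frontier(routes)
```

    The surviving routes are exactly the trade-off set an MTO would present to a customer: cheaper routes are slower, faster routes cost more.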

  9. Adhesion of multimode adhesives to enamel and dentin after one year of water storage.

    Science.gov (United States)

    Vermelho, Paulo Moreira; Reis, André Figueiredo; Ambrosano, Glaucia Maria Bovi; Giannini, Marcelo

    2017-06-01

    This study aimed to evaluate the ultramorphological characteristics of tooth-resin interfaces and the bond strength (BS) of multimode adhesive systems to enamel and dentin. Multimode adhesives (Scotchbond Universal (SBU) and All-Bond Universal) were tested in both self-etch and etch-and-rinse modes and compared to control groups (Optibond FL and Clearfil SE Bond (CSB)). Adhesives were applied to human molars and composite blocks were incrementally built up. Teeth were sectioned to obtain specimens for microtensile BS and TEM analysis. Specimens were tested after storage for either 24 h or 1 year. SEM analyses were performed to classify the failure pattern of beam specimens after BS testing. Etching increased the enamel BS of multimode adhesives; however, BS decreased after storage for 1 year. No significant differences in dentin BS were noted between multimode and control in either evaluation period. Storage for 1 year only reduced the dentin BS for SBU in self-etch mode. TEM analysis identified hybridization and interaction zones in dentin and enamel for all adhesives. Silver impregnation was detected on dentin-resin interfaces after storage of specimens for 1 year only with the SBU and CSB. Storage for 1 year reduced enamel BS when adhesives are applied on etched surface; however, BS of multimode adhesives did not differ from those of the control group. In dentin, no significant difference was noted between the multimode and control group adhesives, regardless of etching mode. In general, multimode adhesives showed similar behavior when compared to traditional adhesive techniques. Multimode adhesives are one-step self-etching adhesives that can also be used after enamel/dentin phosphoric acid etching, but each product may work better in specific conditions.

  10. Sensitivity-Bandwidth Limit in a Multimode Optoelectromechanical Transducer

    Science.gov (United States)

    Moaddel Haghighi, I.; Malossi, N.; Natali, R.; Di Giuseppe, G.; Vitali, D.

    2018-03-01

    An optoelectromechanical system formed by a nanomembrane capacitively coupled to an LC resonator and to an optical interferometer has recently been employed for the highly sensitive optical readout of rf signals [T. Bagci et al., Nature (London) 507, 81 (2013), 10.1038/nature13029]. We propose and experimentally demonstrate how the bandwidth of such a transducer can be increased by controlling the interference between two electromechanical interaction pathways of a two-mode mechanical system. With a proof-of-principle device operating at room temperature, we achieve a sensitivity of 300 nV/√Hz over a bandwidth of 15 kHz in the presence of radio-frequency noise, and an optimal shot-noise-limited sensitivity of 10 nV/√Hz over a bandwidth of 5 kHz. We discuss strategies for improving the performance of the device, showing that, for the same given sensitivity, a mechanical multimode transducer can achieve a bandwidth significantly larger than that of a single-mode one.

  11. Alternative modalities – A Review of Graphic Encounters: Comics and the Sponsorship of Multimodal Literacy

    Directory of Open Access Journals (Sweden)

    Aaron Scott Humphrey

    2014-05-01

    Full Text Available Research exploring the multimodal characteristics of comics has recently flourished, and Dale Jacobs has been one of the early prolific authors on this topic. Jacobs expands these ideas further in 'Graphic Encounters: Comics and the Sponsorship of Multimodal Literacy', a monograph which engages with theories of multimodality, but shifts its focus primarily to literacy sponsorship.

  12. Multimodal nonlinear microscope based on a compact fiber-format laser source

    Science.gov (United States)

    Crisafi, Francesco; Kumar, Vikas; Perri, Antonio; Marangoni, Marco; Cerullo, Giulio; Polli, Dario

    2018-01-01

    We present a multimodal nonlinear optical (NLO) laser-scanning microscope, based on a compact fiber-format excitation laser and integrating coherent anti-Stokes Raman scattering (CARS), stimulated Raman scattering (SRS) and two-photon-excitation fluorescence (TPEF) on a single platform. We demonstrate its capabilities in simultaneously acquiring CARS and SRS images of a blend of 6-μm poly(methyl methacrylate) beads and 3-μm polystyrene beads. We then apply it to visualize the cell walls and chloroplasts of an unprocessed fresh leaf of the aquatic plant Elodea via the SRS and TPEF modalities, respectively. The presented NLO microscope, developed in-house using off-the-shelf components, offers full accessibility to the optical path and ensures easy re-configurability and flexibility.

  13. Liquid level and temperature sensing by using dual-wavelength fiber laser based on multimode interferometer and FBG in parallel

    Science.gov (United States)

    Sun, Chunran; Dong, Yue; Wang, Muguang; Jian, Shuisheng

    2018-03-01

    The detection of liquid level and temperature based on a fiber ring cavity laser sensing configuration is presented and demonstrated experimentally. The sensing head contains a fiber Bragg grating (FBG) and a single-mode-cladding-less-single-mode multimode interferometer, which also function as the wavelength-selective components of the fiber laser. When the liquid level or temperature applied to the sensing head changes, the pass-band peaks of both the multimode interference (MMI) filter and the FBG filter vary, and the two output wavelengths of the laser shift correspondingly. In the experiment, the corresponding sensitivities of the liquid level for four different refractive indices (RI) in the depth range from 0 mm to 40 mm are obtained, and the sensitivity increases with the RI of the liquid being measured. The maximum sensitivity of the interferometer is 106.3 pm/mm at an RI of 1.391. For the temperature measurement, sensitivities of 10.3 pm/°C and 13.8 pm/°C are achieved over the temperature range from 0 °C to 90 °C, corresponding to the two lasing wavelengths selected by the MMI filter and the FBG, respectively. In addition, an average RI sensitivity of 155.77 pm/mm/RIU is also obtained in the RI range of 1.333-1.391.
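
    A dual-wavelength sensor like this one permits simultaneous recovery of level and temperature by the standard 2x2 sensitivity-matrix method. The sketch below uses the sensitivities quoted in the abstract but assumes the FBG is insensitive to liquid level, which the abstract does not state; it illustrates the discrimination technique, not the paper's own data processing:

```python
def solve_level_temperature(d_lambda_mmi_pm, d_lambda_fbg_pm):
    """Recover (level shift in mm, temperature shift in degC) from the
    two wavelength shifts via the 2x2 sensitivity-matrix inversion.

    Sensitivities follow the abstract (106.3 pm/mm and 10.3 pm/degC for
    the MMI peak; 13.8 pm/degC for the FBG); zero FBG level sensitivity
    is an assumption made here for illustration.
    """
    k11, k12 = 106.3, 10.3   # MMI: pm/mm, pm/degC
    k21, k22 = 0.0, 13.8     # FBG: pm/mm, pm/degC
    det = k11 * k22 - k12 * k21
    d_level = (k22 * d_lambda_mmi_pm - k12 * d_lambda_fbg_pm) / det
    d_temp = (-k21 * d_lambda_mmi_pm + k11 * d_lambda_fbg_pm) / det
    return d_level, d_temp

# Forward-simulate a 2 mm level rise at +5 degC, then invert it back.
dl_mmi = 106.3 * 2.0 + 10.3 * 5.0   # MMI shift in pm
dl_fbg = 13.8 * 5.0                 # FBG shift in pm
level, temp = solve_level_temperature(dl_mmi, dl_fbg)
```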

  14. Experimental Study on Bioluminescence Tomography with Multimodality Fusion

    Directory of Open Access Journals (Sweden)

    Yujie Lv

    2007-01-01

    Full Text Available To verify the influence of a priori information on the nonuniqueness problem of bioluminescence tomography (BLT), a multimodality-imaging-fusion-based BLT experiment is performed in a multiview noncontact detection mode, which incorporates the anatomical information obtained by the microCT scanner and the background optical properties based on diffuse reflectance measurements. In the reconstruction procedure, the utilization of adaptive finite element methods (FEMs) and an a priori permissible source region refines the reconstructed results and improves numerical robustness and efficiency. The comparison between the absence and employment of a priori information shows that multimodality imaging fusion is essential to quantitative BLT reconstruction.

  15. Multimodal medical information retrieval with unsupervised rank fusion.

    Science.gov (United States)

    Mourão, André; Martins, Flávio; Magalhães, João

    2015-01-01

    Modern medical information retrieval systems are paramount to manage the insurmountable quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in 2013 ImageCLEFMedical. Copyright © 2014 Elsevier Ltd. All rights reserved.
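
    The abstract mentions a novel unsupervised data-fusion algorithm without detailing it. For illustration, here is reciprocal rank fusion (RRF), a standard unsupervised scheme for merging ranked lists from different modalities; it is not necessarily the algorithm used by the cited system:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked result lists without any training data.

    Each document scores sum(1 / (k + rank)) over the lists in which it
    appears (rank is 1-based); k=60 is the commonly used constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical case-retrieval runs: text search and image search agree
# on "case7" but disagree further down the lists.
fused = reciprocal_rank_fusion([
    ["case7", "case2", "case9"],   # text modality
    ["case7", "case4", "case2"],   # image modality
])
```

    Documents ranked highly by several modalities rise to the top of the fused list, which is the behavior a multimodal case-based retrieval system needs.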

  16. Current and future multimodal learning analytics data challenges

    DEFF Research Database (Denmark)

    Spikol, Daniel; Prieto, Luis P.; Rodriguez-Triana, M.J.

    2017-01-01

    Multimodal Learning Analytics (MMLA) captures, integrates and analyzes learning traces from different sources in order to obtain a more holistic understanding of the learning process, wherever it happens. MMLA leverages the increasingly widespread availability of diverse sensors, high-frequency data collection technologies and sophisticated machine learning and artificial intelligence techniques. The aim of this workshop is twofold: first, to expose participants to, and develop, different multimodal datasets that reflect how MMLA can bring new insights and opportunities to investigate complex learning processes and environments; second, to collaboratively identify a set of grand challenges for further MMLA research, built upon the foundations of previous workshops on the topic.

  17. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  18. Multimodal Teacher Input and Science Learning in a Middle School Sheltered Classroom

    Science.gov (United States)

    Zhang, Ying

    2016-01-01

    This article reports the results of an ethnographic research about the multimodal science discourse in a sixth-grade sheltered classroom involving English Language Learners (ELLs) only. Drawing from the perspective of multimodality, this study examines how science learning is constructed in science lectures through multiple semiotic resources,…

  19. Protein Sub-Nuclear Localization Based on Effective Fusion Representations and Dimension Reduction Algorithm LDA.

    Science.gov (United States)

    Wang, Shunfang; Liu, Shuhui

    2015-12-19

    An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. The existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and the position-specific scoring matrix (PSSM), are insufficient to represent a protein sequence due to their single perspectives. Thus, this paper proposes two fusion feature representations, DipPSSM and PseAAPSSM, to integrate PSSM with DipC and PseAAC, respectively. When constructing each fusion representation, we introduce balance factors to weigh the importance of its components. The optimal values of the balance factors are sought by a genetic algorithm. Due to the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find their important low-dimensional structure, which is essential for classification and location prediction. The numerical experiments on two public datasets with a KNN classifier and cross-validation tests showed that, in terms of the common indexes of sensitivity, specificity, accuracy and MCC, the proposed fusion representations outperform the traditional representations in protein sub-nuclear localization, and the representation treated by LDA outperforms the untreated one.
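
    The balance-factor fusion described above can be sketched as a weighted concatenation of two feature vectors. The function name, toy vectors, and the fixed alpha are illustrative assumptions; in the paper the balance factors are tuned by a genetic algorithm and the fused vector is then reduced with LDA, both omitted here:

```python
def fuse_features(dipc, pssm, alpha):
    """Weighted concatenation of two feature vectors.

    alpha in (0, 1) weighs the DipC block against the PSSM block; the
    optimal alpha would normally come from a search (e.g. a GA).
    """
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must lie strictly between 0 and 1")
    return [alpha * x for x in dipc] + [(1.0 - alpha) * x for x in pssm]

# Toy DipC and PSSM-derived vectors with an assumed balance factor.
fused = fuse_features(dipc=[0.2, 0.8], pssm=[1.0, 0.5, 0.25], alpha=0.25)
```

    The fused vector keeps both perspectives on the sequence while letting the balance factor control their relative influence, which is the effect the genetic-algorithm search exploits.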

  20. Standard forms and entanglement engineering of multimode Gaussian states under local operations

    International Nuclear Information System (INIS)

    Serafini, Alessio; Adesso, Gerardo

    2007-01-01

    We investigate the action of local unitary operations on multimode (pure or mixed) Gaussian states and single out the minimal number of locally invariant parameters which completely characterize the covariance matrix of such states. For pure Gaussian states, central resources for continuous-variable quantum information, we investigate separately the parameter reduction due to the additional constraint of global purity, and the one following from the local-unitary freedom. Counting arguments and insights from the phase-space Schmidt decomposition, and in general from the framework of symplectic analysis, accompany our description of the standard form of pure n-mode Gaussian states. In particular, we clarify why only in pure states with n ≤ 3 modes can all the direct correlations between position and momentum operators be set to zero by local unitary operations. For any n, the emerging minimal set of parameters contains complete information about all forms of entanglement in the corresponding states. An efficient state-engineering scheme (able to encode direct correlations between position and momentum operators as well) is proposed to produce entangled multimode Gaussian resources, its number of optical elements matching the minimal number of locally invariant degrees of freedom of general pure n-mode Gaussian states. Finally, we demonstrate that so-called 'block-diagonal' Gaussian states, without direct correlations between position and momentum, are systematically less entangled, on average, than arbitrary pure Gaussian states.

  1. A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series.

    Science.gov (United States)

    Chambon, Stanislas; Galtier, Mathieu N; Arnal, Pierrick J; Wainrib, Gilles; Gramfort, Alexandre

    2018-04-01

    Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s of signal, based on the visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance, measured with balanced accuracy, is to use 6 EEG channels with 2 EOG channels (left and right) and 3 chin EMG channels. Exploiting 1 min of data before and after each data segment also offers the strongest improvement when a limited number of channels is available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.
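
    The evaluation metric quoted above, balanced accuracy, is simply the mean of the per-class recalls, which keeps rare stages such as REM from being swamped by the dominant wake/sleep classes. A minimal sketch of the metric itself (the toy labels are invented):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall: each class contributes equally regardless
    of how many samples it has."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        total = sum(1 for t in y_true if t == c)
        recalls.append(hits / total)
    return sum(recalls) / len(recalls)

# Imbalanced toy labels: "W" (wake) dominates, "R" (REM) is rare.
score = balanced_accuracy(
    y_true=["W", "W", "W", "W", "R", "R"],
    y_pred=["W", "W", "W", "W", "R", "W"],
)
```

    Here plain accuracy would be 5/6, but the missed REM epoch halves that class's recall, so the balanced score drops to 0.75, exactly the sensitivity to minority stages the metric is chosen for.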

  2. Interaction between visual and chemical cues in a Liolaemus lizard: a multimodal approach.

    Science.gov (United States)

    Vicente, Natalin S; Halloy, Monique

    2017-12-01

    Multimodal communication involves the use of signals and cues across two or more sensory modalities. The genus Liolaemus (Iguania: Liolaemidae) offers a great potential for studies on the ecology and evolution of multimodal communication, including visual and chemical signals. In this study, we analyzed the response of male and female Liolaemus pacha to chemical, visual and combined (multimodal) stimuli. Using cue-isolation tests, we registered the number of tongue flicks and headbob displays from exposure to signals in each modality. Number of tongue flicks was greater when a chemical stimulus was presented alone than in the presence of visual or multimodal stimuli. In contrast, headbob displays were fewer in number with visual and chemical stimuli alone, but significantly higher in number when combined. Female signallers triggered significantly more tongue flicks than male signallers, suggesting that chemical cues are involved in sexual recognition. We did not find an inhibition between chemical and visual cues. On the contrary, we observed a dominance of the chemical modality, because when presented with visual stimuli, lizards also responded with more tongue flicks than headbob displays. The total response produced by multimodal stimuli was similar to that of the chemical stimuli alone, possibly suggesting non-redundancy. We discuss whether the visual component of a multimodal signal could attract attention at a distance, increasing the effectiveness of transmission and reception of the information in chemical cues. Copyright © 2017 Elsevier GmbH. All rights reserved.

  3. Factorizations and physical representations

    International Nuclear Information System (INIS)

    Revzen, M; Khanna, F C; Mann, A; Zak, J

    2006-01-01

A Hilbert space in M dimensions is shown explicitly to accommodate representations that reflect the decomposition of M into prime numbers. Representations that exhibit the factorization of M into two relatively prime numbers are analysed: the kq representation (Zak J 1970 Phys. Today 23 51) and related representations termed q 1 q 2 representations (together with their conjugates), as well as a representation that exhibits the complete factorization of M. In this latter representation each quantum number varies in a subspace that is associated with one of the prime numbers that make up M.
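The factorized labelling can be illustrated numerically: for coprime M1 and M2 with M = M1·M2, the Chinese remainder theorem makes the map n ↦ (n mod M1, n mod M2) a bijection on the M basis labels, which is the arithmetic fact behind factoring the Hilbert space. The small factors below are invented for the sketch and are not the paper's notation.

```python
from math import gcd

# Relabel basis states |n>, n = 0..M-1, by the pair (n mod M1, n mod M2)
# for coprime M1 * M2 = M.  The Chinese remainder theorem guarantees the
# relabelling is one-to-one.
M1, M2 = 3, 5            # coprime factors, M = 15 (illustrative values)
assert gcd(M1, M2) == 1
M = M1 * M2

def to_pair(n):
    return (n % M1, n % M2)

def from_pair(n1, n2):
    # reconstruct n by search (fine for illustration; CRT gives a closed form)
    for n in range(M):
        if to_pair(n) == (n1, n2):
            return n

pairs = [to_pair(n) for n in range(M)]
assert len(set(pairs)) == M   # bijection: every (n1, n2) occurs exactly once
```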

  4. Selective injection locking of a multi-mode semiconductor laser to a multi-frequency reference beam

    Science.gov (United States)

    Pramod, Mysore Srinivas; Yang, Tao; Pandey, Kanhaiya; Giudici, Massimo; Wilkowski, David

    2014-07-01

    Injection locking is a well known and commonly used method for coherent light amplification. Usually injection locking is obtained on a single-mode laser injected by a single-frequency seeding beam. In this work we show that selective injection locking of a single-frequency may also be achieved on a multi-mode semiconductor laser injected by a multi-frequency seeding beam, if the slave laser provides sufficient frequency filtering. This selective injection locking condition depends critically on the frequency detuning between the free-running slave emission frequency and each injected frequency component. Stable selective injection locking to a set of three seeding components separated by 1.2 GHz is obtained. This system provides an amplification up to 37 dB of each component. This result suggests that, using distinct slave lasers for each frequency line, a set of mutually coherent high-power radiation modes can be tuned in the GHz frequency domain.

  5. Response of brown anoles Anolis sagrei to multimodal signals from a native and novel predator

    Directory of Open Access Journals (Sweden)

    Omar L. ELMASRI, Marcus S. MORENO, Courtney A. NEUMANN, Daniel T. BLUMSTEIN

    2012-06-01

Full Text Available Multiple studies have focused on the importance of single modalities (visual, auditory, olfactory) in eliciting anti-predator behavior; however, multiple channels are often engaged simultaneously. While examining responses to multiple cues can potentially reveal more complex behavioral responses, little is known about how multimodal processing evolves. By contrasting responses to familiar and novel predators, insights can be gained into the evolution of multimodal responses. We studied brown anoles’ (Anolis sagrei) response to acoustic and visual predatory cues of a common potential predator, the great-tailed grackle Quiscalus mexicanus, and of the American kestrel Falco sparverius, a species found in other populations but not present in our study population. We observed anole behavior before and after a stimulus and quantified rates of looking, display, and locomotion. Anoles increased their rate of locomotion in response to grackle models, an effect modulated by grackle vocalizations. No such response or modulation was seen when anoles were presented with kestrel stimuli. This suggests that the degree of sophistication of anole response to predators is experience dependent and that relaxed selection can result in reduced anti-predator response following loss of predators [Current Zoology 58 (6): 791–796, 2012].

  6. Super-Gaussian, super-diffusive transport of multi-mode active matter

    OpenAIRE

    Hahn, Seungsoo; Song, Sanggeun; Kim, Dae Hyun; Yang, Gil-Suk; Lee, Kang Taek; Sung, Jaeyoung

    2017-01-01

    Living cells exhibit multi-mode transport that switches between an active, self-propelled motion and a seemingly passive, random motion. Cellular decision-making over transport mode switching is a stochastic process that depends on the dynamics of the intracellular chemical network regulating the cell migration process. Here, we propose a theory and an exactly solvable model of multi-mode active matter. Our exact model study shows that the reversible transition between a passive mode and an a...

  7. Systemic multimodal approach to speech therapy treatment in autistic children.

    Science.gov (United States)

    Tamas, Daniela; Marković, Slavica; Milankov, Vesela

    2013-01-01

The conditions in which speech therapy treatment is applied to autistic children are often not in accordance with the characteristic ways in which people with autism think and learn. A systemic multimodal approach motivates autistic people to develop their language and speech skills through procedures that let them relive personal experiences tied to content presented in their natural social environment. This research was aimed at evaluating the efficiency of speech treatment based on the systemic multimodal approach to working with autistic children. The study sample consisted of 34 children, aged from 8 to 16 years, diagnosed with different autistic disorders, whose results showed a moderate to severe clinical picture of autism on the Childhood Autism Rating Scale. The instruments applied for the evaluation of ability were the Childhood Autism Rating Scale and the Ganzberg II test. The study subjects were divided into two groups according to the type of treatment: children covered by continuing treatment with the systemic multimodal approach, and children covered by classical speech treatment. The systemic multimodal approach was shown to stimulate communication, socialization, self-care and work in autistic children, and the progress achieved in these areas of functioning was retained over the long term. By applying the systemic multimodal approach and comparing the children's achievements on tests applied before, during and after its application, it was concluded that a certain improvement in functioning was achieved within the diagnosed category. The results point to a possible direction for the creation of new methods, plans and programs for working with autistic children based on empirical and interactive learning.

  8. New two-port multimode interference reflectors

    NARCIS (Netherlands)

    Kleijn, E.; Smit, M.K.; Wale, M.J.; Leijtens, X.J.M.

    2012-01-01

    Multi-mode interference reflectors (MIRs) are versatile components. Two new MIR designs with a fixed 50/50 reflection to transmission ratio are introduced. Measurements on these new devices and on devices similar to those in [1] are presented and compared to the design values. Measured losses are

  9. Distinguishing Representations as Origin and Representations as Input: Roles for Individual Cells

    Directory of Open Access Journals (Sweden)

    Jonathan C.W. Edwards

    2016-09-01

    Full Text Available It is widely perceived that there is a problem in giving a naturalistic account of mental representation that deals adequately with meaning, interpretation or significance (semantic content. It is suggested here that this problem may arise partly from the conflation of two vernacular senses of representation: representation-as-origin and representation-as-input. The flash of a neon sign may in one sense represent a popular drink, but to function as representation it must provide an input to a ‘consumer’ in the street. The arguments presented draw on two principles – the neuron doctrine and the need for a venue for ‘presentation’ or ‘reception’ of a representation at a specified site, consistent with the locality principle. It is also argued that domains of representation cannot be defined by signal traffic, since they can be expected to include ‘null’ elements based on non-firing cells. In this analysis, mental representations-as-origin are distributed patterns of cell firing. Each firing cell is given semantic value in its own right - some form of atomic propositional significance – since different axonal branches may contribute to integration with different populations of signals at different downstream sites. Representations-as-input are patterns of local co-arrival of signals in the form of synaptic potentials in dendrites. Meaning then draws on the relationships between active and null inputs, forming ‘scenarios’ comprising a molecular combination of ‘premises’ from which a new output with atomic propositional significance is generated. In both types of representation, meaning, interpretation or significance pivots on events in an individual cell. (This analysis only applies to ‘occurrent’ representations based on current neural activity. The concept of representations-as-input emphasises the need for a ‘consumer’ of a representation and the dependence of meaning on the co-relationships involved in an

  10. Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods.

    Science.gov (United States)

    Qu, Kaiyang; Han, Ke; Wu, Song; Wang, Guohua; Wei, Leyi

    2017-09-22

DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The main current prediction approach is based on machine learning, and its accuracy depends largely on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance the classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely, K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained by combining the three feature extraction methods show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced one. The feature vectors that combine SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over single features.
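As an illustration of one of the three feature families named above, a minimal K-Skip-N-Grams extractor (here N = 2) can be written directly. The toy sequence and the exact counting convention are assumptions for this sketch, not the paper's implementation.

```python
from collections import Counter

def k_skip_bigrams(seq, k):
    """Count residue pairs (a, b) separated by at most k skipped residues.

    A minimal sketch of the K-Skip-N-Grams idea for N = 2: for each position
    i, pair seq[i] with seq[i + gap] for gap = 1 .. k + 1.
    """
    counts = Counter()
    for gap in range(1, k + 2):
        for i in range(len(seq) - gap):
            counts[(seq[i], seq[i + gap])] += 1
    return counts

# Toy protein fragment (invented).  ('L', 'A') occurs at gap 1 and at gap 2,
# so it is counted twice; ('A', 'A') occurs once at gap 1.
features = k_skip_bigrams("MKVLAA", k=1)
```

The resulting counts, normalised over all residue pairs, would form one block of the mixed feature vector fed to the SVM.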

  11. Identification of DNA-Binding Proteins Using Mixed Feature Representation Methods

    Directory of Open Access Journals (Sweden)

    Kaiyang Qu

    2017-09-01

Full Text Available DNA-binding proteins play vital roles in cellular processes, such as DNA packaging, replication, transcription, regulation, and other DNA-associated activities. The main current prediction approach is based on machine learning, and its accuracy depends largely on the feature extraction method. Therefore, using an efficient feature representation method is important to enhance the classification accuracy. However, existing feature representation methods cannot efficiently distinguish DNA-binding proteins from non-DNA-binding proteins. In this paper, a multi-feature representation method, which combines three feature representation methods, namely, K-Skip-N-Grams, information theory, and sequential and structural features (SSF), is used to represent the protein sequences and improve feature representation ability. In addition, the classifier is a support vector machine. The mixed-feature representation method is evaluated using 10-fold cross-validation and a test set. Feature vectors obtained by combining the three feature extraction methods show the best performance in 10-fold cross-validation, both without dimensionality reduction and with dimensionality reduction by max-relevance-max-distance. Moreover, the reduced mixed-feature method performs better than the non-reduced one. The feature vectors that combine SSF and K-Skip-N-Grams show the best performance on the test set. Among these methods, mixed features exhibit superiority over single features.

  12. Influence of Blood Contamination During Multimode Adhesive ...

    African Journals Online (AJOL)

    2018-01-30

Jan 30, 2018 ... The study evaluated the microtensile bond strength (μTBS) of multimode adhesives to dentin when using the self-etch approach. The failure mode was determined using an optical ...

  13. Multimodal 2D Brain Computer Interface.

    Science.gov (United States)

    Almajidy, Rand K; Boudria, Yacine; Hofmann, Ulrich G; Besio, Walter; Mankodiya, Kunal

    2015-08-01

In this work we used multimodal, non-invasive brain signal recording systems, namely near-infrared spectroscopy (NIRS), disc electrode electroencephalography (EEG), and tripolar concentric ring electrode (TCRE) electroencephalography (tEEG). Seven healthy subjects participated in our experiments to control a 2-D brain computer interface (BCI). Four motor imagery tasks were performed: imagined motion of the left hand, the right hand, both hands, and both feet. The signal slope (SS) of the change in oxygenated hemoglobin concentration measured by NIRS was used for feature extraction, as was the power spectral density (PSD) of both EEG and tEEG in the 8-30 Hz frequency band. Linear discriminant analysis (LDA) was used to classify different combinations of the aforementioned features. The highest classification accuracy (85.2%) was achieved by using features from all three brain signal recording modules. The improvement in classification accuracy was highly significant (p = 0.0033) when using the multimodal signal features as compared to pure EEG features.
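The two feature types named above, signal slope for NIRS and 8-30 Hz band power for EEG/tEEG, can be sketched on synthetic traces. The sampling rate and signals below are invented for illustration; the paper's exact estimators may differ.

```python
import numpy as np

fs = 256                              # sampling rate in Hz (assumed)
t = np.arange(fs * 2) / fs            # two seconds of data

# Feature 1: signal slope (SS) of a slowly drifting HbO-like NIRS trace,
# estimated by a least-squares line fit.
nirs = 0.8 * t + 0.05 * np.sin(2 * np.pi * 0.5 * t)
slope = np.polyfit(t, nirs, 1)[0]

# Feature 2: EEG band power in 8-30 Hz, read off the periodogram.
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 12 * t) + 0.3 * rng.normal(size=t.size)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(eeg)) ** 2 / t.size
band = (freqs >= 8) & (freqs <= 30)
band_power = psd[band].sum()
out_power = psd[~band].sum()          # power outside the band, for comparison

feature_vector = np.array([slope, band_power])   # input to a classifier (e.g. LDA)
```

With a 12 Hz rhythm in the synthetic EEG, most spectral power falls inside the 8-30 Hz band, which is exactly what this feature is meant to capture.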

  14. Multimodal interaction with W3C standards toward natural user interfaces to everything

    CERN Document Server

    2017-01-01

This book presents new standards for multimodal interaction published by the W3C and other standards bodies in straightforward and accessible language, while also illustrating the standards in operation through case studies and chapters on innovative implementations. The book illustrates how, as smart technology becomes ubiquitous and appears in more and more different shapes and sizes, vendor-specific approaches to multimodal interaction become impractical, motivating the need for standards. The book covers standards for voice, emotion, natural language understanding, dialog, and multimodal architectures, and describes them in a practical manner, making them accessible to developers, students, and researchers. It is a comprehensive resource that explains the W3C standards for multimodal interaction in a clear and straightforward way, and includes case studies of the use of the standards on a wide variety of devices, including mobile devices, tablets, wearables and robots, in applications such as assisted livi...

  15. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    Directory of Open Access Journals (Sweden)

    Pielot Rainer

    2010-01-01

Full Text Available Abstract Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient alternative to conventional hardware for solving computational problems in image processing and bioinformatics.

  16. Gestures and multimodal input

    OpenAIRE

    Keates, Simeon; Robinson, Peter

    1999-01-01

    For users with motion impairments, the standard keyboard and mouse arrangement for computer access often presents problems. Other approaches have to be adopted to overcome this. In this paper, we will describe the development of a prototype multimodal input system based on two gestural input channels. Results from extensive user trials of this system are presented. These trials showed that the physical and cognitive loads on the user can quickly become excessive and detrimental to the interac...

  17. MOBILTEL - Mobile Multimodal Telecommunications dialogue system based on VoIP telephony

    Directory of Open Access Journals (Sweden)

    Anton Čižmár

    2009-10-01

Full Text Available In this paper the MobilTel project is presented. Communication itself is becoming a multimodal interactive process. The MobilTel project provides research and development activities in the area of multimodal interfaces. The result is a functional architecture for a mobile multimodal telecommunication system running on a handheld device. The MobilTel communicator is a multimodal Slovak speech and graphical interface with an integrated VoIP client. Other possible modalities are pen (touch screen) interaction and keyboard, together with a display on which the information is presented in a more user-friendly way (icons, emoticons, etc.) and which provides hyperlinks and scrolling menus. We describe the method of interaction between a mobile terminal (PDA) and the MobilTel multimodal PC communicator over a VoIP WLAN connection based on the SIP protocol. We also present graphical examples of services that enable users to obtain information about the weather or about train connections between two stations.

  18. Marveling at "The Man Called Nova": Comics as Sponsors of Multimodal Literacy

    Science.gov (United States)

    Jacobs, Dale

    2007-01-01

    This essay theorizes the ways in which comics, and Marvel Comics in particular, acted as sponsors of multimodal literacy for the author. In doing so, the essay demonstrates the possibilities that exist in examining comics more closely and in thinking about how literacy sponsorship happens in multimodal texts. (Contains 1 figure and 13 notes.)

  19. The Combinatorial Multi-Mode Resource Constrained Multi-Project Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Denis Pinha

    2016-11-01

Full Text Available This paper presents the formulation and solution of the combinatorial multi-mode resource-constrained multi-project scheduling problem. The focus of the proposed method is not on finding a single optimal solution, but on presenting multiple feasible solutions, with cost and duration information, to the project manager. The motivation for developing such an approach is due in part to practical situations where the definition of optimal changes on a regular basis. The proposed approach empowers the project manager to determine what is optimal, on a given day, under the current constraints, such as changed priorities or a lack of skilled workers. The proposed method uses a simulation approach to determine feasible solutions under the current constraints. Resources can be non-consumable, consumable, or doubly constrained. The paper also presents a real-life case study dealing with the scheduling of ship repair activities.
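The core simulation idea, sampling mode assignments and keeping those that respect the resource limits, can be sketched as follows. The task data, worker limit, and serial-schedule assumption are all invented for illustration and are not taken from the paper.

```python
import random

# Each task has several execution modes (duration in days, cost, workers
# needed).  Sample mode assignments, keep those that respect a renewable
# worker limit, and record the (duration, cost) of each feasible schedule.
tasks = {
    "hull_repair":  [(5, 10, 3), (3, 16, 5)],   # (days, cost, workers)
    "engine_check": [(4, 8, 2), (2, 14, 4)],
}
WORKER_LIMIT = 5          # renewable resource cap (illustrative)

random.seed(0)
feasible = set()
for _ in range(50):
    choice = {t: random.choice(modes) for t, modes in tasks.items()}
    if all(workers <= WORKER_LIMIT for _, _, workers in choice.values()):
        duration = sum(d for d, _, _ in choice.values())   # serial schedule
        cost = sum(c for _, c, _ in choice.values())
        feasible.add((duration, cost))
```

The manager then picks among the feasible (duration, cost) trade-offs, which matches the paper's aim of presenting alternatives rather than a single "optimum".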

  20. Numerical modelling of multimode fibre-optic communication lines

    Energy Technology Data Exchange (ETDEWEB)

    Sidelnikov, O S; Fedoruk, M P [Novosibirsk State University, Novosibirsk (Russian Federation); Sygletos, S; Ferreira, F [Aston University, England, Birmingham, B4 7ET (United Kingdom)

    2016-01-31

    The results of numerical modelling of nonlinear propagation of an optical signal in multimode fibres with a small differential group delay are presented. It is found that the dependence of the error vector magnitude (EVM) on the differential group delay can be reduced by increasing the number of ADC samples per symbol in the numerical implementation of the differential group delay compensation algorithm in the receiver. The possibility of using multimode fibres with a small differential group delay for data transmission in modern digital communication systems is demonstrated. It is shown that with increasing number of modes the strong coupling regime provides a lower EVM level than the weak coupling one. (fibre-optic communication lines)
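The error vector magnitude used here as the quality metric has a standard definition: the RMS error between received and reference symbols, normalised by the RMS reference power. A small sketch on a synthetic noisy QPSK constellation follows; the noise level and constellation are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Unit-power QPSK reference constellation.
ref = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = rng.choice(ref, size=10_000)

# Received symbols: transmitted symbols plus complex Gaussian noise whose
# total variance is 0.01, so the expected EVM is about 10%.
noise = 0.1 * (rng.normal(size=tx.size) + 1j * rng.normal(size=tx.size)) / np.sqrt(2)
rx = tx + noise

# EVM: RMS error vector normalised by RMS reference power.
evm = np.sqrt(np.mean(np.abs(rx - tx) ** 2) / np.mean(np.abs(tx) ** 2))
evm_percent = 100 * evm
```

In the paper's setting the error vector is taken against the equalised received symbols after differential-group-delay compensation; the normalisation is the same.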

  1. THE USE OF CORELDRAW PROGRAM FOR THE REPRESENTATION OF WEFT KNITTED STRUCTURES

    Directory of Open Access Journals (Sweden)

    INDRIE Liliana

    2015-05-01

Full Text Available The representation of weft knitted fabrics covers a wide range of methods, which may vary from country to country or may be similar, being used in identical or slightly modified form. In Romania, four methods are currently used to represent weft knitted structures, namely: structural or analytical representation, representation using knitting notations, symbolic representation of the section of stitch courses, and representation of the drawing design. The generalization of knitted fabric design using CAD systems has driven the development of design software, including 2D representations that can be used for any type of machine. The 2D representation of stitches solves the modeling problem and can be produced using various computer graphics programs (CorelDRAW, AutoCAD, etc.). With the help of the graphic editor CorelDRAW it is possible to make a graphical representation of the structure of any kind of weft knitted fabric regardless of its complexity, starting from the simplest ones, such as knits with basic weaves (single jersey, rib fabric, links-links patterns), to the most complex structures, such as knitted fabrics with different evolutionary changes of normal stitches (tuck loop knits, missed stitch knits, racked stitch knits, etc.) or those with combined designs (knitted jacquard, intarsia knits, etc.). This paper presents the possibility of using the CorelDRAW application for representing knitted structures.

  2. Simultaneous in vivo imaging of melanin and lipofuscin in the retina with multimodal photoacoustic ophthalmoscopy

    Science.gov (United States)

    Zhang, Xiangyang; Zhang, Hao F.; Zhou, Lixiang; Jiao, Shuliang

    2012-02-01

We combined photoacoustic ophthalmoscopy (PAOM) with autofluorescence imaging for simultaneous in vivo imaging of dual molecular contrasts in the retina using a single light source. The dual molecular contrasts come from melanin and lipofuscin in the retinal pigment epithelium (RPE). Melanin and lipofuscin are two types of pigments believed to play opposite roles (protective vs. exacerbating) in the RPE during aging. We successfully imaged the retinas of pigmented and albino rats at different ages. The experimental results showed that the multimodal PAOM system can be a potentially powerful tool in the study of age-related degenerative retinal diseases.

  3. Experimental evaluation of user performance in a pursuit tracking task with multimodal feedback

    Directory of Open Access Journals (Sweden)

    Obrenović Željko

    2004-01-01

Full Text Available In this paper we describe the results of an experimental evaluation of user performance in a pursuit-tracking task with multimodal feedback. Our experimental results indicate that audio can significantly improve the accuracy of pursuit tracking. Experiments with 19 participants have shown that the addition of acoustic modalities reduces the error during pursuit tracking by up to 19%. Moreover, the experiments indicated the existence of perceptual boundaries of multimodal HCI for different scene complexities and target speeds. We have also shown that the most appealing paradigms are not the most effective ones, which necessitates a careful quantitative analysis of proposed multimodal HCI paradigms.

  4. An Efficient Quality-Related Fault Diagnosis Method for Real-Time Multimode Industrial Process

    Directory of Open Access Journals (Sweden)

    Kaixiang Peng

    2017-01-01

Full Text Available Focusing on quality-related performance monitoring of complex industrial processes, a novel multimode process monitoring method is proposed in this paper. Firstly, principal component space clustering is implemented under the guidance of quality variables. Through extraction of model tags, clustering information for the original training data can be acquired. Secondly, in line with the multimode characteristics of process data, a monitoring model that integrates a Gaussian mixture model with total projection to latent structures is built from the covariance description form. The multimode total projection to latent structures (MTPLS) model is the foundation for solving the quality-related monitoring problem for multimode processes. Then, a comprehensive statistical index is defined, based on the posterior probability of each monitored sample belonging to each Gaussian component in the Bayesian theory, and a combined index is constructed for process monitoring. Finally, motivated by the application of the traditional contribution plot in fault diagnosis, a gradient contribution rate is applied to analyze the variation of variable contribution rates along samples. Our method can ensure online fault monitoring and diagnosis for multimode processes. The performance of the whole proposed scheme is verified on a real industrial hot strip mill process (HSMP) and compared with some existing methods.
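The posterior-weighted statistic can be illustrated with two hand-set Gaussian components: each operating mode contributes its Mahalanobis T² weighted by the Bayesian posterior probability that the sample belongs to it. All parameters below are invented for the sketch and do not come from the paper.

```python
import numpy as np

# Two Gaussian components standing in for two fitted operating modes.
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), 2 * np.eye(2)]
weights = [0.5, 0.5]

def gauss_pdf(x, mu, cov):
    # 2-D Gaussian density
    d = x - mu
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

def combined_index(x):
    # posterior probability of each mode given the sample (Bayes' rule) ...
    like = np.array([w * gauss_pdf(x, m, c)
                     for w, m, c in zip(weights, means, covs)])
    post = like / like.sum()
    # ... weighting each mode's Mahalanobis T^2 statistic
    t2 = np.array([(x - m) @ np.linalg.inv(c) @ (x - m)
                   for m, c in zip(means, covs)])
    return post @ t2

normal_sample = np.array([0.2, -0.1])     # close to mode 1: small index
faulty_sample = np.array([2.5, 2.5])      # between modes: large in every mode
```

A sample far from all modes scores high regardless of which mode "claims" it, which is what makes the combined index usable across operating modes.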

  5. Empowering Prospective Teachers to Become Active Sense-Makers: Multimodal Modeling of the Seasons

    Science.gov (United States)

    Kim, Mi Song

    2015-10-01

Situating science concepts in concrete and authentic contexts, using information and communications technologies, including multimodal modeling tools, is important for promoting the development of higher-order thinking skills in learners. However, teachers often struggle to integrate emergent multimodal models into a technology-rich informal learning environment. Our design-based research co-designs and develops engaging, immersive, and interactive informal learning activities called "Embodied Modeling-Mediated Activities" (EMMA) to support not only Singaporean learners' deep learning of astronomy but also the capacity of teachers. As part of the research on EMMA, this case study describes two prospective teachers' co-design processes involving multimodal models for teaching and learning the concept of the seasons in a technology-rich informal learning setting. Our study uncovers four prominent themes emerging from our data concerning the contextualized nature of learning and teaching involving multimodal models in informal learning contexts: (1) promoting communication and emerging questions, (2) offering affordances through limitations, (3) explaining one concept involving multiple concepts, and (4) integrating teaching and learning experiences. This study has implications for the development of a pedagogical framework for teaching and learning in technology-enhanced learning environments—that is, empowering teachers to become active sense-makers using multimodal models.

  6. Responsive Multimodal Transportation Management Strategies And IVHS

    Science.gov (United States)

    1995-02-01

The purpose of this study was to investigate new and innovative ways to incorporate IVHS technologies into multimodal transportation management strategies. Much of the IVHS research done to date has addressed the modes individually. This project focu...

  7. Damage classification of pipelines under water flow operation using multi-mode actuated sensing technology

    International Nuclear Information System (INIS)

    Lee, Changgil; Park, Seunghee

    2011-01-01

    In a structure, several types of damage can occur, ranging from micro-cracking to corrosion or loose bolts. This makes identifying the damage difficult with a single mode of sensing. Therefore, a multi-mode actuated sensing system is proposed based on a self-sensing circuit using a piezoelectric sensor. In self-sensing-based multi-mode actuated sensing, one mode provides a wide frequency-band structural response from the self-sensed impedance measurement and the other mode provides a specific frequency-induced structural wavelet response from the self-sensed guided wave measurement. In this experimental study, a pipeline system under water flow operation was examined to verify the effectiveness and robustness of the proposed structural health monitoring approach. Different types of structural damage were inflicted artificially on the pipeline system. To classify the multiple types of structural damage, supervised learning-based statistical pattern recognition was implemented by composing a three-dimensional space using the damage indices extracted from the impedance and guided wave features as well as temperature variations. For a more systematic damage classification, several control parameters were optimized to determine an optimal decision boundary for the supervised learning-based pattern recognition. Further research issues are also discussed for real-world implementations of the proposed approach
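The supervised classification step in the three-dimensional feature space (impedance index, guided-wave index, temperature) can be sketched with a nearest-centroid rule. The feature values and the centroid-based decision boundary below are illustrative stand-ins for the paper's trained classifier, not its actual data.

```python
import numpy as np

# Invented 3-D feature vectors (impedance damage index, guided-wave damage
# index, temperature in deg C) for three structural conditions.
train = {
    "intact":     np.array([[0.1, 0.1, 20.0], [0.2, 0.1, 21.0]]),
    "crack":      np.array([[0.8, 0.3, 20.5], [0.7, 0.4, 21.5]]),
    "loose_bolt": np.array([[0.3, 0.9, 20.0], [0.2, 0.8, 22.0]]),
}
centroids = {label: x.mean(axis=0) for label, x in train.items()}

def classify(sample):
    # nearest-centroid rule: a minimal stand-in for the learned decision
    # boundary of the supervised pattern recognition step
    return min(centroids, key=lambda lb: np.linalg.norm(sample - centroids[lb]))

label = classify(np.array([0.75, 0.35, 21.0]))
```

In practice the features would be scaled before computing distances, since the temperature axis otherwise dominates the Euclidean metric.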

  8. Laser injury and in vivo multimodal imaging using a mouse model

    Science.gov (United States)

    Pocock, Ginger M.; Boretsky, Adam; Gupta, Praveena; Oliver, Jeff W.; Motamedi, Massoud

    2011-03-01

Balb/c wild type mice were used to perform in vivo experiments of laser-induced thermal damage to the retina. A Heidelberg Spectralis HRA confocal scanning laser ophthalmoscope with a spectral domain optical coherence tomographer was used to obtain fundus and cross-sectional images of laser-induced injury in the retina. Sub-threshold, threshold, and supra-threshold lesions were observed using optical coherence tomography (OCT), infrared reflectance, red-free reflectance, fluorescence angiography, and autofluorescence imaging modalities at different time points post-exposure. Lesions observed using all imaging modalities, except autofluorescence, were not visible immediately after exposure but became visible within an hour and grew in size over a 24-hour period. There was a decrease in fundus autofluorescence at exposure sites immediately following exposure that developed into hyper-fluorescence 24-48 hours later. OCT images revealed threshold damage that was localized to the RPE but extended into the neural retina over a 24-hour period. Volumetric representations of the mouse retina were created to visualize the extent of damage within the retina over a 24-hour period. Multimodal imaging provides complementary information regarding damage mechanisms that may be used to quantify the extent of the damage as well as the effectiveness of treatments without need for histology.

  9. Interactive natural language acquisition in a multi-modal recurrent neural architecture

    Science.gov (United States)

    Heinrich, Stefan; Wermter, Stefan

    2018-01-01

For the complex human brain that enables us to communicate in natural language, we have gained a good understanding of the principles underlying language acquisition and processing, knowledge about sociocultural conditions, and insights into activity patterns in the brain. However, we do not yet understand the behavioural and mechanistic characteristics of natural language acquisition, or how mechanisms in the brain allow language to be acquired and processed. Bridging the insights from behavioural psychology and neuroscience, the goal of this paper is to contribute a computational understanding of the characteristics that favour language acquisition. Accordingly, we provide concepts and refinements in cognitive modelling regarding principles and mechanisms in the brain and propose a neurocognitively plausible model for embodied language acquisition from real-world interaction of a humanoid robot with its environment. In particular, the architecture consists of a continuous-time recurrent neural network in which parts have different leakage characteristics and thus operate on multiple timescales for every modality, with the higher-level nodes of all modalities associated into cell assemblies. The model is capable of learning language production grounded in both temporal dynamic somatosensation and vision, and features hierarchical concept abstraction, concept decomposition, multi-modal integration, and self-organisation of latent representations.

  10. The future of multimodal corpora / O futuro dos corpora multimodais

    Directory of Open Access Journals (Sweden)

    Dawn Knight

    2011-01-01

    Full Text Available This paper takes stock of the current state-of-the-art in multimodal corpus linguistics, and proposes some projections of future developments in this field. It provides a critical overview of key multimodal corpora that have been constructed over the past decade and presents a wish-list of future technological and methodological advancements that may help to increase the availability, utility and functionality of such corpora for linguistic research.

  11. Multi-representation based on scientific investigation for enhancing students’ representation skills

    Science.gov (United States)

    Siswanto, J.; Susantini, E.; Jatmiko, B.

    2018-03-01

This research aims to implement physics learning with multi-representation based on scientific investigation to enhance students' representation skills, especially on the magnetic field subject. The research design is one-group pretest-posttest. The research was conducted in the department of mathematics education, Universitas PGRI Semarang; the sample is the students of class 2F who take basic physics courses. The data were obtained by a representation skills test and documentation of multi-representation worksheets. The results show a normalized gain of .64, which indicates medium improvement. A t-test (α = .05) gives p = .001. This learning significantly improves students' representation skills.
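The gain analysis in record 11 follows the standard normalized-gain formula g = (post − pre) / (max − pre), with g < .3 read as low, .3–.7 as medium, and > .7 as high improvement. A minimal sketch (the scores below are invented for illustration; the paper reports only the aggregate g = .64):

```python
# Normalized (Hake) gain for pre/post test scores on a 0..max scale.
# Scores here are hypothetical; only the formula comes from the record.

def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """g = (post - pre) / (max - pre) for a single student."""
    return (post - pre) / (max_score - pre)

def mean_gain(pre_scores, post_scores, max_score: float = 100.0) -> float:
    """Average the per-student normalized gains over a class."""
    gains = [normalized_gain(p, q, max_score)
             for p, q in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)
```

For example, a student improving from 50 to 85 out of 100 has g = 35/50 = .7, at the top of the medium band.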

  12. Representation in Memory.

    Science.gov (United States)

    Rumelhart, David E.; Norman, Donald A.

    This paper reviews work on the representation of knowledge from within psychology and artificial intelligence. The work covers the nature of representation, the distinction between the represented world and the representing world, and significant issues concerned with propositional, analogical, and superpositional representations. Specific topics…

  13. Coherent storage of temporally multimode light using a spin-wave atomic frequency comb memory

    International Nuclear Information System (INIS)

    Gündoğan, M; Mazzera, M; Ledingham, P M; Cristiani, M; De Riedmatten, H

    2013-01-01

We report on the coherent and multi-temporal mode storage of light using the full atomic frequency comb memory scheme. The scheme involves the transfer of optical atomic excitations in Pr³⁺:Y₂SiO₅ to spin waves in hyperfine levels using strong single-frequency transfer pulses. Using this scheme, a total of five temporal modes are stored and recalled on-demand from the memory. The coherence of the storage and retrieval is characterized using a time-bin interference measurement resulting in visibilities higher than 80%, independent of the storage time. This coherent and multimode spin-wave memory is promising as a quantum memory for light. (paper)
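The visibility figure quoted in record 13 is the usual interference visibility V = (I_max − I_min) / (I_max + I_min) of the time-bin fringe pattern. A one-function sketch (the intensity values in the example are hypothetical):

```python
# Interference visibility from fringe maximum and minimum intensities.
# V = (I_max - I_min) / (I_max + I_min); V >= 0.8 matches the >80% quoted above.

def visibility(i_max: float, i_min: float) -> float:
    """Fringe visibility of an interference measurement."""
    total = i_max + i_min
    if total <= 0:
        raise ValueError("total intensity must be positive")
    return (i_max - i_min) / total
```

For instance, fringe intensities of 9 and 1 (arbitrary units) give V = 8/10 = 0.8, exactly the 80% threshold reported.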

  14. The Impact of Multimodal Texts on Reading Achievement: A Study of Iranian Secondary School Learners

    Directory of Open Access Journals (Sweden)

    Bahareh Baharani

    2015-07-01

Full Text Available This study was designed to investigate the impact of multimodal texts on the reading comprehension test performance of Iranian intermediate learners. A total of 80 students participated in this study. All of them were Iranian female EFL learners aged 16 to 18. They were selected from a boarding high school in Nasr Abad, Torbat Jam in Khorasan e Razavi, Iran. The students were randomly assigned to four groups, which received different instructional approaches using linear texts, multimodal printed texts, non-printed multimodal texts, and both multimodal printed and non-printed texts. A pre-test and post-test were used to find the differences before and after the experimental treatment. The results showed that the printed and non-printed multimodal texts had a significant impact on reading comprehension test performance. In contrast, applying linear (traditional) texts did not exert a significant influence on the reading comprehension ability of the participants. The findings provide useful hints for language instructors to improve the effectiveness of instructional reading curricula and the reading ability of language learners. The participants who learned reading comprehension through multimodal printed and non-printed texts enjoyed the reading programs and developed intrinsic and extrinsic motivation for improving their reading ability.

  15. Protein Sub-Nuclear Localization Based on Effective Fusion Representations and Dimension Reduction Algorithm LDA

    Directory of Open Access Journals (Sweden)

    Shunfang Wang

    2015-12-01

Full Text Available An effective representation of a protein sequence plays a crucial role in protein sub-nuclear localization. The existing representations, such as dipeptide composition (DipC), pseudo-amino acid composition (PseAAC) and position specific scoring matrix (PSSM), are insufficient to represent protein sequences due to their single perspectives. Thus, this paper proposes two fusion feature representations, DipPSSM and PseAAPSSM, to integrate PSSM with DipC and PseAAC, respectively. When constructing each fusion representation, we introduce balance factors to weigh the importance of its components. The optimal values of the balance factors are sought by genetic algorithm. Due to the high dimensionality of the proposed representations, linear discriminant analysis (LDA) is used to find their important low-dimensional structure, which is essential for classification and location prediction. The numerical experiments on two public datasets with a KNN classifier and cross-validation tests showed that, in terms of the common indexes of sensitivity, specificity, accuracy and MCC, the proposed fusion representations outperform the traditional representations in protein sub-nuclear localization, and the representation treated by LDA outperforms the untreated one.
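The LDA step in record 15 projects high-dimensional fusion features onto a discriminative low-dimensional subspace before KNN classification. A minimal two-class Fisher LDA in 2-D, written from scratch as a sketch (the toy points are invented; the paper's features are far higher-dimensional):

```python
# Two-class Fisher LDA in 2-D: w = Sw^{-1} (m1 - m0), where Sw is the
# within-class scatter matrix. Projecting onto w reduces each point to
# one discriminative dimension. Toy data only; not the paper's features.

def mean2(points):
    n = len(points)
    return [sum(p[0] for p in points) / n, sum(p[1] for p in points) / n]

def fisher_direction(class0, class1):
    """Return the Fisher discriminant direction for two 2-D point sets."""
    m0, m1 = mean2(class0), mean2(class1)
    s = [[0.0, 0.0], [0.0, 0.0]]  # within-class scatter Sw
    for points, m in ((class0, m0), (class1, m1)):
        for x in points:
            d = [x[0] - m[0], x[1] - m[1]]
            s[0][0] += d[0] * d[0]
            s[0][1] += d[0] * d[1]
            s[1][0] += d[1] * d[0]
            s[1][1] += d[1] * d[1]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

def project(w, x):
    """Reduce a 2-D point to its 1-D LDA coordinate."""
    return w[0] * x[0] + w[1] * x[1]
```

After projection, the two classes separate along a single axis, which is what makes the subsequent KNN classification cheap and effective.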

  16. Attention and Representational Momentum

    OpenAIRE

    Hayes, Amy; Freyd, Jennifer J

    1995-01-01

    Representational momentum, the tendency for memory to be distorted in the direction of an implied transformation, suggests that dynamics are an intrinsic part of perceptual representations. We examined the effect of attention on dynamic representation by testing for representational momentum under conditions of distraction. Forward memory shifts increase when attention is divided. Attention may be involved in halting but not in maintaining dynamic representations.

  17. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    Science.gov (United States)

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers building novel image search engines. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated to each other. However, semantic gaps always exist between images' visual features and semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set, and multimodal features, including click and visual features, are collected with these images. Next, a group of autoencoders is applied to obtain an initial distance metric in different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. Finally, we conduct alternating optimization to train the ranking model, which is used for ranking new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments to analyze the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.
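The fusion step in record 17 combines per-modality distances with learned modality weights before ranking. A hedged sketch of that final step only: plain Euclidean distance stands in for the learned deep metrics, and the weights, feature vectors, and modality names are invented:

```python
# Weighted combination of per-modality distances for ranking, as a stand-in
# for Deep-MDML's learned metrics. All names and values are hypothetical.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def fused_distance(query, candidate, weights):
    """query/candidate: dict modality -> feature vector; weights: dict modality -> float."""
    return sum(w * euclidean(query[m], candidate[m]) for m, w in weights.items())

def rank(query, candidates, weights):
    """Order candidate images by fused multimodal distance to the query."""
    return sorted(candidates, key=lambda c: fused_distance(query, c, weights))
```

In the real method the per-modality metrics come from autoencoders and the weights from alternating optimization; the combination-and-sort structure is the same.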

  18. LGBT Representations on Facebook : Representations of the Self and the Content

    OpenAIRE

    Chu, Yawen

    2017-01-01

The topic of LGBT rights has been increasingly discussed and debated in recent years, and more and more scholars have shown interest in the field of LGBT representations in media. However, few studies have addressed LGBT representations in social media. This paper explores LGBT representations on Facebook by analysing posts on an open page and in a private group, including both representations of the self as the identity of sexual minorities, content that is displayed on Facebook and the simila...

  19. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining synthetic multimodal cues from vision, haptics, and audition in order to realize virtual experiences of walking on simulated ground surfaces or other features.

  20. Manipulating single second mode transparency in a corrugated waveguide via the thickness of sputtered gold

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Dan [Key Lab of In-fiber Integrated Optics, Ministry of Education of China, Harbin Engineering University, Harbin 150001 (China); Photonics Research Center, College of Science, Harbin Engineering University, Harbin 150001 (China); Fan, Ya-Xian, E-mail: yxfan@hrbeu.edu.cn [Key Lab of In-fiber Integrated Optics, Ministry of Education of China, Harbin Engineering University, Harbin 150001 (China); Photonics Research Center, College of Science, Harbin Engineering University, Harbin 150001 (China); Sang, Tang-Qing; Xu, Lan-Lan; Bibi, Aysha [Key Lab of In-fiber Integrated Optics, Ministry of Education of China, Harbin Engineering University, Harbin 150001 (China); Photonics Research Center, College of Science, Harbin Engineering University, Harbin 150001 (China); Tao, Zhi-Yong, E-mail: zytao@hrbeu.edu.cn [Key Lab of In-fiber Integrated Optics, Ministry of Education of China, Harbin Engineering University, Harbin 150001 (China); Photonics Research Center, College of Science, Harbin Engineering University, Harbin 150001 (China)

    2016-03-11

We propose a classical analog of electromagnetically induced transparency in a cylindrical waveguide with undulated metallic walls. The transparency, induced by multi-mode interactions in waveguides, not only has a narrow line-width, but also consists of a single second-order transverse mode, which corresponds to the Bessel function distributions investigated extensively due to their unique characteristics. By increasing the thickness of sputtered gold layers of the waveguide, we demonstrate a frequency-agile single mode transparency phenomenon in the terahertz regime. It is found that the center frequency of the transparency is linearly related to the gold thickness, indicating the achievement of a controllable single mode terahertz device. The field distributions at the cross-sections of outlets verify the single second mode transparency and indicate the mechanism of its frequency manipulation, which will significantly benefit the mode-control engineering in terahertz applications. - Highlights: • An analog of electromagnetically induced transparency in terahertz tubes is proposed. • A single second transverse mode of Bessel distributions is observed in the pass band. • The operating frequency can be linearly controlled by the sputtered gold thickness. • We can effectively manipulate the slow-down factor of light by the gold thickness. • The transparency characteristics rely on the transition of multi-mode interactions.

  1. Manipulating single second mode transparency in a corrugated waveguide via the thickness of sputtered gold

    International Nuclear Information System (INIS)

    Xu, Dan; Fan, Ya-Xian; Sang, Tang-Qing; Xu, Lan-Lan; Bibi, Aysha; Tao, Zhi-Yong

    2016-01-01

We propose a classical analog of electromagnetically induced transparency in a cylindrical waveguide with undulated metallic walls. The transparency, induced by multi-mode interactions in waveguides, not only has a narrow line-width, but also consists of a single second-order transverse mode, which corresponds to the Bessel function distributions investigated extensively due to their unique characteristics. By increasing the thickness of sputtered gold layers of the waveguide, we demonstrate a frequency-agile single mode transparency phenomenon in the terahertz regime. It is found that the center frequency of the transparency is linearly related to the gold thickness, indicating the achievement of a controllable single mode terahertz device. The field distributions at the cross-sections of outlets verify the single second mode transparency and indicate the mechanism of its frequency manipulation, which will significantly benefit the mode-control engineering in terahertz applications. - Highlights: • An analog of electromagnetically induced transparency in terahertz tubes is proposed. • A single second transverse mode of Bessel distributions is observed in the pass band. • The operating frequency can be linearly controlled by the sputtered gold thickness. • We can effectively manipulate the slow-down factor of light by the gold thickness. • The transparency characteristics rely on the transition of multi-mode interactions.
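The abstract above states that the transparency's center frequency is linear in the sputtered gold thickness, i.e. the device can be calibrated by fitting a line through (thickness, frequency) measurements. A least-squares sketch (the calibration points in the example are invented, not the paper's data):

```python
# Ordinary least-squares fit of y = a*x + b, as one would use to calibrate
# center frequency (y) against gold thickness (x). Data here is hypothetical.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx
```

With such a fit in hand, the slope gives the frequency shift per unit of deposited gold, and inverting the line picks the thickness needed for a target frequency.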

  2. Assessment of Closed-Loop Control Using Multi-Mode Sensor Fusion For a High Reynolds Number Transonic Jet

    Science.gov (United States)

    Low, Kerwin; Elhadidi, Basman; Glauser, Mark

    2009-11-01

Understanding the different noise production mechanisms caused by the free shear flows in a turbulent jet flow provides insight to improve ``intelligent'' feedback mechanisms to control the noise. Towards this effort, a control scheme is based on feedback of azimuthal pressure measurements in the near field of the jet at two streamwise locations. Previous studies suggested that noise reduction can be achieved by azimuthal actuators perturbing the shear layer at the jet lip. The closed-loop actuation will be based on a low-dimensional Fourier representation of the hydrodynamic pressure measurements. Preliminary results show that control authority and reduction in the overall sound pressure level were possible. These results provide motivation to move forward with the overall vision of developing innovative multi-mode sensing methods to improve state estimation and derive dynamical systems. It is envisioned that by estimating velocity-field and dynamic pressure information from various locations in both local and far-field regions, sensor fusion techniques can be utilized to ascertain greater overall control authority.

  3. Familiarity and Voice Representation: From Acoustic-Based Representation to Voice Averages

    Directory of Open Access Journals (Sweden)

    Maureen Fontaine

    2017-07-01

Full Text Available The ability to recognize an individual from their voice is a widespread ability with a long evolutionary history. Yet, the perceptual representation of familiar voices is ill-defined. In two experiments, we explored the neuropsychological processes involved in the perception of voice identity. We specifically explored the hypothesis that familiar voices (trained-to-familiar; Experiment 1) and famous voices (Experiment 2) are represented as a whole complex pattern, well approximated by the average of multiple utterances produced by a single speaker. In Experiment 1, participants learned three voices over several sessions, and performed a three-alternative forced-choice identification task on original voice samples and several "speaker averages," created by morphing across varying numbers of different vowels (e.g., [a] and [i]) produced by the same speaker. In Experiment 2, the same participants performed the same task on voice samples produced by familiar speakers. The two experiments showed that for famous voices, but not for trained-to-familiar voices, identification performance increased and response times decreased as a function of the number of utterances in the averages. This study sheds light on the perceptual representation of familiar voices, and demonstrates the power of averages in recognizing familiar voices. The speaker average captures the unique characteristics of a speaker, and thus retains the information essential for recognition; it acts as a prototype of the speaker.

  4. Teaching Poetry through Collaborative Art: An Analysis of Multimodal Ensembles for Transformative Learning

    Science.gov (United States)

    Wandera, David B.

    2016-01-01

    This study is anchored on two positions: that every communication is multimodal and that different modalities within multimodal communication have particular affordances. Written and oral language and other modalities, such as body language and audio/visual media, are interwoven in classroom communication. What might it look like to strategically…

  5. Multimodal retrieval of autobiographical memories: sensory information contributes differently to the recollection of events.

    Science.gov (United States)

    Willander, Johan; Sikström, Sverker; Karlsson, Kristina

    2015-01-01

    Previous studies on autobiographical memory have focused on unimodal retrieval cues (i.e., cues pertaining to one modality). However, from an ecological perspective multimodal cues (i.e., cues pertaining to several modalities) are highly important to investigate. In the present study we investigated age distributions and experiential ratings of autobiographical memories retrieved with unimodal and multimodal cues. Sixty-two participants were randomized to one of four cue-conditions: visual, olfactory, auditory, or multimodal. The results showed that the peak of the distributions depends on the modality of the retrieval cue. The results indicated that multimodal retrieval seemed to be driven by visual and auditory information to a larger extent and to a lesser extent by olfactory information. Finally, no differences were observed in the number of retrieved memories or experiential ratings across the four cue-conditions.

  6. Multimodal Retrieval of Autobiographical Memories: Sensory Information Contributes Differently to the Recollection of Events

    Directory of Open Access Journals (Sweden)

    Johan eWillander

    2015-11-01

Full Text Available Previous studies on autobiographical memory have focused on unimodal retrieval cues (i.e., cues pertaining to one modality). However, from an ecological perspective multimodal cues (i.e., cues pertaining to several modalities) are highly important to investigate. In the present study we investigated age distributions and experiential ratings of autobiographical memories retrieved with unimodal and multimodal cues. Sixty-two participants were randomized to one of four cue-conditions: visual, olfactory, auditory, and multimodal. The results showed that the peak of the distributions depends on the modality of the retrieval cue. The results indicated that multimodal retrieval seemed to be driven by visual and auditory information to a larger extent and to a lesser extent by olfactory information. Finally, no differences were observed in the number of retrieved memories or experiential ratings across the four cue-conditions.

  7. MEDCIS: Multi-Modality Epilepsy Data Capture and Integration System.

    Science.gov (United States)

    Zhang, Guo-Qiang; Cui, Licong; Lhatoo, Samden; Schuele, Stephan U; Sahoo, Satya S

    2014-01-01

Sudden Unexpected Death in Epilepsy (SUDEP) is the leading mode of epilepsy-related death and is most common in patients with intractable, frequent, and continuing seizures. A statistically significant cohort of patients for SUDEP study requires meticulous, prospective follow-up of a large population that is at an elevated risk, best represented by the Epilepsy Monitoring Unit (EMU) patient population. Multiple EMUs need to collaborate and share data for building a larger cohort of potential SUDEP patients using a state-of-the-art informatics infrastructure. To address the challenges of data integration and data access from multiple EMUs, we developed the Multi-Modality Epilepsy Data Capture and Integration System (MEDCIS) that combines retrospective clinical free text processing using NLP, prospective structured data capture using an ontology-driven interface, and interfaces for cohort search and signal visualization, all in a single integrated environment. A dedicated Epilepsy and Seizure Ontology (EpSO) has been used to streamline the user interfaces, enhance usability, and enable mappings across distributed databases so that federated queries can be executed. MEDCIS contained 936 patient data sets from the EMUs of University Hospitals Case Medical Center (UH CMC) in Cleveland and Northwestern Memorial Hospital (NMH) in Chicago. Patients from UH CMC and NMH were stored in different databases and then federated through MEDCIS using EpSO and our mapping module. More than 77GB of multi-modal signal data were processed using the Cloudwave pipeline and made available for rendering through the web-interface. About 74% of the 40 open clinical questions of interest were answerable accurately using the EpSO-driven VISual AGgregator and Explorer (VISAGE) interface. Questions not directly answerable were either due to their inherent computational complexity, the unavailability of primary information, or the scope of concept that has been formulated in the existing EpSO.
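The federated queries in record 7 work because EpSO maps each site's local terminology onto shared ontology concepts before the cohort search runs. A toy sketch of that mapping-then-query pattern (the site schemas, local codes, and ontology terms below are invented, not MEDCIS's actual vocabulary):

```python
# Ontology-driven federation sketch: normalize each site's local seizure
# terminology to a shared term, then run one cohort query over all sites.
# All codes and field names here are hypothetical illustrations.

EPSO_MAP = {
    "GTC": "generalized_tonic_clonic_seizure",        # site A's local code
    "grand mal": "generalized_tonic_clonic_seizure",  # site B's legacy term
}

def normalize(record):
    """Map a site-local seizure_type value onto the shared ontology term."""
    term = record["seizure_type"]
    return {**record, "seizure_type": EPSO_MAP.get(term, term)}

def federated_cohort(sites, seizure_type):
    """Collect patient ids across every site whose normalized term matches."""
    return sorted(
        rec["id"]
        for site in sites
        for rec in map(normalize, site)
        if rec["seizure_type"] == seizure_type
    )
```

The point of the ontology layer is that the cohort query is written once, against shared concepts, and never needs to know which local vocabulary each EMU uses.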

  8. Nanodiamond Landmarks for Subcellular Multimodal Optical and Electron Imaging

    Science.gov (United States)

    Zurbuchen, Mark A.; Lake, Michael P.; Kohan, Sirus A.; Leung, Belinda; Bouchard, Louis-S.

    2013-01-01

    There is a growing need for biolabels that can be used in both optical and electron microscopies, are non-cytotoxic, and do not photobleach. Such biolabels could enable targeted nanoscale imaging of sub-cellular structures, and help to establish correlations between conjugation-delivered biomolecules and function. Here we demonstrate a sub-cellular multi-modal imaging methodology that enables localization of inert particulate probes, consisting of nanodiamonds having fluorescent nitrogen-vacancy centers. These are functionalized to target specific structures, and are observable by both optical and electron microscopies. Nanodiamonds targeted to the nuclear pore complex are rapidly localized in electron-microscopy diffraction mode to enable “zooming-in” to regions of interest for detailed structural investigations. Optical microscopies reveal nanodiamonds for in-vitro tracking or uptake-confirmation. The approach is general, works down to the single nanodiamond level, and can leverage the unique capabilities of nanodiamonds, such as biocompatibility, sensitive magnetometry, and gene and drug delivery. PMID:24036840

  9. Divided multimodal attention: sensory trace and context coding strategies in spatially congruent auditory and visual presentation.

    Science.gov (United States)

    Kristjánsson, Tómas; Thorvaldsson, Tómas Páll; Kristjánsson, Arni

    2014-01-01

    Previous research involving both unimodal and multimodal studies suggests that single-response change detection is a capacity-free process while a discriminatory up or down identification is capacity-limited. The trace/context model assumes that this reflects different memory strategies rather than inherent differences between identification and detection. To perform such tasks, one of two strategies is used, a sensory trace or a context coding strategy, and if one is blocked, people will automatically use the other. A drawback to most preceding studies is that stimuli are presented at separate locations, creating the possibility of a spatial confound, which invites alternative interpretations of the results. We describe a series of experiments, investigating divided multimodal attention, without the spatial confound. The results challenge the trace/context model. Our critical experiment involved a gap before a change in volume and brightness, which according to the trace/context model blocks the sensory trace strategy, simultaneously with a roaming pedestal, which should block the context coding strategy. The results clearly show that people can use strategies other than sensory trace and context coding in the tasks and conditions of these experiments, necessitating changes to the trace/context model.

  10. Biologically-inspired robust and adaptive multi-sensor fusion and active control

    Science.gov (United States)

    Khosla, Deepak; Dow, Paul A.; Huber, David J.

    2009-04-01

In this paper, we describe a method and system for robust and efficient goal-oriented active control of a machine (e.g., robot) based on processing, hierarchical spatial understanding, representation and memory of multimodal sensory inputs. This work assumes that a high-level plan or goal is known a priori or is provided by an operator interface, which translates into an overall perceptual processing strategy for the machine. Its analogy to the human brain is the download of plans and decisions from the pre-frontal cortex into various perceptual working memories as a perceptual plan that then guides the sensory data collection and processing. For example, a goal might be to look for specific colored objects in a scene while also looking for specific sound sources. This paper combines three key ideas and methods into a single closed-loop active control system. (1) Use a high-level plan or goal to determine and prioritize spatial locations or waypoints (targets) in multimodal sensory space; (2) collect/store information about these spatial locations at the appropriate hierarchy and representation in a spatial working memory. This includes invariant learning of these spatial representations and how to convert between them; and (3) execute actions based on ordered retrieval of these spatial locations from hierarchical spatial working memory and using the "right" level of representation that can efficiently translate into motor actions. In its most specific form, the active control is described for a vision system (such as a pan-tilt-zoom camera system mounted on a robotic head and neck unit) which finds and then fixates on high saliency visual objects. We also describe the approach where the goal is to turn towards and sequentially foveate on salient multimodal cues that include both visual and auditory inputs.
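Step (3) of record 10, ordered retrieval of prioritized targets from spatial working memory, amounts to popping locations from a priority queue and fixating each in turn. A minimal sketch (saliency values, labels, and pan/tilt coordinates are hypothetical; a real system would derive them from the perceptual plan):

```python
# Priority-ordered retrieval of salient multimodal targets: highest
# saliency is fixated first. All targets below are invented examples.
import heapq

def queue_targets(targets):
    """targets: iterable of (saliency, label, (pan, tilt)). Build a max-priority queue."""
    heap = []
    for saliency, label, loc in targets:
        heapq.heappush(heap, (-saliency, label, loc))  # negate: heapq is a min-heap
    return heap

def next_fixation(heap):
    """Pop the most salient remaining target; loc would drive pan/tilt motors."""
    _neg_saliency, label, loc = heapq.heappop(heap)
    return label, loc
```

Repeatedly calling next_fixation yields the foveation sequence: the most salient visual or auditory target first, then the rest in descending saliency.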

  11. Effect of Multimodal Pore Channels on Cargo Release from Mesoporous Silica Nanoparticles

    Directory of Open Access Journals (Sweden)

    Sushilkumar A. Jadhav

    2016-01-01

Full Text Available Mesoporous silica nanoparticles (MSNs) with multimodal pore channels were fully characterized by TEM, nitrogen adsorption-desorption, and DLS analyses. MSNs with an average diameter of 200 nm and dual pore-channel zones with pore diameters of 1.3–2.6 and 4 nm were tested for use in drug delivery applications. The important role of the multimodal pore systems present on MSNs in the quantitative release of the model drug ibuprofen was investigated. The results obtained revealed that the release profile for ibuprofen clearly shows distinct zones, which can be attributed to the respective porous channel zones present on the particles. The fluctuations in the concentration of ibuprofen during the prolonged release from MSNs were caused by the multimodal pore channel systems.

  12. Multimodal imaging of lung cancer and its microenvironment (Conference Presentation)

    Science.gov (United States)

    Hariri, Lida P.; Niederst, Matthew J.; Mulvey, Hillary; Adams, David C.; Hu, Haichuan; Chico Calero, Isabel; Szabari, Margit V.; Vakoc, Benjamin J.; Hasan, Tayyaba; Bouma, Brett E.; Engelman, Jeffrey A.; Suter, Melissa J.

    2016-03-01

    Despite significant advances in targeted therapies for lung cancer, nearly all patients develop drug resistance within 6-12 months and prognosis remains poor. Developing drug resistance is a progressive process that involves tumor cells and their microenvironment. We hypothesize that microenvironment factors alter tumor growth and response to targeted therapy. We conducted in vitro studies in human EGFR-mutant lung carcinoma cells and demonstrated that factors secreted by lung fibroblasts result in increased tumor cell survival during targeted therapy with the EGFR inhibitor gefitinib. We also demonstrated that increased microenvironment stiffness results in increased tumor survival during gefitinib therapy. To test our hypothesis in vivo, we developed a multimodal optical imaging protocol for preclinical intravital imaging in mouse models to assess the tumor and its microenvironment over time. We have successfully conducted multimodal imaging of dorsal skinfold chamber (DSC) window mice implanted with GFP-labeled human EGFR-mutant lung carcinoma cells and visualized changes in tumor development and microenvironment facets over time. Multimodal imaging included structural OCT to assess tumor viability and necrosis, polarization-sensitive OCT to measure tissue birefringence for collagen/fibroblast detection, and Doppler OCT to assess tumor vasculature. Confocal imaging was also performed for high-resolution visualization of the GFP-labeled EGFR-mutant lung cancer cells and was coregistered with OCT. Our results demonstrated that stromal support and vascular growth are essential to tumor progression. Multimodal imaging is a useful tool for assessing the tumor and its microenvironment over time.

  13. Multimodal system for the planning and guidance of bronchoscopy

    Science.gov (United States)

    Higgins, William E.; Cheirsilp, Ronnarit; Zang, Xiaonan; Byrnes, Patrick

    2015-03-01

    Many technical innovations in multimodal radiologic imaging and bronchoscopy have emerged recently in the effort against lung cancer. Modern X-ray computed-tomography (CT) scanners provide three-dimensional (3D) high-resolution chest images, positron emission tomography (PET) scanners give complementary molecular imaging data, and new integrated PET/CT scanners combine the strengths of both modalities. State-of-the-art bronchoscopes permit minimally invasive tissue sampling, with vivid endobronchial video enabling navigation deep into the airway-tree periphery, while complementary endobronchial ultrasound (EBUS) reveals local views of anatomical structures outside the airways. In addition, image-guided intervention (IGI) systems have proven their utility for CT-based planning and guidance of bronchoscopy. Unfortunately, no IGI system exists that integrates all of these sources effectively through the complete lung-cancer staging workflow. This paper presents a prototype of a computer-based multimodal IGI system that strives to fill this need. The system combines a wide range of automatic and semi-automatic image-processing tools for multimodal data fusion and procedure planning. It also provides a flexible graphical user interface for follow-on guidance of bronchoscopy/EBUS. Human-study results demonstrate the system's potential.

  14. Multimodality and children's participation in classrooms: Instances of ...

    African Journals Online (AJOL)

    Multimodality and children's participation in classrooms: Instances of research. ... deficit models of children, drawing on their everyday experiences and their existing ... It outlines the theoretical framework supporting the pedagogical approach, ...

  15. Elimination of mode coupling in multimode continuous-variable key distribution

    International Nuclear Information System (INIS)

    Filip, Radim; Mista, Ladislav; Marek, Petr

    2005-01-01

    A multimode channel can be utilized to substantially increase the capacity of quantum continuous-variable key distribution. Beyond losses in the channel, uncontrollable coupling between the modes of the channel typically degrades the capacity of multimode channels. For a key distribution protocol with simultaneous measurement of both complementary quadratures, we propose a feasible method to eliminate any undesirable mode coupling using only appropriate measurement and data manipulation on the receiver's side. This can substantially increase the capacity of the channel, which has an important application in practical continuous-variable quantum cryptography.
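
    The receiver-side elimination of coupling can be illustrated with a toy linear model (my own illustration under simplifying assumptions, not the authors' protocol): a beam-splitter-like rotation mixes the quadrature data of two modes, and the receiver restores the uncoupled data by applying the inverse transformation to its measurement records.

```python
import numpy as np

theta = 0.3                                # assumed coupling angle in the channel
T = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # beam-splitter-like coupling

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 1000))             # quadrature data carried by two modes

y = T @ x                                  # coupled data seen at the receiver
x_rec = np.linalg.inv(T) @ y               # receiver-side data manipulation only

# The coupling is undone purely in post-processing of the measured data.
assert np.allclose(x_rec, x)
```

    In this toy model the rotation is known; in practice the receiver would first have to estimate the coupling from the measured data before inverting it.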

  16. OpenLMD, multimodal monitoring and control of LMD processing

    Science.gov (United States)

    Rodríguez-Araújo, Jorge; García-Díaz, Antón

    2017-02-01

    This paper presents OpenLMD, a novel open-source solution for on-line multimodal monitoring of Laser Metal Deposition (LMD). The solution is also applicable to a wider range of laser-based applications that require on-line control (e.g., laser welding). OpenLMD is a middleware that enables the orchestration and virtualization of an LMD robot cell using several open-source frameworks (e.g., ROS, OpenCV, PCL). The solution also allows reconfiguration through easy integration of multiple sensors and processing equipment. As a result, OpenLMD delivers significant advantages over existing monitoring and control approaches, such as improved scalability and multimodal monitoring and data-sharing capabilities.
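
    The ROS-style orchestration idea — sensors integrated by publishing on and subscribing to named topics — can be sketched with a minimal publish/subscribe bus in plain Python. This is a sketch of the pattern only; the class, topic names, and payloads are hypothetical and are not the OpenLMD API.

```python
class MonitoringBus:
    """Toy publish/subscribe bus illustrating topic-based orchestration."""
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, data):
        for callback in self._subscribers.get(topic, []):
            callback(data)

bus = MonitoringBus()
log = []

# A new sensor is integrated simply by wiring its topic to a consumer,
# mirroring the reconfigurability described above.
bus.subscribe("camera/meltpool", lambda frame: log.append(("vision", frame)))
bus.subscribe("pyrometer/temp", lambda t: log.append(("thermal", t)))

bus.publish("camera/meltpool", "frame-0")
bus.publish("pyrometer/temp", 1520)
# log is now [("vision", "frame-0"), ("thermal", 1520)]
```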

  17. Multimodality, politics and ideology

    DEFF Research Database (Denmark)

    Machin, David; Van Leeuwen, T.

    2016-01-01

    This journal's editorial statement is clear that political discourse should be studied not only as regards parliamentary-type politics. In this introduction we argue precisely for the need to pay increasing attention to the way that political ideologies are infused into culture more widely...... of power, requires meanings and identities which can hold them in place. We explain the processes by which critical multimodal discourse analysis can best draw out this ideology as it is realized through different semiotic resources. © John Benjamins Publishing Company....

  18. Multi-Modality Cascaded Convolutional Neural Networks for Alzheimer's Disease Diagnosis.

    Science.gov (United States)

    Liu, Manhua; Cheng, Danni; Wang, Kundong; Wang, Yaping

    2018-03-23

    Accurate and early diagnosis of Alzheimer's disease (AD) plays an important role in patient care and the development of future treatments. Structural and functional neuroimages, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provide powerful imaging modalities that help in understanding the anatomical and functional neural changes related to AD. In recent years, machine learning methods have been widely studied for the analysis of multi-modality neuroimages for quantitative evaluation and computer-aided diagnosis (CAD) of AD. Most existing methods extract hand-crafted imaging features after image preprocessing such as registration and segmentation, and then train a classifier to distinguish AD subjects from other groups. This paper proposes to construct cascaded convolutional neural networks (CNNs) to learn the multi-level and multimodal features of MRI and PET brain images for AD classification. First, multiple deep 3D-CNNs are constructed on different local image patches to transform the local brain image into more compact high-level features. Then, an upper high-level 2D-CNN followed by a softmax layer is cascaded to ensemble the high-level features learned from the multi-modality data and generate the latent multimodal correlation features of the corresponding image patches for the classification task. Finally, these learned features are combined by a fully connected layer followed by a softmax layer for AD classification. The proposed method can automatically learn generic multi-level and multimodal features from multiple imaging modalities for classification, and these features are robust to scale and rotation variations to some extent. No image segmentation or rigid registration is required in preprocessing the brain images. Our method is evaluated on the baseline MRI and PET images of 397 subjects, including 93 AD patients, 204 mild cognitive impairment (MCI; 76 pMCI + 128 sMCI) subjects, and 100 normal controls (NC), from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database.
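
    The shape of the cascade — per-patch feature extractors for each modality, an upper fusion stage, and a final softmax classifier — can be sketched schematically in NumPy, with untrained random linear maps standing in for the 3D-CNNs and the upper 2D-CNN. All dimensions and weights here are illustrative assumptions, not the paper's architecture details.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)

# Stage 1: one "deep 3D-CNN" per local patch and modality, replaced here
# by random linear feature extractors (shapes are illustrative only).
n_patches, patch_dim, feat_dim = 4, 27, 8
W_mri = [rng.normal(size=(patch_dim, feat_dim)) for _ in range(n_patches)]
W_pet = [rng.normal(size=(patch_dim, feat_dim)) for _ in range(n_patches)]

def patch_features(patches, weights):
    return np.concatenate([p @ W for p, W in zip(patches, weights)])

# Stage 2: an upper network fuses per-patch MRI and PET features into
# latent multimodal correlation features.
W_fuse = rng.normal(size=(2 * n_patches * feat_dim, 16))
# Stage 3: fully connected layer + softmax over the diagnostic classes.
W_cls = rng.normal(size=(16, 3))           # e.g. AD / MCI / NC

def classify(mri_patches, pet_patches):
    fused = np.tanh(np.concatenate([patch_features(mri_patches, W_mri),
                                    patch_features(pet_patches, W_pet)]) @ W_fuse)
    return softmax(fused @ W_cls)

probs = classify([rng.normal(size=patch_dim) for _ in range(n_patches)],
                 [rng.normal(size=patch_dim) for _ in range(n_patches)])
```

    A real implementation would use learned 3D convolutions over image patches and train all stages end-to-end; this sketch only shows how the per-patch, per-modality features flow into the cascaded classifier.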

  19. The Multimodalities of Globalization: Teaching a YouTube Video in an EAP Classroom

    Science.gov (United States)

    Chun, Christian W.

    2012-01-01

    This article examines the ways in which a multimodal text--a YouTube video on globalization and business--was mediated in two English for Academic Purposes (EAP) classrooms, and how these mediations shaped the instructor's and her students' meaning-making in specific ways. I first explore the complex multimodal discourses involved with this…

  20. Multimodal Imaging of Integrin Receptor-Positive Tumors by Bioluminescence, Fluorescence, Gamma Scintigraphy, and Single-Photon Emission Computed Tomography Using a Cyclic RGD Peptide Labeled with a Near-Infrared Fluorescent Dye and a Radionuclide

    Directory of Open Access Journals (Sweden)

    W. Barry Edwards

    2009-03-01

    Integrins, particularly the αvβ3 heterodimers, play important roles in tumor-induced angiogenesis and invasiveness. To image the expression pattern of the αvβ3 integrin in tumors through a multimodality imaging paradigm, we prepared a cyclic RGDyK peptide analogue (LS308) bearing a tetraazamacrocycle, 1,4,7,10-tetraazacyclododecane-N,N′,N″,N‴-tetraacetic acid (DOTA), and a lipophilic near-infrared (NIR) fluorescent dye, cypate. The αvβ3 integrin binding affinity and the internalization properties of LS308 mediated by the αvβ3 integrin in 4t1luc cells were investigated by receptor binding assay and fluorescence microscopy, respectively. The in vivo distribution of 111In-labeled LS308 in a 4t1luc tumor-bearing mouse model was studied by fluorescence, bioluminescence, planar gamma, and single-photon emission computed tomography (SPECT) imaging. The results show that LS308 has high affinity for the αvβ3 integrin and is internalized preferentially via αvβ3 integrin-mediated endocytosis in 4t1luc cells. We also found that LS308 selectively accumulated in αvβ3-positive tumors in a receptor-specific manner and was visualized by all four imaging methods. Whereas endogenous bioluminescence imaging identified the ensemble of the tumor tissue, the fluorescence and SPECT methods with the exogenous contrast agent LS308 reported the local expression of the αvβ3 integrin. Thus, the multimodal imaging approach could provide important complementary diagnostic information for monitoring the efficacy of new antiangiogenic drugs.