WorldWideScience

Sample records for procedures hmm models

  1. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    Probabilistic trust has been adopted as an approach to taking security-sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of ‘decay' as an ad hoc approach to cope...... with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed-behaviour assumption, which is considered...... the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it with the well-known Beta trust model with the decay principle in terms of estimation precision....

  2. Study on solitary word based on HMM model and Baum-Welch algorithm

    Directory of Open Access Journals (Sweden)

    Junxia CHEN

    Full Text Available This paper introduces the principle of the Hidden Markov Model (HMM), a probability model that describes the statistical properties of a stochastic process governed by a Markov chain with unobserved states. On this basis, a solitary-word detection experiment based on an HMM was designed. By optimizing the experimental model, the Baum-Welch algorithm is used to solve the training problem of the HMM, estimating the model parameters λ, which in this view are mathematically equivalent to linear prediction coefficients. The experiment reduces unnecessary HMM training and, at the same time, lowers the algorithmic complexity. To test the effectiveness of the Baum-Welch algorithm, the experimental data were simulated; the results show that the algorithm is effective.
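The Baum-Welch training this record relies on can be sketched for a small discrete HMM. This is a minimal, generic EM re-estimation in pure Python with Rabiner-style scaling; the model sizes and parameter values below are illustrative, not taken from the paper.

```python
import math

def forward(obs, pi, A, B):
    """Scaled forward pass: normalized alphas, scale factors, log-likelihood."""
    N = len(pi)
    alpha, scales = [], []
    a = [pi[i] * B[i][obs[0]] for i in range(N)]
    for t in range(len(obs)):
        if t > 0:
            prev = alpha[-1]
            a = [sum(prev[j] * A[j][i] for j in range(N)) * B[i][obs[t]]
                 for i in range(N)]
        c = sum(a)
        alpha.append([x / c for x in a])
        scales.append(c)
    return alpha, scales, sum(math.log(c) for c in scales)

def backward(obs, A, B, scales):
    """Scaled backward pass consistent with forward()'s scaling."""
    N, T = len(A), len(obs)
    beta = [[1.0] * N for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(N):
            beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                             for j in range(N)) / scales[t + 1]
    return beta

def baum_welch(obs, pi, A, B, iters=5):
    """EM re-estimation of (pi, A, B) from a single observation sequence."""
    N, M, T = len(pi), len(B[0]), len(obs)
    for _ in range(iters):
        alpha, scales, _ = forward(obs, pi, A, B)
        beta = backward(obs, A, B, scales)
        # gamma[t][i]: P(state i at t | obs); xi[t][i][j]: P(i -> j at t | obs)
        gamma = [[alpha[t][i] * beta[t][i] for i in range(N)] for t in range(T)]
        xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                / scales[t + 1] for j in range(N)] for i in range(N)]
              for t in range(T - 1)]
        pi = gamma[0][:]
        A = [[sum(xi[t][i][j] for t in range(T - 1))
              / sum(gamma[t][i] for t in range(T - 1))
              for j in range(N)] for i in range(N)]
        B = [[sum(gamma[t][j] for t in range(T) if obs[t] == k)
              / sum(gamma[t][j] for t in range(T))
              for k in range(M)] for j in range(N)]
    return pi, A, B
```

Each EM iteration is guaranteed not to decrease the sequence log-likelihood, which is what makes this usable as a training loop.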

  3. HMM Adaptation for child speech synthesis

    CSIR Research Space (South Africa)

    Govender, Avashna

    2015-09-01

    Full Text Available Hidden Markov Model (HMM)-based synthesis in combination with speaker adaptation has proven to be an approach that is well-suited for child speech synthesis. This paper describes the development and evaluation of different HMM-based child speech...

  4. An HMM-Like Dynamic Time Warping Scheme for Automatic Speech Recognition

    Directory of Open Access Journals (Sweden)

    Ing-Jr Ding

    2014-01-01

    Full Text Available In the past, the kernel of automatic speech recognition (ASR) was dynamic time warping (DTW), a feature-based template-matching technique in the category of dynamic programming (DP). Although DTW is an early ASR technique, it remains popular in many applications and now plays an important role in the well-known Kinect-based gesture recognition application. This paper proposes an intelligent speech recognition system using an improved DTW approach for multimedia and home automation services. The improved DTW presented in this work, called HMM-like DTW, is essentially a hidden Markov model (HMM)-like method in which the concepts of the typical HMM statistical model are brought into the design of DTW. The developed HMM-like DTW method, which transforms feature-based DTW recognition into model-based DTW recognition, can behave like the HMM recognition technique, and therefore the proposed HMM-like DTW with its HMM-like recognition model is capable of model adaptation (also known as speaker adaptation). A series of experimental results in home-automation-based multimedia access service environments demonstrates the superiority and effectiveness of the developed smart speech recognition system based on HMM-like DTW.
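For readers unfamiliar with the DTW baseline that this work modifies, the classic DTW distance can be sketched as follows. This is the standard textbook recurrence, not the paper's HMM-like variant; the absolute-difference cost is an illustrative choice.

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic time warping distance between two sequences.

    D[i][j] holds the cheapest cumulative cost of aligning a[:i] with b[:j];
    each cell extends the best of the three neighboring alignments.
    """
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j],      # insertion
                D[i][j - 1],      # deletion
                D[i - 1][j - 1])  # match
    return D[n][m]
```

In template-matching ASR, the test utterance's feature sequence is compared against each stored word template with `dtw_distance`, and the closest template wins.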

  5. Accelerated Profile HMM Searches.

    Directory of Open Access Journals (Sweden)

    Sean R Eddy

    2011-10-01

    Full Text Available Profile hidden Markov models (profile HMMs and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the "multiple segment Viterbi" (MSV algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call "sparse rescaling". These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches.
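The idea underlying the MSV filter, optimal ungapped local alignment scoring, can be illustrated with a toy diagonal scan. This is not HMMER's implementation (MSV sums multiple ungapped segments inside the profile HMM framework with vector parallelism); it only shows the single-segment core using Kadane's algorithm per diagonal, with an assumed score matrix.

```python
def best_ungapped_score(S):
    """Best single ungapped local alignment score.

    S[i][j] is the match score of query position i against target
    position j. An ungapped segment lies on one diagonal (j - i fixed),
    so we run Kadane's maximum-subarray scan along every diagonal.
    """
    n, m = len(S), len(S[0])
    best = 0.0
    for d in range(-(n - 1), m):       # diagonal offset j - i = d
        cur = 0.0
        for i in range(n):
            j = i + d
            if 0 <= j < m:
                cur = max(0.0, cur + S[i][j])  # drop negative prefixes
                best = max(best, cur)
    return best
```

A heuristic filter of this shape is cheap because it needs no gap bookkeeping; only sequences scoring above a significance threshold proceed to the full Forward/Backward stage.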

  6. Speech-To-Text Conversion STT System Using Hidden Markov Model HMM

    Directory of Open Access Journals (Sweden)

    Su Myat Mon

    2015-06-01

    Full Text Available Abstract Speech is the easiest way to communicate with each other. Speech processing is widely used in many applications such as security devices, household appliances, cellular phones, ATM machines and computers. Human-computer interfaces have been developed so that people with disabilities can communicate or interact conveniently. Speech-to-Text Conversion (STT) systems have many benefits for deaf or mute people and find applications in our daily lives. Accordingly, the aim of this system is to convert input speech signals into text output for deaf or mute students in the educational field. This paper presents an approach that extracts features from the speech signals of isolated spoken words using Mel Frequency Cepstral Coefficients (MFCC), and applies the Hidden Markov Model (HMM) method to train and test the audio files to obtain the recognized spoken word. The speech database is created using MATLAB. The original speech signals are then preprocessed, and feature vectors are extracted from the speech samples to serve as the observation sequences of the HMM recognizer. The feature vectors are analyzed in the HMM depending on the number of states.
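The recognition decision in an isolated-word HMM system of this kind reduces to scoring the observation sequence under each word's model and taking the argmax. A minimal sketch with discrete observations (assuming MFCC vectors have already been extracted and vector-quantized to symbols; the toy single-state models are illustrative):

```python
import math

def log_likelihood(obs, pi, A, B):
    """Log P(obs | model) via the scaled forward algorithm."""
    N = len(pi)
    a = [pi[i] * B[i][obs[0]] for i in range(N)]
    ll = 0.0
    for t in range(1, len(obs)):
        c = sum(a)
        ll += math.log(c)
        a = [x / c for x in a]                       # rescale to avoid underflow
        a = [sum(a[j] * A[j][i] for j in range(N)) * B[i][obs[t]]
             for i in range(N)]
    return ll + math.log(sum(a))

def recognize(obs, word_models):
    """Return the word whose HMM scores the observation sequence highest."""
    return max(word_models,
               key=lambda w: log_likelihood(obs, *word_models[w]))

# Toy vocabulary: one biased single-state model per word (hypothetical values).
models = {
    "zero": ([1.0], [[1.0]], [[0.9, 0.1]]),
    "one":  ([1.0], [[1.0]], [[0.1, 0.9]]),
}
```

In a real system each word model would be a multi-state left-right HMM trained on many utterances of that word.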

  7. HMM Logos for visualization of protein families

    Directory of Open Access Journals (Sweden)

    Schultz Jörg

    2004-01-01

    Full Text Available Abstract Background Profile Hidden Markov Models (pHMMs) are a widely used tool for protein family research. Up to now, however, there has been no method to visualize all of their central aspects graphically in an intuitively understandable way. Results We present a visualization method that incorporates both emission and transition probabilities of the pHMM, thus extending the sequence logos introduced by Schneider and Stephens. For each emitting state of the pHMM, we display a stack of letters. The stack height is determined by the deviation of the position's letter emission frequencies from the background frequencies. The stack width visualizes both the probability of reaching the state (the hitting probability) and the expected number of letters the state emits during a pass through the model (the state's expected contribution). A web interface offering online creation of HMM Logos and the corresponding source code can be found at the Logos web server of the Max Planck Institute for Molecular Genetics http://logos.molgen.mpg.de. Conclusions We demonstrate that HMM Logos can be a useful tool for the biologist: we use them to highlight differences between two homologous subfamilies of GTPases, Rab and Ras, and we show that they are able to indicate structural elements of Ras.

  8. Hidden Neural Networks: A Framework for HMM/NN Hybrids

    DEFF Research Database (Denmark)

    Riis, Søren Kamaric; Krogh, Anders Stærmose

    1997-01-01

    This paper presents a general framework for hybrids of hidden Markov models (HMM) and neural networks (NN). In the new framework called hidden neural networks (HNN) the usual HMM probability parameters are replaced by neural network outputs. To ensure a probabilistic interpretation the HNN is nor...... HMMs on TIMIT continuous speech recognition benchmarks. On the task of recognizing five broad phoneme classes an accuracy of 84% is obtained compared to 76% for a standard HMM. Additionally, we report a preliminary result of 69% accuracy on the TIMIT 39 phoneme task...

  9. Online adaptive learning of Left-Right Continuous HMM for bearings condition assessment

    International Nuclear Information System (INIS)

    Cartella, F; Liu, T; Meganck, S; Lemeire, J; Sahli, H

    2012-01-01

    Standard Hidden Markov Models (HMMs) approaches used for condition assessment of bearings assume that all the possible system states are fixed and known a priori and that training data from all of the associated states are available. Moreover, the training procedure is performed offline, and only once at the beginning, with the available training set. These assumptions significantly impede component diagnosis applications when all of the possible states of the system are not known in advance or environmental factors or operative conditions change during the tool's usage. The method introduced in this paper overcomes the above limitations and proposes an approach to detect unknown degradation modalities using a Left-Right Continuous HMM with a variable state space. The proposed HMM is combined with Change Point Detection algorithms to (i) estimate, from historical observations, the initial number of the model's states, as well as to perform an initial guess of the parameters, and (ii) to adaptively recognize new states and, consequently, adjust the model parameters during monitoring. The approach has been tested using real monitoring data taken from the NASA benchmark repository. A comparative study with state of the art techniques shows improvements in terms of reduction of the training procedure iterations, and early detection of unknown states.
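The Left-Right (Bakis) topology named in the title constrains the transition matrix so that states can only persist or advance, which matches monotonic degradation. A minimal constructor for that structure (the self-loop probability is an assumed illustrative value; the paper additionally makes the number of states variable at run time):

```python
def left_right_A(n_states, stay=0.6):
    """Left-right (Bakis) transition matrix: each state either self-loops
    with probability `stay` or advances to the next state; the final
    state is absorbing. No backward transitions are allowed."""
    A = [[0.0] * n_states for _ in range(n_states)]
    for i in range(n_states - 1):
        A[i][i] = stay
        A[i][i + 1] = 1.0 - stay
    A[-1][-1] = 1.0
    return A
```

Adding a newly detected degradation state then amounts to rebuilding this matrix with `n_states + 1` and re-estimating the affected parameters.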

  10. Cluster-Based Adaptation Using Density Forest for HMM Phone Recognition

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    The dissimilarity between the training and test data in speech recognition systems is known to have a considerable effect on the recognition accuracy. To solve this problem, we use a density forest to cluster the data and the maximum a posteriori (MAP) method to build cluster-based adapted Gaussian...... mixture models (GMMs) in HMM speech recognition. Specifically, a set of bagged versions of the training data for each state in the HMM is generated, and each of these versions is used to generate one GMM and one tree in the density forest. Thereafter, an acoustic model forest is built by replacing...... the data of each leaf (cluster) in each tree with the corresponding GMM adapted by the leaf data using the MAP method. The results show that the proposed approach achieves a 3.8% (absolute) lower phone error rate compared with the standard HMM/GMM and a 0.8% (absolute) lower PER compared with bagged HMM/GMM....
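The MAP adaptation step used here can be illustrated for a single one-dimensional Gaussian mean: the adapted mean interpolates between the prior mean and the data mean, weighted by the effective amount of data. The relevance factor `tau` is an assumed hyperparameter, and this sketch omits covariance and weight adaptation.

```python
def map_adapt_mean(prior_mean, data, resp, tau=10.0):
    """MAP adaptation of a Gaussian mean.

    data: observed values assigned to this component (e.g. a leaf cluster);
    resp: per-observation responsibilities in [0, 1];
    tau:  relevance factor; larger tau trusts the prior mean more.
    """
    n = sum(resp)                                        # effective count
    sample_mean = sum(r * x for r, x in zip(resp, data)) / n
    return (tau * prior_mean + n * sample_mean) / (tau + n)
```

With little data (`n` small relative to `tau`) the result stays near the prior; with abundant data it approaches the sample mean, which is what makes MAP adaptation robust for small clusters.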

  11. Effect of HMM Glutenin Subunits on Wheat Quality Attributes

    Directory of Open Access Journals (Sweden)

    Daniela Horvat

    2009-01-01

    Full Text Available Glutenin is a group of polymeric gluten proteins. Glutenin molecules consist of glutenin subunits linked together by disulphide bonds, of higher (HMM-GS) and lower (LMM-GS) molecular mass. The main objective of this study is the evaluation of the influence of HMM-GS on flour processing properties. Seven bread wheat genotypes with contrasting quality attributes and different HMM-GS composition were analyzed over three years. The composition and quantity of HMM-GS were determined by SDS-PAGE and RP-HPLC, respectively. The quality diversity among genotypes was estimated by the analysis of wheat grain, flour and bread quality parameters. The presence of HMM glutenin subunits 1 and 2* at the Glu-A1 locus and of subunits 5+10 at the Glu-D1 locus, as well as a higher proportion of total HMM-GS, had a positive effect on wheat quality. Cluster analysis of the three groups of data (genotype and HMM-GS, flour and bread quality, and dough rheology) yielded the same hierarchical structure for the top three levels, and the similarity of the corresponding dendrograms was proved by the principal eigenvalues of the corresponding Euclidean distance matrices. The obtained similarity in classification based on essentially different types of measurements reflects a strong natural association between genetic data, product quality and physical properties. Principal component analysis (PCA) was applied to effectively reduce the large data set into lower dimensions of latent variables amenable to analysis. PCA of the total set of data (15 variables) revealed a very strong interrelationship between the variables. The first three PCA components accounted for 96 % of the total variance, which was significant to the level of 0.05, considered as the level of experimental error. These data imply that the quality of wheat cultivars can be attributed to HMM-GS data, which should be taken into account in breeding programs assisted by computer models with the aim to

  12. Important factors in HMM-based phonetic segmentation

    CSIR Research Space (South Africa)

    Van Niekerk, DR

    2007-11-01

    Full Text Available , window and step sizes. Taking into account that the segmentation system trains and applies the HMM models on a single speaker only, our first concern was the applicability of the window and step sizes that are commonly used for speech recognition...

  13. Appropriate baseline values for HMM-based speech recognition

    CSIR Research Space (South Africa)

    Barnard, E

    2004-11-01

    Full Text Available A number of issues related to the development of speech recognition systems with Hidden Markov Models (HMM) are discussed. A set of systematic experiments using the HTK toolkit and the TIMIT database are used to elucidate matters such as the number...

  14. An HMM posterior decoder for sequence feature prediction that includes homology information

    DEFF Research Database (Denmark)

    Käll, Lukas; Krogh, Anders Stærmose; Sonnhammer, Erik L. L.

    2005-01-01

    Motivation: When predicting sequence features like transmembrane topology, signal peptides, coil-coil structures, protein secondary structure or genes, extra support can be gained from homologs. Results: We present here a general hidden Markov model (HMM) decoding algorithm that combines probabil......://phobius.cgb.ki.se/poly.html . An implementation of the algorithm is available on request from the authors....

  15. An improved segmentation-based HMM learning method for Condition-based Maintenance

    International Nuclear Information System (INIS)

    Liu, T; Lemeire, J; Cartella, F; Meganck, S

    2012-01-01

    In the domain of condition-based maintenance (CBM), persistence of machine states is a valid assumption. Based on this assumption, we present an improved Hidden Markov Model (HMM) learning algorithm for the assessment of equipment states. With a good estimation of initial parameters, more accurate learning can be achieved than with regular HMM learning methods, which start from randomly chosen initial parameters; it is also better at avoiding local maxima. The data is segmented with a change-point analysis method that uses a combination of cumulative sum charts (CUSUM) and bootstrapping techniques. The method determines a confidence level that a state change happens. After the data is segmented, a clustering technique based on a low-pass filter or on root mean square (RMS) values of the features is used to label and combine the segments corresponding to the same states. The segments with their labelled hidden states are taken as 'evidence' to estimate the parameters of an HMM. The estimated parameters then serve as initial parameters for the traditional Baum-Welch (BW) learning algorithm, which refines the parameters and trains the model. Experiments on simulated and real data demonstrate that both performance and convergence speed are improved.
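The CUSUM-style change-point step can be sketched as locating the point where the cumulative deviation from the overall mean peaks. This omits the bootstrapping stage the paper uses to attach a confidence level to the detected change; it only shows the core statistic.

```python
def cusum_change_point(xs):
    """Estimate a single change point in a sequence.

    Builds the cumulative sum of deviations from the global mean;
    the index where its magnitude peaks is the most likely change point.
    """
    mean = sum(xs) / len(xs)
    s, walk = 0.0, [0.0]
    for x in xs:
        s += x - mean
        walk.append(s)
    return max(range(len(walk)), key=lambda i: abs(walk[i]))
```

In a segmentation pipeline this is applied recursively: split at the detected point (if the bootstrap confidence is high enough), then recurse on each half.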

  16. A Bayesian Approach for Structural Learning with Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Cen Li

    2002-01-01

    Full Text Available Hidden Markov Models (HMMs) have proved to be a successful modeling paradigm for dynamic and spatial processes in many domains, such as speech recognition, genomics, and general sequence alignment. Typically, in these applications, the model structures are predefined by domain experts. Therefore, the HMM learning problem focuses on learning the parameter values of the model to fit the given data sequences. However, in other domains, such as economics and physiology, a model structure capturing the system's dynamic behavior is not available. In order to successfully apply the HMM methodology in these domains, it is important that a mechanism be available for automatically deriving the model structure from the data. This paper presents an HMM learning procedure that simultaneously learns the model structure and the maximum likelihood parameter values of an HMM from data. The HMM model structures are derived based on the Bayesian model selection methodology. In addition, we introduce a new initialization procedure for HMM parameter value estimation based on the K-means clustering method. Experimental results with artificially generated data show the effectiveness of the approach.
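Bayesian model selection over HMM structures is often instantiated with a penalized-likelihood criterion such as BIC; a sketch under that assumption (the paper's exact criterion may differ). The point is that adding states must buy enough likelihood to offset the parameter penalty.

```python
import math

def hmm_free_params(n_states, n_symbols):
    """Free parameters of a discrete HMM: pi has n_states - 1 free entries,
    and each row of A and B loses one degree of freedom to normalization."""
    return ((n_states - 1)
            + n_states * (n_states - 1)
            + n_states * (n_symbols - 1))

def bic(log_likelihood, n_states, n_symbols, n_obs):
    """Bayesian Information Criterion for a discrete HMM; lower is better."""
    k = hmm_free_params(n_states, n_symbols)
    return -2.0 * log_likelihood + k * math.log(n_obs)
```

A structure search then fits models with increasing `n_states` and keeps the one minimizing `bic(...)`.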

  17. Comparison of HMM experts with MLP experts in the Full Combination Multi-Band Approach to Robust ASR

    OpenAIRE

    Hagen, Astrid; Morris, Andrew

    2000-01-01

    In this paper we apply the Full Combination (FC) multi-band approach, originally introduced in the framework of posterior-based HMM/ANN (Hidden Markov Model/Artificial Neural Network) hybrid systems, to systems in which the ANN (or Multilayer Perceptron (MLP)) is itself replaced by a Multi-Gaussian HMM (MGM). Both systems represent the most widely used statistical models for robust ASR (automatic speech recognition). It is shown how the FC formula for the likelihood-based MGMs...

  18. Fault diagnosis of nuclear-powered equipment based on HMM and SVM

    International Nuclear Information System (INIS)

    Yue Xia; Zhang Chunliang; Zhu Houyao; Quan Yanming

    2012-01-01

    Given the complexity of nuclear-powered equipment and the small number of fault samples available, a hybrid HMM/SVM method is introduced for fault diagnosis. The hybrid method has two steps: first, an HMM is utilized for primary diagnosis, narrowing the range of possible failures and revealing state trends; then faults are recognized by taking advantage of the generalization ability of SVM. Experiments on the main pump failure simulator show that the HMM/SVM system has a high recognition rate and can be used in the fault diagnosis of nuclear-powered equipment. (authors)

  19. HMM-ModE – Improved classification using profile hidden Markov models by optimising the discrimination threshold and modifying emission probabilities with negative training sequences

    Directory of Open Access Journals (Sweden)

    Nandi Soumyadeep

    2007-03-01

    Full Text Available Abstract Background Profile Hidden Markov Models (HMMs) are statistical representations of protein families derived from patterns of sequence conservation in multiple alignments and have been used to identify remote homologues with considerable success. These conservation patterns arise from fold-specific signals, shared across multiple families, and function-specific signals unique to the families. The availability of sequences pre-classified according to their function permits the use of negative training sequences to improve the specificity of the HMM, both by optimizing the threshold cutoff and by modifying emission probabilities to minimize the influence of fold-specific signals. A protocol to generate family-specific HMMs is described that first constructs a profile HMM from an alignment of the family's sequences and then uses this model to identify sequences belonging to other classes that score above the default threshold (false positives). Ten-fold cross-validation is used to optimise the discrimination threshold score for the model. The advent of fast multiple alignment methods enables the use of the profile alignments to align the true and false positive sequences, and the resulting alignments are used to modify the emission probabilities in the original model. Results The protocol, called HMM-ModE, was validated on a set of sequences belonging to six sub-families of the AGC family of kinases. These sequences have an average sequence similarity of 63% within the group, though each sub-group has a different substrate specificity. The optimisation of the discrimination threshold, using negative sequences scored against the model, improves specificity in test cases from an average of 21% to 98%. Further discrimination by the HMM after modifying model probabilities using negative training sequences is provided in a few cases, the average specificity rising to 99%. Similar improvements were obtained with a sample of G-protein coupled receptors.
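The threshold-optimization idea, choosing a score cutoff using both positive and negative training scores, can be sketched as a simple search over candidate cutoffs. This minimizes total misclassifications for illustration; HMM-ModE's actual protocol uses ten-fold cross-validation, which this sketch omits.

```python
def best_threshold(pos_scores, neg_scores):
    """Pick the score cutoff that best separates positive (family) from
    negative (other-class) training scores, by minimizing
    false negatives (positives below cutoff) plus
    false positives (negatives at or above cutoff)."""
    candidates = sorted(set(pos_scores) | set(neg_scores))

    def errors(t):
        fn = sum(s < t for s in pos_scores)
        fp = sum(s >= t for s in neg_scores)
        return fn + fp

    return min(candidates, key=errors)
```

Sequences scoring at or above the returned cutoff are then classified as family members; the same machinery works with any per-sequence HMM score.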

  20. HMM based automated wheelchair navigation using EOG traces in EEG

    Science.gov (United States)

    Aziz, Fayeem; Arof, Hamzah; Mokhtar, Norrima; Mubin, Marizan

    2014-10-01

    This paper presents a wheelchair navigation system based on a hidden Markov model (HMM), which we developed to assist those with restricted mobility. The semi-autonomous system is equipped with obstacle/collision avoidance sensors and it takes the electrooculography (EOG) signal traces from the user as commands to maneuver the wheelchair. The EOG traces originate from eyeball and eyelid movements and they are embedded in EEG signals collected from the scalp of the user at three different locations. Features extracted from the EOG traces are used to determine whether the eyes are open or closed, and whether the eyes are gazing to the right, center, or left. These features are utilized as inputs to a few support vector machine (SVM) classifiers, whose outputs are regarded as observations to an HMM. The HMM determines the state of the system and generates commands for navigating the wheelchair accordingly. The use of simple features and the implementation of a sliding window that captures important signatures in the EOG traces result in a fast execution time and high classification rates. The wheelchair is equipped with a proximity sensor and it can move forward and backward in three directions. The asynchronous system achieved an average classification rate of 98% when tested with online data while its average execution time was less than 1 s. It was also tested in a navigation experiment where all of the participants managed to complete the tasks successfully without collisions.

  1. Using features of local densities, statistics and HMM toolkit (HTK) for offline Arabic handwriting text recognition

    Directory of Open Access Journals (Sweden)

    El Moubtahij Hicham

    2017-12-01

    Full Text Available This paper presents an analytical approach to an offline handwritten Arabic text recognition system. It is based on the Hidden Markov Model (HMM) Toolkit (HTK) without explicit segmentation. The first phase is preprocessing, where the data is introduced into the system after quality enhancement. Then, a set of characteristics (features of local densities and statistical features) is extracted using the technique of sliding windows. Subsequently, the resulting feature vectors are fed to the Hidden Markov Model Toolkit (HTK). The simple database “Arabic-Numbers” and IFN/ENIT are used to evaluate the performance of this system. Keywords: Hidden Markov Model (HMM) Toolkit (HTK), Sliding windows
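The sliding-window density features can be sketched on a 1-D profile of per-column ink counts. The window and step sizes are assumed illustrative values, and real systems extract several densities per window (e.g. per horizontal zone) rather than one.

```python
def sliding_density_features(col_ink_counts, win=4, step=2):
    """Mean ink density per sliding window.

    col_ink_counts: number of foreground pixels in each image column
    (i.e. the text line scanned left to right). Each window of `win`
    columns, advanced by `step`, yields one density feature; the
    resulting sequence feeds the HMM as its observation stream.
    """
    feats = []
    for start in range(0, len(col_ink_counts) - win + 1, step):
        window = col_ink_counts[start:start + win]
        feats.append(sum(window) / win)
    return feats
```

Because the windows sweep the line in writing order, no explicit character segmentation is needed: the HMM's states absorb the alignment between windows and characters.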

  2. Spotting handwritten words and REGEX using a two stage BLSTM-HMM architecture

    Science.gov (United States)

    Bideault, Gautier; Mioulet, Luc; Chatelain, Clément; Paquet, Thierry

    2015-01-01

    In this article, we propose a hybrid model for spotting words and regular expressions (REGEX) in handwritten documents. The model combines a state-of-the-art BLSTM (Bidirectional Long Short-Term Memory) neural network for recognizing and segmenting characters with an HMM that builds line models able to spot the desired sequences. Experiments on the Rimes database show very promising results.

  3. Bearing Performance Degradation Assessment Using Linear Discriminant Analysis and Coupled HMM

    International Nuclear Information System (INIS)

    Liu, T; Chen, J; Zhou, X N; Xiao, W B

    2012-01-01

    Bearings are among the most important units in rotary machinery, and their performance may vary significantly across different working stages. It is therefore critical to choose the most effective features for bearing performance degradation prediction. Linear Discriminant Analysis (LDA) is a useful method for finding the few feature dimensions that best discriminate a set of features extracted from the original vibration signals. Another challenge in bearing performance degradation assessment is how to build a model that recognizes the different conditions from data coming from several monitoring channels. In this paper, coupled hidden Markov models (CHMM) are presented to model interacting processes, overcoming the deficiencies of the HMM. Because the input data of a CHMM are collected by several sensors and the interacting information can be fused across the coupled modalities, it is more effective than an HMM, which uses only one state chain. The model can be used to estimate the bearing performance degradation states from several observation sequences. For degradation pattern recognition, the new observation features are input into the pre-trained CHMM and the performance index (PI) of the outputs is calculated; the change in PI describes the degradation level of the bearings. The results show that PI declines as bearing degradation increases. Assessment results on whole-lifetime experimental bearing signals validate the feasibility and effectiveness of this method.

  4. A Novel Approach to Detect Network Attacks Using G-HMM-Based Temporal Relations between Internet Protocol Packets

    Directory of Open Access Journals (Sweden)

    Han Kyusuk

    2011-01-01

    Full Text Available This paper introduces novel attack detection approaches for mobile and wireless device security and networks that consider temporal relations between internet packets. We first present a field selection technique using a Genetic Algorithm and generate a Packet-based Mining Association Rule (PMAR) from an original Mining Association Rule for a Support Vector Machine in the mobile and wireless network environment. Through preprocessing with PMAR, SVM inputs can account for time variation between packets in the mobile and wireless network. We then present a Gaussian observation Hidden Markov Model (G-HMM) to exploit the hidden relationships between packets based on probabilistic estimation. In our G-HMM approach, we also apply G-HMM feature reduction for better initialization. We demonstrate the usefulness of our SVM and G-HMM approaches with GA on the MIT Lincoln Lab datasets and a live dataset that we captured on a real mobile and wireless network. Moreover, experimental results are verified by k-fold cross-validation.

  5. Objective measures to improve the selection of training speakers in HMM-based child speech synthesis

    CSIR Research Space (South Africa)

    Govender, Avashna

    2016-12-01

    Full Text Available Building synthetic child voices is considered a difficult task due to the challenges associated with data collection. As a result, speaker adaptation in conjunction with Hidden Markov Model (HMM)-based synthesis has become prevalent in this domain...

  6. Explorations in the History of Machines and Mechanisms : Proceedings of HMM2012

    CERN Document Server

    Ceccarelli, Marco

    2012-01-01

    This book contains the proceedings of HMM2012, the 4th International Symposium on Historical Developments in the field of Mechanism and Machine Science (MMS). These proceedings cover recent research concerning all aspects of the development of MMS from antiquity until the present and its historiography: machines, mechanisms, kinematics, dynamics, concepts and theories, design methods, collections of methods, collections of models, institutions and biographies.

  7. An HMM-based comparative genomic framework for detecting introgression in eukaryotes.

    Directory of Open Access Journals (Sweden)

    Kevin J Liu

    2014-06-01

    Full Text Available One outcome of interspecific hybridization and the subsequent effects of evolutionary forces is introgression: the integration of genetic material from one species into the genome of an individual of another species. The evolution of several groups of eukaryotic species has involved hybridization, and cases of adaptation through introgression have already been established. In this work, we report on PhyloNet-HMM, a new comparative genomic framework for detecting introgression in genomes. PhyloNet-HMM combines phylogenetic networks with hidden Markov models (HMMs) to simultaneously capture the (potentially reticulate) evolutionary history of the genomes and dependencies within genomes. A novel aspect of our work is that it also accounts for incomplete lineage sorting and dependence across loci. Application of our model to variation data from chromosome 7 of the mouse (Mus musculus domesticus) genome detected a recently reported adaptive introgression event involving the rodent poison resistance gene Vkorc1, in addition to other newly detected introgressed genomic regions. Based on our analysis, we estimate that about 9% of all sites within chromosome 7 are of introgressive origin (covering about 13 Mbp of chromosome 7 and over 300 genes). Further, our model detected no introgression in a negative control data set. We also found that our model accurately detected introgression and other evolutionary processes in synthetic data sets simulated under the coalescent model with recombination, isolation, and migration. Our work provides a powerful framework for the systematic analysis of introgression while simultaneously accounting for dependence across sites, point mutations, recombination, and ancestral polymorphism.

  8. HMMBinder: DNA-Binding Protein Prediction Using HMM Profile Based Features.

    Science.gov (United States)

    Zaman, Rianon; Chowdhury, Shahana Yasmin; Rashid, Mahmood A; Sharma, Alok; Dehzangi, Abdollah; Shatabda, Swakkhar

    2017-01-01

    DNA-binding proteins often play an important role in various processes within the cell. Over the last decade, a wide range of classification algorithms and feature extraction techniques have been used to solve this problem. In this paper, we propose a novel DNA-binding protein prediction method called HMMBinder. HMMBinder uses monogram and bigram features extracted from the HMM profiles of the protein sequences. To the best of our knowledge, this is the first application of HMM profile based features to the DNA-binding protein prediction problem. We applied Support Vector Machines (SVM) as the classification technique in HMMBinder. Our method was tested on standard benchmark datasets. We experimentally show that our method outperforms the state-of-the-art methods found in the literature.
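Monogram and bigram features over a profile matrix are a common construction in this literature: per-residue column averages, and averages of products of consecutive position rows. This sketch follows that common definition; HMMBinder's exact normalization may differ, and the tiny two-letter "profile" below is purely illustrative.

```python
def monogram_bigram_features(profile):
    """Fixed-length features from a T x A profile matrix
    (T positions, A alphabet letters, e.g. A = 20 amino acids).

    Monogram: mean of each column (A values).
    Bigram:   mean over consecutive position pairs of the product of
              letter a's score at t and letter b's score at t+1
              (A x A values)."""
    T, A = len(profile), len(profile[0])
    mono = [sum(profile[t][a] for t in range(T)) / T for a in range(A)]
    bi = [[sum(profile[t][a] * profile[t + 1][b] for t in range(T - 1))
           / (T - 1) for b in range(A)] for a in range(A)]
    return mono, bi
```

Because the output length depends only on the alphabet size (A + A*A values), proteins of any length map to a fixed-size vector suitable for an SVM.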

  9. HMMBinder: DNA-Binding Protein Prediction Using HMM Profile Based Features

    Directory of Open Access Journals (Sweden)

    Rianon Zaman

    2017-01-01

    Full Text Available DNA-binding proteins often play important roles in various processes within the cell. Over the last decade, a wide range of classification algorithms and feature extraction techniques have been used to solve this problem. In this paper, we propose a novel DNA-binding protein prediction method called HMMBinder. HMMBinder uses monogram and bigram features extracted from the HMM profiles of the protein sequences. To the best of our knowledge, this is the first application of HMM profile based features for the DNA-binding protein prediction problem. We applied Support Vector Machines (SVM) as a classification technique in HMMBinder. Our method was tested on standard benchmark datasets. We experimentally show that our method outperforms the state-of-the-art methods found in the literature.

  10. Improving a HMM-based off-line handwriting recognition system using MME-PSO optimization

    Science.gov (United States)

    Hamdani, Mahdi; El Abed, Haikal; Hamdani, Tarek M.; Märgner, Volker; Alimi, Adel M.

    2011-01-01

    One of the nontrivial steps in the development of a classifier is the design of its architecture. This paper presents a new algorithm, Multi Models Evolvement (MME), using Particle Swarm Optimization (PSO). This algorithm is a modified version of the basic PSO and is used for the unsupervised design of Hidden Markov Model (HMM) based architectures. The proposed algorithm is applied to an Arabic handwriting recognizer based on discrete probability HMMs. After the optimization of their architectures, the HMMs are trained with the Baum-Welch algorithm. The validation of the system is based on the IfN/ENIT database. The performance of the developed approach is compared to the systems participating in the 2005 competition on Arabic handwriting recognition at the International Conference on Document Analysis and Recognition (ICDAR). The final system is a combination of an optimized HMM with 6 other HMMs obtained by a simple variation of the number of states. An absolute improvement of 6% in word recognition rate, to about 81%, is achieved relative to the basic system (ARAB-IfN). The proposed recognizer also outperforms most of the known state-of-the-art systems.

  11. Comparison of HMM and DTW methods in automatic recognition of pathological phoneme pronunciation

    OpenAIRE

    Wielgat, Robert; Zielinski, Tomasz P.; Swietojanski, Pawel; Zoladz, Piotr; Król, Daniel; Wozniak, Tomasz; Grabias, Stanislaw

    2007-01-01

    In the paper, recently proposed Human Factor Cepstral Coefficients (HFCC) are applied to automatic recognition of pathological phoneme pronunciation in the speech of impaired children, and the efficiency of this approach is compared to that of the standard Mel-Frequency Cepstral Coefficients (MFCC) as a feature vector. Both dynamic time warping (DTW), working on whole words or embedded phoneme patterns, and hidden Markov models (HMM) are used as classifiers in the presented research. Obtained resul...

  12. Research study on harmonized molecular materials (HMM); Bunshi kyocho zairyo ni kansuru chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    Harmonized molecular materials (HMM) were surveyed as functional materials to satisfy various needs for environmental harmonization and efficient conversion in information-oriented and aging societies. Living bodies effectively carry out transmission/processing of information and transport/conversion of substances, and these functions are based on harmonization between organic molecules, and between those and metal or inorganic ones. HMM is a key substance for artificially realizing these bio-related functions. Its R & D aims at (1) making a breakthrough in production processes based on innovation of material separation/conversion technology, (2) contributing to an information-oriented society through high-efficiency devices, and (3) growing a functional bio-material industry. HMM is classified into three categories: (1) assembly materials such as organic ultra-thin films (LB film, self-organizing film) and organic/inorganic hybrid materials for optoelectronics, sensors and devices, (2) mesophase materials such as functional separation membranes and photo-conductive materials, and (3) microporous materials such as synthetic catalysts using guest/host materials. 571 refs., 88 figs., 21 tabs.

  13. Name segmentation using hidden Markov models and its application in record linkage

    Directory of Open Access Journals (Sweden)

    Rita de Cassia Braga Gonçalves

    2014-10-01

    Full Text Available This study aimed to evaluate the use of hidden Markov models (HMM) for the segmentation of person names and its influence on record linkage. An HMM was applied to the segmentation of patients' and mothers' names in the databases of the Mortality Information System (SIM), the Information Subsystem for High Complexity Procedures (APAC), and the Hospital Information System (AIH). A sample of 200 patients from each database was segmented via HMM, and the results were compared to those from segmentation by the authors. The APAC-SIM and APAC-AIH databases were linked using three different segmentation strategies, one of which used HMM. Conformity of segmentation via HMM varied from 90.5% to 92.5%. The different segmentation strategies yielded similar results in the record linkage process. This study suggests that segmentation of Brazilian names via HMM is no more effective than traditional segmentation approaches in the linkage process.

  14. A method for identifying gas-liquid two-phase flow patterns on the basis of wavelet packet multi-scale information entropy and HMM

    International Nuclear Information System (INIS)

    Zhou Yunlong; Zhang Xueqing; Gao Yunpeng; Cheng Yue

    2009-01-01

    For studying flow regimes of gas/liquid two-phase flow in a vertical upward pipe, the conductance fluctuation information of four typical flow regimes was collected by a measuring system with self-made multiple conductivity probes. Owing to the non-stationarity of conductance fluctuation signals of gas-liquid two-phase flow, a flow regime identification method based on wavelet packet multi-scale information entropy and a Hidden Markov Model (HMM) was put forward. First of all, the collected conductance fluctuation signals were decomposed into eight different frequency band signals. Secondly, the wavelet packet multi-scale information entropies of the different frequency band signals were taken as the input characteristic vectors of the trained HMMs for all states. In the end, the regime identification of the gas-liquid two-phase flow could be performed. The study showed that the HMM-based flow regime identification method was superior to the BP neural network-based one, and the results proved that the method was efficient and feasible. (authors)
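
    The entropy feature described above can be sketched as follows. The wavelet packet decomposition itself is omitted (the eight band signals are assumed already computed), and this particular Shannon-entropy definition over normalized sample energies is an illustrative assumption, not necessarily the paper's exact formula.

    ```python
    import math

    # Sketch of a multi-scale information entropy feature: for each
    # frequency band signal, normalize the per-sample energies into a
    # probability distribution and take its Shannon entropy. One value
    # per band yields the characteristic vector fed to the HMMs.

    def band_entropy(band):
        """Shannon entropy of the normalized sample-energy distribution."""
        energies = [x * x for x in band]
        total = sum(energies)
        probs = [e / total for e in energies if e > 0]
        return -sum(p * math.log(p) for p in probs)

    # Evenly spread energy -> high entropy; impulsive energy -> low entropy.
    flat = band_entropy([1.0, 1.0, 1.0, 1.0])   # log(4)
    spiky = band_entropy([0.0, 0.0, 4.0, 0.0])  # 0.0
    ```

    A different conductance fluctuation signature per flow regime produces a different entropy vector, which is what lets the per-regime HMMs discriminate.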

  15. A hidden Markov model approach for determining expression from genomic tiling micro arrays

    Directory of Open Access Journals (Sweden)

    Krogh Anders

    2006-05-01

    Full Text Available Abstract Background Genomic tiling micro arrays have great potential for identifying previously undiscovered coding as well as non-coding transcription. To date, however, analyses of these data have been performed in an ad hoc fashion. Results We present a probabilistic procedure, ExpressHMM, that adaptively models tiling data prior to predicting expression on genomic sequence. A hidden Markov model (HMM) is used to model the distributions of tiling array probe scores in expressed and non-expressed regions. The HMM is trained on sets of probes mapped to regions of annotated expression and non-expression. Subsequently, prediction of transcribed fragments is made on tiled genomic sequence. The prediction is accompanied by an expression probability curve for visual inspection of the supporting evidence. We test ExpressHMM on data from the Cheng et al. (2005) tiling array experiments on ten human chromosomes. Results can be downloaded and viewed from our web site. Conclusion The value of adaptive modelling of fluorescence scores prior to categorisation into expressed and non-expressed probes is demonstrated. Our results indicate that our adaptive approach is superior to the previous analysis in terms of nucleotide sensitivity and transfrag specificity.
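
    The "expression probability curve" idea can be sketched with a two-state HMM (expressed / non-expressed) and the forward-backward posterior P(state = expressed | probe scores). All parameter values below are illustrative, not ExpressHMM's trained parameters, and probe scores are discretized to two symbols for brevity.

    ```python
    # Forward-backward posterior for a two-state HMM over a discrete
    # observation sequence. State 0 plays the role of "expressed".

    def posterior(obs, pi, A, B):
        n, T = len(pi), len(obs)
        # Forward pass
        f = [[pi[i] * B[i][obs[0]] for i in range(n)]]
        for t in range(1, T):
            f.append([B[i][obs[t]] * sum(f[-1][j] * A[j][i] for j in range(n))
                      for i in range(n)])
        # Backward pass
        b = [[1.0] * n for _ in range(T)]
        for t in range(T - 2, -1, -1):
            b[t] = [sum(A[i][j] * B[j][obs[t + 1]] * b[t + 1][j] for j in range(n))
                    for i in range(n)]
        # Posterior probability of state 0 at each position
        like = sum(f[-1])
        return [f[t][0] * b[t][0] / like for t in range(T)]

    pi = [0.5, 0.5]
    A = [[0.9, 0.1], [0.1, 0.9]]   # sticky states: regions persist
    B = [[0.8, 0.2], [0.2, 0.8]]   # symbol 0 favors the "expressed" state
    curve = posterior([0, 0, 0, 1, 1, 1], pi, A, B)
    ```

    Plotting such a curve along the tiled sequence gives exactly the kind of visual supporting evidence the abstract describes.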

  16. HMMEditor: a visual editing tool for profile hidden Markov model

    Directory of Open Access Journals (Sweden)

    Cheng Jianlin

    2008-03-01

    Full Text Available Abstract Background The profile Hidden Markov Model (HMM) is a powerful statistical model for representing a family of DNA, RNA, or protein sequences. Profile HMMs have been widely used in bioinformatics research such as sequence alignment, gene structure prediction, motif identification, protein structure prediction, and biological database search. However, few comprehensive, visual editing tools for profile HMMs are publicly available. Results We develop a visual editor for profile Hidden Markov Models (HMMEditor). HMMEditor can visualize the profile HMM architecture, transition probabilities, and emission probabilities. Moreover, it provides functions to edit and save the HMM and its parameters. Furthermore, HMMEditor allows users to align a sequence against the profile HMM and to visualize the corresponding Viterbi path. Conclusion HMMEditor provides a set of unique functions to visualize and edit a profile HMM. It is a useful tool for biological sequence analysis and modeling. Both the HMMEditor software and web service are freely available.

  17. Development of TTS Engine for Indian Accent using Modified HMM Algorithm

    Directory of Open Access Journals (Sweden)

    Sasanko Sekhar Gantayat

    2018-03-01

    Full Text Available A text-to-speech (TTS) system converts normal language text into speech. An intelligent text-to-speech program allows people with visual impairments or reading disabilities to listen to written works on a home computer. Many computer operating systems and day-to-day software applications like Adobe Reader have included text-to-speech systems. This paper shows how an HMM can be used as a tool to convert text to speech.

  18. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    Science.gov (United States)

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

    Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models so one preprocessing can be used to run a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
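
    The textbook forward recursion that zipHMM accelerates can be sketched as follows; its cost is the O(T·N²) the abstract cites (T observations, N states). The toy parameter values are illustrative only.

    ```python
    # Plain forward algorithm: likelihood of a discrete observation
    # sequence under an HMM with initial distribution pi, transition
    # matrix A, and emission matrix B.

    def forward_likelihood(obs, pi, A, B):
        """P(obs | model) via the forward recursion, O(T * N^2)."""
        n = len(pi)
        alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
        for t in range(1, len(obs)):
            alpha = [B[i][obs[t]] * sum(alpha[j] * A[j][i] for j in range(n))
                     for i in range(n)]
        return sum(alpha)

    pi = [0.6, 0.4]
    A = [[0.7, 0.3], [0.4, 0.6]]
    B = [[0.5, 0.5], [0.1, 0.9]]
    p = forward_likelihood([0, 1, 1], pi, A, B)
    ```

    zipHMM's insight is that in genome-scale inputs the inner recursion is applied to the same substrings over and over, so the corresponding matrix products can be computed once and reused.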

  19. Neuroevolution Mechanism for Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-12-01

    Full Text Available The Hidden Markov Model (HMM) is a statistical model based on probabilities. HMM is becoming one of the major models involved in many applications such as natural language processing, handwritten recognition, image processing, prediction systems and many more. In this research we are concerned with finding the best HMM for a certain application domain. We propose a neuroevolution process that is based first on converting the HMM to a neural network, then generating many neural networks at random, where each represents an HMM. We proceed by applying genetic operators to obtain a new set of neural networks, where each represents an HMM, and updating the population. Finally, we select the best neural network based on a fitness function.

  20. An Efficient Algorithm for Modelling Duration in Hidden Markov Models, with a Dramatic Application

    DEFF Research Database (Denmark)

    Hauberg, Søren; Sloth, Jakob

    2008-01-01

    For many years, the hidden Markov model (HMM) has been one of the most popular tools for analysing sequential data. One frequently used special case is the left-right model, in which the order of the hidden states is known. If knowledge of the duration of a state is available it is not possible to represent it explicitly with an HMM. Methods for modelling duration with HMMs do exist (Rabiner in Proc. IEEE 77(2):257-286, [1989]), but they come at the price of increased computational complexity. Here we present an efficient and robust algorithm for modelling duration in HMMs, and this algorithm...

  1. HMM_Model-Checker pour la vérification probabiliste HMM_Model ...

    African Journals Online (AJOL)

    ASSIA

    Abstract. Probabilistic verification for embedded systems continues to attract more and more followers in the research community. Given a probabilistic model, a formula of temporal logic describing a property of a system, and an exploration algorithm to check whether the property is satisfied ...

  2. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets

  3. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States); Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla. CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  4. Hidden Markov models in automatic speech recognition

    Science.gov (United States)

    Wrzoskowicz, Adam

    1993-11-01

    This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMMs designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM model of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.

  5. Improved hidden Markov model for nosocomial infections.

    Science.gov (United States)

    Khader, Karim; Leecaster, Molly; Greene, Tom; Samore, Matthew; Thomas, Alun

    2014-12-01

    We propose a novel hidden Markov model (HMM) for parameter estimation in hospital transmission models, and show that commonly made simplifying assumptions can lead to severe model misspecification and poor parameter estimates. A standard HMM that embodies two commonly made simplifying assumptions, namely a fixed patient count and binomially distributed detections is compared with a new alternative HMM that does not require these simplifying assumptions. Using simulated data, we demonstrate how each of the simplifying assumptions used by the standard model leads to model misspecification, whereas the alternative model results in accurate parameter estimates. © The Authors 2013. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  6. Electricity Price Forecast Using Combined Models with Adaptive Weights Selected and Errors Calibrated by Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Da Liu

    2013-01-01

    Full Text Available A combined forecast, with weights adaptively selected and errors calibrated by a Hidden Markov model (HMM), is proposed to model the day-ahead electricity price. Firstly, several single models were built to forecast the electricity price separately. Then the validation errors from every individual model were transformed into two discrete sequences, an emission sequence and a state sequence, to build the HMM, obtaining a transition matrix and an emission matrix representing the forecasting-ability state of the individual models. The combining weights of the individual models were decided by the state transition matrix in the HMM and the best-predicted sample ratio of each individual model among all the models in the validation set. The individual forecasts were averaged to get the combined forecast with the weights obtained above. The residuals of the combined forecast were calibrated by the possible error calculated from the emission matrix of the HMM. A case study of the day-ahead electricity market of Pennsylvania-New Jersey-Maryland (PJM), USA, suggests that the proposed method outperforms individual price forecasting techniques such as support vector machine (SVM), generalized regression neural networks (GRNN), day-ahead modeling, and self-organized map (SOM) similar-days modeling.

  7. Monthly streamflow forecasting based on hidden Markov model and Gaussian Mixture Regression

    Science.gov (United States)

    Liu, Yongqi; Ye, Lei; Qin, Hui; Hong, Xiaofeng; Ye, Jiajun; Yin, Xingli

    2018-06-01

    Reliable streamflow forecasts can be highly valuable for water resources planning and management. In this study, we combined a hidden Markov model (HMM) and Gaussian Mixture Regression (GMR) for probabilistic monthly streamflow forecasting. The HMM is initialized using a kernelized K-medoids clustering method, and the Baum-Welch algorithm is then executed to learn the model parameters. GMR derives a conditional probability distribution for the predictand given covariate information, including the antecedent flow at a local station and two surrounding stations. The performance of HMM-GMR was verified based on the mean square error and continuous ranked probability score skill scores. The reliability of the forecasts was assessed by examining the uniformity of the probability integral transform values. The results show that HMM-GMR obtained reasonably high skill scores and the uncertainty spread was appropriate. Different HMM states were assumed to be different climate conditions, which would lead to different types of observed values. We demonstrated that the HMM-GMR approach can handle multimodal and heteroscedastic data.

  8. Using hidden Markov models to align multiple sequences.

    Science.gov (United States)

    Mount, David W

    2009-07-01

    A hidden Markov model (HMM) is a probabilistic model of a multiple sequence alignment (msa) of proteins. In the model, each column of symbols in the alignment is represented by a frequency distribution of the symbols (called a "state"), and insertions and deletions are represented by other states. One moves through the model along a particular path from state to state in a Markov chain (i.e., random choice of next move), trying to match a given sequence. The next matching symbol is chosen from each state, recording its probability (frequency) and also the probability of going to that state from a previous one (the transition probability). State and transition probabilities are multiplied to obtain a probability of the given sequence. The hidden nature of the HMM is due to the lack of information about the value of a specific state, which is instead represented by a probability distribution over all possible values. This article discusses the advantages and disadvantages of HMMs in msa and presents algorithms for calculating an HMM and the conditions for producing the best HMM.
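
    The probability computation the passage describes can be sketched on a toy profile HMM with match states only (insertions and deletions omitted): moving left to right, each step multiplies the transition probability into the next state by that state's emission probability for the observed symbol. The three-column profile below is invented for illustration.

    ```python
    # Score a sequence along the match-state path of a toy profile HMM.

    def path_probability(seq, emissions, transitions):
        """Product of transition-into-state and emission probabilities."""
        p = 1.0
        for sym, emit, trans in zip(seq, emissions, transitions):
            p *= trans * emit.get(sym, 0.0)
        return p

    # One emission distribution ("state") per alignment column:
    emissions = [
        {"A": 0.7, "G": 0.3},
        {"C": 0.9, "T": 0.1},
        {"G": 0.8, "A": 0.2},
    ]
    transitions = [1.0, 0.9, 0.9]  # begin->M1, M1->M2, M2->M3
    p = path_probability("ACG", emissions, transitions)
    ```

    A real profile HMM sums (or maximizes, for Viterbi) over all paths through match, insert, and delete states; this sketch shows only the single-path product the abstract walks through.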

  9. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of the hidden Markov models (HMM) combining the power of explicit conditioning on past observations and the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for given model order but unknown parameters based on PHMM is presented. A forward-backward reestimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters. Proof of convergence of this reestimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt...

  10. Enhanced Map-Matching Algorithm with a Hidden Markov Model for Mobile Phone Positioning

    Directory of Open Access Journals (Sweden)

    An Luo

    2017-10-01

    Full Text Available Numerous map-matching techniques have been developed to improve positioning using Global Positioning System (GPS) data and other sensors. However, most existing map-matching algorithms process GPS data with high sampling rates to achieve a higher correct-match rate and strong universality. This paper introduces a novel map-matching algorithm based on a hidden Markov model (HMM) for GPS positioning and mobile phone positioning with a low sampling rate. The HMM is a statistical model well known for providing solutions to temporal recognition applications such as text and speech recognition. In this work, a hidden Markov chain model was built to establish a map-matching process, using geometric data, the topology matrix of road links in the road network, and a refined quad-tree data structure. HMM-based map-matching exploits the Viterbi algorithm to find the optimal road link sequence, which consists of hidden states in the HMM. The HMM-based map-matching algorithm is validated on a vehicle trajectory using GPS and mobile phone data. The results show a significant improvement in mobile phone positioning and in both high- and low-sampling-rate GPS data.
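
    The Viterbi step in HMM map-matching can be sketched as follows: hidden states are candidate road links, emission probabilities come from the distance between each GPS fix and each link, and transition probabilities reflect road topology. The tiny two-link network and all probability values below are invented for illustration.

    ```python
    # Generic Viterbi decoder over per-step emission probabilities.

    def viterbi(obs_probs, trans, init):
        """Most likely hidden state sequence given emission probs per step."""
        n = len(init)
        score = [init[i] * obs_probs[0][i] for i in range(n)]
        back = []
        for t in range(1, len(obs_probs)):
            prev, score, ptr = score, [], []
            for j in range(n):
                best = max(range(n), key=lambda i: prev[i] * trans[i][j])
                ptr.append(best)
                score.append(prev[best] * trans[best][j] * obs_probs[t][j])
            back.append(ptr)
        state = max(range(n), key=lambda j: score[j])
        path = [state]
        for ptr in reversed(back):
            state = ptr[state]
            path.append(state)
        return path[::-1]

    # Two road links; each row gives P(GPS fix | link) at one time step.
    obs_probs = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8]]
    trans = [[0.7, 0.3], [0.3, 0.7]]   # staying on a link is more likely
    route = viterbi(obs_probs, trans, [0.5, 0.5])
    ```

    The transition prior smooths over noisy fixes: a single ambiguous observation does not pull the decoded route onto an unconnected link, which is exactly what makes the HMM approach robust at low sampling rates.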

  11. Genetic Algorithms Principles Towards Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Nabil M. Hewahi

    2011-10-01

    Full Text Available In this paper we propose a general approach based on Genetic Algorithms (GAs) to evolve Hidden Markov Models (HMM). The problem appears when experts assign probability values for an HMM using only some limited inputs. The assigned probability values might not be accurate enough to serve in other cases related to the same domain. We introduce an approach based on GAs to find suitable probability values for the HMM, so that it is correct in more cases than those used to assign the probability values.

  12. Efficient view based 3-D object retrieval using Hidden Markov Model

    Science.gov (United States)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects have highly discriminative properties and multi-view representations. State-of-the-art methods depend heavily on their own camera array settings for capturing views of a 3-D object and use the complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, which means views are captured from any direction without any camera array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with the HMM. In our proposed method, the HMM is used twofold: in training (i.e., HMM estimation) and in retrieval (i.e., HMM decoding). The query model is trained using these view clusters, and the EVBOR query model works on the basis of the query model combined with the HMM. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme shows better performance than existing methods.

  13. Two-Stage Hidden Markov Model in Gesture Recognition for Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Nhan Nguyen-Duc-Thanh

    2012-07-01

    Full Text Available The Hidden Markov Model (HMM) is very rich in mathematical structure and hence can form the theoretical basis for a wide range of applications, including gesture representation. Most research in this field, however, uses only HMMs for recognizing simple gestures, while HMMs can definitely be applied to whole-gesture-meaning recognition. This is very effectively applicable in Human-Robot Interaction (HRI). In this paper, we introduce an approach for HRI in which not only can the human naturally control the robot by hand gesture, but the robot can also recognize what kind of task it is executing. The main idea behind this method is the two-stage Hidden Markov Model. The first HMM recognizes the prime, command-like gestures. Based on the sequence of prime gestures recognized in the first stage, which represents the whole action, the second HMM performs task recognition. Another contribution of this paper is that we use a mixed Gaussian output distribution in the HMM to improve the recognition rate. In the experiment, we also compare different numbers of hidden states and mixture components to obtain the optimal configuration, and compare to other methods to evaluate the performance.

  14. An Analysis and Implementation of the Hidden Markov Model to Technology Stock Prediction

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2017-11-01

    Full Text Available Future stock prices depend on many internal and external factors that are not easy to evaluate. In this paper, we use the Hidden Markov Model (HMM) to predict daily stock prices of three actively traded stocks: Apple, Google, and Facebook, based on their historical data. We first use the Akaike information criterion (AIC) and Bayesian information criterion (BIC) to choose the number of states for the HMM. We then use the models to predict close prices of these three stocks using both single-observation data and multiple-observation data. Finally, we use the predictions as signals for trading these stocks. The criteria tests' results showed that the HMM with two states worked the best among two, three and four states for the three stocks. Our results also demonstrate that the HMM outperformed the naïve method in forecasting stock prices, and that active traders using the HMM got a higher return than using the naïve forecast for Facebook and Google stocks. The stock price prediction method has a significant impact on stock trading and derivative hedging.
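
    The AIC/BIC state-selection step can be sketched as follows. The free-parameter count used here for a discrete HMM with n states and m symbols ((n-1) initial + n(n-1) transition + n(m-1) emission parameters) is one common convention, and the log-likelihood values are hypothetical, not the paper's fitted results.

    ```python
    import math

    # Model selection for HMMs: AIC = 2k - 2*lnL, BIC = k*ln(N) - 2*lnL,
    # where k is the number of free parameters and N the sample size.
    # Lower scores are better.

    def aic(log_likelihood, k):
        return 2 * k - 2 * log_likelihood

    def bic(log_likelihood, k, num_obs):
        return k * math.log(num_obs) - 2 * log_likelihood

    def hmm_param_count(n_states, n_symbols):
        return ((n_states - 1)                      # initial distribution
                + n_states * (n_states - 1)         # transition matrix rows
                + n_states * (n_symbols - 1))       # emission matrix rows

    # Hypothetical log-likelihoods from fitting 2-, 3-, 4-state HMMs to
    # 500 observations of a 4-symbol series:
    fits = {2: -612.0, 3: -608.5, 4: -607.9}
    scores = {n: bic(ll, hmm_param_count(n, 4), 500) for n, ll in fits.items()}
    best = min(scores, key=scores.get)
    ```

    With small likelihood gains, the complexity penalty dominates and the smaller model wins, which mirrors the paper's finding that two states sufficed.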

  15. Implementation of a Tour Guide Robot System Using RFID Technology and Viterbi Algorithm-Based HMM for Speech Recognition

    Directory of Open Access Journals (Sweden)

    Neng-Sheng Pai

    2014-01-01

    Full Text Available This paper applied speech recognition and RFID technologies to develop an omni-directional mobile robot with voice-control and guide-introduction functions. For speech recognition, the speech signals were captured by short-time processing. The speaker first recorded isolated words for the robot to create a speech database of specific speakers. After pre-processing of this speech database, the feature parameters of cepstrum and delta-cepstrum were obtained using linear predictive coding (LPC). The Hidden Markov Model (HMM) was then used to train models on the speech database, and the Viterbi algorithm was used to find an optimal state sequence as the reference sample for speech recognition. The trained reference models were loaded into the industrial computer on the robot platform, and the user uttered the isolated words to be tested. After the test utterance was processed in the same way and compared with the reference models, the model whose Viterbi path had the maximum total probability was taken as the recognition result. Finally, the speech recognition and RFID systems were deployed in an actual environment to demonstrate their feasibility and stability on the omni-directional mobile robot.
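The Viterbi search mentioned above is the textbook dynamic-programming recursion over states; a log-domain sketch with toy dictionaries (not the paper's trained speech models):

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely state sequence and its log-probability.
    obs is a sequence of observation symbols; the probability tables are
    plain nested dicts."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Pick the best predecessor state for s at time t
            prev, score = max(
                ((p, V[t - 1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda x: x[1])
            V[t][s] = score + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # Trace back from the best final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1], V[-1][last]
```

In a recognizer like the one described, this would be run once per trained word model, and the model whose best path scores highest is the recognition result.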

  16. A Framework for Bioacoustic Vocalization Analysis Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Ebenezer Out-Nyarko

    2009-11-01

    Full Text Available Using Hidden Markov Models (HMMs as a recognition framework for automatic classification of animal vocalizations has a number of benefits, including the ability to handle duration variability through nonlinear time alignment, the ability to incorporate complex language or recognition constraints, and easy extendibility to continuous recognition and detection domains. In this work, we apply HMMs to several different species and bioacoustic tasks using generalized spectral features that can be easily adjusted across species and HMM network topologies suited to each task. This experimental work includes a simple call type classification task using one HMM per vocalization for repertoire analysis of Asian elephants, a language-constrained song recognition task using syllable models as base units for ortolan bunting vocalizations, and a stress stimulus differentiation task in poultry vocalizations using a non-sequential model via a one-state HMM with Gaussian mixtures. Results show strong performance across all tasks and illustrate the flexibility of the HMM framework for a variety of species, vocalization types, and analysis tasks.

  17. HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.

    Science.gov (United States)

    Bharath, A; Madhvanath, Sriganesh

    2012-04-01

    Research on recognizing online handwritten words in Indic scripts is at an early stage compared to Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts--Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMMs): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon-free one as well as their combination. The best recognition accuracies obtained for 20,000-word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.
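The lexicon-pruning idea behind the Bag-of-Symbols representation can be caricatured as set overlap between symbol bags; the symbol names and overlap threshold below are invented for illustration, and the surviving entries would still be rescored by the (omitted) symbol-HMM sequence models:

```python
def bag_of_symbols(word_symbols):
    # Order-independent representation of a word's component symbols
    return frozenset(word_symbols)

def prune_lexicon(recognized_symbols, lexicon, min_overlap=0.5):
    """Keep only lexicon entries whose symbol bag sufficiently overlaps
    the symbols hypothesized from the handwriting."""
    query = bag_of_symbols(recognized_symbols)
    survivors = []
    for word, symbols in lexicon.items():
        bag = bag_of_symbols(symbols)
        overlap = len(query & bag) / len(bag)
        if overlap >= min_overlap:
            survivors.append(word)
    return survivors
```

Because the bag ignores symbol order, words written in a nonstandard writing order still match their lexicon entries, which is the motivation stated in the abstract.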

  18. A Self-Adaptive Hidden Markov Model for Emotion Classification in Chinese Microblogs

    Directory of Open Access Journals (Sweden)

    Li Liu

    2015-01-01

    We propose a modified version of the hidden Markov model (HMM) classifier, called self-adaptive HMM, whose parameters are optimized by Particle Swarm Optimization. Since manually labeling a large-scale dataset is difficult, we also employ entropy to decide whether a new unlabeled tweet should be added to the training dataset after being assigned an emotion by our HMM-based approach. In the experiment, we collected about 200,000 Chinese tweets from Sina Weibo. The results show that the F-score of our approach reaches 76% on happiness and fear and 65% on anger, surprise, and sadness. In addition, the self-adaptive HMM classifier outperforms Naive Bayes and Support Vector Machines on recognition of happiness, anger, and sadness.
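The entropy-based acceptance test described above can be sketched in a few lines; the class posterior here is a hypothetical dict of emotion probabilities produced by the (omitted) HMM classifier, and the threshold is illustrative:

```python
import math

def entropy(posterior):
    # Shannon entropy (in bits) of a class-probability distribution
    return -sum(p * math.log2(p) for p in posterior.values() if p > 0)

def accept_for_training(posterior, threshold=1.0):
    """Accept an unlabeled tweet as a new pseudo-labeled training example
    only if the classifier is confident, i.e. the posterior is peaked
    (low entropy) rather than spread over many emotions."""
    return entropy(posterior) <= threshold
```

A posterior like `{"happiness": 0.9, "anger": 0.05, "sadness": 0.05}` is accepted, while a near-uniform posterior over five emotions is rejected.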

  19. Compact Acoustic Models for Embedded Speech Recognition

    Directory of Open Access Journals (Sweden)

    Lévy Christophe

    2009-01-01

    Full Text Available Speech recognition applications are known to require a significant amount of resources, whereas embedded speech recognition allows only a few KB of memory, a few MIPS, and a small amount of training data. In order to fit the resource constraints of embedded applications, an approach based on a semicontinuous HMM system using state-independent acoustic modelling is proposed. A transformation is computed and applied to the global model in order to obtain each HMM state-dependent probability density function, so that only the transformation parameters need to be stored. This approach is evaluated on two tasks: digit and voice-command recognition. A fast adaptation technique for the acoustic models is also proposed. In order to significantly reduce computational costs, the adaptation is performed only on the global model (using related speaker-recognition adaptation techniques), with no need for state-dependent data. The whole approach results in a relative gain of more than 20% compared to a basic HMM-based system fitting the same constraints.

  20. Evaluation of soft segment modeling on a context independent phoneme classification system

    International Nuclear Information System (INIS)

    Razzazi, F.; Sayadiyan, A.

    2007-01-01

    The geometric distribution of state durations is one of the main performance-limiting assumptions of hidden Markov modeling of speech signals. Stochastic segment models in general, and segmental HMMs in particular, partly overcome this deficiency at the cost of more complexity in both the training and recognition phases. In addition to this assumption, the gradual temporal changes of speech statistics are not modeled in the HMM. In this paper, a new duration modeling approach is presented. The main idea of the model is to consider the effect of adjacent segments on the probability density function estimation and evaluation of each acoustic segment. This idea not only makes the model robust against segmentation errors, but also models the gradual change from one segment to the next with a minimum set of parameters. The proposed idea is analytically formulated and tested on a TIMIT-based context-independent phoneme classification system. During the test procedure, phoneme classification for the different phoneme classes was performed by applying the various proposed recognition algorithms. The system was optimized and the results compared with a continuous-density hidden Markov model (CDHMM) of similar computational complexity. The results show an 8-10% improvement in phoneme recognition rate over the standard CDHMM, indicating improved compatibility of the proposed model with the nature of speech. (author)

  1. SVM-dependent pairwise HMM: an application to protein pairwise alignments.

    Science.gov (United States)

    Orlando, Gabriele; Raimondi, Daniele; Khan, Taushif; Lenaerts, Tom; Vranken, Wim F

    2017-12-15

    Methods that provide reliable protein alignments are crucial for many bioinformatics applications. In recent years many different algorithms have been developed, and various kinds of information, from sequence conservation to secondary structure, have been used to improve alignment performance. This is especially relevant for proteins with highly divergent sequences. However, recent works suggest that different features may have different importance in different protein classes, and it would be an advantage to have more customizable approaches, capable of dealing with different alignment definitions. Here we present Rigapollo, a highly flexible pairwise alignment method based on a pairwise HMM-SVM that can use any type of information to build alignments. Rigapollo lets users choose the optimal features for aligning their protein class of interest. It outperforms current state-of-the-art methods on two well-known benchmark datasets when aligning highly divergent sequences. A Python implementation of the algorithm is available at http://ibsquare.be/rigapollo. wim.vranken@vub.be. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  2. A state-based probabilistic model for tumor respiratory motion prediction

    International Nuclear Information System (INIS)

    Kalet, Alan; Sandison, George; Schmitz, Ruth; Wu Huanmei

    2010-01-01

    This work proposes a new probabilistic mathematical model for predicting tumor motion and position based on a finite state representation using the natural breathing states of exhale, inhale and end of exhale. Tumor motion was broken down into linear breathing states and sequences of states. Breathing state sequences and the observables representing those sequences were analyzed using a hidden Markov model (HMM) to predict the future sequences and new observables. Velocities and other parameters were clustered using a k-means clustering algorithm to associate each state with a set of observables such that a prediction of state also enables a prediction of tumor velocity. A time average model with predictions based on average past state lengths was also computed. State sequences which are known a priori to fit the data were fed into the HMM algorithm to set a theoretical limit of the predictive power of the model. The effectiveness of the presented probabilistic model has been evaluated for gated radiation therapy based on previously tracked tumor motion in four lung cancer patients. Positional prediction accuracy is compared with actual position in terms of the overall RMS errors. Various system delays, ranging from 33 to 1000 ms, were tested. Previous studies have shown duty cycles for latencies of 33 and 200 ms at around 90% and 80%, respectively, for linear, no prediction, Kalman filter and ANN methods as averaged over multiple patients. At 1000 ms, the previously reported duty cycles range from approximately 62% (ANN) down to 34% (no prediction). Average duty cycle for the HMM method was found to be 100% and 91 ± 3% for 33 and 200 ms latency and around 40% for 1000 ms latency in three out of four breathing motion traces. RMS errors were found to be lower than linear and no prediction methods at latencies of 1000 ms. The results show that for system latencies longer than 400 ms, the time average HMM prediction outperforms linear, no prediction, and the more
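The k-means step that associates each breathing state with a set of observables can be illustrated with a minimal 1-D Lloyd's algorithm on scalar velocities (toy values, not the patients' motion traces):

```python
def kmeans_1d(values, k, iters=100):
    """Plain Lloyd's algorithm on scalar features: returns the centroids
    and the cluster label of each value, so that each velocity cluster can
    be associated with one state of the breathing-state model."""
    vals = sorted(values)
    if k > 1:
        # Initialize centroids at evenly spaced order statistics
        centroids = [vals[(len(vals) - 1) * i // (k - 1)] for i in range(k)]
    else:
        centroids = [sum(vals) / len(vals)]
    for _ in range(iters):
        # Assign every value to its nearest centroid
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Move each centroid to its cluster mean; stop at a fixed point
        new = [sum(c) / len(c) if c else centroids[i]
               for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    labels = [min(range(k), key=lambda i: abs(v - centroids[i])) for v in values]
    return centroids, labels
```

On velocities that fall into inhale, pause, and exhale regimes, the three clusters recover those regimes, and a state prediction then implies a velocity-cluster prediction as described in the abstract.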

  3. A first approach to Arrhythmogenic Cardiomyopathy detection through ECG and Hidden Markov Models

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez-Serrano, S.; Sanz Sanchez, J.; Martínez Hinarejos, C.D.; Igual Muñoz, B.; Millet Roig, J.; Zorio Grima, Z.; Castells, F.

    2016-07-01

    Arrhythmogenic Cardiomyopathy (ACM) is a heritable cardiac disease causing sudden cardiac death in young people. Its clinical diagnosis includes major and minor criteria based on alterations of the electrocardiogram (ECG). The aim of this study is to evaluate Hidden Markov Models (HMMs) and assess their potential for classifying, from 12-lead ECG recordings, subjects affected by ACM versus relatives who do not suffer from the disease. The database consists of 12-lead ECG recordings from 32 patients diagnosed with ACM and 37 relatives of those affected but without the gene mutation. Using the HTK toolkit and a hold-out strategy to train and evaluate a set of HMMs, we performed a grid search over the number of states and Gaussians of these models. Results show that two different HMMs achieved the best balance between sensitivity and specificity. The first needed 35 states and 2 Gaussians and attained a sensitivity of 0.7 and a specificity of 0.8; the second achieved a sensitivity of 0.8 and a specificity of 0.7 with 50 states and 4 Gaussians. These results show that HMMs can achieve an acceptable level of sensitivity and specificity in classifying ECG registers of those affected by ACM against the control group. All the above suggests that this approach could help detect the disease in a non-invasive way, especially in the context of family screening, improving the sensitivity of ECG-based detection. (Author)
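The grid search over the number of states and Gaussians amounts to scanning a 2-D parameter grid for the best sensitivity/specificity balance. In this sketch, `train_and_eval` is a hypothetical stand-in for the HTK train-and-evaluate run, and taking the minimum of sensitivity and specificity as the balance criterion is an assumption, since the abstract does not state the exact criterion used:

```python
from itertools import product

def grid_search(train_and_eval, state_counts, gaussian_counts):
    """Evaluate every (n_states, n_gaussians) pair and return the
    configuration maximizing the balance between sensitivity and
    specificity, here scored as min(sensitivity, specificity)."""
    best, best_score, results = None, -1.0, {}
    for n_states, n_gauss in product(state_counts, gaussian_counts):
        sens, spec = train_and_eval(n_states, n_gauss)
        results[(n_states, n_gauss)] = (sens, spec)
        score = min(sens, spec)  # balanced criterion (assumption)
        if score > best_score:
            best, best_score = (n_states, n_gauss), score
    return best, results
```

Any callable returning `(sensitivity, specificity)` per configuration can be plugged in, so the sketch also works with cross-validated estimates.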

  4. Evolving the structure of hidden Markov Models

    DEFF Research Database (Denmark)

    won, K. J.; Prugel-Bennett, A.; Krogh, A.

    2006-01-01

    A genetic algorithm (GA) is proposed for finding the structure of hidden Markov Models (HMMs) used for biological sequence analysis. The GA is designed to preserve biologically meaningful building blocks. The search through the space of HMM structures is combined with optimization of the emission and transition probabilities using the classic Baum-Welch algorithm. The system is tested on the problem of finding the promoter and coding region of C. jejuni. The resulting HMM has a superior discrimination ability to a handcrafted model that has been published in the literature.

  5. Detecting Structural Breaks using Hidden Markov Models

    DEFF Research Database (Denmark)

    Ntantamis, Christos

    Testing for structural breaks and identifying their location is essential for econometric modeling. In this paper, a Hidden Markov Model (HMM) approach is used in order to perform these tasks. Breaks are defined as the data points where the underlying Markov Chain switches from one state to another. The estimation of the HMM is conducted using a variant of the Iterative Conditional Expectation-Generalized Mixture (ICE-GEMI) algorithm proposed by Delignon et al. (1997), that permits analysis of the conditional distributions of economic data and allows for different functional forms across regimes.

  6. A Method for Driving Route Predictions Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available We present a driving route prediction method based on the Hidden Markov Model (HMM). The method can accurately predict a vehicle's entire route as early in a trip's lifetime as possible, without requiring origins and destinations to be input beforehand. Firstly, we propose the route recommendation system architecture, in which route predictions play an important role. Secondly, we define a road network model, normalize each driving route in the rectangular coordinate system, and build the HMM in preparation for route predictions, using a method of training set extension based on K-means++ and the add-one (Laplace) smoothing technique. Thirdly, we present the route prediction algorithm. Finally, we show experimental results on the effectiveness of the HMM-based route predictions.
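The add-one (Laplace) smoothing step can be sketched as follows: every segment-to-segment transition count is incremented by one, so routes through previously unseen transitions keep a nonzero probability. The segment identifiers below are invented for illustration:

```python
from collections import defaultdict

def transition_matrix(routes, segments):
    """Estimate P(next | current) over road segments from observed routes,
    with add-one smoothing. routes is a list of segment-ID sequences;
    segments is the full segment vocabulary."""
    counts = defaultdict(lambda: defaultdict(int))
    for route in routes:
        for cur, nxt in zip(route, route[1:]):
            counts[cur][nxt] += 1
    probs = {}
    for cur in segments:
        # Add-one smoothing: +1 in each numerator, +|segments| in the total
        total = sum(counts[cur].values()) + len(segments)
        probs[cur] = {nxt: (counts[cur][nxt] + 1) / total for nxt in segments}
    return probs
```

Each row sums to one, and a segment never observed as a predecessor (here "C") gets a uniform outgoing distribution instead of undefined probabilities.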

  7. Automatic generation of gene finders for eukaryotic species

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Krogh, A.

    2006-01-01

    Background The number of sequenced eukaryotic genomes is rapidly increasing. This means that over time it will be hard to keep supplying customised gene finders for each genome. This calls for procedures to automatically generate species-specific gene finders and to re-train them as the quantity and quality of reliable gene annotation grows. Results We present a procedure, Agene, that automatically generates a species-specific gene predictor from a set of reliable mRNA sequences and a genome. We apply a Hidden Markov model (HMM) that implements explicit length distribution modelling for all gene structure blocks using acyclic discrete phase type distributions. The state structure of each HMM is generated dynamically from an array of sub-models to include only gene features represented in the training set. Conclusion Acyclic discrete phase type distributions are well suited to model sequence length distributions.

  8. Inference with constrained hidden Markov models in PRISM

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints, and present constraint solving techniques for efficient inference; constraints such as all_different are integrated. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.

  9. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...

  10. Bayesian Mixed Hidden Markov Models: A Multi-Level Approach to Modeling Categorical Outcomes with Differential Misclassification

    Science.gov (United States)

    Zhang, Yue; Berhane, Kiros

    2014-01-01

    Questionnaire-based health status outcomes are often prone to misclassification. When studying the effect of risk factors on such outcomes, ignoring any potential misclassification may lead to biased effect estimates. Analytical challenges posed by these misclassified outcomes are further complicated when simultaneously exploring factors for both the misclassification and health processes in a multi-level setting. To address these challenges, we propose a fully Bayesian Mixed Hidden Markov Model (BMHMM) for handling differential misclassification in categorical outcomes in a multi-level setting. The BMHMM generalizes the traditional Hidden Markov Model (HMM) by introducing random effects into three sets of HMM parameters for joint estimation of the prevalence, transition and misclassification probabilities. This formulation not only allows joint estimation of all three sets of parameters, but also accounts for cluster level heterogeneity based on a multi-level model structure. Using this novel approach, both the true health status prevalence and the transition probabilities between the health states during follow-up are modeled as functions of covariates. The observed, possibly misclassified, health states are related to the true, but unobserved, health states and covariates. Results from simulation studies are presented to validate the estimation procedure, to show the computational efficiency due to the Bayesian approach and also to illustrate the gains from the proposed method compared to existing methods that ignore outcome misclassification and cluster level heterogeneity. We apply the proposed method to examine the risk factors for both asthma transition and misclassification in the Southern California Children's Health Study (CHS). PMID:24254432

  11. A Two-Channel Training Algorithm for Hidden Markov Model and Its Application to Lip Reading

    Directory of Open Access Journals (Sweden)

    Foo Say Wei

    2005-01-01

    Full Text Available The hidden Markov model (HMM) has been a popular mathematical approach for sequence classification, such as speech recognition, since the 1980s. In this paper, a novel two-channel training strategy is proposed for discriminative training of HMMs. For the proposed training strategy, a novel separable-distance function that measures the difference between a pair of training samples is adopted as the criterion function. The symbol emission matrix of an HMM is split into two channels: a static channel that maintains the validity of the HMM and a dynamic channel that is modified to maximize the separable distance. The parameters of the two-channel HMM are estimated by iterative application of expectation-maximization (EM) operations. As an example application of the novel approach, a hierarchical speaker-dependent visual speech recognition system is trained using the two-channel HMMs. Results of experiments on identifying a group of confusable visemes indicate that the proposed approach increases the recognition accuracy by an average of 20% compared with conventional HMMs trained with Baum-Welch estimation.

  12. Hidden Markov models for sequence analysis: extension and analysis of the basic method

    DEFF Research Database (Denmark)

    Hughey, Richard; Krogh, Anders Stærmose

    1996-01-01

    Hidden Markov models (HMMs) are a highly effective means of modeling a family of unaligned sequences or a common motif within a set of unaligned sequences. The trained HMM can then be used for discrimination or multiple alignment. The basic mathematical description of an HMM and its expectation-maximization training procedure is relatively straightforward. In this paper, we review the mathematical extensions and heuristics that move the method from the theoretical to the practical. Then, we experimentally analyze the effectiveness of model regularization, dynamic model modification, and optimization strategies. Finally it is demonstrated on the SH2 domain how a domain can be found from unaligned sequences using a special model type. The experimental work was completed with the aid of the Sequence Alignment and Modeling software suite.

  13. Forecasting oil price trends using wavelets and hidden Markov models

    International Nuclear Information System (INIS)

    Souza e Silva, Edmundo G. de; Souza e Silva, Edmundo A. de; Legey, Luiz F.L.

    2010-01-01

    The crude oil price is influenced by a great number of factors, most of which interact in very complex ways. For this reason, forecasting it through a fundamentalist approach is a difficult task. An alternative is to use time series methodologies, with which the price's past behavior is conveniently analyzed, and used to predict future movements. In this paper, we investigate the usefulness of a nonlinear time series model, known as hidden Markov model (HMM), to predict future crude oil price movements. Using an HMM, we develop a forecasting methodology that consists of, basically, three steps. First, we employ wavelet analysis to remove high frequency price movements, which can be assumed as noise. Then, the HMM is used to forecast the probability distribution of the price return accumulated over the next F days. Finally, from this distribution, we infer future price trends. Our results indicate that the proposed methodology might be a useful decision support tool for agents participating in the crude oil market. (author)

  14. Post processing of optically recognized text via second order hidden Markov model

    Science.gov (United States)

    Poudel, Srijana

    In this thesis, we describe a postprocessing system for Optical Character Recognition (OCR) generated text. A second-order Hidden Markov Model (HMM) approach is used to detect and correct OCR-related errors. The second-order HMM is chosen so that the model keeps track of bigrams and can therefore represent the system more accurately. Based on experiments with training data of 159,733 characters and testing on 5,688 characters, the model was able to correct 43.38% of the errors with a precision of 75.34%. However, the precision value indicates that the model introduced some new errors, decreasing the net correction percentage to 26.4%.
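Keeping track of bigrams, as described above, means conditioning each character on the two preceding ones; a sketch of the counting side of such a model (toy strings, not the OCR training corpus):

```python
from collections import Counter

def second_order_transitions(sequences):
    """Estimate P(c | a, b), the probability of character c given the two
    preceding characters, which is what lets a second-order HMM keep
    track of bigrams."""
    pair_counts = Counter()
    triple_counts = Counter()
    for seq in sequences:
        for a, b, c in zip(seq, seq[1:], seq[2:]):
            pair_counts[(a, b)] += 1
            triple_counts[(a, b, c)] += 1

    def prob(a, b, c):
        # Maximum-likelihood estimate; unseen contexts get probability 0
        if pair_counts[(a, b)] == 0:
            return 0.0
        return triple_counts[(a, b, c)] / pair_counts[(a, b)]

    return prob
```

An equivalent view is a first-order HMM whose states are character bigrams, so standard Viterbi decoding still applies during error correction.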

  15. HPeak: an HMM-based algorithm for defining read-enriched regions in ChIP-Seq data

    Directory of Open Access Journals (Sweden)

    Maher Christopher A

    2010-07-01

    Full Text Available Abstract Background Protein-DNA interaction constitutes a basic mechanism for the genetic regulation of target gene expression. Deciphering this mechanism has been a daunting task due to the difficulty in characterizing protein-bound DNA on a large scale. A powerful technique has recently emerged that couples chromatin immunoprecipitation (ChIP) with next-generation sequencing (ChIP-Seq). This technique provides a direct survey of the cistrome of transcription factors and other chromatin-associated proteins. In order to realize the full potential of this technique, increasingly sophisticated statistical algorithms have been developed to analyze the massive amount of data generated by this method. Results Here we introduce HPeak, a Hidden Markov model (HMM)-based Peak-finding algorithm for analyzing ChIP-Seq data to identify protein-interacting genomic regions. In contrast to the majority of available ChIP-Seq analysis software packages, HPeak is a model-based approach allowing for rigorous statistical inference. This approach enables HPeak to accurately infer genomic regions enriched with sequence reads by assuming realistic probability distributions, in conjunction with a novel weighting scheme on the sequencing read coverage. Conclusions Using biologically relevant data collections, we found that HPeak showed a higher prevalence of the expected transcription factor binding motifs in ChIP-enriched sequences relative to the control sequences when compared to other currently available ChIP-Seq analysis approaches. Additionally, in comparison to the ChIP-chip assay, ChIP-Seq provides higher resolution along with improved sensitivity and specificity of binding site detection. Additional files and the HPeak program are freely available at http://www.sph.umich.edu/csg/qin/HPeak.

  16. QRS complex detection based on continuous density hidden Markov models using univariate observations

    Science.gov (United States)

    Sotelo, S.; Arenas, W.; Altuve, M.

    2018-04-01

    In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain, since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous-density hidden Markov models (HMMs) is proposed. HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on comparing the log-likelihood of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized using receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with 2-state HMMs trained with QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We conclude that these univariate sequences provide enough information to characterize the QRS complex dynamics with HMMs. Future work is directed towards the use of multivariate observations to increase the detection performance.
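The detection rule described above pairs a forward-algorithm log-likelihood with a sliding-window threshold; a discrete-observation sketch with toy parameters (the paper uses continuous-density models):

```python
import math

def forward_loglik(obs, start_p, trans_p, emit_p):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm to avoid underflow."""
    states = list(start_p)
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    scale = sum(alpha.values())
    loglik = math.log(scale)
    alpha = {s: a / scale for s, a in alpha.items()}
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states}
        scale = sum(alpha.values())
        loglik += math.log(scale)
        alpha = {s: a / scale for s, a in alpha.items()}
    return loglik

def detect(signal, window, threshold, start_p, trans_p, emit_p):
    # Slide a fixed-length window over the symbol stream and flag the start
    # indices whose windowed log-likelihood reaches the threshold.
    return [i for i in range(len(signal) - window + 1)
            if forward_loglik(signal[i:i + window],
                              start_p, trans_p, emit_p) >= threshold]
```

In the paper's setting the threshold would be chosen from the ROC curve rather than fixed a priori.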

  17. Classification of Multiple Seizure-Like States in Three Different Rodent Models of Epileptogenesis.

    Science.gov (United States)

    Guirgis, Mirna; Serletis, Demitre; Zhang, Jane; Florez, Carlos; Dian, Joshua A; Carlen, Peter L; Bardakjian, Berj L

    2014-01-01

    Epilepsy is a dynamical disease and its effects are evident in over fifty million people worldwide. This study focused on objective classification of the multiple states involved in the brain's epileptiform activity. Four datasets from three different rodent hippocampal preparations were explored, wherein seizure-like events (SLEs) were induced by the perfusion of a low-Mg(2+)/high-K(+) solution or 4-Aminopyridine. Local field potentials were recorded from CA3 pyramidal neurons and interneurons and modeled as Markov processes. Specifically, hidden Markov models (HMMs) were used to determine the nature of the states present. Properties of the Hilbert transform were used to construct the feature spaces for HMM training. By sequentially applying the HMM training algorithm, multiple states were identified both in episodes of SLE and non-SLE activity. Specifically, pre-SLE and post-SLE states were differentiated and multiple inner SLE states were identified. This was accomplished using features extracted from the lower frequencies (1-4 Hz, 4-8 Hz) alongside those of both the low- (40-100 Hz) and high-gamma (100-200 Hz) bands of the recorded electrical activity. The learning paradigm of this HMM-based system eliminates the inherent bias associated with other learning algorithms that depend on predetermined state segmentation, and renders it an appropriate candidate for SLE classification.

  18. Hidden Markov modeling of frequency-following responses to Mandarin lexical tones.

    Science.gov (United States)

    Llanos, Fernando; Xie, Zilong; Chandrasekaran, Bharath

    2017-11-01

    The frequency-following response (FFR) is a scalp-recorded electrophysiological potential reflecting phase-locked activity from neural ensembles in the auditory system. The FFR is often used to assess the robustness of subcortical pitch processing. Due to the low signal-to-noise ratio at the single-trial level, FFRs are typically averaged across thousands of stimulus repetitions. Prior work using this approach has shown that subcortical encoding of linguistically-relevant pitch patterns is modulated by long-term language experience. We examine the extent to which a machine learning approach using hidden Markov modeling (HMM) can be utilized to decode Mandarin tone categories from scalp-recorded electrophysiological activity, and then assess the extent to which the HMM can capture biologically-relevant effects (language experience-driven plasticity). To this end, we recorded FFRs to four Mandarin tones from 14 adult native speakers of Chinese and 14 native speakers of English. We trained an HMM to decode tone categories from the FFRs with varying numbers of averaged trials. Tone categories were decoded with above-chance accuracies using the HMM. The HMM-derived metric (decoding accuracy) revealed a robust effect of language experience, such that FFRs from native Chinese speakers yielded greater accuracies than those from native English speakers. Critically, the language experience-driven plasticity was captured with average sizes significantly smaller than those used in the extant literature. Our results demonstrate the feasibility of HMM for assessing the robustness of neural pitch processing. Machine-learning approaches can complement extant analytical methods that capture auditory function and could reduce the number of trials needed to capture biological phenomena. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    Science.gov (United States)

    Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem

    2016-01-01

Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have focused on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for batch query searching, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
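
The cluster-then-search idea can be illustrated independently of profile alignment scoring. In this sketch, profiles are stood in for by fixed-length feature vectors and similarity by Euclidean distance; both are simplifications, but the clustering and the two-stage lookup mirror the structure of the approach, not its actual scoring.

```python
import numpy as np

def cluster_profiles(profiles, k, iters=20, seed=0):
    """Toy k-means over fixed-length profile feature vectors."""
    rng = np.random.default_rng(seed)
    centers = profiles[rng.choice(len(profiles), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((profiles[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = profiles[labels == j].mean(axis=0)
    return labels, centers

def search(query, profiles, labels, centers, top_clusters=1):
    """Score the query against cluster representatives first, then only
    against members of the best-matching clusters (reduced search space)."""
    d_centers = ((centers - query) ** 2).sum(-1)
    keep = np.argsort(d_centers)[:top_clusters]
    candidates = np.flatnonzero(np.isin(labels, keep))
    d = ((profiles[candidates] - query) ** 2).sum(-1)
    return candidates[np.argmin(d)]
```

Raising top_clusters trades speed back for recall, which is the same knob the overlapping-cluster step in the record is tuning.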

  20. Procedural Modeling for Digital Cultural Heritage

    Directory of Open Access Journals (Sweden)

    Simon Haegler

    2009-01-01

    Full Text Available The rapid development of computer graphics and imaging provides the modern archeologist with several tools to realistically model and visualize archeological sites in 3D. This, however, creates a tension between veridical and realistic modeling. Visually compelling models may lead people to falsely believe that there exists very precise knowledge about the past appearance of a site. In order to make the underlying uncertainty visible, it has been proposed to encode this uncertainty with different levels of transparency in the rendering, or of decoloration of the textures. We argue that procedural modeling technology based on shape grammars provides an interesting alternative to such measures, as they tend to spoil the experience for the observer. Both its efficiency and compactness make procedural modeling a tool to produce multiple models, which together sample the space of possibilities. Variations between the different models express levels of uncertainty implicitly, while letting each individual model keeping its realistic appearance. The underlying, structural description makes the uncertainty explicit. Additionally, procedural modeling also yields the flexibility to incorporate changes as knowledge of an archeological site gets refined. Annotations explaining modeling decisions can be included. We demonstrate our procedural modeling implementation with several recent examples.

  1. Hidden Markov models for labeled sequences

    DEFF Research Database (Denmark)

    Krogh, Anders Stærmose

    1994-01-01

    A hidden Markov model for labeled observations, called a class HMM, is introduced and a maximum likelihood method is developed for estimating the parameters of the model. Instead of training it to model the statistics of the training sequences it is trained to optimize recognition. It resembles MMI...

  2. Learning effective connectivity from fMRI using autoregressive hidden Markov model with missing data.

    Science.gov (United States)

    Dang, Shilpa; Chaudhury, Santanu; Lall, Brejesh; Roy, Prasun Kumar

    2017-02-15

Effective connectivity (EC) analysis of neuronal groups using fMRI delivers insights about functional integration. However, the fMRI signal has low temporal resolution due to down-sampling and indirectly measures underlying neuronal activity. The aim is to address the above issues for more reliable EC estimates. This paper proposes the use of an autoregressive hidden Markov model with missing data (AR-HMM-md) in a dynamically multi-linked (DML) framework for learning EC using multiple fMRI time series. In our recent work (Dang et al., 2016), we have shown how AR-HMM-md for modelling single fMRI time series outperforms the existing methods. AR-HMM-md models unobserved neuronal activity and lost data over time as variables and estimates their values by joint optimization given the fMRI observation sequence. The effectiveness in learning EC is shown using simulated experiments, and the effects of sampling and noise on EC are studied. Moreover, classification experiments are performed on Attention-Deficit/Hyperactivity Disorder subjects and age-matched controls for performance evaluation on real data. Using Bayesian model selection, we see that the proposed model converged to a higher log-likelihood, and group classification can be performed with a higher cross-validation accuracy of above 94% using the distinctive network EC which characterizes patients vs. controls. The full-data EC obtained from DML-AR-HMM-md is more consistent with previous literature than the classical multivariate Granger causality method. The proposed architecture leads to more reliable estimates of EC than the existing latent models. This framework overcomes the disadvantage of low temporal resolution and improves cross-validation accuracy significantly due to the presence of missing-data variables and the autoregressive process. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Dual Sticky Hierarchical Dirichlet Process Hidden Markov Model and Its Application to Natural Language Description of Motions.

    Science.gov (United States)

    Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen

    2017-09-25

    In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.

  4. Accelerating Information Retrieval from Profile Hidden Markov Model Databases.

    Directory of Open Access Journals (Sweden)

    Ahmad Tamimi

Full Text Available Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have focused on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for batch query searching, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.

  5. A procedure for building product models

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2001-01-01

This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes which are to be supported with product models. The next phase includes an analysis of the product assortment, and the set-up of a so-called product master. Finally the product model is designed and implemented using object oriented modelling. The procedure is developed in order to ensure that the product models constructed are fit for the business processes they support, and properly structured and documented, in order to facilitate that the systems can be maintained continually and further developed. The research has been carried out at the Centre for Industrialisation of Engineering, Department of Manufacturing Engineering, Technical...

  6. Partially Hidden Markov Models

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Rissanen, Jorma

    1996-01-01

    Partially Hidden Markov Models (PHMM) are introduced. They differ from the ordinary HMM's in that both the transition probabilities of the hidden states and the output probabilities are conditioned on past observations. As an illustration they are applied to black and white image compression where...

  7. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan; Lau, Cheryl; Mü ller, Pascal; Wonka, Peter; Pauly, Mark

    2017-01-01

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.
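
A toy string-rewriting sketch of the rule-switching ingredient: two grammars share a vocabulary, and a derivation may draw its rules from either grammar at each step. This omits the geometry, rule merging, and co-derivation of the actual system; it only shows the mechanism that the larger shape space builds on.

```python
def derive(axiom, rules, steps):
    """Apply context-free string-rewriting rules (a toy shape grammar)."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

def switched_derive(axiom, grammar_a, grammar_b, schedule):
    """Rule switching: at each derivation step use rules from grammar A or B."""
    s = axiom
    for which in schedule:
        g = grammar_a if which == "A" else grammar_b
        s = "".join(g.get(ch, ch) for ch in s)
    return s
```

Every schedule over {A, B} yields a design that mixes the two input grammars, which is the discrete version of the combined shape space discussed in the record.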

  8. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan

    2017-05-24

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  9. Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes

    DEFF Research Database (Denmark)

    Siepel, Adam; Bejerano, Gill; Pedersen, Jakob Skou

    2005-01-01

We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially...

  10. A hidden Markov model approach for determining expression from genomic tiling micro arrays

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Gardner, P. P.; Arctander, Peter

    2006-01-01

Background Genomic tiling micro arrays have great potential for identifying previously undiscovered coding as well as non-coding transcription. To-date, however, analyses of these data have been performed in an ad hoc fashion. Results We present a probabilistic procedure, ExpressHMM, that adaptively...

  11. A Novel Entropy-Based Decoding Algorithm for a Generalized High-Order Discrete Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Jason Chin-Tiong Chan

    2018-01-01

Full Text Available The optimal state sequence of a generalized High-Order Hidden Markov Model (HHMM) is tracked from a given observational sequence using the classical Viterbi algorithm. This classical algorithm is based on the maximum likelihood criterion. We introduce an entropy-based Viterbi algorithm for tracking the optimal state sequence of an HHMM. The entropy of a state sequence is a useful quantity, providing a measure of the uncertainty of an HHMM. There will be no uncertainty if there is only one possible optimal state sequence for the HHMM. This entropy-based decoding algorithm can be formulated in an extended or a reduction approach. We extend the entropy-based algorithm for computing the optimal state sequence that was developed for a first-order HMM to a generalized HHMM with a single observational sequence. This extended algorithm performs the computation exponentially with respect to the order of the HMM; its computational complexity is due to the growth of the model parameters. We introduce an efficient entropy-based decoding algorithm that uses the reduction approach, namely the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence of any generalized HHMM. The EOTFA algorithm transforms a generalized high-order HMM into an equivalent first-order HMM, and an entropy-based decoding algorithm is developed based on the equivalent first-order HMM. This algorithm performs the computation based on the observational sequence and requires O(TÑ^2) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observational sequence.
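
The order-reduction step mentioned above (rewriting a high-order HMM as a first-order HMM over tuples of states) can be sketched for the second-order case. The parameter names are illustrative: A2[a, b, c] is P(s_t = c | s_{t-2} = a, s_{t-1} = b), and A_init[a, b] is the first transition P(s_1 = b | s_0 = a).

```python
import numpy as np
from itertools import product

def to_first_order(pi, A_init, A2, B):
    """Collapse a second-order HMM into an equivalent first-order HMM whose
    states are ordered pairs (previous state, current state)."""
    N = B.shape[0]
    pairs = list(product(range(N), repeat=2))        # N^2 composite states
    idx = {p: i for i, p in enumerate(pairs)}
    A1 = np.zeros((N * N, N * N))
    for (a, b), i in idx.items():
        for c in range(N):
            # pair (a, b) can only move to a pair starting with b
            A1[i, idx[(b, c)]] = A2[a, b, c]
    B1 = np.array([B[b] for (_, b) in pairs])        # emit from the newer state
    pi1 = np.array([pi[a] * A_init[a, b] for (a, b) in pairs])
    return pi1, A1, B1
```

Any first-order forward, backward, or Viterbi routine can then run unchanged on (pi1, A1, B1), which is what makes the O(TÑ^2) complexity statement possible.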

  12. A Survey on Procedural Modelling for Virtual Worlds

    NARCIS (Netherlands)

    Smelik, R.M.; Tutenel, T.; Bidarra, R.; Benes, B.

    2014-01-01

    Procedural modelling deals with (semi-)automatic content generation by means of a program or procedure. Among other advantages, its data compression and the potential to generate a large variety of detailed content with reduced human intervention, have made procedural modelling attractive for

  13. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Science.gov (United States)

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
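
For reference, the classical 1-D Viterbi algorithm that the paper extends to multidimensional causal systems can be written in log space as follows (toy discrete-emission version):

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely state path for a discrete-emission HMM, in log space."""
    T, N = len(obs), len(log_pi)
    delta = log_pi + log_B[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A   # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):         # trace the backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Log space replaces products with sums, which avoids the underflow that plain probabilities hit on long sequences.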

  14. An Introduction to Infinite HMMs for Single-Molecule Data Analysis.

    Science.gov (United States)

    Sgouralis, Ioannis; Pressé, Steve

    2017-05-23

    The hidden Markov model (HMM) has been a workhorse of single-molecule data analysis and is now commonly used as a stand-alone tool in time series analysis or in conjunction with other analysis methods such as tracking. Here, we provide a conceptual introduction to an important generalization of the HMM, which is poised to have a deep impact across the field of biophysics: the infinite HMM (iHMM). As a modeling tool, iHMMs can analyze sequential data without a priori setting a specific number of states as required for the traditional (finite) HMM. Although the current literature on the iHMM is primarily intended for audiences in statistics, the idea is powerful and the iHMM's breadth in applicability outside machine learning and data science warrants a careful exposition. Here, we explain the key ideas underlying the iHMM, with a special emphasis on implementation, and provide a description of a code we are making freely available. In a companion article, we provide an important extension of the iHMM to accommodate complications such as drift. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  15. Detecting Seismic Events Using a Supervised Hidden Markov Model

    Science.gov (United States)

    Burks, L.; Forrest, R.; Ray, J.; Young, C.

    2017-12-01

We explore the use of supervised hidden Markov models (HMMs) to detect seismic events in streaming seismogram data. Current methods for seismic event detection include simple triggering algorithms, such as STA/LTA and the Z-statistic, which can lead to large numbers of false positives that must be investigated by an analyst. The hypothesis of this study is that more advanced detection methods, such as HMMs, may decrease false positives while maintaining accuracy similar to current methods. We train a binary HMM classifier using 2 weeks of 3-component waveform data from the International Monitoring System (IMS) that was carefully reviewed by an expert analyst to pick all seismic events. Using an ensemble of simple and discrete features, such as the triggering of STA/LTA, the HMM predicts the time at which a transition occurs from noise to signal. Compared to the STA/LTA detection algorithm, the HMM detects more true events, but the false positive rate remains unacceptably high. Future work to potentially decrease the false positive rate may include using continuous features, a Gaussian HMM, and multi-class HMMs to distinguish between types of seismic waves (e.g., P-waves and S-waves). Acknowledgement: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. SAND No: SAND2017-8154 A
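
The STA/LTA trigger used here, both as a baseline and as an HMM input feature, is the ratio of a short-term average of signal power to a long-term average, with a detection declared when the ratio crosses a threshold. A minimal sketch (window lengths and threshold below are illustrative, not the IMS settings):

```python
import numpy as np

def sta_lta(x, n_sta, n_lta):
    """Classic short-term / long-term average ratio on squared amplitudes.
    Both windows end at the same sample; ratio is defined from sample
    n_lta - 1 onward."""
    power = np.asarray(x, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(power)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta
    return sta[n_lta - n_sta:] / np.maximum(lta, 1e-12)

def trigger(x, n_sta, n_lta, thresh):
    """Index of the first ratio above threshold (relative to sample
    n_lta - 1), or -1 if no trigger fires."""
    r = sta_lta(x, n_sta, n_lta)
    hits = np.flatnonzero(r > thresh)
    return int(hits[0]) if hits.size else -1
```

Discretizing this trigger output per frame gives exactly the kind of simple feature ensemble the record feeds to the HMM.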

  16. Cough event classification by pretrained deep neural network.

    Science.gov (United States)

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

Cough is an essential symptom in respiratory diseases. In the measurement of cough severity, an accurate and objective cough monitor is expected by the respiratory disease society. This paper aims to introduce a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in the cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture temporal information in the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for a deep neural network is learned. The fine-tuning step is then back-propagation tuning of the neural network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally achieved by force-alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs, respectively. The final decision is made based on the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample. A sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient-dependent (PD) and patient-independent (PI) experimental settings were used to evaluate the models. Five criteria, sensitivity, specificity, F1, macro average and micro average, are shown to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM-based method on F1 and micro average, with maximal 14% and 11% error reduction in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity with maximal 14% error reduction on both PD and PI.
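
In hybrid DNN-HMM decoders of this kind, the DNN's frame-level state posteriors are typically converted to scaled likelihoods (posterior divided by state prior) before replacing GMM likelihoods in HMM decoding. A sketch of that conversion, with a toy stand-in for the sample-level cough decision (array shapes, priors, and the greedy frame labeling are illustrative, not the paper's Viterbi-based decision):

```python
import numpy as np

def scaled_log_likelihoods(posteriors, priors, eps=1e-10):
    """Convert DNN state posteriors P(state | frame), shape (frames, states),
    into scaled log-likelihoods: log P(frame | state) + const = log(post / prior)."""
    return np.log(posteriors + eps) - np.log(priors + eps)[None, :]

def is_cough(posteriors, priors, cough_states):
    """Toy stand-in for the sample-level decision: label the sample as cough
    if any frame's best (scaled-likelihood) state is a cough state."""
    best = np.argmax(scaled_log_likelihoods(posteriors, priors), axis=1)
    return bool(np.isin(best, cough_states).any())
```

Dividing by the prior corrects for class imbalance in the training alignment, so frequent states do not dominate the decoding.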

  17. Non-intrusive gesture recognition system combining with face detection based on Hidden Markov Model

    Science.gov (United States)

    Jin, Jing; Wang, Yuanqing; Xu, Liujing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao

    2014-11-01

A non-intrusive gesture recognition human-machine interaction system is proposed in this paper. In order to solve the hand positioning problem, which is a difficulty for current algorithms, face detection is used as a pre-processing step to narrow the search area and find the user's hand quickly and accurately. A Hidden Markov Model (HMM) is used for gesture recognition. A certain number of basic gesture units are trained as HMM models. At the same time, an improved 8-direction feature vector is proposed and used to quantize motion characteristics in order to improve detection accuracy. The proposed system can be applied in interactive equipment, such as household interactive televisions, without special training for users.
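
An 8-direction feature vector of the kind mentioned can be read as quantization of successive hand displacements into 45-degree sectors; the resulting code sequence is what a discrete-emission HMM would consume. The sector origin and the absence of smoothing below are guesses for illustration:

```python
import math

def direction_code(dx, dy):
    """Quantize a motion vector into one of 8 direction codes
    (0 = east, counting counter-clockwise in 45-degree sectors)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    return int((angle + math.pi / 8) // (math.pi / 4)) % 8

def encode_track(points):
    """Turn a hand trajectory into the observation sequence fed to the HMM."""
    return [direction_code(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
```

Shifting the angle by half a sector (pi/8) centers code 0 on due east, so small jitter around a cardinal direction does not flip the code.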

  18. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, then a mixed hardening model should be used.

  19. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, then a mixed hardening model should be used.

  20. Hidden Markov Model for quantitative prediction of snowfall

    Indian Academy of Sciences (India)

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992–2012. There are six ...

  1. A hidden Ising model for ChIP-chip data analysis

    KAUST Repository

    Mo, Q.

    2010-01-28

Motivation: Chromatin immunoprecipitation (ChIP) coupled with tiling microarray (chip) experiments have been used in a wide range of biological studies such as identification of transcription factor binding sites and investigation of DNA methylation and histone modification. Hidden Markov models are widely used to model the spatial dependency of ChIP-chip data. However, parameter estimation for these models is typically either heuristic or suboptimal, leading to inconsistencies in their applications. To overcome this limitation and to develop efficient software, we propose a hidden ferromagnetic Ising model for ChIP-chip data analysis. Results: We have developed a simple but powerful Bayesian hierarchical model for ChIP-chip data via a hidden Ising model. A Metropolis-within-Gibbs sampling algorithm is used to simulate from the posterior distribution of the model parameters. The proposed model naturally incorporates the spatial dependency of the data, and can be used to analyze data with various genomic resolutions and sample sizes. We illustrate the method using three publicly available datasets and various simulated datasets, and compare it with three closely related methods, namely TileMap HMM, tileHMM and BAC. We find that our method performs as well as TileMap HMM and BAC for the high-resolution data from the Affymetrix platform, but significantly outperforms the other three methods for the low-resolution data from the Agilent platform. Compared with the BAC method, which also involves MCMC simulations, our method is computationally much more efficient. Availability: A software package called iChip is freely available at http://www.bioconductor.org/. Contact: moq@mskcc.org. © The Author 2010. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oxfordjournals.org.
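
The flavor of the sampler can be conveyed with a 1-D toy version: binary enrichment labels with an Ising smoothing prior and Gaussian emissions, updated site by site by Gibbs sampling. The actual iChip model is richer (hierarchical priors, Metropolis steps for hyperparameters); all parameters below are illustrative.

```python
import numpy as np

def gibbs_sweep(z, y, beta, mu0, mu1, sigma, rng):
    """One Gibbs sweep over hidden labels z in {-1, +1} for probe scores y.
    Prior: 1-D Ising coupling beta between neighbouring probes.
    Likelihood: y_i ~ N(mu1, sigma) if enriched (z_i = +1), else N(mu0, sigma)."""
    n = len(z)
    for i in range(n):
        nb = (z[i - 1] if i > 0 else 0) + (z[i + 1] if i < n - 1 else 0)
        lp = beta * nb - (y[i] - mu1) ** 2 / (2 * sigma ** 2)   # z_i = +1
        lm = -beta * nb - (y[i] - mu0) ** 2 / (2 * sigma ** 2)  # z_i = -1
        p_plus = 1.0 / (1.0 + np.exp(lm - lp))
        z[i] = 1 if rng.random() < p_plus else -1
    return z
```

The coupling beta plays the role the HMM transition matrix plays in the competing methods: it rewards runs of identically labeled probes.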

  2. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

Full Text Available The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use the HMM for stock selection. We first use the HMM to make monthly regime predictions for four macroeconomic variables: inflation (consumer price index, CPI), the industrial production index (INDPRO), a stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, by December 2014 our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
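
The scoring-and-ranking step lends itself to a compact sketch: cross-sectionally standardize each stock characteristic, combine the standardized scores with weights, and take the top of the ranking. The weights here are placeholders; in the paper they come from the regime-matched historical analysis.

```python
import numpy as np

def composite_scores(features, weights):
    """Cross-sectionally z-score each characteristic (column), then weight
    and sum to get one composite score per stock (row)."""
    z = (features - features.mean(axis=0)) / features.std(axis=0)
    return z @ weights

def top_k(tickers, features, weights, k):
    """Return the k best tickers by composite score, best first."""
    order = np.argsort(composite_scores(features, weights))[::-1][:k]
    return [tickers[i] for i in order]
```

Standardizing first keeps characteristics measured on different scales (e.g., a ratio vs. a percentage) from dominating the composite by magnitude alone.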

  3. Optimisation of Hidden Markov Model using Baum–Welch algorithm ...

    Indian Academy of Sciences (India)

    The present work is a part of development of Hidden Markov Model. (HMM) based ... the Himalaya. In this work, HMMs have been developed for forecasting of maximum and minimum ..... data collection teams of Snow and Avalanche Study.

  4. Modeling Strategic Use of Human Computer Interfaces with Novel Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Laura Jane Mariano

    2015-07-01

    Full Text Available Immersive software tools are virtual environments designed to give their users an augmented view of real-world data and ways of manipulating that data. As virtual environments, every action users make while interacting with these tools can be carefully logged, as can the state of the software and the information it presents to the user, giving these actions context. This data provides a high-resolution lens through which dynamic cognitive and behavioral processes can be viewed. In this report, we describe new methods for the analysis and interpretation of such data, utilizing a novel implementation of the Beta Process Hidden Markov Model (BP-HMM) for analysis of software activity logs. We further report the results of a preliminary study designed to establish the validity of our modeling approach. A group of 20 participants were asked to play a simple computer game, instrumented to log every interaction with the interface. Participants had no previous experience with the game’s functionality or rules, so the activity logs collected during their naïve interactions capture patterns of exploratory behavior and skill acquisition as they attempted to learn the rules of the game. Pre- and post-task questionnaires probed for self-reported styles of problem solving, as well as task engagement, difficulty, and workload. We jointly modeled the activity log sequences collected from all participants using the BP-HMM approach, identifying a global library of activity patterns representative of the collective behavior of all the participants. Analyses show systematic relationships among pre- and post-task questionnaire responses, self-reported approaches to analytic problem solving, and metrics extracted from the BP-HMM decomposition. Overall, we find that this novel approach to decomposing unstructured behavioral data within software environments provides a sensible means for understanding how users learn to integrate software functionality for strategic

  5. Automated EEG sleep staging in the term-age baby using a generative modelling approach

    Science.gov (United States)

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    Objective. We develop a method for automated four-state sleep classification of preterm and term-born babies at term-age of 38-40 weeks postmenstrual age (the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from broader quiet sleep (QS) and active sleep (AS) stages to four, more complex states, and the quality and timing of this differentiation is indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS sleep classification. Approach. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling used to correct for some of the inter-recording variability, by standardizing each recording’s feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep state transition probabilities. Performance of the GMM and HMM (with and without scaling) were compared, and Cohen’s kappa agreement calculated between the estimates and clinicians’ visual labels. Main results. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16) compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. Significance. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. Results suggested a benefit in incorporating transition information using an HMM, and correcting for inter-recording variability through personalized feature scaling. Determining the timing and quality of these states are indicative of developmental delays in both preterm and term-born babies that may
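
The personalized feature scaling described above amounts to a per-recording z-score. A sketch, assuming each recording is represented as an epoch-by-feature matrix; the function name is illustrative:

```python
import statistics

def personalized_scaling(features):
    """Standardize one recording's epoch-by-feature matrix using that
    recording's own per-feature mean and standard deviation, so that
    inter-recording offsets are removed before HMM/GMM training."""
    cols = list(zip(*features))
    means = [statistics.fmean(c) for c in cols]
    stds = [statistics.pstdev(c) for c in cols]
    return [[(x - m) / s if s > 0 else 0.0
             for x, m, s in zip(row, means, stds)]
            for row in features]
```

Each recording is scaled independently, so a systematically "noisier" recording is mapped onto the same feature scale as a cleaner one.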

  6. A stochastic HMM-based forecasting model for fuzzy time series.

    Science.gov (United States)

    Li, Sheng-Tun; Cheng, Yi-Chung

    2010-10-01

    Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates the real mean of the target value being forecast.

  7. Typical NRC inspection procedures for model plant

    International Nuclear Information System (INIS)

    Blaylock, J.

    1984-01-01

    A summary of NRC inspection procedures for a model LEU fuel fabrication plant is presented. Procedures and methods for combining inventory data, seals, measurement techniques, and statistical analysis are emphasized.

  8. Clinical Prediction Performance of Glaucoma Progression Using a 2-Dimensional Continuous-Time Hidden Markov Model with Structural and Functional Measurements.

    Science.gov (United States)

    Song, Youngseok; Ishikawa, Hiroshi; Wu, Mengfei; Liu, Yu-Ying; Lucy, Katie A; Lavinsky, Fabio; Liu, Mengling; Wollstein, Gadi; Schuman, Joel S

    2018-03-20

    Previously, we introduced a state-based 2-dimensional continuous-time hidden Markov model (2D CT HMM) to model the pattern of detected glaucoma changes using structural and functional information simultaneously. The purpose of this study was to evaluate the detected glaucoma change prediction performance of the model in a real clinical setting using a retrospective longitudinal dataset. Longitudinal, retrospective study. One hundred thirty-four eyes from 134 participants diagnosed with glaucoma or as glaucoma suspects (average follow-up, 4.4±1.2 years; average number of visits, 7.1±1.8). A 2D CT HMM was trained using OCT (Cirrus HD-OCT; Zeiss, Dublin, CA) average circumpapillary retinal nerve fiber layer (cRNFL) thickness and visual field index (VFI) or mean deviation (MD; Humphrey Field Analyzer; Zeiss). The model was trained using a subset of the data (107 of 134 eyes [80%]) including all visits except for the last visit, which was used to test the prediction performance (training set). Additionally, the remaining 27 eyes were used for secondary performance testing as an independent group (validation set). The 2D CT HMM predicts 1 of 4 possible detected state changes based on 1 input state. Prediction accuracy was assessed as the percentage of correct prediction against the patient's actual recorded state. In addition, deviations of the predicted long-term detected change paths from the actual detected change paths were measured. Baseline mean ± standard deviation age was 61.9±11.4 years, VFI was 90.7±17.4, MD was -3.50±6.04 dB, and cRNFL thickness was 74.9±12.2 μm. The accuracy of detected glaucoma change prediction using the training set was comparable with the validation set (57.0% and 68.0%, respectively). Prediction deviation from the actual detected change path showed stability throughout patient follow-up. The 2D CT HMM demonstrated promising performance in predicting detected glaucoma change in a simulated clinical setting

  9. Context Analysis of Customer Requests using a Hybrid Adaptive Neuro Fuzzy Inference System and Hidden Markov Models in the Natural Language Call Routing Problem

    Science.gov (United States)

    Rustamov, Samir; Mustafayev, Elshan; Clements, Mark A.

    2018-04-01

    The context analysis of customer requests in a natural language call routing problem is investigated in the paper. One of the most significant problems in natural language call routing is comprehension of the client's request. To address this issue, hybrid HMM and ANFIS models are examined. Combining the two types of models (ANFIS and HMM) can prevent the system from misidentifying user intention in a dialogue system. Based on these models, the hybrid system may be employed in various language and call routing domains, because no lexical or syntactic analysis is used in the classification process.

  10. Context Analysis of Customer Requests using a Hybrid Adaptive Neuro Fuzzy Inference System and Hidden Markov Models in the Natural Language Call Routing Problem

    Directory of Open Access Journals (Sweden)

    Rustamov Samir

    2018-04-01

    Full Text Available The context analysis of customer requests in a natural language call routing problem is investigated in the paper. One of the most significant problems in natural language call routing is comprehension of the client's request. To address this issue, hybrid HMM and ANFIS models are examined. Combining the two types of models (ANFIS and HMM) can prevent the system from misidentifying user intention in a dialogue system. Based on these models, the hybrid system may be employed in various language and call routing domains, because no lexical or syntactic analysis is used in the classification process.

  11. Regime switching model for financial data: Empirical risk analysis

    Science.gov (United States)

    Salhi, Khaled; Deaconu, Madalina; Lejay, Antoine; Champagnat, Nicolas; Navet, Nicolas

    2016-11-01

    This paper constructs a regime switching model for univariate Value-at-Risk estimation. Extreme value theory (EVT) and hidden Markov models (HMM) are combined to estimate a hybrid model that takes volatility clustering into account. In the first stage, HMM is used to classify data into crisis and steady periods, while in the second stage, EVT is applied to the previously classified data to remove the delay between regime switches and their detection. This new model is applied to prices of numerous stocks exchanged on NYSE Euronext Paris over the period 2001-2011. We focus on daily returns, for which calibration has to be done on a small dataset. The relative performance of the regime switching model is benchmarked against other well-known modeling techniques, such as stable-distribution, power-law and GARCH models. The empirical results show that the regime switching model increases the predictive performance of financial forecasting according to the number of violations and tail-loss tests. This suggests that the regime switching model is a robust forecasting variant of the power-law model while remaining practical to implement for VaR measurement.
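
Once the HMM has split the sample into crisis and steady periods, a per-regime loss quantile gives the empirical VaR baseline. A sketch of that quantile step only; the EVT tail fit and the HMM classifier themselves are not reproduced here:

```python
def empirical_var(returns, level=0.99):
    """Empirical Value-at-Risk at the given confidence level: the loss
    exceeded with probability 1 - level, estimated from the returns of
    a single (HMM-classified) regime. A crude stand-in for the paper's
    EVT tail fit."""
    losses = sorted(-r for r in returns)        # positive values = losses
    idx = min(round(level * len(losses)), len(losses) - 1)
    return losses[idx]
```

In the hybrid scheme, this empirical estimate on the crisis regime would be replaced by a fitted generalized Pareto tail.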

  12. Enhancing Speech Recognition Using Improved Particle Swarm Optimization Based Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Lokesh Selvaraj

    2014-01-01

    Full Text Available Enhancing speech recognition is the primary intention of this work. In this paper a novel speech recognition method based on vector quantization and improved particle swarm optimization (IPSO) is suggested. The suggested methodology contains four stages, namely, (i) denoising, (ii) feature mining, (iii) vector quantization, and (iv) an IPSO-based hidden Markov model (HMM) technique (IP-HMM). At first, the speech signals are denoised using a median filter. Next, characteristics such as peak, pitch spectrum, Mel frequency cepstral coefficients (MFCC), mean, standard deviation, and minimum and maximum of the signal are extracted from the denoised signal. Following that, to accomplish the training process, the extracted characteristics are given to genetic-algorithm-based codebook generation in vector quantization. The initial populations are created by selecting random code vectors from the training set as codebooks for the genetic algorithm, and IP-HMM performs the recognition; the novelty here lies in the crossover genetic operation. The proposed speech recognition technique offers 97.14% accuracy.
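
The codebook-refinement part of vector quantization can be illustrated with a plain Lloyd (k-means style) update. The paper's genetic-algorithm seeding is not reproduced, so this is a generic sketch rather than the authors' exact procedure:

```python
def nearest(codebook, vec):
    """Index of the code vector closest to vec (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], vec)))

def lloyd_step(codebook, data):
    """One Lloyd refinement of a VQ codebook: assign each training
    vector to its nearest code vector, then move each code vector to
    the centroid of its cell (empty cells are left unchanged)."""
    cells = [[] for _ in codebook]
    for vec in data:
        cells[nearest(codebook, vec)].append(vec)
    return [[sum(comp) / len(cell) for comp in zip(*cell)] if cell else list(code)
            for cell, code in zip(cells, codebook)]
```

Repeating `lloyd_step` until the codebook stops moving gives a locally optimal quantizer; the GA in the paper instead searches over codebook seeds.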

  13. Learning with Admixture: Modeling, Optimization, and Applications in Population Genetics

    DEFF Research Database (Denmark)

    Cheng, Jade Yu

    2016-01-01

    the foundation for both CoalHMM and Ohana. Optimization modeling has been the main theme throughout my PhD, and it will continue to shape my work for the years to come. The algorithms and software I developed to study historical admixture and population evolution fall into a larger family of machine learning...... geneticists strive to establish working solutions to extract information from massive volumes of biological data. The steep increase in the quantity and quality of genomic data during the past decades provides a unique opportunity but also calls for new and improved algorithms and software to cope...... including population splits, effective population sizes, gene flow, etc. Since joining the CoalHMM development team in 2014, I have mainly contributed in two directions: 1) improving optimizations through heuristic-based evolutionary algorithms and 2) modeling of historical admixture events. Ohana, meaning...

  14. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that described procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulative and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods

  15. Model of Procedure Usage – Results from a Qualitative Study to Inform Design of Computer-Based Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Johanna H Oxstrand; Katya L Le Blanc

    2012-07-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however most research has focused on procedures used in the main control room. Procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned for these previous efforts we are now exploring a more unknown application for computer based procedures - field procedures, i.e. procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory, the Institute for Energy Technology, and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field operators. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do this. The underlying philosophy in the research effort is “Stop – Start – Continue”, i.e. what features from the use of paper-based procedures should we not incorporate (Stop), what should we keep (Continue), and what new features or work processes should be added (Start). One step in identifying the Stop – Start – Continue was to conduct a baseline study where affordances related to the current usage of paper-based procedures were identified. The purpose of the study was to develop a model of paper based procedure use which will help to identify desirable features for computer based procedure prototypes. Affordances such as note taking, markups

  16. Mining adverse drug reactions from online healthcare forums using hidden Markov model.

    Science.gov (United States)

    Sampathkumar, Hariprasad; Chen, Xue-wen; Luo, Bo

    2014-10-23

    Adverse Drug Reactions are one of the leading causes of injury or death among patients undergoing medical treatments. Not all Adverse Drug Reactions are identified before a drug is made available in the market. Current post-marketing drug surveillance methods, which are based purely on voluntary spontaneous reports, are unable to provide the early indications necessary to prevent the occurrence of such injuries or fatalities. The objective of this research is to extract reports of adverse drug side-effects from messages in online healthcare forums and use them as early indicators to assist in post-marketing drug surveillance. We treat the task of extracting adverse side-effects of drugs from healthcare forum messages as a sequence labeling problem and present a Hidden Markov Model (HMM)-based text mining system that can be used to classify a message as containing drug side-effect information and then extract the adverse side-effect mentions from it. A manually annotated dataset from http://www.medications.com is used in the training and validation of the HMM-based text mining system. A 10-fold cross-validation on the manually annotated dataset yielded on average an F-Score of 0.76 from the HMM Classifier, in comparison to 0.575 from the Baseline classifier. Without the Plain Text Filter component as a part of the Text Processing module, the F-Score of the HMM Classifier was reduced to 0.378 on average, while absence of the HTML Filter component was found to have no impact. Reducing the drug-names dictionary size by half reduced the average F-Score of the HMM Classifier to 0.359, while a similar reduction to the side-effects dictionary yielded an average F-Score of 0.651. Adverse side-effects mined from http://www.medications.com and http://www.steadyhealth.com were found to match the Adverse Drug Reactions on the Drug Package Labels of several drugs. In addition, some novel adverse side-effects, which can be potential Adverse Drug Reactions, were also
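
Treating extraction as sequence labeling means decoding each message with the Viterbi algorithm. A toy decoder; the label set ('O' for other, 'SE' for side-effect mention) and all probabilities are invented for illustration, not taken from the paper's trained model:

```python
def viterbi(tokens, states, start, trans, emit):
    """Most likely label sequence for a token sequence under an HMM.

    Unknown tokens get a small floor probability instead of a trained
    emission estimate. Returns one label per token.
    """
    V = [{s: start[s] * emit[s].get(tokens[0], 1e-9) for s in states}]
    back = [{}]
    for tok in tokens[1:]:
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] * trans[p][s])
            ptr[s] = best
            col[s] = V[-1][best] * trans[best][s] * emit[s].get(tok, 1e-9)
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back[1:]):
        path.append(ptr[path[-1]])
    path.reverse()
    return path
```

Contiguous runs of 'SE' labels in the decoded path would then be collected as candidate side-effect mentions.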

  17. Activity recognition using semi-Markov models on real world smart home datasets

    NARCIS (Netherlands)

    van Kasteren, T.L.M.; Englebienne, G.; Kröse, B.J.A.

    2010-01-01

    Accurately recognizing human activities from sensor data recorded in a smart home setting is a challenging task. Typically, probabilistic models such as the hidden Markov model (HMM) or conditional random fields (CRF) are used to map the observed sensor data onto the hidden activity states. A

  18. Understanding eye movements in face recognition using hidden Markov models.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We use a hidden Markov model (HMM)-based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants, and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than locations of the fixations alone. © 2014 ARVO.
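
The transition component of such an HMM can be estimated by counting fixation-to-fixation moves between regions of interest. A maximum-likelihood sketch; the ROI names ('eyes', 'nose') are illustrative, not the regions learned by the authors' models:

```python
from collections import Counter

def transition_matrix(scanpaths, rois):
    """Maximum-likelihood ROI-to-ROI transition probabilities estimated
    from fixation sequences. Each scanpath is a list of ROI labels in
    fixation order; rows with no outgoing transitions are all zeros."""
    counts = Counter()
    for path in scanpaths:
        for a, b in zip(path, path[1:]):
            counts[a, b] += 1
    matrix = {}
    for a in rois:
        total = sum(counts[a, b] for b in rois)
        matrix[a] = {b: counts[a, b] / total if total else 0.0 for b in rois}
    return matrix
```

Comparing such matrices across participants is one way the transition structure, rather than fixation location alone, can separate the holistic and analytic groups.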

  19. Segment-based acoustic models for continuous speech recognition

    Science.gov (United States)

    Ostendorf, Mari; Rohlicek, J. R.

    1993-07-01

    This research aims to develop new and more accurate stochastic models for speaker-independent continuous speech recognition, by extending previous work in segment-based modeling and by introducing a new hierarchical approach to representing intra-utterance statistical dependencies. These techniques, which are more costly than traditional approaches because of the large search space associated with higher order models, are made feasible through rescoring a set of HMM-generated N-best sentence hypotheses. We expect these different modeling techniques to result in improved recognition performance over that achieved by current systems, which handle only frame-based observations and assume that these observations are independent given an underlying state sequence. In the fourth quarter of the project, we have completed the following: (1) ported our recognition system to the Wall Street Journal task, a standard task in the ARPA community; (2) developed an initial dependency-tree model of intra-utterance observation correlation; and (3) implemented baseline language model estimation software. Our initial results on the Wall Street Journal task are quite good and represent significantly improved performance over most HMM systems reporting on the Nov. 1992 5k vocabulary test set.

  20. A baseline-free procedure for transformation models under interval censorship.

    Science.gov (United States)

    Gu, Ming Gao; Sun, Liuquan; Zuo, Guoxin

    2005-12-01

    An important property of the Cox regression model is that the estimation of regression parameters using the partial likelihood procedure does not depend on its baseline survival function. We call such a procedure baseline-free. Using marginal likelihood, we show that a baseline-free procedure can be derived for a class of general transformation models under the interval censoring framework. The baseline-free procedure results in a simplified and stable computation algorithm for some complicated and important semiparametric models, such as frailty models and heteroscedastic hazard/rank regression models, where the estimation procedures so far available involve estimation of the infinite dimensional baseline function. A detailed computational algorithm using Markov Chain Monte Carlo stochastic approximation is presented. The proposed procedure is demonstrated through extensive simulation studies, showing the validity of asymptotic consistency and normality. We also illustrate the procedure with a real data set from a study of breast cancer. A heuristic argument showing that the score function is a mean zero martingale is provided.

  1. Damage evaluation by a guided wave-hidden Markov model based method

    Science.gov (United States)

    Mei, Hanfei; Yuan, Shenfang; Qiu, Lei; Zhang, Jinjin

    2016-02-01

    Guided wave based structural health monitoring has shown great potential in aerospace applications. However, one of the key challenges of practical engineering applications is the accurate interpretation of the guided wave signals under time-varying environmental and operational conditions. This paper presents a guided wave-hidden Markov model based method to improve the damage evaluation reliability of real aircraft structures under time-varying conditions. In the proposed approach, an HMM-based unweighted moving average trend estimation method, which can capture the trend of damage propagation from the posterior probability obtained by HMM modeling, is used to achieve a probabilistic evaluation of the structural damage. To validate the developed method, experiments are performed on a hole-edge crack specimen under a fatigue loading condition and a real aircraft wing spar under changing structural boundary conditions. Experimental results show the advantage of the proposed method.
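
The unweighted moving average trend estimation over HMM posteriors is straightforward. A sketch, assuming `posteriors` holds the per-measurement damage probability produced by the HMM:

```python
def moving_average_trend(posteriors, window):
    """Unweighted moving average of per-measurement damage posteriors.

    Smooths the raw HMM output into a trend estimate of damage
    propagation; early positions average over however many samples
    are available so the output has the same length as the input.
    """
    trend = []
    for i in range(len(posteriors)):
        lo = max(0, i - window + 1)
        seg = posteriors[lo:i + 1]
        trend.append(sum(seg) / len(seg))
    return trend
```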

  2. Hidden Markov model analysis of maternal behavior patterns in inbred and reciprocal hybrid mice.

    Directory of Open Access Journals (Sweden)

    Valeria Carola

    Full Text Available Individual variation in maternal care in mammals shows a significant heritable component, with the maternal behavior of daughters resembling that of their mothers. In laboratory mice, genetically distinct inbred strains show stable differences in maternal care during the first postnatal week. Moreover, cross fostering and reciprocal breeding studies demonstrate that differences in maternal care between inbred strains persist in the absence of genetic differences, demonstrating a non-genetic or epigenetic contribution to maternal behavior. In this study we applied a mathematical tool, called the hidden Markov model (HMM), to analyze the behavior of female mice in the presence of their young. The frequency of several maternal behaviors in mice has been previously described, including nursing/grooming pups and tending to the nest. However, the ordering, clustering, and transitions between these behaviors have not been systematically described, and thus a global description of maternal behavior is lacking. Here we used HMMs to describe maternal behavior patterns in two genetically distinct mouse strains, C57BL/6 and BALB/c, and their genetically identical reciprocal hybrid female offspring. HMM analysis is a powerful tool to identify patterns of events that cluster in time and to determine transitions between these clusters, or hidden states. For the HMM analysis we defined seven states: arched-back nursing, blanket nursing, licking/grooming pups, grooming, activity, eating, and sleeping. By quantifying the frequency, duration, composition, and transition probabilities of these states we were able to describe the pattern of maternal behavior in the mouse and identify aspects of these patterns that are under genetic and nongenetic inheritance. Differences in these patterns observed in the experimental groups (inbred and hybrid females) were detected only after the application of HMM analysis, whereas classical statistical methods and analyses were not able to

  3. Validation of Inter-Subject Training for Hidden Markov Models Applied to Gait Phase Detection in Children with Cerebral Palsy

    Directory of Open Access Journals (Sweden)

    Juri Taborri

    2015-09-01

    Full Text Available Gait-phase recognition is a necessary functionality to drive robotic rehabilitation devices for lower limbs. Hidden Markov Models (HMMs) represent a viable solution, but they need subject-specific training, making data processing very time-consuming. Here, we validated an inter-subject procedure to avoid the intra-subject one in two-, four- and six-gait-phase models in pediatric subjects. The inter-subject procedure consists of identifying a standardized parameter set to adapt the model to measurements. We tested the inter-subject procedure on both scalar and distributed classifiers. Ten healthy children and ten hemiplegic children, each equipped with two Inertial Measurement Units placed on the shank and foot, were recruited. The sagittal component of angular velocity was recorded by gyroscopes while subjects performed four walking trials on a treadmill. The goodness of the classifiers was evaluated with the Receiver Operating Characteristic. The results provided a goodness from good to optimum for all examined classifiers (0 < G < 0.6), with the best performance for the distributed classifier in two-phase recognition (G = 0.02). Differences were found among gait partitioning models, while no differences were found between training procedures with the exception of the shank classifier. Our results raise the possibility of avoiding subject-specific training in HMM for gait-phase recognition and its implementation to control exoskeletons for the pediatric population.

  4. Quantile Forecasting for Credit Risk Management Using Possibly Mis-specified Hidden Markov Models

    NARCIS (Netherlands)

    Banachewicz, K.P.; Lucas, A.

    2008-01-01

    Recent models for credit risk management make use of hidden Markov models (HMMs). HMMs are used to forecast quantiles of corporate default rates. Little research has been done on the quality of such forecasts if the underlying HMM is potentially misspecified. In this paper, we focus on

  5. Development of a Fault Monitoring Technique for Wind Turbines Using a Hidden Markov Model.

    Science.gov (United States)

    Shin, Sung-Hwan; Kim, SangRyul; Seo, Yun-Ho

    2018-06-02

    Regular inspection for the maintenance of wind turbines is difficult because of their remote locations. For this reason, condition monitoring systems (CMSs) are typically installed to monitor their health condition. The purpose of this study is to propose a fault detection algorithm for the mechanical parts of the wind turbine. To this end, long-term vibration data were collected over two years by a CMS installed on a 3 MW wind turbine. The vibration distribution at a specific rotating speed of the main shaft is approximated by the Weibull distribution, and its cumulative distribution function is utilized for determining the threshold levels that indicate impending failure of mechanical parts. A hidden Markov model (HMM) is employed to build a statistical fault detection algorithm in the time domain, and a method for extracting the HMM input sequence, based on the threshold levels and the correlation between the signals, is also introduced. Finally, it was demonstrated that the proposed HMM algorithm achieved a greater than 95% detection success rate by using the long-term signals.
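
Thresholding via the Weibull cumulative distribution function has a closed-form inverse. A sketch; the 0.99 exceedance level is an assumed illustrative choice, not necessarily the one used in the study:

```python
import math

def weibull_threshold(shape, scale, p=0.99):
    """Vibration amplitude whose Weibull CDF equals p; amplitudes
    above it would be flagged as indicating impending failure.

    Inverts F(x) = 1 - exp(-(x / scale) ** shape), giving
    x = scale * (-ln(1 - p)) ** (1 / shape).
    """
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)
```

Fitting `shape` and `scale` to the vibration data at each rotating speed, then applying this inverse, yields one threshold level per speed bin.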

  6. Passive acoustic leak detection for sodium cooled fast reactors using hidden Markov models

    Energy Technology Data Exchange (ETDEWEB)

    Riber Marklund, A. [CEA, Cadarache, DEN/DTN/STCP/LIET, Batiment 202, 13108 St Paul-lez-Durance, (France); Kishore, S. [Fast Reactor Technology Group of IGCAR, (India); Prakash, V. [Vibrations Diagnostics Division, Fast Reactor Technology Group of IGCAR, (India); Rajan, K.K. [Fast Reactor Technology Group and Engineering Services Group of IGCAR, (India)

    2015-07-01

    Acoustic leak detection for steam generators of sodium fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme based on the temporal evolution of the power spectral density (PSD) of the signal. The proposed method is tested using acoustic signals recorded during steam/water injection experiments done at the Indira Gandhi Centre for Atomic Research (IGCAR). We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method a) performs well without a priori knowledge of injection noise, b) can incorporate several noise models and c) has an output distribution that simplifies false alarm rate control. (authors)
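    Frame-wise PSD features of the kind that could feed such a GMM-HMM can be sketched with a plain windowed periodogram pooled into log band energies; this is an illustration of the general idea only (frame length, band count and the random stand-in signal are assumptions, not the paper's scheme):

```python
import numpy as np

def psd_frames(signal: np.ndarray, frame_len: int = 256, n_bands: int = 8) -> np.ndarray:
    """Split a signal into non-overlapping frames and return log band energies.

    Each frame's periodogram |FFT|^2 is pooled into n_bands bands, yielding
    one low-dimensional feature vector per frame, tracking the temporal
    evolution of the PSD as required for HMM+GMM modelling.
    """
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1)) ** 2
    bands = np.array_split(spectra, n_bands, axis=1)
    return np.log(np.stack([b.sum(axis=1) for b in bands], axis=1) + 1e-12)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)   # stand-in for an acoustic recording
F = psd_frames(x)
print(F.shape)                  # one 8-feature vector per 256-sample frame
```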

  7. Geolocating fish using Hidden Markov Models and Data Storage Tags

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Pedersen, Martin Wæver; Madsen, Henrik

    2009-01-01

    Geolocation of fish based on data from archival tags typically requires a statistical analysis to reduce the effect of measurement errors. In this paper we present a novel technique for this analysis, one based on Hidden Markov Models (HMMs). We assume that the actual path of the fish is generated...... by a biased random walk. The HMM methodology produces, for each time step, the probability that the fish resides in each grid cell. Because there is no Monte Carlo step in our technique, we are able to estimate parameters within the likelihood framework. The method does not require the distribution...... of inference in state-space models of animals. The technique can be applied to geolocation based on light, on tidal patterns, or on measurement of other variables that vary with space. We illustrate the method through application to a simulated data set where geolocation relies on depth data exclusively....
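    The grid-based HMM filter alternates a diffusion-like prediction (the biased random walk) with a Bayesian measurement update over grid cells. A minimal 1-D sketch under assumed kernel and likelihood values (the depth-likelihood shape is hypothetical):

```python
import numpy as np

def hmm_filter_step(prior: np.ndarray, kernel: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """One predict/update cycle of a grid-based HMM geolocation filter.

    prior:      probability of the fish being in each grid cell
    kernel:     random-walk transition kernel, convolved over the grid
    likelihood: p(observation | cell), e.g. from depth vs. bathymetry
    """
    predicted = np.convolve(prior, kernel, mode="same")
    posterior = predicted * likelihood
    return posterior / posterior.sum()  # normalize to a probability vector

cells = 50
prior = np.zeros(cells)
prior[25] = 1.0                          # release position known exactly
kernel = np.array([0.2, 0.6, 0.2])       # slight diffusion per time step
depth_like = np.exp(-0.5 * ((np.arange(cells) - 27) / 3.0) ** 2)  # toy depth match
post = hmm_filter_step(prior, kernel, depth_like)
print(int(post.argmax()))
```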

  8. A Hybrid of Deep Network and Hidden Markov Model for MCI Identification with Resting-State fMRI.

    Science.gov (United States)

    Suk, Heung-Il; Lee, Seong-Whan; Shen, Dinggang

    2015-10-01

    In this paper, we propose a novel method for modelling functional dynamics in resting-state fMRI (rs-fMRI) for Mild Cognitive Impairment (MCI) identification. Specifically, we devise a hybrid architecture by combining a Deep Auto-Encoder (DAE) and a Hidden Markov Model (HMM). The roles of the DAE and HMM are, respectively, to discover hierarchical non-linear relations among features, by which we transform the original features into a lower-dimensional space, and to model the dynamic characteristics inherent in rs-fMRI, i.e., internal state changes. By building a generative model with HMMs for each class individually, we estimate the data likelihood of a test subject as MCI or normal healthy control, based on which we identify the clinical label. In our experiments, we achieved a maximal accuracy of 81.08% with the proposed method, outperforming state-of-the-art methods in the literature.

  9. HMM filtering and parameter estimation of an electricity spot price model

    International Nuclear Information System (INIS)

    Erlwein, Christina; Benth, Fred Espen; Mamon, Rogemar

    2010-01-01

    In this paper we develop a model for electricity spot price dynamics. The spot price is assumed to follow an exponential Ornstein-Uhlenbeck (OU) process with an added compound Poisson process. In this way, the model allows for mean-reversion and possible jumps. All parameters are modulated by a hidden Markov chain in discrete time. They are able to switch between different economic regimes representing the interaction of various factors. Through the application of the reference probability technique, adaptive filters are derived which, in turn, provide optimal estimates for the state of the Markov chain and related quantities of the observation process. The EM algorithm is applied to find optimal estimates of the model parameters in terms of the recursive filters. We implement this self-calibrating model on a deseasonalised series of daily spot electricity prices from the Nordic exchange Nord Pool. On the basis of one-step-ahead forecasts, we found that the model is able to capture the empirical characteristics of Nord Pool spot prices. (author)

  10. Model checking as an aid to procedure design

    International Nuclear Information System (INIS)

    Zhang, Wenhu

    2001-01-01

    The OECD Halden Reactor Project has been actively working on computer assisted operating procedures for many years. The objective of the research has been to provide computerised assistance for procedure design, verification and validation, implementation and maintenance. For the verification purpose, the application of formal methods has been considered in several reports. The recent formal verification activity conducted at the Halden Project is based on applying model checking to the verification of procedures. This report presents verification approaches based on different model checking techniques and tools for the formalization and verification of operating procedures. Possible problems and the relative merits of the different approaches are discussed. A case study of one of the approaches is presented to show the practical application of formal verification. Applying formal verification in the traditional procedure design process can reduce the human resources involved in reviews and simulations, and hence reduce the cost of verification and validation. A discussion of the integration of formal verification with the traditional procedure design process is given at the end of this report. (Author)

  11. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit the distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time-effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis.

  12. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  13. Metagenome and Metatranscriptome Analyses Using Protein Family Profiles.

    Directory of Open Access Journals (Sweden)

    Cuncong Zhong

    2016-07-01

    Full Text Available Analyses of metagenome (MG) and metatranscriptome (MT) data are often challenged by a paucity of complete reference genome sequences and the uneven/low sequencing depth of the constituent organisms in the microbial community, which respectively limit the power of reference-based alignment and de novo sequence assembly. These limitations make accurate protein family classification and abundance estimation challenging, which in turn hampers downstream analyses such as abundance profiling of metabolic pathways, identification of differentially encoded/expressed genes, and de novo reconstruction of complete gene and protein sequences from the protein family of interest. The profile hidden Markov model (HMM) framework enables the construction of very useful probabilistic models for protein families that allow for accurate modeling of position-specific matches, insertions, and deletions. We present a novel homology detection algorithm that integrates a banded Viterbi algorithm for profile HMM parsing with an iterative simultaneous alignment and assembly computational framework. The algorithm searches a given profile HMM of a protein family against a database of fragmentary MG/MT sequencing data and simultaneously assembles complete or near-complete gene and protein sequences of the protein family. The resulting program, HMM-GRASPx, demonstrates superior performance in aligning and assembling homologs when benchmarked on both simulated marine MG and real human saliva MG datasets. On real supragingival plaque and stool MG datasets that were generated from healthy individuals, HMM-GRASPx accurately estimates the abundances of the antimicrobial resistance (AMR) gene families and enables accurate characterization of the resistome profiles of these microbial communities. For real human oral microbiome MT datasets, using the HMM-GRASPx estimated transcript abundances significantly improves detection of differentially expressed (DE) genes. Finally, HMM

  14. A new isolation with migration model along complete genomes infers very different divergence processes among closely related great ape species.

    Directory of Open Access Journals (Sweden)

    Thomas Mailund

    Full Text Available We present a hidden Markov model (HMM) for inferring gradual isolation between two populations during speciation, modelled as a time interval with restricted gene flow. The HMM describes the history of adjacent nucleotides in two genomic sequences, such that the nucleotides can be separated by recombination, can migrate between populations, or can coalesce at variable time points, all dependent on the parameters of the model, which are the effective population sizes, splitting times, recombination rate, and migration rate. We show by extensive simulations that the HMM can accurately infer all parameters except the recombination rate, which is biased downwards. Inference is robust to variation in the mutation rate and the recombination rate over the sequence and also robust to unknown phase of genomes unless they are very closely related. We provide a test for whether divergence is gradual or instantaneous, and we apply the model to three key divergence processes in great apes: (a) the bonobo and common chimpanzee, (b) the eastern and western gorilla, and (c) the Sumatran and Bornean orang-utan. We find that the bonobo and chimpanzee appear to have undergone a clear split, whereas the divergence processes of the gorilla and orang-utan species occurred over several hundred thousand years with gene flow stopping quite recently. We also apply the model to the Homo/Pan speciation event and find that the most likely scenario involves an extended period of gene flow during speciation.

  15. The Use of Hidden Markov Models for Anomaly Detection in Nuclear Core Condition Monitoring

    Science.gov (United States)

    Stephen, Bruce; West, Graeme M.; Galloway, Stuart; McArthur, Stephen D. J.; McDonald, James R.; Towle, Dave

    2009-04-01

    Unplanned outages can be especially costly for generation companies operating nuclear facilities. Early detection of deviations from expected performance through condition monitoring can allow a more proactive and managed approach to dealing with ageing plant. This paper proposes an anomaly detection framework incorporating the Hidden Markov Model (HMM) to support the analysis of nuclear reactor core condition monitoring data. Fuel Grab Load Trace (FGLT) data gathered within the UK during routine refuelling operations has been seen to provide information relating to the condition of the graphite bricks that comprise the core. Although manual analysis of this data is time-consuming and requires considerable expertise, this paper demonstrates how techniques such as the HMM can provide analysis support by providing a benchmark model of expected behaviour against which future refuelling events may be compared. The presence of anomalous behaviour in candidate traces is inferred through the underlying statistical foundation of the HMM, which gives an observation likelihood averaged along the length of the input sequence. Using this likelihood measure, the engineer can be alerted to anomalous behaviour indicating data which might require further detailed examination. It is proposed that this data analysis technique be used in conjunction with other intelligent analysis techniques currently employed to analyse FGLT, to provide a greater confidence measure in detecting anomalous behaviour from FGLT data.
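    An anomaly score of the kind described, an observation likelihood averaged along the input sequence, can be computed with the scaled forward algorithm. A self-contained sketch with an assumed discrete-emission HMM (the transition/emission values and example sequences are made up for illustration):

```python
import numpy as np

def avg_loglik(obs, A, B, pi):
    """Average per-observation log-likelihood of a sequence under an HMM.

    A: state transition matrix, B: emission probabilities (states x symbols),
    pi: initial distribution. The scaled forward recursion keeps long
    sequences from underflowing.
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik / len(obs)

# Benchmark model of "expected behaviour" (assumed parameters):
A = np.array([[0.95, 0.05], [0.05, 0.95]])
B = np.array([[0.9, 0.1], [0.5, 0.5]])
pi = np.array([0.9, 0.1])
score_normal = avg_loglik(np.array([0, 0, 0, 0, 0, 0]), A, B, pi)  # benchmark-like trace
score_odd = avg_loglik(np.array([1, 1, 1, 1, 1, 1]), A, B, pi)     # candidate to flag
print(round(score_normal, 3), round(score_odd, 3))
```

    A candidate trace scoring well below the benchmark range would be flagged for detailed examination.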

  16. A simple but accurate procedure for solving the five-parameter model

    International Nuclear Information System (INIS)

    Mares, Oana; Paulescu, Marius; Badescu, Viorel

    2015-01-01

    Highlights: • A new procedure for extracting the parameters of the one-diode model is proposed. • Only the basic information listed in the datasheet of PV modules is required. • Results demonstrate a simple, robust and accurate procedure. - Abstract: The current–voltage characteristic of a photovoltaic module is typically evaluated by using a model based on the solar cell equivalent circuit. The complexity of the procedure applied for extracting the model parameters depends on the data available in the manufacturer's datasheet. Since the datasheet is not detailed enough, simplified models have to be used in many cases. This paper proposes a new procedure for extracting the parameters of the one-diode model in standard test conditions, using only the basic data listed by all manufacturers in the datasheet (short-circuit current, open-circuit voltage and maximum power point). The procedure is validated by using manufacturers' data for six commercial crystalline silicon photovoltaic modules. Comparing the computed and measured current–voltage characteristics, the determination coefficient is in the range 0.976–0.998. Thus, the proposed procedure represents a feasible tool for solving the five-parameter model applied to crystalline silicon photovoltaic modules. The procedure is described in detail, to guide potential users to derive similar models for other types of photovoltaic modules.
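    The five-parameter single-diode model, I = Iph − I0·(exp((V + I·Rs)/(n·Vt)) − 1) − (V + I·Rs)/Rsh, is implicit in I, so evaluating the I-V curve requires an iterative solve. A minimal fixed-point sketch, with illustrative parameter values that are assumptions rather than values from the paper:

```python
import math

def diode_current(V, Iph, I0, Rs, Rsh, n_Vt, iters=200):
    """Solve the implicit single-diode equation for I at a given voltage V.

    I = Iph - I0*(exp((V + I*Rs)/n_Vt) - 1) - (V + I*Rs)/Rsh
    Plain fixed-point iteration, which converges for typical
    crystalline-silicon parameter magnitudes (small Rs, large Rsh).
    """
    I = Iph  # the photocurrent is a good starting guess
    for _ in range(iters):
        I = Iph - I0 * (math.exp((V + I * Rs) / n_Vt) - 1.0) - (V + I * Rs) / Rsh
    return I

# Hypothetical crystalline-silicon module parameters (n_Vt = n * Ns * kT/q):
params = dict(Iph=8.21, I0=1.2e-7, Rs=0.005, Rsh=600.0, n_Vt=1.9)
Isc = diode_current(0.0, **params)  # short-circuit current, V = 0
print(round(Isc, 3))
```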

  17. Popularity Modeling for Mobile Apps: A Sequential Approach.

    Science.gov (United States)

    Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong

    2015-07-01

    The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.

  18. Optimizing Likelihood Models for Particle Trajectory Segmentation in Multi-State Systems.

    Science.gov (United States)

    Young, Dylan Christopher; Scrimgeour, Jan

    2018-06-19

    Particle tracking offers significant insight into the molecular mechanics that govern the behavior of living cells. The analysis of molecular trajectories that transition between different motive states, such as diffusive, driven and tethered modes, is of considerable importance, with even single trajectories containing significant amounts of information about a molecule's environment and its interactions with cellular structures. Hidden Markov models (HMM) have been widely adopted to perform the segmentation of such complex tracks. In this paper, we show that extensive analysis of hidden Markov model outputs using data derived from multi-state Brownian dynamics simulations can be used both for the optimization of the likelihood models used to describe the states of the system and for characterization of the technique's failure mechanisms. This analysis was made possible by the implementation of a parallelized adaptive direct search algorithm on an Nvidia graphics processing unit. This approach provides critical information for the visualization of HMM failure and the successful design of particle tracking experiments where trajectories contain multiple mobile states. © 2018 IOP Publishing Ltd.

  19. Cognitive Emotional Regulation Model in Human-Robot Interaction

    OpenAIRE

    Liu, Xin; Xie, Lun; Liu, Anqi; Li, Dan

    2015-01-01

    This paper integrated the Gross cognitive process into the HMM (hidden Markov model) emotional regulation method and implemented human-robot emotional interaction with facial expressions and behaviors. Here, energy was the psychological driving force of emotional transition in the cognitive emotional model. The input facial expression was translated into external energy by expression-emotion mapping. The robot's next emotional state was determined by the cognitive energy (the stimulus after cognition...

  20. Drawing-Based Procedural Modeling of Chinese Architectures.

    Science.gov (United States)

    Fei Hou; Yue Qi; Hong Qin

    2012-01-01

    This paper presents a novel modeling framework to build 3D models of Chinese architectures from elevation drawings. Our algorithm integrates the capability of automatic drawing recognition with powerful procedural modeling to extract production rules from elevation drawings. First, different from previous symbol-based floor-plan recognition, based on the novel concept of repetitive pattern trees, small horizontal repetitive regions of the elevation drawing are clustered in a bottom-up manner to form architectural components with maximum repetition, which collectively serve as building blocks for 3D model generation. Second, to discover the global architectural structure and its components' interdependencies, the components are structured into a shape tree in a top-down subdivision manner and recognized hierarchically at each level of the shape tree based on Markov Random Fields (MRFs). Third, shape grammar rules can be derived to construct a 3D semantic model and its possible variations with the help of a 3D component repository. The salient contribution lies in the novel integration of procedural modeling with elevation drawings, with a unique application to Chinese architectures.

  1. Multi-category micro-milling tool wear monitoring with continuous hidden Markov models

    Science.gov (United States)

    Zhu, Kunpeng; Wong, Yoke San; Hong, Geok Soon

    2009-02-01

    In-process monitoring of tool conditions is important in micro-machining due to the high precision requirement and high tool wear rate. Tool condition monitoring in micro-machining poses new challenges compared to conventional machining. In this paper, a multi-category classification approach is proposed for tool flank wear state identification in micro-milling. Continuous Hidden Markov models (HMMs) are adapted for modeling of the tool wear process in micro-milling and for estimation of the tool wear state given the cutting force features. For a noise-robust approach, the HMM outputs are passed through a median filter to suppress spurious state transitions caused by the high noise level. A detailed study on the selection of HMM structures for tool condition monitoring (TCM) is presented. Case studies on tool state estimation in the micro-milling of pure copper and steel demonstrate the effectiveness and potential of these methods.

  2. Capturing the state transitions of seizure-like events using Hidden Markov models.

    Science.gov (United States)

    Guirgis, Mirna; Serletis, Demitre; Carlen, Peter L; Bardakjian, Berj L

    2011-01-01

    The purpose of this study was to investigate the number of states present in the progression of a seizure-like event (SLE). Of particular interest is to determine if there are more than two clearly defined states, as this would suggest that there is a distinct state preceding an SLE. Whole-intact hippocampus from C57/BL mice was used to model epileptiform activity induced by the perfusion of a low Mg(2+)/high K(+) solution while extracellular field potentials were recorded from CA3 pyramidal neurons. Hidden Markov models (HMMs) were used to model the state transitions of the recorded SLEs by incorporating various features of the Hilbert transform into the training algorithm; specifically, 2- and 3-state HMMs were explored. Although the 2-state model was able to distinguish between SLE and nonSLE behavior, it provided no improvements compared to visual inspection alone. However, the 3-state model was able to capture two distinct nonSLE states that visual inspection failed to discriminate. Moreover, by developing an HMM-based system, a priori knowledge of the state transitions was not required, making this an ideal platform for seizure prediction algorithms.
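    Once a 3-state HMM is trained, the state sequence of a recording can be recovered with the Viterbi algorithm. A minimal discrete-emission sketch in the log domain; the state labels, transition/emission values and observation sequence are made up to mimic a baseline → pre-SLE → SLE progression, not taken from the study:

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden-state path for a discrete-emission HMM (log domain)."""
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = logpi + logB[:, obs[0]]
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA        # scores[i, j]: best path ending in j via i
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrace
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# States 0: baseline, 1: pre-SLE, 2: SLE; symbols 0: low, 1: high amplitude.
A = np.array([[0.80, 0.15, 0.05], [0.10, 0.70, 0.20], [0.05, 0.05, 0.90]])
B = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
pi = np.array([0.9, 0.05, 0.05])
path = viterbi([0, 0, 0, 1, 0, 1, 1, 1], A, B, pi)
print(path)
```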

  3. Optimization and evaluation of probabilistic-logic sequence models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Lassen, Ole Torp

    Analysis of biological sequence data demands more and more sophisticated and fine-grained models, but these in turn introduce hard computational problems. A class of probabilistic-logic models is considered, which increases the expressibility from HMMs' and SCFGs' regular and context-free languages to, in principle, Turing-complete languages. In general, such models are computationally far too complex for direct use, so optimization by pruning and approximation is needed. The first steps are made towards a methodology for optimizing such models by approximations using auxiliary models......

  4. A Stepwise Fitting Procedure for automated fitting of Ecopath with Ecosim models

    Directory of Open Access Journals (Sweden)

    Erin Scott

    2016-01-01

    Full Text Available The Stepwise Fitting Procedure automates the testing of alternative hypotheses used for fitting Ecopath with Ecosim (EwE) models to observation reference data (Mackinson et al. 2009). The calibration of EwE model predictions to observed data is important for evaluating any model that will be used for ecosystem-based management. Thus far, the model fitting procedure in EwE has been carried out manually: a repetitive task involving setting >1000 specific individual searches to find the statistically 'best fit' model. The novel fitting procedure automates this manual procedure, producing accurate results and letting the modeller concentrate on investigating the 'best fit' model for ecological accuracy.

  5. A skin abscess model for teaching incision and drainage procedures.

    Science.gov (United States)

    Fitch, Michael T; Manthey, David E; McGinnis, Henderson D; Nicks, Bret A; Pariyadath, Manoj

    2008-07-03

    Skin and soft tissue infections are increasingly prevalent clinical problems, and it is important for health care practitioners to be well trained in how to treat skin abscesses. A realistic model of abscess incision and drainage will allow trainees to learn and practice this basic procedure. We developed a realistic model of skin abscess formation to demonstrate the technique of incision and drainage for educational purposes. The creation of this model is described in detail in this report. This model has been successfully used to develop and disseminate a multimedia video production for teaching this medical procedure. Clinical faculty and resident physicians find this model to be a realistic method for demonstrating abscess incision and drainage. This manuscript provides a detailed description of our model of abscess incision and drainage for medical education. Clinical educators can incorporate this model into skills labs or demonstrations for teaching this basic procedure.

  6. Efficacy of hidden markov model over support vector machine on multiclass classification of healthy and cancerous cervical tissues

    Science.gov (United States)

    Mukhopadhyay, Sabyasachi; Kurmi, Indrajit; Pratiher, Sawon; Mukherjee, Sukanya; Barman, Ritwik; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    In this paper, a comparative study between SVM and HMM has been carried out for multiclass classification of healthy and cancerous cervical tissues. In our study, the HMM methodology proved the more promising, producing higher classification accuracy.

  7. Procedural Personas for Player Decision Modeling and Procedural Content Generation

    DEFF Research Database (Denmark)

    Holmgård, Christoffer

    2016-01-01

    How can player models and artificially intelligent (AI) agents be useful in early-stage iterative game and simulation design? One answer may be as ways of generating synthetic play-test data, before a game or level has ever seen a player, or when the sampled amount of play-test data is very low. This thesis explores methods for creating low-complexity, easily interpretable, generative AI agents for use in game and simulation design. Based on insights from decision theory and behavioral economics, the thesis investigates how player decision-making styles may be defined, operationalised, and measured as "procedural personas." These methods for constructing procedural personas are then integrated with existing procedural content generation systems, acting as critics that shape the output of these systems, optimizing generated content for different personas and, by extension, different kinds of players and their decision-making styles.

  8. Cluster-based adaptive power control protocol using Hidden Markov Model for Wireless Sensor Networks

    Science.gov (United States)

    Vinutha, C. B.; Nalini, N.; Nagaraja, M.

    2017-06-01

    This paper presents strategies for an efficient and dynamic transmission-power control technique to reduce packet drop, and hence the energy consumption of power-hungry sensor nodes operating in the highly non-linear channel conditions of Wireless Sensor Networks. Besides, we also aim to prolong network lifetime and scalability by designing a cluster-based network structure. Specifically, we consider a weight-based clustering approach wherein the most suitable node is chosen as Cluster Head (CH), computed from the factors distance, remaining residual battery power and received signal strength (RSS). Further, transmission-power control schemes that fit dynamic channel conditions are implemented using a Hidden Markov Model (HMM), whose probability transition matrix is formulated from the observed RSS measurements. Typically, the CH estimates the initial transmission power of its cluster members (CMs) from RSS using the HMM and broadcasts this value to its CMs to initialise their power levels. Further, if the CH finds variations in the link quality and RSS of the CMs, it re-computes and optimises the transmission-power levels of the nodes using the HMM, to avoid packet loss due to noise interference. Our simulation results demonstrate that the technique efficiently controls the power levels of the sensing nodes, saving a significant quantity of energy for different-sized networks.
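    A transition matrix formulated from observed RSS measurements can be estimated by quantizing the RSS time series into channel states and counting state-to-state transitions. A hedged sketch (the level boundaries and sample readings are assumptions for illustration):

```python
import numpy as np

def rss_transition_matrix(rss_dbm, edges):
    """Estimate a Markov transition matrix from an RSS time series.

    rss_dbm is quantized into len(edges)+1 channel states via the given
    level boundaries (dBm); consecutive state pairs are counted and each
    row normalized. Rows with no observations fall back to uniform.
    """
    states = np.digitize(rss_dbm, edges)
    n = len(edges) + 1
    counts = np.zeros((n, n))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.full_like(counts, 1.0 / n), where=row_sums > 0)

# Hypothetical RSS readings (dBm) reported by one cluster member:
rss = np.array([-72.0, -71.0, -75.0, -80.0, -79.0, -74.0, -70.0, -69.0])
P = rss_transition_matrix(rss, edges=[-78.0, -73.0])  # 3 states: weak/medium/strong
print(P.shape)
```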

  9. Procedural Skills Education – Colonoscopy as a Model

    Directory of Open Access Journals (Sweden)

    Maitreyi Raman

    2008-01-01

    Full Text Available Traditionally, surgical and procedural apprenticeship has been an assumed activity of students, without a formal educational context. With increasing barriers to patient and operating room access, such as shorter work-week hours for residents and operating room and endoscopy time at a premium, alternative strategies for maximizing procedural skill development are being considered. Recently, the traditional surgical apprenticeship model has been challenged, with greater emphasis on the need for surgical and procedural skills training to be more transparent and for alternatives to patient-based training to be considered. Colonoscopy performance is a complex psychomotor skill requiring practitioners to integrate multiple sensory inputs, and it involves higher cortical centres for optimal performance. Colonoscopy skills involve mastery in the cognitive, technical and process domains. In the present review, we propose a model for teaching colonoscopy to the novice trainee based on educational theory.

  10. State Token Petri Net modeling method for formal verification of computerized procedure including operator's interruptions of procedure execution flow

    International Nuclear Information System (INIS)

    Kim, Yun Goo; Seong, Poong Hyun

    2012-01-01

    The Computerized Procedure System (CPS) is one of the primary operating support systems in the digital Main Control Room. The CPS displays the procedure on the computer screen in the form of a flow chart, and displays plant operating information along with the procedure instructions. It also supports operator decision making by providing a system decision. A procedure flow should be correct and reliable, as an error would lead to operator misjudgement and inadequate control. In this paper we present a modeling method for the CPS that enables formal verification based on Petri nets. The proposed State Token Petri Nets (STPN) also support modeling of a procedure flow that has various interruptions by the operator, according to the plant condition. STPN modeling is compared with Coloured Petri nets when both are applied to an Emergency Operating Computerized Procedure. A program for converting a Computerized Procedure (CP) to an STPN has also been developed. The formal verification and validation methods of CPs with STPNs increase the safety of a nuclear power plant and provide the digital quality assurance means that are needed as the role and function of the CPS increase.

  11. Modeling promoter grammars with evolving hidden Markov models

    DEFF Research Database (Denmark)

    Won, Kyoung-Jae; Sandelin, Albin; Marstrand, Troels Torben

    2008-01-01

    MOTIVATION: Describing and modeling biological features of eukaryotic promoters remains an important and challenging problem within computational biology. The promoters of higher eukaryotes in particular display a wide variation in regulatory features, which are difficult to model. Often several...... factors are involved in the regulation of a set of co-regulated genes. If so, promoters can be modeled with connected regulatory features, where the network of connections is characteristic for a particular mode of regulation. RESULTS: With the goal of automatically deciphering such regulatory structures......, we present a method that iteratively evolves an ensemble of regulatory grammars using a hidden Markov Model (HMM) architecture composed of interconnected blocks representing transcription factor binding sites (TFBSs) and background regions of promoter sequences. The ensemble approach reduces the risk...

  12. On the general procedure for modelling complex ecological systems

    International Nuclear Information System (INIS)

    He Shanyu.

    1987-12-01

    In this paper, the principle of a general procedure for modelling complex ecological systems, the Adaptive Superposition Procedure (ASP), is briefly stated. The results of applying ASP in a national project for ecological regionalization are also described. (author). 3 refs

  13. On Realism of Architectural Procedural Models

    Czech Academy of Sciences Publication Activity Database

    Beneš, J.; Kelly, T.; Děchtěrenko, Filip; Křivánek, J.; Müller, P.

    2017-01-01

    Roč. 36, č. 2 (2017), s. 225-234 ISSN 0167-7055 Grant - others:AV ČR(CZ) StrategieAV21/14 Program:StrategieAV Institutional support: RVO:68081740 Keywords : realism * procedural modeling * architecture Subject RIV: IN - Informatics, Computer Science OBOR OECD: Cognitive sciences Impact factor: 1.611, year: 2016

  14. Score-based prediction of genomic islands in prokaryotic genomes using hidden Markov models

    Directory of Open Access Journals (Sweden)

    Surovcik Katharina

    2006-03-01

    Full Text Available Abstract Background Horizontal gene transfer (HGT) is considered a strong evolutionary force shaping the content of microbial genomes in a substantial manner. It is the difference in speed, enabling rapid adaptation to changing environmental demands, that distinguishes HGT from gene genesis, duplication or mutation. For a precise characterization, algorithms are needed that identify transfer events with high reliability. Frequently, the transferred pieces of DNA have considerable length, comprise several genes and are called genomic islands (GIs) or, more specifically, pathogenicity or symbiotic islands. Results We have implemented the program SIGI-HMM, which predicts GIs and the putative donor of each individual alien gene. It is based on analyzing the codon usage (CU) of each individual gene of a genome under study. The CU of each gene is compared against a carefully selected set of CU tables representing microbial donors or highly expressed genes. Multiple tests are used to identify putatively alien genes, to predict putative donors and to mask putatively highly expressed genes. Thus, we determine the states and emission probabilities of an inhomogeneous hidden Markov model working at the gene level. For the transition probabilities, we draw upon classical test theory with the intention of integrating a sensitivity controller in a consistent manner. SIGI-HMM is written in JAVA and is publicly available. It accepts as input any file created according to the EMBL format. It generates output in the common GFF format readable by genome browsers. Benchmark tests showed that the output of SIGI-HMM is in agreement with known findings. Its predictions were consistent both with annotated GIs and with predictions generated by different methods. Conclusion SIGI-HMM is a sensitive tool for the identification of GIs in microbial genomes. It allows users to interactively analyze genomes in detail and to generate or test hypotheses about the origin of acquired
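The gene-level decoding idea behind this kind of tool can be illustrated with a toy two-state Viterbi pass over per-gene codon-usage scores. This is a simplified sketch, not SIGI-HMM itself: it uses homogeneous transitions rather than the paper's inhomogeneous, test-theory-derived ones, and all states, probabilities, and log-likelihoods below are invented:

```python
import math

# Toy gene-level HMM: two states ("native", "alien"), one observation
# per gene. Emission log-likelihoods per gene are assumed to come from
# codon-usage tests; the numbers below are made up for illustration.

STATES = ["native", "alien"]
LOG_TRANS = {("native", "native"): math.log(0.9), ("native", "alien"): math.log(0.1),
             ("alien", "native"): math.log(0.2), ("alien", "alien"): math.log(0.8)}
LOG_INIT = {"native": math.log(0.95), "alien": math.log(0.05)}

def viterbi(emission_ll):
    """emission_ll: list of dicts state -> log-likelihood, one per gene."""
    V = [{s: LOG_INIT[s] + emission_ll[0][s] for s in STATES}]
    back = []
    for e in emission_ll[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: V[-1][p] + LOG_TRANS[(p, s)])
            col[s] = V[-1][prev] + LOG_TRANS[(prev, s)] + e[s]
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    last = max(STATES, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Three "typical" genes followed by two with donor-like codon usage:
genes = [{"native": -1.0, "alien": -3.0}] * 3 + [{"native": -4.0, "alien": -1.0}] * 2
print(viterbi(genes))  # ['native', 'native', 'native', 'alien', 'alien']
```

The run of consecutive "alien" labels is what would be reported as a candidate genomic island.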

  15. Mobile Application Identification based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Yang Xinyan

    2018-01-01

    Full Text Available With the increasing number of mobile applications, there are more challenging network management tasks to resolve, and users face security issues when using mobile Internet applications. Identifying the applications that correspond to network traffic can help network operators perform network management effectively. Existing mobile application recognition techniques face two problems: they cannot recognize applications that use encryption protocols, and their scalability is poor. In this paper, a mobile application identification method based on the Hidden Markov Model (HMM) is proposed. Defined statistical characteristics are extracted from the different network flows generated when each application starts; the timing information of these flows yields corresponding time series, and a separate HMM is then built for each application to be identified. We use 10 common applications to test the proposed method. The test results show that it achieves high accuracy and good generalization ability.
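The identification step of such a per-application HMM scheme amounts to scoring a flow's observation sequence under each application's model and taking the argmax. The sketch below does this with tiny two-state discrete HMMs over binary packet-size bins; the two models, for hypothetical apps "chat" and "video", are invented, whereas the paper trains one HMM per application from start-up flow statistics:

```python
import math

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of obs under a discrete HMM, with rescaling."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    ll = 0.0
    for o in obs[1:] + [None]:          # extra pass folds in the last scale
        s = sum(alpha)
        ll += math.log(s)
        alpha = [a / s for a in alpha]
        if o is not None:
            alpha = [sum(alpha[p] * A[p][j] for p in range(n)) * B[j][o]
                     for j in range(n)]
    return ll

def identify(obs, models):
    """Pick the application whose HMM gives obs the highest likelihood."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))

apps = {
    "chat":  ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
              [[0.9, 0.1], [0.6, 0.4]]),   # emits mostly small packets (bin 0)
    "video": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]],
              [[0.1, 0.9], [0.4, 0.6]]),   # emits mostly large packets (bin 1)
}
print(identify([0, 0, 1, 0, 0, 0], apps))  # chat
```

The rescaling in the forward pass keeps the recursion numerically stable for long flows.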

  16. Intonation model for TTS in Sepedi

    CSIR Research Space (South Africa)

    Van Niekerk, DR

    2010-09-01

    Full Text Available the size of the tone-marked corpus does not lend itself to a comprehensive statistical analysis of the comparison results, we have identified a number of characteristics consistently exhibited in the natural F0 contours not accounted for in the tone... in the tone-marked set was synthesised with excitation signals derived from the standard HMM-based models, the tone-based model and the linearly declining contours discussed above. Listeners were asked to rate each sample using integers ranging from 1...

  17. Computer-Based Procedures for Field Workers in Nuclear Power Plants: Development of a Model of Procedure Usage and Identification of Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Katya Le Blanc; Johanna Oxstrand

    2012-04-01

    The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use performance, researchers, together with the nuclear industry, have been looking at replacing the current paper-based procedures with computer-based procedure systems. The concept of computer-based procedures is not new by any means; however, most research has focused on procedures used in the main control room. The procedures reviewed in these efforts are mainly emergency operating procedures and normal operating procedures. Based on lessons learned from these previous efforts, we are now exploring a less charted application for computer-based procedures: field procedures, i.e., procedures used by nuclear equipment operators and maintenance technicians. The Idaho National Laboratory and participants from the U.S. commercial nuclear industry are collaborating in an applied research effort with the objective of developing requirements and specifications for a computer-based procedure system to be used by field workers. The goal is to identify the types of human errors that can be mitigated by using computer-based procedures and how to best design the computer-based procedures to do so. This paper describes the development of a Model of Procedure Use and the qualitative study on which the model is based. The study was conducted in collaboration with four nuclear utilities and five research institutes. During the qualitative study and the model development, requirements for computer-based procedures were identified.

  18. A Novel HMM Distributed Classifier for the Detection of Gait Phases by Means of a Wearable Inertial Sensor Network

    Directory of Open Access Journals (Sweden)

    Juri Taborri

    2014-09-01

    Full Text Available In this work, we apply a hierarchical weighted decision, proposed and used in other research fields, to the recognition of gait phases. The developed and validated novel distributed classifier is based on a hierarchical weighted decision over the outputs of scalar Hidden Markov Models (HMMs) applied to the angular velocities of the foot, shank, and thigh. The angular velocities of ten healthy subjects were acquired via three uni-axial gyroscopes embedded in inertial measurement units (IMUs) during one walking task, repeated three times, on a treadmill. After validating the novel distributed classifier and the scalar and vectorial classifiers already proposed in the literature with a cross-validation, the classifiers were compared for sensitivity, specificity, and computational load for all combinations of the three targeted anatomical segments. Moreover, the performance of the novel distributed classifier in estimating gait variability, in terms of mean time and coefficient of variation, was evaluated. The highest values of specificity and sensitivity (>0.98) for the three classifiers examined here were obtained when the angular velocity of the foot was processed. The distributed and vectorial classifiers reached acceptable values (>0.95) when the angular velocities of the shank and thigh were analyzed. The distributed and scalar classifiers showed a computational load about 100 times lower than that of the vectorial classifier. In addition, the distributed classifiers showed excellent reliability for the evaluation of mean time and good-to-excellent reliability for the coefficient of variation. In conclusion, owing to its better performance and small computational load, the proposed novel distributed classifier can be implemented in real-time gait phase recognition applications, such as evaluating gait variability in patients or controlling active orthoses for the recovery of lower limb joint mobility.
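The fusion step of a hierarchical weighted decision can be sketched as a weighted average of the per-segment classifier scores, thresholded into a phase label. This is an illustrative simplification of the paper's scheme: the scores and weights below are hypothetical (e.g. weights proportional to each scalar classifier's validation accuracy):

```python
# Fuse per-segment scores in [0, 1] for "stance" (foot, shank, thigh)
# into one decision via a weighted average thresholded at 0.5.

def fuse(scores, weights):
    assert len(scores) == len(weights)
    total = sum(weights)
    combined = sum(s * w for s, w in zip(scores, weights)) / total
    return ("stance" if combined >= 0.5 else "swing"), combined

# Hypothetical scores from the foot, shank, and thigh classifiers,
# weighted by hypothetical per-classifier accuracies:
label, conf = fuse([0.9, 0.6, 0.4], [0.98, 0.95, 0.95])
print(label)  # stance
```

Because the foot classifier is both the most confident and the most heavily weighted, it dominates the fused decision, mirroring the paper's finding that the foot signal is the most informative.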

  19. Communication and Procedural Models of the E-Commerce Systems

    Directory of Open Access Journals (Sweden)

    Petr SUCHÁNEK

    2009-06-01

    Full Text Available E-commerce systems have become a standard interface between sellers (or suppliers) and customers. One basic condition for an e-commerce system to be efficient is the correct definition and description of all internal and external processes, all targeted at customers' needs and requirements. The most exact way to find an optimal solution for an e-commerce system and its process structure in companies is modeling and simulation. In this article the author presents a basic model of communication between customers and sellers, incorporating customer feedback, together with procedural models of e-commerce systems in terms of e-shops. The procedural model was created with the aid of SOA definitions.

  20. State-space model with deep learning for functional dynamics estimation in resting-state fMRI.

    Science.gov (United States)

    Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang

    2016-04-01

    Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By using multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that accounts for the data. After a review of the general method, we present an implementation of the procedure in the framework of R-project, followed by some experiments using a Monte Carlo method.
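The averaging model referred to here computes the response as a weighted mean of scale values, R = (w0·s0 + Σ wi·si) / (w0 + Σ wi), where (w0, s0) is the initial-state term. A minimal sketch, with illustrative parameter values (not estimates from R-Average):

```python
# Averaging model of Information Integration Theory: the response is
# the weighted mean of scale values s_i with weights w_i, plus an
# initial-state term (w0, s0).

def averaging_response(w0, s0, weights, scales):
    num = w0 * s0 + sum(w * s for w, s in zip(weights, scales))
    den = w0 + sum(weights)
    return num / den

# With no initial-state weight and equal attribute weights, the
# response reduces to the plain mean of the scale values:
print(averaging_response(0.0, 0.0, [1.0, 1.0], [4.0, 8.0]))  # 6.0
```

Adding an attribute with a neutral scale value pulls the response toward that value without any interaction term, which is the set-size behaviour that distinguishes averaging from adding models.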

  2. Bearing Fault Classification Based on Conditional Random Field

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2013-01-01

    Full Text Available Condition monitoring of rolling element bearings is paramount for predicting the lifetime and performing effective maintenance of mechanical equipment. To overcome the drawbacks of the hidden Markov model (HMM) and improve diagnosis accuracy, a conditional random field (CRF) model based classifier is proposed. In this model, the feature vector sequences and the fault categories are linked by an undirected graphical model in which their relationship is represented by a global conditional probability distribution. In comparison with the HMM, the main advantage of the CRF model is that it can depict the temporal dynamic information between the observation sequences and state sequences without assuming the independence of the input feature vectors. Therefore, the interrelationship between adjacent observation vectors can also be depicted and integrated into the model, which makes the classifier more robust and accurate than the HMM. To evaluate the effectiveness of the proposed method, four kinds of bearing vibration signals, corresponding to normal, inner race pit, outer race pit and roller pit conditions respectively, were collected from a test rig. CRF and HMM models were then built to perform fault classification, taking the sub-band energy features of wavelet packet decomposition (WPD) as the observation sequences, and K-fold cross-validation was adopted to improve the evaluation accuracy of the classifiers. The analysis and comparison under different numbers of folds show that the classification accuracy of the CRF model is higher than that of the HMM. This method sheds new light on the accurate classification of bearing faults.
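The K-fold cross-validation used to evaluate the classifiers can be sketched generically: split the data into K folds, train on K-1 folds, test on the held-out fold, and average the accuracies. The `train`/`predict` callables below stand in for the CRF/HMM fitting code; the toy majority-class "classifier" and data are invented:

```python
# Generic K-fold cross-validation: returns mean held-out accuracy.

def kfold_accuracy(X, y, k, train, predict):
    n = len(X)
    folds = [list(range(i, n, k)) for i in range(k)]   # interleaved folds
    accs = []
    for test_idx in folds:
        train_idx = [i for i in range(n) if i not in test_idx]
        model = train([X[i] for i in train_idx], [y[i] for i in train_idx])
        correct = sum(predict(model, X[i]) == y[i] for i in test_idx)
        accs.append(correct / len(test_idx))
    return sum(accs) / k

# Toy use with a majority-class "classifier" standing in for CRF/HMM:
def train_majority(X_tr, y_tr):
    return max(set(y_tr), key=y_tr.count)

def predict_majority(model, x):
    return model

X = list(range(10))
y = ["normal"] * 8 + ["fault"] * 2
print(kfold_accuracy(X, y, 5, train_majority, predict_majority))  # 0.8
```

Averaging over folds reduces the variance of the accuracy estimate compared to a single train/test split, which is why the paper varies the number of folds.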

  3. DNA motif elucidation using belief propagation.

    Science.gov (United States)

    Wong, Ka-Chun; Chan, Tak-Ming; Peng, Chengbin; Li, Yue; Zhang, Zhaolei

    2013-09-01

    Protein-binding microarray (PBM) is a high-throughput platform that can measure the DNA-binding preference of a protein in a comprehensive and unbiased manner. A typical PBM experiment can measure binding signal intensities of a protein to all the possible DNA k-mers (k=8∼10); such comprehensive binding affinity data usually need to be reduced and represented as motif models before they can be further analyzed and applied. Since proteins can often bind to DNA in multiple modes, one of the major challenges is to decompose the comprehensive affinity data into multimodal motif representations. Here, we describe a new algorithm that uses Hidden Markov Models (HMMs) and can derive precise and multimodal motifs using belief propagations. We describe an HMM-based approach using belief propagations (kmerHMM), which accepts and preprocesses PBM probe raw data into median-binding intensities of individual k-mers. The k-mers are ranked and aligned for training an HMM as the underlying motif representation. Multiple motifs are then extracted from the HMM using belief propagations. Comparisons of kmerHMM with other leading methods on several data sets demonstrated its effectiveness and uniqueness. In particular, it achieved the best performance on more than half of the data sets. In addition, the multiple binding modes derived by kmerHMM are biologically meaningful and will be useful in interpreting other genome-wide data such as those generated from ChIP-seq. The executables and source codes are available at the authors' websites: e.g. http://www.cs.toronto.edu/∼wkc/kmerHMM.
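The preprocessing step described here, collapsing raw probe intensities into a median-binding intensity per k-mer and ranking the k-mers, can be sketched directly. The probe sequences and intensities below are invented for illustration; real PBM data would have tens of thousands of probes:

```python
from collections import defaultdict
from statistics import median

# kmerHMM-style preprocessing sketch: every occurrence of a k-mer in a
# probe contributes that probe's intensity; each k-mer is then summarized
# by the median of its intensities and ranked.

def median_intensity(probes, k):
    hits = defaultdict(list)
    for seq, intensity in probes:
        for i in range(len(seq) - k + 1):
            hits[seq[i:i + k]].append(intensity)
    return {kmer: median(v) for kmer, v in hits.items()}

probes = [("ACGTAC", 2.0), ("CGTACG", 3.0), ("TTTTTT", 0.5)]
ranked = sorted(median_intensity(probes, 4).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked[0])  # ('TACG', 3.0)
```

The top-ranked k-mers would then be aligned and used to train the HMM motif representation.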

  4. DNA motif elucidation using belief propagation

    KAUST Repository

    Wong, Ka-Chun; Chan, Tak-Ming; Peng, Chengbin; Li, Yue; Zhang, Zhaolei

    2013-01-01

    Protein-binding microarray (PBM) is a high-throughput platform that can measure the DNA-binding preference of a protein in a comprehensive and unbiased manner. A typical PBM experiment can measure binding signal intensities of a protein to all the possible DNA k-mers (k = 8∼10); such comprehensive binding affinity data usually need to be reduced and represented as motif models before they can be further analyzed and applied. Since proteins can often bind to DNA in multiple modes, one of the major challenges is to decompose the comprehensive affinity data into multimodal motif representations. Here, we describe a new algorithm that uses Hidden Markov Models (HMMs) and can derive precise and multimodal motifs using belief propagations. We describe an HMM-based approach using belief propagations (kmerHMM), which accepts and preprocesses PBM probe raw data into median-binding intensities of individual k-mers. The k-mers are ranked and aligned for training an HMM as the underlying motif representation. Multiple motifs are then extracted from the HMM using belief propagations. Comparisons of kmerHMM with other leading methods on several data sets demonstrated its effectiveness and uniqueness. In particular, it achieved the best performance on more than half of the data sets. In addition, the multiple binding modes derived by kmerHMM are biologically meaningful and will be useful in interpreting other genome-wide data such as those generated from ChIP-seq. The executables and source codes are available at the authors' websites: e.g. http://www.cs.toronto.edu/∼wkc/kmerHMM. 2013 The Author(s).

  5. DNA motif elucidation using belief propagation

    KAUST Repository

    Wong, Ka-Chun

    2013-06-29

    Protein-binding microarray (PBM) is a high-throughput platform that can measure the DNA-binding preference of a protein in a comprehensive and unbiased manner. A typical PBM experiment can measure binding signal intensities of a protein to all the possible DNA k-mers (k = 8∼10); such comprehensive binding affinity data usually need to be reduced and represented as motif models before they can be further analyzed and applied. Since proteins can often bind to DNA in multiple modes, one of the major challenges is to decompose the comprehensive affinity data into multimodal motif representations. Here, we describe a new algorithm that uses Hidden Markov Models (HMMs) and can derive precise and multimodal motifs using belief propagations. We describe an HMM-based approach using belief propagations (kmerHMM), which accepts and preprocesses PBM probe raw data into median-binding intensities of individual k-mers. The k-mers are ranked and aligned for training an HMM as the underlying motif representation. Multiple motifs are then extracted from the HMM using belief propagations. Comparisons of kmerHMM with other leading methods on several data sets demonstrated its effectiveness and uniqueness. In particular, it achieved the best performance on more than half of the data sets. In addition, the multiple binding modes derived by kmerHMM are biologically meaningful and will be useful in interpreting other genome-wide data such as those generated from ChIP-seq. The executables and source codes are available at the authors' websites: e.g. http://www.cs.toronto.edu/∼wkc/kmerHMM. 2013 The Author(s).

  6. A model to determine payments associated with radiology procedures.

    Science.gov (United States)

    Mabotuwana, Thusitha; Hall, Christopher S; Thomas, Shiby; Wald, Christoph

    2017-12-01

    Across the United States, a growing number of patients are in Accountable Care Organizations and under risk contracts with commercial insurance, owing to the proliferation of new value-based payment models and care delivery reform efforts. In this context, the business model of radiology within a hospital or health system is shifting from a primary profit-center to a cost-center with a goal of cost savings. Radiology departments increasingly need to understand how the transactional nature of the business relates to financial rewards. The main challenge with current reporting systems is that information is presented only at an aggregated level, and often not broken down further, for instance by type of exam. The primary objective of this research is therefore to provide better visibility into the payments associated with individual radiology procedures, in order to better calibrate the expense/capital structure of the imaging enterprise to its actual revenue or value-add to the organization. We propose a methodology that can be used to determine technical payments at the procedure level. We use a proportion-based model that allocates payments to individual radiology procedures based on total charges (which also include non-radiology charges). Using a production dataset containing 424,250 radiology exams, we calculated the overall average technical charge for radiology to be $873.08 per procedure and the corresponding average payment to be $326.43 (range: $48.27 for XR to $2750.11 for PET/CT), yielding an average payment percentage of 37.39% across all exams. We describe how the charges associated with a procedure can be used to approximate technical payments at a more granular level, with a focus on radiology. The methodology is generalizable to approximate payments for other services as well. Understanding the payments associated with each procedure can be useful during strategic practice planning. Charge-to-total charge ratio can be used to
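A proportion-based allocation of this kind can be sketched in a few lines: each procedure's technical payment is approximated as the total payment multiplied by that procedure's share of total charges. The procedure names and dollar figures below are illustrative, not from the paper's dataset:

```python
# Proportion-based payment allocation: payment_i = total_payment *
# charge_i / sum(charges). Allocated payments sum to the total by
# construction.

def allocate_payment(total_payment, charges):
    total_charge = sum(charges.values())
    return {proc: total_payment * c / total_charge
            for proc, c in charges.items()}

charges = {"XR chest": 150.0, "CT abdomen": 850.0}
payments = allocate_payment(400.0, charges)
print(payments["CT abdomen"])  # 340.0
```

The per-procedure payment-to-charge ratio (here 400/1000 = 40%) plays the role of the average payment percentage reported in the abstract.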

  7. Efficient Blind System Identification of Non-Gaussian Auto-Regressive Models with HMM Modeling of the Excitation

    DEFF Research Database (Denmark)

    Li, Chunjian; Andersen, Søren Vang

    2007-01-01

    We propose two blind system identification methods that exploit the underlying dynamics of non-Gaussian signals. The two signal models to be identified are: an Auto-Regressive (AR) model driven by a discrete-state Hidden Markov process, and the same model whose output is perturbed by white Gaussian...... outputs. The signal models are general and suitable to numerous important signals, such as speech signals and base-band communication signals. Applications to speech analysis and blind channel equalization are given to exemplify the efficiency of the new methods.

  8. Office-based deep sedation for pediatric ophthalmologic procedures using a sedation service model.

    Science.gov (United States)

    Lalwani, Kirk; Tomlinson, Matthew; Koh, Jeffrey; Wheeler, David

    2012-01-01

    Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62-100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.

  9. Office-Based Deep Sedation for Pediatric Ophthalmologic Procedures Using a Sedation Service Model

    Directory of Open Access Journals (Sweden)

    Kirk Lalwani

    2012-01-01

    Full Text Available Aims. (1) To assess the efficacy and safety of pediatric office-based sedation for ophthalmologic procedures using a pediatric sedation service model. (2) To assess the reduction in hospital charges of this model of care delivery compared to the operating room (OR) setting for similar procedures. Background. Sedation is used to facilitate pediatric procedures and to immobilize patients for imaging and examination. We believe that the pediatric sedation service model can be used to facilitate office-based deep sedation for brief ophthalmologic procedures and examinations. Methods. After IRB approval, all children who underwent office-based ophthalmologic procedures at our institution between January 1, 2000 and July 31, 2008 were identified using the sedation service database and the electronic health record. A comparison of hospital charges between similar procedures in the operating room was performed. Results. A total of 855 procedures were reviewed. Procedure completion rate was 100% (C.I. 99.62–100). There were no serious complications or unanticipated admissions. Our analysis showed a significant reduction in hospital charges (average of $1287 per patient) as a result of absent OR and recovery unit charges. Conclusions. Pediatric ophthalmologic minor procedures can be performed using a sedation service model with significant reductions in hospital charges.

  10. Procedural Content Graphs for Urban Modeling

    Directory of Open Access Journals (Sweden)

    Pedro Brandão Silva

    2015-01-01

    Full Text Available Massive procedural content creation, for example for virtual urban environments, is a difficult yet important challenge. While shape grammars are popular and effective in architectural modeling, they have clear limitations in readability, manageability, and expressive power when addressing a variety of complex structural designs. Moreover, shape grammars aim at geometry specification and do not facilitate integration with other types of content, such as textures or light sources, which could accompany the generation process. We present procedural content graphs, a graph-based solution for procedural generation that addresses all these issues in a visual, flexible, and more expressive manner. Besides integrating the handling of diverse types of content, this approach introduces collective entity manipulation as lists, seamlessly providing features such as advanced filtering, grouping, merging, ordering, and aggregation that are essentially unavailable in shape grammars. In this way, separate entities can easily be merged or analyzed together in order to perform a variety of context-based decisions and operations. The advantages of this approach are illustrated via examples of tasks that are either very cumbersome or simply impossible to express with previous grammar approaches.
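The collective, list-based entity manipulation described here can be sketched as graph nodes that each map a list of entities to a new list, composed into a pipeline. This is a loose illustration of the idea, not the paper's system; the entity fields (`kind`, `level`, `height`, `floors`) are invented:

```python
# Sketch of procedural-content-graph nodes as list-to-list functions:
# a split node expands one entity into many, a filter node selects a
# subset, and a group node partitions entities by a key.

def split_floors(building):
    return [{"kind": "floor", "level": i, "height": 3.0}
            for i in range(building["floors"])]

def filter_entities(entities, pred):
    return [e for e in entities if pred(e)]

def group_by(entities, key):
    groups = {}
    for e in entities:
        groups.setdefault(key(e), []).append(e)
    return groups

floors = split_floors({"kind": "building", "floors": 5})
upper = filter_entities(floors, lambda e: e["level"] >= 3)
print([e["level"] for e in upper])  # [3, 4]
```

Because every node sees the whole list, context-dependent decisions (e.g. styling only the upper floors) fall out naturally, which is exactly what single-entity rewriting in shape grammars makes cumbersome.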

  11. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the need to include software reliability and/or safety in the NPP Probabilistic Safety Assessment (PSA) grows. This work proposes a procedure for applying software reliability growth models (RGMs), the most widely used means of quantifying software reliability, to NPP PSA. Through the proposed procedure, it can be determined whether a software reliability growth model is applicable to the NPP PSA before its actual application. The proposed procedure is expected to be very helpful for incorporating software into NPP PSA.
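The abstract does not commit to a particular RGM, but a classical instance is the Goel-Okumoto NHPP model, in which the expected number of failures observed by time t is m(t) = a·(1 - exp(-b·t)), with a the expected total number of failures and b the detection rate. A minimal sketch with illustrative parameter values:

```python
import math

# Goel-Okumoto software reliability growth model (one classical RGM;
# parameter values below are illustrative, not from any real dataset).

def goel_okumoto(t, a, b):
    """Expected cumulative failures by time t: m(t) = a * (1 - e^{-b t})."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """Instantaneous failure rate: lambda(t) = m'(t) = a * b * e^{-b t}."""
    return a * b * math.exp(-b * t)

a, b = 100.0, 0.05
print(round(goel_okumoto(40.0, a, b), 2))  # 86.47
```

In the proposed procedure, one would first check such a model's applicability (e.g. goodness of fit to observed failure data) before using its reliability estimates in the PSA.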

  12. De novo identification of replication-timing domains in the human genome by deep learning.

    Science.gov (United States)

    Liu, Feng; Ren, Chao; Li, Hao; Zhou, Pingkun; Bo, Xiaochen; Shu, Wenjie

    2016-03-01

    The de novo identification of the initiation and termination zones, regions that replicate earlier or later than their upstream and downstream neighbours respectively, remains a key challenge in DNA replication. Building on advances in deep learning, we developed a novel hybrid architecture combining a pre-trained deep neural network and a hidden Markov model (DNN-HMM) for the de novo identification of replication domains using replication timing profiles. Our results demonstrate that the DNN-HMM can significantly outperform strong, discriminatively trained Gaussian mixture model-HMM (GMM-HMM) systems and six other reported methods applicable to this challenge. We applied our trained DNN-HMM to identify distinct replication domain types, namely the early replication domain (ERD), the down transition zone (DTZ), the late replication domain (LRD) and the up transition zone (UTZ), using newly replicated DNA sequencing (Repli-Seq) data across 15 human cells. A subsequent integrative analysis revealed that these replication domains harbour unique genomic and epigenetic patterns, transcriptional activity and higher-order chromosomal structure. Our findings support the 'replication-domain' model, which states (1) that ERDs and LRDs, connected by UTZs and DTZs, are spatially compartmentalized structural and functional units of higher-order chromosomal structure, (2) that adjacent DTZ-UTZ pairs form chromatin loops and (3) that intra-interactions within ERDs and LRDs tend to be short-range and long-range, respectively. Our model reveals an important chromatin organizational principle of the human genome and represents a critical step towards understanding the mechanisms regulating replication timing. Our DNN-HMM method and three additional algorithms can be freely accessed at https://github.com/wenjiegroup/DNN-HMM. The replication domain regions identified in this study are available in GEO under the accession ID GSE53984. shuwj@bmi.ac.cn or boxc

  13. Hidden Markov model approach for identifying the modular framework of the protein backbone.

    Science.gov (United States)

    Camproux, A C; Tuffery, P; Chevrolat, J P; Boisvieux, J F; Hazout, S

    1999-12-01

    The hidden Markov model (HMM) was used to identify recurrent short 3D structural building blocks (SBBs) describing protein backbones, independently of any a priori knowledge. Polypeptide chains are decomposed into a series of short segments defined by their inter-alpha-carbon distances. Basically, the model takes into account the sequentiality of the observed segments and assumes that each one corresponds to one of several possible SBBs. Fitting the model to a database of non-redundant proteins allowed us to decode proteins in terms of 12 distinct SBBs with different roles in protein structure. Some SBBs correspond to classical regular secondary structures. Others correspond to a significant subdivision of their bounding regions previously considered to be a single pattern. The major contribution of the HMM is that this model implicitly takes into account the sequential connections between SBBs and thus describes the most probable pathways by which the blocks are connected to form the framework of the protein structures. Validation of the SBBs code was performed by extracting SBB series repeated in recoding proteins and examining their structural similarities. Preliminary results on the sequence specificity of SBBs suggest promising perspectives for the prediction of SBBs or series of SBBs from the protein sequences.

  14. PROCRU: A model for analyzing crew procedures in approach to landing

    Science.gov (United States)

    Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.

    1980-01-01

    A model for analyzing crew procedures in approach to landing is developed. The model employs the information processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multitask environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.

  15. [The emphases and basic procedures of genetic counseling in psychotherapeutic model].

    Science.gov (United States)

    Zhang, Yuan-Zhi; Zhong, Nanbert

    2006-11-01

    In the psychotherapeutic model, the emphases and basic procedures of genetic counseling differ from those of older models. Genetic counseling will focus not only on counselees' genetic disorders and birth defects, but also on their psychological problems. "Client-centered therapy", as termed by Carl Rogers, plays an important role in the genetic counseling process. The basic procedure of the psychotherapeutic model of genetic counseling comprises seven steps: initial contact, introduction, agendas, inquiry into family history, presenting information, closing the session and follow-up.

  16. Bayesian Modelling of fMRI Time Series

    DEFF Research Database (Denmark)

    Højen-Sørensen, Pedro; Hansen, Lars Kai; Rasmussen, Carl Edward

    2000-01-01

    We present a Hidden Markov Model (HMM) for inferring the hidden psychological state (or neural activity) during single trial fMRI activation experiments with blocked task paradigms. Inference is based on Bayesian methodology, using a combination of analytical and a variety of Markov Chain Monte Carlo (MCMC) sampling techniques. The advantage of this method is that detection of short time learning effects between repeated trials is possible since inference is based only on single trial experiments....

  17. A RENORMALIZATION PROCEDURE FOR TENSOR MODELS AND SCALAR-TENSOR THEORIES OF GRAVITY

    OpenAIRE

    SASAKURA, NAOKI

    2010-01-01

    Tensor models are more-index generalizations of the so-called matrix models, and provide models of quantum gravity with the idea that spaces and general relativity are emergent phenomena. In this paper, a renormalization procedure for the tensor models whose dynamical variable is a totally symmetric real three-tensor is discussed. It is proven that configurations with certain Gaussian forms are the attractors of the three-tensor under the renormalization procedure. Since these Gaussian config...

  18. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    ... procedure acceptable to the NRC staff for providing summary details of mathematical modeling methods used in... NUCLEAR REGULATORY COMMISSION [NRC-2013-0062] Reporting Procedure for Mathematical Models Selected... Regulatory Guide (RG) 4.4, ``Reporting Procedure for Mathematical Models Selected to Predict Heated Effluent...

  19. Time series segmentation: a new approach based on Genetic Algorithm and Hidden Markov Model

    Science.gov (United States)

    Toreti, A.; Kuglitsch, F. G.; Xoplaki, E.; Luterbacher, J.

    2009-04-01

    The subdivision of a time series into homogeneous segments has been performed using various methods applied to different disciplines. In climatology, for example, it is accompanied by the well-known homogenization problem and the detection of artificial change points. In this context, we present a new method (GAMM) based on a Hidden Markov Model (HMM) and a Genetic Algorithm (GA), applicable to series of independent observations (and easily adaptable to autoregressive processes). A left-to-right hidden Markov model was applied, estimating the parameters and the best-state sequence with the Baum-Welch and Viterbi algorithms, respectively. In order to avoid the well-known dependence of the Baum-Welch algorithm on its initial condition, a Genetic Algorithm was developed, characterized by mutation, elitism and a crossover procedure implemented with some restrictive rules. Moreover, the function to be minimized was derived following the approach of Kehagias (2004), i.e. the so-called complete log-likelihood. The number of states was determined by applying a two-fold cross-validation procedure (Celeux and Durand, 2008). Since this last issue is complex and influences the whole analysis, a Multi Response Permutation Procedure (MRPP; Mielke et al., 1981) was added: it tests the model with K+1 states (where K is the number of states of the best model) whenever its likelihood is close to that of the K-state model. Finally, an evaluation of the GAMM performance, applied as a break detection method in the field of climate time series homogenization, is shown. 1. G. Celeux and J.B. Durand, Comput Stat, 2008. 2. A. Kehagias, Stoch Envir Res, 2004. 3. P.W. Mielke, K.J. Berry, G.W. Brier, Monthly Wea Rev, 1981.
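    The initial-condition sensitivity of Baum-Welch that motivates the GA here can be illustrated with a simpler stand-in: several random restarts of EM on a discrete HMM, keeping the run with the best log-likelihood. The following pure-Python sketch is illustrative only (the restart loop replaces the GA, and all parameter choices are ours, not GAMM's):

    ```python
    import math
    import random

    def _norm(row):
        s = sum(row)
        return [x / s for x in row]

    def forward_backward(obs, pi, A, B):
        # Scaled forward-backward recursions for a discrete-output HMM.
        N, T = len(pi), len(obs)
        alpha, scale = [], []
        first = [pi[i] * B[i][obs[0]] for i in range(N)]
        scale.append(sum(first))
        alpha.append([x / scale[0] for x in first])
        for t in range(1, T):
            cur = [B[j][obs[t]] * sum(alpha[t - 1][i] * A[i][j] for i in range(N))
                   for j in range(N)]
            scale.append(sum(cur))
            alpha.append([x / scale[t] for x in cur])
        beta = [[1.0] * N for _ in range(T)]
        for t in range(T - 2, -1, -1):
            for i in range(N):
                beta[t][i] = sum(A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                                 for j in range(N)) / scale[t + 1]
        return alpha, beta, scale, sum(math.log(c) for c in scale)

    def baum_welch(obs, N, M, iters=30, rng=random):
        # EM from one random initialisation; returns the log-likelihood history.
        pi = _norm([rng.random() + 0.1 for _ in range(N)])
        A = [_norm([rng.random() + 0.1 for _ in range(N)]) for _ in range(N)]
        B = [_norm([rng.random() + 0.1 for _ in range(M)]) for _ in range(N)]
        T, history = len(obs), []
        for _ in range(iters):
            alpha, beta, scale, loglik = forward_backward(obs, pi, A, B)
            history.append(loglik)
            gamma = [[alpha[t][i] * beta[t][i] for i in range(N)] for t in range(T)]
            xi = [[[alpha[t][i] * A[i][j] * B[j][obs[t + 1]] * beta[t + 1][j] / scale[t + 1]
                    for j in range(N)] for i in range(N)] for t in range(T - 1)]
            pi = gamma[0][:]
            for i in range(N):
                d = sum(gamma[t][i] for t in range(T - 1))
                A[i] = [sum(xi[t][i][j] for t in range(T - 1)) / d for j in range(N)]
            for j in range(N):
                d = sum(gamma[t][j] for t in range(T))
                B[j] = [sum(gamma[t][j] for t in range(T) if obs[t] == k) / d
                        for k in range(M)]
        return history, (pi, A, B)

    def best_of_restarts(obs, N, M, restarts=5, seed=0):
        # Stand-in for the paper's GA: independent random restarts, keep the best.
        rng = random.Random(seed)
        return max((baum_welch(obs, N, M, rng=rng) for _ in range(restarts)),
                   key=lambda run: run[0][-1])
    ```

    Each EM run improves the log-likelihood monotonically, but different initialisations can converge to different local optima, which is exactly the behaviour the GA in GAMM is designed to escape.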

  20. Hidden Markov models for the activity profile of terrorist groups

    OpenAIRE

    Raghavan, Vasanthan; Galstyan, Aram; Tartakovsky, Alexander G.

    2012-01-01

    The main focus of this work is on developing models for the activity profile of a terrorist group, detecting sudden spurts and downfalls in this profile, and, in general, tracking it over a period of time. Toward this goal, a $d$-state hidden Markov model (HMM) that captures the latent states underlying the dynamics of the group and thus its activity profile is developed. The simplest setting of $d=2$ corresponds to the case where the dynamics are coarsely quantized as Active and Inactive, re...

  1. A Framework for Estimating Long Term Driver Behavior

    Directory of Open Access Journals (Sweden)

    Vijay Gadepally

    2017-01-01

    Full Text Available We present a framework for estimation of long term driver behavior for autonomous vehicles and vehicle safety systems. The Hybrid State System and Hidden Markov Model (HSS+HMM) system discussed in this article is capable of describing the hybrid characteristics of driver and vehicle coupling. In our model, driving observations follow a continuous trajectory that can be measured to create continuous state estimates. These continuous state estimates can then be used to estimate the most likely driver state using decision-behavior coupling inherent to the HSS+HMM system. The HSS+HMM system is encompassed in an HSS structure and intersystem connectivity is determined by using signal processing and pattern recognition techniques. The proposed method is suitable for a number of autonomous and vehicle safety scenarios such as estimating intent of other vehicles near intersections or avoiding hazardous driving events such as unexpected lane changes. The long term driver behavior estimation system involves an extended HSS+HMM structure that is capable of including external information in the estimation process. Through the grafting and pruning of metastates, the HSS+HMM system can be dynamically updated to best represent driver choices given external information. Three application examples are also provided to elucidate the theoretical system.

  2. Procedural Optimization Models for Multiobjective Flexible JSSP

    Directory of Open Access Journals (Sweden)

    Elena Simona NICOARA

    2013-01-01

    Full Text Available The most challenging issues related to manufacturing efficiency occur when the jobs to be scheduled are structurally different, when these jobs allow flexible routings on the equipment, and when multiple objectives are required. This framework, called Multi-objective Flexible Job Shop Scheduling Problems (MOFJSSP), applicable to many real processes, has been less reported in the literature than the JSSP framework, which has been extensively formalized, modeled and analyzed from many perspectives. MOFJSSPs lie, like many other NP-hard problems, at the difficult intersection of the vast optimization theory and the real-world context. The paper discusses the optimization models best suited to MOFJSSPs and analyzes in detail genetic algorithms and agent-based models as the most appropriate procedural models.

  3. An incremental procedure model for e-learning projects at universities

    Directory of Open Access Journals (Sweden)

    Pahlke, Friedrich

    2006-11-01

    Full Text Available E-learning projects at universities are produced under different conditions than in industry. The main characteristic of many university projects is that they are realized essentially as solo efforts. In contrast, in private industry the different interdisciplinary skills necessary for the development of e-learning are typically supplied by a multimedia agency. A specific procedure tailored for use at universities is therefore required to help master the amount and complexity of the tasks. In this paper an incremental procedure model is presented which describes the proceeding in every phase of the project. It allows a high degree of flexibility and emphasizes the didactical concept instead of the technical implementation. In the second part, we illustrate the practical use of the theoretical procedure model based on the project "Online training in Genetic Epidemiology".

  4. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing

    2016-09-20

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed, and applied to experimental data for heavy fuel oil measured by thermogravimetry. The new procedure is based on pseudocomponents representing different reaction stages, which are determined by a systematic optimization process that ensures the different reaction stages are separated with the highest accuracy. The procedure was implemented and the model prediction compared against that from a conventional method, yielding significantly improved agreement with the experimental data. © 2016 American Chemical Society.

  5. A Viterbi Computation for a Constrained Hidden Markov Model

    DEFF Research Database (Denmark)

    Petit, Matthieu; Christiansen, Henning

    2009-01-01

    A hidden Markov model (HMM) is a statistical model in which the system being modeled is assumed to be a Markov process with hidden states. This model has been widely used in speech recognition and biological sequence analysis. The Viterbi algorithm has been proposed to compute the most probable value.... Several constraint techniques are used to reduce the search for the most probable value of the hidden states of a constrained HMM. An implementation based on PRISM, a logic programming language for statistical modeling, is presented.
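    The constrained search described above can be sketched with a standard Viterbi recursion plus a per-position mask of permitted states, which prunes the dynamic-programming table exactly as a hard constraint would. This is a generic illustration, not the PRISM implementation; all probabilities below are made up:

    ```python
    def viterbi(obs, states, start, trans, emit, allowed=None):
        # Most probable hidden-state path; `allowed[t]` optionally restricts the
        # states permitted at position t (a simple form of constrained HMM).
        V = [{s: (start[s] * emit[s][obs[0]]
                  if allowed is None or s in allowed[0] else 0.0)
              for s in states}]
        back = [{}]
        for t in range(1, len(obs)):
            V.append({}); back.append({})
            for s in states:
                if allowed is not None and s not in allowed[t]:
                    V[t][s], back[t][s] = 0.0, None
                    continue
                prev = max(states, key=lambda r: V[t - 1][r] * trans[r][s])
                V[t][s] = V[t - 1][prev] * trans[prev][s] * emit[s][obs[t]]
                back[t][s] = prev
        last = max(states, key=lambda s: V[-1][s])
        path = [last]
        for t in range(len(obs) - 1, 0, -1):
            path.append(back[t][path[-1]])
        return list(reversed(path)), V[-1][last]

    # Toy two-state example (all numbers illustrative).
    states = ["H", "L"]
    start = {"H": 0.6, "L": 0.4}
    trans = {"H": {"H": 0.7, "L": 0.3}, "L": {"H": 0.4, "L": 0.6}}
    emit = {"H": {"a": 0.8, "b": 0.2}, "L": {"a": 0.3, "b": 0.7}}
    path, prob = viterbi("abb", states, start, trans, emit)
    # Forcing state "H" at position 1 changes the decoded path.
    cpath, _ = viterbi("abb", states, start, trans, emit,
                       allowed=[{"H", "L"}, {"H"}, {"H", "L"}])
    ```

    Zeroing out forbidden cells rather than enumerating paths keeps the constrained decoding at the same cost as plain Viterbi.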

  6. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages.
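    The single-design-matrix idea can be sketched in miniature: a piecewise-linear calibration with a hinge basis term at each segment boundary, fitted by one ordinary least-squares solve over all the data. The knot position, basis and synthetic data below are hypothetical illustrations, not the Thomas-Liebetrau model itself:

    ```python
    def design_row(h, knots):
        # One basis row: intercept, liquid level, and a hinge term per segment
        # boundary, so a single design matrix spans all calibration segments.
        return [1.0, h] + [max(0.0, h - k) for k in knots]

    def gauss_solve(M, v):
        # Tiny Gaussian elimination with partial pivoting.
        n = len(M)
        M = [row[:] + [v[i]] for i, row in enumerate(M)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
        return x

    def fit_calibration(levels, volumes, knots):
        # Ordinary least squares via the normal equations.
        X = [design_row(h, knots) for h in levels]
        p = len(X[0])
        XtX = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
        Xty = [sum(row[i] * y for row, y in zip(X, volumes)) for i in range(p)]
        return gauss_solve(XtX, Xty)

    # Synthetic two-segment calibration data with a knot at level 5.0
    # (coefficients and knot are hypothetical, for illustration only).
    true_beta = [2.0, 1.5, 0.8]
    levels = [0.5 * i for i in range(21)]
    volumes = [sum(b * x for b, x in zip(true_beta, design_row(h, [5.0])))
               for h in levels]
    beta = fit_calibration(levels, volumes, [5.0])
    ```

    Because every segment shares one model, the fitted function is continuous across the knot and its variance estimate draws on all of the data, which is the advantage the abstract emphasizes.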

  7. Real-Time Landmine Detection with Ground-Penetrating Radar Using Discriminative and Adaptive Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Ho KC

    2005-01-01

    Full Text Available We propose a real-time software system for landmine detection using ground-penetrating radar (GPR). The system includes an efficient and adaptive preprocessing component; a hidden Markov model (HMM) based detector; a corrective training component; and an incremental update of the background model. The preprocessing is based on frequency-domain processing and performs ground-level alignment and background removal. The HMM detector is an improvement of a previously proposed system (baseline). It includes additional pre- and postprocessing steps to improve time efficiency and enable real-time application. The corrective training component is used to adjust the initial model parameters to minimize the number of misclassified sequences. This component can be used offline, or online through feedback, to adapt an initial model to specific sites and environments. The background update component adjusts the parameters of the background model to adapt it to each lane during testing. The proposed software system is applied to data acquired from three outdoor test sites at different geographic locations, using a state-of-the-art array GPR prototype. The first collection was used for training, and the other two (containing data from more than 1200 m of simulated dirt and gravel roads) for testing. Our results indicate that, on average, corrective training can improve the performance by about 10% for each site. For individual lanes, the performance gain can reach 50%.

  8. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from filtering direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, allowing LES to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, in which the model's functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation that controls both structural and functional performance. The model derived from this second procedure proves to be more robust. It also provides stable LES for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.

  9. Strong convective storm nowcasting using a hybrid approach of convolutional neural network and hidden Markov model

    Science.gov (United States)

    Zhang, Wei; Jiang, Ling; Han, Lei

    2018-04-01

    Convective storm nowcasting refers to the prediction of convective weather initiation, development and decay over a very short term (typically 0 to 2 h). Despite marked progress over the past years, severe convective storm nowcasting remains a challenge. With the boom of machine learning, and of convolutional neural networks (CNNs) in particular, these techniques have been applied successfully in many fields. In this paper, we build a severe convective weather nowcasting system based on a CNN and a hidden Markov model (HMM) using reanalysis meteorological data. The goal of convective storm nowcasting here is to predict whether a convective storm will occur within 30 min. We use the CNN to compress the VDRAS reanalysis data into low-dimensional observation vectors for the HMM, and then obtain the development trend of strong convective weather in the form of a time series. The results show that our method extracts robust features without any manual feature selection and captures the development trend of strong convective storms.

  10. The plant operating procedure information modeling system for creation and maintenance of procedures

    International Nuclear Information System (INIS)

    Fanto, S.V.; Petras, D.S.; Reiner, R.T.; Frost, D.R.; Orendi, R.G.

    1990-01-01

    This paper reports that, as a result of the accident at Three Mile Island, regulatory requirements were issued to upgrade Emergency Operating Procedures (EOPs) for nuclear power plants. The use of human-factored, function-oriented EOPs was mandated to improve human reliability and to mitigate the consequences of a broad range of initiating events, subsequent failures and operator errors, without having to first diagnose the specific events. The Westinghouse Owners Group responded by developing the Emergency Response Guidelines (ERGs) in a human-factored, two-column format to aid the transfer of the improved technical information to the operator during transients and accidents. The ERGs are a network of 43 interrelated guidelines that specify operator actions to be taken during plant emergencies to restore the plant to a safe and stable condition. Each utility then translates these guidelines into plant-specific EOPs. The creation and maintenance of this large web of interconnected ERGs/EOPs is an extremely complex task. To aid procedure documentation specialists with this time-consuming and tedious task, the Plant Operating Procedure Information Modeling system was developed to provide a controlled and consistent means of building and maintaining the ERGs/EOPs and their supporting documentation.

  11. A hierarchical stochastic model for bistable perception.

    Directory of Open Access Journals (Sweden)

    Stefan Albert

    2017-11-01

    Full Text Available Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group

  12. A hierarchical stochastic model for bistable perception.

    Science.gov (United States)

    Albert, Stefan; Schmack, Katharina; Sterzer, Philipp; Schneider, Gaby

    2017-11-01

    Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group differences to

  13. New Procedure to Develop Lumped Kinetic Models for Heavy Fuel Oil Combustion

    KAUST Repository

    Han, Yunqing; Elbaz, Ayman M.; Roberts, William L.; Im, Hong G.

    2016-01-01

    A new procedure to develop accurate lumped kinetic models for complex fuels is proposed, and applied to the experimental data of the heavy fuel oil measured by thermogravimetry. The new procedure is based on the pseudocomponents representing

  14. A Procedure for Modeling Photovoltaic Arrays under Any Configuration and Shading Conditions

    Directory of Open Access Journals (Sweden)

    Daniel Gonzalez Montoya

    2018-03-01

    Full Text Available Photovoltaic (PV) arrays can be connected following regular or irregular connection patterns to form regular configurations (e.g., series-parallel, total cross-tied, bridge-linked, etc.) or irregular configurations, respectively. Several reported works propose models for a single configuration; hence, evaluating arrays with different configurations is a considerably time-consuming task. Moreover, if the PV array adopts an irregular configuration, the classical models cannot be used for its analysis. This paper proposes a modeling procedure for PV arrays connected in any configuration and operating under uniform or partial shading conditions. The procedure divides the array into smaller arrays, named sub-arrays, which can be solved independently. The modeling procedure selects the mesh-current solution or the node-voltage solution depending on the topology of each sub-array. Therefore, the proposed approach analyzes the PV array using the least number of nonlinear equations. The proposed solution is validated through simulation and experimental results, which demonstrate the model's capacity to reproduce the electrical behavior of PV arrays connected in any configuration.
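    Whatever the array topology, each sub-array solve ultimately rests on the nonlinear single-diode equation of one PV module. A minimal Newton iteration on that implicit equation, the building block such mesh/node procedures compose, can be sketched as follows (all device parameter values are illustrative placeholders, not values from the paper):

    ```python
    import math

    def diode_current(V, Iph=5.0, Is=1e-9, Rs=0.01, Rp=100.0, n=1.3, Vt=0.0259):
        # Newton's method on the implicit single-diode equation
        #   Iph - Is*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rp - I = 0,
        # solving for the terminal current I at a given terminal voltage V.
        I = Iph  # the photocurrent is a good starting guess
        for _ in range(200):
            x = (V + I * Rs) / (n * Vt)
            f = Iph - Is * (math.exp(x) - 1.0) - (V + I * Rs) / Rp - I
            df = -Is * math.exp(x) * Rs / (n * Vt) - Rs / Rp - 1.0
            step = f / df
            I -= step
            if abs(step) < 1e-12:
                break
        return I
    ```

    A full array model would assemble one such equation per module into the mesh-current or node-voltage system the paper describes, then solve the coupled set.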

  15. Visual perception of procedural textures: identifying perceptual dimensions and predicting generation models.

    Science.gov (United States)

    Liu, Jun; Dong, Junyu; Cai, Xiaoxu; Qi, Lin; Chantler, Mike

    2015-01-01

    Procedural models are widely used in computer graphics for generating realistic, natural-looking textures. However, these mathematical models are not perceptually meaningful, whereas the users, such as artists and designers, would prefer to make descriptions using intuitive and perceptual characteristics like "repetitive," "directional," "structured," and so on. To make up for this gap, we investigated the perceptual dimensions of textures generated by a collection of procedural models. Two psychophysical experiments were conducted: free-grouping and rating. We applied Hierarchical Cluster Analysis (HCA) and Singular Value Decomposition (SVD) to discover the perceptual features used by the observers in grouping similar textures. The results suggested that existing dimensions in literature cannot accommodate random textures. We therefore utilized isometric feature mapping (Isomap) to establish a three-dimensional perceptual texture space which better explains the features used by humans in texture similarity judgment. Finally, we proposed computational models to map perceptual features to the perceptual texture space, which can suggest a procedural model to produce textures according to user-defined perceptual scales.

  16. Visual perception of procedural textures: identifying perceptual dimensions and predicting generation models.

    Directory of Open Access Journals (Sweden)

    Jun Liu

    Full Text Available Procedural models are widely used in computer graphics for generating realistic, natural-looking textures. However, these mathematical models are not perceptually meaningful, whereas the users, such as artists and designers, would prefer to make descriptions using intuitive and perceptual characteristics like "repetitive," "directional," "structured," and so on. To make up for this gap, we investigated the perceptual dimensions of textures generated by a collection of procedural models. Two psychophysical experiments were conducted: free-grouping and rating. We applied Hierarchical Cluster Analysis (HCA) and Singular Value Decomposition (SVD) to discover the perceptual features used by the observers in grouping similar textures. The results suggested that existing dimensions in literature cannot accommodate random textures. We therefore utilized isometric feature mapping (Isomap) to establish a three-dimensional perceptual texture space which better explains the features used by humans in texture similarity judgment. Finally, we proposed computational models to map perceptual features to the perceptual texture space, which can suggest a procedural model to produce textures according to user-defined perceptual scales.

  17. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  18. Interactive Procedural Modelling of Coherent Waterfall Scenes

    OpenAIRE

    Emilien , Arnaud; Poulin , Pierre; Cani , Marie-Paule; Vimont , Ulysse

    2015-01-01

    International audience; Combining procedural generation and user control is a fundamental challenge for the interactive design of natural scenery. This is particularly true for modelling complex waterfall scenes where, in addition to taking charge of geometric details, an ideal tool should also provide a user with the freedom to shape the running streams and falls, while automatically maintaining physical plausibility in terms of flow network, embedding into the terrain, and visual aspects of...

  19. MEGGASENSE - The Metagenome/Genome Annotated Sequence Natural Language Search Engine: A Platform for the Construction of Sequence Data Warehouses.

    Science.gov (United States)

    Gacesa, Ranko; Zucko, Jurica; Petursdottir, Solveig K; Gudmundsdottir, Elisabet Eik; Fridjonsson, Olafur H; Diminic, Janko; Long, Paul F; Cullum, John; Hranueli, Daslav; Hreggvidsson, Gudmundur O; Starcevic, Antonio

    2017-06-01

    The MEGGASENSE platform constructs relational databases of DNA or protein sequences. The default functional analysis uses 14,106 hidden Markov model (HMM) profiles based on sequences in the KEGG database. The Solr search engine allows sophisticated queries and a BLAST search function is also incorporated. These standard capabilities were used to generate the SCATT database from the predicted proteome of Streptomyces cattleya. The implementation of a specialised metagenome database (AMYLOMICS) for bioprospecting of carbohydrate-modifying enzymes is described. In addition to standard assembly of reads, a novel 'functional' assembly was developed, in which screening of reads with the HMM profiles occurs before the assembly. The AMYLOMICS database incorporates additional HMM profiles for carbohydrate-modifying enzymes, and it is illustrated how the combination of HMM and BLAST analyses helps identify interesting genes. A variety of different proteome and metagenome databases have been generated by MEGGASENSE.

  20. Optical character recognition of handwritten Arabic using hidden Markov models

    Science.gov (United States)

    Aulama, Mohannad M.; Natsheh, Asem M.; Abandah, Gheith A.; Olama, Mohammed M.

    2011-04-01

    The problem of optical character recognition (OCR) of handwritten Arabic has not yet received a satisfactory solution. In this paper, an Arabic OCR algorithm is developed based on Hidden Markov Models (HMMs) combined with the Viterbi algorithm, which results in improved and more robust recognition of characters at the sub-word level. Integrating the HMMs represents a further step along the OCR research directions currently pursued in the literature. The proposed approach exploits the structure of characters in the Arabic language, in addition to their extracted features, to achieve improved recognition rates. Useful statistical information about the Arabic language is first extracted and then used to estimate the probabilistic parameters of the mathematical HMM. A new custom implementation of the HMM is developed in this study, in which the transition matrix is built from the collected large corpus and the emission matrix is built from the results obtained via the extracted character features. The recognition process is triggered using the Viterbi algorithm, which selects the most probable sequence of sub-words. The model was implemented to recognize the sub-word units of Arabic text, tying the recognition rate to the overall structure of the Arabic language rather than to the worst recognition rate of any single character. Numerical results show a potentially large recognition improvement from using the proposed algorithms.
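    The corpus-derived transition matrix described above can be sketched in miniature: bigram counts with add-k (Laplace) smoothing, normalised into row-stochastic transition probabilities. The toy two-letter corpus and smoothing constant here are our own illustration, not the paper's Arabic data:

    ```python
    from collections import Counter

    def estimate_transitions(corpus, alphabet, k=1.0):
        # Bigram transition probabilities with add-k (Laplace) smoothing: the
        # kind of corpus statistics used to populate an HMM transition matrix.
        counts, totals = Counter(), Counter()
        for seq in corpus:
            for a, b in zip(seq, seq[1:]):
                counts[a, b] += 1
                totals[a] += 1
        V = len(alphabet)
        return {a: {b: (counts[a, b] + k) / (totals[a] + k * V) for b in alphabet}
                for a in alphabet}

    # Toy corpus over a two-letter alphabet (illustrative only).
    T = estimate_transitions(["abab", "aab"], "ab")
    ```

    Smoothing keeps every transition probability strictly positive, so the Viterbi search never rules out a path merely because a bigram was absent from the training corpus.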

  1. Belief Bisimulation for Hidden Markov Models Logical Characterisation and Decision Algorithm

    DEFF Research Database (Denmark)

    Jansen, David N.; Nielson, Flemming; Zhang, Lijun

    2012-01-01

    This paper establishes connections between logical equivalences and bisimulation relations for hidden Markov models (HMM). Both standard and belief state bisimulations are considered. We also present decision algorithms for the bisimilarities. For standard bisimilarity, an extension of the usual...... partition refinement algorithm is enough. Belief bisimilarity, being a relation on the continuous space of belief states, cannot be described directly. Instead, we show how to generate a linear equation system in time cubic in the number of states....

  2. A Procedure for Building Product Models in Intelligent Agent-based Operations Management

    DEFF Research Database (Denmark)

    Hvam, Lars; Riis, Jesper; Malis, Martin

    2003-01-01

    This article presents a procedure for building product models to support the specification processes dealing with sales, design of product variants and production preparation. The procedure includes, as the first phase, an analysis and redesign of the business processes that are to be supported b...

  3. Using genetic algorithm and TOPSIS for Xinanjiang model calibration with a single procedure

    Science.gov (United States)

    Cheng, Chun-Tian; Zhao, Ming-Yan; Chau, K. W.; Wu, Xin-Yu

    2006-01-01

    Genetic Algorithm (GA) is globally oriented in searching and thus useful in optimizing multiobjective problems, especially where the objective functions are ill-defined. Conceptual rainfall-runoff models that aim at predicting streamflow from the knowledge of precipitation over a catchment have become a basic tool for flood forecasting. The parameter calibration of a conceptual model usually involves multiple criteria for judging performance against observed data. However, it is often difficult to derive all objective functions for the parameter calibration problem of a conceptual model. Thus, a new method for the multiple-criteria parameter calibration problem, which combines GA with TOPSIS (technique for order performance by similarity to ideal solution) for the Xinanjiang model, is presented. This study is an immediate further development of the authors' previous research (Cheng, C.T., Ou, C.P., Chau, K.W., 2002. Combining a fuzzy optimal model with a genetic algorithm to solve multi-objective rainfall-runoff model calibration. Journal of Hydrology, 268, 72-86), whose main disadvantage was that it split the whole procedure into two parts, making it difficult to grasp the overall behaviour of the model during the calibration procedure. The current method integrates the two parts of the Xinanjiang rainfall-runoff model calibration, simplifying the procedures of model calibration and validation and revealing the intrinsic structure of the observed data more directly. Comparison of results with the two-step procedure shows that the current methodology gives similar results to the previous method and is also feasible and robust, but simpler and easier to apply in practice.
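In this kind of scheme, the GA proposes candidate parameter sets and TOPSIS ranks them by closeness to an ideal solution across the criteria. A minimal TOPSIS sketch; the criterion values, weights, and benefit/cost labels below are hypothetical, not the Xinanjiang calibration data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives: matrix[i][j] = score of alternative i on criterion j.
    benefit[j] is True if larger is better. Returns closeness scores in [0, 1]."""
    n_crit = len(weights)
    # Vector-normalise each criterion column, then apply weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to ideal solution
        d_neg = math.dist(row, anti)    # distance to anti-ideal solution
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical calibration candidates scored on two error criteria
# (smaller is better) and one goodness-of-fit criterion (larger is better).
matrix = [[0.12, 0.30, 0.85],
          [0.10, 0.35, 0.80],
          [0.20, 0.25, 0.90]]
scores = topsis(matrix, weights=[0.4, 0.3, 0.3], benefit=[False, False, True])
best = max(range(len(scores)), key=scores.__getitem__)
```

The alternative with the highest closeness score is the preferred compromise among the competing criteria.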

  4. A testing procedure for wind turbine generators based on the power grid statistical model

    DEFF Research Database (Denmark)

    Farajzadehbibalan, Saber; Ramezani, Mohammad Hossein; Nielsen, Peter

    2017-01-01

    In this study, a comprehensive test procedure is developed to test wind turbine generators with a hardware-in-loop setup. The procedure employs the statistical model of the power grid considering the restrictions of the test facility and system dynamics. Given the model in the latent space...

  5. Pairagon+N-SCAN_EST: a model-based gene annotation pipeline

    DEFF Research Database (Denmark)

    Arumugam, Manimozhiyan; Wei, Chaochun; Brown, Randall H

    2006-01-01

    This paper describes Pairagon+N-SCAN_EST, a gene annotation pipeline that uses only native alignments. For each expressed sequence it chooses the best genomic alignment. Systems like ENSEMBL and ExoGean rely on trans alignments, in which expressed sequences are aligned to the genomic loci...... with de novo gene prediction by using N-SCAN_EST. N-SCAN_EST is based on a generalized HMM probability model augmented with a phylogenetic conservation model and EST alignments. It can predict complete transcripts by extending or merging EST alignments, but it can also predict genes in regions without EST...

  6. Comparative Effectiveness of Echoic and Modeling Procedures in Language Instruction With Culturally Disadvantaged Children.

    Science.gov (United States)

    Stern, Carolyn; Keislar, Evan

    In an attempt to explore a systematic approach to language expansion and improved sentence structure, echoic and modeling procedures for language instruction were compared. Four hypotheses were formulated: (1) children who use modeling procedures will produce better structured sentences than children who use echoic prompting, (2) both echoic and…

  7. Hidden Markov modelling of movement data from fish

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver

    Movement data from marine animals tagged with electronic tags are becoming increasingly diverse and plentiful. This trend entails a need for statistical methods that are able to filter the observations to extract the ecologically relevant content. This dissertation focuses on the development...... the behaviour of the animal. With the extended model, migratory and resident movement behaviour can be related to geographical regions. For population inference, multiple individual state-space analyses can be interconnected using mixed effects modelling. This framework provides parameter estimates...... approximated. This furthermore enables accurate probability densities of location to be computed. Finally, the performance of the HMM approach in analysing nonlinear state space models is compared with two alternatives: the AD Model Builder framework and BUGS, which relies on Markov chain Monte Carlo...
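HMM filtering of movement data of this kind can be illustrated with a minimal forward filter that tracks the probability of a behavioural state from binned step lengths. The two states and all numbers below are illustrative assumptions, not the dissertation's actual model:

```python
# Minimal HMM forward filter: probability of a behavioural state
# ("resident" vs "migrating") given observed daily step lengths,
# binned as "short"/"long". All parameters are invented for illustration.

def forward_filter(obs, states, start, trans, emit):
    """Return the filtered state distributions P(state_t | obs_1..t)."""
    alpha = {s: start[s] * emit[s][obs[0]] for s in states}
    out = []
    for t, o in enumerate(obs):
        if t > 0:
            alpha = {s: emit[s][o] * sum(alpha[p] * trans[p][s] for p in states)
                     for s in states}
        z = sum(alpha.values())          # normalise to avoid underflow
        alpha = {s: a / z for s, a in alpha.items()}
        out.append(dict(alpha))
    return out

states = ("resident", "migrating")
start = {"resident": 0.5, "migrating": 0.5}
trans = {"resident": {"resident": 0.9, "migrating": 0.1},
         "migrating": {"resident": 0.1, "migrating": 0.9}}
emit = {"resident": {"short": 0.8, "long": 0.2},
        "migrating": {"short": 0.3, "long": 0.7}}

filt = forward_filter(["short", "short", "long", "long"], states, start, trans, emit)
```

After two short steps the filter favours "resident"; two long steps then shift the probability mass towards "migrating", showing how sticky transition probabilities smooth over isolated observations.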

  8. Evaluating procedural modelling for 3D models of informal settlements in urban design activities

    Directory of Open Access Journals (Sweden)

    Victoria Rautenbach

    2015-11-01

    Full Text Available Three-dimensional (3D modelling and visualisation is one of the fastest growing application fields in geographic information science. 3D city models are being researched extensively for a variety of purposes and in various domains, including urban design, disaster management, education and computer gaming. These models typically depict urban business districts (downtown or suburban residential areas. Despite informal settlements being a prevailing feature of many cities in developing countries, 3D models of informal settlements are virtually non-existent. 3D models of informal settlements could be useful in various ways, e.g. to gather information about the current environment in the informal settlements, to design upgrades, to communicate these and to educate inhabitants about environmental challenges. In this article, we described the development of a 3D model of the Slovo Park informal settlement in the City of Johannesburg Metropolitan Municipality, South Africa. Instead of using time-consuming traditional manual methods, we followed the procedural modelling technique. Visualisation characteristics of 3D models of informal settlements were described and the importance of each characteristic in urban design activities for informal settlement upgrades was assessed. Next, the visualisation characteristics of the Slovo Park model were evaluated. The results of the evaluation showed that the 3D model produced by the procedural modelling technique is suitable for urban design activities in informal settlements. The visualisation characteristics and their assessment are also useful as guidelines for developing 3D models of informal settlements. In future, we plan to empirically test the use of such 3D models in urban design projects in informal settlements.

  9. Spatial Region Estimation for Autonomous CoT Clustering Using Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Joon‐young Jung

    2018-02-01

    Full Text Available This paper proposes a hierarchical dual filtering (HDF algorithm to estimate the spatial region between a Cloud of Things (CoT gateway and an Internet of Things (IoT device. The accuracy of the spatial region estimation is important for autonomous CoT clustering. We conduct spatial region estimation using a hidden Markov model (HMM with a raw Bluetooth received signal strength indicator (RSSI. However, the accuracy of the region estimation using the validation data is only 53.8%. To increase the accuracy of the spatial region estimation, the HDF algorithm removes the high‐frequency signals hierarchically, and alters the parameters according to whether the IoT device moves. The accuracy of spatial region estimation using a raw RSSI, Kalman filter, and HDF are compared to evaluate the effectiveness of the HDF algorithm. The success rate and root mean square error (RMSE of all regions are 0.538, 0.622, and 0.75, and 0.997, 0.812, and 0.5 when raw RSSI, a Kalman filter, and HDF are used, respectively. The HDF algorithm attains the best results in terms of the success rate and RMSE of spatial region estimation using HMM.
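The record compares raw RSSI, a Kalman filter, and HDF as inputs to the HMM-based region estimator. A scalar Kalman filter of the kind used as the baseline can be sketched as follows; the process/measurement noise variances and the RSSI trace are hypothetical:

```python
def kalman_1d(measurements, q=0.01, r=4.0, x0=None, p0=1.0):
    """Scalar Kalman filter for a slowly varying RSSI level.
    q: process-noise variance, r: measurement-noise variance (assumed values)."""
    x = measurements[0] if x0 is None else x0
    p = p0
    smoothed = []
    for z in measurements:
        p += q                      # predict: level assumed locally constant
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with measurement z
        p *= (1 - k)
        smoothed.append(x)
    return smoothed

rssi = [-60, -63, -58, -75, -61, -62, -59, -60]   # hypothetical dBm readings
out = kalman_1d(rssi)
```

The outlier at -75 dBm is strongly damped in the smoothed trace, which is why filtered RSSI improves region estimation over the raw signal.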

  10. A PROCEDURAL SOLUTION TO MODEL ROMAN MASONRY STRUCTURES

    Directory of Open Access Journals (Sweden)

    V. Cappellini

    2013-07-01

    Full Text Available The paper will describe a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly we have focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we have chosen an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac – PAM, developed by IGN (Paris. We have employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France. Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with an open source programming language called Processing, useful for visual, animated or static, 2D or 3D, interactive creations. Using this computer language, a Java environment has been developed. Therefore, even if the procedural modelling reveals an accuracy level inferior to the one obtained by manual modelling (brick by brick, this method can be useful when taking into account the static evaluation on buildings (requiring quantitative aspects and metric measures for restoration purposes.

  12. Applying the Conceptual Understanding Procedures (CUPs) Learning Model to Address Students' Mathematical Misconceptions

    Directory of Open Access Journals (Sweden)

    Asri Gita

    2018-01-01

    Full Text Available Mistakes in understanding a concept are one of the factors that lead to misconceptions in mathematics. Misconceptions about plane shapes are caused by students learning only to memorise the basic forms without understanding the relationships between the shapes and their properties. The effort made to overcome these misconceptions is to apply constructivist learning, and one constructivist learning model is Conceptual Understanding Procedures (CUPs). The purpose of this study was to investigate the application of the CUPs learning model as a way of addressing students' mathematical misconceptions about the properties of quadrilaterals. The subjects were 12 junior secondary school students who held misconceptions about the properties of quadrilaterals. Data were collected through tests, video, observation, and interviews; validity and reliability were established through credibility, dependability, transferability, and confirmability. The results show that applying the CUPs model, which consists of an individual phase, a triplet-group phase, and a whole-class interpretation phase, can resolve students' misconceptions about the properties of quadrilaterals. The change in misconceptions is also visible in the improvement between students' pre-test and post-test scores. Keywords: Conceptual Understanding Procedures (CUPs), misconceptions, quadrilaterals.

  13. Bootstrap procedure in the quasinuclear quark model

    International Nuclear Information System (INIS)

    Anisovich, V.V.; Gerasyuta, S.M.; Keltuyala, I.V.

    1983-01-01

    The scattering amplitude for quarks (dressed quarks of a single flavour and three colours) is obtained by means of a bootstrap procedure with the introduction of an initial point-wise interaction due to heavy gluon exchange. The resulting quasi-nuclear model (effective short-range interaction in the S-wave states) has reasonable properties: there exist colourless meson states J^P = 0^-, 1^-; there are no bound states in coloured channels; and a virtual diquark level J^P = 1^+ appears in the coloured state anti-3_c.

  14. Calibration procedure for a potato crop growth model using information from across Europe

    DEFF Research Database (Denmark)

    Heidmann, Tove; Tofteng, Charlotte; Abrahamsen, Per

    2008-01-01

    In the FertOrgaNic EU project, 3 years of field experiments with drip irrigation and fertigation were carried out at six different sites across Europe, involving seven different varieties of potato. The Daisy model, which simulates plant growth together with water and nitrogen dynamics, was used...... for adaptation of the Daisy model to new potato varieties or for the improvement of the existing parameter set. The procedure is then, as a starting point, to focus the calibration process on the recommended list of parameters to change. We demonstrate this approach by showing the procedure for recalibrating...... three varieties using all relevant data from the sites. We believe these new parameterisations to be more robust, because they were indirectly based on information from the six different sites. We claim that this procedure combines both local and specific modeller expertise in a way that results in more......

  15. Uncovering the cognitive processes underlying mental rotation: an eye-movement study.

    Science.gov (United States)

    Xue, Jiguo; Li, Chunyong; Quan, Cheng; Lu, Yiming; Yue, Jingwei; Zhang, Chenggang

    2017-08-30

    Mental rotation is an important paradigm for spatial ability. Mental-rotation tasks are assumed to involve five or three sequential cognitive-processing states, though this has not been demonstrated experimentally. Here, we investigated how processing states alternate during mental-rotation tasks. Inference was carried out using an advanced statistical modelling and data-driven approach, a discriminative hidden Markov model (dHMM), trained using eye-movement data obtained from an experiment consisting of two different strategies: (I) mentally rotate the right-side figure to be aligned with the left-side figure and (II) mentally rotate the left-side figure to be aligned with the right-side figure. Eye movements were found to contain the necessary information for determining the processing strategy, and the dHMM that best fit our data segmented the mental-rotation process into three hidden states, which we termed encoding and searching, comparison, and searching on a one-side pair. Additionally, we applied three classification methods (logistic regression, support vector machine, and dHMM), of which the dHMM predicted the strategies with the highest accuracy (76.8%). Our study confirmed that there are differences in processing states between these two mental-rotation strategies, consistent with the previous suggestion that mental rotation is a discrete process accomplished in a piecemeal fashion.

  16. Semantic Modeling of Administrative Procedures from a Spanish Regional Public Administration

    Directory of Open Access Journals (Sweden)

    Francisco José Hidalgo López

    2018-02-01

    Full Text Available Over the past few years, Public Administrations have been providing systems for the electronic processing of procedures and files to ensure compliance with regulations and provide public services to citizens. Although each administration provides similar services to its citizens, these systems usually differ from the internal information management point of view, since they usually come from different products and manufacturers. The common framework that regulations demand, and that Public Administrations must respect when processing electronic files, provides a unique opportunity for the development of intelligent agents in the field of administrative processes. However, for this development to be truly effective and applicable to the public sector, it is necessary to have a common representation model for these administrative processes. Although a lot of work has already been done in the development of public information reuse initiatives and the standardization of common vocabularies, this has not been carried out at the process level. In this paper, we propose a semantic representation model for Public Administrations covering both process models and the processes themselves: the administrative procedures and files. The goal is to improve public administration open data initiatives and help to develop their sustainability policies, such as improving decision-making procedures and the sustainability of administrative management. As a case study, we modelled public administrative processes and files in collaboration with a Regional Public Administration in Spain, the Principality of Asturias, which enabled access to its information systems and helped the evaluation of our approach.

  17. Computational Modelling in Development of a Design Procedure for Concrete Road

    Directory of Open Access Journals (Sweden)

    B. Novotný

    2000-01-01

    Full Text Available Computational modelling plays a decisive part in the development of a new design procedure for concrete pavement by quantifying the impacts of individual design factors. In the present paper, the emphasis is placed on modelling the structural response of the jointed concrete pavement as a system of interacting rectangular slabs transferring wheel loads into an elastic layered subgrade. The finite element plate analysis is combined with the assumption of a linear contact stress variation over triangular elements of the contact region division. Linking forces are introduced to model the load transfer across the joints. The unknown contact stress nodal intensities as well as the unknown linking forces are determined in an iterative way to fulfil slab/foundation and slab/slab contact conditions. Temperature effects are also considered, and space is reserved for modelling inelastic and additional environmental effects. It is pointed out that pavement design should be based on full data of pavement stressing, in contrast to procedures accounting only for the axle-load-induced stresses.

  18. The use of flow models for design of plant operating procedures

    International Nuclear Information System (INIS)

    Lind, M.

    1982-03-01

    The report describes a systematic approach to the design of operating procedures or sequence automatics for process plant control. It is shown how flow models representing the topology of mass and energy flows on different levels of function provide plant information which is important for the considered design problem. The modelling methodology leads to the definition of three categories of control tasks. Two tasks relate to the regulation and control of changes of levels and flows of mass and energy in a system within a defined mode of operation. The third relates to the control actions necessary for switching operations involved in changes of operating mode. These control tasks are identified for a given plant as part of the flow modelling activity. It is discussed how the flow model deals with the problem of assigning control task precedence in time, e.g. during start-up or shut-down operations. The method may be a basis for providing automated procedure support to the operator in unforeseen situations or may be a tool for control design. (auth.)

  19. Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori

    Science.gov (United States)

    Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2017-02-01

    Over the last decades, 3D modelling has been a fast growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for the modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method, based on an algorithmic modelling concept. It is utilized for the generation of accurate 3D models and composite facade textures from sets of rules which are called Computer Generated Architecture grammars (CGA grammars), defining the objects' detailed geometry, rather than altering or editing the model manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models derived from the application of shape grammars on selected footprints, and the process resulted in a final 3D model optimally describing the built environment of Central Zagori in three Levels of Detail (LoDs). The final 3D scene was exported and published as a 3D web-scene which can be viewed with the CityEngine 3D viewer, giving a walkthrough of the whole model, as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoDs for 3D objects and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks that derive from procedural modelling techniques in the field of cultural heritage, and more specifically the 3D modelling of traditional settlements.
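The core idea of CGA-style shape grammars is repeated rule-driven rewriting of named shapes into sub-shapes until only terminal geometry remains. A language-agnostic sketch of such a derivation; the rule set below is invented for illustration and carries no geometry:

```python
# Toy shape-grammar derivation in the spirit of CGA grammars: each rule
# rewrites a named shape into a sequence of sub-shapes; shapes without
# a rule are terminals (geometry leaves). Rules are hypothetical.

RULES = {
    "Building": ["GroundFloor", "UpperFloors", "Roof"],
    "UpperFloors": ["Floor", "Floor", "Floor"],
    "Floor": ["Wall", "Window", "Wall"],
}

def derive(shape):
    """Expand a start shape into its terminal shapes, depth-first."""
    if shape not in RULES:
        return [shape]                 # terminal: no rule applies
    out = []
    for sub in RULES[shape]:
        out.extend(derive(sub))
    return out

terminals = derive("Building")
```

A production engine additionally attaches split dimensions, transformations, and textures to each rule, which is where the multiple LoDs come from.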

  20. Estimation methods for nonlinear state-space models in ecology

    DEFF Research Database (Denmark)

    Pedersen, Martin Wæver; Berg, Casper Willestofte; Thygesen, Uffe Høgsbro

    2011-01-01

    The use of nonlinear state-space models for analyzing ecological systems is increasing. A wide range of estimation methods for such models are available to ecologists; however, it is not always clear which method is the appropriate choice. To this end, three approaches to estimation in the theta...... logistic model for population dynamics were benchmarked by Wang (2007). Similarly, we examine and compare the estimation performance of three alternative methods using simulated data. The first approach is to partition the state-space into a finite number of states and formulate the problem as a hidden...... Markov model (HMM). The second method uses the mixed effects modeling and fast numerical integration framework of the AD Model Builder (ADMB) open-source software. The third alternative is to use the popular Bayesian framework of BUGS. The study showed that state and parameter estimation performance
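The first approach, partitioning the state-space into a finite number of states, can be sketched for a one-dimensional theta-logistic population model: discretise log-abundance onto a grid, build transitions from the growth dynamics plus process noise, and run the HMM filter. The grid, parameter values, noise levels, and observations below are assumptions for illustration, not Wang's benchmark setup:

```python
import math

def gauss(x, mu, sd):
    # Unnormalised Gaussian kernel; normalisation happens in the filter.
    return math.exp(-0.5 * ((x - mu) / sd) ** 2)

# Hidden states: log-abundance discretised on a regular grid.
grid = [i * 0.05 for i in range(121)]             # log N in [0, 6]

def step_mean(x, r=0.5, K=60.0, theta=1.0):
    # Theta-logistic growth written on the log scale (assumed form).
    return x + r * (1.0 - (math.exp(x) / K) ** theta)

def hmm_filter(obs, sd_proc=0.15, sd_obs=0.2):
    """Grid-based HMM filter: returns the filtered mean log-abundance per step."""
    p = [1.0 / len(grid)] * len(grid)             # flat prior over states
    means = []
    for y in obs:
        # Predict: push probability mass through the transition kernel.
        pred = [0.0] * len(grid)
        for i, xi in enumerate(grid):
            mu = step_mean(xi)
            for j, xj in enumerate(grid):
                pred[j] += p[i] * gauss(xj, mu, sd_proc)
        # Update: weight by the likelihood of the noisy observation y.
        post = [pr * gauss(y, xj, sd_obs) for pr, xj in zip(pred, grid)]
        z = sum(post)
        p = [v / z for v in post]
        means.append(sum(xi * pi for xi, pi in zip(grid, p)))
    return means

means = hmm_filter([2.0, 2.4, 2.9, 3.2])          # observed log-abundances
```

The appeal of this discretisation is that filtering, smoothing, and the likelihood all become exact (up to grid resolution) matrix operations, with no Monte Carlo error.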

  1. Enhanced spin Hall effect of tunneling light in hyperbolic metamaterial waveguide.

    Science.gov (United States)

    Tang, Tingting; Li, Chaoyang; Luo, Li

    2016-08-01

    Giant enhancement of spin Hall effect of tunneling light (SHETL) is theoretically proposed in a frustrated total internal reflection (FTIR) structure with hyperbolic metamaterial (HMM). We calculate the transverse shift of right-circularly polarized light in a SiO2-air-HMM-air-SiO2 waveguide and analyze the physical mechanism of the enhanced SHETL. The HMM anisotropy can greatly increase the transverse shift of polarized light even though HMM loss might reduce it. Compared with transverse shift of transmitted light through a single HMM slab with ZnAlO/ZnO multilayer, the maximum transverse shift of tunneling light through a FTIR structure with identical HMM can be significantly enlarged by more than three times which reaches -38 μm without any amplification method.

  2. Technical Note: Procedure for the calibration and validation of kilo-voltage cone-beam CT models

    Energy Technology Data Exchange (ETDEWEB)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Rit, Simon, E-mail: simon.rit@creatis.insa-lyon.fr [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08 (France); Brousmiche, Sébastien [Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Romero, Edward; Vila Oliva, Marc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1206, INSA-Lyon, Université Lyon 1, Centre Léon Bérard, Lyon 69373 Cedex 08, France and Ion Beam Application, Louvain-la-Neuve 1348 (Belgium); Kellner, Daniel; Deutschmann, Heinz; Keuschnigg, Peter; Steininger, Philipp [Institute for Research and Development on Advanced Radiation Technologies, Paracelsus Medical University, Salzburg 5020 (Austria)

    2016-09-15

    Purpose: The aim of this work is to propose a general and simple procedure for the calibration and validation of kilo-voltage cone-beam CT (kV CBCT) models against experimental data. Methods: The calibration and validation of the CT model is a two-step procedure: first the source model, then the detector model. The source is described by the direction-dependent photon energy spectrum at each voltage, while the detector is described by the pixel intensity value as a function of the direction and the energy of incident photons. The measurements for the source consist of a series of dose measurements in air performed at each voltage with varying filter thicknesses and materials in front of the x-ray tube. The measurements for the detector are acquisitions of projection images using the same filters and several tube voltages. The proposed procedure has been applied to calibrate and assess the accuracy of simple models of the source and the detector of three commercial kV CBCT units. If the CBCT system models had been calibrated differently, the current procedure would have been exclusively used to validate the models. Several high-purity attenuation filters of aluminum, copper, and silver combined with a dosimeter which is sensitive to the range of voltages of interest were used. A sensitivity analysis of the model has also been conducted for each parameter of the source and the detector models. Results: Average deviations between experimental and theoretical dose values are below 1.5% after calibration for the three x-ray sources. The predicted energy deposited in the detector agrees with experimental data within 4% for all imaging systems. Conclusions: The authors developed and applied an experimental procedure to calibrate and validate any model of the source and the detector of a CBCT unit. The present protocol has been successfully applied to three x-ray imaging systems. The minimum requirements in terms of material and equipment would make its implementation suitable in

  3. GENERATION OF MULTI-LOD 3D CITY MODELS IN CITYGML WITH THE PROCEDURAL MODELLING ENGINE RANDOM3DCITY

    Directory of Open Access Journals (Sweden)

    F. Biljecki

    2016-09-01

    Full Text Available The production and dissemination of semantic 3D city models is rapidly increasing benefiting a growing number of use cases. However, their availability in multiple LODs and in the CityGML format is still problematic in practice. This hinders applications and experiments where multi-LOD datasets are required as input, for instance, to determine the performance of different LODs in a spatial analysis. An alternative approach to obtain 3D city models is to generate them with procedural modelling, which is – as we discuss in this paper – well suited as a method to source multi-LOD datasets useful for a number of applications. However, procedural modelling has not yet been employed for this purpose. Therefore, we have developed RANDOM3DCITY, an experimental procedural modelling engine for generating synthetic datasets of buildings and other urban features. The engine is designed to produce models in CityGML and does so in multiple LODs. Besides the generation of multiple geometric LODs, we implement the realisation of multiple levels of spatiosemantic coherence, geometric reference variants, and indoor representations. As a result of their permutations, each building can be generated in 392 different CityGML representations, an unprecedented number of modelling variants of the same feature. The datasets produced by RANDOM3DCITY are suited for several applications, as we show in this paper with documented uses. The developed engine is available under an open-source licence at Github at http://github.com/tudelft3d/Random3Dcity.

  4. A Study on Efficient Robust Speech Recognition with Stochastic Dynamic Time Warping

    OpenAIRE

    孫, 喜浩

    2014-01-01

    In recent years, great progress has been made in automatic speech recognition (ASR) systems. The hidden Markov model (HMM) and dynamic time warping (DTW) are the two main algorithms that have been widely applied to ASR systems. Although the HMM technique achieves high recognition accuracy in both clean and noisy speech environments, it requires a large word set and is more complex to implement. Thus, more and more researchers have focused on DTW-based ASR systems. Dynamic time warpin...
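
    The DTW step referenced above aligns two sequences by minimising the cumulative local distance along a warping path. As an illustration of the classic algorithm (a generic sketch, not code from the thesis), in pure Python:

```python
def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = minimal cost of warping a[:i] onto b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessors
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# a time-shifted copy of a template still aligns at zero cost
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # → 0.0
```

    Because a template is matched by alignment rather than by a trained statistical model, DTW needs no large training set, which is the trade-off against HMMs noted in the abstract.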

  5. Spiral model of procedural cycle of educational process management

    Directory of Open Access Journals (Sweden)

    Bezrukov Valery I.

    2016-01-01

    Full Text Available The article analyzes the nature and characteristics of the spiral model of the procedural cycle of educational process management. The authors identify patterns linking the development of information and communication technologies with the transformation of the education management process, characterize the concepts of “information literacy” and “media education”, consider the design function, and assess its potential for changing the traditional educational paradigm to a new, information-based one.

  6. A Comparison of Exposure Control Procedures in CAT Systems Based on Different Measurement Models for Testlets

    Science.gov (United States)

    Boyd, Aimee M.; Dodd, Barbara; Fitzpatrick, Steven

    2013-01-01

    This study compared several exposure control procedures for CAT systems based on the three-parameter logistic testlet response theory model (Wang, Bradlow, & Wainer, 2002) and Masters' (1982) partial credit model when applied to a pool consisting entirely of testlets. The exposure control procedures studied were the modified within 0.10 logits…

  7. On a computational method for modelling complex ecosystems by superposition procedure

    International Nuclear Information System (INIS)

    He Shanyu.

    1986-12-01

    In this paper, the Superposition Procedure is concisely described, and a computational method for modelling a complex ecosystem is proposed. With this method, the information contained in acceptable submodels and observed data can be utilized to the maximal degree. (author). 1 ref

  8. Nonlocal optical effects on the Goos–Hänchen shifts at multilayered hyperbolic metamaterials

    International Nuclear Information System (INIS)

    Chen, Chih-Wei; Bian, Tingting; Chiang, Hai-Pang; Leung, P T

    2016-01-01

    The lateral beam shift of light incident on a multilayered hyperbolic metamaterial (HMM) is investigated using a theoretical model which emphasizes the nonlocal optical response of the indefinite material. By applying an effective local response theory formulated recently in the literature, it is found that nonlocal effects only affect p polarized light in this Goos–Hänchen (GH) shift of the incident beam; leading to a blue-shifted peak for positive shifts at high frequencies and red-shifted dip for negative shifts at low frequencies in the GH shift spectrum. An account for the observed phenomenon is given by referring to the ‘Brewster condition’ for the reflected wave from the HMM. This observation thus provides a relatively direct probe for the nonlocal response of the HMM. (paper)

  9. Smart Sensor-Based Motion Detection System for Hand Movement Training in Open Surgery.

    Science.gov (United States)

    Sun, Xinyao; Byrns, Simon; Cheng, Irene; Zheng, Bin; Basu, Anup

    2017-02-01

    We introduce a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance based on a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot tying task experiment, we can conclude that finger motions generated from users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements and task completion time. In order to validate the discriminatory ability of HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100 % accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
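
    The two-model HMM comparison described above can be sketched with hand-picked toy parameters (all numbers invented for illustration; a real system would train the models, e.g. with Baum-Welch, on motion features): each candidate HMM scores the observation sequence via the forward algorithm, and the higher log-likelihood wins.

```python
import math

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-output HMM."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    scale = sum(alpha)
    loglik = math.log(scale)
    alpha = [a / scale for a in alpha]
    for o in obs[1:]:
        # propagate through transitions, then weight by the emission of o
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(A))) * B[j][o]
                 for j in range(len(A))]
        scale = sum(alpha)
        loglik += math.log(scale)
        alpha = [a / scale for a in alpha]
    return loglik

# Hypothetical models: symbol 0 = small smooth movement, 1 = large jerky one.
EXPERT = dict(pi=[0.8, 0.2], A=[[0.9, 0.1], [0.2, 0.8]], B=[[0.9, 0.1], [0.6, 0.4]])
NOVICE = dict(pi=[0.5, 0.5], A=[[0.6, 0.4], [0.4, 0.6]], B=[[0.5, 0.5], [0.2, 0.8]])

def classify(obs):
    """Maximum-likelihood label over the two candidate HMMs."""
    return ("expert" if forward_loglik(obs, **EXPERT) > forward_loglik(obs, **NOVICE)
            else "novice")
```

    A mostly smooth sequence such as [0, 0, 0, 0, 1, 0, 0, 0] scores higher under the expert model, while a jerky one such as [1, 1, 0, 1, 1, 1] falls to the novice model.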

  10. Price adjustment for traditional Chinese medicine procedures: Based on a standardized value parity model.

    Science.gov (United States)

    Wang, Haiyin; Jin, Chunlin; Jiang, Qingwu

    2017-11-20

    Traditional Chinese medicine (TCM) is an important part of China's medical system. Due to the prolonged low price of TCM procedures and the lack of an effective mechanism for dynamic price adjustment, the development of TCM has markedly lagged behind Western medicine. The World Health Organization (WHO) has emphasized the need to enhance the development of alternative and traditional medicine when creating national health care systems. The establishment of scientific and appropriate mechanisms to adjust the price of medical procedures in TCM is crucial to promoting the development of TCM. This study examined incorporating value indicators and data on basic manpower expended, time spent, technical difficulty, and degree of risk into the latest standards for the price of medical procedures in China, and it also offers a price adjustment model with the relative price ratio as a key index. This study examined 144 TCM procedures and found that prices of TCM procedures were mainly based on the value of medical care provided; on average, medical care provided accounted for 89% of the price. Current price levels were generally low, and the current price accounted for 56% of the standardized value of a procedure, on average. Current price levels accounted for a markedly lower share of the standardized value of acupuncture, moxibustion, special treatment with TCM, and comprehensive TCM procedures. This study selected a total of 79 procedures and adjusted them by priority. The relationship between the price of TCM procedures and the suggested price was significantly optimized (p < 0.05). Price adjustment based on a standardized value parity model is thus a scientific and suitable method that can serve as a reference for other provinces and municipalities in China and other countries and regions that mainly have fee-for-service (FFS) medical care.

  11. Recreation of architectural structures using procedural modeling based on volumes

    Directory of Open Access Journals (Sweden)

    Santiago Barroso Juan

    2013-11-01

    Full Text Available While the procedural modeling of buildings and other architectural structures has evolved very significantly in recent years, there is a noticeable absence of high-level tools that allow a designer, an artist, or a historian to recreate important buildings or architectonic structures of a particular city. In this paper we present a tool for creating buildings in a simple and clear way, following rules that use the language and methodology of the buildings' own construction, and hiding from the user the algorithmic details of the creation of the model.

  12. Predictive market segmentation model: An application of logistic regression model and CHAID procedure

    Directory of Open Access Journals (Sweden)

    Soldić-Aleksić Jasna

    2009-01-01

    Full Text Available Market segmentation is one of the key concepts of modern marketing. Its main goal is to create groups (segments) of customers that have similar characteristics, needs, wishes and/or similar behavior regarding the purchase of a concrete product/service. Companies can create a specific marketing plan for each of these segments and thereby gain a short- or long-term competitive advantage on the market. Depending on the concrete marketing goal, different segmentation schemes and techniques may be applied. This paper presents a predictive market segmentation model based on the application of a logistic regression model and CHAID analysis. The logistic regression model was used to select the variables (from the initial pool of eleven) that are statistically significant for explaining the dependent variable. The selected variables were afterwards included in the CHAID procedure, which generated the predictive market segmentation model. The model results are presented on a concrete empirical example in the following form: summary model results, CHAID tree, gain chart, index chart, and risk and classification tables.
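
    The logistic-regression stage of such a model can be sketched in a few lines of pure Python (the data below are invented toy values, e.g. scaled age and visit frequency, not the paper's variables; a statistics package would also report the significance tests used for variable selection):

```python
import math

def fit_logistic(X, y, lr=1.0, epochs=5000):
    """Logistic regression fitted by batch gradient descent on the log-loss."""
    w = [0.0] * (len(X[0]) + 1)              # w[0] is the intercept
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted purchase probability
            err = p - yi                     # gradient of log-loss w.r.t. z
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# toy segments: [age / 100, store visits / 10] -> bought (1) or not (0)
X = [[0.2, 0.1], [0.3, 0.1], [0.2, 0.0], [0.5, 0.6],
     [0.6, 0.5], [0.4, 0.7], [0.3, 0.0], [0.5, 0.5]]
y = [0, 0, 0, 1, 1, 1, 0, 1]
w = fit_logistic(X, y)
```

    Variables whose coefficients turn out not to be statistically significant would be dropped at this point, and the survivors passed on to the CHAID tree-building procedure.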

  13. Behavioural Procedural Models – a multipurpose mechanistic account

    Directory of Open Access Journals (Sweden)

    Leonardo Ivarola

    2012-05-01

    Full Text Available In this paper we outline an epistemological defence of what we call Behavioural Procedural Models (BPMs), which represent the processes of individual decisions that lead to relevant economic patterns as psychologically (rather than rationally) driven. Their general structure is shown, along with the way in which they may be incorporated into a multipurpose view of models, where the representational and interventionist goals are combined. It is argued that BPMs may provide “mechanistic-based explanations” in the sense defended by Hedström and Ylikoski (2010), which involve invariant regularities in Woodward’s sense. Such mechanisms provide a causal sort of explanation of anomalous economic patterns, which allows for extra-market intervention and manipulability in order to correct and improve some key individual decisions. This capability sets the basis for the so-called libertarian paternalism (Sunstein and Thaler 2003).

  14. Evaluation of alternative surface runoff accounting procedures using the SWAT model

    Science.gov (United States)

    For surface runoff estimation in the Soil and Water Assessment Tool (SWAT) model, the curve number (CN) procedure is commonly adopted to calculate surface runoff by utilizing antecedent soil moisture condition (SCSI) in field. In the recent version of SWAT (SWAT2005), an alternative approach is ava...
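
    The curve number relation itself is compact; a sketch of the standard SCS-CN storm-runoff equation (the textbook form, not SWAT's full soil-moisture accounting):

```python
def scs_runoff(p_in, cn, lam=0.2):
    """SCS curve-number storm runoff Q (inches) for rainfall p_in (inches).

    S  = 1000/CN - 10                potential maximum retention
    Ia = lam * S                     initial abstraction (classically 0.2*S)
    Q  = (P - Ia)^2 / (P - Ia + S)   for P > Ia, otherwise 0
    """
    s = 1000.0 / cn - 10.0
    ia = lam * s
    if p_in <= ia:
        return 0.0   # the whole storm is absorbed before runoff begins
    return (p_in - ia) ** 2 / (p_in - ia + s)

# e.g. a 4-inch storm on CN = 80 ground yields about 2.04 inches of runoff
```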

  15. Qualitative mechanism models and the rationalization of procedures

    Science.gov (United States)

    Farley, Arthur M.

    1989-01-01

    A qualitative, cluster-based approach to the representation of hydraulic systems is described and its potential for generating and explaining procedures is demonstrated. Many ideas are formalized and implemented as part of an interactive, computer-based system. The system allows for designing, displaying, and reasoning about hydraulic systems. The interactive system has an interface consisting of three windows: a design/control window, a cluster window, and a diagnosis/plan window. A qualitative mechanism model for the ORS (Orbital Refueling System) is presented to coordinate with ongoing research on this system being conducted at NASA Ames Research Center.

  16. A Numerical Procedure for Model Identifiability Analysis Applied to Enzyme Kinetics

    DEFF Research Database (Denmark)

    Daele, Timothy, Van; Van Hoey, Stijn; Gernaey, Krist

    2015-01-01

    The proper calibration of models describing enzyme kinetics can be quite challenging. In the literature, different procedures are available to calibrate these enzymatic models in an efficient way. However, in most cases the model structure is already decided on prior to the actual calibration...... and Pronzato (1997) and which can be easily set up for any type of model. In this paper the proposed approach is applied to the forward reaction rate of the enzyme kinetics proposed by Shin and Kim (1998). Structural identifiability analysis showed that no local structural model problems were occurring......) identifiability problems. By using the presented approach it is possible to detect potential identifiability problems and avoid pointless calibration (and experimental!) effort....

  17. Rule-Governed Imitative Verbal Behavior as a Function of Modeling Procedures

    Science.gov (United States)

    Clinton, LeRoy; Boyce, Kathleen D.

    1975-01-01

    Investigated the effectiveness of modeling procedures alone and complemented by the appropriate rule statement on the production of plurals. Subjects were 20 normal and 20 retarded children who were randomly assigned to one of two learning conditions and who received either affective or informative social reinforcement. (Author/SDH)

  18. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes in Hong Kong are presented.

  19. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  20. A P-value model for theoretical power analysis and its applications in multiple testing procedures

    Directory of Open Access Journals (Sweden)

    Fengqing Zhang

    2016-10-01

    Full Text Available Abstract Background Power analysis is a critical aspect of the design of experiments to detect an effect of a given size. When multiple hypotheses are tested simultaneously, multiplicity adjustments to p-values should be taken into account in power analysis. There are a limited number of studies on power analysis in multiple testing procedures. For some methods, the theoretical analysis is difficult and extensive numerical simulations are often needed, while other methods oversimplify the information under the alternative hypothesis. To this end, this paper aims to develop a new statistical model for power analysis in multiple testing procedures. Methods We propose a step-function-based p-value model under the alternative hypothesis, which is simple enough to perform power analysis without simulations, but not too simple to lose the information from the alternative hypothesis. The first step is to transform distributions of different test statistics (e.g., t, chi-square or F) to distributions of corresponding p-values. We then use a step function to approximate each of the p-value distributions by matching the mean and variance. Lastly, the step-function-based p-value model can be used for theoretical power analysis. Results The proposed model is applied to problems in multiple testing procedures. We first show how the most powerful critical constants can be chosen using the step-function-based p-value model. Our model is then applied to the field of multiple testing procedures to explain the assumption of monotonicity of the critical constants. Lastly, we apply our model to a behavioral weight loss and maintenance study to select the optimal critical constants. Conclusions The proposed model is easy to implement and preserves the information from the alternative hypothesis.
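
    The step-function model itself is not reproduced here, but the kind of quantity it targets can be illustrated with the familiar normal-approximation power of a two-sided z-test under a Bonferroni multiplicity adjustment (all numbers generic, not from the study):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(q, lo=-10.0, hi=10.0):
    """Inverse standard normal CDF by bisection (adequate for power tables)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def power_two_sided_z(effect, n, alpha=0.05, m_tests=1):
    """Power of a two-sided one-sample z-test at effect size `effect`
    (mean shift / sd) with n observations, Bonferroni-corrected over m tests."""
    z = norm_ppf(1.0 - (alpha / m_tests) / 2.0)  # adjusted critical value
    shift = effect * math.sqrt(n)                # noncentrality of the statistic
    return (1.0 - norm_cdf(z - shift)) + norm_cdf(-z - shift)
```

    Splitting alpha across many tests lowers per-test power (for effect 0.5 and n = 32, roughly 0.81 for a single test versus roughly 0.51 across ten tests), which is why accurate models of the p-value distribution under the alternative matter.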

  1. AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.

    Science.gov (United States)

    Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo

    2017-09-21

    Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. In this paper, we introduce an adaptive ''learning rate'' that is responsible for balancing the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.

  2. Dietary flavonoid fisetin increases abundance of high-molecular-mass hyaluronan conferring resistance to prostate oncogenesis.

    Science.gov (United States)

    Lall, Rahul K; Syed, Deeba N; Khan, Mohammad Imran; Adhami, Vaqar M; Gong, Yuansheng; Lucey, John A; Mukhtar, Hasan

    2016-09-01

    We and others have shown previously that fisetin, a plant flavonoid, has therapeutic potential against many cancer types. Here, we examined the probable mechanism of its action in prostate cancer (PCa) using a global metabolomics approach. HPLC-ESI-MS analysis of tumor xenografts from fisetin-treated animals identified several metabolic targets, with hyaluronan (HA) as the most affected. The efficacy of fisetin on HA was then evaluated in vitro and also in vivo in the transgenic TRAMP mouse model of PCa. Size exclusion chromatography-multiangle laser light scattering (SEC-MALS) was performed to analyze the molar mass (Mw) distribution of HA. Fisetin treatment downregulated intracellular and secreted HA levels both in vitro and in vivo. Fisetin inhibited HA synthesis and degradation enzymes, which led to cessation of HA synthesis and also repressed the degradation of the available high-molecular-mass (HMM)-HA. SEC-MALS analysis of intact HA fragment size revealed that cells and animals had a greater abundance of HMM-HA and less low-molecular-mass (LMM)-HA upon fisetin treatment. Elevated HA levels have been shown to be associated with disease progression in certain cancer types. Biological responses triggered by HA mainly depend on the HA polymer length, where HMM-HA represses mitogenic signaling and has anti-inflammatory properties whereas LMM-HA promotes proliferation and inflammation. Similarly, Mw analysis of secreted HA fragment size revealed that less HMM-HA was secreted, which allowed more HMM-HA to be retained within the cells and tissues. Our findings establish that fisetin is an effective, non-toxic, potent HA synthesis inhibitor, which increases the abundance of antiangiogenic HMM-HA and could be used for the management of PCa. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Implications of the Declarative/Procedural Model for Improving Second Language Learning: The Role of Memory Enhancement Techniques

    Science.gov (United States)

    Ullman, Michael T.; Lovelett, Jarrett T.

    2018-01-01

    The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till…

  4. Discovering System Health Anomalies using Data Mining Techniques

    Data.gov (United States)

    National Aeronautics and Space Administration — We discuss a statistical framework that underlies envelope detection schemes as well as dynamical models based on Hidden Markov Models (HMM) that can encompass both...

  5. Development of a diagnosis- and procedure-based risk model for 30-day outcome after pediatric cardiac surgery.

    Science.gov (United States)

    Crowe, Sonya; Brown, Kate L; Pagel, Christina; Muthialu, Nagarajan; Cunningham, David; Gibbs, John; Bull, Catherine; Franklin, Rodney; Utley, Martin; Tsang, Victor T

    2013-05-01

    The study objective was to develop a risk model incorporating diagnostic information to adjust for case-mix severity during routine monitoring of outcomes for pediatric cardiac surgery. Data from the Central Cardiac Audit Database for all pediatric cardiac surgery procedures performed in the United Kingdom between 2000 and 2010 were included: 70% for model development and 30% for validation. Units of analysis were 30-day episodes after the first surgical procedure. We used logistic regression for 30-day mortality. Risk factors considered included procedural information based on Central Cardiac Audit Database "specific procedures," diagnostic information defined by 24 "primary" cardiac diagnoses and "univentricular" status, and other patient characteristics. Of the 27,140 30-day episodes in the development set, 25,613 were survivals, 834 were deaths, and 693 were of unknown status (mortality, 3.2%). The risk model includes procedure, cardiac diagnosis, univentricular status, age band (neonate, infant, child), continuous age, continuous weight, presence of non-Down syndrome comorbidity, bypass, and year of operation 2007 or later (because of decreasing mortality). A risk score was calculated for 95% of cases in the validation set (weight missing in 5%). The model discriminated well; the C-index for validation set was 0.77 (0.81 for post-2007 data). Removal of all but procedural information gave a reduced C-index of 0.72. The model performed well across the spectrum of predicted risk, but there was evidence of underestimation of mortality risk in neonates undergoing operation from 2007. The risk model performs well. Diagnostic information added useful discriminatory power. A future application is risk adjustment during routine monitoring of outcomes in the United Kingdom to assist quality assurance. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.

  6. Experiences with a procedure for modeling product knowledge

    DEFF Research Database (Denmark)

    Hansen, Benjamin Loer; Hvam, Lars

    2002-01-01

    This paper presents experiences with a procedure for building configurators. The procedure has been used in an American company producing custom-made precision air conditioning equipment. The paper describes experiences with the use of the procedure and experiences with the project in general....

  7. Emotion Expression of Robot with Personality

    Directory of Open Access Journals (Sweden)

    Xue Hu

    2013-01-01

    Full Text Available A robot emotional expression model based on the Hidden Markov Model (HMM) is built to enable robots with different personalities to respond at a more satisfactory emotional level. Gross's emotion regulation theory and the Five Factor Model (FFM), which form the theoretical basis, are first described. The importance of the effect of personality on the emotion expression process is then discussed, along with how to quantify that effect. After that, the HMM algorithm is used to describe the process of emotional state transition and expression, and the expression transition probabilities as affected by personality are calculated. Finally, the model is simulated and applied on a robot platform. The results show that the emotional expression model can produce humanlike expressions and improve human-computer interaction.

  8. A Procedure for Identification of Appropriate State Space and ARIMA Models Based on Time-Series Cross-Validation

    Directory of Open Access Journals (Sweden)

    Patrícia Ramos

    2016-11-01

    Full Text Available In this work, a cross-validation procedure is used to identify an appropriate Autoregressive Integrated Moving Average (ARIMA) model and an appropriate state space model for a time series. A minimum size for the training set is specified. The procedure is based on one-step forecasts and uses different training sets, each containing one more observation than the previous one. All possible state space models and all ARIMA models whose orders are allowed to range reasonably are fitted, considering raw data and log-transformed data with regular differencing (up to second-order differences) and, if the time series is seasonal, seasonal differencing (up to first-order differences). The root mean squared error for each model is calculated over the one-step forecasts obtained. The model which has the lowest root mean squared error and passes the Ljung–Box test using all of the available data at a reasonable significance level is selected among all the ARIMA and state space models considered. The procedure is exemplified in this paper with a case study of retail sales of different categories of women’s footwear from a Portuguese retailer, and its accuracy is compared with three reliable forecasting approaches. The results show that our procedure consistently forecasts more accurately than the other approaches and that the improvements in accuracy are significant.
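
    The expanding-window, one-step evaluation described above can be sketched directly (the candidate forecasters here are deliberately trivial stand-ins for fitted ARIMA and state space models):

```python
import math

def rolling_origin_rmse(series, forecaster, min_train=4):
    """RMSE of one-step forecasts over expanding training windows:
    each split adds one more observation to the training set."""
    sq_errors = []
    for t in range(min_train, len(series)):
        fcst = forecaster(series[:t])          # one-step-ahead forecast
        sq_errors.append((series[t] - fcst) ** 2)
    return math.sqrt(sum(sq_errors) / len(sq_errors))

naive = lambda h: h[-1]                  # random-walk forecast
overall_mean = lambda h: sum(h) / len(h)

series = [10, 12, 11, 13, 14, 13, 15, 16, 15, 17]   # trending toy sales data
# select whichever candidate has the lower cross-validated RMSE
scores = {"naive": rolling_origin_rmse(series, naive),
          "mean": rolling_origin_rmse(series, overall_mean)}
best = min(scores, key=scores.get)       # the naive forecaster wins on a trend
```

    The full procedure would additionally run the Ljung–Box check on the residuals of the selected model before accepting it.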

  9. A New Algorithm for Identifying Cis-Regulatory Modules Based on Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Haitao Guo

    2017-01-01

    Full Text Available The discovery of cis-regulatory modules (CRMs is the key to understanding mechanisms of transcription regulation. Since CRMs have specific regulatory structures that are the basis for the regulation of gene expression, how to model the regulatory structure of CRMs has a considerable impact on the performance of CRM identification. The paper proposes a CRM discovery algorithm called ComSPS. ComSPS builds a regulatory structure model of CRMs based on HMM by exploring the rules of CRM transcriptional grammar that governs the internal motif site arrangement of CRMs. We test ComSPS on three benchmark datasets and compare it with five existing methods. Experimental results show that ComSPS performs better than them.

  10. A New Algorithm for Identifying Cis-Regulatory Modules Based on Hidden Markov Model

    Science.gov (United States)

    2017-01-01

    The discovery of cis-regulatory modules (CRMs) is the key to understanding mechanisms of transcription regulation. Since CRMs have specific regulatory structures that are the basis for the regulation of gene expression, how to model the regulatory structure of CRMs has a considerable impact on the performance of CRM identification. The paper proposes a CRM discovery algorithm called ComSPS. ComSPS builds a regulatory structure model of CRMs based on HMM by exploring the rules of CRM transcriptional grammar that governs the internal motif site arrangement of CRMs. We test ComSPS on three benchmark datasets and compare it with five existing methods. Experimental results show that ComSPS performs better than them. PMID:28497059

  11. Dynamical modeling procedure of a Li-ion battery pack suitable for real-time applications

    International Nuclear Information System (INIS)

    Castano, S.; Gauchia, L.; Voncila, E.; Sanz, J.

    2015-01-01

    Highlights: • Dynamical modeling of a 50 A h battery pack composed of 56 cells. • Detailed analysis of SOC tests at realistic performance range imposed by BMS. • We propose an electrical circuit that improves how the battery capacity is modeled. • The model is validated in the SOC range using a real-time experimental setup. - Abstract: This paper presents the modeling of a 50 A h battery pack composed of 56 cells, taking into account real battery performance conditions imposed by the BMS control. The modeling procedure starts with a detailed analysis of experimental charge and discharge SOC tests. Results from these tests are used to obtain the battery model parameters at a realistic performance range (20–80% SOC). The model topology aims to better describe the finite charge contained in a battery pack. The model has been validated at three different SOC values in order to verify the model response at real battery pack operation conditions. The validation tests show that the battery pack model is able to simulate the real battery response with excellent accuracy in the range tested. The proposed modeling procedure is fully applicable to any Li-ion battery pack, regardless of the number of series or parallel cells or its rated capacity
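
    As a generic illustration of this kind of dynamical battery model (a first-order Thevenin equivalent circuit with coulomb-counted SOC; the parameter values and the linear OCV curve below are invented, not the pack identified in the paper):

```python
def simulate_battery(current, dt, capacity_ah, soc0, r0=0.002, r1=0.001,
                     c1=2000.0, ocv=lambda soc: 3.4 + 0.8 * soc):
    """First-order Thevenin cell model. `current` is a list of currents in A
    (positive = discharge); returns a (SOC, terminal voltage) pair per step."""
    soc, v1, out = soc0, 0.0, []
    for i in current:
        # RC polarization branch, forward Euler: dv1/dt = i/C1 - v1/(R1*C1)
        v1 += dt * (i / c1 - v1 / (r1 * c1))
        # coulomb counting: SOC falls by the charge removed / total capacity
        soc -= i * dt / (capacity_ah * 3600.0)
        out.append((soc, ocv(soc) - i * r0 - v1))
    return out

# half an hour at 50 A drains a 50 A h cell from 80% to 30% SOC
trace = simulate_battery([50.0] * 1800, 1.0, 50.0, 0.8)
```

    In the procedure above, the charge/discharge SOC tests over the realistic 20–80% range would supply the resistances, the RC time constant, and the measured OCV(SOC) curve in place of these placeholders.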

  12. High-performance speech recognition using consistency modeling

    Science.gov (United States)

    Digalakis, Vassilios; Murveit, Hy; Monaco, Peter; Neumeyer, Leo; Sankar, Ananth

    1994-12-01

    The goal of SRI's consistency modeling project is to improve the raw acoustic modeling component of SRI's DECIPHER speech recognition system and develop consistency modeling technology. Consistency modeling aims to reduce the number of improper independence assumptions used in traditional speech recognition algorithms so that the resulting speech recognition hypotheses are more self-consistent and, therefore, more accurate. At the initial stages of this effort, SRI focused on developing the appropriate base technologies for consistency modeling. We first developed the Progressive Search technology that allowed us to perform large-vocabulary continuous speech recognition (LVCSR) experiments. Since its conception and development at SRI, this technique has been adopted by most laboratories, including other ARPA contracting sites, doing research on LVCSR. Another goal of the consistency modeling project is to attack difficult modeling problems, when there is a mismatch between the training and testing phases. Such mismatches may include outlier speakers, different microphones, and additive noise. We were able to either develop new, or transfer and evaluate existing, technologies that adapted our baseline genonic HMM recognizer to such difficult conditions.

  13. Epitaxial superlattices with titanium nitride as a plasmonic component for optical hyperbolic metamaterials

    DEFF Research Database (Denmark)

    Naik, Gururaj V.; Saha, Bivas; Liu, Jing

    2014-01-01

    , we address these issues by realizing an epitaxial superlattice as an HMM. The superlattice consists of ultrasmooth layers as thin as 5 nm and exhibits sharp interfaces which are essential for high-quality HMM devices. Our study reveals that such a TiN-based superlattice HMM provides a higher PDOS...

  14. Impact of silage additives on aerobic stability and characteristics of high-moisture maize during exposure to air, and on fermented liquid feed

    DEFF Research Database (Denmark)

    Canibe, Nuria; Kristensen, Niels Bastian; Jensen, Bent Borg

    2014-01-01

    during aeration- and impact of additives on the aerobic stability of HMM depended on the characteristics of the samples. No blooming of Enterobacteriaceae was observed in FLF containing c. 20 g HMM 100 g−1. Significance and Impact of the Study The impact of silage additives on aerobic stability of HMM...

  15. Model Building – A Circular Approach to Evaluate Multidimensional Patterns and Operationalized Procedures

    Directory of Open Access Journals (Sweden)

    Franz HAAS

    2017-12-01

    Full Text Available Managers operate in highly different fields. Decision-making can be based on models that partly reflect these differences. The challenge is to connect the respective models without too great a disruption. A threefold procedural approach is proposed by chaining a scheme of modeling in a complex field to an operationalized model and then to statistical multivariate methods. Multivariate pattern-detecting methods offer the chance to partially evaluate patterns within the complex field. This step completes the research cycle, and the improved models can be used in a further cycle.

  16. A network society communicative model for optimizing the Refugee Status Determination (RSD) procedures

    Directory of Open Access Journals (Sweden)

    Andrea Pacheco Pacífico

    2013-01-01

    Full Text Available This article recommends a new way to improve Refugee Status Determination (RSD) procedures by proposing a network society communicative model based on active involvement and dialogue among all implementing partners. This model, named after proposals from Castells, Habermas, Apel, Chimni, and Betts, would be mediated by the United Nations High Commissioner for Refugees (UNHCR), whose role would be modeled after the practice of the International Committee of the Red Cross (ICRC).

  17. Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification

    Science.gov (United States)

    Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao

    2018-01-01

    Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image by a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further creates robust and effective depictions of the targets' change in posture and viewpoint. To validate the model against the human visual search mechanism, two eyetracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention models. Moreover, it indicates the plausibility of utilizing visual track data to identify targets.
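
    The 2D-Viterbi step generalizes classic one-dimensional Viterbi decoding to a lattice of image cells. As an illustrative sketch of the underlying recursion (a standard 1D log-space Viterbi over a discrete observation sequence, not the paper's 2D variant), one might write:

```python
import numpy as np

def viterbi(log_pi, log_A, log_B, obs):
    """Most likely HMM state path, computed entirely in log space.

    log_pi: (S,) initial state log-probabilities
    log_A:  (S, S) transition log-probabilities (row = from, col = to)
    log_B:  (S, V) emission log-probabilities over V symbols
    obs:    sequence of observed symbol indices
    """
    S, T = log_pi.shape[0], len(obs)
    delta = np.empty((T, S))            # best log-score ending in each state
    psi = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):       # follow backpointers
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```

    With sticky transitions and sharp emissions the decoded path tracks the observations, e.g. `viterbi(np.log([0.5, 0.5]), np.log([[0.9, 0.1], [0.1, 0.9]]), np.log([[0.9, 0.1], [0.1, 0.9]]), [0, 0, 1, 1])` returns `[0, 0, 1, 1]`.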

  18. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Science.gov (United States)

    Joo, Hyun; Chavan, Archana G; Day, Ryan; Lennox, Kristin P; Sukhanov, Paul; Dahl, David B; Vannucci, Marina; Tsai, Jerry

    2011-10-01

    Unlike the core structural elements of a protein like regular secondary structure, template-based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to within around 3.66 Å for loops up to 17 residues. In a direct sampling test against the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  19. Near-native protein loop sampling using nonparametric density estimation accommodating sparcity.

    Directory of Open Access Journals (Sweden)

    Hyun Joo

    2011-10-01

    Full Text Available Unlike the core structural elements of a protein like regular secondary structure, template-based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to within around 3.66 Å for loops up to 17 residues. In a direct sampling test against the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/.

  20. Near-Native Protein Loop Sampling Using Nonparametric Density Estimation Accommodating Sparcity

    Science.gov (United States)

    Day, Ryan; Lennox, Kristin P.; Sukhanov, Paul; Dahl, David B.; Vannucci, Marina; Tsai, Jerry

    2011-01-01

    Unlike the core structural elements of a protein like regular secondary structure, template-based modeling (TBM) has difficulty with loop regions due to their variability in sequence and structure as well as the sparse sampling from a limited number of homologous templates. We present a novel, knowledge-based method for loop sampling that leverages homologous torsion angle information to estimate a continuous joint backbone dihedral angle density at each loop position. The φ,ψ distributions are estimated via a Dirichlet process mixture of hidden Markov models (DPM-HMM). Models are quickly generated based on samples from these distributions and were enriched using an end-to-end distance filter. The performance of the DPM-HMM method was evaluated against a diverse test set in a leave-one-out approach. Candidates as low as 0.45 Å RMSD and with a worst case of 3.66 Å were produced. For the canonical loops like the immunoglobulin complementarity-determining regions (mean RMSD 7.0 Å), this sampling method produces a population of loop structures to within around 3.66 Å for loops up to 17 residues. In a direct sampling test against the Loopy algorithm, our method demonstrates the ability to sample nearer native structures for both the canonical CDRH1 and non-canonical CDRH3 loops. Lastly, in the realistic test conditions of the CASP9 experiment, successful application of DPM-HMM for 90 loops from 45 TBM targets shows the general applicability of our sampling method to the loop modeling problem. These results demonstrate that our DPM-HMM produces an advantage by consistently sampling near-native loop structures. The software used in this analysis is available for download at http://www.stat.tamu.edu/~dahl/software/cortorgles/. PMID:22028638

  1. The Linear Predictive Coding (LPC) Method in Hidden Markov Model (HMM) Classification of Arabic Words Spoken by Indonesian Speakers

    Directory of Open Access Journals (Sweden)

    Ririn Kusumawati

    2016-05-01

    In the classification stage, a Hidden Markov Model analyzes the voice signal and searches for the most likely value that can be recognized. The parameters obtained from this modeling are then compared against recordings of Arabic speakers. The classification test results show that Hidden Markov Models with Linear Predictive Coding feature extraction achieve an average accuracy of 78.6% for test data sampled at 8,000 Hz, 80.2% at 22,050 Hz, and 79% at 44,100 Hz.
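
    LPC fits an all-pole model to each speech frame by solving the autocorrelation normal equations, most often with the Levinson-Durbin recursion. A minimal numpy sketch (illustrative only, not the paper's implementation):

```python
import numpy as np

def lpc(x, p):
    """Order-p LPC coefficients via autocorrelation + Levinson-Durbin.

    Returns (a, err) where the predictor is x[n] ~ sum_k a[k-1] * x[n-k]
    and err is the final prediction-error energy.
    """
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(p + 1)])  # autocorrelation
    a = np.zeros(p + 1)   # a[0] is unused padding; a[1:] hold coefficients
    err = r[0]
    for i in range(1, p + 1):
        # reflection coefficient for order i
        k = (r[i] - a[1:i] @ r[i - 1:0:-1]) / err
        a_new = a.copy()
        a_new[i] = k
        a_new[1:i] = a[1:i] - k * a[i - 1:0:-1]   # update lower-order terms
        a, err = a_new, err * (1.0 - k * k)
    return a[1:], err
```

    As a sanity check, an order-1 fit to the decaying exponential x[n] = 0.9**n recovers a coefficient close to 0.9.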

  2. Lumping procedure for a kinetic model of catalytic naphtha reforming

    Directory of Open Access Journals (Sweden)

    H. M. Arani

    2009-12-01

    Full Text Available A lumping procedure is developed for obtaining kinetic and thermodynamic parameters of catalytic naphtha reforming. All kinetic and deactivation parameters are estimated from industrial data and thermodynamic parameters are calculated from derived mathematical expressions. The proposed model contains 17 lumps that include the C6 to C8+ hydrocarbon range and 15 reaction pathways. Hougen-Watson Langmuir-Hinshelwood type reaction rate expressions are used for kinetic simulation of catalytic reactions. The kinetic parameters are benchmarked with several sets of plant data and estimated by the SQP optimization method. After calculation of deactivation and kinetic parameters, plant data are compared with model predictions and only minor deviations between experimental and calculated data are generally observed.

  3. An evaluation of in vivo desensitization and video modeling to increase compliance with dental procedures in persons with mental retardation.

    Science.gov (United States)

    Conyers, Carole; Miltenberger, Raymond G; Peterson, Blake; Gubin, Amber; Jurgens, Mandy; Selders, Andrew; Dickinson, Jessica; Barenz, Rebecca

    2004-01-01

    Fear of dental procedures deters many individuals with mental retardation from accepting dental treatment. This study was conducted to assess the effectiveness of two procedures, in vivo desensitization and video modeling, for increasing compliance with dental procedures in participants with severe or profound mental retardation. Desensitization increased compliance for all 5 participants, whereas video modeling increased compliance for only 1 of 3 participants.

  4. SITE-94. Discrete-feature modelling of the Aespoe site: 4. Source data and detailed analysis procedures

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J E [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    Specific procedures and source data are described for the construction and application of discrete-feature hydrological models for the vicinity of Aespoe. Documentation is given for all major phases of the work, including: Statistical analyses to develop and validate discrete-fracture network models, Preliminary evaluation, construction, and calibration of the site-scale model based on the SITE-94 structural model of Aespoe, Simulation of multiple realizations of the integrated model, and variations, to predict groundwater flow, and Evaluation of near-field and far-field parameters for performance assessment calculations. Procedures are documented in terms of the computer batch files and executable scripts that were used to perform the main steps in these analyses, to provide for traceability of results that are used in the SITE-94 performance assessment calculations. 43 refs.

  5. SITE-94. Discrete-feature modelling of the Aespoe site: 4. Source data and detailed analysis procedures

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    Specific procedures and source data are described for the construction and application of discrete-feature hydrological models for the vicinity of Aespoe. Documentation is given for all major phases of the work, including: Statistical analyses to develop and validate discrete-fracture network models, Preliminary evaluation, construction, and calibration of the site-scale model based on the SITE-94 structural model of Aespoe, Simulation of multiple realizations of the integrated model, and variations, to predict groundwater flow, and Evaluation of near-field and far-field parameters for performance assessment calculations. Procedures are documented in terms of the computer batch files and executable scripts that were used to perform the main steps in these analyses, to provide for traceability of results that are used in the SITE-94 performance assessment calculations. 43 refs

  6. Infant speech-sound discrimination testing: effects of stimulus intensity and procedural model on measures of performance.

    Science.gov (United States)

    Nozza, R J

    1987-06-01

    Performance of infants in a speech-sound discrimination task (/ba/ vs /da/) was measured at three stimulus intensity levels (50, 60, and 70 dB SPL) using the operant head-turn procedure. The procedure was modified so that data could be treated as though from a single-interval (yes-no) procedure, as is commonly done, as well as if from a sustained attention (vigilance) task. Discrimination performance changed significantly with increase in intensity, suggesting caution in the interpretation of results from infant discrimination studies in which only single stimulus intensity levels within this range are used. The assumptions made about the underlying methodological model did not change the performance-intensity relationships. However, infants demonstrated response decrement, typical of vigilance tasks, which supports the notion that the head-turn procedure is represented best by the vigilance model. Analysis then was done according to a method designed for tasks with undefined observation intervals [C. S. Watson and T. L. Nichols, J. Acoust. Soc. Am. 59, 655-668 (1976)]. Results reveal that, while group data are reasonably well represented across levels of difficulty by the fixed-interval model, there is a variation in performance as a function of time following trial onset that could lead to underestimation of performance in some cases.
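
    When head-turn data are scored under the yes-no model, sensitivity is conventionally summarized as d' = z(hit rate) - z(false-alarm rate). A minimal sketch (illustrative; the half-trial correction for extreme rates is a common convention, not necessarily this study's exact procedure):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Yes-no sensitivity index d' from trial counts.

    Rates of exactly 0 or 1 are clamped by half a trial so the
    z-transform stays finite.
    """
    z = NormalDist().inv_cdf
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    fa_rate = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    return z(hit_rate) - z(fa_rate)
```

    For example, 18 hits out of 20 signal trials against 4 false alarms out of 20 noise trials gives d' of about 2.12, while equal hit and false-alarm rates give d' = 0.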

  7. Activity Recognition Using Hybrid Generative/Discriminative Models on Home Environments Using Binary Sensors

    Directory of Open Access Journals (Sweden)

    Araceli Sanchis

    2013-04-01

    Full Text Available Activities of daily living are good indicators of elderly health status, and activity recognition in smart environments is a well-known problem that has been previously addressed by several studies. In this paper, we describe the use of two powerful machine learning schemes, ANN (Artificial Neural Network) and SVM (Support Vector Machines), within the framework of HMM (Hidden Markov Model), in order to tackle the task of activity recognition in a home setting. The output scores of the discriminative models, after processing, are used as observation probabilities of the hybrid approach. We evaluate our approach by comparing these hybrid models with other classical activity recognition methods using five real datasets. We show how the hybrid models achieve significantly better recognition performance, with significance level p < 0.05, proving that the hybrid approach is better suited for the addressed domain.
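
    A common way to use discriminative outputs inside an HMM is the scaled-likelihood trick: divide each classifier posterior P(state | features) by the state prior P(state) before plugging it in as an observation probability. The sketch below (an assumed detail of such hybrids, not necessarily this paper's exact post-processing) pairs that with a log-space forward pass:

```python
import numpy as np

def scaled_likelihoods(posteriors, priors):
    """P(x|s) up to a state-independent factor: P(s|x) / P(s)."""
    return posteriors / priors

def forward_loglik(log_pi, log_A, log_emit):
    """HMM log-likelihood of a sequence via the forward algorithm.

    log_emit[t, s] is the (scaled) log observation probability of
    frame t under state s; the recursion is stabilized by shifting
    alpha by its maximum before exponentiating (log-sum-exp).
    """
    alpha = log_pi + log_emit[0]
    for t in range(1, log_emit.shape[0]):
        m = alpha.max()
        alpha = np.log(np.exp(alpha - m) @ np.exp(log_A)) + m + log_emit[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```

    With a single state the forward pass reduces to summing the per-frame emission log-probabilities, which makes a handy sanity check.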

  8. Human gait recognition by pyramid of HOG feature on silhouette images

    Science.gov (United States)

    Yang, Guang; Yin, Yafeng; Park, Jeanrok; Man, Hong

    2013-03-01

    As an uncommon biometric modality, human gait recognition has the great advantage of identifying people at a distance, without high-resolution images. It has attracted much attention in recent years, especially in the fields of computer vision and remote sensing. In this paper, we propose a human gait recognition framework that consists of a reliable background subtraction method, followed by pyramid of Histogram of Oriented Gradients (pHOG) feature extraction on the silhouette image, and a Hidden Markov Model (HMM) based classifier. Through background subtraction, the silhouette of the human gait in each frame is extracted and normalized from the raw video sequence. After removing the shadow and noise in each region of interest (ROI), the pHOG feature is computed on the silhouette images. The pHOG features of each gait class are then used to train a corresponding HMM. In the test stage, the pHOG feature is extracted from each test sequence and used to calculate the posterior probability for each trained HMM. Experimental results on the CASIA Gait Dataset B1 demonstrate that our proposed method can achieve a very competitive recognition rate.
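
    A pHOG descriptor concatenates gradient-orientation histograms computed over an increasingly fine spatial grid (1x1, 2x2, 4x4, ...). A simplified, self-contained sketch (illustrative; not the authors' implementation):

```python
import numpy as np

def phog(image, levels=2, bins=8):
    """Simplified pyramid-of-HOG descriptor for a 2-D grayscale image.

    Magnitude-weighted orientation histograms (unsigned, 0..pi) are
    pooled over a spatial pyramid, concatenated, and L1-normalized.
    """
    gy, gx = np.gradient(image.astype(float))   # row- and column-derivatives
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # fold direction into [0, pi)
    h, w = image.shape
    feats = []
    for level in range(levels + 1):
        cells = 2 ** level                      # cells x cells grid
        for i in range(cells):
            for j in range(cells):
                ys = slice(i * h // cells, (i + 1) * h // cells)
                xs = slice(j * w // cells, (j + 1) * w // cells)
                hist, _ = np.histogram(ang[ys, xs], bins=bins,
                                       range=(0.0, np.pi),
                                       weights=mag[ys, xs])
                feats.append(hist)
    f = np.concatenate(feats)
    return f / (f.sum() + 1e-12)
```

    With the defaults the descriptor has (1 + 4 + 16) x 8 = 168 dimensions; one such vector per silhouette frame would feed the per-class HMMs.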

  9. Peer-assisted learning model enhances clinical clerk's procedural skills.

    Science.gov (United States)

    Huang, Chia-Chang; Hsu, Hui-Chi; Yang, Ling-Yu; Chen, Chen-Huan; Yang, Ying-Ying; Chang, Ching-Chih; Chuang, Chiao-Lin; Lee, Wei-Shin; Lee, Fa-Yauh; Hwang, Shinn-Jang

    2018-05-17

    Failure to transfer procedural skills learned in a laboratory to the bedside is commonly due to a lack of peer support/stimulation. A digital platform (Facebook) allows new clinical clerks to share experiences and tips that help augment their procedural skills in a peer-assisted learning/teaching method. This study aims to investigate the effectiveness of the innovation of using the digital platform to support the transfer of laboratory-trained procedural skills in the clinical units. Volunteer clinical clerks (n = 44) were enrolled into the peer-assisted learning (PAL) group, which was characterized by the peer-assisted learning of procedural skills during their final 3-month clinical clerkship block. Other clerks (n = 51) did not join the procedural skills-specific Facebook group and served as the self-directed learning regular group. The participants in both the PAL and regular groups completed pre- and post-intervention self-assessments for general self-assessed efficiency ratings (GSER) and skills specific self-assessed efficiency ratings (SSSER) for performing vein puncture, intravenous (IV) catheter and nasogastric (NG) tube insertion. Finally, all clerks received the post-intervention 3-station Objective Structured Clinical Skills Examination (OSCE) to test their proficiency for the abovementioned three procedural skills. Higher cumulative numbers of vein punctures, IV catheter insertions and NG tube insertions at the bedside were carried out by the PAL group than the regular group. A greater improvement in GSERs and SSSERs for medical procedures was found in the PAL group than in the regular group. The PAL group obtained higher procedural skills scores in the post-intervention OSCEs than the regular group. Our study suggested that the implementation of a procedural skill-specific digital platform effectively helps clerks to transfer laboratory-trained procedural skills into the clinical units. In comparison with the regular self-directed learning

  10. Using Video Modeling with Voiceover Instruction Plus Feedback to Train Staff to Implement Direct Teaching Procedures.

    Science.gov (United States)

    Giannakakos, Antonia R; Vladescu, Jason C; Kisamore, April N; Reeve, Sharon A

    2016-06-01

    Direct teaching procedures are often an important part of early intensive behavioral intervention for consumers with autism spectrum disorder. In the present study, a video model with voiceover (VMVO) instruction plus feedback was evaluated to train three staff trainees to implement a most-to-least direct (MTL) teaching procedure. Probes for generalization were conducted with untrained direct teaching procedures (i.e., least-to-most, prompt delay) and with an actual consumer. The results indicated that VMVO plus feedback was effective in training the staff trainees to implement the MTL procedure. Although additional feedback was required for the staff trainees to show mastery of the untrained direct teaching procedures (i.e., least-to-most and prompt delay) and with an actual consumer, moderate to high levels of generalization were observed.

  11. Actor-Network Procedures

    NARCIS (Netherlands)

    Pavlovic, Dusko; Meadows, Catherine; Ramanujam, R.; Ramaswamy, Srini

    2012-01-01

    In this paper we propose actor-networks as a formal model of computation in heterogenous networks of computers, humans and their devices, where these new procedures run; and we introduce Procedure Derivation Logic (PDL) as a framework for reasoning about security in actor-networks, as an extension

  12. Performance audit procedures for opacity monitors

    International Nuclear Information System (INIS)

    Plaisance, S.J.; Peeler, J.W.

    1987-04-01

    This manual contains monitor-specific performance audit procedures and data forms for use in conducting audits of installed opacity continuous emission monitoring systems (CEMS). General auditing procedures and acceptance limits for various audit criteria are discussed. Practical considerations and common problems encountered in conducting audits are delineated, and recommendations are included to optimize the successful completion of performance audits. Performance audit procedures and field-data forms were developed for six common opacity CEMS: (1) Lear Siegler, Inc. Model RM-41; (2) Lear Siegler, Inc. Model RM-4; (3) Dynatron Model 1100; (4) Thermo Electron, Inc. Model 400; (5) Thermo Electron, Inc. Model 1000A; and (6) Enviroplan Model D-R280 AV. Generic audit procedures are included for use in evaluating opacity CEMS with multiple transmissometers and combiner devices. In addition, several approaches for evaluating the zero-alignment or clear-path zero response are described. The zero-alignment procedures are included since the factor is fundamental to the accuracy of opacity monitoring data, even though the zero-alignment checks cannot usually be conducted during a performance audit

  13. Fermented high moisture maize grain as supplement to alfalfa haylage is superior over unfermented dry maize grain in diet dry matter digestibility

    Directory of Open Access Journals (Sweden)

    Marina Vranić

    2011-09-01

    Full Text Available The objectives of the experiment were to examine whether high moisture maize grain (HMM) is superior to low moisture maize grain (LMM) as a supplement to alfalfa haylage (Medicago sativa L.) (AH). The effects of HMM and LMM supplementation to AH were studied on feed intake, water intake and dry matter (DM) digestibility in wether sheep. Alfalfa was harvested at the beginning of flowering and ensiled into round bales wrapped with plastic. The average DM and crude protein (CP) concentrations of AH were 534.7 g kg-1 fresh sample and 141 g kg-1 DM, respectively. The average DM contents of HMM and LMM were 795.9 and 915.1 g kg-1 fresh sample, respectively, while the average CP concentrations (g kg-1 DM) were 116.8 and 106.0, respectively. The study consisted of five feeding treatments incorporating AH only and AH supplemented with 5 or 10 g HMM or LMM d-1 kg-1 wether body weight. The inclusion of HMM (5 or 10 g kg-1 body weight d-1) into the AH-based ration resulted in higher diet DM digestibility (P<0.05) in comparison with LMM inclusion (5 or 10 g kg-1 body weight d-1). Higher daily fresh matter intake (FMI) (P<0.05), dry matter intake (DMI) (P<0.05) and water intake (P<0.05) were achieved with LMM inclusion in comparison with HMM inclusion. The conclusion was that HMM is superior to LMM as a supplement to AH in terms of DM digestibility, while LMM has advantages over HMM in the intake characteristics measured.

  14. Simplified proceeding as a civil procedure model

    Directory of Open Access Journals (Sweden)

    Олексій Юрійович Зуб

    2016-01-01

    Full Text Available Currently, directions for the development of modern civil procedural law such as optimization, facilitation, and the forwarding of proceedings, which promote the efficiency of civil procedure, are of peculiar importance. One of their results is the emergence and functioning of a system of simplified proceedings designed to significantly facilitate the hearing of some categories of cases, promote their consideration within a reasonable time, and reduce legal expenses as far as possible. The category of “simplified proceedings” is underexamined in the native procedural-law scholarship. Many procedural scholars limited themselves to studying summary process (in the context of optimization as a way to improve the civil procedural form), summary proceedings and the procedures functioning within them, consideration of cases in absentia, as well as their modifications. Among the Ukrainian scientists who studied some aspects of the simplified proceedings are: E. A. Belyanevych, V. I. Bobrik, S. V. Vasilyev, M. V. Verbitska, S. I. Zapara, A. A. Zgama, V. V. Komarov, D. D. Luspenuk, U. V. Navrotska, V. V. Protsenko, T. V. Stepanova, E. A. Talukin, S. Y. Fursa, M. Y. Shtefan and others. The problems of simplified proceedings were studied by foreign scientists as well, such as: N. Andrews, Y. Y. Grubanon, N. A. Gromoshina, E. P. Kochanenko, J. Kohler, D. I. Krumskiy, E. M. Muradjan, I. V. Reshetnikova, U. Seidel, N. V. Sivak, M. Z. Shvarts, V. V. Yarkov and others. The paper objective is to develop a theoretically supported, practically reasonable notion of simplified proceedings in the civil process and, basing on that notion, on the international experience of the legislative regulation of simplified proceedings, and on native and foreign doctrine, to distinguish the essential features of simplified proceedings in the civil process and to describe them. In the paper we generated the notion of simplified proceedings that

  15. An innovative 3-D numerical modelling procedure for simulating repository-scale excavations in rock - SAFETI

    Energy Technology Data Exchange (ETDEWEB)

    Young, R. P.; Collins, D.; Hazzard, J.; Heath, A. [Department of Earth Sciences, Liverpool University, 4 Brownlow street, UK-0 L69 3GP Liverpool (United Kingdom); Pettitt, W.; Baker, C. [Applied Seismology Consultants LTD, 10 Belmont, Shropshire, UK-S41 ITE Shrewsbury (United Kingdom); Billaux, D.; Cundall, P.; Potyondy, D.; Dedecker, F. [Itasca Consultants S.A., Centre Scientifique A. Moiroux, 64, chemin des Mouilles, F69130 Ecully (France); Svemar, C. [Svensk Karnbranslemantering AB, SKB, Aspo Hard Rock Laboratory, PL 300, S-57295 Figeholm (Sweden); Lebon, P. [ANDRA, Parc de la Croix Blanche, 7, rue Jean Monnet, F-92298 Chatenay-Malabry (France)

    2004-07-01

    This paper presents current results from work performed within the European Commission project SAFETI. The main objective of SAFETI is to develop and test an innovative 3D numerical modelling procedure that will enable the 3-D simulation of nuclear waste repositories in rock. The modelling code is called AC/DC (Adaptive Continuum/ Dis-Continuum) and is partially based on Itasca Consulting Group's Particle Flow Code (PFC). Results are presented from the laboratory validation study where algorithms and procedures have been developed and tested to allow accurate 'Models for Rock' to be produced. Preliminary results are also presented on the use of AC/DC with parallel processors and adaptive logic. During the final year of the project a detailed model of the Prototype Repository Experiment at SKB's Hard Rock Laboratory will be produced using up to 128 processors on the parallel super computing facility at Liverpool University. (authors)

  16. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  17. Improving model construction of profile HMMs for remote homology detection through structural alignment

    Directory of Open Access Journals (Sweden)

    Zaverucha Gerson

    2007-11-01

    Full Text Available Abstract Background Remote homology detection is a challenging problem in Bioinformatics. Arguably, profile Hidden Markov Models (pHMMs) are one of the most successful approaches in addressing this important problem. pHMM packages present a relatively small computational cost, and perform particularly well at recognizing remote homologies. This raises the question of whether structural alignments could impact the performance of pHMMs trained from proteins in the Twilight Zone, as structural alignments are often more accurate than sequence alignments at identifying motifs and functional residues. Next, we assess the impact of using structural alignments on pHMM performance. Results We used the SCOP database to perform our experiments. Structural alignments were obtained using the 3DCOFFEE and MAMMOTH-mult tools; sequence alignments were obtained using CLUSTALW, TCOFFEE, MAFFT and PROBCONS. We performed leave-one-family-out cross-validation over super-families. Performance was evaluated through ROC curves and paired two-tailed t-tests. Conclusion We observed that pHMMs derived from structural alignments performed significantly better than pHMMs derived from sequence alignment in low-identity regions, mainly below 20%. We believe this is because structural alignment tools are better at focusing on the important patterns that are more often conserved through evolution, resulting in higher quality pHMMs. On the other hand, sensitivity of these tools is still quite low for these low-identity regions. Our results suggest a number of possible directions for improvements in this area.

  18. Improving model construction of profile HMMs for remote homology detection through structural alignment.

    Science.gov (United States)

    Bernardes, Juliana S; Dávila, Alberto M R; Costa, Vítor S; Zaverucha, Gerson

    2007-11-09

    Remote homology detection is a challenging problem in Bioinformatics. Arguably, profile Hidden Markov Models (pHMMs) are one of the most successful approaches in addressing this important problem. pHMM packages present a relatively small computational cost, and perform particularly well at recognizing remote homologies. This raises the question of whether structural alignments could impact the performance of pHMMs trained from proteins in the Twilight Zone, as structural alignments are often more accurate than sequence alignments at identifying motifs and functional residues. Here, we assess the impact of using structural alignments on pHMM performance. We used the SCOP database to perform our experiments. Structural alignments were obtained using the 3DCOFFEE and MAMMOTH-mult tools; sequence alignments were obtained using CLUSTALW, TCOFFEE, MAFFT and PROBCONS. We performed leave-one-family-out cross-validation over super-families. Performance was evaluated through ROC curves and paired two-tailed t-tests. We observed that pHMMs derived from structural alignments performed significantly better than pHMMs derived from sequence alignments in low-identity regions, mainly below 20%. We believe this is because structural alignment tools are better at focusing on the important patterns that are more often conserved through evolution, resulting in higher-quality pHMMs. On the other hand, the sensitivity of these tools is still quite low for these low-identity regions. Our results suggest a number of possible directions for improvements in this area.
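
    The intuition that a better alignment yields a better profile can be illustrated with a stripped-down, match-states-only position-specific model (a hypothetical simplification; real pHMM packages such as those used in the paper also model insertions and deletions):

```python
# Toy profile-style scoring sketch: match-state emissions estimated from an
# alignment with Laplace pseudocounts, scored as log-odds against a uniform
# background. The alphabet and alignment below are invented for illustration.
from collections import Counter
import math

ALPHABET = "ACDE"  # toy alphabet; real pHMMs use 20 amino acids

def column_emissions(alignment, pseudocount=1.0):
    """Per-column emission probabilities with Laplace pseudocounts."""
    length = len(alignment[0])
    profile = []
    for i in range(length):
        counts = Counter(seq[i] for seq in alignment)
        total = len(alignment) + pseudocount * len(ALPHABET)
        profile.append({a: (counts[a] + pseudocount) / total for a in ALPHABET})
    return profile

def log_odds_score(profile, query, background=None):
    """Sum of per-position log-odds vs. a uniform background."""
    bg = background or {a: 1.0 / len(ALPHABET) for a in ALPHABET}
    return sum(math.log(col[q] / bg[q]) for col, q in zip(profile, query))

alignment = ["ACDE", "ACDE", "ACEE", "ADDE"]  # hypothetical aligned sequences
profile = column_emissions(alignment)
assert log_odds_score(profile, "ACDE") > log_odds_score(profile, "EDCA")
```

Changing the input alignment changes the per-column emission estimates, which is exactly where structural versus sequence alignments would make a difference.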

  19. Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.

    Science.gov (United States)

    Mørk, Søren; Holmes, Ian

    2012-03-01

    Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two currently most used model structures is the best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
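
    As a concrete illustration of the kind of model structure being compared, here is a minimal two-state (coding/non-coding) HMM with a log-space Viterbi decoder; the states and all transition and emission probabilities are invented, and the structure is far simpler than the benchmarked ones:

```python
# Minimal two-state gene-finding-style HMM with log-space Viterbi decoding.
# All probabilities below are made up for illustration.
import math

states = ("noncoding", "coding")
start = {"noncoding": 0.9, "coding": 0.1}
trans = {"noncoding": {"noncoding": 0.95, "coding": 0.05},
         "coding":    {"noncoding": 0.05, "coding": 0.95}}
emit = {"noncoding": {"A": 0.30, "C": 0.20, "G": 0.20, "T": 0.30},
        "coding":    {"A": 0.20, "C": 0.30, "G": 0.30, "T": 0.20}}

def viterbi(seq):
    """Most likely state path for an observation sequence."""
    V = [{s: math.log(start[s]) + math.log(emit[s][seq[0]]) for s in states}]
    back = []
    for x in seq[1:]:
        row, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + math.log(trans[p][s]))
            row[s] = V[-1][best] + math.log(trans[best][s]) + math.log(emit[s][x])
            ptr[s] = best
        V.append(row)
        back.append(ptr)
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# An AT-rich prefix followed by a GC-rich run: the decoder switches state.
path = viterbi("ATATGCGCGCGC")
```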

  20. Zoledronic acid overcomes chemoresistance and immunosuppression of malignant mesothelioma

    Science.gov (United States)

    Kopecka, Joanna; Gazzano, Elena; Sara, Orecchia; Ghigo, Dario; Riganti, Chiara

    2015-01-01

    The human malignant mesothelioma (HMM) is characterized by a chemoresistant and immunosuppressive phenotype. An effective strategy to restore chemosensitivity and immune reactivity against HMM is lacking. We investigated whether the use of zoledronic acid is an effective chemo-immunosensitizing strategy. We compared primary HMM samples with non-transformed mesothelial cells. HMM cells had a higher rate of cholesterol and isoprenoid synthesis, constitutive activation of the Ras/extracellular signal-regulated kinase 1/2 (ERK1/2)/hypoxia-inducible factor-1α (HIF-1α) pathway and up-regulation of the drug efflux transporter P-glycoprotein (Pgp). By decreasing the isoprenoid supply, zoledronic acid down-regulated the Ras/ERK1/2/HIF-1α/Pgp axis and chemosensitized the HMM cells to Pgp substrates. The HMM cells also produced higher amounts of kynurenine, decreased the proliferation of T-lymphocytes and expanded the number of T-regulatory (Treg) cells. Kynurenine synthesis was due to the transcription of the indoleamine 2,3-dioxygenase (IDO) enzyme, consequent to the activation of the signal transducer and activator of transcription-3 (STAT3). By reducing the activity of the Ras/ERK1/2/STAT3/IDO axis, zoledronic acid lowered kynurenine synthesis and the expansion of Treg cells, and increased the proliferation of T-lymphocytes. Thanks to its ability to decrease Ras/ERK1/2 activity, which is responsible for both Pgp-mediated chemoresistance and IDO-mediated immunosuppression, zoledronic acid is an effective chemo-immunosensitizing agent in HMM cells. PMID:25544757

  1. The Effects of Muscle Mass on Homocyst(e)ine Levels in Plasma and Urine.

    Science.gov (United States)

    Malinow, M René; Lister, Craig L; DE Crée, Carl

    The present study was designed to examine the relationship between homocyst(e)ine (H[e]) levels and muscle mass. Two experimental groups, each of 24 Caucasian males, one consisting of higher-muscle-mass subjects (HMM) and the other of lower-muscle-mass subjects (LMM), participated in this study. Muscle mass was estimated from 24-hour urine collections of creatinine (Crt). Muscle mass was 40.3 ± 15.9 kg in HMM and 37.2 ± 11.4 kg in LMM (P = 0.002). Mean plasma H(e) levels were 10.29 ± 2.9 nmol/mL in HMM and 10.02 ± 2.4 nmol/mL in LMM (not significant, [NS]). Urinary H(e) levels (UH[e]) were 9.95 ± 4.3 nmol/mL and 9.22 ± 2.9 nmol/mL for HMM and LMM, respectively (NS). Plasma H(e) levels correlated well with UH(e) (HMM: r = 0.58, P = 0.009; LMM: r = 0.66, P = 0.004). Muscle mass was not correlated with either plasma H(e) or UH(e). However, in HMM trends were identified for body mass to be correlated with UH(e) (r = 0.39, P = 0.10) and UCrt (r = 0.41, P = 0.08). Surprisingly, in HMM plasma and urinary Crt were only weakly correlated (r = 0.44, P = 0.06). Our results do not support a causal relationship between the amount of muscle mass and H(e) levels in plasma or urine.

  2. Experimental Testing Procedures and Dynamic Model Validation for Vanadium Redox Flow Battery Storage System

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Nørgård, Per Bromand

    2013-01-01

    The paper aims at characterizing the electrochemical and thermal parameters of a 15 kW/320 kWh vanadium redox flow battery (VRB) installed in the SYSLAB test facility of the DTU Risø Campus and experimentally validating the proposed dynamic model realized in Matlab-Simulink. The adopted testing...... efficiency of the battery system. The test procedure has general validity and could also be used for other storage technologies. The storage model proposed and described is suitable for electrical studies and can represent a general model in terms of validity. Finally, the model simulation outputs...

  3. Annual Expeditionary Warfare Conference (22nd)

    Science.gov (United States)

    2017-10-24

    Shipbuilding has what it takes to build the military ships that keep America and our allies safe. Leonardo DRS is a prime contractor, leading technology...innovator and supplier of integrated products, services and support to military forces, intelligence agencies and defense contractors worldwide. The...deployed with HMM-264, HMM-365 and HMM-162. He has served as a Basic and Advanced Flight Instructor at Helicopter Training Squadron (HT) 18, NAS

  4. A single-photon ecat reconstruction procedure based on a PSF model

    International Nuclear Information System (INIS)

    Ying-Lie, O.

    1984-01-01

    Emission Computed Axial Tomography (ECAT) has been applied in nuclear medicine for the past few years. Owing to attenuation and scatter along the ray path, adequate correction methods are required. In this thesis, a correction method for attenuation, detector response and Compton scatter is proposed. The method developed is based on a PSF model. The parameters of the models were derived by fitting experimental and simulation data. Because of its flexibility, a Monte Carlo simulation method was employed. Using the PSF models, it was found that the ECAT problem can be described by the added modified equation. Application of the reconstruction procedure to simulation data yields satisfactory results. The algorithm tends to amplify noise and distortion in the data, however; therefore, the applicability of the method to patient studies remains to be seen. (Auth.)

  5. New robust statistical procedures for the polytomous logistic regression models.

    Science.gov (United States)

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

    This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  6. Functionalization of mesoporous silica membrane with a Schiff base fluorophore for Cu(II) ion sensing

    Energy Technology Data Exchange (ETDEWEB)

    Chen Xiaotong [Department of Chemistry, Graduate School of Science, Tohoku University, Aoba-ku 980-8578, Sendai, Miyagi Prefecture (Japan); Department of Chemistry, Tsinghua University, Beijing 100084 (China); Yamaguchi, Akira [College of Science, Ibaraki University, Bunkyo 2-1-1, Mito, Ibaraki 310-8512 (Japan); Frontier Research Center for Applied Atomic Sciences, Ibaraki University, Tokai, Ibaraki 319-1106 (Japan); Namekawa, Manato [Department of Chemistry, Graduate School of Science, Tohoku University, Aoba-ku 980-8578, Sendai, Miyagi Prefecture (Japan); Kamijo, Toshio [Department of Chemistry, Graduate School of Science, Tohoku University, Aoba-ku 980-8578, Sendai, Miyagi Prefecture (Japan); Tsuruoka National College of Technology, Aza-Sawada, Tsuruoka 997-8511 (Japan); Teramae, Norio, E-mail: teramae@m.tohoku.ac.jp [Department of Chemistry, Graduate School of Science, Tohoku University, Aoba-ku 980-8578, Sendai, Miyagi Prefecture (Japan); Tong, Aijun, E-mail: tongaj@mail.tsinghua.edu.cn [Department of Chemistry, Tsinghua University, Beijing 100084 (China)

    2011-06-24

    Graphical abstract: Highlights: • A hybrid mesoporous membrane (SB-HMM) functionalized by Schiff base fluorophores was fabricated. • SB-HMM showed strong fluorescence with aggregation-induced emission enhancement properties. • SB-HMM was applicable for the detection of Cu(II) in an aqueous solution with good reversibility and reproducibility. - Abstract: A Schiff base (SB)-immobilized hybrid mesoporous silica membrane (SB-HMM) was prepared by immobilizing a Schiff base onto the pore surface of mesoporous silica (pore size = 3.1 nm) embedded in the pores of a porous anodic alumina membrane. In contrast to the non-fluorescent analogous SB molecule in homogeneous solutions, SB-HMM exhibited intense fluorescence due to emission enhancement caused by aggregation of SB groups on the pore surface. The high quantum efficiency of the surface SB groups allows SB-HMM to function as a fluorescent sensor for Cu(II) ions in an aqueous solution with good sensitivity, selectivity and reproducibility. Under the optimal conditions described, the linear ranges of fluorescence intensity for Cu(II) are 1.2-13.8 μM (R² = 0.993) and 19.4-60 μM (R² = 0.992). The limit of detection for Cu(II) is 0.8 μM on the basis of the IUPAC definition (C_LOD = 3.3 S_b/m).
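
    The quoted detection limit follows directly from the calibration slope and the blank noise via C_LOD = 3.3 S_b / m. A minimal sketch with invented numbers (not the paper's data):

```python
# Linear calibration and IUPAC-style detection limit, C_LOD = 3.3 * S_b / m,
# where S_b is the blank standard deviation and m the calibration slope.
# All numbers below are invented for illustration.
import statistics

conc = [2.0, 4.0, 6.0, 8.0, 10.0]          # Cu(II) concentration, uM
intensity = [5.1, 10.2, 14.8, 20.1, 24.9]  # fluorescence intensity, a.u.

n = len(conc)
mean_x, mean_y = sum(conc) / n, sum(intensity) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(conc, intensity)) \
        / sum((x - mean_x) ** 2 for x in conc)
blank_sd = statistics.stdev([0.30, 0.35, 0.28, 0.33, 0.31])  # blank replicates
c_lod = 3.3 * blank_sd / slope   # detection limit in the same units as conc
```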

  7. Automatic transcription of continuous speech into syllable-like units ...

    Indian Academy of Sciences (India)

    style HMM models are generated for each of the clusters during training. During testing .... manual segmentation at syllable-like units followed by isolated style recognition of continu- ous speech ..... obtaining demisyllabic reference patterns.

  8. ANALYTIC WORD RECOGNITION WITHOUT SEGMENTATION BASED ON MARKOV RANDOM FIELDS

    NARCIS (Netherlands)

    Coisy, C.; Belaid, A.

    2004-01-01

    In this paper, a method for analytic handwritten word recognition based on causal Markov random fields is described. The word models are HMMs in which each state corresponds to a letter; each letter is modelled by an NSHP-HMM (Markov field). Global models are built dynamically and used for recognition

  9. The Psychology Department Model Advisement Procedure: A Comprehensive, Systematic Approach to Career Development Advisement

    Science.gov (United States)

    Howell-Carter, Marya; Nieman-Gonder, Jennifer; Pellegrino, Jennifer; Catapano, Brittani; Hutzel, Kimberly

    2016-01-01

    The MAP (Model Advisement Procedure) is a comprehensive, systematic approach to developmental student advisement. The MAP was implemented to improve advisement consistency, improve student preparation for internships/senior projects, increase career exploration, reduce career uncertainty, and, ultimately, improve student satisfaction with the…

  10. Diagnosis of OCD Patients Using Drawing Features of Bender Gestalt Shapes

    Directory of Open Access Journals (Sweden)

    Boostani R.

    2017-03-01

    Full Text Available Background: Since psychological tests such as questionnaires or drawing tests are mostly qualitative, their results carry a degree of uncertainty and sometimes subjectivity. The deficiency of all drawing tests is that the assessment is carried out after drawing the objects, and much information such as pen angle, speed, curvature and pressure is missed through the test. In other words, the psychologists cannot assess their patients while running the tests. One of the well-known drawing tests to measure the degree of obsessive-compulsive disorder (OCD) is the Bender Gestalt, though its reliability is not promising. Objective: The main objective of this study is to make the Bender Gestalt test quantitative; therefore, an optical pen along with a digital tablet is utilized to preserve the key drawing features of OCD patients during the test. Materials and Methods: Among a large population of patients who referred to a special clinic for OCD, 50 under-therapy subjects voluntarily took part in this study. In contrast, 50 subjects with no sign of OCD performed the test as a control group. This test contains 9 shapes and the participants were not constrained to draw the shapes in a certain interval of time; consequently, to classify the stream of feature vectors (samples through drawing), a Hidden Markov Model (HMM) is employed and its flexibility is increased by incorporating the fuzzy technique into its learning scheme. Results: Applying the fuzzy HMM classifier to the data stream of subjects could classify the two groups with up to 95.2% accuracy, whereas applying the standard HMM resulted in 94.5%. In addition, a multi-layer perceptron (MLP), as a strong static classifier, was applied to the features and resulted in 86.6% accuracy. Conclusion: Applying paired t-tests to the results indicates a significant superiority of the fuzzy HMM over the standard HMM and MLP classifiers.

  11. Diagnosis of the OCD Patients using Drawing Features of the Bender Gestalt Shapes.

    Science.gov (United States)

    Boostani, R; Asadi, F; Mohammadi, N

    2017-03-01

    Since psychological tests such as questionnaires or drawing tests are mostly qualitative, their results carry a degree of uncertainty and sometimes subjectivity. The deficiency of all drawing tests is that the assessment is carried out after drawing the objects, and much information such as pen angle, speed, curvature and pressure is missed through the test. In other words, the psychologists cannot assess their patients while running the tests. One of the well-known drawing tests to measure the degree of obsessive-compulsive disorder (OCD) is the Bender Gestalt, though its reliability is not promising. The main objective of this study is to make the Bender Gestalt test quantitative; therefore, an optical pen along with a digital tablet is utilized to preserve the key drawing features of OCD patients during the test. Among a large population of patients who referred to a special clinic for OCD, 50 under-therapy subjects voluntarily took part in this study. In contrast, 50 subjects with no sign of OCD performed the test as a control group. This test contains 9 shapes and the participants were not constrained to draw the shapes in a certain interval of time; consequently, to classify the stream of feature vectors (samples through drawing), a Hidden Markov Model (HMM) is employed and its flexibility is increased by incorporating the fuzzy technique into its learning scheme. Applying the fuzzy HMM classifier to the data stream of subjects could classify the two groups with up to 95.2% accuracy, whereas applying the standard HMM resulted in 94.5%. In addition, a multi-layer perceptron (MLP), as a strong static classifier, was applied to the features and resulted in 86.6% accuracy. Applying paired t-tests to the results indicates a significant superiority of the fuzzy HMM over the standard HMM and MLP classifiers.
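
    The classification step described above amounts to scoring the drawing-feature stream under one HMM per group and choosing the more likely model. Below is a minimal sketch with invented models and a binary pen-speed alphabet ("L"/"H" for low/high); the fuzzy learning scheme itself is not reproduced:

```python
# Sketch of HMM-based two-class decision: score a feature stream under a
# model trained on each group and pick the higher likelihood. All models
# and symbols below are invented for illustration.
import math

def forward_loglik(seq, start, trans, emit):
    """Scaled forward algorithm for a discrete-output HMM."""
    states = list(start)
    alpha, loglik = {}, 0.0
    for t, x in enumerate(seq):
        if t == 0:
            alpha = {s: start[s] * emit[s][x] for s in states}
        else:
            alpha = {s: emit[s][x] * sum(alpha[p] * trans[p][s] for p in states)
                     for s in states}
        scale = sum(alpha.values())   # rescale to avoid numeric underflow
        loglik += math.log(scale)
        alpha = {s: a / scale for s, a in alpha.items()}
    return loglik

def classify(seq, models):
    """Return the label of the model giving the highest log-likelihood."""
    return max(models, key=lambda label: forward_loglik(seq, *models[label]))

persistent = {"s0": {"s0": 0.9, "s1": 0.1}, "s1": {"s0": 0.1, "s1": 0.9}}
models = {   # hypothetical group models: (start, transition, emission)
    "control": ({"s0": 0.5, "s1": 0.5}, persistent,
                {"s0": {"L": 0.9, "H": 0.1}, "s1": {"L": 0.1, "H": 0.9}}),
    "ocd":     ({"s0": 0.5, "s1": 0.5}, persistent,
                {"s0": {"L": 0.5, "H": 0.5}, "s1": {"L": 0.5, "H": 0.5}}),
}
assert classify("LLLLLLLL", models) == "control"   # steady stream
assert classify("HLHLHLHL", models) == "ocd"       # erratic stream
```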

  12. A Novel Approach to ECG Classification Based upon Two-Layered HMMs in Body Sensor Networks

    Science.gov (United States)

    Liang, Wei; Zhang, Yinlong; Tan, Jindong; Li, Yang

    2014-01-01

    This paper presents a novel approach to ECG signal filtering and classification. Unlike traditional techniques, which aim at collecting and processing the ECG signals with the patient lying still in a hospital bed, our proposed algorithm is intentionally designed for monitoring and classifying the patient's ECG signals in a free-living environment. The patients are equipped with wearable ambulatory devices throughout the day, which facilitates real-time heart-attack detection. In ECG preprocessing, an integral-coefficient-band-stop (ICBS) filter is applied, which omits time-consuming floating-point computations. In addition, two-layered Hidden Markov Models (HMMs) are applied to achieve ECG feature extraction and classification. The periodic ECG waveforms are segmented into ISO intervals, P subwave, QRS complex and T subwave, respectively, in the first HMM layer, where an expert-annotation-assisted Baum-Welch algorithm is utilized in HMM modeling. Then the corresponding interval features are selected and applied to categorize the ECG as normal or abnormal (PVC, APC) in the second HMM layer. For verifying the effectiveness of our algorithm on abnormal-signal detection, we have developed an ECG body sensor network (BSN) platform, whereby real-time ECG signals are collected, transmitted and displayed, and the corresponding classification outcomes are deduced and shown on the BSN screen. PMID:24681668

  13. Evaluating bacterial gene-finding HMM structures as probabilistic logic programs

    DEFF Research Database (Denmark)

    Mørk, Søren; Holmes, Ian

    2012-01-01

    , a probabilistic dialect of Prolog. Results: We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length...

  14. Developing Novel Machine Learning Algorithms to Improve Sedentary Assessment for Youth Health Enhancement.

    Science.gov (United States)

    Golla, Gowtham Kumar; Carlson, Jordan A; Huan, Jun; Kerr, Jacqueline; Mitchell, Tarrah; Borner, Kelsey

    2016-10-01

    Sedentary behavior of youth is an important determinant of health. However, better measures are needed to improve understanding of this relationship and the mechanisms at play, as well as to evaluate health promotion interventions. Wearable accelerometers are considered the standard for assessing physical activity in research, but do not perform well for assessing posture (i.e., sitting vs. standing), a critical component of sedentary behavior. The machine learning algorithms that we propose for assessing sedentary behavior will allow us to re-examine existing accelerometer data to better understand the association between sedentary time and health in various populations. We collected two datasets, a laboratory-controlled dataset and a free-living dataset. We trained machine learning classifiers separately on each dataset and compared performance across datasets. The classifiers predict five postures: sit, stand, sit-stand, stand-sit, and stand-walk. We compared a manually constructed Hidden Markov model (HMM) with an automated HMM from existing software. The manually constructed HMM gave a higher F1-Macro score on both datasets.

  15. Prediction of lipoprotein signal peptides in Gram-negative bacteria.

    Science.gov (United States)

    Juncker, Agnieszka S; Willenbrock, Hanni; Von Heijne, Gunnar; Brunak, Søren; Nielsen, Henrik; Krogh, Anders

    2003-08-01

    A method to predict lipoprotein signal peptides in Gram-negative Eubacteria, LipoP, has been developed. The hidden Markov model (HMM) was able to distinguish between lipoproteins (SPaseII-cleaved proteins), SPaseI-cleaved proteins, cytoplasmic proteins, and transmembrane proteins. This predictor was able to predict 96.8% of the lipoproteins correctly with only 0.3% false positives in a set of SPaseI-cleaved, cytoplasmic, and transmembrane proteins. The results obtained were significantly better than those of previously developed methods. Even though Gram-positive lipoprotein signal peptides differ from those of Gram-negative bacteria, the HMM was able to identify 92.9% of the lipoproteins included in a Gram-positive test set. A genome search was carried out for 12 Gram-negative genomes and one Gram-positive genome. The results for Escherichia coli K12 were compared with new experimental data, and the predictions by the HMM agree well with the experimentally verified lipoproteins. A neural network-based predictor was developed for comparison, and it gave very similar results. LipoP is available as a Web server at www.cbs.dtu.dk/services/LipoP/.

  16. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database, and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to produce a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
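
    The second-stage fusion can be as simple as a (possibly weighted) sum of per-feature log-likelihoods followed by an argmax over genres. A sketch with stubbed-in, invented scores:

```python
# Two-stage decision sketch: per-feature models emit soft scores
# (log-likelihoods) for each genre; the second stage fuses them into a hard
# decision. The per-feature models are stubbed out with fixed, invented
# log-likelihoods here.
def fuse(loglik_per_feature, weights=None):
    """Weighted sum of per-feature log-likelihoods, then argmax over genres."""
    genres = next(iter(loglik_per_feature.values())).keys()
    weights = weights or {f: 1.0 for f in loglik_per_feature}
    fused = {g: sum(weights[f] * ll[g] for f, ll in loglik_per_feature.items())
             for g in genres}
    return max(fused, key=fused.get), fused

# Hypothetical soft outputs of three feature models for one song:
scores = {
    "timbre":   {"rock": -10.2, "jazz": -12.9, "classical": -15.1},
    "rhythm":   {"rock": -11.5, "jazz": -10.8, "classical": -14.0},
    "temporal": {"rock":  -9.9, "jazz": -11.2, "classical": -13.3},
}
genre, fused = fuse(scores)
```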

  17. Realism of procedural task trainers in a pediatric emergency medicine procedures course.

    Science.gov (United States)

    Shefrin, Allan; Khazei, Afshin; Cheng, Adam

    2015-01-01

    Pediatric emergency medicine (PEM) physicians have minimal experience in life-saving procedures and have turned to task trainers to learn these skills. Realism of these models is an important consideration that has received little study. PEM physicians and trainees participated in a day-long procedural training course that utilized commercially available and homemade task trainers to teach pericardiocentesis, chest tube insertion, cricothyroidotomy and central line insertion. Participants rated the realism of the task trainers as part of a post-course survey. The homemade task trainers received variable realism ratings, with 91% of participants rating the pork rib chest tube model as realistic, 82% rating the gelatin pericardiocentesis mold as realistic and 36% rating the ventilator tubing cricothyroidotomy model as realistic. Commercial trainers also received variable ratings, with 45% rating the chest drain and pericardiocentesis simulator as realistic, 74% rating the cricothyroidotomy trainer as realistic and 80% rating the central line insertion trainer as realistic. Task training models utilized in our course received variable realism ratings. When deciding what type of task trainer to use, future courses should carefully consider the desired aspect of realism and how it aligns with the procedural skill, balanced with cost considerations.

  18. A fault diagnosis system for interdependent critical infrastructures based on HMMs

    International Nuclear Information System (INIS)

    Ntalampiras, Stavros; Soupionis, Yannis; Giannopoulos, Georgios

    2015-01-01

    Modern society depends on the smooth functioning of critical infrastructures which provide services of fundamental importance, e.g. telecommunications and water supply. These infrastructures may suffer from faults/malfunctions coming e.g. from aging effects or they may even comprise targets of terrorist attacks. Prompt detection and accommodation of these situations is of paramount significance. This paper proposes a probabilistic modeling scheme for analyzing malicious events appearing in interdependent critical infrastructures. The proposed scheme is based on modeling the relationship between datastreams coming from two network nodes by means of a hidden Markov model (HMM) trained on the parameters of linear time-invariant dynamic systems which estimate the relationships existing among the specific nodes over consecutive time windows. Our study includes an energy network (IEEE 30 model bus) operated via a telecommunications infrastructure. The relationships among the elements of the network of infrastructures are represented by an HMM and the novel data is categorized according to its distance (computed in the probabilistic space) from the training ones. We considered two types of cyber-attacks (denial of service and integrity/replay) and report encouraging results in terms of false positive rate, false negative rate and detection delay. - Highlights: • An HMM-based scheme is proposed for analyzing malicious events in critical infrastructures. • We use the IEEE 30 model bus operated via an emulated ICT infrastructure. • Novel data is categorized based on its probabilistic distance from the training one. • We considered two types of cyber-attacks and report results of extensive experiments
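
    A much-simplified version of the windowed-relationship idea can be sketched as follows: fit a linear gain between the two nodes' data streams per window, and flag windows whose fitted parameter falls far from those seen during attack-free training. The paper's HMM over LTI-system parameters is reduced here to a mean/std novelty threshold, and all numbers are invented:

```python
# Windowed-relationship anomaly detection sketch: per window, fit y = a*x
# between two nodes' data streams, then flag windows whose fitted gain lies
# far (> k standard deviations) from the gains seen in attack-free training.
import statistics

def window_coeff(xs, ys):
    """Least-squares gain of y = a * x over one window (no intercept)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fit_threshold(training_windows, k=3.0):
    coeffs = [window_coeff(xs, ys) for xs, ys in training_windows]
    return statistics.mean(coeffs), statistics.stdev(coeffs), k

def is_anomalous(xs, ys, model):
    mu, sd, k = model
    return abs(window_coeff(xs, ys) - mu) > k * sd

# Training: node B roughly follows node A with gain ~2 (invented data).
train = [([1, 2, 3, 4], [2.0, 4.1, 5.9, 8.2]),
         ([1, 2, 3, 4], [1.9, 4.0, 6.1, 7.9]),
         ([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])]
model = fit_threshold(train)
assert not is_anomalous([1, 2, 3, 4], [2.0, 4.0, 6.0, 8.0], model)  # normal
assert is_anomalous([1, 2, 3, 4], [0.0, 0.0, 0.0, 0.0], model)      # zeroed-out
```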

  19. Dark-field hyperlens for high-contrast sub-wavelength imaging

    DEFF Research Database (Denmark)

    Repän, Taavi; Zhukovsky, Sergei; Lavrinenko, Andrei

    2016-01-01

    By now superresolution imaging using hyperbolic metamaterial (HMM) structures – hyperlenses – has been demonstrated both theoretically and experimentally. The hyperlens operation relies on the fact that an HMM allows propagation of waves with very large transverse wavevectors, which would be evanescent in common isotropic media (thus giving rise to the diffraction limit). However, nearly all hyperlenses proposed so far have been suitable only for very strong scatterers – such as holes in a metal film. When weaker scatterers, dielectric objects for example, are imaged, incident light forms a very strong background, and weak scatterers are not visible due to poor contrast. We propose a so-called dark-field hyperlens, which would be suitable for imaging of weakly scattering objects. By designing the parameters of the HMM, we managed to obtain its response in such a way that the hyperlens...

  20. Clear-sky classification procedures and models using a world-wide data-base

    International Nuclear Information System (INIS)

    Younes, S.; Muneer, T.

    2007-01-01

    Clear-sky data need to be extracted from all-sky measured solar-irradiance datasets, often by using algorithms that rely on other measured meteorological parameters. Current procedures for clear-sky data extraction have been examined and compared with each other to determine their reliability and location dependency. New clear-sky determination algorithms are proposed that are based on a combination of clearness index, diffuse ratio, cloud cover and Linke turbidity limits. Various researchers have proposed clear-sky irradiance models that rely on synoptic parameters; four of these models (MRM, PRM, YRM and REST2) have been compared for six world-wide locations. Based on a previously developed comprehensive accuracy scoring method, the models MRM, REST2 and YRM were found to perform satisfactorily, in decreasing order. The so-called Page radiation model (PRM) was found to underestimate solar radiation, even though local turbidity data were provided for its operation.
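
    A combined clear-sky screen of the kind proposed can be sketched as a conjunction of the four criteria; the threshold values below are placeholders for illustration, not the limits derived in the paper:

```python
# Illustrative clear-sky screening combining clearness index, diffuse ratio,
# cloud cover and Linke turbidity. Threshold values are assumed placeholders.
def is_clear_sky(clearness_index, diffuse_ratio, cloud_cover_octas,
                 linke_turbidity):
    """Accept a record as clear-sky only if all four criteria pass."""
    return (clearness_index >= 0.65 and      # high global transmittance
            diffuse_ratio <= 0.30 and        # little scattered radiation
            cloud_cover_octas <= 1 and       # (near-)cloudless sky
            linke_turbidity <= 4.0)          # low aerosol/water-vapour load

assert is_clear_sky(0.72, 0.18, 0, 2.9)       # clear record accepted
assert not is_clear_sky(0.45, 0.55, 6, 5.2)   # overcast record rejected
```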

  1. Approximate Learning and Inference for Tracking with Non-overlapping Cameras

    NARCIS (Netherlands)

    Zajdel, W.; Kröse, B.; Hamza, M.H.

    2003-01-01

    Tracking with multiple cameras requires partitioning of observations from various sensors into trajectories. In this paper we assume that the observations are generated by a hidden, stochastic 'partition' process and propose a hidden Markov model (HMM) as a generative model for the data. The state

  2. A fast and systematic procedure to develop dynamic models of bioprocesses: application to microalgae cultures

    Directory of Open Access Journals (Sweden)

    J. Mailier

    2010-09-01

    Full Text Available The purpose of this paper is to report on the development of a procedure for inferring black-box, yet biologically interpretable, dynamic models of bioprocesses based on sets of measurements of a few external components (biomass, substrates, and products of interest). The procedure has three main steps: (a) the determination of the number of macroscopic biological reactions linking the measured components; (b) the estimation of a first reaction scheme, which has interesting mathematical properties but might lack a biological interpretation; and (c) the "projection" (or transformation) of this reaction scheme onto a biologically consistent scheme. The advantage of the method is that it allows the fast prototyping of models for the culture of microorganisms that are not well documented. The good performance of the third step of the method is demonstrated by application to an example of microalgal culture.
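
    Step (a) can be illustrated concretely: with nearly noise-free data, the number of macroscopic reactions equals the numerical rank of the matrix of measured-component variations, obtainable from its singular values. This is a synthetic example; the stoichiometric-like matrix K and the rank tolerance are invented:

```python
# Step (a) sketch: estimate the number of macroscopic reactions as the
# numerical rank (via SVD) of the measured-component data matrix.
# Synthetic data: 4 measured components generated by 2 reactions.
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[-1.0, 0.0],    # 4 components x 2 reactions (invented)
              [ 2.0, -1.0],
              [ 0.0,  1.5],
              [ 0.5,  0.5]])
extents = rng.random((2, 50))                             # reaction extents
data = K @ extents + 1e-9 * rng.standard_normal((4, 50))  # near noise-free

singular_values = np.linalg.svd(data, compute_uv=False)
n_reactions = int(np.sum(singular_values > 1e-6 * singular_values[0]))
assert n_reactions == 2   # the two generating reactions are recovered
```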

  3. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure-activity relationship (QSAR) method is an effective assessment tool for the placental transfer study of drugs, while in vitro human placental perfusion is the most widely used experimental method. In this study, a partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance index (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting CI values of 19 compounds. It was shown that the model developed is robust and has good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSEP = 0.14). The mechanistic interpretation of the final model was given by the high variable-importance-in-projection values of the descriptors. Using the PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.
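
    The validation statistics quoted (r2 and cross-validated q2) can be illustrated on a toy one-descriptor linear model; the actual study used PLS over 620 descriptors, and all numbers below are invented:

```python
# Sketch of QSAR internal-validation statistics: fitted r2 and leave-one-out
# cross-validated q2 for a toy one-descriptor linear model (invented data).
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def r2(xs, ys):
    """Coefficient of determination of the fitted line."""
    b, a = fit_line(xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return 1 - ss_res / sum((y - my) ** 2 for y in ys)

def q2_loo(xs, ys):
    """Leave-one-out cross-validated coefficient of determination."""
    my = sum(ys) / len(ys)
    press = 0.0
    for i in range(len(xs)):
        b, a = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        press += (ys[i] - (a + b * xs[i])) ** 2
    return 1 - press / sum((y - my) ** 2 for y in ys)

xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]        # hypothetical descriptor values
ys = [0.21, 0.35, 0.52, 0.63, 0.80, 0.95]  # hypothetical CI values
assert r2(xs, ys) > q2_loo(xs, ys)         # q2 is the more conservative figure
```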

  4. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    Science.gov (United States)

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

    Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. It typically involves selecting the equation order (linear or quadratic) and the weighting factor that correctly model the data. Mis-selecting the calibration model degrades quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for the selection and validation of calibration models. The success rate of this scheme is on average 40% higher than that of the traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x² was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramér-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
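    The three statistical steps just described (an F-test on ULOQ vs. LLOQ replicate variances, choice of 1/x vs. 1/x² by the spread of weighted normalized variances, and a partial F-test for model order) can be sketched on simulated data. The critical values and the simulated calibration set below are illustrative assumptions; this is not the authors' RStudio script.

```python
import numpy as np

rng = np.random.default_rng(1)
conc = np.array([1, 2, 5, 10, 50, 100, 250, 500], dtype=float)  # calibrators
reps = 5
# Simulated responses: truly linear, with noise proportional to concentration
resp = np.array([[10.0 * c * (1 + 0.05 * rng.normal()) for _ in range(reps)]
                 for c in conc])

# Step 1 -- F-test for heteroscedasticity: ULOQ vs. LLOQ replicate variances
F = resp[-1].var(ddof=1) / resp[0].var(ddof=1)
needs_weighting = F > 6.39           # approx. F(0.05; 4, 4) critical value

# Step 2 -- choose 1/x vs. 1/x^2: smallest spread of weighted variances
spreads = {}
for label, w in (("1/x", 1 / conc), ("1/x^2", 1 / conc ** 2)):
    wv = w * resp.var(axis=1, ddof=1)
    wv /= wv.sum()                   # normalize
    spreads[label] = wv.max() - wv.min()
best_weight = min(spreads, key=spreads.get)

# Step 3 -- partial F-test: does a quadratic term significantly improve fit?
x, y = np.repeat(conc, reps), resp.ravel()
rss = {deg: ((y - np.polyval(np.polyfit(x, y, deg), x)) ** 2).sum()
       for deg in (1, 2)}
F_partial = (rss[1] - rss[2]) / (rss[2] / (len(y) - 3))
print(needs_weighting, best_weight, F_partial > 4.11)  # 4.11 ~ F(0.05; 1, 37)
```

With noise proportional to concentration, the variance test flags heteroscedasticity and 1/x² produces the flattest weighted-variance profile, mirroring the decision logic of the paper.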

  5. “Drinking in the Dark” (DID) Procedures: A Model of Binge-Like Ethanol Drinking in Non-Dependent Mice

    Science.gov (United States)

    Thiele, Todd E.; Navarro, Montserrat

    2013-01-01

    This review provides an overview of an animal model of binge-like ethanol drinking that has come to be called “drinking in the dark” (DID), a procedure that promotes high levels of ethanol drinking and pharmacologically relevant blood ethanol concentrations (BECs) in ethanol-preferring strains of mice. Originally described by Rhodes et al. (2005), the most common variation of the DID procedure, using singly housed mice, involves replacing the water bottle with a bottle containing 20% ethanol for 2 to 4 hours, beginning 3 hours into the dark cycle. Using this procedure, high-ethanol-drinking strains of mice (e.g., C57BL/6J) typically consume enough ethanol to achieve BECs greater than 100 mg/dL and to exhibit behavioral evidence of intoxication. This limited-access procedure takes advantage of the time in the animal’s dark cycle in which the levels of ingestive behaviors are high, yet high ethanol intake does not appear to stem from caloric need. Mice have the choice of drinking or avoiding the ethanol solution, eliminating the stressful conditions that are inherent in other models of binge-like ethanol exposure, in which ethanol is administered by the experimenter and which are, in some cases, potentially painful. The DID procedure is a high-throughput approach that does not require extensive training or the inclusion of sweet compounds to motivate high levels of ethanol intake. The high-throughput nature of the DID procedure makes it useful for rapid screening of pharmacological targets that are protective against binge-like drinking and for identifying strains of mice that exhibit binge-like drinking behavior. Additionally, the simplicity of DID procedures allows for easy integration into other paradigms, such as prenatal ethanol exposure and adolescent ethanol drinking. It is suggested that the DID model is a useful tool for studying the neurobiology and genetics underlying binge-like ethanol drinking, and may be useful for studying the transition to ethanol

  6. Innovative procedure for computer-assisted genioplasty: three-dimensional cephalometry, rapid-prototyping model and surgical splint.

    Science.gov (United States)

    Olszewski, R; Tranduy, K; Reychler, H

    2010-07-01

    The authors present a new procedure for computer-assisted genioplasty. They determined the anterior, posterior and inferior limits of the chin in relation to the skull and face with the newly developed and validated three-dimensional cephalometric planar analysis (ACRO 3D). Virtual planning of the osteotomy lines was carried out with Mimics (Materialise) software. The authors built a three-dimensional rapid-prototyping multi-position model of the chin area from a medical low-dose CT scan. The transfer of virtual information to the operating room consisted of two elements. First, titanium plates were pre-bent on the 3D RP model. Second, a surgical guide was manufactured for transferring the osteotomy lines and screw positions to the operating room. The authors present the first case of the use of this model on a patient. The postoperative results are promising, and the technique is fast and easy to use. More patients are needed for a definitive clinical validation of this procedure. Copyright 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  7. Specific acoustic models for spontaneous and dictated style in indonesian speech recognition

    Science.gov (United States)

    Vista, C. B.; Satriawan, C. H.; Lestari, D. P.; Widyantoro, D. H.

    2018-03-01

    The performance of an automatic speech recognition system is affected by differences in speech style between the data the model is originally trained on and the incoming speech to be recognized. In this paper, the use of GMM-HMM acoustic models for specific speech styles is investigated. We develop two systems for the experiments; the first employs a speech style classifier to predict the speech style of incoming speech, either spontaneous or dictated, then decodes the speech using an acoustic model specifically trained for that style. The second system uses both acoustic models to recognize incoming speech and decides upon a final result by calculating a confidence score for each decoding. Results show that training specific acoustic models for spontaneous and dictated speech styles confers a slight recognition advantage compared with a baseline model trained on a mixture of spontaneous and dictated training data. In addition, the speech style classifier approach of the first system produced slightly more accurate results than the confidence scoring employed in the second system.

  8. Naive scoring of human sleep based on a hidden Markov model of the electroencephalogram.

    Science.gov (United States)

    Yaghouby, Farid; Modur, Pradeep; Sunderam, Sridhar

    2014-01-01

    Clinical sleep scoring involves tedious visual review of overnight polysomnograms by a human expert. Many attempts have been made to automate the process by training computer algorithms, such as support vector machines and hidden Markov models (HMMs), to replicate human scoring. Such supervised classifiers are typically trained on scored data and then validated on scored out-of-sample data. Here we describe a methodology based on HMMs for scoring an overnight sleep recording without the benefit of a trained initial model. The number of states in the data is not known a priori and is optimized using the Bayesian information criterion (BIC). When tested on a 22-subject database, this unsupervised classifier agreed well with human scores (mean Cohen's kappa > 0.7). The HMM also outperformed other unsupervised classifiers (Gaussian mixture models, k-means, and linkage trees) that are capable of naive classification but do not model dynamics, by a significant margin (p < 0.05).
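    The BIC-driven choice of model order can be illustrated with a deliberately simplified stand-in: a 1-D Gaussian mixture fitted by EM (ignoring the HMM's temporal dynamics), with BIC computed for several candidate state counts. All data and counts below are invented for illustration.

```python
import numpy as np

def fit_gmm_1d(x, k, n_iter=200, seed=0):
    """Plain EM for a 1-D Gaussian mixture; returns the final log-likelihood."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, x.var())
    weight = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities, kept in the log domain for stability
        log_p = (np.log(weight) - 0.5 * np.log(2 * np.pi * var)
                 - (x[:, None] - mu) ** 2 / (2 * var))
        log_norm = np.logaddexp.reduce(log_p, axis=1)
        r = np.exp(log_p - log_norm[:, None])
        # M-step: update weights, means and variances
        nk = r.sum(axis=0) + 1e-12
        weight = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return log_norm.sum()

def bic(loglik, n_params, n):
    return n_params * np.log(n) - 2.0 * loglik

# Two well-separated "regimes" standing in for sleep/wake EEG features
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-3, 1, 400), rng.normal(3, 1, 600)])
scores = {k: bic(fit_gmm_1d(x, k), 3 * k - 1, len(x)) for k in (1, 2, 3)}
best_k = min(scores, key=scores.get)
print(best_k)
```

The penalty term grows with the parameter count (3k − 1 here), so BIC prefers the smallest model that explains the data, which is the spirit of the state-count optimization described in the abstract.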

  9. Multimodal Speaker Diarization

    NARCIS (Netherlands)

    Noulas, A.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an

  10. A Proposed Model for Selecting Measurement Procedures for the Assessment and Treatment of Problem Behavior.

    Science.gov (United States)

    LeBlanc, Linda A; Raetz, Paige B; Sellers, Tyra P; Carr, James E

    2016-03-01

    Practicing behavior analysts frequently assess and treat problem behavior as part of their ongoing job responsibilities. Effective measurement of problem behavior is critical to success in these activities because some measures of problem behavior provide more accurate and complete information about the behavior than others. However, not every measurement procedure is appropriate for every problem behavior and therapeutic circumstance. We summarize the most commonly used measurement procedures, describe the contexts for which they are most appropriate, and propose a clinical decision-making model for selecting measurement procedures given certain features of the behavior and constraints of the therapeutic environment.

  11. Implementation of the Conceptual Understanding Procedures (CUPs) Cooperative Learning Model to Improve Student Learning Outcomes

    OpenAIRE

    Qadariyah, Laylatul; Zainuddin, Zainuddin; Hartini, Sri

    2015-01-01

    One cause of students' low learning outcomes is a lack of variation in teaching models and methods. Innovative, engaging instruction can improve learning outcomes. This research was conducted to describe the effectiveness of implementing the Conceptual Understanding Procedures (CUPs) cooperative learning model in improving student learning outcomes on the subject of light reflection. Specifically, this study aimed to describe: (1) the enforceability of the LPA, (2) social skills, (3) learning outcomes, ...

  12. Application of Viterbi’s Algorithm for Predicting Rainfall Occurrence and Simulating Wet/Dry Spells – Comparison with Common Methods

    Directory of Open Access Journals (Sweden)

    M. Ghamghami

    2015-06-01

    Full Text Available Today, there are various statistical models for the discrete simulation of rainfall occurrence/non-occurrence, with more emphasis on long-term climatic statistics. Nevertheless, the accuracy of such models or predictions should be improved at short timescales. In the present paper, it is assumed that the rainfall occurrence/non-occurrence sequences follow a two-layer Hidden Markov Model (HMM) consisting of a hidden layer (the discrete time series of rainfall occurrence and non-occurrence) and an observable layer (weather variables), considered as a case study at Khoramabad station during the period 1961-2005. The Viterbi decoding algorithm has been used for simulation of wet/dry sequences. The performance of five candidate observable variables, including air pressure, vapor pressure, diurnal air temperature, relative humidity and dew point temperature, was evaluated using several error measures in order to choose the best one. Results showed that diurnal air temperature is the best observable variable for decoding the wet/dry sequences, reflecting the strong physical relationship between these variables. The Viterbi output was also compared with the ClimGen and LARS-WG weather generators in terms of two accuracy measures: similarity of climatic statistics and forecasting skill. Finally, it is concluded that the HMM has more skill than the other two weather generators in simulating wet and dry spells. Therefore, we recommend the use of the HMM instead of the two other approaches for generating wet and dry sequences.
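    The Viterbi decoding at the heart of the method can be sketched in numpy: given HMM parameters and an observation sequence, it recovers the most likely hidden wet/dry path. The toy parameters below (a binarized temperature observable) are invented for illustration, not the fitted Khoramabad model.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path for a discrete HMM (log domain).
    obs: observation indices; pi: initial probs; A: transitions; B: emissions."""
    n_states, T = A.shape[0], len(obs)
    delta = np.log(pi) + np.log(B[:, obs[0]])
    psi = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + np.log(A)   # scores indexed (from, to)
        psi[t] = trans.argmax(axis=0)        # best predecessor per state
        delta = trans.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # backtrack through psi
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Toy setup: hidden states 0=dry, 1=wet; observations 0=warm, 1=cool days
pi = np.array([0.6, 0.4])
A  = np.array([[0.8, 0.2],       # dry tends to stay dry
               [0.3, 0.7]])      # wet tends to stay wet
B  = np.array([[0.9, 0.1],       # dry days are usually warm
               [0.2, 0.8]])      # wet days are usually cool
obs = [0, 0, 1, 1, 1, 0]
print(viterbi(obs, pi, A, B))    # → [0, 0, 1, 1, 1, 0]
```

Working in the log domain avoids the numerical underflow that plagues long sequences when probabilities are multiplied directly.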

  13. A Survey on Hidden Markov Model (HMM) Based Intention Prediction Techniques

    OpenAIRE

    Mrs. Manisha Bharati; Dr. Santosh Lomte

    2016-01-01

    The extensive use of virtualization in implementing cloud infrastructure brings unrivaled security concerns for cloud tenants or customers and introduces an additional layer that must itself be completely configured and secured. Intruders can exploit the large amount of cloud resources for their attacks. This paper discusses two approaches. In the first, three features, namely ongoing attacks, autonomic prevention actions, and a risk measure, are integrated into our Autonomic Cloud Intrus...

  14. Graphical models for inferring single molecule dynamics

    Directory of Open Access Journals (Sweden)

    Gonzalez Ruben L

    2010-10-01

    Full Text Available Abstract Background The recent explosion of experimental techniques in single molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. This article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational Bayesian expectation maximization algorithm (VBEM). The discussion is illustrated by the example of single-molecule fluorescence resonance energy transfer (smFRET) versus time data, where the smFRET time series is modeled as a hidden Markov model (HMM) with Gaussian observables. A detailed description of smFRET is provided as well. Results The VBEM algorithm returns the model’s evidence and an approximating posterior parameter distribution given the data. The former provides a metric for model selection via maximum evidence (ME), and the latter a description of the model’s parameters learned from the data. ME/VBEM provide several advantages over the more commonly used approach of maximum likelihood (ML) optimized by the expectation maximization (EM) algorithm, the most important being a natural form of model selection and a well-posed (non-divergent) optimization problem. Conclusions The results demonstrate the utility of graphical modeling for inference of dynamic processes in single molecule biophysics.

  15. Revisiting the destination ranking procedure in development of an Intervening Opportunities Model for public transit trip distribution

    Science.gov (United States)

    Nazem, Mohsen; Trépanier, Martin; Morency, Catherine

    2015-01-01

    An Enhanced Intervening Opportunities Model (EIOM) is developed for Public Transit (PT). This is a supply-dependent distribution model with a single constraint on trip production, applied to work trips during morning peak hours (6:00 a.m.-9:00 a.m.) within the Island of Montreal, Canada. Different data sets are used, including the 2008 Origin-Destination (OD) survey of the Greater Montreal Area, the 2006 Census of Canada, GTFS network data, and the geographical data of the study area. EIOM is a nonlinear model composed of socio-demographics, PT supply data and work location attributes. An enhanced destination ranking procedure is used to calculate the number of spatially cumulative opportunities, the basic variable of EIOM. For comparison, a Basic Intervening Opportunities Model (BIOM) is developed using the basic destination ranking procedure. The main difference between EIOM and BIOM lies in the destination ranking procedure: EIOM maximizes a utility function composed of the PT Level Of Service and the number of opportunities at the destination, along with the OD trip duration, whereas BIOM ranks destinations by OD trip duration alone. Analysis confirmed that EIOM is more accurate than BIOM. This study presents a new tool for PT analysts, planners and policy makers to study potential changes in PT trip patterns due to changes in socio-demographic characteristics, PT supply, and other factors. It also opens new opportunities for the development of more accurate PT demand models with new emergent data such as smart card validations.

  16. The amino-terminal domain of human signal transducers and ...

    Indian Academy of Sciences (India)

    Unknown

    transferase (GST) moiety was cloned into the expression vector pGEX-2T ... containing 100 µg/ml of ampicillin to mid log phase as indicated by the .... equipped with pulsed field gradients. ... ferent algorithms like hidden Markov model (HMM).

  17. Are minimally invasive procedures harder to acquire than conventional surgical procedures?

    Science.gov (United States)

    Hiemstra, Ellen; Kolkman, Wendela; le Cessie, Saskia; Jansen, Frank Willem

    2011-01-01

    It is frequently suggested that minimally invasive surgery (MIS) is harder to acquire than conventional surgery. To test this hypothesis, residents' learning curves for both sets of surgical skills were compared. Residents were assessed using the general global rating scale of the OSATS (Objective Structured Assessment of Technical Skills) for every procedure they performed as primary surgeon during a 3-month clinical rotation in gynecological surgery. Nine postgraduate-year-4 residents collected a total of 319 OSATS during the 2-year-and-3-month investigation period. These assessments covered 129 MIS (laparoscopic and hysteroscopic) and 190 conventional (open abdominal and vaginal) procedures. Learning curves (defined here as OSATS score plotted against procedure-specific caseload) for MIS and conventional surgery were compared using a linear mixed model. The MIS curve proved steeper than the conventional curve (1.77 vs. 0.75 OSATS points per assessed procedure; 95% CI 1.19-2.35 vs. 0.15-1.35, p < 0.01). Basic MIS procedures therefore do not seem harder to acquire during residency than conventional surgical procedures. This may have resulted from the incorporation of structured MIS training programs in residency. Hopefully, this will lead to more successful implementation of advanced MIS procedures. Copyright © 2010 S. Karger AG, Basel.

  18. Hand Gesture Modeling and Recognition for Human and Robot Interactive Assembly Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Fei Chen

    2015-04-01

    Full Text Available Gesture recognition is essential for human and robot collaboration. Within an industrial hybrid assembly cell, the performance of such a system significantly affects the safety of human workers. This work presents an approach to recognizing hand gestures accurately during an assembly task carried out in collaboration with a robot co-worker. We have designed and developed a sensor system for measuring natural human-robot interactions. The position and rotation information of a human worker's hands and fingertips are tracked in 3D space while completing a task. A modified chain-code method is proposed to describe the motion trajectory of the measured hands and fingertips. The Hidden Markov Model (HMM) method is adopted to recognize patterns via data streams and identify workers' gesture patterns and assembly intentions. The effectiveness of the proposed system is verified by experimental results. The outcome demonstrates that the proposed system is able to automatically segment the data streams and recognize the gesture patterns with reasonable accuracy.
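    The chain-code idea, quantizing successive displacement directions into a small symbol alphabet that an HMM can consume, can be sketched as follows. This is a plain 8-direction 2-D chain code on made-up points, not the paper's modified variant or its 3-D fingertip data.

```python
import math

def chain_code(points, n_directions=8):
    """Encode a 2-D trajectory as a sequence of quantized direction codes
    (0 = east, increasing counter-clockwise)."""
    codes = []
    step = 2 * math.pi / n_directions
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        # Round to the nearest of the n_directions sector centres
        codes.append(int((angle + step / 2) // step) % n_directions)
    return codes

# A square traced counter-clockwise: right, up, left, down
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))   # → [0, 2, 4, 6]
```

The resulting symbol sequence is exactly the kind of discrete observation stream a discrete-emission HMM can be trained on.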

  19. Plasmonic Lithography Utilizing Epsilon Near Zero Hyperbolic Metamaterial.

    Science.gov (United States)

    Chen, Xi; Zhang, Cheng; Yang, Fan; Liang, Gaofeng; Li, Qiaochu; Guo, L Jay

    2017-10-24

    In this work, a special hyperbolic metamaterial (HMM) is investigated for plasmonic lithography of period-reduction patterns. It is a type II HMM (ϵ∥ < 0, ϵ⊥ > 0) whose tangential permittivity component ϵ∥ is close to zero. Due to the high anisotropy of the type II epsilon-near-zero (ENZ) HMM, only one plasmonic mode can propagate horizontally with low loss in a waveguide system with the ENZ HMM as its core. This work takes advantage of a type II ENZ HMM composed of aluminum/aluminum oxide films, and of the associated unusual mode, to expose a photoresist layer in a specially designed lithography system. Periodic patterns with a half pitch of 58.3 nm were achieved owing to the interference of the third-order diffracted light of the grating. The line period was 1/6 that of the 700 nm-period mask and ∼1/7 of the wavelength of the incident light. Moreover, the theoretical analyses performed are widely applicable to structures made of different materials, such as silver, as well as to systems working at deep-ultraviolet wavelengths, including 193, 248, and 365 nm.

  20. A Novel Approach to ECG Classification Based upon Two-Layered HMMs in Body Sensor Networks

    Directory of Open Access Journals (Sweden)

    Wei Liang

    2014-03-01

    Full Text Available This paper presents a novel approach to ECG signal filtering and classification. Unlike traditional techniques, which aim at collecting and processing ECG signals with the patient lying still in a hospital bed, the proposed algorithm is intentionally designed for monitoring and classifying a patient’s ECG signals in the free-living environment. The patients are equipped with wearable ambulatory devices throughout the day, which facilitates real-time heart attack detection. In ECG preprocessing, an integral-coefficient-band-stop (ICBS) filter is applied, which omits time-consuming floating-point computations. In addition, two-layered Hidden Markov Models (HMMs) are applied to achieve ECG feature extraction and classification. The periodic ECG waveforms are segmented into ISO intervals, P subwave, QRS complex and T subwave, respectively, in the first HMM layer, where an expert-annotation-assisted Baum-Welch algorithm is utilized in HMM modeling. The corresponding interval features are then selected and applied to categorize the ECG as normal or abnormal (PVC, APC) in the second HMM layer. To verify the effectiveness of the algorithm for abnormal signal detection, we have developed an ECG body sensor network (BSN) platform, whereby real-time ECG signals are collected, transmitted and displayed, and the corresponding classification outcomes are deduced and shown on the BSN screen.
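    The second-layer classification step, scoring a feature sequence against per-class HMMs and picking the class with the higher likelihood, can be sketched with the forward algorithm. The two toy models and the quantized feature alphabet below are invented for illustration, not trained ECG models.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step scaling to avoid underflow)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Two hypothetical per-class HMMs over quantized beat features (0/1/2).
# These toy parameters are illustrative only, not trained ECG models.
pi = np.array([0.5, 0.5])
A_normal = np.array([[0.9, 0.1], [0.1, 0.9]])
B_normal = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]])
A_abnorm = np.array([[0.5, 0.5], [0.5, 0.5]])
B_abnorm = np.array([[0.1, 0.1, 0.8], [0.1, 0.8, 0.1]])

seq = [0, 0, 0, 1, 1, 0, 0]   # resembles the "normal" emission pattern
lls = {"normal": forward_loglik(seq, pi, A_normal, B_normal),
       "abnormal": forward_loglik(seq, pi, A_abnorm, B_abnorm)}
print(max(lls, key=lls.get))
```

In a real system each class model would first be trained (e.g., with Baum-Welch, as in the first HMM layer) before being used for likelihood scoring.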

  1. AUTOMATIC SPEECH RECOGNITION SYSTEM CONCERNING THE MOROCCAN DIALECT (Darija and Tamazight)

    OpenAIRE

    A. EL GHAZI; C. DAOUI; N. IDRISSI

    2012-01-01

    In this work we present an automatic speech recognition system for Moroccan dialects, mainly Darija (an Arabic dialect) and Tamazight. Many approaches have been used to model the Arabic and Tamazight phonetic units. In this paper, we propose to use the hidden Markov model (HMM) for modeling these phonetic units. Experimental results show that the proposed approach improves recognition performance.

  2. Intake Procedures in College Counseling Centers.

    Science.gov (United States)

    Pappas, James P.; And Others

    Intake procedures are the common subject of four papers presented in this booklet. James P. Pappas discusses trends, a decision-theory model, information and issues in his article "Intake Procedures in Counseling Centers--Trends and Theory." In the second article, "The Utilization of Standardized Tests in Intake Procedures or 'Where's the Post…

  3. Analytical procedures. Pt. 1

    International Nuclear Information System (INIS)

    Weber, G.

    1985-01-01

    In analytical (Boolean) procedures there is a close relationship between the safety assessment and the reliability assessment of technical facilities. The paper gives an overview of the organization of models, fault trees, the probabilistic evaluation of systems, evaluation with minimal cut sets or minimal path sets with regard to statistically dependent components, and systems subject to different kinds of outages. (orig.) [de]

  4. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    Science.gov (United States)

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
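    The simulation-based approximation at the core of such algorithms, drawing complete latent paths and keeping only those consistent with the panel observations, can be illustrated with rejection sampling. For brevity, the sketch below substitutes a plain discrete-time Markov chain for the paper's semi-Markov process, and the three-state toy chain (with an absorbing "death" state) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
A = np.array([[0.7, 0.2, 0.1],    # toy discrete-time transition matrix
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])   # state 2 ("death") is absorbing

def sample_conditional_paths(A, start, obs_times, obs_states, n_keep=200):
    """Rejection sampling: simulate full latent paths and keep only those
    agreeing with the states observed at the panel visit times."""
    horizon = obs_times[-1] + 1
    kept = []
    while len(kept) < n_keep:
        path, s = [start], start
        for _ in range(1, horizon):
            s = rng.choice(len(A), p=A[s])
            path.append(s)
        if all(path[t] == st for t, st in zip(obs_times, obs_states)):
            kept.append(path)
    return kept

# Subject seen in state 0 at t=0 and state 1 at t=4: what happened in between?
paths = sample_conditional_paths(A, start=0, obs_times=[0, 4], obs_states=[0, 1])
# Fraction of accepted paths that had already left state 0 by t=3
frac = np.mean([p[3] != 0 for p in paths])
print(round(float(frac), 2))
```

The accepted paths approximate the conditional distribution of the latent trajectory given the panel data, which is the quantity the E-step of a stochastic EM algorithm needs.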

  5. A Hidden Markov Model Representing the Spatial and Temporal Correlation of Multiple Wind Farms

    DEFF Research Database (Denmark)

    Fang, Jiakun; Su, Chi; Hu, Weihao

    2015-01-01

    To accommodate the increasing penetration of wind energy, with its stochastic nature, is a major issue for power system reliability. This paper proposes a methodology to characterize the spatiotemporal correlation of multiple wind farms. First, a hierarchical clustering method based on self-organizing maps is adopted to categorize the similar output patterns of several wind farms into joint states. Then a hidden Markov model (HMM) is designed to describe the temporal correlations among these joint states. Unlike the conventional Markov chain model, the accumulated wind power is taken into consideration. The proposed statistical modeling framework is compatible with sequential power system reliability analysis. A case study on optimal sizing and location of fast-response regulation sources is presented.

  6. Sensitivity of Hydrologic Response to Climate Model Debiasing Procedures

    Science.gov (United States)

    Channell, K.; Gronewold, A.; Rood, R. B.; Xiao, C.; Lofgren, B. M.; Hunter, T.

    2017-12-01

    Climate change is already having a profound impact on the global hydrologic cycle. In the Laurentian Great Lakes, changes in long-term evaporation and precipitation can lead to rapid water level fluctuations in the lakes, as evidenced by the unprecedented changes in water levels seen in the last two decades. These fluctuations often have an adverse impact on the region's human, environmental, and economic well-being, making accurate long-term water level projections invaluable to regional water resources management planning. Here we use hydrological components from a downscaled climate model (GFDL-CM3/WRF) to obtain future water supplies for the Great Lakes. We then apply a suite of bias correction procedures before propagating these water supplies through a routing model to produce lake water levels. Results using conventional bias correction methods suggest that water levels will decline by several feet in the coming century. However, methods that reflect the seasonal water cycle and explicitly debias individual hydrological components (overlake precipitation, overlake evaporation, runoff) imply that future water levels may be closer to their historical average. This discrepancy between debiased results indicates that water level forecasts are highly influenced by the bias correction method, a source of sensitivity that is commonly overlooked. Debiasing, however, does not remedy misrepresentation of the underlying physical processes in the climate model that produce these biases and contribute uncertainty to the hydrological projections. This uncertainty, coupled with the differences in water level forecasts from varying bias correction methods, is important for water management and long-term planning in the Great Lakes region.
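    One of the conventional debiasing methods alluded to above is empirical quantile mapping, sketched below on synthetic data. The gamma "precipitation" series and the simple linear model bias are invented for illustration; the seasonal, component-wise scheme and the GFDL-CM3/WRF output are not reproduced.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile-mapping bias correction: map each future model
    value through the observed distribution at its historical quantile."""
    q = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_hist, q)

rng = np.random.default_rng(3)
obs_hist   = rng.gamma(4.0, 2.0, 5000)                 # "observed" series
model_hist = obs_hist * 1.3 + 1.0                      # model is biased high
model_fut  = rng.gamma(4.5, 2.0, 5000) * 1.3 + 1.0     # biased future run

corrected = quantile_map(model_hist, obs_hist, model_fut)
print(round(float(corrected.mean()), 1))
```

Because the toy bias is monotone, mapping through historical quantiles essentially inverts it, pulling the corrected future mean back toward the unbiased value (about 9 here) while preserving the projected change signal.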

  7. No evidence for the use of DIR, D-D fusions, chromosome 15 open reading frames or VH replacement in the peripheral repertoire was found on application of an improved algorithm, JointML, to 6329 human immunoglobulin H rearrangements

    DEFF Research Database (Denmark)

    Ohm-Laursen, Line; Nielsen, Morten; Larsen, Stine R

    2006-01-01

    gene (VH) replacement. Safe conclusions require large, well-defined sequence samples and algorithms minimizing stochastic assignment of segments. Two computer programs were developed for analysis of heavy chain joints. JointHMM is a profile hidden Markov model, while JointML is a maximum

  8. Prediction of lipoprotein signal peptides in Gram-negative bacteria

    DEFF Research Database (Denmark)

    Juncker, Agnieszka; Willenbrock, Hanni; Von Heijne, G.

    2003-01-01

    A method to predict lipoprotein signal peptides in Gram-negative Eubacteria, LipoP, has been developed. The hidden Markov model (HMM) was able to distinguish between lipoproteins (SPaseII-cleaved proteins), SPaseI-cleaved proteins, cytoplasmic proteins, and transmembrane proteins. This predictor ...

  9. User Acceptance of YouTube for Procedural Learning: An Extension of the Technology Acceptance Model

    Science.gov (United States)

    Lee, Doo Young; Lehto, Mark R.

    2013-01-01

    The present study was framed using the Technology Acceptance Model (TAM) to identify determinants affecting behavioral intention to use YouTube. Most importantly, this research emphasizes the motives for using YouTube, which is notable given the extrinsic goal of using it for procedural learning tasks. Our conceptual framework included two…

  10. Modelling and analysis of transient state during improved coupling procedure with the grid for DFIG based wind turbine generator

    Science.gov (United States)

    Kammoun, Soulaymen; Sallem, Souhir; Ben Ali Kammoun, Mohamed

    2017-11-01

    The aim of this study is to enhance the dynamics of DFIG-based Wind Energy Conversion Systems (WECS) during grid coupling. In this paper, a system model and a starting/coupling procedure for connecting this generator to the grid are proposed. The proposed non-linear system is a variable structure system (VSS) with two different states, before and after coupling, so two different state models are used to analyse transient stability during the coupling. The model represents the transient state of the machine well, and a behaviour assessment of the generator before, during and after connection is given based on simulation results. For this, a 300 kW DFIG-based wind generation system was simulated in the Matlab/Simulink environment. The proposed procedure proves practical and smooth, and improves stability.

  11. Automated procedure execution for space vehicle autonomous control

    Science.gov (United States)

    Broten, Thomas A.; Brown, David A.

    1990-01-01

    Increased operational autonomy and reduced operating costs have become critical design objectives in next-generation NASA and DoD space programs. The objective is to develop a semi-automated system for intelligent spacecraft operations support. The Spacecraft Operations and Anomaly Resolution System (SOARS) is presented as a standardized, model-based architecture for performing High-Level Tasking, Status Monitoring and automated Procedure Execution Control for a variety of spacecraft. The particular focus is on the Procedure Execution Control module. A hierarchical procedure network is proposed as the fundamental means for specifying and representing arbitrary operational procedures. A separate procedure interpreter controls automatic execution of the procedure, taking into account the current status of the spacecraft as maintained in an object-oriented spacecraft model.

  12. Developing Physiologic Models for Emergency Medical Procedures Under Microgravity

    Science.gov (United States)

    Parker, Nigel; O'Quinn, Veronica

    2012-01-01

    Several technological enhancements have been made to METI's commercial Emergency Care Simulator (ECS) with regard to how microgravity affects human physiology. The ECS uses both a software-only lung simulation and an integrated mannequin lung that uses a physical lung bag for creating chest excursions, together with a digital simulation of lung mechanics and gas exchange. METI's patient simulators incorporate models of human physiology that simulate lung and chest wall mechanics, as well as pulmonary gas exchange. Microgravity affects how O2 and CO2 are exchanged in the lungs. Procedures were also developed to take into account the Glasgow Coma Scale for determining levels of consciousness, by varying the ECS eye-blinking function to partially indicate the patient's level of consciousness. In addition, the ECS was modified to provide various levels of pulses, from weak and thready to hyper-dynamic, to assist in assessing patient conditions at the femoral, carotid, brachial, and pedal pulse locations.

  13. HMM Adaptation for Improving a Human Activity Recognition System

    Directory of Open Access Journals (Sweden)

    Rubén San-Segundo

    2016-09-01

    Full Text Available When developing a fully automatic system for evaluating motor activities performed by a person, it is necessary to segment and recognize the different activities in order to focus the analysis. This process must be carried out by a Human Activity Recognition (HAR) system. This paper proposes a user adaptation technique for improving a HAR system based on Hidden Markov Models (HMMs). This system segments and recognizes six different physical activities (walking, walking upstairs, walking downstairs, sitting, standing and lying down) using inertial signals from a smartphone. The system is composed of a feature extractor for obtaining the most relevant characteristics from the inertial signals, a module for training the six HMMs (one per activity), and a final module for segmenting new activity sequences using these models. The user adaptation technique consists of a Maximum A Posteriori (MAP) approach that adapts the activity HMMs to the user, using some activity examples from that specific user. Experiments on a public dataset showed a significant relative error rate reduction of more than 30%. In conclusion, adapting a HAR system to the user who is performing the physical activities provides a significant improvement in the system's performance.
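
    The MAP adaptation step described in this abstract can be sketched for the Gaussian emission means of an activity HMM. This is a minimal illustration, not the authors' implementation: the interpolation weight `tau`, the toy data and the hard state assignments are all invented for demonstration.

    ```python
    import numpy as np

    def map_adapt_means(prior_means, user_frames, state_resp, tau=10.0):
        """MAP-adapt per-state Gaussian emission means of an HMM.

        prior_means : (S, D) means trained on the general population
        user_frames : (T, D) feature frames from the target user
        state_resp  : (T, S) state responsibilities (e.g. from forward-backward)
        tau         : prior weight; larger values trust the prior more
        """
        # Soft counts and weighted sums of user data per state
        n_s = state_resp.sum(axis=0)            # (S,)
        x_s = state_resp.T @ user_frames        # (S, D)
        # Classic MAP interpolation between prior mean and user sample mean
        return (tau * prior_means + x_s) / (tau + n_s)[:, None]

    # Toy example: 2 states, 1-D features, hard state assignments
    prior = np.array([[0.0], [5.0]])
    frames = np.array([[1.0], [1.2], [4.0]])
    resp = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    adapted = map_adapt_means(prior, frames, resp, tau=2.0)
    ```

    With little user data the adapted means stay close to the population prior; as per-state counts grow, they move toward the user's own sample means.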

  14. Detecting critical state before phase transition of complex biological systems by hidden Markov model.

    Science.gov (United States)

    Chen, Pei; Liu, Rui; Li, Yongjun; Chen, Luonan

    2016-07-15

    Identifying the critical state or pre-transition state just before the occurrence of a phase transition is a challenging task, because the state of the system may show little apparent change before this critical transition during gradual parameter variations. Such phase-transition dynamics are generally composed of three stages, i.e. the before-transition state, the pre-transition state and the after-transition state, which can be considered as three different Markov processes. By exploring the rich dynamical information provided by high-throughput data, we present a novel computational method, i.e. a hidden Markov model (HMM) based approach, to detect the switching point between two Markov processes, from the before-transition state (a stationary Markov process) to the pre-transition state (a time-varying Markov process), thereby identifying the pre-transition state or early-warning signals of the phase transition. To validate the effectiveness, we apply this method to detect the signals of imminent phase transitions of complex systems based on simulated datasets, and further identify the pre-transition states as well as their critical modules for three real datasets, i.e. acute lung injury triggered by phosgene inhalation, MCF-7 human breast cancer caused by heregulin, and HCV-induced dysplasia and hepatocellular carcinoma. Both functional and pathway enrichment analyses validate the computational results. The source code and some supporting files are available at https://github.com/rabbitpei/HMM_based-method. Contact: lnchen@sibs.ac.cn or liyj@scut.edu.cn. Supplementary data are available at Bioinformatics online.

  15. Procedure for identifying models for the heat dynamics of buildings

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik

    This report describes a new method for obtaining detailed information about the heat dynamics of a building using frequent readings of the heat consumption. Such a procedure is considered to be of utmost importance as a key procedure for using readings from smart meters, which is expected...

  16. Comparison and extension of a direct model reference adaptive control procedure

    Science.gov (United States)

    Neat, Gregory W.; Kaufman, Howard; Steinvorth, Rodrigo

    1992-01-01

    This paper analyzes and extends an easily implemented direct model reference adaptive control procedure. The paper focuses on the major limitation of this control approach, which is the satisfaction of a strictly positive real sufficiency condition in order to guarantee asymptotic tracking. Attempts to date to address this problem have been unable to simultaneously relax the stringent condition and maintain asymptotic tracking capabilities. Three different modifications to existing versions of this algorithm are presented which substantially relax the stringent sufficiency condition while providing asymptotic tracking. These three modifications achieve this goal by imposing slight adjustments to existing sufficiency conditions. A simulation example demonstrates that the modifications eliminate the steady-state error inherent in the existing methods.

  17. New Inference Procedures for Semiparametric Varying-Coefficient Partially Linear Cox Models

    Directory of Open Access Journals (Sweden)

    Yunbei Ma

    2014-01-01

    Full Text Available In biomedical research, one major objective is to identify risk factors and study their risk impacts, as this identification can help clinicians both make proper decisions and increase the efficiency of treatments and resource allocation. A two-step penalization-based procedure is proposed to select linear regression coefficients for linear components and to identify significant nonparametric varying-coefficient functions for semiparametric varying-coefficient partially linear Cox models. It is shown that the resulting penalized estimators of the linear regression coefficients are asymptotically normal and have oracle properties, and the resulting estimators of the varying-coefficient functions have optimal convergence rates. A simulation study and an empirical example are presented for illustration.

  18. Insights into the evolution of enzyme substrate promiscuity after the discovery of (βα)₈ isomerase evolutionary intermediates from a diverse metagenome.

    Science.gov (United States)

    Noda-García, Lianet; Juárez-Vázquez, Ana L; Ávila-Arcos, María C; Verduzco-Castro, Ernesto A; Montero-Morán, Gabriela; Gaytán, Paul; Carrillo-Tripp, Mauricio; Barona-Gómez, Francisco

    2015-06-10

    Current sequence-based approaches to identify enzyme functional shifts, such as enzyme promiscuity, have proven to be highly dependent on a priori functional knowledge, hampering our ability to reconstruct the evolutionary history behind these mechanisms. Hidden Markov Model (HMM) profiles, broadly used to classify enzyme families, can be useful to distinguish between closely related enzyme families with different specificities. The (βα)8-isomerase HisA/PriA enzyme family, involved in L-histidine (HisA, mono-substrate) biosynthesis in most bacteria and plants, but also in L-tryptophan (HisA/TrpF or PriA, dual-substrate) biosynthesis in most Actinobacteria, has been used as a model system to explore evolutionary hypotheses and therefore has a considerable amount of evolutionary, functional and structural knowledge available. We searched for functional evolutionary intermediates between the HisA and PriA enzyme families in order to understand the functional divergence between these families. We constructed a HMM profile that correctly classifies sequences of unknown function into the HisA and PriA enzyme sub-families. Using this HMM profile, we mined a large metagenome to identify plausible evolutionary intermediate sequences between HisA and PriA. These sequences were used to perform phylogenetic reconstructions and to identify functionally conserved amino acids. Biochemical characterization of one selected enzyme (CAM1) with a mutation within the functionally essential N-terminus phosphate-binding site, namely, an alanine instead of a glycine in HisA or a serine in PriA, showed that this evolutionary intermediate has dual-substrate specificity. Moreover, site-directed mutagenesis of this alanine residue, either backwards into a glycine or forward into a serine, revealed the robustness of this enzyme. None of these mutations to presumably essential amino acids significantly abolished its enzyme activities. A truncated version of this enzyme (CAM2

  19. Monitoring Farmland Loss Caused by Urbanization in Beijing from Modis Time Series Using Hierarchical Hidden Markov Model

    Science.gov (United States)

    Yuan, Y.; Meng, Y.; Chen, Y. X.; Jiang, C.; Yue, A. Z.

    2018-04-01

    In this study, we proposed a method to map urban encroachment onto farmland using satellite image time series (SITS) based on the hierarchical hidden Markov model (HHMM). In this method, the farmland change process is decomposed into three hierarchical levels, i.e., the land cover level, the vegetation phenology level, and the SITS level. Then a three-level HHMM is constructed to model the multi-level semantic structure of farmland change process. Once the HHMM is established, a change from farmland to built-up could be detected by inferring the underlying state sequence that is most likely to generate the input time series. The performance of the method is evaluated on MODIS time series in Beijing. Results on both simulated and real datasets demonstrate that our method improves the change detection accuracy compared with the HMM-based method.
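
    The inference step underlying both the HHMM and HMM-based change detection above is decoding the most likely hidden state sequence from a time series. The following sketch uses a plain Viterbi decoder on a toy two-state (farmland vs. built-up) example; all probabilities are invented for illustration and the hierarchical structure of the paper's model is not reproduced.

    ```python
    import numpy as np

    def viterbi(log_pi, log_A, log_B):
        """Most likely state path.

        log_pi : (S,) initial state log-probabilities
        log_A  : (S, S) transition log-probabilities (prev -> next)
        log_B  : (T, S) per-step emission log-likelihoods
        """
        T, S = log_B.shape
        delta = log_pi + log_B[0]
        psi = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A        # (S, S): prev -> next
            psi[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_B[t]
        path = np.zeros(T, dtype=int)
        path[-1] = delta.argmax()
        for t in range(T - 2, -1, -1):             # backtrack
            path[t] = psi[t + 1, path[t + 1]]
        return path

    # Toy 2-state example: state 0 = farmland, state 1 = built-up (sticky)
    log_pi = np.log(np.array([0.9, 0.1]))
    log_A = np.log(np.array([[0.9, 0.1], [0.05, 0.95]]))
    # Emission log-likelihoods for observations that switch regime mid-series
    log_B = np.log(np.array([[0.8, 0.2], [0.8, 0.2],
                             [0.3, 0.7], [0.2, 0.8], [0.2, 0.8]]))
    path = viterbi(log_pi, log_A, log_B)
    ```

    Here the decoded path switches from farmland to built-up at the third observation, which is how a change from farmland to built-up would be read off the inferred state sequence.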

  20. Primitive Based Action Representation and Recognition

    DEFF Research Database (Denmark)

    Baby, Sanmohan; Krüger, Volker

    2009-01-01

    a sequential and statistical learning algorithm for automatic detection of the action primitives and the action grammar based on these primitives. We model a set of actions using a single HMM whose structure is learned incrementally as we observe new types. Actions are modeled with sufficient...

  1. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    Science.gov (United States)

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the weighted sum of squared residuals between measured and modelled outputs by up to 67%. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98%. After parameter estimation, the model underestimated the mean daily fluxes by 35%. During the validation period, the calibrated model reduced the weighted sum of squared residuals by 20% relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure, offering guidance for model improvement.
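
    The core of the inverse-modelling objective described above is choosing parameter values that minimize the weighted sum of squared residuals between measured and modelled outputs. The sketch below illustrates this on an invented one-parameter decay "flux" model with a simple grid search; it is not PEST's regularized Gauss-Levenberg-Marquardt inversion, and the model form, data and parameter values are assumptions for demonstration.

    ```python
    import numpy as np

    def calibrate(model, observations, weights, candidates):
        """Pick the candidate parameter minimizing the weighted sum of
        squared residuals (the same objective PEST-style inversion uses)."""
        def phi(p):
            r = observations - model(p)
            return np.sum(weights * r**2)
        costs = [phi(p) for p in candidates]
        best = candidates[int(np.argmin(costs))]
        return best, phi(best)

    # Invented exponential-decay flux model with one rate parameter k
    t = np.linspace(0.0, 5.0, 20)
    model = lambda k: 10.0 * np.exp(-k * t)
    true_k = 0.7
    obs = 10.0 * np.exp(-true_k * t)        # noise-free synthetic observations
    w = np.ones_like(t)                     # equal observation weights

    default_k = 0.3                         # stand-in for "default" parameters
    default_cost = np.sum(w * (obs - model(default_k))**2)
    k_hat, cost = calibrate(model, obs, w, np.linspace(0.1, 1.5, 141))
    ```

    The calibrated `k_hat` recovers the true rate and reduces the objective far below the cost of the default parameter value, mirroring the residual reductions reported in the abstract.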

  2. Mathematical Model and Calibration Procedure of a PSD Sensor Used in Local Positioning Systems.

    Science.gov (United States)

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Domingo-Perez, Francisco; Tsirigotis, Georgios

    2016-09-15

    Here, we propose a mathematical model and a calibration procedure for a PSD (position sensitive device) sensor equipped with an optical system, to enable accurate measurement of the angle of arrival of one or more beams of light emitted by infrared (IR) transmitters located at distances of between 4 and 6 m. To achieve this objective, it was necessary to characterize the intrinsic parameters that model the system and obtain their values. This first approach was based on a pin-hole model, to which system nonlinearities were added, and this was used to model the points obtained with the nA currents provided by the PSD. In addition, we analyzed the main sources of error, including PSD sensor signal noise, gain factor imbalances and PSD sensor distortion. The results indicated that the proposed model and method provided satisfactory calibration and yielded precise parameter values, enabling accurate measurement of the angle of arrival with a low degree of error, as evidenced by the experimental results.

  3. An Efficient Upscaling Procedure Based on Stokes-Brinkman Model and Discrete Fracture Network Method for Naturally Fractured Carbonate Karst Reservoirs

    KAUST Repository

    Qin, Guan; Bi, Linfeng; Popov, Peter; Efendiev, Yalchin; Espedal, Magne

    2010-01-01

    , fractures and their interconnectivities in coarse-scale simulation models. In this paper, we present a procedure based on our previously proposed Stokes-Brinkman model (SPE 125593) and the discrete fracture network method for accurate and efficient upscaling

  4. Heterogeneous Sensor Webs for Automated Target Recognition and Tracking in Urban Terrain

    Science.gov (United States)

    2012-04-09

    E. Fox, E. Sudderth, M. Jordan, and A. Willsky, "A Sticky HDP-HMM with Application to Speaker Diarization", Annals of Applied Statistics, June 2011. [59] Arvind Ganesh, Andrew Wagner, John

  5. Objective classification of latent behavioral states in bio-logging data using multivariate-normal hidden Markov models.

    Science.gov (United States)

    Phillips, Joe Scutt; Patterson, Toby A; Leroy, Bruno; Pilling, Graham M; Nicol, Simon J

    2015-07-01

    Analysis of complex time-series data from ecological system study requires quantitative tools for objective description and classification. These tools must take into account largely ignored problems of bias in manual classification, autocorrelation, and noise. Here we describe a method using existing estimation techniques for multivariate-normal hidden Markov models (HMMs) to develop such a classification. We use high-resolution behavioral data from bio-loggers attached to free-roaming pelagic tuna as an example. Observed patterns are assumed to be generated by an unseen Markov process that switches between several multivariate-normal distributions. Our approach is assessed in two parts. The first uses simulation experiments, from which the ability of the HMM to estimate known parameter values is examined using artificial time series of data consistent with hypotheses about pelagic predator foraging ecology. The second is the application to time series of continuous vertical movement data from yellowfin and bigeye tuna taken from tuna tagging experiments. These data were compressed into summary metrics capturing the variation of patterns in diving behavior and formed into a multivariate time series used to estimate a HMM. Each observation was associated with covariate information incorporating the effect of day and night on behavioral switching. Known parameter values were well recovered by the HMMs in our simulation experiments, resulting in mean correct classification rates of 90-97%, although some variance-covariance parameters were estimated less accurately. HMMs with two distinct behavioral states were selected for every time series of real tuna data, predicting a shallow warm state, which was similar across all individuals, and a deep colder state, which was more variable. Marked diurnal behavioral switching was predicted, consistent with many previous empirical studies on tuna. HMMs provide easily interpretable models for the objective classification of
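
    The quantity underlying both the simulation assessment and the model selection above is the sequence likelihood under a Gaussian-emission HMM, computed with the forward algorithm. The sketch below uses 1-D Gaussian emissions standing in for the multivariate summary metrics, and two hypothetical behavioural states (shallow/warm vs. deep/cold); every parameter value is invented for illustration.

    ```python
    import numpy as np

    def gauss_logpdf(x, mu, sigma):
        return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

    def logsumexp(v):
        m = v.max()
        return m + np.log(np.sum(np.exp(v - m)))

    def forward_loglik(obs, log_pi, log_A, mus, sigmas):
        """Log-likelihood of obs under a Gaussian-emission HMM
        (log-space forward pass for numerical stability)."""
        alpha = log_pi + gauss_logpdf(obs[0], mus, sigmas)
        for x in obs[1:]:
            alpha = np.array([logsumexp(alpha + log_A[:, j])
                              for j in range(len(mus))])
            alpha = alpha + gauss_logpdf(x, mus, sigmas)
        return logsumexp(alpha)

    # Two invented behavioural states, e.g. depth in metres
    log_pi = np.log(np.array([0.5, 0.5]))
    log_A = np.log(np.array([[0.95, 0.05], [0.05, 0.95]]))
    mus, sigmas = np.array([10.0, 200.0]), np.array([5.0, 50.0])

    shallow_seq = np.array([8.0, 12.0, 9.0, 11.0])
    deep_seq = np.array([180.0, 220.0, 210.0, 190.0])
    ll_shallow = forward_loglik(shallow_seq, log_pi, log_A, mus, sigmas)
    ll_deep = forward_loglik(deep_seq, log_pi, log_A, mus, sigmas)
    ```

    Sequences consistent with either state score well under this model, while data far from both emission distributions score poorly; comparing such likelihoods across candidate models is how the number of behavioural states is selected.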

  6. MODELING IN MAPLE AS THE RESEARCHING MEANS OF FUNDAMENTAL CONCEPTS AND PROCEDURES IN LINEAR ALGEBRA

    Directory of Open Access Journals (Sweden)

    Vasil Kushnir

    2016-05-01

    -th degree of a square matrix, to calculate the matrix exponent, etc. The author creates four basic forms of canonical models of matrices and shows how to design similarity-transformation matrices for these four forms. We introduce programs-procedures for constructing square matrices based on the selected models of canonical matrices. A number of different square matrices can then be created from the canonical matrix models, which supports individual learning technologies. The use of Maple technology allows one to automate the cumbersome and complex procedures for finding the transformation matrices of the canonical form of a matrix, the values of matrix functions, etc., which not only saves time but also focuses attention and effort on understanding the above-mentioned fundamental concepts of linear algebra and the procedures for investigating their properties. All this creates favorable conditions for the use of fundamental concepts of linear algebra in the scientific and research work of students and undergraduates using Maple technology

  7. Navigation of guidewires and catheters in the body during intervention procedures : A review of computer-based models

    NARCIS (Netherlands)

    Sharei Amarghan, H.; Alderliesten, Tanja; van den Dobbelsteen, J.J.; Dankelman, J.

    2018-01-01

    Guidewires and catheters are used during minimally invasive interventional procedures to traverse the vascular system and access the desired position. Computer models are increasingly being used to predict the behavior of these instruments. This information can be used to choose the right

  8. Solution Procedure for Transport Modeling in Effluent Recharge Based on Operator-Splitting Techniques

    Directory of Open Access Journals (Sweden)

    Shutang Zhu

    2008-01-01

    Full Text Available The coupling of groundwater movement and reactive transport during groundwater recharge with wastewater leads to a complicated mathematical model, involving terms to describe convection-dispersion, adsorption/desorption and/or biodegradation, and so forth. Such a coupled model has been found very difficult to solve either analytically or numerically. The present study adopts operator-splitting techniques to decompose the coupled model into two submodels with different intrinsic characteristics. By applying an upwind finite difference scheme to the finite volume integral of the convection flux term, an implicit solution procedure is derived to solve the convection-dominant equation. The dispersion term is discretized in a standard central-difference scheme, while the dispersion-dominant equation is solved using either the preconditioned Jacobi conjugate gradient (PJCG) method or the Thomas method based on a local one-dimensional scheme. The solution method proposed in this study is applied successfully to the demonstration project of groundwater recharge with secondary effluent at Gaobeidian sewage treatment plant (STP).
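
    The operator-splitting idea above can be sketched in one dimension: advance the convection sub-step with a first-order upwind scheme, then the dispersion sub-step with a central difference. For brevity this sketch uses explicit time stepping with stable step sizes (the paper derives an implicit convection solve and a PJCG/Thomas dispersion solve); the grid, velocity and dispersion coefficient are invented for demonstration.

    ```python
    import numpy as np

    def split_step(c, u, D, dx, dt):
        """One operator-splitting step for 1-D convection-dispersion
        (explicit sketch; assumes u > 0 and stable dt)."""
        # Sub-step 1: convection with first-order upwind differencing
        c1 = c.copy()
        c1[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
        # Sub-step 2: dispersion with a standard central difference
        c2 = c1.copy()
        c2[1:-1] = c1[1:-1] + D * dt / dx**2 * (c1[2:] - 2 * c1[1:-1] + c1[:-2])
        return c2

    # Invented demonstration: a unit concentration pulse advected and spread
    nx, dx, dt = 50, 1.0, 0.4
    u, D = 1.0, 0.5            # CFL = 0.4, diffusion number = 0.2 (both stable)
    c = np.zeros(nx)
    c[10] = 1.0
    for _ in range(20):
        c = split_step(c, u, D, dx, dt)
    peak = int(np.argmax(c))
    ```

    After 20 steps the pulse has travelled u*dt*20 = 8 cells downstream while spreading, and total mass is conserved away from the boundaries, which is the basic sanity check for such split schemes.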

  9. Unassigned MURF1 of kinetoplastids codes for NADH dehydrogenase subunit 2

    Directory of Open Access Journals (Sweden)

    Burger Gertraud

    2008-10-01

    Full Text Available Abstract Background In a previous study, we conducted a large-scale similarity-free function prediction of mitochondrion-encoded hypothetical proteins, by which the hypothetical gene murf1 (maxicircle unidentified reading frame 1) was assigned as nad2, encoding subunit 2 of NADH dehydrogenase (Complex I) of the respiratory chain. This hypothetical gene occurs in the mitochondrial genome of kinetoplastids, a group of unicellular eukaryotes including the causative agents of African sleeping sickness and leishmaniasis. In the present study, we test this assignment by using bioinformatics methods that are highly sensitive in identifying remote homologs and confront the prediction with available biological knowledge. Results Comparison of the MURF1 profile Hidden Markov Model (HMM) against function-known profile HMMs in Pfam, Panther and TIGR shows that MURF1 is a Complex I protein, but without specifying the exact subunit. Therefore, we constructed profile HMMs for each individual subunit, using all available sequences clustered at various identity thresholds. HMM-HMM comparison of these individual NADH subunits against MURF1 clearly identifies this hypothetical protein as NAD2. Further, we collected the relevant experimental information about kinetoplastids, which provides additional evidence in support of this prediction. Conclusion Our in silico analyses provide convincing evidence for MURF1 being a highly divergent member of NAD2.

  10. Using DEDICOM for completely unsupervised part-of-speech tagging.

    Energy Technology Data Exchange (ETDEWEB)

    Chew, Peter A.; Bader, Brett William; Rozovskaya, Alla (University of Illinois, Urbana, IL)

    2009-02-01

    A standard and widespread approach to part-of-speech tagging is based on Hidden Markov Models (HMMs). An alternative approach, pioneered by Schuetze (1993), induces parts of speech from scratch using singular value decomposition (SVD). We introduce DEDICOM as an alternative to SVD for part-of-speech induction. DEDICOM retains the advantages of SVD in that it is completely unsupervised: no prior knowledge is required to induce either the tagset or the associations of terms with tags. However, unlike SVD, it is also fully compatible with the HMM framework, in that it can be used to estimate emission- and transition-probability matrices which can then be used as the input for an HMM. We apply the DEDICOM method to the CONLL corpus (CONLL 2000) and compare the output of DEDICOM to the part-of-speech tags given in the corpus, and find that the correlation (almost 0.5) is quite high. Using DEDICOM, we also estimate part-of-speech ambiguity for each term, and find that these estimates correlate highly with part-of-speech ambiguity as measured in the original corpus (around 0.88). Finally, we show how the output of DEDICOM can be evaluated and compared against the more familiar output of supervised HMM-based tagging.

  11. Computer-Vision-Assisted Palm Rehabilitation With Supervised Learning.

    Science.gov (United States)

    Vamsikrishna, K M; Dogra, Debi Prosad; Desarkar, Maunendra Sankar

    2016-05-01

    Physical rehabilitation supported by the computer-assisted-interface is gaining popularity among health-care fraternity. In this paper, we have proposed a computer-vision-assisted contactless methodology to facilitate palm and finger rehabilitation. Leap motion controller has been interfaced with a computing device to record parameters describing 3-D movements of the palm of a user undergoing rehabilitation. We have proposed an interface using Unity3D development platform. Our interface is capable of analyzing intermediate steps of rehabilitation without the help of an expert, and it can provide online feedback to the user. Isolated gestures are classified using linear discriminant analysis (DA) and support vector machines (SVM). Finally, a set of discrete hidden Markov models (HMM) have been used to classify gesture sequence performed during rehabilitation. Experimental validation using a large number of samples collected from healthy volunteers reveals that DA and SVM perform similarly while applied on isolated gesture recognition. We have compared the results of HMM-based sequence classification with CRF-based techniques. Our results confirm that both HMM and CRF perform quite similarly when tested on gesture sequences. The proposed system can be used for home-based palm or finger rehabilitation in the absence of experts.

  12. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each component of the pollen data series enables the sources of variability to be identified more accurately than analysis of the original non-decomposed series, and this procedure has therefore proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
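
    The seasonal/residual split described above can be illustrated with a deliberately simplified decomposition: a centered moving-average trend and periodic-mean seasonal component. STL proper replaces both steps with iterated LOESS smoothing; the synthetic "pollen" series, its weekly period and the noise level below are all invented for demonstration.

    ```python
    import numpy as np

    def decompose(series, period):
        """Simplified seasonal-trend decomposition (moving-average trend,
        periodic-mean seasonal component). Assumes an odd period."""
        n = len(series)
        half = period // 2
        # Trend: centered moving average, edge-padded to keep full length
        padded = np.pad(series, half, mode="edge")
        trend = np.array([padded[i:i + period].mean() for i in range(n)])
        detrended = series - trend
        # Seasonal: mean of the detrended series at each phase of the cycle
        seasonal = np.array([detrended[phase::period].mean()
                             for phase in range(period)])
        seasonal -= seasonal.mean()          # constrain seasonal to sum to ~0
        seasonal_full = np.tile(seasonal, n // period + 1)[:n]
        residual = detrended - seasonal_full
        return trend, seasonal_full, residual

    # Synthetic series: slow upward trend + weekly cycle + small noise
    period = 7
    t = np.arange(14 * period)
    rng = np.random.default_rng(0)
    series = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / period) \
        + rng.normal(0, 0.1, t.size)
    trend, seasonal, residual = decompose(series, period)
    ```

    As in the paper's workflow, the residual carries far less variance than the raw series once the trend and seasonal components are removed, so it can be modelled separately (there, by PLSR against meteorological variables).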

  13. Modified uterine allotransplantation and immunosuppression procedure in the sheep model.

    Directory of Open Access Journals (Sweden)

    Li Wei

    Full Text Available OBJECTIVE: To develop an orthotopic, allogeneic uterine transplantation technique and an effective immunosuppressive protocol in the sheep model. METHODS: In this pilot study, 10 sexually mature ewes were subjected to laparotomy and total abdominal hysterectomy with oophorectomy to procure uterus allografts. The cold ischemic time was 60 min. End-to-end vascular anastomosis was performed using continuous, non-interlocking sutures. Complete tissue reperfusion was achieved in all animals within 30 s after the vascular re-anastomosis, without any evidence of arterial or venous thrombosis. The immunosuppressive protocol consisted of tacrolimus, mycophenolate mofetil and methylprednisolone tablets. Graft viability was assessed by transrectal ultrasonography and second-look laparotomy at 2 and 4 weeks, respectively. RESULTS: Viable uterine tissue and vascular patency were observed on transrectal ultrasonography and second-look laparotomy. Histological analysis of the graft tissue (performed in one ewe) revealed normal tissue architecture with a very subtle inflammatory reaction but no edema or stasis. CONCLUSION: We have developed a modified procedure that allowed us to successfully perform orthotopic, allogeneic uterine transplantation in sheep, whose uterine and vascular anatomy (apart from the bicornuate uterus) is similar to the human anatomy, making the ovine model excellent for human uterine transplant research.

  14. Human activity recognition based on feature selection in smart home using back-propagation algorithm.

    Science.gov (United States)

    Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei

    2014-09-01

    In this paper, the back-propagation (BP) algorithm has been used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection from observed motion sensor events is discussed and tested. The human activity recognition performance of the neural network trained with the BP algorithm is then evaluated and compared with two other probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy. The selection of unsuitable feature datasets increases the computational complexity and degrades the activity recognition accuracy. Furthermore, the neural network using the BP algorithm has relatively better human activity recognition performance than the NB classifier and the HMM.
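
    One common way to realize the inter-class distance criterion mentioned above is to score each feature by the ratio of between-class to within-class spread and keep the highest-scoring features. This is a generic sketch of that idea, not the paper's exact formulation, and the two-class data below are invented so that feature 0 is discriminative and feature 1 is pure noise.

    ```python
    import numpy as np

    def interclass_distance_scores(X, y):
        """Score each feature by between-class variance over within-class
        variance; higher scores mean better class separation."""
        classes = np.unique(y)
        overall = X.mean(axis=0)
        between = np.zeros(X.shape[1])
        within = np.zeros(X.shape[1])
        for c in classes:
            Xc = X[y == c]
            between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
            within += len(Xc) * Xc.var(axis=0)
        return between / (within + 1e-12)

    # Invented 2-class dataset: feature 0 separates the classes, feature 1 is noise
    rng = np.random.default_rng(1)
    X0 = np.column_stack([rng.normal(0, 1, 100), rng.normal(0, 1, 100)])
    X1 = np.column_stack([rng.normal(5, 1, 100), rng.normal(0, 1, 100)])
    X = np.vstack([X0, X1])
    y = np.array([0] * 100 + [1] * 100)
    scores = interclass_distance_scores(X, y)
    ```

    Ranking sensor-event features by such scores and discarding the low-scoring ones is what reduces the computational complexity that unsuitable feature sets would otherwise incur.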

  15. An Online Full-Body Motion Recognition Method Using Sparse and Deficient Signal Sequences

    Directory of Open Access Journals (Sweden)

    Chengyu Guo

    2014-01-01

    Full Text Available This paper presents a method to recognize continuous full-body human motion online by using sparse, low-cost sensors. The only input signals needed are linear accelerations without any rotation information, provided by four Wiimote sensors attached to the four human limbs. Based on the fused hidden Markov model (FHMM) and an autoregressive process, a predictive fusion model (PFM) is put forward, which considers the different influences of the upper and lower limbs, establishes an HMM for each part, and fuses them using a probabilistic fusion model. An autoregressive process is then introduced into the HMM to predict the gesture, which enables the model to deal with incomplete signal data. In order to reduce the number of alternatives in the online recognition process, a graph model is built that rejects some motion types based on the graph structure and previous recognition results. Finally, an online signal segmentation method based on semantic information and the PFM is presented to complete the efficient recognition task. The results indicate that the method is robust, with a high recognition rate on sparse and deficient signals, and can be used in various interactive applications.

  16. Net Metering and Interconnection Procedures-- Incorporating Best Practices

    Energy Technology Data Exchange (ETDEWEB)

    Jason Keyes, Kevin Fox, Joseph Wiedman, Staff at North Carolina Solar Center

    2009-04-01

    State utility commissions and utilities themselves are actively developing and revising their procedures for the interconnection and net metering of distributed generation. However, the procedures most often used by regulators and utilities as models have not been updated in the past three years, in which time most of the distributed solar facilities in the United States have been installed. In that period, the Interstate Renewable Energy Council (IREC) has been a participant in more than thirty state utility commission rulemakings regarding interconnection and net metering of distributed generation. With the knowledge gained from this experience, IREC has updated its model procedures to incorporate current best practices. This paper presents the most significant changes made to IREC’s model interconnection and net metering procedures.

  17. A Survey of Procedural Methods for Terrain Modelling

    NARCIS (Netherlands)

    Smelik, R.M.; Kraker, J.K. de; Groenewegen, S.A.; Tutenel, T.; Bidarra, R.

    2009-01-01

    Procedural methods are a promising but underused alternative to manual content creation. Commonly heard drawbacks are the randomness of and the lack of control over the output and the absence of integrated solutions, although more recent publications increasingly address these issues. This paper

  18. A Proposal for a Procedural Terrain Modelling Framework

    NARCIS (Netherlands)

    Smelik, R.M.; T. Tutenel, T.; Kraker, K.J. de; Bidarra, R.

    2008-01-01

    Manual game content creation is an increasingly laborious task; with each advance in graphics hardware, a higher level of fidelity and detail is achievable and, therefore, expected. Although numerous automatic (e.g. procedural) content generation algorithms and techniques have been developed over

  19. A new approach to household appliance energy test procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ernebrant, S.; Wihlborg, M.

    1999-03-01

    Energy test procedures provide the industry with a method to measure its products' energy consumption. Energy test procedures are the technical foundation of every energy standard and labelling system. Depending on the country in which a product is going to be sold, manufacturers must follow different standards. This report concentrates on appliance test procedures, with the main focus on refrigerators. Recently, a new technology - microcontrollers - has been undermining the credibility of the test procedures. New features that save energy in real life are not picked up by the test procedures. It is estimated that as much as 30% of energy could be saved with this technology. Microcontrollers have also made it possible to circumvent tests. A new model is presented in the report, which captures these energy savings and makes it harder to cheat on tests. The model divides the test procedure into two parts, hardware tests and software tests, and uses a Matlab/Simulink computer model to calculate the energy consumption. Examples of hardware and software test methods for refrigerators are described, and a refrigerator is used as an example to present the model. The possibility of harmonizing the energy standards into one global standard, which could mean substantial savings and make international trade more efficient, is also discussed. 24 refs, 30 figs. Examination paper

  20. Evaluation of a novel, hybrid model (Mumbai EUS II) for stepwise teaching and training in EUS-guided biliary drainage and rendezvous procedures.

    Science.gov (United States)

    Dhir, Vinay; Itoi, Takao; Pausawasdi, Nonthalee; Khashab, Mouen A; Perez-Miranda, Manuel; Sun, Siyu; Park, Do Hyun; Iwashita, Takuji; Teoh, Anthony Y B; Maydeo, Amit P; Ho, Khek Yu

    2017-11-01

     EUS-guided biliary drainage (EUS-BD) and rendezvous (EUS-RV) are acceptable rescue options for patients with failed endoscopic retrograde cholangiopancreatography (ERCP). However, there are limited training opportunities at most centers owing to low case volumes. The existing models do not replicate the difficulties encountered during EUS-BD. We aimed to develop and validate a model for stepwise learning of EUS-BD and EUS-RV, which replicates the actual EUS-BD procedures.  A hybrid model was created utilizing pig esophagus and stomach, with a synthetic duodenum and biliary system. The model was objectively assessed on a grade of 1 - 4 by two experts. Twenty-eight trainees were given initial training with didactic lectures and live procedures. This was followed by hands-on training in EUS-BD and EUS-RV on the hybrid model. Trainees were assessed for objective criteria of technical difficulties.  Both the experts graded the model as very good or above for all parameters. All trainees could complete the requisite steps of EUS-BD and EUS-RV in a mean time of 11 minutes (8 - 18 minutes). Thirty-six technical difficulties were noted during the training (wrong scope position, 13; incorrect duct puncture, 12; guidewire related problems, 11). Technical difficulties peaked for EUS-RV, followed by hepaticogastrostomy (HGS) and choledochoduodenostomy (CDS) (20, 9, and 7, P  = 0.001). At 10 days follow-up, nine of 28 trainees had successfully performed three EUS-RV and seven EUS-BD procedures independently.  The Mumbai EUS II hybrid model replicates situations encountered during EUS-RV and EUS-BD. Stepwise mentoring improves the chances of success in EUS-RV and EUS-BD procedures.

  1. Evaluation of a novel, hybrid model (Mumbai EUS II) for stepwise teaching and training in EUS-guided biliary drainage and rendezvous procedures

    Science.gov (United States)

    Dhir, Vinay; Itoi, Takao; Pausawasdi, Nonthalee; Khashab, Mouen A.; Perez-Miranda, Manuel; Sun, Siyu; Park, Do Hyun; Iwashita, Takuji; Teoh, Anthony Y. B.; Maydeo, Amit P.; Ho, Khek Yu

    2017-01-01

    Background and aims  EUS-guided biliary drainage (EUS-BD) and rendezvous (EUS-RV) are acceptable rescue options for patients with failed endoscopic retrograde cholangiopancreatography (ERCP). However, there are limited training opportunities at most centers owing to low case volumes. The existing models do not replicate the difficulties encountered during EUS-BD. We aimed to develop and validate a model for stepwise learning of EUS-BD and EUS-RV, which replicates the actual EUS-BD procedures. Methods  A hybrid model was created utilizing pig esophagus and stomach, with a synthetic duodenum and biliary system. The model was objectively assessed on a grade of 1 – 4 by two experts. Twenty-eight trainees were given initial training with didactic lectures and live procedures. This was followed by hands-on training in EUS-BD and EUS-RV on the hybrid model. Trainees were assessed for objective criteria of technical difficulties. Results  Both the experts graded the model as very good or above for all parameters. All trainees could complete the requisite steps of EUS-BD and EUS-RV in a mean time of 11 minutes (8 – 18 minutes). Thirty-six technical difficulties were noted during the training (wrong scope position, 13; incorrect duct puncture, 12; guidewire related problems, 11). Technical difficulties peaked for EUS-RV, followed by hepaticogastrostomy (HGS) and choledochoduodenostomy (CDS) (20, 9, and 7, P  = 0.001). At 10 days follow-up, nine of 28 trainees had successfully performed three EUS-RV and seven EUS-BD procedures independently. Conclusions  The Mumbai EUS II hybrid model replicates situations encountered during EUS-RV and EUS-BD. Stepwise mentoring improves the chances of success in EUS-RV and EUS-BD procedures. PMID:29250585

  2. A multiscale MD-FE model of diffusion in composite media with internal surface interaction based on numerical homogenization procedure.

    Science.gov (United States)

    Kojic, M; Milosevic, M; Kojic, N; Kim, K; Ferrari, M; Ziemys, A

    2014-02-01

    Mass transport by diffusion within composite materials may depend not only on internal microstructural geometry, but also on the chemical interactions between the transported substance and the material of the microstructure. Retrospectively, there is a gap in methods and theory to connect material microstructure properties with macroscale continuum diffusion characteristics. Here we present a new hierarchical multiscale model for diffusion within composite materials that couples material microstructural geometry and interactions between diffusing particles and the material matrix. This model, which bridges molecular dynamics (MD) and the finite element (FE) method, is employed to construct a continuum diffusion model based on a novel numerical homogenization procedure. The procedure is general and robust for evaluating constitutive material parameters of the continuum model. These parameters include the traditional bulk diffusion coefficients and, additionally, the distances from the solid surface accounting for surface interaction effects. We applied our models to glucose diffusion through the following two geometrical/material configurations: tightly packed silica nanospheres, and a complex fibrous structure surrounding nanospheres. Then, rhodamine 6G diffusion analysis through an agarose gel network was performed, followed by a model validation using our experimental results. The microstructural model, numerical homogenization and continuum model offer a new platform for modeling and predicting mass diffusion through complex biological environments and within composite materials that are used in a wide range of applications, like drug delivery and nanoporous catalysts.

  3. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  4. Comparison of transition-matrix sampling procedures

    DEFF Research Database (Denmark)

    Yevick, D.; Reimer, M.; Tromborg, Bjarne

    2009-01-01

    We compare the accuracy of the multicanonical procedure with that of transition-matrix models of static and dynamic communication system properties incorporating different acceptance rules. We find that for appropriate ranges of the underlying numerical parameters, algorithmically simple yet highly accurate procedures can be employed in place of the standard multicanonical sampling algorithm.

  5. Autoregressive-moving-average hidden Markov model for vision-based fall prediction-An application for walker robot.

    Science.gov (United States)

    Taghvaei, Sajjad; Jahanandish, Mohammad Hasan; Kosuge, Kazuhiro

    2017-01-01

    The aging of societies requires providing the elderly with safe and dependable assistive technologies for daily life activities. Improving fall detection algorithms can play a major role in achieving this goal. This article proposes a real-time fall prediction algorithm based on visual data of a user of a walking assistive system, acquired from a depth sensor. In the absence of a coupled dynamic model of the human and the assistive walker, a hybrid "system identification-machine learning" approach is used. An autoregressive-moving-average (ARMA) model is fitted to the time-series walking data to forecast the upcoming states, and a hidden Markov model (HMM) based classifier is built on top of the ARMA model to predict falling in the upcoming time frames. The performance of the algorithm is evaluated through experiments with four subjects, including an experienced physiotherapist, using a walker robot in five different falling scenarios; namely, fall forward, fall down, fall back, fall left, and fall right. The algorithm successfully predicts the fall at a rate of 84.72%.
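A toy version of such a "system identification-machine learning" pipeline can be sketched as follows: an AR(2) model (a simplified stand-in for ARMA) is fitted by least squares and rolled forward to forecast upcoming states, and two Gaussian-emission HMMs scored by the forward algorithm classify the forecast window as stable or falling. The signal, parameters, and two-model classification scheme are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a tracked body-state signal (NOT the paper's depth
# data): a stable phase followed by a drift mimicking the onset of a fall.
t = np.arange(60)
signal = np.where(t < 40, 0.0, 0.05 * (t - 40)) + rng.normal(0, 0.01, 60)

# --- AR(2) model fitted by least squares (a simplified stand-in for ARMA) ---
p = 2
Y = signal[p:]
Z = np.column_stack([signal[p - k - 1: len(signal) - k - 1] for k in range(p)])
coef, *_ = np.linalg.lstsq(np.column_stack([Z, np.ones(len(Y))]), Y, rcond=None)

def forecast(history, steps):
    """Roll the fitted AR(2) model forward to predict upcoming states."""
    h = list(history[-p:])
    out = []
    for _ in range(steps):
        h.append(coef[0] * h[-1] + coef[1] * h[-2] + coef[2])
        out.append(h[-1])
    return np.array(out)

future = forecast(signal, steps=5)

# --- Two-state Gaussian-emission HMM scored with the forward algorithm ---
def hmm_loglik(obs, means, var, A, pi):
    logB = -0.5 * ((obs[:, None] - means) ** 2 / var + np.log(2 * np.pi * var))
    alpha = np.log(pi) + logB[0]
    for lb in logB[1:]:
        alpha = lb + np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0)
    return np.logaddexp.reduce(alpha)

A = np.array([[0.9, 0.1], [0.1, 0.9]])
pi = np.array([0.5, 0.5])
# The "stable" HMM expects values near zero; the "fall" HMM expects a drift.
ll_stable = hmm_loglik(future, np.array([0.0, 0.05]), 0.02, A, pi)
ll_fall = hmm_loglik(future, np.array([0.5, 1.0]), 0.02, A, pi)
print("fall predicted" if ll_fall > ll_stable else "stable")
```

Because the AR forecast continues the drift, the "fall" HMM should assign the forecast window a higher log-likelihood than the "stable" HMM.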

  6. Search procedure for models based on the evolution of experimental curves

    International Nuclear Information System (INIS)

    Delforge, J.

    1975-01-01

    The possibilities offered by numerical analysis for identifying the parameters of a model are outlined. The flexibility of the proposed method makes it possible to use a large number of experimental measurements. It is shown that the numerical identification errors over all parameters are proportional to the experimental errors, with a proportionality factor, called the conditioning of the identification problem, that is easily computed. Moreover, it is possible to define and calculate, for each parameter, a factor of sensitivity to experimental errors. The numerical values of the conditioning and the sensitivity factors depend on all experimental conditions, that is, on the specific design of the experiments as well as on the number and quality of the measurements taken. The proposed identification procedure includes several phases. The preliminary phase consists of a first definition of the experimental conditions, in agreement with the experimenter. From the data thus obtained, it is generally possible to evaluate the minimum number of equivalence classes required for an interpretation compatible with the morphology of the experimental curves. At this point, some additional measurements may prove useful or required. The numerical phase then determines a first approximate model by means of the methods previously described. The subsequent phases again require close collaboration between experimenters and theoreticians; they consist mainly in refining the first model [fr]

  7. A titanium nitride based metamaterial for applications in the visible

    DEFF Research Database (Denmark)

    Naik, Gururaj V.; Saha, Bivas; Liu, Jing

    2013-01-01

    An epitaxially grown TiN/Al0.6Sc0.4N superlattice behaves as a hyperbolic metamaterial (HMM) in the visible range. Since HMMs enhance the photonic density of states and reduce the lifetime of an emitter, we observed a ninefold decrease in the lifetime of a dye molecule placed close to this HMM. © 2013 The Optic...

  8. Realism of procedural task trainers in a pediatric emergency medicine procedures course

    Directory of Open Access Journals (Sweden)

    Allan Shefrin

    2015-04-01

    Conclusions: Task training models utilized in our course received variable realism ratings. When deciding what type of task trainer to use, future courses should carefully consider the desired aspect of realism and how it aligns with the procedural skill, balanced against cost considerations.

  9. Procedural Portfolio Planning in Plastic Surgery, Part 2: Collaboration Between Surgeons and Hospital Administrators to Develop a Funds Flow Model for Procedures Performed at an Academic Medical Center.

    Science.gov (United States)

    Hultman, Charles Scott

    2016-06-01

    Although plastic surgeons make important contributions to the clinical, educational, and research missions of academic medical centers (AMCs), determining the financial value of a plastic surgery service can be difficult, due to complex cost accounting systems. We analyzed the financial impact of plastic surgery on an AMC, by examining the contribution margins and operating income of surgical procedures. We collaborated with hospital administrators to implement 3 types of strategic changes: (1) growth of areas with high contribution margin, (2) curtailment of high-risk procedures with negative contribution margin, (3) improved efficiency of mission-critical services with high resource consumption. Outcome measures included: facility charges, hospital collections, contribution margin, operating margin, and operating room times. We also studied the top 50 Current Procedural Terminology codes (total case number × charge/case), ranking procedures for profitability, as determined by operating margin. During the 2-year study period, we had no turnover in faculty; did not pursue any formal marketing; did not change our surgical fees, billing system, or payer mix; and maintained our commitment to indigent care. After rebalancing our case mix, through procedural portfolio planning, average hospital operating income/procedure increased from $-79 to $+816. Volume and diversity of cases increased, with no change in payer mix. Although charges/case decreased, both contribution margin and operating margin increased, due to improved throughput and decreased operating room times. The 5 most profitable procedures for the hospital were hernia repair, mandibular osteotomy, hand skin graft, free fibula flap, and head and neck flap, whereas the 5 least profitable were latissimus breast reconstruction, craniosynostosis repair, free-flap breast reconstruction, trunk skin graft, and cutaneous free flap. Total operating income for the hospital, from plastic surgery procedures, increased

  10. Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.

    Science.gov (United States)

    Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei

    2015-02-01

    This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
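The compound-symmetry test mentioned above can also be sketched outside SAS. The following NumPy example computes the maximum-likelihood compound-symmetry fit (averaging the diagonal and off-diagonal elements of the sample covariance) and the corresponding likelihood-ratio statistic; it is a numerical illustration of the covariance-pattern idea under simulated data, not a reproduction of PROC CALIS output.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate data whose true covariance IS compound symmetric: common variance v
# on the diagonal and common covariance c off the diagonal (illustration only).
p, n, v, c = 4, 500, 2.0, 0.8
Sigma_true = np.full((p, p), c) + np.eye(p) * (v - c)
X = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)

S = np.cov(X, rowvar=False, bias=True)        # ML (unstructured) estimate

# ML fit under compound symmetry: average the diagonal and the off-diagonal.
v_hat = np.trace(S) / p
c_hat = (S.sum() - np.trace(S)) / (p * (p - 1))
Sigma_cs = np.full((p, p), c_hat) + np.eye(p) * (v_hat - c_hat)

# Likelihood-ratio statistic  n * (log|S_cs| - log|S| + tr(S S_cs^{-1}) - p),
# asymptotically chi-square with p(p+1)/2 - 2 degrees of freedom under H0.
lr_stat = n * (np.linalg.slogdet(Sigma_cs)[1] - np.linalg.slogdet(S)[1]
               + np.trace(S @ np.linalg.inv(Sigma_cs)) - p)
df = p * (p + 1) // 2 - 2
print(f"LR statistic = {lr_stat:.2f} on {df} df")
```

Because the simulated covariance really is compound symmetric, the statistic should be a small, chi-square-sized number rather than a significant one.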

  11. Procedural Design of Exterior Lighting for Buildings with Complex Constraints

    KAUST Repository

    Schwarz, Michael; Wonka, Peter

    2014-01-01

    We present a system for the lighting design of procedurally modeled buildings. The design is procedurally specified as part of the ordinary modeling workflow by defining goals for the illumination that should be attained and locations where

  12. On the additive splitting procedures and their computer realization

    DEFF Research Database (Denmark)

    Farago, I.; Thomsen, Per Grove; Zlatev, Z.

    2008-01-01

    Two additive splitting procedures are defined and studied in this paper. It is shown that these splitting procedures have good stability properties. Some other splitting procedures, which are traditionally used in mathematical models in many scientific and engineering fields, are sketched. All...

  13. A multiscale MD–FE model of diffusion in composite media with internal surface interaction based on numerical homogenization procedure

    Science.gov (United States)

    Kojic, M.; Milosevic, M.; Kojic, N.; Kim, K.; Ferrari, M.; Ziemys, A.

    2014-01-01

    Mass transport by diffusion within composite materials may depend not only on internal microstructural geometry, but also on the chemical interactions between the transported substance and the material of the microstructure. Retrospectively, there is a gap in methods and theory to connect material microstructure properties with macroscale continuum diffusion characteristics. Here we present a new hierarchical multiscale model for diffusion within composite materials that couples material microstructural geometry and interactions between diffusing particles and the material matrix. This model, which bridges molecular dynamics (MD) and the finite element (FE) method, is employed to construct a continuum diffusion model based on a novel numerical homogenization procedure. The procedure is general and robust for evaluating constitutive material parameters of the continuum model. These parameters include the traditional bulk diffusion coefficients and, additionally, the distances from the solid surface accounting for surface interaction effects. We applied our models to glucose diffusion through the following two geometrical/material configurations: tightly packed silica nanospheres, and a complex fibrous structure surrounding nanospheres. Then, rhodamine 6G diffusion analysis through an agarose gel network was performed, followed by a model validation using our experimental results. The microstructural model, numerical homogenization and continuum model offer a new platform for modeling and predicting mass diffusion through complex biological environments and within composite materials that are used in a wide range of applications, like drug delivery and nanoporous catalysts. PMID:24578582

  14. Acceleration of the sliding movement of actin filaments with the use of a non-motile mutant myosin in in vitro motility assays driven by skeletal muscle heavy meromyosin.

    Directory of Open Access Journals (Sweden)

    Kohei Iwase

    Full Text Available We examined the movement of an actin filament sliding on a mixture of normal and genetically modified myosin molecules that were attached to a glass surface. For this purpose, we used a Dictyostelium G680V mutant myosin II whose release rates of Pi and ADP were highly suppressed relative to normal myosin, leading to a significantly extended lifetime of the strongly bound state with actin and virtually no motility. When the mixing ratio of G680V mutant myosin II to skeletal muscle HMM (heavy meromyosin) was 0.01%, the actin filaments moved intermittently. When they moved, their sliding velocities were about two-fold faster than the velocity of skeletal HMM alone. Furthermore, sliding movements were also faster when the actin filaments were allowed to slide on skeletal muscle HMM-coated glass surfaces in the motility buffer solution containing G680V HMM. In this case no intermittent movement was observed. When the actin filaments used were copolymerized with a fusion protein consisting of Dictyostelium actin and the Dictyostelium G680V myosin II motor domain, similar faster sliding movements were observed on skeletal muscle HMM-coated surfaces. The filament sliding velocities were about two-fold greater than the velocities of normal actin filaments. We found that the velocity of actin filaments sliding on skeletal muscle myosin molecules increased in the presence of a non-motile G680V mutant myosin motor.

  15. HMM-Based Gene Annotation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  16. Formal Verification of Computerized Procedure with Colored Petri Nets

    International Nuclear Information System (INIS)

    Kim, Yun Goo; Shin, Yeong Cheol

    2008-01-01

    A Computerized Procedure System (CPS) supports nuclear power plant operators in performing operating procedures, which are instructions that guide the monitoring of, decision making about, and control of nuclear power plants. A Computerized Procedure (CP) is loaded into the CPS and, owing to its execution characteristics, behaves like software within the CPS. For example, procedure flows are determined by operator evaluations and by pre-defined computerized procedure logic. The logic and execution flow of a computerized procedure therefore need to be verified before the procedure is installed in the system. Formal verification methods are proposed, and the modeling of operating procedures with Coloured Petri Nets (CP-nets) is presented.
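A miniature flavor of such verification can be given with an ordinary place/transition net (a plain, uncoloured subset of a CP-net): encode each procedure step as a transition, enumerate the reachable markings exhaustively, and check that the procedure can terminate and never deadlocks. The three-step net below is a made-up example, not an actual plant procedure.

```python
# A toy place/transition net modelling a made-up procedure: evaluate a
# condition, then take branch A or B, then finish.
# Each transition is (name, input places, output places).
transitions = [
    ("evaluate", {"start"}, {"decided"}),
    ("branch_a", {"decided"}, {"done"}),
    ("branch_b", {"decided"}, {"done"}),
]

def reachable(initial):
    """Enumerate all reachable markings by exhaustive firing (small nets only)."""
    seen, frontier = {initial}, [initial]
    while frontier:
        marking = frontier.pop()
        for _, pre, post in transitions:
            if pre <= marking:                       # transition enabled?
                nxt = frozenset((marking - pre) | post)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return seen

markings = reachable(frozenset({"start"}))
final = frozenset({"done"})
# Verification queries: the procedure can terminate, and no non-final
# marking is a deadlock (every one enables at least one transition).
assert final in markings
assert all(m == final or any(pre <= m for _, pre, _ in transitions)
           for m in markings)
print(f"{len(markings)} reachable markings; termination and deadlock checks passed")
```

Real CPN tools add typed ("coloured") tokens and far more scalable state-space methods; the exhaustive enumeration here is only viable for tiny nets.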

  17. Adapting a Markov Monte Carlo simulation model for forecasting the number of Coronary Artery Revascularisation Procedures in an era of rapidly changing technology and policy

    Directory of Open Access Journals (Sweden)

    Knuiman Matthew

    2008-06-01

    Full Text Available Abstract Background Treatments for coronary heart disease (CHD) have evolved rapidly over the last 15 years with considerable change in the number and effectiveness of both medical and surgical treatments. This period has seen the rapid development and uptake of statin drugs and coronary artery revascularization procedures (CARPs) that include Coronary Artery Bypass Graft procedures (CABGs) and Percutaneous Coronary Interventions (PCIs). It is difficult in an era of such rapid change to accurately forecast requirements for treatment services such as CARPs. In a previous paper we have described and outlined the use of a Markov Monte Carlo simulation model for analyzing and predicting the requirements for CARPs for the population of Western Australia (Mannan et al, 2007). In this paper, we expand on the use of this model for forecasting CARPs in Western Australia, with a focus on the lack of adequate performance of the (standard) model for forecasting CARPs in a period during the mid 1990s when there were considerable changes to CARP technology and implementation policy, and an exploration and demonstration of how the standard model may be adapted to achieve better performance. Methods Selected key CARP event model probabilities are modified based on information relating to changes in the effectiveness of CARPs from clinical trial evidence and an awareness of trends in policy and practice of CARPs. These modified model probabilities and the ones obtained by standard methods are used as inputs in our Markov simulation model. Results The projected numbers of CARPs in the population of Western Australia over 1995–99 only improve marginally when modifications to model probabilities are made to incorporate an increase in effectiveness of PCI procedures.
However, the projected numbers improve substantially when, in addition, further modifications are incorporated that relate to the increased probability of a PCI procedure and the reduced probability of a CABG

  18. Adapting a Markov Monte Carlo simulation model for forecasting the number of coronary artery revascularisation procedures in an era of rapidly changing technology and policy.

    Science.gov (United States)

    Mannan, Haider R; Knuiman, Matthew; Hobbs, Michael

    2008-06-25

    Treatments for coronary heart disease (CHD) have evolved rapidly over the last 15 years with considerable change in the number and effectiveness of both medical and surgical treatments. This period has seen the rapid development and uptake of statin drugs and coronary artery revascularization procedures (CARPs) that include Coronary Artery Bypass Graft procedures (CABGs) and Percutaneous Coronary Interventions (PCIs). It is difficult in an era of such rapid change to accurately forecast requirements for treatment services such as CARPs. In a previous paper we have described and outlined the use of a Markov Monte Carlo simulation model for analyzing and predicting the requirements for CARPs for the population of Western Australia (Mannan et al, 2007). In this paper, we expand on the use of this model for forecasting CARPs in Western Australia with a focus on the lack of adequate performance of the (standard) model for forecasting CARPs in a period during the mid 1990s when there were considerable changes to CARP technology and implementation policy and an exploration and demonstration of how the standard model may be adapted to achieve better performance. Selected key CARP event model probabilities are modified based on information relating to changes in the effectiveness of CARPs from clinical trial evidence and an awareness of trends in policy and practice of CARPs. These modified model probabilities and the ones obtained by standard methods are used as inputs in our Markov simulation model. The projected numbers of CARPs in the population of Western Australia over 1995-99 only improve marginally when modifications to model probabilities are made to incorporate an increase in effectiveness of PCI procedures. However, the projected numbers improve substantially when, in addition, further modifications are incorporated that relate to the increased probability of a PCI procedure and the reduced probability of a CABG procedure stemming from changed CARP preference
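The adaptation described above (nudging selected transition probabilities and re-running the simulation) can be sketched with a toy Markov Monte Carlo cohort model. The states, annual transition probabilities, and the PCI/CABG share parameter below are illustrative assumptions, not the calibrated Western Australian model.

```python
import numpy as np

rng = np.random.default_rng(3)

states = ["well", "chd", "post_carp", "dead"]
# Illustrative annual transition probabilities (NOT calibrated to WA data).
trans = {
    "well":      {"well": 0.97, "chd": 0.02, "post_carp": 0.00, "dead": 0.01},
    "chd":       {"well": 0.00, "chd": 0.80, "post_carp": 0.15, "dead": 0.05},
    "post_carp": {"well": 0.00, "chd": 0.05, "post_carp": 0.92, "dead": 0.03},
    "dead":      {"well": 0.00, "chd": 0.00, "post_carp": 0.00, "dead": 1.00},
}

def simulate(pci_share, n=5000, years=5):
    """Simulate a cohort and count first-time CARPs, split into PCI vs CABG.

    pci_share is the probability that a given revascularisation is a PCI
    rather than a CABG; "adapting" the model means changing inputs like this.
    """
    pci = cabg = 0
    state = ["well"] * n
    for _ in range(years):
        for i in range(n):
            s = state[i]
            probs = [trans[s][t] for t in states]
            nxt = rng.choice(states, p=probs)
            if s != "post_carp" and nxt == "post_carp":
                if rng.random() < pci_share:
                    pci += 1
                else:
                    cabg += 1
            state[i] = nxt
    return pci, cabg

pci0, cabg0 = simulate(pci_share=0.5)   # "standard" model
pci1, cabg1 = simulate(pci_share=0.8)   # adapted: shift toward PCI
print(pci0, cabg0, pci1, cabg1)
```

Comparing the two runs shows how shifting one input probability re-balances the projected PCI and CABG counts while leaving total CARP demand driven by the disease transitions.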

  19. Impact of gastrectomy procedural complexity on surgical outcomes and hospital comparisons.

    Science.gov (United States)

    Mohanty, Sanjay; Paruch, Jennifer; Bilimoria, Karl Y; Cohen, Mark; Strong, Vivian E; Weber, Sharon M

    2015-08-01

    Most risk adjustment approaches adjust for patient comorbidities and the primary procedure. However, procedures done at the same time as the index case may increase operative risk and merit inclusion in adjustment models for fair hospital comparisons. Our objectives were to evaluate the impact of surgical complexity on postoperative outcomes and hospital comparisons in gastric cancer surgery. Patients who underwent gastric resection for cancer were identified from a large clinical dataset. Procedure complexity was characterized using secondary procedure CPT codes and work relative value units (RVUs). Regression models were developed to evaluate the association between complexity variables and outcomes. The impact of complexity adjustment on model performance and hospital comparisons was examined. Among 3,467 patients who underwent gastrectomy for adenocarcinoma, 2,171 operations were distal and 1,296 total. A secondary procedure was reported for 33% of distal gastrectomies and 59% of total gastrectomies. Six of 10 secondary procedures were associated with adverse outcomes. For example, patients who underwent a synchronous bowel resection had a higher risk of mortality (odds ratio [OR], 2.14; 95% CI, 1.07-4.29) and reoperation (OR, 2.09; 95% CI, 1.26-3.47). Model performance was slightly better for nearly all outcomes with complexity adjustment (mortality c-statistics: standard model, 0.853; secondary procedure model, 0.858; RVU model, 0.855). Hospital ranking did not change substantially after complexity adjustment. Surgical complexity variables are associated with adverse outcomes in gastrectomy, but complexity adjustment does not affect hospital rankings appreciably. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Relations among conceptual knowledge, procedural knowledge, and procedural flexibility in two samples differing in prior knowledge.

    Science.gov (United States)

    Schneider, Michael; Rittle-Johnson, Bethany; Star, Jon R

    2011-11-01

    Competence in many domains rests on children developing conceptual and procedural knowledge, as well as procedural flexibility. However, research on the developmental relations between these different types of knowledge has yielded unclear results, in part because little attention has been paid to the validity of the measures or to the effects of prior knowledge on the relations. To overcome these problems, we modeled the three constructs in the domain of equation solving as latent factors and tested (a) whether the predictive relations between conceptual and procedural knowledge were bidirectional, (b) whether these interrelations were moderated by prior knowledge, and (c) how both constructs contributed to procedural flexibility. We analyzed data from 2 measurement points each from two samples (Ns = 228 and 304) of middle school students who differed in prior knowledge. Conceptual and procedural knowledge had stable bidirectional relations that were not moderated by prior knowledge. Both kinds of knowledge contributed independently to procedural flexibility. The results demonstrate how changes in complex knowledge structures contribute to competence development.

  1. A Computational Model of the Temporal Dynamics of Plasticity in Procedural Learning: Sensitivity to Feedback Timing

    Directory of Open Access Journals (Sweden)

    Vivian V. Valentin

    2014-07-01

    Full Text Available The evidence is now good that different memory systems mediate the learning of different types of category structures. In particular, declarative memory dominates rule-based (RB category learning and procedural memory dominates information-integration (II category learning. For example, several studies have reported that feedback timing is critical for II category learning, but not for RB category learning – results that have broad support within the memory systems literature. Specifically, II category learning has been shown to be best with feedback delays of 500ms compared to delays of 0 and 1000ms, and highly impaired with delays of 2.5 seconds or longer. In contrast, RB learning is unaffected by any feedback delay up to 10 seconds. We propose a neurobiologically detailed theory of procedural learning that is sensitive to different feedback delays. The theory assumes that procedural learning is mediated by plasticity at cortical-striatal synapses that are modified by dopamine-mediated reinforcement learning. The model captures the time-course of the biochemical events in the striatum that cause synaptic plasticity, and thereby accounts for the empirical effects of various feedback delays on II category learning.

  2. Analysis and optimization of blood-testing procedures.

    NARCIS (Netherlands)

    Bar-Lev, S.K.; Boxma, O.J.; Perry, D.; Vastazos, L.P.

    2017-01-01

    This paper is devoted to the performance analysis and optimization of blood testing procedures. We present a queueing model of two queues in series, representing the two stages of a blood-testing procedure. Service (testing) in stage 1 is performed in batches, whereas it is done individually in

  3. Sequence2Vec: A novel embedding approach for modeling transcription factor binding affinity landscape

    KAUST Repository

    Dai, Hanjun

    2017-07-26

    Motivation: An accurate characterization of transcription factor (TF)-DNA affinity landscape is crucial to a quantitative understanding of the molecular mechanisms underpinning endogenous gene regulation. While recent advances in biotechnology have brought the opportunity for building binding affinity prediction methods, the accurate characterization of TF-DNA binding affinity landscape still remains a challenging problem. Results: Here we propose a novel sequence embedding approach for modeling the transcription factor binding affinity landscape. Our method represents DNA binding sequences as a hidden Markov model (HMM) which captures both position specific information and long-range dependency in the sequence. A cornerstone of our method is a novel message passing-like embedding algorithm, called Sequence2Vec, which maps these HMMs into a common nonlinear feature space and uses these embedded features to build a predictive model. Our method is a novel combination of the strength of probabilistic graphical models, feature space embedding and deep learning. We conducted comprehensive experiments on over 90 large-scale TF-DNA data sets which were measured by different high-throughput experimental technologies. Sequence2Vec outperforms alternative machine learning methods as well as the state-of-the-art binding affinity prediction methods.
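    The HMM representation at the heart of this approach can be illustrated with a toy example. The two-state model below (an AT-rich and a GC-rich state with sticky transitions, mimicking long-range dependency) and all its parameters are illustrative assumptions, not values from Sequence2Vec; scoring uses the standard scaled forward recursion:

```python
import numpy as np

# Toy illustration (not the paper's Sequence2Vec): model a DNA sequence with a
# 2-state discrete HMM and score it with the scaled forward algorithm.
BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def forward_log_likelihood(seq, start, trans, emit):
    """log P(seq | HMM) via the scaled forward recursion."""
    obs = [BASES[b] for b in seq]
    alpha = start * emit[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1],
                  [0.1, 0.9]])          # sticky states -> long-range dependency
emit = np.array([[0.4, 0.1, 0.1, 0.4],  # AT-rich state
                 [0.1, 0.4, 0.4, 0.1]]) # GC-rich state

print(forward_log_likelihood("ACGTACGT", start, trans, emit))
```

    In the paper's pipeline such per-sequence HMMs are then embedded into a common feature space; here the log-likelihood merely shows how the HMM captures position-specific composition.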

  4. The Use of a Fresh-Tissue Cadaver Model for the Instruction of Dermatological Procedures: A Laboratory Study for Training Medical Students.

    Science.gov (United States)

    Cervantes, Jose A; Costello, Collin M; Maarouf, Melody; McCrary, Hilary C; Zeitouni, Nathalie C

    2017-09-01

    A realistic model for the instruction of basic dermatologic procedural skills was developed, while simultaneously increasing medical student exposure to the field of dermatology. The primary purpose of the authors' study was to evaluate the utilization of a fresh-tissue cadaver model (FTCM) as a method for the instruction of common dermatologic procedures. The authors' secondary aim was to assess students' perceived clinical skills and overall perception of the field of dermatology after the lab. Nineteen first- and second-year medical students were pre- and post-tested on their ability to perform punch and excisional biopsies on a fresh-tissue cadaver. Students were then surveyed on their experience. Assessment of the cognitive knowledge gain and technical skills revealed a statistically significant improvement in all categories (p < .001). An analysis of the survey demonstrated that 78.9% were more interested in selecting dermatology as a career and 63.2% of participants were more likely to refer their future patients to a Mohs surgeon. An FTCM is a viable method for the instruction and training of dermatologic procedures. In addition, the authors conclude that an FTCM provides realistic instruction for common dermatologic procedures and enhances medical students' early exposure and interest in the field of dermatology.

  5. Safety analysis procedures for PHWR

    International Nuclear Information System (INIS)

    Min, Byung Joo; Kim, Hyoung Tae; Yoo, Kun Joong

    2004-03-01

    The methodology of safety analyses for CANDU reactors in Canada, the vendor country, uses a combination of best-estimate physical models and conservative input parameters so as to minimize the uncertainty of the plant behavior predictions. By using conservative input parameters, the results of the safety analyses are assured to meet the regulatory requirements, such as the public dose, the integrity of fuel and fuel channels, and the integrity of containment and reactor structures. However, there are no comprehensive and systematic safety analysis procedures for CANDU reactors in Korea. In this regard, safety analysis procedures for CANDU reactors are being developed, not only to establish a safety analysis system but also to enhance the quality assurance of the safety assessment. In the first phase of this study, the general procedures for deterministic safety analyses were developed. They cover the specification of the initiating event, selection of the methodology and accident sequences, computer codes, safety analysis procedures, verification of errors and uncertainties, etc. Finally, these general procedures are applied to the Large Break Loss Of Coolant Accident (LBLOCA) in the Final Safety Analysis Report (FSAR) for Wolsong units 2, 3, and 4


  6. Advances in the EDM-DEDM procedure.

    Science.gov (United States)

    Caliandro, Rocco; Carrozzini, Benedetta; Cascarano, Giovanni Luca; Giacovazzo, Carmelo; Mazzone, Anna Maria; Siliqi, Dritan

    2009-03-01

    The DEDM (difference electron-density modification) algorithm has been described in a recent paper [Caliandro et al. (2008), Acta Cryst. A64, 519-528]: it breaks down the collinearity between model structure phases and difference structure phase estimates. The new difference electron-density produced by DEDM, summed to the calculated Fourier maps, is expected to provide a representation of the full structure that is more accurate than that obtained by the observed Fourier synthesis. In the same paper, the DEDM algorithm was combined with the EDM (electron-density modification) approach to give the EDM-DEDM procedure which, when applied to practical molecular-replacement cases, was able to improve the model structures. In this paper, it is shown that EDM-DEDM suffers from some critical points that did not allow cyclical application of the procedure. These points are identified and modifications are made to allow iteration of the procedure. The applications indicate that EDM-DEDM may become a fundamental tool in protein crystallography.

  7. A Tuning Procedure for ARX-based MPC of Multivariate Processes

    DEFF Research Database (Denmark)

    Olesen, Daniel; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2013-01-01

    We present an optimization-based tuning procedure with certain robustness properties for an offset-free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The stochastic model of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC design procedure to ensure offset-free control. The MPC is designed and implemented based on a state-space model in innovation form. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity function ... to a constraint on the maximum of the sensitivity function. The latter constraint provides a robustness measure that is essential for the procedure. The method is demonstrated for two simulated examples: a Wood-Berry distillation column example and a cement mill example.

  8. A procedure for Building Product Models

    DEFF Research Database (Denmark)

    Hvam, Lars

    1999-01-01

    The application of product modeling in manufacturing companies raises the important question of how to model product knowledge in a comprehensible and efficient way. An important challenge is to qualify engineers to model and specify IT-systems (product models) to support their specification activities. A basic assumption is that engineers have to take the responsibility for building the product models to be used in their domain. To do that, they must be able to carry out the modeling task on their own, without any need for support from computer science experts. This paper presents a set of simple, easily adaptable concepts and methods from data modeling (object oriented analysis) and domain modeling (product modeling). The concepts are general and can be used for modeling all types of specifications in the different phases of the product life cycle. The modeling techniques presented have been ...

  9. Bayesian structural inference for hidden processes

    Science.gov (United States)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well to an out-of-class, infinite-state hidden process.
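    The entropy-rate estimate mentioned above reduces, for a known transition matrix, to a short calculation. A minimal sketch (not the BSI code itself), using h = -Σ_i π_i Σ_j T_ij log2 T_ij with π the stationary distribution of T:

```python
import numpy as np

def entropy_rate(T):
    """Shannon entropy rate h = -sum_i pi_i sum_j T_ij log2 T_ij (bits/symbol)."""
    T = np.asarray(T, dtype=float)
    vals, vecs = np.linalg.eig(T.T)               # stationary pi: left eigenvector
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    # guard log2(0) by substituting 1 where T is zero (term contributes 0)
    logT = np.where(T > 0, np.log2(np.where(T > 0, T, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * T * logT))

# Sticky two-state process: entropy rate equals the binary entropy H(0.3).
print(entropy_rate([[0.7, 0.3],
                    [0.3, 0.7]]))  # ≈ 0.881 bits/symbol
```

    For uHMMs inferred from data, the paper's point is that this quantity can be computed over a posterior distribution of candidate models rather than a single point estimate.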

  10. Administrative Procedure Act and mass procedures (illustrated by the nuclear licensing procedure)

    International Nuclear Information System (INIS)

    Naumann, R.

    1977-01-01

    The report deals with the Administrative Procedure Act of 25 May 1976 of the Federal Government, especially with its significance for the administrative procedures for licensing nuclear power plants, as far as so-called mass procedures are concerned. (UN) [de

  11. ISOLATED SPEECH RECOGNITION SYSTEM FOR TAMIL LANGUAGE USING STATISTICAL PATTERN MATCHING AND MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    VIMALA C.

    2015-05-01

    Full Text Available In recent years, speech technology has become a vital part of our daily lives. Various techniques have been proposed for developing Automatic Speech Recognition (ASR) systems and have achieved great success in many applications. Among them, template matching techniques like Dynamic Time Warping (DTW), statistical pattern matching techniques such as Hidden Markov Model (HMM) and Gaussian Mixture Models (GMM), and machine learning techniques such as Neural Networks (NN), Support Vector Machine (SVM), and Decision Trees (DT) are the most popular. The main objective of this paper is to design and develop a speaker-independent isolated speech recognition system for the Tamil language using the above speech recognition techniques. The background of ASR systems, the steps involved in ASR, the merits and demerits of the conventional and machine learning algorithms, and the observations made based on the experiments are presented in this paper. For the developed system, the highest word recognition accuracy was achieved with the HMM technique: 100% during training and 97.92% during testing.
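    Of the techniques surveyed above, DTW is the simplest to sketch. The toy implementation below (an illustration, not the paper's code) computes the classic dynamic-programming alignment cost between two one-dimensional feature sequences:

```python
import numpy as np

# Toy dynamic time warping (DTW) distance over scalar feature sequences.
def dtw_distance(a, b):
    """Minimal cumulative alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, or match step
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Identical shapes shifted in time align at zero cost:
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # → 0.0
```

    In a real isolated-word recognizer the scalars would be replaced by MFCC frame vectors and the absolute difference by a Euclidean distance.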

  12. Comparison of RF spectrum prediction methods for dynamic spectrum access

    Science.gov (United States)

    Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.

    2017-05-01

    Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural Networks model biological brain neuron connections to perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs given sufficient training data.
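    The simulation setup described above can be sketched as follows. The holding-time means, slot length, and the simple one-step Markov predictor below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Hedged sketch: channel occupancy as an alternating renewal process
# (exponential busy/idle holding times), plus a two-state Markov predictor
# estimated from training data.
rng = np.random.default_rng(0)

def simulate_occupancy(n_slots, mean_busy=5.0, mean_idle=10.0, dt=1.0):
    """Slot-sampled 0/1 occupancy from an alternating renewal process."""
    state, out = 0, []
    while len(out) < n_slots:
        hold = rng.exponential(mean_busy if state else mean_idle)
        out.extend([state] * max(1, int(hold / dt)))
        state ^= 1
    return np.array(out[:n_slots])

occ = simulate_occupancy(20_000)
train, test = occ[:10_000], occ[10_000:]

# Estimate transition probabilities and predict the most likely next state.
counts = np.zeros((2, 2))
for a, b in zip(train[:-1], train[1:]):
    counts[a, b] += 1
T = counts / counts.sum(axis=1, keepdims=True)
pred = T[test[:-1]].argmax(axis=1)        # one row of T per current state
print("accuracy:", (pred == test[1:]).mean())
```

    With holding times much longer than the slot length, this one-step predictor essentially learns persistence; the paper's HMM and neural predictors aim to beat exactly this kind of baseline as spectral statistics change.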

  13. A Hidden Markov Model for Urban-Scale Traffic Estimation Using Floating Car Data.

    Science.gov (United States)

    Wang, Xiaomeng; Peng, Ling; Chi, Tianhe; Li, Mengzhu; Yao, Xiaojing; Shao, Jing

    2015-01-01

    Urban-scale traffic monitoring plays a vital role in reducing traffic congestion. Owing to its low cost and wide coverage, floating car data (FCD) serves as a novel approach to collecting traffic data. However, sparse probe data represents the vast majority of the data available on arterial roads in most urban environments. In order to overcome the problem of data sparseness, this paper proposes a hidden Markov model (HMM)-based traffic estimation model, in which the traffic condition on a road segment is considered as a hidden state that can be estimated according to the conditions of road segments having similar traffic characteristics. An algorithm based on clustering and pattern mining rather than on adjacency relationships is proposed to find clusters with road segments having similar traffic characteristics. A multi-clustering strategy is adopted to achieve a trade-off between clustering accuracy and coverage. Finally, the proposed model is designed and implemented on the basis of a real-time algorithm. Results of experiments based on real FCD confirm the applicability, accuracy, and efficiency of the model. In addition, the results indicate that the model is practicable for traffic estimation on urban arterials and works well even when more than 70% of the probe data are missing.
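    The hidden-state formulation above can be illustrated with a toy decoder. The two congestion states and the transition and emission probabilities below are invented for illustration; decoding uses standard log-space Viterbi:

```python
import numpy as np

# Toy sketch (not the paper's model): a road segment's hidden congestion state
# decoded from sparse, noisy speed readings with the Viterbi algorithm.
STATES = ["free-flow", "congested"]

def viterbi(obs, start, trans, emit):
    """Most likely hidden state path (log-space Viterbi)."""
    n, k = len(obs), len(start)
    delta = np.log(start) + np.log(emit[:, obs[0]])
    back = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        cand = delta[:, None] + np.log(trans)   # cand[i, j]: state i -> state j
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Observations discretized as 0 = "fast" reading, 1 = "slow" reading.
start = np.array([0.6, 0.4])
trans = np.array([[0.8, 0.2], [0.3, 0.7]])
emit = np.array([[0.9, 0.1],    # free-flow mostly yields fast readings
                 [0.2, 0.8]])   # congested mostly yields slow readings
obs = [0, 0, 1, 1, 1, 0]
print([STATES[s] for s in viterbi(obs, start, trans, emit)])
# → ['free-flow', 'free-flow', 'congested', 'congested', 'congested', 'free-flow']
```

    The paper's contribution is in how the transition and emission structure is shared across clustered road segments; the decoding step itself is this standard recursion.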

  14. Effect of metallic and hyperbolic metamaterial surface on electric and magnetic dipole emission

    DEFF Research Database (Denmark)

    Ni, Xingjie; Naik, Gururaj V.; Kildishev, Alexander V.

    2010-01-01

    Spontaneous emission patterns of electric and magnetic dipoles on different material surfaces were studied numerically and experimentally. The results show the modified behavior of electric and magnetic dipoles on metallic and HMM surfaces.

  15. A general U-block model-based design procedure for nonlinear polynomial control systems

    Science.gov (United States)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm was originated in the first author's PhD thesis. The term of U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone work - using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design the control systems with smooth nonlinear plants/processes described by polynomial models. For analysing the feasibility and effectiveness, sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for the readers/users with interest in their ad hoc applications. In formality, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems. The previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for the U-model-based research from intuitive/heuristic stage to rigour/formal/comprehensive studies.

  16. Emergency evacuation/transportation plan update: Traffic model development and evaluation of early closure procedures. Final report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-10-28

    Prolonged delays in traffic experienced by Laboratory personnel during a recent early dismissal in inclement weather, coupled with reconstruction efforts along NM 502 east of the White Rock Wye for the next 1 to 2 years, have prompted Los Alamos National Laboratory (LANL) to re-evaluate and improve the present transportation plan and its integration with contingency plans maintained in other organizations. Facilities planners and emergency operations staff need to evaluate the transportation system's capability to efficiently and safely evacuate LANL under different low-level emergency conditions. A variety of potential procedures governing the release of employees from the different technical areas (TAs) requires evaluation, perhaps with regard to multiple emergency-condition scenarios, with one or more optimal procedures ultimately presented for adoption by Lab Management. The work undertaken in this project will hopefully lay a foundation for an on-going, progressive transportation system analysis capability. It utilizes microscale simulation techniques to affirm, reassess and validate the Laboratory's Early Dismissal/Closure/Delayed Opening Plan. The Laboratory is required by Federal guidelines, and compelled by prudent practice and conscientious regard for the welfare of employees and nearby residents, to maintain plans and operating procedures for evacuation if the need arises. The tools developed during this process can be used outside of contingency planning. It is anticipated that the traffic models developed will allow site planners to evaluate changes to the traffic network which could better serve the normal traffic levels. Changes in roadway configuration, control strategies (signalization and signing), response strategies to traffic accidents, and patterns of demand can be modelled using the analysis tools developed during this project. Such scenarios typically are important considerations in master planning and facilities programming.

  17. A declarative approach to procedural modeling of virtual worlds

    NARCIS (Netherlands)

    Smelik, R.M.; Tutenel, T.; Kraker, K.J.de; Bidarra, R.

    2011-01-01

    With the ever increasing costs of manual content creation for virtual worlds, the potential of creating it automatically becomes too attractive to ignore. However, for most designers, traditional procedural content generation methods are complex and unintuitive to use, hard to control, and generated

  18. Accurately Identifying New QoS Violation Driven by High-Distributed Low-Rate Denial of Service Attacks Based on Multiple Observed Features

    Directory of Open Access Journals (Sweden)

    Jian Kang

    2015-01-01

    Full Text Available We propose using multiple observed features of network traffic to identify new high-distributed low-rate quality-of-service (QoS) violations so that detection accuracy may be further improved. For the multiple observed features, we choose the F feature in the TCP packet header as a microscopic feature, and the P and D features of network traffic as macroscopic features. Based on these features, we establish a multistream fused hidden Markov model (MF-HMM) to detect stealthy low-rate denial-of-service (LDoS) attacks hidden in legitimate network background traffic. In addition, the threshold value is dynamically adjusted by using the Kaufman algorithm. Our experiments show that the additive effect of combining multiple features effectively reduces the false-positive rate. The average detection rate of MF-HMM represents a significant 23.39% and 44.64% improvement over the typical power spectrum density (PSD) algorithm and the nonparametric cumulative sum (CUSUM) algorithm, respectively.

  19. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  20. Robust and efficient solution procedures for association models

    DEFF Research Database (Denmark)

    Michelsen, Michael Locht

    2006-01-01

    Equations of state that incorporate the Wertheim association expression are more difficult to apply than conventional pressure-explicit equations, because the association term is implicit and requires solution for an internal set of composition variables. In this work, we analyze the convergence behavior of different solution methods and demonstrate how a simple and efficient, yet globally convergent, procedure for the solution of the equation of state can be formulated.
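    The implicit nature of the association term can be seen in the simplest possible case: a single self-associating site, where the association fraction X solves X = 1/(1 + ρΔX). The sketch below (a toy, not the paper's general multi-site procedure) compares plain successive substitution with the closed-form root:

```python
import math

# Toy single-site association term: X = 1/(1 + rho_delta * X), where rho_delta
# lumps density and association strength into one dimensionless parameter.
def successive_substitution(rho_delta, tol=1e-12, max_iter=500):
    x = 1.0
    for _ in range(max_iter):
        x_new = 1.0 / (1.0 + rho_delta * x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def closed_form(rho_delta):
    # positive root of rho_delta * X**2 + X - 1 = 0
    return (-1.0 + math.sqrt(1.0 + 4.0 * rho_delta)) / (2.0 * rho_delta)

rd = 50.0   # strong association: plain substitution oscillates and converges slowly
print(successive_substitution(rd), closed_form(rd))
```

    For multi-site schemes no closed form exists, which is exactly why the convergence behavior of iterative solvers (and safeguards such as damping or Newton steps) matters in practice.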

  1. Emergency procedures

    International Nuclear Information System (INIS)

    Abd Nasir Ibrahim; Azali Muhammad; Ab Razak Hamzah; Abd Aziz Mohamed; Mohammad Pauzi Ismail

    2004-01-01

    The following subjects are discussed - Emergency Procedures: emergency equipment, emergency procedures; emergency procedure involving X-Ray equipment; emergency procedure involving radioactive sources

  2. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    Science.gov (United States)

    Arumugam, S.; Libera, D.

    2017-12-01

    Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over-estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique in improving the performance of the SWAT model in predicting daily streamflow and TN loads across the southeast based on split-sample validation. The approach is a dimension-reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes against the SWAT model simulated values. The common approach is a regression-based technique that uses an ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation while simultaneously reducing individual biases. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of observed streamflow and loadings. These procedures were applied to three watersheds chosen from the Water Quality Network in the Southeast Region; specifically, watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of P & T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model for forecasting loadings and streamflow at the monthly and seasonal timescale is also discussed.
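    The CCA-based correction can be sketched in a few lines of linear algebra. Everything below, the synthetic bivariate data and the in-sample mapping through canonical variates, is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

# Hedged sketch of a CCA-style bias correction: map biased model output X
# toward observations Y through canonical variates, reducing per-variable bias
# while respecting the cross-correlation between variables (e.g., flow and load).
def inv_sqrt(C):
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def cca_correct(X, Y):
    """Linear CCA-based correction of model output X against observations Y."""
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    n = len(X)
    Sxx, Syy, Sxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
    Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy)
    A, B = Wx @ U, Wy @ Vt.T              # canonical direction matrices
    # best linear predictor of Y's canonical variates from X's, mapped back:
    return my + Xc @ A @ np.diag(s) @ np.linalg.inv(B)

rng = np.random.default_rng(42)
Y = rng.multivariate_normal([10.0, 2.0], [[4.0, 1.5], [1.5, 1.0]], size=500)
X = 0.7 * Y + 3.0 + rng.normal(0, 0.5, Y.shape)   # biased, noisy "model" output
Xstar = cca_correct(X, Y)
print(np.abs(Xstar.mean(0) - Y.mean(0)))          # per-variable bias ≈ 0
```

    Because the mapping runs through the canonical correlations (the singular values s), the corrected series inherits the observed cross-correlation structure rather than adjusting each variable independently, which is the claimed advantage over per-variable OLS adjustment.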

  3. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling

    Directory of Open Access Journals (Sweden)

    Eric R. Edelman

    2017-06-01

    Full Text Available For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.

  4. Improving the Prediction of Total Surgical Procedure Time Using Linear Regression Modeling.

    Science.gov (United States)

    Edelman, Eric R; van Kuijk, Sander M J; Hamaekers, Ankie E W; de Korte, Marcel J M; van Merode, Godefridus G; Buhre, Wolfgang F F A

    2017-01-01

    For efficient utilization of operating rooms (ORs), accurate schedules of assigned block time and sequences of patient cases need to be made. The quality of these planning tools is dependent on the accurate prediction of total procedure time (TPT) per case. In this paper, we attempt to improve the accuracy of TPT predictions by using linear regression models based on estimated surgeon-controlled time (eSCT) and other variables relevant to TPT. We extracted data from a Dutch benchmarking database of all surgeries performed in six academic hospitals in The Netherlands from 2012 till 2016. The final dataset consisted of 79,983 records, describing 199,772 h of total OR time. Potential predictors of TPT that were included in the subsequent analysis were eSCT, patient age, type of operation, American Society of Anesthesiologists (ASA) physical status classification, and type of anesthesia used. First, we computed the predicted TPT based on a previously described fixed ratio model for each record, multiplying eSCT by 1.33. This number is based on the research performed by van Veen-Berkx et al., which showed that 33% of SCT is generally a good approximation of anesthesia-controlled time (ACT). We then systematically tested all possible linear regression models to predict TPT using eSCT in combination with the other available independent variables. In addition, all regression models were again tested without eSCT as a predictor to predict ACT separately (which leads to TPT by adding SCT). TPT was most accurately predicted using a linear regression model based on the independent variables eSCT, type of operation, ASA classification, and type of anesthesia. This model performed significantly better than the fixed ratio model and the method of predicting ACT separately. Making use of these more accurate predictions in planning and sequencing algorithms may enable an increase in utilization of ORs, leading to significant financial and productivity related benefits.
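    The comparison between the fixed-ratio model and a fitted linear model can be sketched on synthetic data (the numbers below are simulated, not drawn from the Dutch benchmarking database):

```python
import numpy as np

# Synthetic illustration: fixed-ratio model TPT = 1.33 * eSCT versus a fitted
# linear model TPT = a * eSCT + b, evaluated by RMSE. Simulated ACT has both an
# intercept and a proportional component, which the fixed ratio cannot capture.
rng = np.random.default_rng(1)
esct = rng.uniform(30, 240, 1_000)                      # minutes of surgeon time
act = 20 + 0.25 * esct + rng.normal(0, 8, esct.size)    # anesthesia-controlled time
tpt = esct + act                                        # total procedure time

fixed = 1.33 * esct                                     # fixed-ratio prediction
a, b = np.polyfit(esct, tpt, 1)                         # least-squares fit
fitted = a * esct + b

def rmse(pred):
    return np.sqrt(np.mean((pred - tpt) ** 2))

print(f"fixed ratio RMSE: {rmse(fixed):.1f} min, regression RMSE: {rmse(fitted):.1f} min")
```

    An intercept alone already beats a pure ratio whenever ACT has a fixed setup component; the paper's full model additionally conditions on operation type, ASA class, and anesthesia type.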

  5. Revision Arthroscopic Repair Versus Latarjet Procedure in Patients With Recurrent Instability After Initial Repair Attempt: A Cost-Effectiveness Model.

    Science.gov (United States)

    Makhni, Eric C; Lamba, Nayan; Swart, Eric; Steinhaus, Michael E; Ahmad, Christopher S; Romeo, Anthony A; Verma, Nikhil N

    2016-09-01

    To compare the cost-effectiveness of arthroscopic revision instability repair and Latarjet procedure in treating patients with recurrent instability after initial arthroscopic instability repair. An expected-value decision analysis of revision arthroscopic instability repair compared with Latarjet procedure for recurrent instability followed by failed repair attempt was modeled. Inputs regarding procedure cost, clinical outcomes, and health utilities were derived from the literature. Compared with revision arthroscopic repair, Latarjet was less expensive ($13,672 v $15,287) with improved clinical outcomes (43.78 v 36.76 quality-adjusted life-years). Both arthroscopic repair and Latarjet were cost-effective compared with nonoperative treatment (incremental cost-effectiveness ratios of 3,082 and 1,141, respectively). Results from sensitivity analyses indicate that under scenarios of high rates of stability postoperatively, along with improved clinical outcome scores, revision arthroscopic repair becomes increasingly cost-effective. Latarjet procedure for failed instability repair is a cost-effective treatment option, with lower costs and improved clinical outcomes compared with revision arthroscopic instability repair. However, surgeons must still incorporate clinical judgment into treatment algorithm formation. Level IV, expected value decision analysis. Copyright © 2016. Published by Elsevier Inc.
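    The comparison above reduces to an incremental cost-effectiveness calculation. A minimal sketch using the cost and QALY figures quoted in the abstract (the dominance check and threshold logic are standard, not specific to this study):

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra QALY.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Latarjet vs revision arthroscopic repair (values from the abstract)
latarjet = {"cost": 13672.0, "qaly": 43.78}
arthro = {"cost": 15287.0, "qaly": 36.76}

delta_cost = latarjet["cost"] - arthro["cost"]
delta_qaly = latarjet["qaly"] - arthro["qaly"]

# Latarjet is both cheaper and more effective, so it "dominates" the
# comparator and no positive ICER needs to be reported for that pairing.
dominates = delta_cost < 0 and delta_qaly > 0
print(dominates)
```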

  6. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    Science.gov (United States)

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to a different dataset. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of the HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst estimated by SAAM II, while keeping the CV% of the other model parameters contained. The MATLAB-based procedure was therefore suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
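    The Gauss-Newton/Levenberg-Marquardt alternation mentioned above can be sketched as a single damped Gauss-Newton loop: with zero damping the step is pure Gauss-Newton, and increasing the damping recovers Levenberg-Marquardt behavior. This is a generic sketch on a one-compartment decay model, not the authors' insulin-kinetics code:

```python
import numpy as np

# Fit y = a * exp(-b * t) by damped Gauss-Newton (Levenberg-Marquardt).
def fit_decay(t, y, a0=1.0, b0=0.1, n_iter=100):
    p = np.array([a0, b0], float)
    lam = 1e-3                       # damping: small -> Gauss-Newton step
    for _ in range(n_iter):
        a, b = p
        r = y - a * np.exp(-b * t)                       # residuals
        J = np.column_stack([np.exp(-b * t),             # d(model)/da
                             -a * t * np.exp(-b * t)])   # d(model)/db
        # Solve (J^T J + lam I) dp = J^T r for the update dp
        dp = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        a2, b2 = p + dp
        r_new = y - a2 * np.exp(-b2 * t)
        if r_new @ r_new < r @ r:    # step improved the fit: accept, relax damping
            p, lam = p + dp, lam * 0.5
        else:                        # step worsened the fit: reject, raise damping
            lam *= 2.0
    return p

t = np.linspace(0, 10, 60)
y = 2.5 * np.exp(-0.7 * t)           # noiseless synthetic decay
a_hat, b_hat = fit_decay(t, y)
print(a_hat, b_hat)
```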

  7. Computer Simulation Model to Train Medical Personnel on Glucose Clamp Procedures.

    Science.gov (United States)

    Maghoul, Pooya; Boulet, Benoit; Tardif, Annie; Haidar, Ahmad

    2017-10-01

    A glucose clamp procedure is the most reliable way to quantify insulin pharmacokinetics and pharmacodynamics, but skilled and trained research personnel are required to frequently adjust the glucose infusion rate. A computer environment that simulates glucose clamp experiments can be used for efficient personnel training and for the development and testing of algorithms for automated glucose clamps. We built 17 virtual healthy subjects (mean age, 25±6 years; mean body mass index, 22.2±3 kg/m²), each comprising a mathematical model of glucose regulation and a unique set of parameters. Each virtual subject simulates plasma glucose and insulin concentrations in response to intravenous insulin and glucose infusions. Each virtual subject provides a unique response, and its parameters were estimated from combined intravenous glucose tolerance test and hyperinsulinemic-euglycemic clamp data using a Bayesian approach. The virtual subjects were validated by comparing their simulated predictions against data from 12 healthy individuals who underwent a hyperglycemic glucose clamp procedure. Plasma glucose and insulin concentrations were predicted by the virtual subjects in response to glucose infusions determined by trained research staff performing a simulated hyperglycemic clamp experiment. The total amount of glucose infused did not differ between the simulated and the real subjects (85±18 g vs. 83±23 g; p = NS), nor did plasma insulin levels (63±20 mU/L vs. 58±16 mU/L; p = NS). The virtual subjects can reliably predict glucose needs and plasma insulin profiles during hyperglycemic glucose clamp conditions. These virtual subjects can be used to train personnel to make glucose infusion adjustments during clamp experiments. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.
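    A toy version of such a "virtual subject" can be sketched as one-compartment glucose kinetics with a simple integral controller standing in for the staff member adjusting the glucose infusion rate (GIR). This is illustrative only, with hypothetical parameters, not the authors' validated Bayesian model:

```python
# Toy hyperglycemic-clamp simulation: hold glucose at a target by
# periodically nudging the glucose infusion rate, as clamp staff do.
k = 0.02        # glucose self-clearance rate, 1/min (hypothetical)
Gb = 5.0        # basal glucose, mmol/L
V = 12.0        # glucose distribution volume, L
target = 10.0   # hyperglycemic clamp target, mmol/L
Ki = 0.01       # controller gain, (mmol/min) per (mmol/L) of error

G, gir = Gb, 0.0
dt, total_min = 1.0, 600
for _ in range(int(total_min / dt)):
    gir = max(0.0, gir + Ki * (target - G))      # staff-style GIR adjustment
    G += dt * (-k * (G - Gb) + gir / V)          # Euler step of glucose kinetics

steady_gir = k * (target - Gb) * V               # GIR required at steady state
print(round(G, 2), round(gir, 2), steady_gir)
```

    After the transient dies out, the simulated glucose settles at the target and the infusion rate settles at the value balancing clearance, which is the behavior trainees must learn to track by hand.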

  8. Feasibility, safety and effectiveness of combining home based malaria management and seasonal malaria chemoprevention in children less than 10 years in Senegal

    DEFF Research Database (Denmark)

    Tine, Roger C K; Ndour, Cheikh T; Faye, Babacar

    2014-01-01

    Home-based management of malaria (HMM) may improve access to diagnostic testing and treatment with artemisinin combination therapy (ACT). In the Sahel region, seasonal malaria chemoprevention (SMC) is now recommended for the prevention of malaria in children. It is likely that combinations...... of antimalarial interventions can reduce the malaria burden. This study assessed the feasibility, effectiveness and safety of combining SMC and HMM delivered by community health workers (CHWs)....

  9. Histogram equalization with Bayesian estimation for noise robust speech recognition.

    Science.gov (United States)

    Suh, Youngjoo; Kim, Hoirin

    2018-02-01

    The histogram equalization approach is an efficient feature normalization technique for noise-robust automatic speech recognition. However, it suffers from performance degradation when some fundamental conditions are not satisfied in the test environment. To remedy these limitations of the original histogram equalization methods, a class-based histogram equalization approach has been proposed. Although this approach showed substantial performance improvement under noise environments, it still suffers from performance degradation due to the overfitting problem when test data are insufficient. To address this issue, the proposed histogram equalization technique employs a Bayesian estimation method in estimating the test cumulative distribution function. A previous study on the Aurora-4 task reported that the proposed approach provided substantial performance gains in speech recognition systems based on acoustic modeling with the Gaussian mixture model-hidden Markov model. In this work, the proposed approach was examined in speech recognition systems with deep neural network-hidden Markov model (DNN-HMM) acoustic models, the current mainstream speech recognition approach, where it also showed meaningful performance improvement over the conventional maximum likelihood estimation-based method. The fusion of the proposed features with the mel-frequency cepstral coefficients provided additional performance gains in DNN-HMM systems, which otherwise suffer from performance degradation in the clean test condition.
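    The core histogram-equalization step is a CDF match: map each test feature through the test CDF and then through the inverse reference CDF. The sketch below uses the plain empirical CDF; the Bayesian variant in the abstract differs precisely in how the test CDF is estimated:

```python
import numpy as np

# Plain (non-Bayesian) histogram equalization of one feature dimension.
def histogram_equalize(test, reference):
    # Ranks of the test samples give empirical CDF values in (0, 1) ...
    ranks = np.argsort(np.argsort(test))
    cdf = (ranks + 0.5) / len(test)
    # ... which are pushed through the inverse reference CDF (quantiles).
    return np.quantile(reference, cdf)

rng = np.random.default_rng(1)
reference = rng.normal(0.0, 1.0, 5000)            # clean training features
test = 2.0 * rng.normal(0.0, 1.0, 2000) + 3.0     # shifted/scaled noisy features
eq = histogram_equalize(test, reference)
print(round(eq.mean(), 2), round(eq.std(), 2))    # ~ 0.0, ~ 1.0
```

    After equalization, the test features follow the reference distribution regardless of the noise-induced shift and scaling, which is what makes the technique attractive for noise-robust recognition.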

  10. Prediction Method for Rain Rate and Rain Propagation Attenuation for K-Band Satellite Communications Links in Tropical Areas

    Directory of Open Access Journals (Sweden)

    Baso Maruddani

    2015-01-01

    Full Text Available This paper deals with a prediction method using a hidden Markov model (HMM) for rain rate and rain propagation attenuation on K-band satellite communication links in tropical areas. As is well known, the K-band frequency is susceptible to atmospheric conditions, especially rain. Because the wavelength at K-band approaches the size of a rain droplet, the signal is easily attenuated and absorbed by rain droplets. In order to maintain the quality of system performance on a K-band satellite communication link, special attention therefore has to be paid to rain rate and rain propagation attenuation. Thus, a prediction method for rain rate and rain propagation attenuation based on an HMM is developed to process the measurement data. The measured and predicted data are then compared with the ITU-R recommendations. The results show that the measured and predicted data are similar to the model of the ITU-R P.837-5 recommendation for rain rate and the model of the ITU-R P.618-10 recommendation for rain propagation attenuation. Meanwhile, statistics of the measured and predicted data, such as fade duration and interfade duration, show insignificant discrepancy with the model of the ITU-R P.1623-1 recommendation.
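    A minimal two-state HMM of the kind used for rain/clear channel modeling can be filtered with the forward algorithm. All probabilities below are illustrative, not fitted to the paper's measurements:

```python
import numpy as np

# States: 0 = "clear", 1 = "rain"; observations: 0 = low, 1 = mid, 2 = high
# attenuation level.
A = np.array([[0.95, 0.05],     # clear -> clear, clear -> rain
              [0.30, 0.70]])    # rain -> clear,  rain -> rain
B = np.array([[0.80, 0.15, 0.05],   # emission probs in the clear state
              [0.10, 0.30, 0.60]])  # emission probs in the rain state
pi = np.array([0.9, 0.1])

def forward(obs):
    """Forward algorithm: filtered state probabilities after each observation."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

# After a run of high-attenuation observations, the "rain" state dominates.
p_clear, p_rain = forward([0, 0, 2, 2, 2])
print(round(p_rain, 3))
```

    Fade-duration and interfade-duration statistics then follow from the sojourn times of the hidden state sequence, which is what the paper compares against ITU-R P.1623-1.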

  11. How does the workload and work activities of procedural GPs compare to non-procedural GPs?

    Science.gov (United States)

    Russell, Deborah J; McGrail, Matthew R

    2017-08-01

    To investigate patterns of Australian GP procedural activity and associations with: geographical remoteness and population size; hours worked in hospitals and in total; and availability for on-call. DESIGN AND PARTICIPANTS: National annual panel survey (Medicine in Australia: Balancing Employment and Life) of Australian GPs, 2011-2013. Self-reported geographical work location, hours worked in different settings, and on-call availability per usual week were analysed against GP procedural activity in anaesthetics, obstetrics, surgery or emergency medicine. Analysis of 9301 survey responses from 4638 individual GPs revealed significantly increased odds of GP procedural activity in anaesthetics, obstetrics or emergency medicine as geographical remoteness increased and community population size decreased, albeit with plateauing of the effect size from medium-sized (population 5000-15 000) rural communities. After adjusting for confounders, procedural GPs work more hospital hours and more total hours each week than non-procedural GPs. In 2011 this equated to GPs practising anaesthetics, obstetrics, surgery, and emergency medicine providing 8% (95%CI 0, 16), 13% (95%CI 8, 19), 8% (95%CI 2, 15) and 18% (95%CI 13, 23) more total hours each week, respectively. The extra hours are attributable to longer hours worked in hospital settings, with no reduction in private consultation hours. Procedural GPs also carry a significantly higher burden of on-call. The longer working hours and higher on-call demands experienced by rural and remote procedural GPs demand improved solutions, such as changes to service delivery models, so that long-term procedural GP careers are increasingly attractive to current and aspiring rural GPs. © 2016 National Rural Health Alliance Inc.

  12. Portal monitor evaluation and test procedure

    International Nuclear Information System (INIS)

    Johnson, L.O.; Gupta, V.P.; Stevenson, R.L.; Rich, B.L.

    1983-10-01

    The purpose of this work was to develop techniques and procedures to allow users to measure the performance and sensitivity of portal monitors. Additionally, a methodology was developed to assist users in optimizing monitor performance. The two monitors tested utilized thin-window gas-flow proportional counters sensitive to beta and gamma radiation. Various tests were performed: (a) background count rate and its statistical variability, (b) detector efficiency at different distances, (c) moving source sensitivity for various source sizes and speeds, and (d) false alarm rates at different background levels. A model was developed for the moving source measurements to compare model predictions with the measured results and to test whether it is possible to adequately model the behavior of a portal monitor's response to a moving source. The model results were compared with the actual test results. A procedure for testing portal monitors is also given. 1 reference, 9 figures, 8 tables
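    The moving-source and false-alarm tests above can be sketched with a Poisson counting model. This is a generic illustration (hypothetical rates, not the report's model): a source in view adds counts on top of background, and the dwell time shrinks with passage speed, so faster movement lowers detectability at a fixed false-alarm rate:

```python
import math

def poisson_sf(k, mu):
    """P(N > k) for N ~ Poisson(mu), by summing the CDF directly."""
    term = math.exp(-mu)
    cdf = term
    for i in range(1, k + 1):
        term *= mu / i
        cdf += term
    return 1.0 - cdf

b, s, width = 50.0, 30.0, 1.0       # background c/s, source c/s, view width (m)
detect = []
for speed in (0.5, 1.0, 2.0):       # passage speed, m/s
    dwell = width / speed           # time the source spends in view
    mu0, mu1 = b * dwell, (b + s) * dwell
    k = 0                           # alarm level set for ~0.1% false alarms/passage
    while poisson_sf(k, mu0) > 1e-3:
        k += 1
    detect.append(poisson_sf(k, mu1))
print([round(p, 2) for p in detect])   # detection probability drops with speed
```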

  13. Function Allocation in Complex Socio-Technical Systems: Procedure usage in nuclear power and the Context Analysis Method for Identifying Design Solutions (CAMIDS) Model

    Science.gov (United States)

    Schmitt, Kara Anne

    This research aims to prove that strict adherence to procedures and rigid compliance with process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, rooted in people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged and think less critically because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now put on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified and validated through both peer review and application in real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and when this directly affects the safety or security of the plant. The findings of this research indicate that the younger generations of operators rely heavily on procedures, and the organizational pressures of required compliance may lead to incidents within the plant because operators feel pressured into following the rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices.

  14. Concomitant use of the matrix strategy and the mand-model procedure in teaching graphic symbol combinations.

    Science.gov (United States)

    Nigam, Ravi; Schlosser, Ralf W; Lloyd, Lyle L

    2006-09-01

    Matrix strategies employing parts of speech arranged in systematic language matrices and milieu language teaching strategies have been successfully used to teach word combining skills to children who have cognitive disabilities and some functional speech. The present study investigated the acquisition and generalized production of two-term semantic relationships in a new population using new types of symbols. Three children with cognitive disabilities and little or no functional speech were taught to combine graphic symbols. The matrix strategy and the mand-model procedure were used concomitantly as intervention procedures. A multiple probe design across sets of action-object combinations with generalization probes of untrained combinations was used to teach the production of graphic symbol combinations. Results indicated that two of the three children learned the early syntactic-semantic rule of combining action-object symbols and demonstrated generalization to untrained action-object combinations and generalization across trainers. The results and future directions for research are discussed.

  15. Hierarchical HMM based learning of navigation primitives for cooperative robotic endovascular catheterization.

    Science.gov (United States)

    Rafii-Tari, Hedyeh; Liu, Jindong; Payne, Christopher J; Bicknell, Colin; Yang, Guang-Zhong

    2014-01-01

    Despite increased use of remote-controlled steerable catheter navigation systems for endovascular intervention, most current designs are based on master configurations which tend to alter natural operator tool interactions. This introduces problems to both ergonomics and shared human-robot control. This paper proposes a novel cooperative robotic catheterization system based on learning-from-demonstration. By encoding the higher-level structure of a catheterization task as a sequence of primitive motions, we demonstrate how to achieve prospective learning for complex tasks whilst incorporating subject-specific variations. A hierarchical Hidden Markov Model is used to model each movement primitive as well as their sequential relationship. This model is applied to generation of motion sequences, recognition of operator input, and prediction of future movements for the robot. The framework is validated by comparing catheter tip motions against the manual approach, showing significant improvements in the quality of catheterization. The results motivate the design of collaborative robotic systems that are intuitive to use, while reducing the cognitive workload of the operator.

  16. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Science.gov (United States)

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
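    The scale reliabilities quoted above use Cronbach's alpha, which follows directly from item and total-score variances. A sketch on synthetic data (not the study's questionnaire responses):

```python
import numpy as np

# Cronbach's alpha for one scale: k/(k-1) * (1 - sum(item variances) / var(total)).
def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scores for a single scale."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(0, 1, 300)                         # latent competency score
items = trait[:, None] + rng.normal(0, 1, (300, 4))   # 4 noisy items per scale
alpha = cronbach_alpha(items)
print(round(alpha, 2))
```

    With equal trait and noise variance and four items, the expected alpha is 0.80, squarely in the 0.60-0.83 range the study reports.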

  17. Creating Online Training for Procedures in Global Health with PEARLS (Procedural Education for Adaptation to Resource-Limited Settings).

    Science.gov (United States)

    Bensman, Rachel S; Slusher, Tina M; Butteris, Sabrina M; Pitt, Michael B; On Behalf Of The Sugar Pearls Investigators; Becker, Amanda; Desai, Brinda; George, Alisha; Hagen, Scott; Kiragu, Andrew; Johannsen, Ron; Miller, Kathleen; Rule, Amy; Webber, Sarah

    2017-11-01

    The authors describe a multi-institutional collaborative project to address a gap in global health training by creating a free online platform to share a curriculum for performing procedures in resource-limited settings. This curriculum, called PEARLS (Procedural Education for Adaptation to Resource-Limited Settings), consists of peer-reviewed instructional and demonstration videos describing modifications for performing common pediatric procedures in resource-limited settings. Adaptations range from the creation of a low-cost spacer for inhaled medications to a suction chamber for continued evacuation of a chest tube. By describing the collaborative process, we provide a model for educators in other fields to collate and disseminate procedural modifications adapted for their own specialty and location, ideally expanding this crowd-sourced curriculum to reach a wide audience of trainees and providers in global health.

  18. Phoenix – A model-based Human Reliability Analysis methodology: Qualitative Analysis Procedure

    International Nuclear Information System (INIS)

    Ekanem, Nsimah J.; Mosleh, Ali; Shen, Song-Hua

    2016-01-01

    Phoenix method is an attempt to address various issues in the field of Human Reliability Analysis (HRA). Built on a cognitive human response model, Phoenix incorporates strong elements of current HRA good practices, leverages lessons learned from empirical studies, and takes advantage of the best features of existing and emerging HRA methods. Its original framework was introduced in previous publications. This paper reports on the completed methodology, summarizing the steps and techniques of its qualitative analysis phase. The methodology introduces the “Crew Response Tree” which provides a structure for capturing the context associated with Human Failure Events (HFEs), including errors of omission and commission. It also uses a team-centered version of the Information, Decision and Action cognitive model and “macro-cognitive” abstractions of crew behavior, as well as relevant findings from cognitive psychology literature and operating experience, to identify potential causes of failures and influencing factors during procedure-driven and knowledge-supported crew-plant interactions. The result is the set of identified HFEs and likely scenarios leading to each. The methodology itself is generic in the sense that it is compatible with various quantification methods, and can be adapted for use across different environments including nuclear, oil and gas, aerospace, aviation, and healthcare. - Highlights: • Produces a detailed, consistent, traceable, reproducible and properly documented HRA. • Uses “Crew Response Tree” to capture context associated with Human Failure Events. • Models dependencies between Human Failure Events and influencing factors. • Provides a human performance model for relating context to performance. • Provides a framework for relating Crew Failure Modes to its influencing factors.

  19. Predicting the Best Fit: A Comparison of Response Surface Models for Midazolam and Alfentanil Sedation in Procedures With Varying Stimulation.

    Science.gov (United States)

    Liou, Jing-Yang; Ting, Chien-Kun; Mandell, M Susan; Chang, Kuang-Yi; Teng, Wei-Nung; Huang, Yu-Yin; Tsou, Mei-Yung

    2016-08-01

    Selecting an effective dose of sedative drugs in combined upper and lower gastrointestinal endoscopy is complicated by varying degrees of pain stimulation. We tested the ability of 5 response surface models to predict depth of sedation after administration of midazolam and alfentanil in this complex setting. The procedure was divided into 3 phases: esophagogastroduodenoscopy (EGD), colonoscopy, and the time interval between the 2 (intersession). The depth of sedation in 33 adult patients was monitored by Observer's Assessment of Alertness/Sedation scores. A total of 218 combinations of midazolam and alfentanil effect-site concentrations derived from pharmacokinetic models were used to test 5 response surface models in each of the 3 phases of endoscopy. Model fit was evaluated with objective function value, corrected Akaike Information Criterion (AICc), and Spearman ranked correlation. A model was arbitrarily defined as accurate if the predicted probability met a preset criterion. The effect-site concentrations tested ranged from 1 to 76 ng/mL and from 5 to 80 ng/mL for midazolam and alfentanil, respectively. Midazolam and alfentanil had synergistic effects in colonoscopy and EGD, but additivity was observed in the intersession group. Adequate prediction rates were 84% to 85% in the intersession group, 84% to 88% during colonoscopy, and 82% to 87% during EGD. The reduced Greco model and the Hierarchy model with a fixed alfentanil concentration required for 50% of the patients to achieve the targeted response performed better, with comparable predictive strength. The reduced Greco model had the lowest AICc with strong correlation in all 3 phases of endoscopy. Dynamic, rather than fixed, γ and γalf in the Hierarchy model improved model fit. The reduced Greco model had the lowest objective function value and AICc and thus the best fit. This model was reliable, with acceptable predictive ability based on adequate clinical correlation. We suggest that this model has practical clinical value for patients undergoing procedures with varying stimulation.
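    A reduced Greco response surface can be sketched as a sigmoid of the normalized drug concentrations with one interaction parameter. Parameter values below are illustrative, not those fitted in the study:

```python
# Reduced Greco response-surface sketch: probability of adequate sedation from
# midazolam (cm) and alfentanil (ca) effect-site concentrations, each scaled by
# its C50; alpha > 0 encodes synergy, alpha = 0 pure additivity.
def greco_probability(cm, ca, c50m=40.0, c50a=40.0, alpha=2.0, gamma=3.0):
    um, ua = cm / c50m, ca / c50a
    drive = um + ua + alpha * um * ua          # combined normalized drug effect
    return drive**gamma / (1.0 + drive**gamma)

p_mid = greco_probability(20, 0)     # midazolam alone, at half its C50
p_alf = greco_probability(0, 20)     # alfentanil alone, at half its C50
p_combo = greco_probability(20, 20)  # combination: synergy exceeds the sum
print(round(p_mid, 2), round(p_combo, 2))
```

    Setting alpha = 0 collapses the surface to the additive case the study observed during the intersession phase, while positive alpha reproduces the synergy seen during EGD and colonoscopy.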

  20. Modeling epileptic brain states using EEG spectral analysis and topographic mapping.

    Science.gov (United States)

    Direito, Bruno; Teixeira, César; Ribeiro, Bernardete; Castelo-Branco, Miguel; Sales, Francisco; Dourado, António

    2012-09-30

    Changes in the spatio-temporal behavior of the brain's electrical activity are believed to be associated with epileptic brain states. We propose a novel methodology to identify the different states of the epileptic brain, based on topographic mapping of the time-varying relative power of the delta, theta, alpha, beta and gamma frequency sub-bands, estimated from EEG. Using the normalized-cuts segmentation algorithm, points of interest are identified in the topographic mappings and their trajectories over time are used to find relations with epileptogenic propagations in the brain. These trajectories are used to train a Hidden Markov Model (HMM), which models the different epileptic brain states and the transitions among them. Applied to 10 patients suffering from focal seizures, with a total of 30 seizures over 497.3 h of data, the methodology shows good results (an average point-by-point accuracy of 89.31%) for the identification of the four brain states: interictal, preictal, ictal and postictal. The results suggest that the spatio-temporal dynamics captured by the proposed methodology are related to the epileptic brain states and transitions involved in focal seizures. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Inequality and Procedural Justice in Social Dilemmas

    NARCIS (Netherlands)

    Aksoy, Ozan; Weesie, Jeroen

    2009-01-01

    This study investigates the influence of resource inequality and the fairness of the allocation procedure of unequal resources on cooperative behavior in social dilemmas. We propose a simple formal behavioral model that incorporates conflicting selfish and social motivations. This model allows us to

  2. histoneHMM: Differential analysis of histone modifications with broad genomic footprints

    Czech Academy of Sciences Publication Activity Database

    Heinig, M.; Colomé-Tatché, M.; Taudt, A.; Rintisch, C.; Schafer, S.; Pravenec, Michal; Hubner, N.; Vingron, M.; Johannes, F.

    2015-01-01

    Roč. 16, Feb 22 (2015), s. 60 ISSN 1471-2105 R&D Projects: GA MŠk(CZ) 7E10067; GA ČR(CZ) GA13-04420S Institutional support: RVO:67985823 Keywords: ChIP-seq * histone modifications * Hidden Markov model * computational biology * differential analysis Subject RIV: EB - Genetics; Molecular Biology Impact factor: 2.435, year: 2015

  3. Theory and procedures for finding a correct kinetic model for the bacteriorhodopsin photocycle.

    Science.gov (United States)

    Hendler, R W; Shrager, R; Bose, S

    2001-04-26

    In this paper, we present the implementation and results of new methodology based on linear algebra. The theory behind these methods is covered in detail in the Supporting Information, available electronically (Shrager and Hendler). In brief, the methods presented search through all possible forward sequential submodels in order to find candidates that can be used to construct a complete model for the BR photocycle. The methodology is limited to forward sequential models; if no such models are compatible with the experimental data, none will be found. The procedures apply objective tests and filters to eliminate possibilities that cannot be correct, thus cutting the total number of candidate sequences to be considered. In the current application, which uses six exponentials, the total sequences were cut from 1950 to 49. The remaining sequences were further screened using known experimental criteria. The approach led to a solution which consists of a pair of sequences, one with 5 exponentials, BR* → L(f) → M(f) → N → O → BR, and the other with three exponentials, BR* → L(s) → M(s) → BR. The deduced complete kinetic model for the BR photocycle is thus either a single photocycle branched at the L intermediate or a pair of two parallel photocycles. Reasons for preferring the parallel photocycles are presented. Synthetic data constructed on the basis of the parallel photocycles were indistinguishable from the experimental data in a number of analytical tests that were applied.
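    The link between a forward sequential scheme and the observed exponentials can be sketched in a few lines: for a sequential photocycle A → B → C → ground, the eigenvalues of the first-order rate matrix are (minus) the exponential rates seen in the data. Rate constants here are illustrative:

```python
import numpy as np

# Rate matrix for the forward sequential scheme A -> B -> C -> ground,
# d[state]/dt = K @ [state]; K is lower-triangular for a forward chain.
k1, k2, k3 = 5.0, 1.0, 0.2
K = np.array([[-k1,  0.0,  0.0],
              [ k1, -k2,  0.0],
              [ 0.0,  k2, -k3]])

# The observed exponential decay rates are the negated eigenvalues of K;
# for a forward chain they are exactly the individual rate constants.
rates = np.sort(-np.linalg.eigvals(K).real)
print(rates)
```

    The model search in the paper runs this logic in reverse: given the fitted exponential rates, it enumerates the forward orderings whose rate matrices could have produced them, then filters the candidates against the spectral data.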

  4. Precipitation projections under GCMs perspective and Turkish Water Foundation (TWF) statistical downscaling model procedures

    Science.gov (United States)

    Dabanlı, İsmail; Şen, Zekai

    2018-04-01

    The statistical climate downscaling model of the Turkish Water Foundation (TWF) is further developed and applied to a set of monthly precipitation records. The model is structured in two phases, spatial (regional) and temporal downscaling of global circulation model (GCM) scenarios. The TWF model takes into consideration the regional dependence function (RDF) for the spatial structure and the Markov whitening process (MWP) for the temporal characteristics of the records to set projections. The impact of climate change on monthly precipitation is studied by downscaling Intergovernmental Panel on Climate Change-Special Report on Emission Scenarios (IPCC-SRES) A2 and B2 emission scenarios from the Max Planck Institute (EH40PYC) and the Hadley Centre (HadCM3). The main purposes are to explain the TWF statistical climate downscaling model procedures and to present the validation tests, which are rated as "very good" for all stations except one (Suhut) station in the Akarcay basin, in the west central part of Turkey. Even though the validation score is a bit lower at the Suhut station, the results are still "satisfactory." It is, therefore, possible to say that the TWF model has reasonably acceptable skill for highly accurate estimation regarding the standard deviation ratio (SDR), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS) criteria. Based on the validated model, precipitation predictions are generated from 2011 to 2100 by using the 30-year reference observation period (1981-2010). Precipitation arithmetic average and standard deviation have less than 5% error for the EH40PYC and HadCM3 SRES (A2 and B2) scenarios.
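    The three validation criteria named above have standard definitions that can be written down directly (synthetic observed/simulated series here, not the Akarcay basin records; sign conventions for PBIAS vary between sources):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; here positive means the simulation underestimates."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100 * np.sum(obs - sim) / np.sum(obs)

def sdr(obs, sim):
    """Standard deviation ratio: how well variability is reproduced."""
    return np.std(sim) / np.std(obs)

obs = np.array([30.0, 55.0, 80.0, 40.0, 20.0, 65.0])   # e.g. monthly precip, mm
sim = np.array([32.0, 50.0, 78.0, 43.0, 18.0, 60.0])
print(round(nse(obs, sim), 3), round(pbias(obs, sim), 1), round(sdr(obs, sim), 2))
```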

  5. Long-range propagation of plasmon and phonon polaritons in hyperbolic-metamaterial waveguides

    Science.gov (United States)

    Babicheva, Viktoriia E.

    2017-12-01

    We study photonic multilayer waveguides that include layers of materials and metamaterials with hyperbolic dispersion (HMMs). We consider the long-range propagation of plasmon and phonon polaritons at the dielectric-HMM interface in different waveguide geometries (a single boundary or different layers of symmetric cladding). In contrast to the traditional analysis of geometrical parameters, we place the emphasis on the optical properties of the constituent materials: solving the dispersion equations, we analyze how dielectric and HMM permittivities affect the propagation length and mode size of the waveguide eigenmodes. We derive figures of merit that should be used for each waveguide in a broad range of permittivity values and compare them with plasmonic waveguides. We show that the conventional plasmonic quality factor, the ratio of the real to imaginary parts of the permittivity, is not applicable to waveguides with complex structure. Both telecommunication wavelengths and mid-infrared spectral ranges are of interest considering recent advances in van der Waals materials, such as hexagonal boron nitride. We evaluate the performance of waveguides with hexagonal boron nitride in the range where it possesses hyperbolic dispersion (wavelengths of 6.3-7.3 μm), and we show that these waveguides with natural hyperbolic properties have longer propagation lengths than metal-based HMM waveguides.

  6. Optimization of a near-field thermophotovoltaic system operating at low temperature and large vacuum gap

    Science.gov (United States)

    Lim, Mikyung; Song, Jaeman; Kim, Jihoon; Lee, Seung S.; Lee, Ikjin; Lee, Bong Jae

    2018-05-01

    The present work achieves a strong enhancement in the performance of a near-field thermophotovoltaic (TPV) system operating at low temperature and large vacuum-gap width by introducing a hyperbolic-metamaterial (HMM) emitter, multilayered graphene, and an Au backside reflector. Design variables for the HMM emitter and the multilayered-graphene-covered TPV cell are optimized for maximizing the power output of the near-field TPV system with a genetic algorithm. The near-field TPV system with the optimized configuration yields a 24.2-fold enhancement in power output compared with the system with a bulk emitter and a bare TPV cell. Through the analysis of the radiative heat transfer together with surface-plasmon-polariton (SPP) dispersion curves, it is found that coupling of SPPs generated from both the HMM emitter and the multilayered-graphene-covered TPV cell plays a key role in the substantial increase in heat transfer even at a 200-nm vacuum gap. Further, the backside reflector at the bottom of the TPV cell significantly increases not only the conversion efficiency but also the power output by generating additional polariton modes which can be readily coupled with the existing SPPs of the HMM emitter and the multilayered-graphene-covered TPV cell.

  7. The influence of different measurement structures on NRTA test procedures

    International Nuclear Information System (INIS)

    Beedgen, R.

    1986-01-01

    The development of sequential statistical test procedures in the area of near real time material accountancy (NRTA) has mostly assumed a fixed measurement model of a given model facility. In this paper, different measurement models (dispersion matrices) for a sequence of balance periods are studied. They are used to compare the detection probabilities of three different sequential test procedures for losses of material. It is shown how different plant models influence the sensitivity of the specified tests. The optimal loss patterns in each measurement situation are of great importance for this analysis.

  8. Natural language generation of surgical procedures.

    Science.gov (United States)

    Wagner, J C; Rogers, J E; Baud, R H; Scherrer, J R

    1999-01-01

    A number of compositional Medical Concept Representation systems are being developed. Although these provide for a detailed conceptual representation of the underlying information, they have to be translated back to natural language for use by end-users and applications. The GALEN programme has been developing one such representation, and we report here on a tool developed to generate natural language phrases from the GALEN conceptual representations. This tool can be adapted to different source modelling schemes and to different destination languages or sublanguages of a domain. It is based on a multilingual approach to natural language generation, realised through a clean separation of the domain model from the linguistic model and their link by well defined structures. Specific knowledge structures and operations have been developed for bridging between the modelling 'style' of the conceptual representation and natural language. Using the example of the scheme developed for modelling surgical operative procedures within the GALEN-IN-USE project, we show how the generator is adapted to such a scheme. The basic characteristics of the surgical procedures scheme are presented together with the basic principles of the generation tool. Using worked examples, we discuss the transformation operations which change the initial source representation into a form which can more directly be translated to a given natural language. In particular, the linguistic knowledge which has to be introduced--such as definitions of concepts and relationships--is described. We explain the overall generator strategy and how particular transformation operations are triggered by language-dependent and conceptual parameters. Results are shown for generated French phrases corresponding to surgical procedures from the urology domain.

  9. Optical absorption of hyperbolic metamaterial with stochastic surfaces

    DEFF Research Database (Denmark)

    Liu, Jingjing; Naik, Gururaj V.; Ishii, Satoshi

    2014-01-01

    We investigate the absorption properties of planar hyperbolic metamaterials (HMMs) consisting of metal-dielectric multilayers, which support propagating plane waves with anomalously large wavevectors and high photonic-density-of-states over a broad bandwidth. An interface formed by depositing...... indium-tin-oxide nanoparticles on an HMM surface scatters light into the high-k propagating modes of the metamaterial and reduces reflection. We compare the reflection and absorption from an HMM with the nanoparticle cover layer versus those of a metal film with the same thickness also covered...... with the nanoparticles. It is predicted that the super absorption properties of HMM show up when exceedingly large amounts of high-k modes are excited by strong plasmonic resonances. In the case that the coupling interface is formed by non-resonance scatterers, there is almost the same enhancement in the absorption...

  10. Continuous Change Detection and Classification Using Hidden Markov Model: A Case Study for Monitoring Urban Encroachment onto Farmland in Beijing

    Directory of Open Access Journals (Sweden)

    Yuan Yuan

    2015-11-01

    Full Text Available In this paper, we propose a novel method to continuously monitor land cover change using satellite image time series, which can extract comprehensive change information including change time, location, and “from-to” information. This method is based on a hidden Markov model (HMM) trained for each land cover class. Assuming a pixel’s initial class has been obtained, likelihoods of the corresponding model are calculated on incoming time series extracted with a temporal sliding window. By observing the likelihood change over the windows, land cover change can be precisely detected from the dramatic drop of likelihood. The established HMMs are then used for identifying the land cover class after the change. As a case study, the proposed method is applied to monitoring urban encroachment onto farmland in Beijing using 10-year MODIS time series from 2001 to 2010. The performance is evaluated on a validation set for different model structures and thresholds. Compared with other change detection methods, the proposed method shows superior change detection accuracy. In addition, it is also more computationally efficient.
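    The windowed-likelihood test described above can be sketched with a toy discrete HMM. Everything here (the two-state model, the symbol series, the 3-nat drop threshold) is invented for illustration and is not the paper's MODIS setup:

```python
import numpy as np

def hmm_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM (pi: initial, A: transition, B: emission probs)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Toy 2-state, 2-symbol "land cover" model with sticky states.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])

# A stable regime followed by a regime the model explains poorly.
series = [0] * 10 + [0, 1] * 5
w = 6  # temporal sliding-window length
lls = [hmm_loglik(series[i:i + w], pi, A, B) for i in range(len(series) - w + 1)]

# A change is flagged where the window likelihood drops sharply.
flagged = [i for i, ll in enumerate(lls) if ll < lls[0] - 3.0]
print(flagged)  # indices of windows overlapping the regime change
```

In the paper's setting, the per-class HMMs are trained on spectral time series rather than toy symbols, and the class after the change is then identified by whichever class model best explains the post-change windows.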

  11. Correction: Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS

    Directory of Open Access Journals (Sweden)

    Nicolas Sommet

    2017-12-01

    Full Text Available This article details a correction to the article: Sommet, N. and Morselli, D. (2017). Keep Calm and Learn Multilevel Logistic Modeling: A Simplified Three-Step Procedure Using Stata, R, Mplus, and SPSS. 'International Review of Social Psychology', 30(1), pp. 203–218. DOI: https://doi.org/10.5334/irsp.90

  12. Efficient and robust relaxation procedures for multi-component mixtures including phase transition

    International Nuclear Information System (INIS)

    Han, Ee; Hantke, Maren; Müller, Siegfried

    2017-01-01

    We consider a thermodynamic consistent multi-component model in multi-dimensions that is a generalization of the classical two-phase flow model of Baer and Nunziato. The exchange of mass, momentum and energy between the phases is described by additional source terms. Typically these terms are handled by relaxation procedures. Available relaxation procedures suffer from efficiency and robustness resulting in very costly computations that in general only allow for one-dimensional computations. Therefore we focus on the development of new efficient and robust numerical methods for relaxation processes. We derive exact procedures to determine mechanical and thermal equilibrium states. Further we introduce a novel iterative method to treat the mass transfer for a three component mixture. All new procedures can be extended to an arbitrary number of inert ideal gases. We prove existence, uniqueness and physical admissibility of the resulting states and convergence of our new procedures. Efficiency and robustness of the procedures are verified by means of numerical computations in one and two space dimensions. - Highlights: • We develop novel relaxation procedures for a generalized, thermodynamically consistent Baer–Nunziato type model. • Exact procedures for mechanical and thermal relaxation procedures avoid artificial parameters. • Existence, uniqueness and physical admissibility of the equilibrium states are proven for special mixtures. • A novel iterative method for mass transfer is introduced for a three component mixture providing a unique and admissible equilibrium state.

  13. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Full Text Available Separation procedures in drug Distribution Centers (DC are manual-based activities prone to failures such as shipping exchanged, expired or broken drugs to the customer. Two interventions seem promising in improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC and FMEA (Failure Mode and Effect Analysis) aimed at reducing the occurrence of failures in the manual separation of a drug DC. LC parameters enable generating an index to identify the recommended operators to perform the procedures. The FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. It also deploys the traditional FMEA severity index into two sub-indexes related to financial issues and damage to the company's image in order to characterize failure severity. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
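    As a sketch of the severity-splitting idea, the toy FMEA below scores hypothetical separation failure modes with the two severity sub-indexes named in the abstract. The aggregation (averaging the sub-indexes into a conventional severity × occurrence × detection RPN) and all numbers are illustrative assumptions, not the paper's actual weighting:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    occurrence: int     # 1-10: how often the failure occurs
    detection: int      # 1-10: 10 = hardest to detect before shipping
    sev_financial: int  # 1-10: financial-impact severity sub-index
    sev_image: int      # 1-10: damage-to-company-image severity sub-index

    def rpn(self) -> float:
        """Risk Priority Number with the two severity sub-indexes averaged
        (one plausible aggregation; the paper may weight them differently)."""
        severity = (self.sev_financial + self.sev_image) / 2
        return severity * self.occurrence * self.detection

# Hypothetical failure modes of the manual separation procedure.
modes = [
    FailureMode("expired drug shipped", occurrence=3, detection=7,
                sev_financial=8, sev_image=9),
    FailureMode("broken package shipped", occurrence=5, detection=4,
                sev_financial=4, sev_image=3),
]
for m in sorted(modes, key=lambda m: m.rpn(), reverse=True):
    print(m.name, m.rpn())
```

Ranking by RPN then prioritizes which failure modes of the selected operators to address first.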

  14. Updated Lagrangian finite element formulations of various biological soft tissue non-linear material models: a comprehensive procedure and review.

    Science.gov (United States)

    Townsend, Molly T; Sarigul-Klijn, Nesrin

    2016-01-01

    Simplified material models are commonly used in computational simulation of biological soft tissue as an approximation of the complicated material response and to minimize computational resources. However, the simulation of complex loadings, such as long-duration tissue swelling, necessitates complex models that are not easy to formulate. This paper strives to offer a comprehensive procedure for the updated Lagrangian formulation of various non-linear material models for the finite element analysis of biological soft tissues, including a definition of the Cauchy stress and the spatial tangential stiffness. The relationships between water content, osmotic pressure, ionic concentration and the pore pressure stress of the tissue are discussed, along with the merits of these models and their applications.

  15. ORNL-PWR BDHT analysis procedure: an overview

    International Nuclear Information System (INIS)

    Cliff, S.B.

    1978-01-01

    The key computer programs currently used by the analysis procedure of the ORNL-PWR Blowdown Heat Transfer Separate Effects Program are overviewed with particular emphasis placed on their interrelationships. The major modeling and calculational programs, COBRA, ORINC, ORTCAL, PINSIM, and various versions of RELAP4, are summarized and placed into the perspective of the procedure. The supportive programs, REDPLT, ORCPLT, BDHTPLOT, OXREPT, and OTOCI, and their uses are described

  16. Introduction of pattern recognition by MATLAB practice 2

    International Nuclear Information System (INIS)

    1999-06-01

    This book starts with an introduction to and examples of pattern recognition. It describes vectors and matrices, basic statistics and probability distributions, statistical decision theory and probability density functions, linear classifiers, vector quantization and clustering, GMM, PCA and the KL transform, LDA, ID3, nerve cell modeling, HMM, SVM and AdaBoost. Directions for MATLAB are given in the appendix.

  17. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions, and then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution free procedure to solve the problem. Furthermore, we develop an algorithm procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are also given to illustrate the results.
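    The minimax distribution-free procedure mentioned above rests on the classic worst-case bound on expected shortage at a reorder point r, E[(X − r)+] ≤ (sqrt(σ² + (r − μ)²) − (r − μ)) / 2, which holds for any lead-time-demand distribution with mean μ and standard deviation σ. A small sketch (the numbers are illustrative, not from the article):

```python
import math

def expected_shortage_upper_bound(mu, sigma, r):
    """Distribution-free upper bound on E[(X - r)+] over all distributions
    with mean mu and standard deviation sigma, as used in minimax
    lead-time-demand inventory models."""
    return 0.5 * (math.sqrt(sigma ** 2 + (r - mu) ** 2) - (r - mu))

# e.g. lead-time demand with mean 100 and sd 20, reorder point 120:
bound = expected_shortage_upper_bound(100, 20, 120)
print(bound)  # ≈ 4.14 units of expected shortage, at worst
```

Minimizing total cost against this bound (rather than against an assumed demand distribution) is what makes the resulting ordering strategy "distribution free".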

  18. Towards a generalization procedure for WRF mesoscale wind climatologies

    DEFF Research Database (Denmark)

    Hahmann, Andrea N.; Casso, P.; Campmany, E.

    We present a method for generalizing wind climatologies generated from mesoscale model output (e.g. the Weather Research and Forecasting (WRF) model). The generalization procedure is based on the Wind Atlas framework of WAsP and KAMM/WAsP, and has been used extensively in wind resource assessment in DTU Wind... generalized wind climatologies estimated by the microscale model WAsP and the methodology presented here. For the Danish wind measurements the mean absolute error in the ‘raw’ wind speeds is 9.2%, while the mean absolute error in the generalized wind speeds is 4.1%. The generalization procedure has been...

  19. A kingdom-specific protein domain HMM library for improved annotation of fungal genomes

    Directory of Open Access Journals (Sweden)

    Oliver Stephen G

    2007-04-01

    Full Text Available Abstract Background Pfam is a general-purpose database of protein domain alignments and profile Hidden Markov Models (HMMs), which is very popular for the annotation of sequence data produced by genome sequencing projects. Pfam provides models that are often very general in terms of the taxa that they cover and it has previously been suggested that such general models may lack some of the specificity or selectivity that would be provided by kingdom-specific models. Results Here we present a general approach to create domain libraries of HMMs for sub-taxa of a kingdom. Taking fungal species as an example, we construct a domain library of HMMs (called Fungal Pfam or FPfam) using sequences from 30 genomes, consisting of 24 species from the ascomycetes group and two basidiomycetes, Ustilago maydis, a fungal pathogen of maize, and the white rot fungus Phanerochaete chrysosporium. In addition, we include the microsporidian Encephalitozoon cuniculi, an obligate intracellular parasite, and two non-fungal species, the oomycetes Phytophthora sojae and Phytophthora ramorum, both plant pathogens. We evaluate the performance in terms of coverage against the original 30 genomes used in training FPfam and against five more recently sequenced fungal genomes that can be considered as an independent test set. We show that kingdom-specific models such as FPfam can find instances of both novel and well characterized domains, increase overall coverage and detect more domains per sequence with typically higher bitscores than Pfam for the same domain families. An evaluation of the effect of changing E-values on the coverage shows that the performance of FPfam is consistent over the range of E-values applied. Conclusion Kingdom-specific models are shown to provide improved coverage.
However, as the models become more specific, some sequences found by Pfam may be missed by the models in FPfam and some of the families represented in the test set are not present in FPfam

  20. Developing a spatial-statistical model and map of historical malaria prevalence in Botswana using a staged variable selection procedure

    Directory of Open Access Journals (Sweden)

    Mabaso Musawenkosi LH

    2007-09-01

    Full Text Available Abstract Background Several malaria risk maps have been developed in recent years, many from the prevalence of infection data collated by the MARA (Mapping Malaria Risk in Africa) project, and using various environmental data sets as predictors. Variable selection is a major obstacle due to analytical problems caused by over-fitting, confounding and non-independence in the data. Testing and comparing every combination of explanatory variables in a Bayesian spatial framework remains unfeasible for most researchers. The aim of this study was to develop a malaria risk map using a systematic and practicable variable selection process for spatial analysis and mapping of historical malaria risk in Botswana. Results Of 50 potential explanatory variables from eight environmental data themes, 42 were significantly associated with malaria prevalence in univariate logistic regression and were ranked by the Akaike Information Criterion. Those correlated with higher-ranking relatives of the same environmental theme were temporarily excluded. The remaining 14 candidates were ranked by selection frequency after running automated step-wise selection procedures on 1000 bootstrap samples drawn from the data. A non-spatial multiple-variable model was developed through step-wise inclusion in order of selection frequency. Previously excluded variables were then re-evaluated for inclusion, using further step-wise bootstrap procedures, resulting in the exclusion of another variable. Finally a Bayesian geo-statistical model using Markov Chain Monte Carlo simulation was fitted to the data, resulting in a final model of three predictor variables, namely summer rainfall, mean annual temperature and altitude. Each was independently and significantly associated with malaria prevalence after allowing for spatial correlation. This model was used to predict malaria prevalence at unobserved locations, producing a smooth risk map for the whole country. Conclusion We have
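    The bootstrap selection-frequency step can be illustrated in miniature. The sketch below uses ordinary least squares with forward AIC selection instead of the paper's logistic regressions, on synthetic data, purely to show the mechanics of ranking candidate variables by how often they survive step-wise selection across bootstrap resamples:

```python
import numpy as np

rng = np.random.default_rng(0)

def aic_ols(X, y):
    """AIC of an ordinary-least-squares fit (Gaussian likelihood up to a
    constant); X must already contain an intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def forward_select(X, y, names):
    """Greedy forward selection by AIC; returns the chosen variable names."""
    chosen, remaining = [], list(range(X.shape[1]))
    cur = np.ones((len(y), 1))  # intercept-only model
    best = aic_ols(cur, y)
    improved = True
    while improved and remaining:
        improved = False
        aic, j = min((aic_ols(np.hstack([cur, X[:, [c]]]), y), c)
                     for c in remaining)
        if aic < best:
            best, improved = aic, True
            cur = np.hstack([cur, X[:, [j]]])
            chosen.append(names[j])
            remaining.remove(j)
    return chosen

# Synthetic data: y depends on 'rain' and 'temp' but not on 'noise'.
n = 200
rain, temp, noise = rng.normal(size=(3, n))
X = np.column_stack([rain, temp, noise])
y = 2.0 * rain - 1.5 * temp + rng.normal(scale=0.5, size=n)

# Selection frequency over bootstrap resamples, as in the staged procedure.
names = ["rain", "temp", "noise"]
counts = {v: 0 for v in names}
for _ in range(100):
    idx = rng.integers(0, n, n)
    for v in forward_select(X[idx], y[idx], names):
        counts[v] += 1
print(counts)  # the informative variables are selected far more often
```

Variables with high selection frequency are then carried forward to the multiple-variable and, finally, the Bayesian geo-statistical model.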

  1. Procedure to derive analytical models for microwave noise performances of Si/SiGe:C and InP/InGaAs heterojunction bipolar transistors

    International Nuclear Information System (INIS)

    Ramirez-Garcia, E; Enciso-Aguilar, M A; Aniel, F P; Zerounian, N

    2013-01-01

    We present a useful procedure to derive simplified expressions to model the minimum noise factor and the equivalent noise resistance of Si/SiGe:C and InP/InGaAs heterojunction bipolar transistors (HBTs). An acceptable agreement between models and measurements at operation frequencies up to 18 GHz and at several bias points is demonstrated. The development procedure includes all the significant microwave noise sources of the HBTs. These relations should be useful to model F_min and R_n for state-of-the-art IV–IV and III–V HBTs. The method is the first step to derive noise analysis formulas valid for operation frequencies near the unity current gain frequency (f_T); however, to achieve this goal a necessary condition is to have access to HFN measurements up to this frequency regime. (paper)

  2. Validation of EOPs/FRGs Procedures Using LOHS Scenario

    International Nuclear Information System (INIS)

    Bajs, T.; Konjarek, D.; Vukovic, J.

    2012-01-01

    Validation of EOPs (Emergency Operating Procedures) and FRGs (Function Restoration Guidelines) can be achieved either on a plant full-scope simulator or through desk-top exercises. A desk-top exercise is conducted when the plant full-scope simulator is not suitable for the given scenario. In either validation case, a predefined scenario should be evaluated and possible branching foreseen. The scenario presented is LOHS, with the bleed-and-feed procedure initiated. The best-estimate light water reactor transient analysis code RELAP5/mod3.3 was used in the calculation. A standardized detailed plant model was used. Operator actions were modelled from the beginning of the scenario to its termination. (author)

  3. Helicopter Flight Procedures for Community Noise Reduction

    Science.gov (United States)

    Greenwood, Eric

    2017-01-01

    A computationally efficient, semiempirical noise model suitable for maneuvering flight noise prediction is used to evaluate the community noise impact of practical variations on several helicopter flight procedures typical of normal operations. Turns, "quick-stops," approaches, climbs, and combinations of these maneuvers are assessed. Relatively small variations in flight procedures are shown to cause significant changes to Sound Exposure Levels over a wide area. Guidelines are developed for helicopter pilots intended to provide effective strategies for reducing the negative effects of helicopter noise on the community. Finally, direct optimization of flight trajectories is conducted to identify low noise optimal flight procedures and quantify the magnitude of community noise reductions that can be obtained through tailored helicopter flight procedures. Physically realizable optimal turns and approaches are identified that achieve global noise reductions of as much as 10 dBA Sound Exposure Level.

  4. Effect of metallic and hyperbolic metamaterial surfaces on electric and magnetic dipole emission transitions

    DEFF Research Database (Denmark)

    Ni, X.; Naik, G. V.; Kildishev, A. V.

    2011-01-01

    Spontaneous emission patterns of electric and magnetic dipoles on different metallic surfaces and a hyperbolic metamaterial (HMM) surface were simulated using the dyadic Green’s function technique. The theoretical approach was verified by experimental results obtained by measuring angular......-dependent emission spectra of europium ions on top of different films. The results show the modified behavior of electric and magnetic dipoles on metallic and HMM surfaces. The results of numerical calculations agree well with experimental data....

  5. Effects of emotion on different phoneme classes

    Science.gov (United States)

    Lee, Chul Min; Yildirim, Serdar; Bulut, Murtaza; Busso, Carlos; Kazemzadeh, Abe; Lee, Sungbok; Narayanan, Shrikanth

    2004-10-01

    This study investigates the effects of emotion on different phoneme classes using short-term spectral features. In the research on emotion in speech, most studies have focused on prosodic features of speech. In this study, based on the hypothesis that different emotions have varying effects on the properties of the different speech sounds, we investigate the usefulness of phoneme-class level acoustic modeling for automatic emotion classification. Hidden Markov models (HMM) based on short-term spectral features for five broad phonetic classes are used for this purpose using data obtained from recordings of two actresses. Each speaker produces 211 sentences with four different emotions (neutral, sad, angry, happy). Using the speech material we trained and compared the performances of two sets of HMM classifiers: a generic set of ``emotional speech'' HMMs (one for each emotion) and a set of broad phonetic-class based HMMs (vowel, glide, nasal, stop, fricative) for each emotion type considered. Comparison of classification results indicates that different phoneme classes were affected differently by emotional change and that the vowel sounds are the most important indicator of emotions in speech. Detailed results and their implications on the underlying speech articulation will be discussed.

  6. Instruction of pattern recognition by MATLAB practice 1

    International Nuclear Information System (INIS)

    1999-06-01

    This book describes pattern recognition through MATLAB practice. It covers the possibilities and limits of AI, an introduction to pattern recognition, vectors and matrices, basic statistics and probability theory, random variables and probability distributions, statistical decision theory, data mining, the Gaussian mixture model, nerve cell modeling such as Hebb's learning rule and the LMS learning rule, genetic algorithms, dynamic programming and DTW, HMMs and the three HMM problems and their solutions, an introduction to SVM with the KKT condition and margin optimization, the kernel trick, and MATLAB practice.

  7. Examining procedural working memory processing in obsessive-compulsive disorder.

    Science.gov (United States)

    Shahar, Nitzan; Teodorescu, Andrei R; Anholt, Gideon E; Karmon-Presser, Anat; Meiran, Nachshon

    2017-07-01

    Previous research has suggested that a deficit in working memory might underlie the difficulty obsessive-compulsive disorder (OCD) patients have in controlling their thoughts and actions. However, a recent meta-analysis found only small effect sizes for working memory deficits in OCD. Recently, a distinction has been made between declarative and procedural working memory. Working memory in OCD was tested mostly using declarative measurements. However, OCD symptoms typically concern actions, making procedural working memory more relevant. Here, we tested the operation of procedural working memory in OCD. Participants with OCD and healthy controls performed a battery of choice reaction tasks under high and low procedural working memory demands. Reaction times (RT) were estimated using ex-Gaussian distribution fitting, revealing no group differences in the size of the RT distribution tail (i.e., τ parameter), known to be sensitive to procedural working memory manipulations. Group differences, unrelated to working memory manipulations, were found in the leading edge of the RT distribution and analyzed using a two-stage evidence accumulation model. Modeling results suggested that perceptual difficulties might underlie the current group differences. In conclusion, our results suggest that procedural working memory processing is most likely intact in OCD, and raise a novel, yet untested, assumption regarding perceptual deficits in OCD. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
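    Ex-Gaussian fitting, used above to separate the RT distribution tail (τ) from its leading edge (μ, σ), can be done simply by the method of moments, using mean = μ + τ, variance = σ² + τ², and third central moment = 2τ³. The sketch below applies this to simulated RTs; maximum likelihood is the more common choice in practice, and the parameter values here are invented:

```python
import numpy as np

def exgauss_moments(rt):
    """Method-of-moments estimates (mu, sigma, tau) for an ex-Gaussian,
    from: mean = mu + tau, var = sigma^2 + tau^2,
    third central moment = 2 * tau^3."""
    rt = np.asarray(rt, float)
    m, v = rt.mean(), rt.var()
    m3 = np.mean((rt - m) ** 3)
    tau = max(m3 / 2.0, 0.0) ** (1.0 / 3.0)
    sigma = np.sqrt(max(v - tau ** 2, 0.0))
    return m - tau, sigma, tau

# Simulated reaction times (seconds): Gaussian component + exponential tail.
rng = np.random.default_rng(1)
rt = rng.normal(0.40, 0.05, 100_000) + rng.exponential(0.15, 100_000)
mu, sigma, tau = exgauss_moments(rt)
print(mu, sigma, tau)  # close to the generating values 0.40, 0.05, 0.15
```

A selective slowing of responses under load would show up as a larger τ with μ and σ roughly unchanged, which is the signature the study looked for and did not find.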

  8. 3D/2D model-to-image registration by imitation learning for cardiac procedures.

    Science.gov (United States)

    Toth, Daniel; Miao, Shun; Kurzendorfer, Tanja; Rinaldi, Christopher A; Liao, Rui; Mansi, Tommaso; Rhode, Kawal; Mountney, Peter

    2018-05-12

    In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast levels, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, by introducing constraints, or identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application. This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images. Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was [Formula: see text] on 1000 test cases, superior to that of manual ([Formula: see text]) and gradient-based ([Formula: see text]) registration. High robustness is shown in 19 clinical CRT cases. Besides the proposed method's feasibility in a clinical environment, the evaluation has shown good accuracy and high robustness, indicating that it could be applied in image-guided interventions.

  9. Analytical solution to the 1D Lemaitre's isotropic damage model and plane stress projected implicit integration procedure

    DEFF Research Database (Denmark)

    Andriollo, Tito; Thorborg, Jesper; Hattel, Jesper Henri

    2016-01-01

    obtaining an integral relationship between total strain and effective stress. By means of the generalized binomial theorem, an expression in terms of infinite series is subsequently derived. The solution is found to simplify considerably existing techniques for material parameters identification based...... on optimization, as all issues associated with classical numerical solution procedures of the constitutive equations are eliminated. In addition, an implicit implementation of the plane stress projected version of Lemaitre's model is discussed, showing that the resulting algebraic system can be reduced...

  10. Design of Training Systems, Phase II Report, Volume III; Model Program Descriptions and Operating Procedures. TAEG Report No. 12-2.

    Science.gov (United States)

    Naval Training Equipment Center, Orlando, FL. Training Analysis and Evaluation Group.

    The Design of Training Systems (DOTS) project was initiated by the Department of Defense (DOD) to develop tools for the effective management of military training organizations. Volume 3 contains the model and data base program descriptions and operating procedures designed for phase 2 of the project. Flow charts and program listings for the…

  11. Arbuscular Mycorrhizal Fungi Community Structure, Abundance and Species Richness Changes in Soil by Different Levels of Heavy Metal and Metalloid Concentration

    Science.gov (United States)

    Krishnamoorthy, Ramasamy; Kim, Chang-Gi; Subramanian, Parthiban; Kim, Ki-Yoon; Selvakumar, Gopal; Sa, Tong-Min

    2015-01-01

    Arbuscular Mycorrhizal Fungi (AMF) play major roles in ecosystem functioning such as carbon sequestration, nutrient cycling, and plant growth promotion. It is important to know how this ecologically important soil microbial player is affected by soil abiotic factors, particularly heavy metals and metalloids (HMM). The objective of this study was to understand the impact of soil HMM concentration on AMF abundance and community structure in the contaminated sites of South Korea. Soil samples were collected from the vicinity of an abandoned smelter and subjected to three complementary methods, namely spore morphology, terminal restriction fragment length polymorphism (T-RFLP) and denaturing gradient gel electrophoresis (DGGE), for diversity analysis. Spore density was found to be significantly higher in highly contaminated soil compared to less contaminated soil. A spore morphological study revealed that the Glomeraceae family was the most abundant, followed by Acaulosporaceae and Gigasporaceae, in the vicinity of the smelter. T-RFLP and DGGE analysis confirmed the dominance of Funneliformis mosseae and Rhizophagus intraradices in all the study sites. Claroideoglomus claroideum, Funneliformis caledonium, Rhizophagus clarus and Funneliformis constrictum were found to be sensitive to high concentrations of soil HMM. Richness and diversity of the Glomeraceae family increased with significant increases in soil arsenic, cadmium and zinc concentrations. Our results revealed that soil HMM has a vital impact on AMF community structure, especially the abundance, richness and diversity of the Glomeraceae family. PMID:26035444

  12. CCHMM_PROF: a HMM-based coiled-coil predictor with evolutionary information

    DEFF Research Database (Denmark)

    Bartoli, Lisa; Fariselli, Piero; Krogh, Anders

    2009-01-01

    tools are available for predicting coiled-coil domains in protein sequences, including those based on position-specific score matrices and machine learning methods. RESULTS: In this article, we introduce a hidden Markov model (CCHMM_PROF) that exploits the information contained in multiple sequence...... alignments (profiles) to predict coiled-coil regions. The new method discriminates coiled-coil sequences with an accuracy of 97% and achieves a true positive rate of 79% with only 1% of false positives. Furthermore, when predicting the location of coiled-coil segments in protein sequences, the method reaches...

  13. AGREED-UPON PROCEDURES, PROCEDURES FOR AUDITING EUROPEAN GRANTS

    Directory of Open Access Journals (Sweden)

    Daniel Petru VARTEIU

    2016-12-01

    The audit of EU-funded projects is an audit based on agreed-upon procedures, which are established by the Managing Authority or the Intermediate Body. Agreed-upon procedures can be defined as engagements made in accordance with ISRS 4400, applicable to agreed-upon procedures, where the auditor undertakes to carry out the agreed-upon procedures and issue a report on factual findings. The report provided by the auditor does not express any assurance. It allows users to form their own opinions about the conformity of the expenses with the project budget as well as the eligibility of the expenses.

  14. Realistic Vascular Replicator for TAVR Procedures.

    Science.gov (United States)

    Rotman, Oren M; Kovarovic, Brandon; Sadasivan, Chander; Gruberg, Luis; Lieber, Baruch B; Bluestein, Danny

    2018-04-13

    Transcatheter aortic valve replacement (TAVR) is an over-the-wire procedure for treatment of severe aortic stenosis (AS). TAVR valves are conventionally tested using simplified left heart simulators (LHS). While those provide baseline performance reliably, their aortic root geometries are far from the anatomical in situ configuration, often overestimating the valves' performance. We report on a novel benchtop patient-specific arterial replicator designed for testing TAVR and training interventional cardiologists in the procedure. The Replicator is an accurate model of the human upper-body vasculature for training physicians in percutaneous interventions. It comprises a fully automated Windkessel mechanism to recreate physiological flow conditions. Calcified aortic valve models were fabricated and incorporated into the Replicator, in which an experienced cardiologist then performed the TAVR procedure using the Inovare valve. EOA, pressures, and angiograms were monitored pre- and post-TAVR. A St. Jude mechanical valve was tested as a reference that is less affected by the AS anatomy. Results for both valves in the Replicator were compared to their performance in a commercial ISO-compliant LHS. The AS anatomy in the Replicator resulted in a significant decrease in TAVR valve performance relative to the simplified LHS, with EOA and transvalvular pressures comparable to clinical data. Only a minor change was seen in the mechanical valve's performance. The Replicator proved to be an effective platform for TAVR testing. Unlike a simplified-geometry LHS, it conservatively provides clinically relevant outcomes and complements such testing. The Replicator can be most valuable for testing new valves under challenging patient anatomies, physician training, and procedural planning.
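    The Windkessel mechanism mentioned above can be illustrated with the classic two-element Windkessel model; the governing equation is standard, but the parameter values and inflow waveform below are arbitrary and not taken from the paper:

```python
import math

def windkessel_pressure(flow, dt, R=1.0, C=1.2, p0=80.0):
    """Integrate the two-element Windkessel model
    C dP/dt = Q(t) - P/R by forward Euler.
    R, C, p0 are illustrative, not physiological calibrations."""
    p = p0
    out = []
    for q in flow:
        p += (q - p / R) / C * dt
        out.append(p)
    return out

# Half-sinusoid systolic inflow for 0.3 s, zero flow in diastole
dt, period = 0.001, 0.8
t = [i * dt for i in range(int(period / dt))]
flow = [300.0 * math.sin(math.pi * ti / 0.3) if ti < 0.3 else 0.0 for ti in t]
pressure = windkessel_pressure(flow, dt)
print(round(min(pressure), 1), round(max(pressure), 1))
```

    The pressure rises while the simulated valve ejects and then decays exponentially with time constant RC, which is the qualitative behavior a physical Windkessel reproduces on the bench.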

  15. An Efficient Upscaling Procedure Based on Stokes-Brinkman Model and Discrete Fracture Network Method for Naturally Fractured Carbonate Karst Reservoirs

    KAUST Repository

    Qin, Guan

    2010-01-01

    Naturally-fractured carbonate karst reservoirs are characterized by various-sized solution caves that are connected via fracture networks at multiple scales. These complex geologic features cannot be fully resolved in reservoir simulations due to the underlying uncertainty in geologic models and the large computational resource requirement. They also introduce multiple flow physics, which adds to the modeling difficulty. It is thus necessary to develop a method to accurately represent the effect of caves, fractures and their interconnectivities in coarse-scale simulation models. In this paper, we present a procedure based on our previously proposed Stokes-Brinkman model (SPE 125593) and the discrete fracture network method for accurate and efficient upscaling of naturally fractured carbonate karst reservoirs.
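    The Stokes-Brinkman model referenced above couples free flow in the caves with Darcy flow in the porous matrix through a single momentum equation; a standard textbook form (the exact formulation in SPE 125593 may differ) is:

```latex
% One momentum equation valid in both regions, plus incompressibility:
%   mu  : fluid viscosity,   mu~ : effective (Brinkman) viscosity
%   K   : permeability tensor,  u : velocity,  p : pressure
\mu \mathbf{K}^{-1}\mathbf{u} \;+\; \nabla p \;-\; \tilde{\mu}\,\nabla^{2}\mathbf{u} \;=\; \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} \;=\; 0 .
% In the caves K -> infinity, so the Darcy term vanishes and Stokes flow
% is recovered; in the low-permeability matrix the Brinkman term is
% negligible and the system reduces to Darcy's law.
```

    This single-equation formulation is what lets one set of coarse-scale equations span caves, fractures, and matrix without tracking an explicit interface between flow regimes.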

  16. Statistical, Graphical, and Learning Methods for Sensing, Surveillance, and Navigation Systems

    Science.gov (United States)

    2016-06-28

    using experimental data from an indoor measurement campaign with FCC-compliant ultra-wideband...information from inertial measurements and situational context through Bayesian inference over an augmented hidden Markov model (HMM). In addition, the...We have taken one step further and introduced the concept of range likelihood (RL) of a set of range-related measurements. We show that such RLs

  17. Automatic segmentation of rotational x-ray images for anatomic intra-procedural surface generation in atrial fibrillation ablation procedures.

    Science.gov (United States)

    Manzke, Robert; Meyer, Carsten; Ecabert, Olivier; Peters, Jochen; Noordhoek, Niels J; Thiagalingam, Aravinda; Reddy, Vivek Y; Chan, Raymond C; Weese, Jürgen

    2010-02-01

    Since the introduction of 3-D rotational X-ray imaging, protocols for 3-D rotational coronary artery imaging have become widely available in routine clinical practice. Intra-procedural cardiac imaging in a computed tomography (CT)-like fashion has been particularly compelling due to the reduction of clinical overhead and the ability to characterize anatomy at the time of intervention. We previously introduced a clinically feasible approach for imaging the left atrium and pulmonary veins (LAPVs) with short contrast bolus injections and scan times of approximately 4-10 s. The resulting data have sufficient image quality for intra-procedural use during electro-anatomic mapping (EAM) and interventional guidance in atrial fibrillation (AF) ablation procedures. In this paper, we present a novel technique for intra-procedural surface generation which integrates fully-automated segmentation of the LAPVs for guidance in AF ablation interventions. Contrast-enhanced rotational X-ray angiography (3-D RA) acquisition in combination with filtered-back-projection-based reconstruction allows for volumetric interrogation of LAPV anatomy in near-real-time. An automatic model-based segmentation algorithm allows for fast and accurate LAPV mesh generation despite the challenges posed by image quality; relative to pre-procedural cardiac CT/MR, 3-D RA images suffer from more artifacts and reduced signal-to-noise. We validate our integrated method by comparing 1) automatic and manual segmentations of intra-procedural 3-D RA data, 2) automatic segmentations of intra-procedural 3-D RA and pre-procedural CT/MR data, and 3) intra-procedural EAM point cloud data with automatic segmentations of 3-D RA and CT/MR data. Our validation results for automatically segmented intra-procedural 3-D RA data show average segmentation errors of 1) approximately 1.3 mm compared with manual 3-D RA segmentations, 2) approximately 2.3 mm compared with automatic segmentation of pre-procedural CT/MR data and 3

  18. Assessment of the MPACT Resonance Data Generation Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Mark L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-12-26

    Currently, heterogeneous models are used to generate resonance self-shielded cross-section tables as a function of background cross section for important nuclides such as 235U and 238U, by performing CENTRM (Continuous Energy Transport Model) slowing-down calculations with MOC (Method of Characteristics) spatial discretization and ESSM (Embedded Self-Shielding Method) calculations to obtain the background cross sections. The resonance self-shielded cross-section tables are then converted into subgroup data, which are used to estimate problem-dependent self-shielded cross sections in MPACT (Michigan Parallel Characteristics Transport Code). Although this procedure has been developed, and the resulting resonance data have been generated and validated by benchmark calculations, no assessment has been performed to review whether the resonance data are properly generated by the procedure and utilized in MPACT. This study focuses on assessing the procedure and its proper use in MPACT.

  19. Numerical Simulation Procedure for Modeling TGO Crack Propagation and TGO Growth in Thermal Barrier Coatings upon Thermal-Mechanical Cycling

    Directory of Open Access Journals (Sweden)

    Ding Jun

    2014-01-01

    Full Text Available This paper reports a numerical simulation procedure to model crack propagation in the TGO layer and TGO growth near a surface groove in the metal substrate upon multiple thermal-mechanical cycles. The material property change method is employed to model TGO formation cycle by cycle, and the creep properties of the constituent materials are also incorporated. Two columns of repeated nodes are placed along the interface of the potential crack, and these nodes are bonded together as one node at each geometrical location. Using a critical crack opening displacement criterion, the onset of crack propagation in the TGO layer is determined by finite element analyses compared against a model without a predefined crack. The results of these analyses then determine the input values of the critical failure parameters for the subsequent analyses. The robust restart-analysis capabilities of ABAQUS help to implement the overall simulation of TGO crack propagation. The comparison of the final TGO deformation profile between numerical results and experimental observation shows good agreement, indicating the correctness and effectiveness of the present procedure, which can guide the prediction of TGO failure in future design and optimization of TBC systems.

  20. Effect of three different bariatric obesity surgery procedures on nutrient and energy digestibility using a swine experimental model.

    Science.gov (United States)

    Gandarillas, Mónica; Hodgkinson, Suzanne Marie; Riveros, José Luis; Bas, Fernando

    2015-09-01

    Morbid obesity is a worldwide health concern that compromises the life quality and health status of obese human subjects. Bariatric surgery for treating morbid obesity remains one of the best alternatives to promote excess weight loss and to reduce co-morbidities. We have found no studies reporting nutrient and energy balance based on digestibility trials in humans following surgery. The purpose of this study was to determine protein, lipid, fiber, energy, calcium, and phosphorus digestibility in a swine model that underwent ileal transposition (IT), sleeve gastrectomy with ileal transposition (SGIT), Roux-en-Y gastric bypass (RYGBP), or a sham operation (SHAM). Thirty-two pigs were randomly assigned to four laparoscopic procedures: IT (n = 8), RYGBP (n = 8), SGIT (n = 8), and Sham-operated pigs (n = 8). From day 0 to day 130 postsurgery, pigs were weighed monthly to determine live weight, and monthly weight gain was calculated. Food intake on a metabolic-weight basis was calculated from ad libitum food intake at day 130. Swine were fitted into metabolic crates to determine digestibility coefficients of dry matter, protein, fat, fiber, ash, energy, calcium, and phosphorus from day 130. A one-way ANOVA and Student-Newman-Keuls test were used to detect differences in weight, food intake, and digestibility coefficients. Digestibility values for dry matter, fiber, phosphorus, and energy showed no differences among groups (P > 0.05). However, significant differences (P ≤ 0.05) were encountered among groups for fat, protein, ash, and calcium digestibilities. The RYGBP procedure, when applied to the pig model, significantly reduced calcium, fat, and ash digestibility, which did not occur with the SGIT or IT procedure, when compared with Sham-operated animals. © 2015 by the Society for Experimental Biology and Medicine.
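    The digestibility coefficients above are conventionally computed as apparent digestibility from intake and fecal output collected in the metabolic crates; a minimal sketch (the 500 g / 90 g figures are purely illustrative, not data from the study):

```python
def apparent_digestibility(intake_g, fecal_g):
    """Apparent digestibility coefficient of a nutrient:
    the fraction of the ingested nutrient not recovered in feces."""
    if intake_g <= 0:
        raise ValueError("intake must be positive")
    return (intake_g - fecal_g) / intake_g

# e.g. 500 g of protein ingested, 90 g recovered in feces
coef = apparent_digestibility(500.0, 90.0)
print(coef)  # 0.82
```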

  1. A Semi-Continuous State-Transition Probability HMM-Based Voice Activity Detector

    Directory of Open Access Journals (Sweden)

    H. Othman

    2007-02-01

    Full Text Available We introduce an efficient hidden Markov model-based voice activity detection (VAD) algorithm with time-variant state-transition probabilities in the underlying Markov chain. The transition probabilities vary in an exponential charge/discharge scheme and are softly merged with state conditional likelihood into a final VAD decision. Working in the domain of ITU-T G.729 parameters, with no additional cost for feature extraction, the proposed algorithm significantly outperforms G.729 Annex B VAD while providing a balanced tradeoff between clipping and false detection errors. The performance compares very favorably with the adaptive multirate VAD, option 2 (AMR2).
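    The exact charge/discharge scheme is specific to the paper, but the general idea of a time-variant self-transition probability merged with state likelihoods can be sketched as follows; the Gaussian frame-energy likelihoods and every parameter value here are invented for illustration:

```python
import math

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def vad(frames, mu=(0.5, 3.0), var=(0.25, 1.0),
        p_min=0.6, p_max=0.99, tau=5.0):
    """Two-state (0 = noise, 1 = speech) forward pass with a time-variant
    self-transition probability that 'charges' exponentially the longer
    the detector stays in its current winning state (a hangover effect)."""
    post = [0.5, 0.5]
    dwell = 0.0                     # frames spent in the current state
    decisions = []
    for x in frames:
        # exponential charge of the self-transition probability
        p_stay = p_max - (p_max - p_min) * math.exp(-dwell / tau)
        a = [[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]
        like = [gauss(x, mu[s], var[s]) for s in (0, 1)]
        # soft merge: transition prior times state-conditional likelihood
        new = [like[s] * sum(post[r] * a[r][s] for r in (0, 1)) for s in (0, 1)]
        z = sum(new) or 1e-12
        post = [v / z for v in new]
        d = int(post[1] > post[0])
        dwell = dwell + 1 if decisions and d == decisions[-1] else 0.0
        decisions.append(d)
    return decisions

# Low-energy frames, a speech burst, then low energy again
frames = [0.4, 0.6, 0.5, 3.2, 2.8, 3.1, 0.5, 0.4]
print(vad(frames))
```

    The charge/discharge prior makes leaving a long-held state progressively harder, which is one way to trade clipping errors against false detections.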

  3. Quantization Procedures

    International Nuclear Information System (INIS)

    Cabrera, J. A.; Martin, R.

    1976-01-01

    We present in this work a review of the conventional quantization procedure, the one proposed by I.E. Segal, and a new quantization procedure, similar to the latter, for use in nonlinear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures are equivalent, but for nonlinear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs

  4. Inference-based procedural modeling of solids

    KAUST Repository

    Biggers, Keith

    2011-11-01

    As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.
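    The record above uses a Petri net to decide which patches may legally follow one another along the imposed parameterization. A minimal place/transition net illustrates the firing rule; the patch names and net topology below are invented for illustration, not taken from the paper:

```python
# Minimal place/transition Petri net: a transition is enabled when every
# input place holds a token; firing consumes those tokens and produces
# tokens in the output places.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy construction sequence: a leg patch may only attach after a torso.
net = PetriNet({"start": 1})
net.add_transition("place_torso", ["start"], ["torso_done"])
net.add_transition("attach_leg", ["torso_done"], ["leg_done"])

assert not net.enabled("attach_leg")   # illegal before the torso exists
net.fire("place_torso")
net.fire("attach_leg")
print(net.marking)
```

    In the paper's setting, transitions would correspond to fitting a sampled patch, and the marking encodes which attachment points are currently available, so only consistent patch sequences can fire.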

  5. Clustering Multivariate Time Series Using Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Shima Ghassempour

    2014-03-01

    Full Text Available In this paper we describe an algorithm for clustering multivariate time series with variables taking both categorical and continuous values. Time series of this type are frequent in health care, where they represent the health trajectories of individuals. The problem is challenging because categorical variables make it difficult to define a meaningful distance between trajectories. We propose an approach based on Hidden Markov Models (HMMs), where we first map each trajectory into an HMM, then define a suitable distance between HMMs and finally proceed to cluster the HMMs with a method based on a distance matrix. We test our approach on a simulated, but realistic, data set of 1,255 trajectories of individuals of age 45 and over, on a synthetic validation set with known clustering structure, and on a smaller set of 268 trajectories extracted from the longitudinal Health and Retirement Survey. The proposed method can be implemented quite simply using standard packages in R and Matlab, and may be a good candidate for solving the difficult problem of clustering multivariate time series with categorical variables using tools that do not require advanced statistical knowledge and are therefore accessible to a wide range of researchers.
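    The pipeline in the record above (fit a model per trajectory, compute a pairwise distance matrix, then cluster) can be sketched as follows. As a simplification, each trajectory is modeled here by a fully observed first-order Markov chain, i.e. a degenerate HMM with identity emissions, rather than a Baum-Welch-trained latent-state HMM, and the clustering step is reduced to a nearest-neighbour check; the symbols and trajectories are invented:

```python
import math
from collections import defaultdict

def fit_markov(seq, symbols, alpha=0.5):
    """Fit a first-order Markov chain (an HMM whose states are fully
    observed) to one categorical trajectory, with add-alpha smoothing."""
    counts = defaultdict(lambda: defaultdict(float))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    P = {}
    for a in symbols:
        tot = sum(counts[a].values()) + alpha * len(symbols)
        P[a] = {b: (counts[a][b] + alpha) / tot for b in symbols}
    return P

def loglik(seq, P):
    return sum(math.log(P[a][b]) for a, b in zip(seq, seq[1:]))

def distance(si, sj, Pi, Pj):
    """Symmetrized, length-normalised log-likelihood distance."""
    di = (loglik(si, Pi) - loglik(si, Pj)) / (len(si) - 1)
    dj = (loglik(sj, Pj) - loglik(sj, Pi)) / (len(sj) - 1)
    return 0.5 * (di + dj)

symbols = "HS"   # e.g. H = healthy, S = sick (illustrative labels)
trajs = ["HHHHSHHHHH", "HHHHHHSHHH", "HSHSHSHSHS", "SHSHSHSHSH"]
models = [fit_markov(t, symbols) for t in trajs]
D = [[distance(trajs[i], trajs[j], models[i], models[j])
      for j in range(4)] for i in range(4)]
# The two alternating trajectories resemble each other far more than
# they resemble the mostly-healthy ones.
print(D[2][3] < D[2][0])
```

    In the paper's full method the per-trajectory model is a latent-state HMM and the distance matrix feeds a standard matrix-based clustering routine, but the overall shape of the computation is the same.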

  6. Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.

    Science.gov (United States)

    Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E

    2010-08-01

    Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state dwell times follow exponential distributions and are highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions, the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.
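    Before the semi-Markov relaxation, the two-state HMM special case described above can be decoded with Viterbi over inter-spike intervals, using exponential ISI densities within each state and memoryless (geometric-dwell) transitions; all rates, transition probabilities, and data below are illustrative, not taken from the paper:

```python
import math

def viterbi_bursts(isis, rate_burst=20.0, rate_quiet=2.0, p_stay=0.9):
    """Label each inter-spike interval 'burst' or 'quiet' with a
    two-state HMM: exponential ISI densities within each state,
    memoryless state transitions, uniform initial distribution."""
    states = [("burst", rate_burst), ("quiet", rate_quiet)]

    def logf(x, rate):           # log density of Exponential(rate)
        return math.log(rate) - rate * x

    log_stay, log_switch = math.log(p_stay), math.log(1 - p_stay)
    V = [logf(isis[0], r) - math.log(2) for _, r in states]
    back = []
    for x in isis[1:]:
        ptr, nxt = [], []
        for s, (_, r) in enumerate(states):
            cands = [V[t] + (log_stay if t == s else log_switch)
                     for t in range(2)]
            best = max(range(2), key=lambda t: cands[t])
            ptr.append(best)
            nxt.append(cands[best] + logf(x, r))
        V, back = nxt, back + [ptr]
    s = max(range(2), key=lambda t: V[t])
    path = [s]
    for ptr in reversed(back):   # backtrack the most likely state path
        s = ptr[s]
        path.append(s)
    path.reverse()
    return [states[s][0] for s in path]

# Short ISIs (a burst) followed by long ISIs (quiescence), in seconds
isis = [0.01, 0.02, 0.015, 0.5, 0.8, 0.6]
print(viterbi_bursts(isis))
```

    An HSMM replaces the geometric dwell implied by `p_stay` with an explicit dwell-time distribution (e.g. gamma), which is the relaxation the paper fits with its Bayesian scheme.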

  7. Efficiency of performing pulmonary procedures in a shared endoscopy unit: procedure time, turnaround time, delays, and procedure waiting time.

    Science.gov (United States)

    Verma, Akash; Lee, Mui Yok; Wang, Chunhong; Hussein, Nurmalah B M; Selvi, Kalai; Tee, Augustine

    2014-04-01

    The purpose of this study was to assess the efficiency of performing pulmonary procedures in the endoscopy unit of a large teaching hospital. A prospective study from May 20 to July 19, 2013, was designed. The main outcome measures were procedure delays and their reasons, the duration of procedural steps starting from the patient's arrival in the endoscopy unit, turnaround time, total case durations, and procedure wait time. A total of 65 procedures were observed. The most common procedure was BAL (61%), followed by TBLB (31%). Overall, procedures for 35 (53.8%) of 65 patients were delayed by ≥30 minutes, 21/35 (60%) because of "spillover" of gastrointestinal and surgical cases into the time block of the pulmonary procedure. The time elapsed between the end of a pulmonary procedure and the start of the next procedure was ≥30 minutes in 8/51 (16%) of cases. In 18/51 (35%) of patients there was no next case in the room after completion of the pulmonary procedure. The average idle time of the room between the end of the pulmonary procedure and the start of the next case (or the end of the shift at 5:00 PM if there was no next case) was 58 ± 53 minutes. In 17/51 (33%) of patients the room's idle time was >60 minutes. A total of 52.3% of patients had a wait time >2 days and 11% had a wait time ≥6 days, the reason in 15/21 (71%) being unavailability of a slot. Most pulmonary procedures were delayed because of spillover of gastrointestinal and surgical cases into the block time allocated to pulmonary procedures. The most common reason for difficulty in scheduling a pulmonary procedure was slot unavailability, which increased procedure waiting time. Strategies to reduce procedure delays and turnaround times, along with improved scheduling methods, may have a favorable impact on the volume of procedures performed in the unit, thereby optimizing existing resources.

  8. Vision based flight procedure stereo display system

    Science.gov (United States)

    Shen, Xiaoyun; Wan, Di; Ma, Lan; He, Yuncheng

    2008-03-01

    A virtual reality flight procedure vision system is introduced in this paper. The digital flight map database is established based on a Geographic Information System (GIS) and high-definition satellite remote sensing photos. The flight approach area database is established through a computer 3D modeling system and GIS. The area texture is generated from remote sensing photos and aerial photographs at various levels of detail. According to the flight approach procedure, flight navigation information is linked to the database. The flight approach area view can be dynamically displayed according to the designed flight procedure. The flight approach area images are rendered in two channels, one for left-eye images and the other for right-eye images. Through the polarized stereoscopic projection system, pilots and aircrew can get a vivid 3D view of the flight destination approach area. Using this system in pilots' preflight preparation, the aircrew can obtain more vivid information about the flight destination approach area. This system can improve the aviator's self-confidence before carrying out the flight mission; accordingly, flight safety is improved. The system is also useful for validating visual flight procedure designs, and it supports flight procedure design work.

  9. A male-specific QTL for social interaction behavior in mice mapped with automated pattern detection by a hidden Markov model incorporated into newly developed freeware.

    Science.gov (United States)

    Arakawa, Toshiya; Tanave, Akira; Ikeuchi, Shiho; Takahashi, Aki; Kakihara, Satoshi; Kimura, Shingo; Sugimoto, Hiroki; Asada, Nobuhiko; Shiroishi, Toshihiko; Tomihara, Kazuya; Tsuchiya, Takashi; Koide, Tsuyoshi

    2014-08-30

    Owing to their complex nature, social interaction tests normally require the observation of video data by a human researcher, and thus are difficult to use in large-scale studies. We previously established a statistical method, a hidden Markov model (HMM), which enables the differentiation of two social states ("interaction" and "indifference"), or three social states ("sniffing", "following", and "indifference"), automatically in silico. Here, we developed freeware called DuoMouse for the rapid evaluation of social interaction behavior. This software incorporates five steps: (1) settings, (2) video recording, (3) tracking from the video data, (4) HMM analysis, and (5) visualization of the results. Using DuoMouse, we mapped a genetic locus related to social interaction. We previously reported that a consomic strain, B6-Chr6C(MSM), in which chromosome 6 is substituted with one from MSM/Ms, showed more social interaction than C57BL/6 (B6). We made four subconsomic strains, C3, C5, C6, and C7, each of which carries a shorter segment of chromosome 6 derived from B6-Chr6C, and conducted social interaction tests on these strains. DuoMouse indicated that C6, but not C3, C5, or C7, showed more interaction, sniffing, and following than B6, specifically in males. The data obtained by human observation showed high concordance with those from DuoMouse. The results indicated that the MSM-derived chromosomal region present in C6, but not in C3, C5, or C7, is associated with increased social behavior. This method of analyzing social interaction will aid primary screening for differences in social behavior in mice. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Research on harmonized molecular materials; Bunshi kyocho zairyo ni kansuru chosa kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-03-01

    Harmonized molecular materials (HMM) were researched to create functional materials adaptable to needs such as environmental harmony and highly efficient conversion in the post-industrial and aging society. Superior mechanisms function efficiently in organisms for the perception, transmission and processing of information, and for the transport and conversion of substances. These functions arise from harmonization between organic molecules, or between an organic molecule and a metal or inorganic substance. HMM is a key substance for realizing such organism-like functions artificially. The purpose of this research is to develop HMMs, reform production processes by innovating separation and conversion technologies, and finally realize molecular chemical plants. This research also develops highly efficient devices to contribute to the information society, and advances the industry of bio-functional materials such as highly sensitive biosensors. The functions, applications and creation technologies of three kinds of HMM (assembly, mesophase and microporous materials) were researched in fiscal 1995. 956 refs., 128 figs., 13 tabs.

  11. Change in ploidy status from hyperdiploid to near-tetraploid in multiple myeloma associated with bortezomib/lenalidomide resistance.

    Science.gov (United States)

    Pavlistova, Lenka; Zemanova, Zuzana; Sarova, Iveta; Lhotska, Halka; Berkova, Adela; Spicka, Ivan; Michalova, Kyra

    2014-01-01

    Ploidy is an important prognostic factor in the risk stratification of multiple myeloma (MM) patients. Patients with MM can be divided into two groups according to the modal number of chromosomes: nonhyperdiploid (NH-MM) and hyperdiploid (H-MM), which has a more favorable outcome. The two ploidy groups represent two different oncogenetic pathways determined at the premalignant stage. The ploidy subtype also persists during the course of the disease, even during progression after the therapy, with only very rare cases of ploidy conversion. The clinical significance of ploidy conversion and its relation to drug resistance have been previously discussed. Here, we describe a female MM patient with a rare change in her ploidy status from H-MM to NH-MM, detected by cytogenetic and molecular cytogenetic examinations of consecutive bone marrow aspirates. We hypothesize that ploidy conversion (from H-MM to NH-MM) is associated with disease progression and acquired resistance to bortezomib/lenalidomide therapy. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Can proceduralization support coping with the unexpected?

    International Nuclear Information System (INIS)

    Norros, Leena; Savioja, Paula; Liinasuo, Marja; Wahlstrom, Mikael

    2014-01-01

    Operations of safety critical industries unquestionably require a diversity of technical and organizational control measures to increase the stability and predictability of complex sociotechnical systems. Nevertheless, experiences from recent severe accidents and results of safety research have questioned the effectiveness of the prevailing safety management strategy, which relies mainly on standardization and designed-in defenses. This paper discusses the identified need to balance stability and flexibility in a concrete safety issue, i.e., proceduralization. The main research problem of our study is whether procedure-guided practice can offer sufficient support for the flexibility of operating activity. We frame our study with the help of a model that explains different aspects of procedures, and then elaborate how these aspects were considered empirically in our 3-phase study. In the first study we interviewed 62 main control room operators and asked how they consider procedures to support balancing. In the second study we observed in detail 12 NPP operator crews' activity in a simulated loss-of-coolant accident. In the third study we surveyed 5 procedure designers about their conceptions concerning procedure guidance in operator work. Drawing on interview and behavioral data, we analyzed the personnel's stance toward balancing flexibility and stability, and how these conceptions are reflected in the practices of procedure usage. Our results demonstrate that operators are aware of the need to balance flexibility and stability and consider successful balancing to represent 'good' professional action. In actual action, however, many operators tend toward more straightforward following of procedures. Designers also see the capability to balance stability and flexibility as a key operator competence but describe actual acting simply as procedure-following. According to the documents of the nuclear community, procedure

  14. Emotion regulation strategies: procedure modeling of J. Gross and cultural activity approach

    Directory of Open Access Journals (Sweden)

    Elena I. Pervichko

    2015-03-01

    Full Text Available The first part of this paper argued the desirability of a structural-dynamic model of emotion regulation within the theoretical and methodological framework of the cultural activity paradigm, together with the construction of a psychologically based typology of emotion regulation strategies in norm and pathology and of the psychological mechanisms enabling the regulation of emotions. This conclusion was based on an analysis of the basic concepts and paradigms in which the issue of emotion regulation is studied: cognitive and psychoanalytic approaches, theories of emotional development and emotional intelligence, and the cultural activity approach. The paper considers the process model of emotion regulation by J. Gross, identifies emotion regulation strategies, evaluates their effectiveness, and discusses the possibilities and limitations of the model. Based on a review of current research, it is concluded that the classification of the wide range of regulatory strategies remains an open issue. The author's definition of emotion regulation is then given: emotion regulation is deemed a set of mental processes, psychological mechanisms and regulatory strategies that people use to preserve the capacity for productive activity in a situation of emotional stress, to ensure optimal impulse and emotion control, and to maintain arousal at an optimal level. The second part of this paper provides a general description of emotion regulation strategies, the approach to their typology, and the psychological mechanisms of emotion regulation that underlie this typology, i.e. the main elements of the structural-dynamic model of emotion regulation. The work shows the theoretical and methodological importance of signs and symbols and of personal reflection. A diagnostic system that allows a wide range of emotion regulation strategies to be identified empirically is suggested. The psychological mechanisms used by the subject to solve the problem of emotional

  15. FEM modeling and histological analyses on thermal damage induced in facial skin resurfacing procedure with different CO2 laser pulse duration

    Science.gov (United States)

    Rossi, Francesca; Zingoni, Tiziano; Di Cicco, Emiliano; Manetti, Leonardo; Pini, Roberto; Fortuna, Damiano

    2011-07-01

    Laser light is nowadays routinely used in aesthetic treatments of facial skin, such as laser rejuvenation and scar removal. The induced thermal damage may be varied by setting different laser parameters in order to obtain a particular aesthetic result. In this work, a theoretical study of the thermal damage induced in the deep tissue is proposed, considering different laser pulse durations. The study is based on the Finite Element Method (FEM): a bidimensional model of the facial skin is depicted in axial symmetry, considering the different skin structures and their different optical and thermal parameters; the conversion of laser light into thermal energy is modeled by the bio-heat equation. The light source is a CO2 laser with different pulse durations. The model enabled the study of the thermal damage induced in the skin by calculating the Arrhenius integral. The post-processing results enabled the study, in space and time, of the temperature dynamics induced in the facial skin, the study of possible cumulative effects of subsequent laser pulses, and the optimization of the procedure for applications in dermatological surgery. The calculated data were then validated in an experimental measurement session performed in a sheep animal model. Histological analyses were performed on the treated tissues, evidencing the spatial distribution and extent of the thermal damage in the collagenous tissue. Modeling and experimental results were in good agreement, and they were used to design a new optimized laser-based skin resurfacing procedure.
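As a concrete illustration of the damage estimate described above, the Arrhenius integral Ω(t) = ∫ A·exp(−Ea/(R·T(t))) dt can be evaluated numerically over a simulated temperature history. The sketch below uses frequency-factor and activation-energy values commonly quoted for skin in the thermal-damage literature, not the parameters of this study, and a made-up temperature trace standing in for the FEM output:

```python
import math

# Arrhenius damage integral: Omega = ∫ A * exp(-Ea / (R * T(t))) dt.
# A and Ea are illustrative literature-style values for skin, not the
# parameters used in the paper.
A = 3.1e98      # frequency factor, 1/s
EA = 6.28e5     # activation energy, J/mol
R = 8.314       # gas constant, J/(mol K)

def arrhenius_damage(temps_kelvin, dt):
    """Trapezoidal integration of the damage rate over a temperature history."""
    rates = [A * math.exp(-EA / (R * T)) for T in temps_kelvin]
    return sum(0.5 * (r0 + r1) * dt for r0, r1 in zip(rates, rates[1:]))

# Hypothetical temperature history: ramp from body temperature (310 K)
# to ~338 K during a 0.1 s laser pulse, then exponential cooling.
temps = [310.0 + 28.0 * min(t / 0.1, 1.0) * math.exp(-max(t - 0.1, 0.0) / 0.3)
         for t in [i * 0.01 for i in range(200)]]
omega = arrhenius_damage(temps, 0.01)
```

A value of Ω ≥ 1 is conventionally read as irreversible thermal damage; at baseline body temperature the damage rate is negligible.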

  16. Calibration of semi-stochastic procedure for simulating high-frequency ground motions

    Science.gov (United States)

    Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert

    2013-01-01

    Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw  100 km).
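The core of such a semi-stochastic high-frequency procedure, combining a deterministic Fourier amplitude spectrum with random phase, can be sketched as follows. The target spectrum and the site-to-site variability term below are illustrative placeholders, not the calibrated source/path/site model of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 1024, 0.01
freqs = np.fft.rfftfreq(n, dt)

# Illustrative target Fourier amplitude spectrum (a simple high-cut
# shape, not the paper's source/path/site model).
target_fas = np.exp(-freqs / 15.0)
target_fas[0] = 0.0                      # no DC component

# Deterministic amplitude + random phase -> synthetic acceleration trace.
phase = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
spectrum = target_fas * np.exp(1j * phase)
accel = np.fft.irfft(spectrum, n)

# Random site-to-site variability: a lognormal scaling of the whole
# spectrum, analogous in spirit to the ~0.45 sigma adjustment above.
site_factor = np.exp(rng.normal(0.0, 0.45))
accel_site = accel * site_factor
```

Repeating the phase draw yields an ensemble of motions sharing one amplitude spectrum, which is the sense in which the method is "semi-stochastic".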

  17. Procedural wound geometry and blood flow generation for medical training simulators

    Science.gov (United States)

    Aras, Rifat; Shen, Yuzhong; Li, Jiang

    2012-02-01

    Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual items, are commonly used. As in other training scenarios, computer simulations can be an effective means for wound treatment training. For immersive and high-fidelity virtual reality applications, realistic 3D models are key components; however, creation of such models is a labor-intensive process. In this paper, we propose a procedural wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training scenarios without the need for labor-intensive remodeling of the 3D geometry. The procedural techniques described in this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation, we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture composed of a height field and a normal map, animated at each simulation step on the GPU. The procedural wound geometry and the blood flow model are applied to a thigh model, and the efficiency of the technique is demonstrated in a virtual surgery scene.
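The height-field-plus-normal-map representation used for the blood texture can be illustrated on the CPU: deriving per-texel normals from a height field by central differences is the standard step a GPU shader would perform each frame. The ripple pattern below is a hypothetical stand-in for one animated blood-flow frame:

```python
import numpy as np

def normal_map(height, strength=1.0):
    """Per-texel unit normals from a height field via central differences
    (np.roll wraps at the borders, which suits a tiling texture)."""
    dzdx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * 0.5
    dzdy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * 0.5
    nx, ny = -strength * dzdx, -strength * dzdy
    nz = np.ones_like(height)
    norm = np.sqrt(nx ** 2 + ny ** 2 + nz ** 2)
    return np.stack([nx / norm, ny / norm, nz / norm], axis=-1)

# A rippled height field standing in for an animated blood-flow frame.
y, x = np.mgrid[0:64, 0:64]
height = 0.2 * np.sin(x / 4.0) * np.cos(y / 6.0)
normals = normal_map(height)
# Flat regions yield normals pointing straight up, (0, 0, 1).
```

In the simulator this computation would live in a fragment shader, with the height field updated by the surface flow model each simulation step.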

  18. Animated pose templates for modeling and detecting human actions.

    Science.gov (United States)

    Yao, Benjamin Z; Nie, Bruce X; Liu, Zicheng; Zhu, Song-Chun

    2014-03-01

    This paper presents animated pose templates (APTs) for detecting short-term, long-term, and contextual actions from cluttered scenes in videos. Each pose template consists of two components: 1) a shape template with deformable parts represented in an And-node whose appearances are represented by the Histogram of Oriented Gradient (HOG) features, and 2) a motion template specifying the motion of the parts by the Histogram of Optical-Flows (HOF) features. A shape template may have more than one motion template represented by an Or-node. Therefore, each action is defined as a mixture (Or-node) of pose templates in an And-Or tree structure. While this pose template is suitable for detecting short-term action snippets in two to five frames, we extend it in two ways: 1) For long-term actions, we animate the pose templates by adding temporal constraints in a Hidden Markov Model (HMM), and 2) for contextual actions, we treat contextual objects as additional parts of the pose templates and add constraints that encode spatial correlations between parts. To train the model, we manually annotate part locations on several keyframes of each video and cluster them into pose templates using EM. This leaves the unknown parameters for our learning algorithm in two groups: 1) latent variables for the unannotated frames including pose-IDs and part locations, 2) model parameters shared by all training samples such as weights for HOG and HOF features, canonical part locations of each pose, coefficients penalizing pose-transition and part-deformation. To learn these parameters, we introduce a semi-supervised structural SVM algorithm that iterates between two steps: 1) learning (updating) model parameters using labeled data by solving a structural SVM optimization, and 2) imputing missing variables (i.e., detecting actions on unlabeled frames) with parameters learned from the previous step and progressively accepting high-score frames as newly labeled examples. This algorithm belongs to a
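When pose templates are chained by an HMM, as in the long-term-action extension above, decoding the most likely sequence of pose-IDs over frames is typically done with the Viterbi algorithm. A generic sketch with toy parameters (not the paper's learned model):

```python
import numpy as np

def viterbi(log_pi, log_A, log_obs):
    """Most likely state path given log initial probabilities, log
    transition matrix, and per-frame log observation scores (T x S)."""
    T, S = log_obs.shape
    delta = log_pi + log_obs[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # previous state -> next state
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_obs[t]
    path = [int(delta.argmax())]                 # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 2 pose templates, 5 frames; the observations gradually
# favor the second pose.
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.8, 0.2], [0.3, 0.7]])
log_obs = np.log([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.2, 0.8], [0.1, 0.9]])
path = viterbi(log_pi, log_A, log_obs)   # → [0, 0, 1, 1, 1]
```

The transition penalties play the role of the pose-transition coefficients mentioned above: they smooth the per-frame pose scores into a temporally coherent labeling.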

  19. FRP-RC Beam in Shear: Mechanical Model and Assessment Procedure for Pseudo-Ductile Behavior

    Directory of Open Access Journals (Sweden)

    Floriana Petrone

    2014-07-01

    Full Text Available This work deals with the development of a mechanics-based shear model for reinforced concrete (RC) elements strengthened in shear with fiber-reinforced polymer (FRP) and a design/assessment procedure capable of predicting the failure sequence of resisting elements: the yielding of existing transverse steel ties and the debonding of FRP sheets/strips, while checking the corresponding compressive stress in concrete. The research aims at the definition of an accurate capacity equation, consistent with the requirement of pseudo-ductile shear behavior of structural elements, that is, transverse steel ties yielding before FRP debonding and concrete crushing. For the purpose of validating the proposed model, an extended parametric study and a comparison against experimental results have been conducted: it is proven that the commonly accepted rule of taking the shear capacity of RC members strengthened in shear with FRP as the sum of the maximum contributions of both FRP and stirrups can lead to an unsafe overestimation of the shear capacity. This issue has been pointed out by some authors when comparing experimental shear capacity values with theoretical ones, but without a convincing explanation. In this sense, the proposed model also represents a valid instrument to better understand the mechanical behavior of FRP-RC beams in shear and to calculate their actual shear capacity.

  20. A Spatial Allocation Procedure to Downscale Regional Crop Production Estimates from an Integrated Assessment Model

    Science.gov (United States)

    Moulds, S.; Djordjevic, S.; Savic, D.

    2017-12-01

    The Global Change Assessment Model (GCAM), an integrated assessment model, provides insight into the interactions and feedbacks between physical and human systems. The land system component of GCAM, which simulates land use activities and the production of major crops, produces output at the subregional level, which must be spatially downscaled in order to be used with gridded impact assessment models. However, existing downscaling routines typically consider cropland as a homogeneous class and do not provide information about land use intensity or specific management practices such as irrigation and multiple cropping. This paper presents a spatial allocation procedure to downscale crop production data from GCAM to a spatial grid, producing a time series of maps which show the spatial distribution of specific crops (e.g. rice, wheat, maize) at four input levels (subsistence, low-input rainfed, high-input rainfed and high-input irrigated). The model algorithm is constrained by the available cropland at each time point and therefore implicitly balances extensification and intensification processes in order to meet global food demand. It utilises a stochastic approach such that an increase in the production of a particular crop is more likely to occur in grid cells with high biophysical suitability and neighbourhood influence, while a fall in production will occur more often in cells with lower suitability. User-supplied rules define the order in which specific crops are downscaled as well as the allowable transitions. A regional case study demonstrates the ability of the model to reproduce historical trends in India by comparing the model output with district-level agricultural inventory data. Lastly, the model is used to predict the spatial distribution of crops globally under various GCAM scenarios.
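The suitability-weighted stochastic allocation under a cropland constraint can be sketched as follows; the grid, weights, and step size are hypothetical, and the neighbourhood-influence term is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical grid: biophysical suitability in [0, 1] and per-cell
# cropland capacity in production units (names and numbers illustrative).
suitability = rng.uniform(0.0, 1.0, size=100)
capacity = np.full(100, 5.0)

def allocate(demand, suitability, capacity, step=1.0):
    """Stochastically distribute `demand` production units, preferring
    high-suitability cells and never exceeding per-cell capacity."""
    allocated = np.zeros_like(capacity)
    remaining = demand
    while remaining > 1e-9:
        free = capacity - allocated
        weights = suitability * (free > 1e-12)   # full cells drop out
        weights = weights / weights.sum()
        cell = rng.choice(weights.size, p=weights)
        amount = min(step, remaining, free[cell])
        allocated[cell] += amount
        remaining -= amount
    return allocated

allocated = allocate(120.0, suitability, capacity)
```

Because draws are proportional to suitability, high-suitability cells fill first on average, while the capacity check enforces the cropland constraint that balances extensification against intensification.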

  1. On the design of flight-deck procedures

    Science.gov (United States)

    Degani, Asaf; Wiener, Earl L.

    1994-01-01

    In complex human-machine systems, operations, training, and standardization depend on an elaborate set of procedures which are specified and mandated by the operational management of the organization. The intent is to provide guidance to the pilots and to ensure a logical, efficient, safe, and predictable means of carrying out the mission objectives. In this report we examine the issue of procedure use and design from a broad viewpoint. We recommend a process which we call 'The Four P's': philosophy, policies, procedures, and practices. We believe that if an organization commits to this process, it can create a set of procedures that are more internally consistent, less confusing, better respected by the flight crews, and that will lead to greater conformity. The 'Four-P' model, and the guidelines for procedural development in appendix 1, resulted from cockpit observations, extensive interviews with airline management and pilots, interviews and discussions at one major airframe manufacturer, and an examination of accident and incident reports. Although this report is based on airline operations, we believe that the principles may be applicable to other complex, high-risk systems, such as nuclear power production, manufacturing process control, space flight, and military operations.

  2. Engineering the propagation of high-k bulk plasmonic waves in multilayer hyperbolic metamaterials by multiscale structuring

    DEFF Research Database (Denmark)

    Zhukovsky, Sergei; Lavrinenko, Andrei; Sipe, J. E.

    2013-01-01

    , wavelength scale, the propagation of bulk plasmon polaritons in the resulting multiscale HMM is subject to photonic band gap phenomena. A great degree of control over such plasmons can be exerted by varying the superstructure geometry. As an example, Bragg reflection and Fabry-Pérot resonances...... are demonstrated in multiscale HMMs with periodic superstructures. More complicated, aperiodically ordered superstructures are also considered, with fractal Cantor-like multiscale HMMs exhibiting characteristic self-similar spectral signatures in the high-k band. The multiscale HMM concept is shown...

  3. Evaluation of the behavioral characteristics of the mdx mouse model of duchenne muscular dystrophy through operant conditioning procedures.

    Science.gov (United States)

    Lewon, Matthew; Peters, Christina M; Van Ry, Pam M; Burkin, Dean J; Hunter, Kenneth W; Hayes, Linda J

    2017-09-01

    The mdx mouse is an important nonhuman model for Duchenne muscular dystrophy (DMD) research. Characterizing the behavioral traits of the strain relative to congenic wild-type (WT) mice may enhance our understanding of the cognitive deficits observed in some humans with DMD and contribute to treatment development and evaluation. In this paper we report the results of a number of experiments comparing the behavior of mdx to WT mice in operant conditioning procedures designed to assess learning and memory. We found that mdx outperformed WT in all learning and memory tasks involving food reinforcement, and this appeared to be related to the differential effects of the food deprivation motivating operation on mdx mice. Conversely, WT outperformed mdx in an escape/avoidance learning task. These results suggest motivational differences between the strains and demonstrate the potential utility of operant conditioning procedures in the assessment of the behavioral characteristics of the mdx mouse. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. A model for evaluating steroids acting at the hypothalamus-pituitary axis using radioimmunoassay and related procedures

    International Nuclear Information System (INIS)

    Spona, J.; Bieglmayer, Ch.; Schroeder, R.; Poeckl, E.

    1978-01-01

    The relative affinity constants for binding of estrone (E₁), estriol (E₃), 17β-estradiol (E₂) and 17α-ethinyl-17β-estradiol (EE₂) to cytosol estrogen receptors of rat hypothalamus and pituitary were estimated by a radioligand-receptor assay procedure. The relative affinity constants in the hypothalamic system were 6.5×10⁻¹⁰ M for E₂, 1×10⁻⁹ M for EE₂ and 2×10⁻⁸ M for E₁ and E₃. The affinity constants were 1×10⁻⁹ M for E₂ and E₃ and 7×10⁻⁹ M for E₁ and E₃ when pituitary cytosol samples were used. Some discrepancies between biological activity and affinity for the estrogen receptor were noted. These may be due to differences in the metabolism and cellular uptake of the estrogens. The radioligand-receptor assay procedure may be useful in evaluating the action of estrogens and anti-estrogens acting at the hypothalamic and pituitary level. Sedimentation patterns of cytosol samples labelled with the estrogens used in this study revealed, upon ultracentrifugation, protein moieties sedimenting in the 8 S region. The potency of progesterone and D-Norgestrel to modulate the release of LH and FSH stimulated by luteinizing hormone-releasing hormone (LH-RH) in castrated female rats was found to correlate well with the biological activity of the progestogens. It is concluded that the radioligand-receptor assay procedure for estrogens and the in-vivo model for the evaluation of the central action of progestogens may be valuable tools for testing new steroids to be used in oral contraceptives. (author)

  5. The Econometric Procedures of Specific Transaction Identification

    Directory of Open Access Journals (Sweden)

    Doszyń Mariusz

    2017-06-01

    Full Text Available The paper presents econometric procedures for identifying specific transactions, in which atypical conditions or attributes may occur. These procedures are based on studentized and predictive residuals of accordingly specified econometric models. The dependent variable is the unit transaction price, and the explanatory variables are both the real properties' attributes and accordingly defined artificial binary variables. The utility of the proposed method has been verified by means of a real market database. The proposed procedures can be helpful during the property valuation process, making it possible to reject real properties that are specific (both from the point of view of the transaction conditions and the properties' attributes) and, consequently, to select an appropriate set of similar properties that are essential for the valuation process.
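The use of studentized residuals to flag specific (atypical) transactions can be sketched with ordinary least squares on simulated data; the attributes, sample size, and the |t| > 3 cut-off below are illustrative assumptions, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: unit price explained by two property attributes
# (e.g. area and a quality grade), with one deliberately atypical sale.
n = 60
X = np.column_stack([np.ones(n), rng.uniform(40, 120, n), rng.integers(1, 6, n)])
beta_true = np.array([500.0, 12.0, 80.0])
price = X @ beta_true + rng.normal(0, 50, n)
price[7] += 600.0                      # the atypical transaction

# OLS fit, then externally studentized residuals via the hat matrix.
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
resid = price - X @ beta
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages
p = X.shape[1]
s2 = resid @ resid / (n - p)
# Leave-one-out error variance, then the studentized residual per sale.
s2_i = ((n - p) * s2 - resid ** 2 / (1 - h)) / (n - p - 1)
t_resid = resid / np.sqrt(s2_i * (1 - h))

flagged = np.where(np.abs(t_resid) > 3.0)[0]    # candidate "specific" sales
```

Sales whose studentized residual exceeds the threshold would be excluded from the comparable set before valuation.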

  6. Accident Sequence Evaluation Program: Human reliability analysis procedure

    International Nuclear Information System (INIS)

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples) and more detailed definitions of some of the terms. 42 refs

  7. The radiation protection principles model as a tool in the e-waste procedures

    Energy Technology Data Exchange (ETDEWEB)

    Tsitomeneas, S. Th., E-mail: stsit@teipir.gr [Piraeus University of Applied Sciences, Aigaleo (Greece); Vourlias, K., E-mail: kvourlias@yahoo.gr [Aristotle University of Thessaloniki (Greece); Geronikolou, St. A., E-mail: sgeronik@bioacademy.gr [Biomedical Research Foundation Academy of Athens, Athens (Greece)

    2016-03-25

    Electrical and electronic waste (e-waste) management is a global environmental problem dominated by the application of the precautionary principle, resulting in preliminary and ambiguous findings on potential adverse effects, marked by extensive scientific uncertainty. In order to overcome the confusion over stochastic effects detected in this field, we propose the inclusion of the principles of justification, optimization and limitation, and of prudent avoidance. This model is already established in radiation protection, and its application would decrease the toxicity resulting from e-waste management while saving precious metals. We further resolve the classification of rejected items as reusable or as waste, so that the procedure of dismantling and recycling becomes easier and the collection, transport and placement at an e-waste landfill become safer. In conclusion, our proposed pattern for e-waste management enforces sustainable reducing-reusing-recycling, saves time and money, and advances safety by including sources of e-waste (military, medical, etc.) that were previously excluded.

  8. The radiation protection principles model as a tool in the e-waste procedures

    International Nuclear Information System (INIS)

    Tsitomeneas, S. Th.; Vourlias, K.; Geronikolou, St. A.

    2016-01-01

    Electrical and electronic waste (e-waste) management is a global environmental problem dominated by the application of the precautionary principle, resulting in preliminary and ambiguous findings on potential adverse effects, marked by extensive scientific uncertainty. In order to overcome the confusion over stochastic effects detected in this field, we propose the inclusion of the principles of justification, optimization and limitation, and of prudent avoidance. This model is already established in radiation protection, and its application would decrease the toxicity resulting from e-waste management while saving precious metals. We further resolve the classification of rejected items as reusable or as waste, so that the procedure of dismantling and recycling becomes easier and the collection, transport and placement at an e-waste landfill become safer. In conclusion, our proposed pattern for e-waste management enforces sustainable reducing-reusing-recycling, saves time and money, and advances safety by including sources of e-waste (military, medical, etc.) that were previously excluded.

  9. Computerized procedures system

    Science.gov (United States)

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online, data-driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data, and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users, and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges, and revisions are version controlled. The procedures run on a server that is platform-independent of the user workstations it interfaces with, and the user interface supports diverse procedural views.

  10. Risk of Venous Thromboembolism and Operative Duration in Patients Undergoing Neurosurgical Procedures.

    Science.gov (United States)

    Bekelis, Kimon; Labropoulos, Nicos; Coy, Shannon

    2017-05-01

    The association of operative duration with the risk of venous thromboembolism (VTE) has not been quantified in neurosurgery. To investigate the association of surgical duration with the incidence of VTE for several neurosurgical procedures, we performed a retrospective cohort study involving patients who underwent neurosurgical procedures from 2005 to 2012 and were registered in the American College of Surgeons National Quality Improvement Project registry. In order to control for confounding, we used multivariable regression models and propensity score conditioning. During the study period, 94 747 patients underwent neurosurgical procedures and met the inclusion criteria. Of these, 1358 (1.0%) developed VTE within 30 days postoperatively. Multivariable logistic regression demonstrated an association of longer operative duration with higher 30-day incidence of VTE (odds ratio [OR], 1.22; 95% confidence interval [CI], 1.19-1.25). Compared with procedures of moderate duration (third quintile, 40-60th percentile), patients undergoing the longest procedures (>80th percentile) had higher odds (OR, 3.15; 95% CI, 2.49-3.99) of developing VTE. The shortest procedures (<20th percentile) were associated with a decreased incidence of VTE (OR, 0.51; 95% CI, 0.27-0.76) in comparison with those of moderate duration. The same associations were present in propensity score-adjusted models and in models stratified by subgroups of cranial, spinal, peripheral nerve, and carotid procedures. In a cohort of patients from a national prospective surgical registry, increased operative duration was associated with increased incidence of VTE for neurosurgical procedures. These results can be used by neurosurgeons to inform operative management and to stratify patients with regard to VTE risk. Copyright © 2016 by the Congress of Neurological Surgeons

  11. Procedural justice, legitimacy beliefs, and moral disengagement in emerging adulthood: Explaining continuity and desistance in the moral model of criminal lifestyle development.

    Science.gov (United States)

    Walters, Glenn D

    2018-02-01

    Research has shown that procedural justice reliably predicts future offending behavior, although there is some indication that this may be more a function of legitimacy beliefs than of procedural justice per se. The current study sought to explain continuity and desistance in the moral model of criminal lifestyle development by comparing legitimacy beliefs, procedural justice, and moral disengagement as initiators and mediators of pathways leading to early adult offending. It was hypothesized that low legitimacy beliefs but not perceived procedural (in)justice or moral disengagement would initiate, and that moral disengagement but not low legitimacy beliefs or procedural injustice would mediate, the effect of low legitimacy beliefs on subsequent offending behavior. This hypothesis was tested in a group of 1,142 young adult males (age range = 18 to 20) from the Pathways to Desistance study (Mulvey, 2012). Results showed that as predicted, the target pathway (legitimacy → moral disengagement → offending) but none of the control pathways achieved a significant indirect effect. Hence, 1 way legitimacy beliefs reduce future offending and lead to desistance is by inhibiting moral disengagement. Besides the theoretical implications of these results, there is also the suggestion that legitimacy beliefs and moral disengagement should be considered for inclusion in secondary prevention and criminal justice intervention programs. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. Assessment of neural networks training strategies for histomorphometric analysis of synchrotron radiation medical images

    International Nuclear Information System (INIS)

    Alvarenga de Moura Meneses, Anderson; Gomes Pinheiro, Christiano Jorge; Rancoita, Paola; Schaul, Tom; Gambardella, Luca Maria; Schirru, Roberto; Barroso, Regina Cely; Oliveira, Luis Fernando de

    2010-01-01

    Micro-computed tomography (μCT) obtained by synchrotron radiation (SR) enables magnified images with a high space resolution that might be used as a non-invasive and non-destructive technique for the quantitative analysis of medical images, in particular the histomorphometry (HMM) of bony mass. In the preprocessing of such images, conventional operations such as binarization and morphological filtering are used before calculating the stereological parameters related, for example, to the trabecular bone microarchitecture. However, there is no standardization of methods for HMM based on μCT images, especially the ones obtained with SR X-ray. Notwithstanding the several uses of artificial neural networks (ANNs) in medical imaging, their application to the HMM of SR-μCT medical images is still incipient, despite the potential of both techniques. The contribution of this paper is the assessment and comparison of well-known training algorithms as well as the proposal of training strategies (combinations of training algorithms, sub-image kernel and symmetry information) for feed-forward ANNs in the task of bone pixels recognition in SR-μCT medical images. For a quantitative comparison, the results of a cross validation and a statistical analysis of the results for 36 training strategies are presented. The ANNs demonstrated both very low mean square errors in the validation, and good quality segmentation of the image of interest for application to HMM in SR-μCT medical images.

  13. Assessment of neural networks training strategies for histomorphometric analysis of synchrotron radiation medical images

    Energy Technology Data Exchange (ETDEWEB)

    Alvarenga de Moura Meneses, Anderson, E-mail: ameneses@lmp.ufrj.b [Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, CP 68509, CEP 21.941-972, Rio de Janeiro, RJ (Brazil); IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Gomes Pinheiro, Christiano Jorge [State University of Rio de Janeiro, RJ (Brazil); Rancoita, Paola [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Mathematics Department, Universita degli Studi di Milano (Italy); Schaul, Tom; Gambardella, Luca Maria [IDSIA (Dalle Molle Institute for Artificial Intelligence), University of Lugano (Switzerland); Schirru, Roberto [Federal University of Rio de Janeiro, COPPE, Nuclear Engineering Program, CP 68509, CEP 21.941-972, Rio de Janeiro, RJ (Brazil); Barroso, Regina Cely; Oliveira, Luis Fernando de [State University of Rio de Janeiro, RJ (Brazil)

    2010-09-21

    Micro-computed tomography (μCT) obtained by synchrotron radiation (SR) enables magnified images with a high space resolution that might be used as a non-invasive and non-destructive technique for the quantitative analysis of medical images, in particular the histomorphometry (HMM) of bony mass. In the preprocessing of such images, conventional operations such as binarization and morphological filtering are used before calculating the stereological parameters related, for example, to the trabecular bone microarchitecture. However, there is no standardization of methods for HMM based on μCT images, especially the ones obtained with SR X-ray. Notwithstanding the several uses of artificial neural networks (ANNs) in medical imaging, their application to the HMM of SR-μCT medical images is still incipient, despite the potential of both techniques. The contribution of this paper is the assessment and comparison of well-known training algorithms as well as the proposal of training strategies (combinations of training algorithms, sub-image kernel and symmetry information) for feed-forward ANNs in the task of bone pixels recognition in SR-μCT medical images. For a quantitative comparison, the results of a cross validation and a statistical analysis of the results for 36 training strategies are presented. The ANNs demonstrated both very low mean square errors in the validation, and good quality segmentation of the image of interest for application to HMM in SR-μCT medical images.
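The sub-image-kernel strategy, classifying each pixel from the flattened k × k patch around it with a feed-forward ANN, can be sketched on a synthetic image; the network size, learning rate, and data below are toy choices, not the trained configurations compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "tomography" slice: a bright square (bone) on a dark background.
img = rng.normal(0.2, 0.05, (40, 40))
img[10:20, 10:20] += 0.6
labels = (img > 0.5).astype(float)

def patches(image, k=3):
    """Flattened k x k sub-image kernels around each interior pixel."""
    h, w = image.shape
    out, ys, xs = [], [], []
    r = k // 2
    for y in range(r, h - r):
        for x in range(r, w - r):
            out.append(image[y - r:y + r + 1, x - r:x + r + 1].ravel())
            ys.append(y); xs.append(x)
    return np.array(out), np.array(ys), np.array(xs)

X, ys, xs = patches(img)
y = labels[ys, xs]

# One-hidden-layer feed-forward net trained by plain full-batch
# gradient descent on the logistic loss.
W1 = rng.normal(0, 0.5, (X.shape[1], 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8); b2 = 0.0
lr = 0.5
for _ in range(300):
    h1 = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h1 @ W2 + b2)))
    g = (p - y) / len(y)                      # d(log-loss)/d(logit)
    W2 -= lr * h1.T @ g; b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (1 - h1 ** 2)      # backprop through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

acc = ((p > 0.5) == (y > 0.5)).mean()         # training accuracy on pixels
```

In the paper this binary bone/background decision replaces hand-tuned binarization; the "symmetry information" strategy would augment each patch with its mirrored variants.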

  14. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students’ perception of the teachers’ activity in terms of the quality of the teaching process, the relationship with the students and the assistance provided for learning. The present paper aims at creating a combined evaluation model based on Data Mining statistical methods: starting from the grades teachers awarded to students, and using cluster analysis and discriminant analysis, we identified the subjects that produced significant differences between students’ grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.
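    The cluster-then-discriminant step above can be sketched as follows; the grade matrix, subject names, number of clusters and the "weight above the mean" flagging rule are all illustrative assumptions, not the paper's actual data or criteria:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)

    # Hypothetical grade matrix: 120 students x 5 subjects (names are made up).
    subjects = ["Math", "Physics", "Programming", "Databases", "Statistics"]
    grades = rng.normal(7.5, 1.0, (120, 5)).clip(1, 10)
    grades[60:, 2] -= 2.5   # two subjects drive two distinct student profiles
    grades[60:, 3] -= 2.0

    # Step 1: cluster students by their grade profiles.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(grades)

    # Step 2: discriminant analysis -- which subjects separate the clusters?
    lda = LinearDiscriminantAnalysis().fit(grades, labels)
    weights = np.abs(lda.coef_[0])
    flagged = [s for s, w in zip(subjects, weights) if w > weights.mean()]
    ```

    Subjects with large discriminant weights are the ones whose grades differ most between the student groups, and would be the candidates for the follow-up student evaluation.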

  15. ASSESSING GOING CONCERN ASSUMPTION BY USING RATING VALUATION MODELS BASED UPON ANALYTICAL PROCEDURES IN CASE OF FINANCIAL INVESTMENT COMPANIES

    OpenAIRE

    Tatiana Danescu; Ovidiu Spatacean; Paula Nistor; Andrea Cristina Danescu

    2010-01-01

    Designing and performing analytical procedures aimed at assessing the rating of the Financial Investment Companies are essential activities, both in the phase of planning a financial audit mission and in the phase of issuing conclusions regarding the suitability of the use, by the management and other persons responsible for governance, of the going concern assumption as the basis for the preparation and disclosure of financial statements. The paper aims to examine the usefulness of recognized models used in the practice o...

  16. SATCHMO-JS: a webserver for simultaneous protein multiple sequence alignment and phylogenetic tree construction.

    Science.gov (United States)

    Hagopian, Raffi; Davidson, John R; Datta, Ruchira S; Samad, Bushra; Jarvis, Glen R; Sjölander, Kimmen

    2010-07-01

    We present the jump-start simultaneous alignment and tree construction using hidden Markov models (SATCHMO-JS) web server for simultaneous estimation of protein multiple sequence alignments (MSAs) and phylogenetic trees. The server takes as input a set of sequences in FASTA format, and outputs a phylogenetic tree and MSA; these can be viewed online or downloaded from the website. SATCHMO-JS is an extension of the SATCHMO algorithm, and employs a divide-and-conquer strategy to jump-start SATCHMO at a higher point in the phylogenetic tree, reducing the computational complexity of the progressive all-versus-all HMM-HMM scoring and alignment. Results on 983 structurally aligned pairs from the PREFAB benchmark show that SATCHMO-JS provides a statistically significant improvement in alignment accuracy over MUSCLE, Multiple Alignment using Fast Fourier Transform (MAFFT), ClustalW and the original SATCHMO algorithm. The SATCHMO-JS webserver is available at http://phylogenomics.berkeley.edu/satchmo-js. The datasets used in these experiments are available for download at http://phylogenomics.berkeley.edu/satchmo-js/supplementary/.
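    The progressive strategy above hinges on an all-versus-all scoring step that determines the join order of a guide tree. A heavily simplified sketch of that idea, with plain per-column identity standing in for SATCHMO's HMM-HMM scores and toy sequences in place of real proteins:

    ```python
    from itertools import combinations

    # Toy equal-length protein fragments (hypothetical).
    seqs = {
        "A": "MKTAYIAKQR",
        "B": "MKTAYIAKHR",
        "C": "MQTLYIGKQR",
        "D": "LLSAYCDKQR",
    }

    def identity_distance(s, t):
        """Fraction of mismatching columns between two equal-length sequences."""
        return sum(a != b for a, b in zip(s, t)) / len(s)

    names = list(seqs)
    # All-versus-all distance matrix: the input to guide-tree construction.
    D = {(i, j): identity_distance(seqs[i], seqs[j]) for i, j in combinations(names, 2)}

    # Greedy single-linkage joining: repeatedly merge the two closest clusters,
    # recording the join order (the order a progressive aligner would follow).
    clusters = [{n} for n in names]
    joins = []
    while len(clusters) > 1:
        a, b = min(
            combinations(range(len(clusters)), 2),
            key=lambda p: min(
                D[tuple(sorted((x, y)))]
                for x in clusters[p[0]] for y in clusters[p[1]]
            ),
        )
        joins.append((sorted(clusters[a]), sorted(clusters[b])))
        merged = clusters[a] | clusters[b]
        clusters = [c for k, c in enumerate(clusters) if k not in (a, b)] + [merged]

    first_join = joins[0]
    ```

    The jump-start in SATCHMO-JS effectively prunes the lower portion of this join order, so the expensive HMM-HMM comparisons only start higher up the tree.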

  17. EvoCor: a platform for predicting functionally related genes using phylogenetic and expression profiles.

    Science.gov (United States)

    Dittmar, W James; McIver, Lauren; Michalak, Pawel; Garner, Harold R; Valdez, Gregorio

    2014-07-01

    The wealth of publicly available gene expression and genomic data provides unique opportunities for computational inference to discover groups of genes that function to control specific cellular processes. Such genes are likely to have co-evolved and be expressed in the same tissues and cells. Unfortunately, the expertise and computational resources required to compare tens of genomes and gene expression data sets make this type of analysis difficult for the average end-user. Here, we describe the implementation of a web server that predicts genes involved in affecting specific cellular processes together with a gene of interest. We termed the server 'EvoCor', to denote that it detects functional relationships among genes through evolutionary analysis and gene expression correlation. This web server integrates profiles of sequence divergence derived by a Hidden Markov Model (HMM) and tissue-wide gene expression patterns to determine putative functional linkages between pairs of genes. This server is easy to use and freely available at http://pilot-hmm.vbi.vt.edu/. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
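    The combination of divergence profiles and expression correlation described above can be sketched as follows; the gene names, profile values and expression matrix are fabricated for illustration, and the simple averaged-correlation linkage score is an assumption, not EvoCor's actual scoring function:

    ```python
    import numpy as np

    # Hypothetical inputs (EvoCor derives these from HMM-based sequence
    # divergence profiles and real expression compendia).
    genes = ["geneA", "geneB", "geneC"]
    # Rows: genes; columns: divergence scores across 8 genomes.
    phylo = np.array([
        [0.9, 0.8, 0.1, 0.2, 0.9, 0.7, 0.1, 0.8],
        [0.8, 0.9, 0.2, 0.1, 0.8, 0.8, 0.2, 0.7],   # tracks geneA closely
        [0.1, 0.2, 0.9, 0.8, 0.1, 0.2, 0.9, 0.1],
    ])
    # Rows: genes; columns: expression levels across 6 tissues.
    expr = np.array([
        [5.0, 1.0, 4.5, 0.5, 6.0, 1.2],
        [4.8, 1.1, 4.0, 0.7, 5.5, 1.0],
        [0.5, 6.0, 0.8, 5.5, 0.4, 6.2],
    ])

    def pearson(u, v):
        u, v = u - u.mean(), v - v.mean()
        return float(u @ v / np.sqrt((u @ u) * (v @ v)))

    # Combined linkage score: average of phylogenetic and expression correlation.
    def linkage(i, j):
        return 0.5 * (pearson(phylo[i], phylo[j]) + pearson(expr[i], expr[j]))

    # Best candidate functional partner of geneA among the other genes.
    partner = max((j for j in range(len(genes)) if j != 0), key=lambda j: linkage(0, j))
    ```

    Genes that co-evolved (similar divergence profiles) and are co-expressed score highly on both terms, which is the intuition the server formalizes.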

  18. Redefining the PF06864 Pfam family based on Burkholderia pseudomallei PilO2(Bp) S-SAD crystal structure.

    Directory of Open Access Journals (Sweden)

    Patricia Lassaux

    Full Text Available Type IV pili are surface-exposed filaments and bacterial virulence factors, represented by the Tfpa and Tfpb types, which assemble via specific machineries. The Tfpb group is further divided into seven variants, linked to heterogeneity in the assembly machineries. Here we focus on PilO2(Bp), a protein component of the Tfpb R64 thin pilus variant assembly machinery from the pathogen Burkholderia pseudomallei. PilO2(Bp) belongs to the PF06864 Pfam family, for which an improved definition is presented based on newly derived Hidden Markov Model (HMM) profiles. The 3D structure of the N-terminal domain of PilO2(Bp) (N-PilO2(Bp)), here reported, is the first structural representative of the PF06864 family. N-PilO2(Bp) presents an actin-like ATPase fold that is shown to be present in BfpC, a different variant assembly protein; the new HMM profiles classify BfpC as a PF06864 member. Our results provide structural insight into the PF06864 family and into the Type IV pili assembly machinery.
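    Family classification with HMM profiles reduces, at its core, to scoring a sequence against position-specific residue probabilities. A crude stand-in for that idea (a full profile HMM would also model insert and delete states; the motif, probabilities and background frequency below are all invented for illustration):

    ```python
    import math

    # Toy position-specific profile for a hypothetical 4-column motif:
    # at each column, per-residue emission probabilities.
    profile = [
        {"G": 0.7, "A": 0.2},
        {"K": 0.8, "R": 0.1},
        {"S": 0.6, "T": 0.3},
        {"G": 0.9},
    ]
    BACKGROUND = 0.05  # flat background frequency over 20 amino acids

    def log_odds(seq):
        """Sum of per-column log-odds scores; positive favours family membership."""
        score = 0.0
        for col, aa in zip(profile, seq):
            p = col.get(aa, 0.01)  # small floor for residues the profile doesn't model
            score += math.log(p / BACKGROUND)
        return score

    member, outsider = "GKSG", "WWHC"
    ```

    In practice such profiles are built and scored with dedicated tools, but the log-odds-against-background principle is the same one the redefined PF06864 HMM profiles rely on to reclassify proteins like BfpC.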

  19. Statistical elements in calculation procedures for air quality control; Elementi di statistica nelle procedure di calcolo per il controllo della qualita' dell'aria

    Energy Technology Data Exchange (ETDEWEB)

    Mura, M.C. [Istituto Superiore di Sanita' , Laboratorio di Igiene Ambientale, Rome (Italy)

    2001-07-01

    The statistical processing of data from the monitoring of chemical atmospheric pollution, aimed at air quality control, is presented in the form of procedural models, offering a practical working instrument to operators in the sector. The procedural models are modular and can easily be integrated with other models. They include elementary calculation procedures and mathematical methods for statistical analysis. The calculation elements have been developed by probabilistic induction so as to relate them to the statistical models which underlie the methods used for the study and forecasting of atmospheric pollution. This report is part of the updating and training activity that the Istituto Superiore di Sanita' has been carrying on for over twenty years, addressed to operators in the environmental field.
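    An elementary calculation procedure of the kind the report describes might summarize a monitoring series with an annual mean, a high percentile and a count of limit exceedances. The sketch below uses synthetic lognormal daily concentrations (a common distributional assumption for pollutants) and an illustrative limit value, not figures from the report:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic year of daily PM10 means (ug/m3); lognormal is a common
    # assumption for pollutant concentration distributions.
    daily_pm10 = rng.lognormal(mean=3.3, sigma=0.4, size=365)

    limit = 50.0                    # illustrative daily limit value (ug/m3)
    exceedances = int(np.sum(daily_pm10 > limit))
    annual_mean = float(daily_pm10.mean())
    p98 = float(np.percentile(daily_pm10, 98))  # high percentile often reported

    summary = {
        "annual_mean": round(annual_mean, 1),
        "p98": round(p98, 1),
        "exceedance_days": exceedances,
    }
    ```

    Fitting the assumed distribution to historical data is what links such elementary summaries to the forecasting models the report mentions.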

  20. PSA Update Procedures, an Ultimate Need for Living PSA

    International Nuclear Information System (INIS)

    Hegedus, D.

    1998-01-01

    Nuclear facilities, by their complex nature, change with time. These changes can be physical (plant modifications, etc.), operational (enhanced procedures, etc.) and organizational. In addition, there are also changes in our understanding of the plant, due to operational experience, data collection, technology enhancements, etc. Therefore, it is imperative that the PSA model be frequently updated or modified to reflect these changes. Over the last ten years, there has been remarkable growth in the use of Probabilistic Safety Assessments (PSAs). The most rapidly growing area of PSA applications is their use to support operational decision-making. Many of these applications are characterized by the potential not only to improve the safety level but also to provide guidance on the optimal use of resources and to reduce regulatory burden. To enable a wider use of the PSA model as a tool for safety activities, it is essential to maintain the model in a controlled state. Moreover, to fulfil the requirements for 'Living PSA', the PSA model has to be constantly updated and/or monitored to reflect the current plant configuration. It should be noted that the PSA model should represent not only the plant design but also the operational and emergency procedures. To keep the PSA model up to date, several issues should be clearly defined, including: - responsibility should be divided among the PSA group, - procedures for implementing changes should be established, and - QA requirements/programs should be established to assure documentation and reporting. (author)