WorldWideScience

Sample records for operator lasso model

  1. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.

    Science.gov (United States)

    Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris

    2016-09-01

    Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, the high dimensionality, sparsity, and class imbalance of electronic health data, together with the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population by integrating a data-driven model (sparse logistic regression) with domain knowledge based on the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from the California State Inpatient Databases (Healthcare Cost and Utilization Project) between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data-driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression, resulting in models that are easier to interpret because they rely on fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the Tree-Lasso model was as competitive in terms of accuracy (measured by the area under the receiver operating characteristic curve, AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided models that are more interpretable in terms of high-level diagnoses. Additionally, the model interpretations are in accordance with existing medical understanding of pediatric readmission. Best performing models have
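
    A minimal sketch of the Lasso logistic regression baseline that the Tree-Lasso model is compared against. Tree-Lasso itself (hierarchy-aware regularization over the ICD-9-CM tree) is not available in scikit-learn, so only the plain L1 baseline is shown; the synthetic diagnosis-code data and the penalty strength C are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 2000, 300                                 # discharges x diagnosis codes
X = (rng.random((n, p)) < 0.05).astype(float)    # sparse binary code indicators
beta = np.zeros(p)
beta[:10] = 1.5                                  # a few informative codes
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ beta - 2)))).astype(int)

# The L1 penalty zeroes out most coefficients, yielding a compact, readable
# model; class_weight="balanced" addresses the class imbalance noted above.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                         class_weight="balanced")
print("CV AUC:", cross_val_score(clf, X, y, scoring="roc_auc", cv=5).mean())
print("selected codes:", np.flatnonzero(clf.fit(X, y).coef_))
```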

  2. Recommendations for the Implementation of the LASSO Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [National University of Defense Technology, China; Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Zhijin [California Inst. of Technology (CalTech), La Canada Flintridge, CA (United States). Jet Propulsion Lab.; University of California, Los Angeles; Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, Heng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-11-15

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability, envisioned in the ARM Decadal Vision (U.S. Department of Energy 2014), has subsequently been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project, and it has an initial focus on shallow convection at the ARM Southern Great Plains (SGP) atmospheric observatory. This report documents the recommendations resulting from the pilot project to be considered by ARM for implementation into routine operations. During the pilot phase, LASSO has evolved from the initial vision outlined in the pilot project white paper (Gustafson and Vogelmann 2015) to what is recommended in this report. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso. Feedback regarding LASSO and the recommendations in this report can be directed to William Gustafson, the project principal investigator (PI), and Andrew Vogelmann, the co-principal investigator (Co-PI), via lasso@arm.gov.

  3. Efficient methods for overlapping group lasso.

    Science.gov (United States)

    Yuan, Lei; Liu, Jun; Ye, Jieping

    2013-09-01

    The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving a smooth and convex dual problem, which allows the use of gradient-descent-type algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the ℓq norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on capped-norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic dataset and a breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for the overlapping group Lasso.
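
    For orientation, the proximal operator of the non-overlapping group Lasso has the closed form below (block soft-thresholding). The paper's contribution is computing this operator when the groups overlap, via a smooth convex dual problem; that case has no closed form and is not reproduced here.

```python
import numpy as np

def group_lasso_prox(v, groups, lam):
    """prox of lam * sum_g ||x_g||_2 at v, for disjoint index groups."""
    x = v.copy()
    for g in groups:
        norm = np.linalg.norm(v[g])
        # shrink each block toward zero; zero it entirely if ||v_g||_2 <= lam
        x[g] = 0.0 if norm <= lam else (1 - lam / norm) * v[g]
    return x

v = np.array([3.0, -4.0, 0.1, -0.2, 2.0])
print(group_lasso_prox(v, [[0, 1], [2, 3], [4]], lam=1.0))
# the first block (norm 5) shrinks; the second (norm ~0.22) is zeroed entirely
```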

  4. Description of the LASSO Alpha 2 Release

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [Univ. of California, Los Angeles, CA (United States); Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Z. [Univ. of California, Los Angeles, CA (United States); Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, H. [Univ. of California, Los Angeles, CA (United States)

    2017-09-01

    The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project. The initial focus of LASSO is on shallow convection at the ARM Southern Great Plains (SGP) Climate Research Facility. The availability of LES simulations with concurrent observations will serve many purposes. LES helps bridge the scale gap between DOE ARM observations and models, and the use of routine LES adds value to observations. It provides a self-consistent representation of the atmosphere and a dynamical context for the observations. Further, it elucidates unobservable processes and properties. LASSO will generate a simulation library for researchers that enables statistical approaches beyond a single-case mentality. It will also provide tools necessary for modelers to reproduce the LES and conduct their own sensitivity experiments. Many different uses are envisioned for the combined LASSO LES and observational library. For an observationalist, LASSO can help inform instrument remote-sensing retrievals, conduct Observation System Simulation Experiments (OSSEs), and test implications of radar scan strategies or flight paths. For a theoretician, LASSO will help calculate estimates of fluxes and co-variability of values, and test relationships without having to run the model oneself. For a modeler, LASSO will help one know ahead of time which days have good forcing, have co-registered observations at high-resolution scales, and have simulation inputs and corresponding outputs to test parameterizations. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso.

  5. Identifying the Prognosis Factors in Death after Liver Transplantation via Adaptive LASSO in Iran

    Directory of Open Access Journals (Sweden)

    Hadi Raeisi Shahraki

    2016-01-01

    Despite the widespread use of liver transplantation as a routine therapy for liver diseases, the factors affecting its outcomes are still controversial. This study attempted to identify the factors most predictive of death after liver transplantation. For this purpose, a modified least absolute shrinkage and selection operator (LASSO), called the Adaptive LASSO, was utilized. One of the main advantages of this method is its ability to handle a large number of factors. In a historical cohort study from 2008 to 2013, the clinical findings of 680 patients undergoing liver transplant surgery were therefore considered. Ridge and Adaptive LASSO regression methods were then implemented to identify the factors most strongly associated with death. To compare the performance of these two models, the receiver operating characteristic (ROC) curve was used. According to the results, 12 factors were significant in the Ridge regression and nine in the Adaptive LASSO regression. The area under the ROC curve (AUC) of the Adaptive LASSO was 89% (95% CI: 86%–91%), significantly greater than that of the Ridge regression (64%, 95% CI: 61%–68%) (p<0.001). In conclusion, the significant factors and the performance criteria revealed the superiority of the Adaptive LASSO method as a penalized model over the traditional regression model in the present study.
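
    A hedged sketch of the two-step adaptive LASSO idea used in the study: an initial fit supplies data-adaptive weights that rescale the L1 penalty, so strong factors are penalized less. The synthetic data, the choice of ridge for the initial fit, and the weight exponent are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV, Ridge

rng = np.random.default_rng(1)
n, p = 300, 40
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)

w = 1.0 / (np.abs(Ridge(alpha=1.0).fit(X, y).coef_) + 1e-8)  # adaptive weights
X_w = X / w                    # rescaling columns = weighting the L1 penalty
fit = LassoCV(cv=5).fit(X_w, y)
beta = fit.coef_ / w           # map coefficients back to the original scale
print("selected factors:", np.flatnonzero(beta))
```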

  6. Description of the LASSO Alpha 1 Release

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [Univ. of California, Los Angeles, CA (United States); Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Zhijin [Univ. of California, Los Angeles, CA (United States); Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, H [Univ. of California, Los Angeles, CA (United States)

    2017-07-31

    The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project. The availability of LES simulations with concurrent observations will serve many purposes. LES helps bridge the scale gap between DOE ARM observations and models, and the use of routine LES adds value to observations. It provides a self-consistent representation of the atmosphere and a dynamical context for the observations. Further, it elucidates unobservable processes and properties. LASSO will generate a simulation library for researchers that enables statistical approaches beyond a single-case mentality. It will also provide tools necessary for modelers to reproduce the LES and conduct their own sensitivity experiments. Many different uses are envisioned for the combined LASSO LES and observational library. For an observationalist, LASSO can help inform instrument remote-sensing retrievals, conduct Observation System Simulation Experiments (OSSEs), and test implications of radar scan strategies or flight paths. For a theoretician, LASSO will help calculate estimates of fluxes and co-variability of values, and test relationships without having to run the model oneself. For a modeler, LASSO will help one know ahead of time which days have good forcing, have co-registered observations at high-resolution scales, and have simulation inputs and corresponding outputs to test parameterizations. Further details on the overall LASSO project are available at http://www.arm.gov/science/themes/lasso.

  7. Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated Radiotherapy for Head and Neck Cancer

    Science.gov (United States)

    Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    Purpose: The aim of this study was to develop a multivariate logistic regression model with the least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials: Quality of life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used for the endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using the LASSO with a bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus test, Hosmer-Lemeshow test, and the AUC. Results: Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by the Hosmer-Lemeshow test and AUC for the 3-month time point, i.e., Dmean-c, Dmean-i, and age. Five suboptimal prognostic factors were selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance of the NTCP model at both time points in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values. Conclusions
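
    A hedged sketch of the LASSO-with-bootstrapping selection step described above: refit an L1-penalized logistic model on bootstrap resamples and count how often each candidate prognostic factor survives. The data, penalty strength, and the 80% stability threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p = 206, 12                           # patients x candidate factors
X = rng.standard_normal((n, p))          # stand-ins for dose/clinical factors
y = (X[:, 0] + 0.8 * X[:, 1] + rng.standard_normal(n) > 0).astype(int)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
counts = np.zeros(p)
for _ in range(200):                     # bootstrap resamples
    idx = rng.integers(0, n, n)
    counts += lasso.fit(X[idx], y[idx]).coef_[0] != 0

print("selection frequency:", np.round(counts / 200, 2))
print("stable prognostic factors:", np.flatnonzero(counts / 200 >= 0.8))
```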

  8. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.

  9. Introduction to the LASSO

    Indian Academy of Sciences (India)

    ... the LASSO method as a constrained quadratic programming problem, and ... solve the LASSO problem. We also ... The problem (2) is equivalent to the best subset selection. ... operator (LASSO), which is based on the following key concepts: ...
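
    The snippet casts the LASSO as a constrained quadratic program: minimize ||y - Xb||^2 subject to ||b||_1 <= t. The sketch below numerically checks that this form agrees with the penalized form, splitting b into positive and negative parts so the constraint is smooth; the data, alpha, and the derived budget t are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -0.5, 0.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(100)

pen = Lasso(alpha=0.1).fit(X, y)         # penalized form
t = np.abs(pen.coef_).sum()              # matching L1 budget

# constrained QP form with b = u - v, u, v >= 0, so sum(u + v) <= t is smooth
res = minimize(lambda z: np.sum((y - X @ (z[:5] - z[5:])) ** 2),
               np.zeros(10), bounds=[(0, None)] * 10, method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda z: t - z.sum()}])
print(np.round(pen.coef_, 3))
print(np.round(res.x[:5] - res.x[5:], 3))   # the two solutions closely agree
```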

  10. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, even though many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research on, and application of, LASSO, statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulations show that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study of obesity reveals 10 significant body mass index-associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis that can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO.

  11. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    The “least absolute shrinkage and selection operator” (Lasso) method has recently been adapted to network-structured datasets. In particular, this network Lasso method allows graph signals to be learned from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure that ensure the network Lasso is accurate. By leveraging concepts from compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.

  12. Oracle Efficient Estimation and Forecasting with the Adaptive LASSO and the Adaptive Group LASSO in Vector Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Callot, Laurent

    We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently...

  13. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: if predictors are normalised after the introduction of adaptive weights, the adaptive lasso performs identically to the lasso.
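
    The normalisation result in the abstract admits a quick numerical check: adaptive weighting rescales the columns of the design, and a subsequent column normalisation cancels that rescaling exactly. The data and weights below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 8))
w = rng.uniform(0.5, 2.0, 8)             # arbitrary adaptive weights

Xw = X / w                               # adaptive lasso = lasso on X / w
Xn = Xw / np.linalg.norm(Xw, axis=0)     # normalise after weighting...
Xo = X / np.linalg.norm(X, axis=0)       # ...versus normalising X directly
print(np.allclose(Xn, Xo))               # True: the weights cancel, so the
                                         # adaptive lasso reduces to the lasso
```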

  14. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer

    2017-11-09

    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of different areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it has found its way into many important applications, including face recognition, tracking, super-resolution, and image denoising. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, people tend to downsample and subsample the problem (e.g., via dimensionality reduction) to maintain a manageably sized LASSO, which usually comes at the cost of solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver or matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g., on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.
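
    For context, a textbook ADMM solver for the standard LASSO objective (1/2)||Xb - y||^2 + lam ||b||_1 is sketched below. FFTLasso's contribution, the circulant lifting that replaces the per-iteration linear solve with element-wise FFT updates, is not reproduced here; the data and lam are illustrative assumptions.

```python
import numpy as np

def admm_lasso(X, y, lam, rho=1.0, iters=200):
    p = X.shape[1]
    z = u = np.zeros(p)
    # the per-iteration linear solve that FFTLasso eliminates:
    L = np.linalg.cholesky(X.T @ X + rho * np.eye(p))
    Xty = X.T @ y
    for _ in range(iters):
        b = np.linalg.solve(L.T, np.linalg.solve(L, Xty + rho * (z - u)))
        z = np.sign(b + u) * np.maximum(np.abs(b + u) - lam / rho, 0.0)
        u = u + b - z                    # dual update for the split b = z
    return z

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 20))
y = 2.0 * X[:, 0] + rng.standard_normal(100)
print(np.round(admm_lasso(X, y, lam=10.0), 2))
```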

  15. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer; Itani, Hani; Ghanem, Bernard

    2017-01-01

    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of different areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it has found its way into many important applications, including face recognition, tracking, super-resolution, and image denoising. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, people tend to downsample and subsample the problem (e.g., via dimensionality reduction) to maintain a manageably sized LASSO, which usually comes at the cost of solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver or matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g., on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.

  16. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    Science.gov (United States)

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. The study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models: neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The sample includes 48 GCD listed companies and 124 non-GCD (NGCD) listed companies from 2002 to 2013 in the TEJ database. We conducted fivefold cross-validation to assess prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate 12.22 %; Type II error rate 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate 13.61 %; Type II error rate 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate 10.00 %; Type II error rate 15.83 %).
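
    A minimal sketch of the hybrid scheme the abstract describes, using the LASSO-SVM variant: an L1 step screens the candidate financial variables, and an SVM classifies on the survivors. The synthetic data and hyperparameters are stand-ins, not the paper's.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.standard_normal((172, 30))       # 48 GCD + 124 non-GCD firms
y = (X[:, 0] + X[:, 1] + rng.standard_normal(172) > 1.2).astype(int)

model = make_pipeline(StandardScaler(),
                      SelectFromModel(LassoCV(cv=5)),  # LASSO screening step
                      SVC(kernel="rbf"))               # classification step
print("5-fold accuracy:", cross_val_score(model, X, y, cv=5).mean())
```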

  17. The joint graphical lasso for inverse covariance estimation across multiple classes.

    Science.gov (United States)

    Danaher, Patrick; Wang, Pei; Witten, Daniela M

    2014-03-01

    We consider the problem of estimating multiple related Gaussian graphical models from a high-dimensional data set with observations belonging to distinct classes. We propose the joint graphical lasso, which borrows strength across the classes in order to estimate multiple graphical models that share certain characteristics, such as the locations or weights of nonzero edges. Our approach is based upon maximizing a penalized log-likelihood. We employ generalized fused lasso or group lasso penalties and implement a fast ADMM algorithm to solve the corresponding convex optimization problems. The performance of the proposed method is illustrated through simulated and real data examples.

  18. Implementations of geographically weighted lasso in spatial data with multicollinearity (Case study: Poverty modeling of Java Island)

    Science.gov (United States)

    Setiyorini, Anis; Suprijadi, Jadi; Handoko, Budhi

    2017-03-01

    Geographically Weighted Regression (GWR) is a regression model that takes into account spatial heterogeneity effects. In applications of GWR, inference on the regression coefficients is often of interest, as are estimation and prediction of the response variable. Empirical studies have demonstrated that local correlation between explanatory variables can lead to estimated regression coefficients in GWR that are strongly correlated, a condition named multicollinearity. This in turn results in large standard errors on the estimated regression coefficients and is hence problematic for inference on relationships between variables. Geographically Weighted Lasso (GWL) is a method capable of dealing with spatial heterogeneity and local multicollinearity in spatial data sets. GWL is a further development of the GWR method which adds a LASSO (Least Absolute Shrinkage and Selection Operator) constraint to the parameter estimation. In this study, GWL is applied with a fixed exponential kernel weight matrix to build a poverty model of Java Island, Indonesia. The results of applying GWL to the poverty datasets show that this method stabilizes regression coefficients in the presence of multicollinearity and produces lower prediction and estimation error of the response variable than GWR does.

  19. LES ARM Symbiotic Simulation and Observation (LASSO) Implementation Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson Jr., WI [Pacific Northwest National Laboratory; Vogelmann, AM [Brookhaven National Laboratory

    2015-09-01

    This document illustrates the design of the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) workflow to provide a routine, high-resolution modeling capability to augment the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s high-density observations. LASSO will create a powerful new capability for furthering ARM’s mission to advance understanding of cloud, radiation, aerosol, and land-surface processes. The combined observational and modeling elements will enable a new level of scientific inquiry by connecting processes and context to observations and providing needed statistics for details that cannot be measured. The result will be improved process understanding that facilitates concomitant improvements in climate model parameterizations. The initial LASSO implementation will be for ARM’s Southern Great Plains site in Oklahoma and will focus on shallow convection, which is poorly simulated by climate models due in part to clouds’ typically small spatial scale compared to model grid spacing, and because the convection involves complicated interactions of microphysical and boundary layer processes.

  20. Genetic risk prediction using a spatial autoregressive model with adaptive lasso.

    Science.gov (United States)

    Wen, Yalu; Shen, Xiaoxi; Lu, Qing

    2018-05-31

    With rapidly evolving high-throughput technologies, studies are being initiated to accelerate the process toward precision medicine. The collection of vast amounts of sequencing data provides us with great opportunities to systematically study the role of a deep catalog of sequencing variants in risk prediction. Nevertheless, the massive amount of noise signals and the low frequencies of rare variants in sequencing data pose great analytical challenges for risk prediction modeling. Motivated by developments in spatial statistics, we propose a spatial autoregressive model with adaptive lasso (SARAL) for risk prediction modeling using high-dimensional sequencing data. SARAL is a set-based approach, and thus it reduces the data dimension and accumulates genetic effects within a single-nucleotide variant (SNV) set. Moreover, it allows different SNV sets to have various magnitudes and directions of effect sizes, which reflects the nature of complex diseases. With the adaptive lasso implemented, SARAL can shrink the effects of noise SNV sets to zero and thus further improve prediction accuracy. Through simulation studies, we demonstrate that, overall, SARAL is comparable to, if not better than, the genomic best linear unbiased prediction method. The method is further illustrated by an application to sequencing data from the Alzheimer's Disease Neuroimaging Initiative. Copyright © 2018 John Wiley & Sons, Ltd.

  1. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to consider the ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature, for instance Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coin the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver, leading to a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm combined with safe screening rules that achieve speed by eliminating irrelevant features early.

  2. Bayesian LASSO, scale space and decision making in association genetics.

    Science.gov (United States)

    Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J

    2015-01-01

    LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more, explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e., sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale-space view and instead consider a whole range of fixed tuning parameters. The effect estimates and the associated inference are considered for all tuning parameters in the selected range, and the results are visualized with color maps that provide useful insights into the data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.
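
    A hedged PyMC sketch of the Bayesian LASSO: a Laplace (double-exponential) prior on the effects concentrates posterior mass near zero, mimicking L1 shrinkage. In the scale-space spirit of the abstract one would rerun this over a grid of fixed tuning parameters; a single value of lam is shown, and all data and settings are illustrative assumptions.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(7)
n, p = 120, 10
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)

lam = 2.0                                # one point on a scale-space grid
with pm.Model():
    beta = pm.Laplace("beta", mu=0.0, b=1.0 / lam, shape=p)  # L1-like prior
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    pm.Normal("obs", mu=pm.math.dot(X, beta), sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# marginal posterior means; simultaneous statements would use the joint
# posterior of the effects, as the abstract recommends
print(idata.posterior["beta"].mean(("chain", "draw")).values.round(2))
```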

  3. Toward Probabilistic Diagnosis and Understanding of Depression Based on Functional MRI Data Analysis with Logistic Group LASSO.

    Directory of Open Access Journals (Sweden)

    Yu Shimizu

    Diagnosis of psychiatric disorders based on brain imaging data is highly desirable in clinical applications. However, a common problem in applying machine learning algorithms is that the number of imaging data dimensions often greatly exceeds the number of available training samples. Furthermore, interpretability of the learned classifier with respect to brain function and anatomy is an important, but non-trivial, issue. We propose the use of logistic regression with a least absolute shrinkage and selection operator (LASSO) to capture the most critical input features. In particular, we consider application of group LASSO to select brain areas relevant to diagnosis. An additional advantage of LASSO is its probabilistic output, which allows evaluation of diagnosis certainty. To verify our approach, we obtained semantic and phonological verbal fluency fMRI data from 31 depression patients and 31 control subjects, and compared the performances of group LASSO (gLASSO) and sparse group LASSO (sgLASSO) to those of standard LASSO (sLASSO), Support Vector Machine (SVM), and Random Forest. Over 90% classification accuracy was achieved with gLASSO, sgLASSO, and SVM; however, in contrast to SVM, the LASSO approaches allow for identification of the most discriminative weights and estimation of prediction reliability. Semantic task data revealed contributions to the classification from the left precuneus, left precentral gyrus, left inferior frontal cortex (pars triangularis), and left cerebellum (crus I). Weights for the phonological task indicated contributions from the left inferior frontal operculum, left postcentral gyrus, left insula, left middle frontal cortex, bilateral middle temporal cortices, bilateral precuneus, left inferior frontal cortex (pars triangularis), and left precentral gyrus. The distribution of normalized odds ratios further showed that predictions with absolute odds ratios higher than 0.2 could be regarded as certain.

  4. The Bayesian group lasso for confounded spatial data

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

    Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of the regression coefficients and can yield higher and less variable out-of-sample predictive accuracy than the standard SGLMM.

  5. The Los Alamos Space Science Outreach (LASSO) Program

    Science.gov (United States)

    Barker, P. L.; Skoug, R. M.; Alexander, R. J.; Thomsen, M. F.; Gary, S. P.

    2002-12-01

    The Los Alamos Space Science Outreach (LASSO) program features summer workshops in which K-14 teachers spend several weeks at LANL learning space science from Los Alamos scientists and developing methods and materials for teaching this science to their students. The program is designed to provide hands-on space science training to teachers as well as assistance in developing lesson plans for use in their classrooms. The program supports an instructional model based on education research and cognitive theory. Students and teachers engage in activities that encourage critical thinking and a constructivist approach to learning. LASSO is run through the Los Alamos Science Education Team (SET). SET personnel have many years of experience in teaching, education research, and science education programs. Their involvement ensures that the teacher workshop program is grounded in sound pedagogical methods and meets current educational standards. Lesson plans focus on current LANL satellite projects to study the solar wind and the Earth's magnetosphere. LASSO is an umbrella program for space science education activities at Los Alamos National Laboratory (LANL) that was created to enhance the science and math interests and skills of students from New Mexico and the nation. The LASSO umbrella allows maximum leveraging of EPO funding from a number of projects (and thus maximum educational benefits to both students and teachers), while providing a format for the expression of the unique science perspective of each project.

  6. LASSO NTCP predictors for the incidence of xerostomia in patients with head and neck squamous cell carcinoma and nasopharyngeal carcinoma

    Science.gov (United States)

    Lee, Tsair-Fwu; Liou, Ming-Hsiang; Huang, Yu-Jie; Chao, Pei-Ju; Ting, Hui-Min; Lee, Hsiao-Yi

    2014-01-01

    To predict the incidence of moderate-to-severe patient-reported xerostomia among head and neck squamous cell carcinoma (HNSCC) and nasopharyngeal carcinoma (NPC) patients treated with intensity-modulated radiotherapy (IMRT). Multivariable normal tissue complication probability (NTCP) models were developed using quality of life questionnaire datasets from 152 patients with HNSCC and 84 patients with NPC. The primary endpoint was defined as moderate-to-severe xerostomia after IMRT. The number of predictive factors for a multivariable logistic regression model was determined using the least absolute shrinkage and selection operator (LASSO) with a bootstrapping technique. Four predictive models were achieved by LASSO with the smallest number of factors while preserving predictive value with higher AUC performance. For all models, the dosimetric factors for the mean dose given to the contralateral and ipsilateral parotid glands were selected as the most significant predictors, followed by different clinical and socio-economic factors, namely age, financial status, T stage, and education, chosen for the different models. The predicted incidence of xerostomia for HNSCC and NPC patients can be improved by using multivariable logistic regression models with the LASSO technique. The predictive model developed for HNSCC cannot be generalized to the NPC cohort treated with IMRT without validation, and vice versa. PMID:25163814

  7. Sparse inverse covariance estimation with the graphical lasso.

    Science.gov (United States)

    Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert

    2008-07-01

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: it solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
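
    scikit-learn ships an implementation of this estimator; a minimal usage sketch on synthetic data follows (the sparse precision matrix and the edge threshold are illustrative).

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.datasets import make_sparse_spd_matrix

prec = make_sparse_spd_matrix(20, alpha=0.95, random_state=0)  # true precision
rng = np.random.default_rng(8)
X = rng.multivariate_normal(np.zeros(20), np.linalg.inv(prec), size=500)

model = GraphicalLassoCV().fit(X)        # penalty chosen by cross-validation
edges = np.abs(model.precision_) > 1e-4  # recovered conditional dependencies
print("estimated edges:", (edges.sum() - 20) // 2)
```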

  8. Structural Graphical Lasso for Learning Mouse Brain Connectivity

    KAUST Repository

    Yang, Sen

    2015-08-07

    Investigations into brain connectivity aim to recover networks of brain regions connected by anatomical tracts or by functional associations. The inference of brain networks has recently attracted much interest due to the increasing availability of high-resolution brain imaging data. Sparse inverse covariance estimation with lasso and group lasso penalty has been demonstrated to be a powerful approach to discover brain networks. Motivated by the hierarchical structure of the brain networks, we consider the problem of estimating a graphical model with tree-structural regularization in this paper. The regularization encourages the graphical model to exhibit a brain-like structure. Specifically, in this hierarchical structure, hundreds of thousands of voxels serve as the leaf nodes of the tree. A node in the intermediate layer represents a region formed by voxels in the subtree rooted at that node. The whole brain is considered as the root of the tree. We propose to apply the tree-structural regularized graphical model to estimate the mouse brain network. However, the dimensionality of whole-brain data, usually on the order of hundreds of thousands, poses significant computational challenges. Efficient algorithms that are capable of estimating networks from high-dimensional data are highly desired. To address the computational challenge, we develop a screening rule which can quickly identify many zero blocks in the estimated graphical model, thereby dramatically reducing the computational cost of solving the proposed model. It is based on a novel insight on the relationship between screening and the so-called proximal operator that we first establish in this paper. We perform experiments on both synthetic data and real data from the Allen Developing Mouse Brain Atlas; results demonstrate the effectiveness and efficiency of the proposed approach.

  9. Inference for feature selection using the Lasso with high-dimensional data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper; Ekstrøm, Claus Thorn

    2014-01-01

    Penalized regression models such as the Lasso have proved useful for variable selection in many fields - especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference for the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios...

  10. Statistically Modeling I-V Characteristics of CNT-FET with LASSO

    Science.gov (United States)

    Ma, Dongsheng; Ye, Zuochang; Wang, Yan

    2017-08-01

    With the advent of the Internet of Things (IoT), the need to study new materials and devices for various applications is increasing. Traditionally, we build compact models for transistors on the basis of physics. But physical models are expensive to develop and need a very long time to adjust for non-ideal effects. As the vision for the application of many novel devices is not certain, or the manufacturing process is not mature, deriving generalized, accurate physical models for such devices is very strenuous, whereas statistical modeling is becoming a potential alternative because of its data-oriented nature and fast implementation. In this paper, one classical statistical regression method, LASSO, is used to model the I-V characteristics of a CNT-FET, and a pseudo-PMOS inverter simulation based on the trained model is implemented in Cadence. The normalized relative mean square prediction error of the trained model versus experimental sample data, together with the simulation results, shows that the model is acceptable for digital circuit static simulation. Such a modeling methodology can be extended to general devices.
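
    A hedged sketch of the data-driven approach the abstract describes: LASSO over a polynomial expansion of the terminal voltages, fit to sampled I-V points. The synthetic "device" below is a placeholder, not a CNT-FET model.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(9)
vgs, vds = rng.uniform(0, 1.2, (2, 400))                # sampled bias points
ids = 1e-6 * np.maximum(vgs - 0.4, 0) ** 2 * np.tanh(3 * vds)  # toy device
V = np.column_stack([vgs, vds])

model = make_pipeline(PolynomialFeatures(degree=5),     # candidate basis terms
                      StandardScaler(),
                      LassoCV(cv=5))                    # keeps a sparse subset
model.fit(V, ids)
print("R^2 on the training sweep:", round(model.score(V, ids), 4))
```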

  11. Controlling the local false discovery rate in the adaptive Lasso

    KAUST Repository

    Sampson, J. N.

    2013-04-09

    The Lasso shrinkage procedure achieved its popularity, in part, through its tendency to shrink estimated coefficients to zero and its ability to serve as a variable selection procedure. Using data-adaptive weights, the adaptive Lasso modified the original procedure to increase the penalty terms for those variables estimated to be less important by ordinary least squares. Although this modified procedure attains the oracle properties, the resulting models tend to include a large number of "false positives" in practice. Here, we adapt the concept of local false discovery rates (lFDRs) so that it applies to the sequence, λn, of smoothing parameters for the adaptive Lasso. We define the lFDR for a given λn to be the probability that the variable added to the model by decreasing λn to λn-δ is not associated with the outcome, where δ is a small value. We derive the relationship between the lFDR and λn, show that lFDR = 1 for traditional smoothing parameters, and show how to select λn so as to achieve a desired lFDR. We compare the smoothing parameters chosen to achieve a specified lFDR with those chosen to achieve the oracle properties, as well as the resulting estimates of the model coefficients, using both simulation and an example from a genetic study of prostate-specific antigen.
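
    The sequence of smoothing parameters at which variables enter the model is exactly what the LARS path exposes; a sketch of inspecting it with scikit-learn follows (the lFDR computation itself is specific to the paper and is not reproduced).

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(10)
X = rng.standard_normal((150, 12))
y = 2.0 * X[:, 0] + 1.5 * X[:, 5] + rng.standard_normal(150)

alphas, active, coefs = lars_path(X, y, method="lasso")
print("entry order of variables:", active)   # true signals tend to enter first
print("lambda at each entry:", np.round(alphas[: len(active)], 3))
```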

  12. Supervised group Lasso with applications to microarray data analysis

    Directory of Open Access Journals (Sweden)

    Huang Jian

    2007-02-01

    Background: A tremendous amount of effort has been devoted to identifying genes for the diagnosis and prognosis of diseases using microarray gene expression data. It has been demonstrated that gene expression data have a cluster structure, where the clusters consist of co-regulated genes which tend to have coordinated functions. However, most available statistical methods for gene selection do not take this cluster structure into consideration. Results: We propose a supervised group Lasso approach that takes into account the cluster structure in gene expression data for gene selection and predictive model building. For gene expression data without biological cluster information, we first divide genes into clusters using the K-means approach and determine the optimal number of clusters using the Gap method. The supervised group Lasso consists of two steps. In the first step, we identify important genes within each cluster using the Lasso method. In the second step, we select important clusters using the group Lasso. Tuning parameters are determined using V-fold cross-validation at both steps to allow for further flexibility. Prediction performance is evaluated using leave-one-out cross-validation. We apply the proposed method to disease classification and survival analysis with microarray data. Conclusion: We analyzed four microarray data sets using the proposed approach: two cancer data sets with binary cancer occurrence as outcomes and two lymphoma data sets with survival outcomes. The results show that the proposed approach is capable of identifying a small number of influential gene clusters and important genes within those clusters, and has better prediction performance than existing methods.

  13. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    Science.gov (United States)

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive non-asymptotic oracle inequalities for the lasso-penalized Cox regression, using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.

  14. Fast empirical Bayesian LASSO for multiple quantitative trait locus mapping

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-05-01

    Background: The Bayesian shrinkage technique has been applied to multiple quantitative trait locus (QTL) mapping to estimate the genetic effects of QTLs on quantitative traits from a very large set of possible effects, including the main and epistatic effects of QTLs. Although the recently developed empirical Bayes (EB) method significantly reduces computation compared with the fully Bayesian approach, its speed and accuracy are limited by the fact that numerical optimization is required to estimate the variance components in the QTL model. Results: We developed a fast empirical Bayesian LASSO (EBLASSO) method for multiple QTL mapping. The fact that the EBLASSO can estimate the variance components in closed form, along with other algorithmic techniques, renders the EBLASSO method more efficient and accurate. Compared with the EB method, our simulation study demonstrated that the EBLASSO method could substantially improve computational speed and detect more QTL effects without increasing the false positive rate. In particular, the EBLASSO algorithm running on a personal computer easily handled a linear QTL model with more than 100,000 variables in our simulation study. Real data analysis also demonstrated that the EBLASSO method detected more reasonable effects than the EB method. Compared with the LASSO, our simulation showed that the current version of the EBLASSO, implemented in Matlab, had speed similar to that of the LASSO implemented in Fortran, and that the EBLASSO detected the same number of true effects as the LASSO but a much smaller number of false positive effects. Conclusions: The EBLASSO method can handle a large number of effects, possibly including both the main and epistatic QTL effects, environmental effects, and the effects of gene-environment interactions. It will be a very useful tool for multiple QTL mapping.

  15. Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor

    2012-03-01

    We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.
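
    The screening rule described above is short to state in code: threshold the sample covariance at λ, take connected components, and solve the graphical lasso on each block independently. A sketch of the decomposition step, with illustrative data and λ:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(11)
S = np.cov(rng.standard_normal((200, 30)), rowvar=False)  # sample covariance
lam = 0.15

adj = np.abs(S) > lam                      # thresholded covariance graph
np.fill_diagonal(adj, False)
n_comp, labels = connected_components(csr_matrix(adj), directed=False)
print(n_comp, "components; block sizes:", np.bincount(labels))
# each block can now be handed to a graphical lasso solver on its own
```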

  16. Logistic LASSO regression for the diagnosis of breast cancer using clinical demographic data and the BI-RADS lexicon for ultrasonography.

    Science.gov (United States)

    Kim, Sun Mi; Kim, Yongdai; Jeong, Kuhwan; Jeong, Heeyeong; Kim, Jiyoung

    2018-01-01

    The aim of this study was to compare the performance of image analysis for predicting breast cancer using two distinct regression models and to evaluate the usefulness of incorporating clinical and demographic data (CDD) into the image analysis in order to improve the diagnosis of breast cancer. This study included 139 solid masses from 139 patients who underwent an ultrasonography-guided core biopsy and had available CDD between June 2009 and April 2010. Three breast radiologists retrospectively reviewed the 139 breast masses and described each lesion using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We applied and compared two regression methods, stepwise logistic (SL) regression and logistic least absolute shrinkage and selection operator (LASSO) regression, in which the BI-RADS descriptors and CDD were used as covariates. We investigated the performances of these regression methods and the agreement of radiologists in terms of test misclassification error and the area under the curve (AUC) of the tests. Logistic LASSO regression was superior to SL regression, and its AUC without CDD was comparable to its AUC with CDD (0.873 vs. 0.880, P=0.141). Logistic LASSO regression based on BI-RADS descriptors and CDD showed better performance than SL in predicting the presence of breast cancer. The use of CDD as a supplement to the BI-RADS descriptors significantly improved the prediction of breast cancer using logistic LASSO regression.

  17. Lasso and probabilistic inequalities for multivariate point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent

    2015-01-01

    Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select ... for multivariate Hawkes processes are proven, which allows us to check these assumptions by considering general dictionaries based on histograms, Fourier or wavelet bases. Motivated by problems of neuronal activity inference, we finally carry out a simulation study for multivariate Hawkes processes and compare our methodology with the adaptive Lasso procedure proposed by Zou in (J. Amer. Statist. Assoc. 101 (2006) 1418–1429). We observe an excellent behavior of our procedure. We rely on theoretical aspects for the essential question of tuning our methodology. Unlike adaptive Lasso of (J. Amer. Statist. Assoc. 101 (2006...

  18. Detection of radionuclides from weak and poorly resolved spectra using Lasso and subsampling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Er-Wei, E-mail: er-wei-bai@uiowa.edu [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States); Chan, Kung-sik, E-mail: kung-sik-chan@uiowa.edu [Department of Statistical and Actuarial Science, University of Iowa, Iowa City, IA 52242 (United States); Eichinger, William, E-mail: william-eichinger@uiowa.edu [Department of Civil and Environmental Engineering, University of Iowa, Iowa City, IA 52242 (United States); Kump, Paul [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States)

    2011-10-15

    We consider the problem of identifying nuclides from weak and poorly resolved spectra. A two-stage algorithm is proposed and tested based on the principle of majority voting. The idea is to model gamma-ray counts as Poisson processes. The average part is taken to be the model, and the difference between the observed gamma-ray counts and the average is treated as random noise. In the linear part, the unknown coefficients indicate whether the isotopes of interest are present or absent. Lasso-type algorithms are applied to find the non-vanishing coefficients. Since the Lasso, or any prediction-error-based algorithm, is inconsistent for variable selection at finite data length, an estimate of the parameter distribution based on subsampling techniques is added on top of the Lasso. Simulation examples are provided in which traditional peak detection algorithms fail to work while the proposed two-stage algorithm performs well in terms of both the False Negative and False Positive errors. - Highlights: > Identification of nuclides from weak and poorly resolved spectra. > An algorithm is proposed and tested based on the principle of majority voting. > Lasso types of algorithms are applied to find non-vanishing coefficients. > An estimate of parameter distribution based on sub-sampling techniques is included. > Simulations compare the results of the proposed method with those of peak detection.
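
    The two-stage idea (sparse fit plus subsampled majority vote) can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the template spectra are random stand-ins, a Gaussian lasso with a positivity constraint replaces the Poisson-likelihood fit, and the tuning constants are ad hoc.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(2)
      n_ch, n_iso = 256, 12
      templates = np.abs(rng.normal(size=(n_ch, n_iso)))  # hypothetical isotope templates
      truth = np.zeros(n_iso)
      truth[[2, 7]] = [30.0, 20.0]                        # two isotopes actually present
      counts = rng.poisson(templates @ truth + 5.0)       # weak source over background

      # Stage 1: sparse nonnegative fit; stage 2: refit on random channel
      # subsamples and declare an isotope present only on a majority vote.
      # (The intercept absorbs the flat background; alpha is ad hoc here.)
      B, m = 200, n_ch // 2
      votes = np.zeros(n_iso)
      for _ in range(B):
          idx = rng.choice(n_ch, size=m, replace=False)
          fit = Lasso(alpha=0.5, positive=True).fit(templates[idx], counts[idx])
          votes += fit.coef_ > 1e-8
      print("declared present:", np.flatnonzero(votes / B > 0.5))  # the truth is {2, 7}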

  19. Detection of radionuclides from weak and poorly resolved spectra using Lasso and subsampling techniques

    International Nuclear Information System (INIS)

    Bai, Er-Wei; Chan, Kung-sik; Eichinger, William; Kump, Paul

    2011-01-01

    We consider the problem of identifying nuclides from weak and poorly resolved spectra. A two-stage algorithm is proposed and tested based on the principle of majority voting. The idea is to model gamma-ray counts as Poisson processes. The average part is taken to be the model, and the difference between the observed gamma-ray counts and the average is treated as random noise. In the linear part, the unknown coefficients indicate whether the isotopes of interest are present or absent. Lasso-type algorithms are applied to find the non-vanishing coefficients. Since the Lasso, or any prediction-error-based algorithm, is inconsistent for variable selection at finite data length, an estimate of the parameter distribution based on subsampling techniques is added on top of the Lasso. Simulation examples are provided in which traditional peak detection algorithms fail to work while the proposed two-stage algorithm performs well in terms of both the False Negative and False Positive errors. - Highlights: → Identification of nuclides from weak and poorly resolved spectra. → An algorithm is proposed and tested based on the principle of majority voting. → Lasso types of algorithms are applied to find non-vanishing coefficients. → An estimate of parameter distribution based on sub-sampling techniques is included. → Simulations compare the results of the proposed method with those of peak detection.

  20. Validating the LASSO algorithm by unmixing spectral signatures in multicolor phantoms

    Science.gov (United States)

    Samarov, Daniel V.; Clarke, Matthew; Lee, Ji Yoon; Allen, David; Litorja, Maritoni; Hwang, Jeeseong

    2012-03-01

    As hyperspectral imaging (HSI) sees increased implementation in the biological and medical fields, it becomes increasingly important that the algorithms being used to analyze the corresponding output be validated. While certainly important under any circumstance, as this technology begins to transition from benchtop to bedside, ensuring that the measurements being given to medical professionals are accurate and reproducible is critical. In order to address these issues, work has been done in generating a collection of datasets which could act as a test bed for algorithm validation. Using a microarray spot printer, a collection of three food color dyes, acid red 1 (AR), brilliant blue R (BBR) and erioglaucine (EG), are mixed together at different concentrations in varying proportions at different locations on a microarray chip. With the concentration and mixture proportions known at each location, an algorithm should in principle, based on HSI estimates of abundances, be able to determine the concentrations and proportions of each dye at each location on the chip. These types of data are particularly important in the context of medical measurements, as the resulting estimated abundances will be used to make critical decisions which can have a serious impact on an individual's health. In this paper we present a novel algorithm for processing and analyzing HSI data based on the LASSO algorithm (similar to "basis pursuit"). The LASSO is a statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundances in an HSI scene, these so-called "sparse" representations provided by the LASSO are appropriate, as not every pixel will be expected to contain every endmember. The algorithm we present takes the general framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. We show our algorithm's improvement
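
    A toy version of sparse unmixing at a single pixel, in the spirit of (but not reproducing) the paper's LASSO approach; the endmember spectra, noise level, and penalty are invented, and the spatial regularization the authors add is omitted.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(3)
      n_bands, n_end = 120, 3                          # analogues of the AR/BBR/EG dyes
      E = np.abs(rng.normal(size=(n_bands, n_end)))    # invented endmember spectra
      abund = np.array([0.6, 0.0, 0.3])                # sparse mixture at one spot
      pixel = E @ abund + 0.01 * rng.normal(size=n_bands)

      # Lasso unmixing of one pixel: nonnegative abundances, with the l1
      # penalty zeroing out absent endmembers (no spatial coupling here).
      fit = Lasso(alpha=1e-3, positive=True).fit(E, pixel)
      print("estimated abundances:", np.round(fit.coef_, 3))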

  1. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.

  2. Matlab implementation of LASSO, LARS, the elastic net and SPCA

    DEFF Research Database (Denmark)

    2005-01-01

    There are a number of interesting variable selection methods available besides the regular forward selection and stepwise selection methods. Such approaches include LASSO (Least Absolute Shrinkage and Selection Operator), least angle regression (LARS) and elastic net (LARS-EN) regression. There also exists a method for calculating principal components with sparse loadings. This software package contains Matlab implementations of these functions. The standard implementations of these functions are available as add-on packages in S-Plus and R.
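
    The record above describes a Matlab package; a rough Python analogue of the elastic net portion, using scikit-learn rather than the package itself, might look like the following sketch (the data and the candidate l1_ratio grid are illustrative).

      import numpy as np
      from sklearn.linear_model import ElasticNetCV

      rng = np.random.default_rng(4)
      X = rng.normal(size=(100, 50))
      beta = np.zeros(50)
      beta[:5] = 2.0
      y = X @ beta + rng.normal(size=100)

      # The elastic net mixes the l1 (lasso) and l2 (ridge) penalties;
      # l1_ratio=1 recovers the plain lasso, small l1_ratio approaches ridge.
      enet = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8, 1.0], cv=5).fit(X, y)
      print("chosen l1_ratio:", enet.l1_ratio_)
      print("nonzero coefficients:", np.flatnonzero(enet.coef_))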

  3. Association between biomarkers and clinical characteristics in chronic subdural hematoma patients assessed with lasso regression.

    Directory of Open Access Journals (Sweden)

    Are Hugo Pripp

    Full Text Available Chronic subdural hematoma (CSDH) is characterized by an "old" encapsulated collection of blood and blood breakdown products between the brain and its outermost covering (the dura). Recognized risk factors for the development of CSDH are head injury, old age and the use of anticoagulation medication, but its underlying pathophysiological processes are still unclear. It is assumed that a complex local process of interrelated mechanisms including inflammation, neomembrane formation, angiogenesis and fibrinolysis could be related to its development and propagation. However, the association between the biomarkers of inflammation and angiogenesis, and the clinical and radiological characteristics of CSDH patients, needs further investigation. The high number of biomarkers compared to the number of observations, the correlation between biomarkers, missing data and skewed distributions may limit the usefulness of classical statistical methods. We therefore explored lasso regression to assess the association between 30 biomarkers of inflammation and angiogenesis at the site of lesions, and selected clinical and radiological characteristics, in a cohort of 93 patients. Lasso regression performs both variable selection and regularization to improve the predictive accuracy and interpretability of the statistical model. The results from the lasso regression analysis showed a lack of robust statistical association between the biomarkers in hematoma fluid and age, gender, brain infarct, neurological deficiencies and volume of hematoma. However, there were associations between several of the biomarkers and postoperative recurrence requiring reoperation. The statistical analysis with lasso regression supported previous findings that the immunological characteristics of CSDH are local. The relationships between biomarkers, the radiological appearance of lesions and recurrence requiring reoperation have been inconclusive using classical statistical methods on these data.

  4. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2016-01-01

    We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...
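
    A hedged sketch of the adaptive Lasso in an autoregression, not the paper's implementation: first-stage OLS estimates supply the data-driven penalty weights, folded into a plain lasso by rescaling the lagged-design columns (the AR(2) truth, candidate order, and penalty level are illustrative choices).

      import numpy as np
      from sklearn.linear_model import Lasso, LinearRegression

      rng = np.random.default_rng(5)
      T, p = 500, 8                     # AR(2) truth inside an AR(8) candidate model
      y = np.zeros(T)
      for t in range(2, T):
          y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
      X = np.column_stack([y[p - k - 1:T - k - 1] for k in range(p)])  # lags 1..p
      yy = y[p:]

      # Adaptive lasso: penalize coordinate j proportionally to 1/|beta_ols_j|,
      # implemented by rescaling the columns before an ordinary lasso fit.
      w = np.abs(LinearRegression().fit(X, yy).coef_)
      fit = Lasso(alpha=0.05).fit(X * w, yy)
      beta_adaptive = fit.coef_ * w
      print("selected lags:", np.flatnonzero(beta_adaptive) + 1)  # expect lags 1 and 2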

  5. Sparse EEG/MEG source estimation via a group lasso.

    Directory of Open Access Journals (Sweden)

    Michael Lim

    Full Text Available Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of source estimates for both the group-lasso and minimum-norm approaches.

  6. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    Science.gov (United States)

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This is an ill-posed problem due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of (L1 norm-based) and (L2 norm-based) constraints such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but it uses the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical. The ELASSO, meanwhile, had not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm based on combining the Empirical Bayes and the iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods are able to recover complicated source setups more accurately, and with a more robust estimation of the hyperparameters and behavior under different sparsity scenarios, than the classical LORETA, ENET and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods

  7. Comparison of partial least squares and lasso regression techniques as applied to laser-induced breakdown spectroscopy of geological samples

    International Nuclear Information System (INIS)

    Dyar, M.D.; Carmosino, M.L.; Breves, E.A.; Ozanne, M.V.; Clegg, S.M.; Wiens, R.C.

    2012-01-01

    A remote laser-induced breakdown spectrometer (LIBS) designed to simulate the ChemCam instrument on the Mars Science Laboratory Rover Curiosity was used to probe 100 geologic samples at a 9-m standoff distance. ChemCam consists of an integrated remote LIBS instrument that will probe samples up to 7 m from the mast of the rover and a remote micro-imager (RMI) that will record context images. The elemental compositions of 100 igneous and highly-metamorphosed rocks are determined with LIBS using three variations of multivariate analysis, with a goal of improving the analytical accuracy. Two forms of partial least squares (PLS) regression are employed with finely-tuned parameters: PLS-1 regresses a single response variable (elemental concentration) against the observation variables (spectra, or intensity at each of 6144 spectrometer channels), while PLS-2 simultaneously regresses multiple response variables (concentrations of the ten major elements in rocks) against the observation predictor variables, taking advantage of natural correlations between elements. Those results are contrasted with those from the multivariate regression technique of the least absolute shrinkage and selection operator (lasso), which is a penalized shrunken regression method that selects the specific channels for each element that explain the most variance in the concentration of that element. To make this comparison, we use results of cross-validation and of held-out testing, and employ unscaled and uncentered spectral intensity data because all of the input variables are already in the same units. Results demonstrate that the lasso, PLS-1, and PLS-2 all yield comparable results in terms of accuracy for this dataset. However, the interpretability of these methods differs greatly in terms of fundamental understanding of LIBS emissions. PLS techniques generate principal components, linear combinations of intensities at any number of spectrometer channels, which explain as much variance in the
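
    The PLS-versus-lasso contrast drawn above can be reproduced in miniature; this sketch uses random stand-ins for spectra (channels far outnumbering samples) and scikit-learn implementations, not the instrument data or the authors' tuned models.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.linear_model import LassoCV
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(6)
      n, p = 100, 600                  # far more "channels" than samples
      X = rng.normal(size=(n, p))      # stand-in for LIBS spectra
      y = X[:, :10] @ rng.normal(size=10) + 0.1 * rng.normal(size=n)

      # PLS compresses all channels into a few latent components, whereas the
      # lasso keeps a small, directly interpretable set of individual channels.
      pls_r2 = cross_val_score(PLSRegression(n_components=5), X, y, cv=5).mean()
      lasso_r2 = cross_val_score(LassoCV(cv=5), X, y, cv=5).mean()
      n_kept = np.count_nonzero(LassoCV(cv=5).fit(X, y).coef_)
      print(f"PLS R2 {pls_r2:.3f} | lasso R2 {lasso_r2:.3f} | channels kept {n_kept}")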

  8. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure.

    Science.gov (United States)

    Li, Yanming; Nan, Bin; Zhu, Ji

    2015-06-01

    We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functional groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society.

  9. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

    Full Text Available We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme makes two important contributions. The first is that we learn a patch-based adaptive dictionary by principal component analysis (PCA), clustering the image into many subsets, which can better preserve the local geometric structure. The second is that we code the patches in each subset by the fused lasso with the cluster-learned dictionary, and propose an iterative Split Bregman method to solve it rapidly. We demonstrate the capabilities of the scheme with several experiments. The results show that the proposed scheme is competitive with some excellent denoising algorithms.
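
    The fused-lasso coding step can be illustrated on a 1-D signal; the sketch below assumes the cvxpy package and a generic convex solver in place of the authors' iterative Split Bregman scheme, with an invented signal and penalty weights.

      import cvxpy as cp
      import numpy as np

      rng = np.random.default_rng(7)
      truth = np.repeat([0.0, 2.0, 0.0, -1.0], 50)    # piecewise-constant signal
      y = truth + 0.5 * rng.normal(size=truth.size)

      b = cp.Variable(truth.size)
      lam1, lam2 = 0.1, 2.0                           # sparsity / fusion weights
      objective = cp.Minimize(0.5 * cp.sum_squares(y - b)
                              + lam1 * cp.norm1(b)            # lasso term
                              + lam2 * cp.norm1(cp.diff(b)))  # fusion term
      cp.Problem(objective).solve()
      jumps = np.flatnonzero(np.abs(np.diff(b.value)) > 0.1)
      print("recovered change points near:", jumps)           # truth: 49, 99, 149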

  10. Comparison of partial least squares and lasso regression techniques as applied to laser-induced breakdown spectroscopy of geological samples

    Energy Technology Data Exchange (ETDEWEB)

    Dyar, M.D., E-mail: mdyar@mtholyoke.edu [Dept. of Astronomy, Mount Holyoke College, 50 College St., South Hadley, MA 01075 (United States); Carmosino, M.L.; Breves, E.A.; Ozanne, M.V. [Dept. of Astronomy, Mount Holyoke College, 50 College St., South Hadley, MA 01075 (United States); Clegg, S.M.; Wiens, R.C. [Los Alamos National Laboratory, P.O. Box 1663, MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    A remote laser-induced breakdown spectrometer (LIBS) designed to simulate the ChemCam instrument on the Mars Science Laboratory Rover Curiosity was used to probe 100 geologic samples at a 9-m standoff distance. ChemCam consists of an integrated remote LIBS instrument that will probe samples up to 7 m from the mast of the rover and a remote micro-imager (RMI) that will record context images. The elemental compositions of 100 igneous and highly-metamorphosed rocks are determined with LIBS using three variations of multivariate analysis, with a goal of improving the analytical accuracy. Two forms of partial least squares (PLS) regression are employed with finely-tuned parameters: PLS-1 regresses a single response variable (elemental concentration) against the observation variables (spectra, or intensity at each of 6144 spectrometer channels), while PLS-2 simultaneously regresses multiple response variables (concentrations of the ten major elements in rocks) against the observation predictor variables, taking advantage of natural correlations between elements. Those results are contrasted with those from the multivariate regression technique of the least absolute shrinkage and selection operator (lasso), which is a penalized shrunken regression method that selects the specific channels for each element that explain the most variance in the concentration of that element. To make this comparison, we use results of cross-validation and of held-out testing, and employ unscaled and uncentered spectral intensity data because all of the input variables are already in the same units. Results demonstrate that the lasso, PLS-1, and PLS-2 all yield comparable results in terms of accuracy for this dataset. However, the interpretability of these methods differs greatly in terms of fundamental understanding of LIBS emissions. PLS techniques generate principal components, linear combinations of intensities at any number of spectrometer channels, which explain as much variance in the

  11. YM2: Continuum expectations, lattice convergence, and lassos

    International Nuclear Information System (INIS)

    Driver, B.K.

    1989-01-01

    The two dimensional Yang-Mills theory (YM2) is analyzed in both the continuum and the lattice. In the complete axial gauge the continuum theory may be defined in terms of a Lie algebra valued white noise, and parallel translation may be defined by stochastic differential equations. This machinery is used to compute the expectations of gauge invariant functions of the parallel translation operators along a collection of curves C. The expectation values are expressed as finite dimensional integrals with densities that are products of the heat kernel on the structure group. The time parameters of the heat kernels are determined by the areas enclosed by the collection C, and the arguments are determined by the crossing topologies of the curves in C. The expectations for the Wilson lattice models have a similar structure, and from this it follows that in the limit of small lattice spacing the lattice expectations converge to the continuum expectations. It is also shown that the lasso variables advocated by L. Gross exist and are sufficient to generate all the measurable functions on the YM2-measure space. (orig.)

  12. LASSO observations at McDonald and OCA/CERGA: A preliminary analysis

    Science.gov (United States)

    Veillet, CH.; Fridelance, P.; Feraudy, D.; Boudon, Y.; Shelus, P. J.; Ricklefs, R. L.; Wiant, J. R.

    1993-01-01

    The Laser Synchronization from Synchronous Orbit (LASSO) observations between the USA and Europe were made possible with the move of Meteosat 3/P2 toward 50 deg W. Two Lunar Laser Ranging stations participated in the observations: the MLRS at McDonald Observatory (Texas, USA) and OCA/CERGA (Grasse, France). Common sessions have been performed since 30 Apr. 1992 and will continue up to the next Meteosat 3/P2 move further west (planned for January 1993). The preliminary analysis made with the data collected by the end of Nov. 1992 shows that the precision obtainable from LASSO is better than 100 ps, the accuracy depending on how well the stations maintain their time metrology, as well as on the quality of the calibration (still to be made). For extracting such precision from the data, the processing has been changed drastically compared to the initial LASSO data analysis. It now takes into account all the measurements made: the timings on board and the echoes at each station. This complete use of the data dramatically increased the confidence in the synchronization results.

  13. Logistic LASSO regression for the diagnosis of breast cancer using clinical demographic data and the BI-RADS lexicon for ultrasonography

    Directory of Open Access Journals (Sweden)

    Sun Mi Kim

    2018-01-01

    Full Text Available Purpose The aim of this study was to compare the performance of image analysis for predicting breast cancer using two distinct regression models and to evaluate the usefulness of incorporating clinical and demographic data (CDD) into the image analysis in order to improve the diagnosis of breast cancer. Methods This study included 139 solid masses from 139 patients who underwent an ultrasonography-guided core biopsy and had available CDD between June 2009 and April 2010. Three breast radiologists retrospectively reviewed 139 breast masses and described each lesion using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We applied and compared two regression methods, stepwise logistic (SL) regression and logistic least absolute shrinkage and selection operator (LASSO) regression, in which the BI-RADS descriptors and CDD were used as covariates. We investigated the performances of these regression methods and the agreement of radiologists in terms of test misclassification error and the area under the curve (AUC) of the tests. Results Logistic LASSO regression was superior (P<0.05) to SL regression, regardless of whether CDD was included in the covariates, in terms of test misclassification errors (0.234 vs. 0.253, without CDD; 0.196 vs. 0.258, with CDD) and AUC (0.785 vs. 0.759, without CDD; 0.873 vs. 0.735, with CDD). However, it was inferior (P<0.05) to the agreement of the three radiologists in terms of test misclassification errors (0.234 vs. 0.168, without CDD; 0.196 vs. 0.088, with CDD) and the AUC without CDD (0.785 vs. 0.844, P<0.001), but was comparable to the AUC with CDD (0.873 vs. 0.880, P=0.141). Conclusion Logistic LASSO regression based on BI-RADS descriptors and CDD showed better performance than SL in predicting the presence of breast cancer. The use of CDD as a supplement to the BI-RADS descriptors significantly improved the prediction of breast cancer using logistic LASSO regression.

  14. LASSO-ligand activity by surface similarity order: a new tool for ligand based virtual screening.

    Science.gov (United States)

    Reid, Darryl; Sadjad, Bashir S; Zsoldos, Zsolt; Simon, Aniko

    2008-01-01

    Virtual Ligand Screening (VLS) has become an integral part of the drug discovery process for many pharmaceutical companies. Ligand similarity searches provide a very powerful method of screening large databases of ligands to identify possible hits. If these hits belong to new chemotypes, the method is deemed even more successful. eHiTS LASSO uses a new interacting surface point types (ISPT) molecular descriptor that is generated from the 3D structure of the ligand, but unlike most 3D descriptors it is conformation independent. Combined with a neural network machine learning technique, LASSO screens molecular databases at an ultra-fast speed of 1 million structures in under 1 min on a standard PC. The results obtained from eHiTS LASSO trained on relatively small training sets of just 2, 4 or 8 actives are presented using the diverse Directory of Useful Decoys (DUD) dataset. It is shown that over a wide range of receptor families, eHiTS LASSO is consistently able to enrich screened databases and provides scaffold hopping ability.

  15. Geographically weighted lasso (GWL) study for modeling the diarrheic to achieve open defecation free (ODF) target

    Science.gov (United States)

    Arumsari, Nurvita; Sutidjo, S. U.; Brodjol; Soedjono, Eddy S.

    2014-03-01

    Diarrhea has been one of the main causes of morbidity and mortality among children around the world, especially in developing countries. Available data indicate that the implementation of sanitary and healthy lifestyles by the inhabitants was not yet adequate, and that inadequate environmental conditions and the limited availability of health services were suspected factors influencing the occurrence of diarrhea, reflected in a heightened percentage of the diarrheic. This research is aimed at modelling the diarrheic by using the Geographically Weighted Lasso (GWL) method. Spatial heterogeneity, tested with the Breusch-Pagan test, showed that modelling the diarrheic with weighted regression, specifically GWR and GWL, can explain the variation in each location. However, the absence of multicollinearity among the predictor variables affecting the diarrheic resulted in GWR and GWL models that were not different, indeed essentially identical, as shown by the resulting MSE values. Meanwhile, the R2 value, which was generally higher for the GWL model, indicated significant predictor variables based on a larger parameter shrinkage value.
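
    One common way to realize a geographically weighted lasso is to kernel-weight observations by distance from a target site and fold square-root weights into the design before an ordinary lasso fit; the sketch below is a generic illustration of that reduction (coordinates, bandwidth, and penalty are invented), not the paper's estimator.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(8)
      n, p = 200, 6
      coords = rng.uniform(size=(n, 2))     # locations of the observation units
      X = rng.normal(size=(n, p))
      beta = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0])
      y = X @ beta + rng.normal(size=n)

      def gwl_fit(site, bandwidth=0.2, alpha=0.05):
          # Gaussian kernel weights by distance to the target site; folding
          # sqrt-weights into X and y turns weighted least squares into an
          # ordinary lasso problem (data simulated without an intercept).
          d = np.linalg.norm(coords - site, axis=1)
          sw = np.sqrt(np.exp(-0.5 * (d / bandwidth) ** 2))[:, None]
          return Lasso(alpha=alpha, fit_intercept=False).fit(X * sw, y * sw.ravel()).coef_

      print("local coefficients at (0.5, 0.5):",
            np.round(gwl_fit(np.array([0.5, 0.5])), 2))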

  16. IPF-LASSO: Integrative L1-Penalized Regression with Penalty Factors for Prediction Based on Multi-Omics Data

    Directory of Open Access Journals (Sweden)

    Anne-Laure Boulesteix

    2017-01-01

    Full Text Available As modern biotechnologies advance, it has become increasingly frequent that different modalities of high-dimensional molecular data (termed "omics" data in this paper), such as gene expression, methylation, and copy number, are collected from the same patient cohort to predict the clinical outcome. While prediction based on omics data has been widely studied in the last fifteen years, little has been done in the statistical literature on the integration of multiple omics modalities to select a subset of variables for prediction, which is a critical task in personalized medicine. In this paper, we propose a simple penalized regression method to address this problem by assigning different penalty factors to different data modalities for feature selection and prediction. The penalty factors can be chosen in a fully data-driven fashion by cross-validation or by taking practical considerations into account. In simulation studies, we compare the prediction performance of our approach, called IPF-LASSO (Integrative LASSO with Penalty Factors) and implemented in the R package ipflasso, with the standard LASSO and the sparse group LASSO. The use of IPF-LASSO is also illustrated through applications to two real-life cancer datasets. All data and codes are available on the companion website to ensure reproducibility.
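
    The modality-specific penalty factors of IPF-LASSO can be emulated with a standard L1 solver by dividing each modality's columns by its factor and rescaling the coefficients afterwards; the sketch below illustrates this trick on synthetic data (sizes, factors, and C are assumptions) rather than using the ipflasso package itself.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(9)
      n, p1, p2 = 120, 100, 40
      X_expr = rng.normal(size=(n, p1))    # stand-in: expression modality
      X_meth = rng.normal(size=(n, p2))    # stand-in: methylation modality
      eta = X_expr[:, 0] - X_meth[:, 0]
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))

      # Dividing a modality's columns by its penalty factor before a single
      # l1 fit penalizes that modality proportionally harder; coefficients
      # are rescaled back afterwards. The factors here are hypothetical.
      pf = np.concatenate([np.full(p1, 2.0), np.full(p2, 1.0)])
      X = np.hstack([X_expr, X_meth]) / pf
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
      coef = clf.coef_.ravel() / pf
      print("kept per modality:",
            np.count_nonzero(coef[:p1]), np.count_nonzero(coef[p1:]))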

  17. Factors associated with performing tuberculosis screening of HIV-positive patients in Ghana: LASSO-based predictor selection in a large public health data set

    Directory of Open Access Journals (Sweden)

    Susanne Mueller-Using

    2016-07-01

    Full Text Available Abstract Background The purpose of this study is to propose the Least Absolute Shrinkage and Selection Operator (LASSO) procedure as an alternative to conventional variable selection models, as it allows for easy interpretation and handles multicollinearities. We developed a model on the basis of LASSO-selected parameters in order to link associated demographic, socio-economic, clinical and immunological factors to the performance of tuberculosis screening in HIV-positive patients in Ghana. Methods Applying the LASSO method and multivariate logistic regression analysis to a large public health data set, we selected relevant predictors related to tuberculosis screening. Results One thousand and ninety-five patients infected with HIV were enrolled into this study, with 691 (63.2 %) of them having tuberculosis screening documented in their patient folders. Predictors found to be significantly associated with the performance of tuberculosis screening can be classified into factors related to the clinician's perception of the clinical state, as well as those related to PLHIV's awareness. These factors include newly diagnosed HIV infections (n = 354 (32.42 %), aOR 1.84), current CD4+ T cell count (aOR 0.92), non-availability of HIV type (n = 787 (72.07 %), aOR 0.56), chronic cough (n = 32 (2.93 %), aOR 5.07), intake of co-trimoxazole (n = 271 (24.82 %), aOR 2.31), vitamin supplementation (n = 220 (20.15 %), aOR 2.64), as well as the use of mosquito bed nets (n = 613 (56.14 %), aOR 1.53). Conclusions Accelerated TB screening among newly diagnosed HIV patients indicates that application of the WHO screening form for intensifying tuberculosis case finding among HIV-positive individuals in resource-limited settings is increasingly adopted. However, screening for TB in PLHIV is still impacted by the clinician's perception of the patient's health state and PLHIV's health awareness. Education of staff, counselling of PLHIV and sufficient financing are

  18. OPTIMAL WAVELENGTH SELECTION ON HYPERSPECTRAL DATA WITH FUSED LASSO FOR BIOMASS ESTIMATION OF TROPICAL RAIN FOREST

    Directory of Open Access Journals (Sweden)

    T. Takayama

    2016-06-01

    Full Text Available Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which commonly manifests as overfitting when using high-dimensional data in which the number of training samples is smaller than the dimensionality of the samples, owing to the limited time, cost, and human resources available for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to the narrow bandwidth, and exhibit local or global shifts of peaks due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model while encouraging sparsity and grouping; the sparsity provides the dimensionality reduction that solves the small-sample-size problem, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, the fusion of spectral information with spatial information derived from a texture index increased the prediction accuracy further, with an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of the fused lasso and image texture in biomass estimation of tropical forests.

  19. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; they include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent methods. The best parameters and feature subset are selected by using 10-fold cross-validation. Finally, the test data are classified using the trained model. Compared with existing channel and feature selection methods, the results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from the international BCI Competition IV reached 84.72%.
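
    At the core of such methods is the sparse group lasso penalty, whose proximal operator is entrywise soft-thresholding followed by groupwise shrinkage; below is a minimal NumPy sketch of that operator (toy vector, two hypothetical "channels" as groups), not the paper's full descent algorithm.

      import numpy as np

      def prox_sparse_group_lasso(v, groups, lam1, lam2):
          # Proximal operator of lam1*||b||_1 + lam2*sum_g ||b_g||_2:
          # entrywise soft-thresholding, then blockwise group shrinkage.
          b = np.sign(v) * np.maximum(np.abs(v) - lam1, 0.0)
          for g in np.unique(groups):
              idx = groups == g
              norm = np.linalg.norm(b[idx])
              b[idx] *= max(0.0, 1.0 - lam2 / norm) if norm > 0 else 0.0
          return b

      v = np.array([3.0, -0.2, 0.1, 2.5, -2.0, 0.05])
      groups = np.array([0, 0, 0, 1, 1, 1])   # two toy "channels", three features each
      print(prox_sparse_group_lasso(v, groups, lam1=0.5, lam2=1.0))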

  20. Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

    KAUST Repository

    Sun, Ying; Wang, Huixia J.; Fuentes, Montserrat

    2015-01-01

    and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without

  1. Sungsanpin, a lasso peptide from a deep-sea streptomycete.

    Science.gov (United States)

    Um, Soohyun; Kim, Young-Joo; Kwon, Hyuknam; Wen, He; Kim, Seong-Hwan; Kwon, Hak Cheol; Park, Sunghyouk; Shin, Jongheon; Oh, Dong-Chan

    2013-05-24

    Sungsanpin (1), a new 15-amino-acid peptide, was discovered from a Streptomyces species isolated from deep-sea sediment collected off Jeju Island, Korea. The planar structure of 1 was determined by 1D and 2D NMR spectroscopy, mass spectrometry, and UV spectroscopy. The absolute configurations of the stereocenters in this compound were assigned by derivatizations of the hydrolysate of 1 with Marfey's reagents and 2,3,4,6-tetra-O-acetyl-β-d-glucopyranosyl isothiocyanate, followed by LC-MS analysis. Careful analysis of the ROESY NMR spectrum and three-dimensional structure calculations revealed that sungsanpin possesses the features of a lasso peptide: eight amino acids (-Gly(1)-Phe-Gly-Ser-Lys-Pro-Ile-Asp(8)-) that form a cyclic peptide and seven amino acids (-Ser(9)-Phe-Gly-Leu-Ser-Trp-Leu(15)) that form a tail that loops through the ring. Sungsanpin is thus the first example of a lasso peptide isolated from a marine-derived microorganism. Sungsanpin displayed inhibitory activity in a cell invasion assay with the human lung cancer cell line A549.

  2. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.
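
    A scaled-down version of such a comparison, repeated cross-validated AUC for an L1-penalized logistic model versus forward stepwise selection, can be sketched with scikit-learn on synthetic data; the predictor counts and effect sizes are invented, and the BMA arm is omitted.

      import numpy as np
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(10)
      n, p = 300, 25                       # stand-ins for dose/clinical predictors
      X = rng.normal(size=(n, p))
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))

      cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
      models = {
          "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
          "stepwise": make_pipeline(
              SequentialFeatureSelector(LogisticRegression(), n_features_to_select=5),
              LogisticRegression()),
      }
      for name, model in models.items():
          auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
          print(f"{name}: AUC {auc.mean():.3f} +/- {auc.std():.3f}")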

  3. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  4. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  5. Lasso and probabilistic inequalities for multivariate point processes

    OpenAIRE

    Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent

    2012-01-01

    Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select coefficients, we propose an adaptive $\\ell_{1}$-penalization methodology, where data-driven weights of the penalty are derived from new Bernstein type inequalities for martingales. Oracle inequalities...

  6. Breast cancer tumor classification using LASSO method selection approach

    International Nuclear Information System (INIS)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M.

    2016-10-01

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed to automatically classify tumor lesions as malign or benign as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with high probability of malignant findings, aiding in the detection of potential malign cases as a second opinion to the radiologist. (Author)

  7. Breast cancer tumor classification using LASSO method selection approach

    Energy Technology Data Exchange (ETDEWEB)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed to automatically classify tumor lesions as malign or benign as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with high probability of malignant findings, aiding in the detection of potential malign cases as a second opinion to the radiologist. (Author)

  8. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    Science.gov (United States)

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853

  9. Similarity regularized sparse group lasso for cup to disc ratio computation.

    Science.gov (United States)

    Cheng, Jun; Zhang, Zhuo; Tao, Dacheng; Wong, Damon Wing Kee; Liu, Jiang; Baskaran, Mani; Aung, Tin; Wong, Tien Yin

    2017-08-01

    Automatic cup to disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image based on a set of reference disc images by integrating the similarity between the testing and the reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated using 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under the curve in the diagnostic test reach 0.843 and 0.837 when manual and automatically segmented discs are used, respectively, better than other methods as well.

  10. Discovery and replication of gene influences on brain structure using LASSO regression

    Directory of Open Access Journals (Sweden)

Omid Kohannim

    2012-08-01

    Full Text Available We implemented LASSO (least absolute shrinkage and selection operator) regression to evaluate gene effects in genome-wide association studies (GWAS) of brain images, using an MRI-derived temporal lobe volume measure from 729 subjects scanned as part of the Alzheimer's Disease Neuroimaging Initiative (ADNI). Sparse groups of SNPs in individual genes were selected by LASSO, which identifies efficient sets of variants influencing the data. These SNPs were considered jointly when assessing their association with neuroimaging measures. We discovered 22 genes that passed genome-wide significance for influencing temporal lobe volume. This was a substantially greater number of significant genes compared to those found with standard, univariate GWAS. These top genes are all expressed in the brain and include genes previously related to brain function or neuropsychiatric disorders such as MACROD2, SORCS2, GRIN2B, MAGI2, NPAS3, CLSTN2, GABRG3, NRXN3, PRKAG2, GAS7, RBFOX1, ADARB2, CHD4 and CDH13. The top genes we identified with this method also displayed significant and widespread post-hoc effects on voxelwise, tensor-based morphometry (TBM) maps of the temporal lobes. The most significantly associated gene was an autism susceptibility gene known as MACROD2. We were able to successfully replicate the effect of the MACROD2 gene in an independent cohort of 564 young, healthy Australian adult twins and siblings scanned with MRI (mean age: 23.8±2.2 SD years). In exploratory analyses, three selected SNPs in the MACROD2 gene were also significantly associated with performance intelligence quotient (PIQ). Our approach powerfully complements univariate techniques in detecting influences of genes on the living brain.

  11. Pierced Lasso Bundles are a new class of knot-like motifs.

    Directory of Open Access Journals (Sweden)

    Ellinor Haglund

    2014-06-01

    Full Text Available A four-helix bundle is a well-characterized motif often used as a target for designed pharmaceutical therapeutics and nutritional supplements. Recently, we discovered a new structural complexity within this motif created by a disulphide bridge in the long-chain helical bundle cytokine leptin. When oxidized, leptin contains a disulphide bridge creating a covalent loop through which part of the polypeptide chain is threaded (as seen in knotted proteins). We explored whether other proteins contain a similar intriguing knot-like structure as in leptin and discovered 11 structurally homologous proteins in the PDB. We call this new helical family class the Pierced Lasso Bundle (PLB) and the knot-like threaded structural motif a Pierced Lasso (PL). In the current study, we use structure-based simulation to investigate the threading/folding mechanisms for all the PLBs along with three unthreaded homologs, as the covalent loop (or lasso) in leptin is important in folding dynamics and activity. We find that the presence of a small covalent loop leads to a mechanism where structural elements slipknot to thread through the covalent loop. Larger loops use a piercing mechanism where the free terminal plugs through the covalent loop. Remarkably, the position of the loop as well as its size influences the native state dynamics, which can impact receptor binding and biological activity. This previously unrecognized complexity of knot-like proteins within the helical bundle family comprises a completely new class within the knot family, and the hidden complexity we unraveled in the PLBs is expected to be found in other protein structures outside the four-helix bundles. The insights gained here provide critical new elements for future investigation of this emerging class of proteins, where function and the energetic landscape can be controlled by hidden topology, and should be taken into account in ab initio predictions of newly identified protein targets.

  12. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaë l

    2017-01-01

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically-oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink back the predictor set to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., 95% quantile of Elevation and 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is here made available for the community to replicate analogous experiments.

  14. On the Oracle Property of the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    We show that the Adaptive LASSO is oracle efficient in stationary and non-stationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...
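
    The adaptive LASSO behind this result can be implemented as an ordinary lasso on weight-rescaled columns. A minimal sketch, assuming scikit-learn; the pilot estimator, penalty level, and data are illustrative choices, not the paper's setup:

        import numpy as np
        from sklearn.linear_model import Lasso, Ridge

        rng = np.random.default_rng(1)
        n, p = 200, 20
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:3] = [1.5, -2.0, 1.0]               # sparse truth
        y = X @ beta + rng.normal(size=n)

        # Stage 1: a consistent pilot estimate (ridge here) defines the weights;
        # coefficients with small pilot values receive a heavier penalty.
        pilot = Ridge(alpha=1.0).fit(X, y).coef_
        w = 1.0 / (np.abs(pilot) + 1e-6)

        # Stage 2: the weighted L1 problem equals a plain lasso on rescaled columns.
        fit = Lasso(alpha=0.1).fit(X / w, y)
        coef = fit.coef_ / w                      # map back to the original scale
        print("selected variables:", np.flatnonzero(coef))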

  15. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: simulation and application

    Directory of Open Access Journals (Sweden)

    Monica M. Vasquez

    2016-11-01

    Full Text Available Abstract Background The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. The Least Absolute Shrinkage and Selection Operator (LASSO) is a data analysis method that may be utilized for biomarker selection in these high dimensional data. However, it is unclear which LASSO-type method is preferable when considering data scenarios that may be present in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and a sparse number of true signals. The goal of this study was to compare the LASSO to five LASSO-type methods given these scenarios. Methods A simulation study was performed to compare the LASSO, Adaptive LASSO, Elastic Net, Iterated LASSO, Bootstrap-Enhanced LASSO, and Weighted Fusion for the binary logistic regression model. The simulation study was designed to reflect the data structure of the population-based Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD), specifically the sample size (N = 1000 for total population, 500 for sub-analyses), correlation of biomarkers (0.20, 0.50, 0.80), prevalence of overweight (40%) and obese (12%) outcomes, and the association of outcomes with standardized serum biomarker concentrations (log-odds ratio = 0.05–1.75). Each LASSO-type method was then applied to the TESAOD data of 306 overweight, 66 obese, and 463 normal-weight subjects with a panel of 86 serum biomarkers. Results Based on the simulation study, no method had an overall superior performance. The Weighted Fusion correctly identified more true signals, but incorrectly included more noise variables. The LASSO and Elastic Net correctly identified many true signals and excluded more noise variables. In the application study, biomarkers of overweight and obesity selected by all methods were Adiponectin, Apolipoprotein H, Calcitonin, CD14, Complement 3, C-reactive protein, Ferritin

  16. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: simulation and application.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Chen, Zhao; Halonen, Marilyn; Guerra, Stefano

    2016-11-14

    The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. The Least Absolute Shrinkage and Selection Operator (LASSO) is a data analysis method that may be utilized for biomarker selection in these high dimensional data. However, it is unclear which LASSO-type method is preferable when considering data scenarios that may be present in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and sparse number of true signals. The goal of this study was to compare the LASSO to five LASSO-type methods given these scenarios. A simulation study was performed to compare the LASSO, Adaptive LASSO, Elastic Net, Iterated LASSO, Bootstrap-Enhanced LASSO, and Weighted Fusion for the binary logistic regression model. The simulation study was designed to reflect the data structure of the population-based Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD), specifically the sample size (N = 1000 for total population, 500 for sub-analyses), correlation of biomarkers (0.20, 0.50, 0.80), prevalence of overweight (40%) and obese (12%) outcomes, and the association of outcomes with standardized serum biomarker concentrations (log-odds ratio = 0.05-1.75). Each LASSO-type method was then applied to the TESAOD data of 306 overweight, 66 obese, and 463 normal-weight subjects with a panel of 86 serum biomarkers. Based on the simulation study, no method had an overall superior performance. The Weighted Fusion correctly identified more true signals, but incorrectly included more noise variables. The LASSO and Elastic Net correctly identified many true signals and excluded more noise variables. In the application study, biomarkers of overweight and obesity selected by all methods were Adiponectin, Apolipoprotein H, Calcitonin, CD14, Complement 3, C-reactive protein, Ferritin
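
    Two of the compared estimators, the LASSO and the Elastic Net, differ only in the penalty mix for the same logistic model. A minimal sketch, assuming scikit-learn; the simulated panel and penalty settings are illustrative, not the TESAOD data:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        # Correlated predictors mimic a multiplex biomarker panel.
        X, y = make_classification(n_samples=500, n_features=86, n_informative=8,
                                   n_redundant=20, random_state=2)

        lasso = LogisticRegression(penalty="l1", solver="saga", C=0.1,
                                   max_iter=5000).fit(X, y)
        enet = LogisticRegression(penalty="elasticnet", solver="saga", C=0.1,
                                  l1_ratio=0.5, max_iter=5000).fit(X, y)
        for name, m in [("LASSO", lasso), ("Elastic Net", enet)]:
            print(name, "selected", np.count_nonzero(m.coef_), "biomarkers")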

  17. Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

    KAUST Repository

    Sun, Ying

    2015-09-01

    Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.

  18. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    Science.gov (United States)

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and of other methods, including the conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. Overall, the two newly proposed procedures were stable across the various simulation scenarios, demonstrating higher power and a lower false positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance, and were able to select a more stringent set of factors. The individual history of hepatitis B vaccination and the family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of
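
    The bootstrap ranking idea, resampling the data and keeping predictors that are selected consistently often, can be sketched as follows, assuming scikit-learn; the 0.8 threshold and the penalty level are illustrative, not the paper's tuned values:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        X, y = make_classification(n_samples=300, n_features=40, n_informative=5,
                                   random_state=3)
        n, p = X.shape
        rng = np.random.default_rng(3)
        freq = np.zeros(p)
        B = 100
        for _ in range(B):
            idx = rng.integers(0, n, n)           # bootstrap resample
            m = LogisticRegression(penalty="l1", solver="liblinear",
                                   C=0.2).fit(X[idx], y[idx])
            freq += (m.coef_[0] != 0)             # tally selection per predictor
        print("stable predictors:", np.flatnonzero(freq / B >= 0.8))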

  19. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size (n) is less than the dimension (d), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.
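
    The frequentist penalty that arises as a special case of these priors corresponds to the graphical lasso. A minimal sketch of that frequentist analogue (not the BCLASSO sampler), assuming scikit-learn and a synthetic sparse precision matrix:

        import numpy as np
        from sklearn.covariance import GraphicalLassoCV
        from sklearn.datasets import make_sparse_spd_matrix

        rng = np.random.default_rng(4)
        prec = make_sparse_spd_matrix(25, alpha=0.9, random_state=4)  # sparse truth
        X = rng.multivariate_normal(np.zeros(25), np.linalg.inv(prec), size=60)

        # L1-penalized maximum likelihood for the precision matrix, with the
        # penalty level chosen by cross-validation.
        gl = GraphicalLassoCV().fit(X)
        off_diag = gl.precision_[np.triu_indices(25, 1)]
        print("recovered edges:", int((np.abs(off_diag) > 1e-4).sum()))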

  20. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    Science.gov (United States)

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
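
    The FIM-based ranking rests on the output sensitivities, i.e. the Jacobian of the model output with respect to the parameters. A minimal sketch with a hypothetical stand-in for the head-tracking model, assuming NumPy and i.i.d. Gaussian measurement noise:

        import numpy as np

        def simulate(theta, t):
            # Hypothetical stand-in for the tracking model output y(t; theta).
            a, b, c = theta
            return a * np.exp(-b * t) * np.cos(c * t)

        theta0 = np.array([1.0, 0.5, 2.0])
        t = np.linspace(0.0, 5.0, 200)

        # Central-difference Jacobian of the output w.r.t. each parameter.
        eps = 1e-6
        J = np.column_stack([
            (simulate(theta0 + eps * e, t) - simulate(theta0 - eps * e, t)) / (2 * eps)
            for e in np.eye(len(theta0))
        ])
        FIM = J.T @ J / 0.01**2   # assumed noise standard deviation sigma = 0.01
        # Parameters carrying the most information are the best candidates to estimate.
        print("sensitivity ranking:", np.argsort(-np.diag(FIM)))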

  1. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    Science.gov (United States)

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

    To investigate a method of "multi-activity-index evaluation and combination optimization of multi-components" for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiment, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, evolutionary optimization algorithm, and validation experiment, we optimized the combination of Jiangzhi granules based on the activity indexes of blood serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissues, and the ratio of liver tissue weight to body weight. Analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) for multi-activity-index evaluation was more reasonable and objective, as it reflected both the rank information of the activity indexes and the objective sample data. LASSO modeling could accurately reflect the relationship between the different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better values of the comprehensive activity indexes than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.

  2. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is, possibly, larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency), and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows...
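
    In an autoregression, the adaLASSO acts on a design matrix of lagged values. A minimal sketch, assuming scikit-learn, with an OLS pilot fit supplying the adaptive weights (the AR(2) series and lag budget are illustrative):

        import numpy as np
        from sklearn.linear_model import LassoCV, LinearRegression

        rng = np.random.default_rng(5)
        T, phi = 600, np.array([0.6, -0.3])       # true AR(2) dynamics
        y = np.zeros(T)
        for t in range(2, T):
            y[t] = phi @ y[t-2:t][::-1] + rng.normal()

        max_lag = 10                              # candidate lags 1..10
        X = np.column_stack([y[max_lag - k:-k] for k in range(1, max_lag + 1)])
        target = y[max_lag:]

        pilot = LinearRegression().fit(X, target).coef_   # pilot for the weights
        w = 1.0 / (np.abs(pilot) + 1e-6)
        fit = LassoCV(cv=5).fit(X / w, target)            # weighted L1 on lags
        print("selected lags:", np.flatnonzero(fit.coef_ / w) + 1)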

  3. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces self-adaptive shrinkage amount on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchal GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, by varying the overlap among groups, group size, number of non-null groups, and the correlation within group. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.

  4. EM Adaptive LASSO – A Multilocus Modeling Strategy for Detecting SNPs Associated With Zero-inflated Count Phenotypes

    Directory of Open Access Journals (Sweden)

    Himel eMallick

    2016-03-01

    Full Text Available Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to the presence of excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely

  5. COMPARISON OF LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR AND PARTIAL LEAST SQUARES ANALYSES (Case Study: Microarray Data)

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Full Text Available Linear regression analysis is one of the parametric statistical methods which utilize the relationship between two or more quantitative variables. In linear regression analysis, several assumptions must be met: the errors are normally distributed, uncorrelated, and have constant (homogeneous) variance. Several constraints can cause these assumptions not to be met, for example, correlation between independent variables (multicollinearity) or constraints on the number of data points and independent variables obtained. When the number of samples obtained is less than the number of independent variables, the data are called microarray data. Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to overcome microarray data, overfitting, and multicollinearity. It is therefore worthwhile to compare the LASSO and PLS methods. This study uses data from coronary heart disease and stroke patients, which are microarray data containing multicollinearity. With these two characteristics of the data, where most independent variables are only weakly correlated, the LASSO method produces a better model than PLS, as seen from the larger RMSEP obtained with PLS.
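
    The comparison reduces to cross-validated RMSEP for the two estimators on the same n << p data. A minimal sketch, assuming scikit-learn; the simulated matrix stands in for the patient data:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.linear_model import LassoCV
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        n, p = 50, 200                            # microarray-like: n < p
        Z = rng.normal(size=(n, 5))               # latent factors induce correlation
        X = Z @ rng.normal(size=(5, p)) + 0.5 * rng.normal(size=(n, p))
        y = Z[:, 0] + 0.1 * rng.normal(size=n)

        for name, est in [("LASSO", LassoCV(cv=5, max_iter=50000)),
                          ("PLS", PLSRegression(n_components=5))]:
            mse = -cross_val_score(est, X, y, cv=5,
                                   scoring="neg_mean_squared_error").mean()
            print(name, "RMSEP:", round(float(np.sqrt(mse)), 3))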

  6. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    Science.gov (United States)

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny

    2016-09-01

    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.

  7. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.

  8. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable as compared to single biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of measurement error is assumed to be known or estimated with replication data, a simple measurement error correction method can be applied to the LASSO method. However, in practice the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the need for a greater amount of sample, which is often limited in quantity. We adapt an existing bias correction approach by estimating the measurement error using validation data, in which a subset of serum biomarkers is re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
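
    One generic way to use such validation data is regression calibration: learn E[X | W] on the re-measured subset and run the LASSO on the calibrated predictors. A rough sketch of that idea (not the authors' exact bias-correction estimator), assuming scikit-learn and synthetic data:

        import numpy as np
        from sklearn.linear_model import LassoCV, LinearRegression

        rng = np.random.default_rng(7)
        n, p, n_val = 400, 30, 80
        X = rng.normal(size=(n, p))               # true biomarker levels
        W = X + 0.8 * rng.normal(size=(n, p))     # error-prone multiplex assay
        y = X[:, 0] - X[:, 1] + rng.normal(size=n)

        # Validation subset: gold-standard re-measurement on a random subsample.
        val = rng.choice(n, n_val, replace=False)
        calib = LinearRegression().fit(W[val], X[val])
        X_hat = calib.predict(W)                  # calibrated predictors for everyone

        naive = LassoCV(cv=5).fit(W, y)
        corrected = LassoCV(cv=5).fit(X_hat, y)
        print("naive selection:    ", np.flatnonzero(naive.coef_))
        print("corrected selection:", np.flatnonzero(corrected.coef_))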

  9. AN ANALYTIC OUTLOOK OF THE MADRIGAL MORO LASSO AL MIO DUOLO BY GESUALDO DA VENOSA

    Directory of Open Access Journals (Sweden)

    MURARU AUREL

    2015-09-01

    Full Text Available The analysis of the madrigal Moro lasso al mio duolo reveals its melancholic, thoughtful and grieving atmosphere, generating shady, silent, sometimes dark soundscapes. Gesualdo shapes the polyphony through chromatic licenses in order to create a tense musical discourse, permanently yearning for stability and balance amidst a harmonic construction lacking any attempt at resolution. Thus the strange harmonies of Gesualdo are shaped, giving birth to a unique musical style, full of dissonances and endless musical tension.

  10. Economic sustainability in franchising: a model to predict franchisor success or failure

    OpenAIRE

    Calderón Monge, Esther; Pastor Sanz, Ivan .; Huerta Zavala, Pilar Angélica

    2017-01-01

    As a business model, franchising makes a major contribution to gross domestic product (GDP). A model that predicts franchisor success or failure is therefore necessary to ensure economic sustainability. In this study, such a model was developed by applying Lasso regression to a sample of franchises operating between 2002 and 2013. For franchises with the highest likelihood of survival, the franchise fees and the ratio of company-owned to franchised outlets were suited to the age ...

  11. A Novel SCCA Approach via Truncated ℓ1-norm and Truncated Group Lasso for Brain Imaging Genetics.

    Science.gov (United States)

    Du, Lei; Liu, Kefei; Zhang, Tuo; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Han, Junwei; Guo, Lei; Saykin, Andrew J; Shen, Li

    2017-09-18

    Brain imaging genetics, which studies the linkage between genetic variations and structural or functional measures of the human brain, has become increasingly important in recent years. Discovering the bi-multivariate relationship between genetic markers such as single-nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is one major task in imaging genetics. Sparse Canonical Correlation Analysis (SCCA) has been a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants to induce sparsity. The ℓ0-norm penalty is a perfect sparsity-inducing tool which, however, is an NP-hard problem. In this paper, we propose the truncated ℓ1-norm penalized SCCA to improve the performance and effectiveness of the ℓ1-norm based SCCA methods. In addition, we propose an efficient optimization algorithm to solve this novel SCCA problem. The proposed method is an adaptive shrinkage method via tuning τ. It can avoid time-intensive parameter tuning if given a reasonably small τ. Furthermore, we extend it to the truncated group-lasso (TGL), and propose the TGL-SCCA model to improve the group-lasso-based SCCA methods. The experimental results, compared with four benchmark methods, show that our SCCA methods identify better or similar correlation coefficients, and better canonical loading profiles than the competing methods. This demonstrates the effectiveness and efficiency of our methods in discovering interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/~shenlab/tools/tlpscca/. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  12. Quality optimization of H.264/AVC video transmission over noisy environments using a sparse regression framework

    Science.gov (United States)

    Pandremmenou, K.; Tziortziotis, N.; Paluri, S.; Zhang, W.; Blekas, K.; Kondi, L. P.; Kumar, S.

    2015-03-01

    We propose the use of the Least Absolute Shrinkage and Selection Operator (LASSO) regression method in order to predict the Cumulative Mean Squared Error (CMSE) incurred by the loss of individual slices in video transmission. We extract a number of quality-relevant features from the H.264/AVC video sequences, which are given as input to the LASSO. This method has the benefit of not only keeping a subset of the features that have the strongest effects towards video quality, but also producing accurate CMSE predictions. In particular, we study the LASSO regression through two different architectures: the Global LASSO (G.LASSO) and the Local LASSO (L.LASSO). In G.LASSO, a single regression model is trained for all slice types together, while in L.LASSO, motivated by the fact that the values of some features are closely dependent on the considered slice type, each slice type has its own regression model, in an effort to improve LASSO's prediction capability. Based on the predicted CMSE values, we group the video slices into four priority classes. Additionally, we consider a video transmission scenario over a noisy channel, where Unequal Error Protection (UEP) is applied to all prioritized slices. The provided results demonstrate the efficiency of LASSO in estimating CMSE with high accuracy, using only a few features.
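
    The G.LASSO/L.LASSO distinction is simply one pooled model versus one model per slice type. A minimal sketch, assuming scikit-learn; features, slice labels, and the CMSE response are synthetic stand-ins:

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(8)
        n, n_feat = 900, 12
        X = rng.normal(size=(n, n_feat))          # per-slice features
        slice_type = rng.integers(0, 3, size=n)   # hypothetical I/P/B labels
        # In this toy setup the response depends on features differently per type.
        coefs = rng.normal(size=(3, n_feat)) * (rng.random((3, n_feat)) < 0.3)
        y = np.einsum("ij,ij->i", X, coefs[slice_type]) + 0.1 * rng.normal(size=n)

        g_lasso = LassoCV(cv=5).fit(X, y)         # one global model (G.LASSO)
        l_lasso = {s: LassoCV(cv=5).fit(X[slice_type == s], y[slice_type == s])
                   for s in range(3)}             # one model per type (L.LASSO)
        print("global nonzeros:", np.count_nonzero(g_lasso.coef_))
        for s, m in l_lasso.items():
            print("slice type", s, "nonzeros:", np.count_nonzero(m.coef_))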

  13. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
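
    A permutation test of this kind is available off the shelf: permuting the outcome breaks any real feature-outcome link, and the resulting null distribution calibrates the observed score. A minimal sketch, assuming scikit-learn and synthetic data in place of the xerostomia cohort:

        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import permutation_test_score

        X, y = make_classification(n_samples=120, n_features=25, n_informative=4,
                                   random_state=9)
        lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

        # Cross-validated AUC on the real labels versus 200 permuted replicates.
        auc, null_aucs, p_value = permutation_test_score(
            lasso_logit, X, y, scoring="roc_auc", cv=5, n_permutations=200)
        print("AUC:", round(auc, 2), "permutation p-value:", round(p_value, 3))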

  14. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)]

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  15. Performance Analysis of Hospitals Affiliated to Mashhad University of Medical Sciences Using the Pabon Lasso Model: A Six-Year-Trend Study

    Directory of Open Access Journals (Sweden)

    Kalhor

    2016-08-01

    Full Text Available Background Nowadays, productivity and efficiency are considered a culture and a perspective in both life and work environments. This is the starting point of human development. Objectives The aim of the present study was to investigate the performance of hospitals affiliated to Mashhad University of Medical Sciences using the Pabon Lasso Model. Methods The present study was a descriptive-analytic research, with a cross-sectional design, conducted over six years (2009 - 2014) at selected hospitals. The hospitals studied were 21 public hospitals affiliated to Mashhad University of Medical Sciences. The data were obtained from the Treatment Deputy of Khorasan Razavi province. Results Results from the present study showed that only 19% of the studied hospitals were located in zone 3 of the diagram, indicating a perfect performance. Twenty-eight percent were in zone 1, 19% in zone 2, and 28% in zone 4. Conclusions According to the findings, only a few hospitals are in the desirable zone (zone 3); the rest of the hospitals fell in other zones, which could be a result of poor performance and poor management of hospital resources. Most of the hospitals were in zones 1 and 4, whose characteristics are low bed turnover and longer stay, indicating higher bed supply than demand for healthcare services or longer hospitalization, less outpatient equipment use, and higher costs.

  16. Pierced Lasso Proteins

    Science.gov (United States)

    Jennings, Patricia

    Entanglement and knots are naturally occurring; in the microscopic world, knots in DNA and homopolymers are well characterized. The most complex knots are observed in proteins, which are harder to investigate, as proteins are heteropolymers composed of a combination of 20 different amino acids with different individual biophysical properties. As new knotted topologies and new proteins containing knots continue to be discovered and characterized, the investigation of knots in proteins has gained intense interest. Thus far, the principal focus has been on the evolutionary origin of tying a knot, with questions of how a protein chain 'self-ties' into a knot, what mechanisms contribute to threading, and what the biological relevance and functional implications of a knotted topology in vivo are gaining the most insight. Efforts to study the fully untied and unfolded chain indicate that the knot is highly stable, remaining intact in the unfolded state orders of magnitude longer than first anticipated. The persistence of "stable" knots in the unfolded state, together with the challenge of distinguishing an unfolded and untied chain from an unfolded and knotted chain, complicates the study of fully untied protein in vitro. Our discovery of a new class of knotted proteins, the Pierced Lasso (PL) loop topology, simplifies the knotting approach. While PLs are not easily recognizable by the naked eye, they have now been identified in many proteins in the PDB through the use of computational tools. PL topologies are diverse proteins found in all kingdoms of life, performing a large variety of biological roles such as cell signaling, immune responses, transporters and inhibitors (http://lassoprot.cent.uw.edu.pl/). Many of these PL topologies are secreted proteins and extracellular proteins, as well as redox sensors, enzymes, and metal and co-factor binding proteins, all of which provide a favorable environment for the formation of the disulphide bridge. In the PL

  17. Scoring relevancy of features based on combinatorial analysis of Lasso with application to lymphoma diagnosis

    Directory of Open Access Journals (Sweden)

    Zare Habil

    2013-01-01

    Full Text Available Abstract One challenge in applying bioinformatic tools to clinical or biological data is the high number of features that might be provided to the learning algorithm without any prior knowledge of which ones should be used. In such applications, the number of features can drastically exceed the number of training instances, which is often limited by the number of available samples for the study. The Lasso is one of many regularization methods that have been developed to prevent overfitting and improve prediction performance in high-dimensional settings. In this paper, we propose a novel algorithm for feature selection based on the Lasso, and our hypothesis is that defining a scoring scheme that measures the "quality" of each feature can provide a more robust feature selection method. Our approach is to generate several samples from the training data by bootstrapping, determine the best relevance-ordering of the features for each sample, and finally combine these relevance-orderings to select highly relevant features. In addition to the theoretical analysis of our feature scoring scheme, we provide empirical evaluations on six real datasets from different fields to confirm the superiority of our method in exploratory data analysis and prediction performance. For example, we applied FeaLect, our feature scoring algorithm, to a lymphoma dataset, and according to a human expert, our method led to selecting more meaningful features than those commonly used in the clinics. This case study built a basis for discovering interesting new criteria for lymphoma diagnosis. Furthermore, to facilitate the use of our algorithm in other applications, the source code that implements our algorithm is released as FeaLect, a documented R package in CRAN.

  18. Multi-omics facilitated variable selection in Cox-regression model for cancer prognosis prediction.

    Science.gov (United States)

    Liu, Cong; Wang, Xujun; Genchev, Georgi Z; Lu, Hui

    2017-07-15

    New developments in high-throughput genomic technologies have enabled the measurement of diverse types of omics biomarkers in a cost-efficient and clinically-feasible manner. Developing computational methods and tools for analysis and translation of such genomic data into clinically-relevant information is an ongoing and active area of investigation. For example, several studies have utilized an unsupervised learning framework to cluster patients by integrating omics data. Despite such recent advances, predicting cancer prognosis using integrated omics biomarkers remains a challenge. There is also a shortage of computational tools for predicting cancer prognosis by using supervised learning methods. The current standard approach is to fit a Cox regression model by concatenating the different types of omics data in a linear manner, while penalty could be added for feature selection. A more powerful approach, however, would be to incorporate data by considering relationships among omics datatypes. Here we developed two methods: a SKI-Cox method and a wLASSO-Cox method to incorporate the association among different types of omics data. Both methods fit the Cox proportional hazards model and predict a risk score based on mRNA expression profiles. SKI-Cox borrows the information generated by these additional types of omics data to guide variable selection, while wLASSO-Cox incorporates this information as a penalty factor during model fitting. We show that SKI-Cox and wLASSO-Cox models select more true variables than a LASSO-Cox model in simulation studies. We assess the performance of SKI-Cox and wLASSO-Cox using TCGA glioblastoma multiforme and lung adenocarcinoma data. In each case, mRNA expression, methylation, and copy number variation data are integrated to predict the overall survival time of cancer patients. Our methods achieve better performance in predicting patients' survival in glioblastoma and lung adenocarcinoma. Copyright © 2017. Published by Elsevier

  19. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
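
    The core idea can be sketched compactly: under a permutation of the response no feature is truly linked to it, so the smallest penalty that zeroes out every coefficient on permuted data (alpha_max = max|X'y*|/n for centered data) provides a null reference for choosing the penalty. A rough sketch loosely following that logic (not the authors' exact procedure), assuming scikit-learn:

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import Lasso

        X, y = make_regression(n_samples=150, n_features=60, n_informative=5,
                               noise=5.0, random_state=10)
        n = len(y)
        rng = np.random.default_rng(10)

        # Null distribution of the just-zeroing penalty over permuted responses.
        alpha_null = [np.max(np.abs(X.T @ rng.permutation(y))) / n
                      for _ in range(100)]
        alpha_perm = np.quantile(alpha_null, 0.75)   # illustrative null quantile

        fit = Lasso(alpha=alpha_perm, max_iter=50000).fit(X, y)
        print("selected features:", np.flatnonzero(fit.coef_))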

  20. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from ultra-high dimensional to a model whose size has the same order as that of the true model, and that the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
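
    The two steps translate directly into penalized quantile fits, with the first-step coefficients supplying the adaptive weights. A minimal sketch, assuming scikit-learn's QuantileRegressor (its alpha sets the ℓ1 penalty); penalty levels and data are illustrative:

        import numpy as np
        from sklearn.linear_model import QuantileRegressor

        rng = np.random.default_rng(11)
        n, p = 200, 100                           # dimension comparable to n
        X = rng.normal(size=(n, p))
        y = 2 * X[:, 0] - X[:, 1] + rng.standard_t(df=3, size=n)

        # Step 1: L1-penalized median regression screens down to a small set.
        step1 = QuantileRegressor(quantile=0.5, alpha=0.05, solver="highs").fit(X, y)
        keep = np.flatnonzero(step1.coef_)

        # Step 2: adaptive-LASSO refit on the reduced model; weights from step 1.
        w = 1.0 / (np.abs(step1.coef_[keep]) + 1e-6)
        step2 = QuantileRegressor(quantile=0.5, alpha=0.05,
                                  solver="highs").fit(X[:, keep] / w, y)
        print("step 1 kept:", keep)
        print("step 2 nonzero:", keep[np.flatnonzero(step2.coef_ / w)])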

  1. Integrative Sparse K-Means With Overlapping Group Lasso in Genomic Applications for Disease Subtype Discovery.

    Science.gov (United States)

    Huo, Zhiguang; Tseng, George

    2017-06-01

    Cancer subtype discovery is the first step to deliver personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration with incorporation of rich existing biological knowledge is essential for deciphering the biological mechanisms behind complex diseases. In this manuscript, we propose an integrative sparse K-means (is-K means) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using the alternating direction method of multipliers (ADMM) is applied for fast optimization. Simulation and three real applications in breast cancer and leukemia are used to compare is-K means with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features, and computing efficiency.

  2. Prediction models for solitary pulmonary nodules based on curvelet textural features and clinical parameters.

    Science.gov (United States)

    Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua

    2013-01-01

    Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs) which are hard to diagnose using the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistical regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, it was found that using curvelet-based textural features after dimensionality reduction and using clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.

  3. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.

  4. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast

  5. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.

  6. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage

  7. Operator spin foam models

    International Nuclear Information System (INIS)

    Bahr, Benjamin; Hellmann, Frank; Kaminski, Wojciech; Kisielowski, Marcin; Lewandowski, Jerzy

    2011-01-01

    The goal of this paper is to introduce a systematic approach to spin foams. We define operator spin foams, that is foams labelled by group representations and operators, as our main tool. A set of moves we define in the set of the operator spin foams (among other operations) allows us to split the faces and the edges of the foams. We assign to each operator spin foam a contracted operator, by using the contractions at the vertices and suitably adjusted face amplitudes. The emergence of the face amplitudes is the consequence of assuming the invariance of the contracted operator with respect to the moves. Next, we define spin foam models and consider the class of models assumed to be symmetric with respect to the moves we have introduced, and assuming their partition functions (state sums) are defined by the contracted operators. Briefly speaking, those operator spin foam models are invariant with respect to the cellular decomposition, and are sensitive only to the topology and colouring of the foam. Imposing an extra symmetry leads to a family we call natural operator spin foam models. This symmetry, combined with assumed invariance with respect to the edge splitting move, determines a complete characterization of a general natural model. It can be obtained by applying arbitrary (quantum) constraints on an arbitrary BF spin foam model. In particular, imposing suitable constraints on a spin(4) BF spin foam model is exactly the way we tend to view 4D quantum gravity, starting with the BC model and continuing with the Engle-Pereira-Rovelli-Livine (EPRL) or Freidel-Krasnov (FK) models. That makes our framework directly applicable to those models. Specifically, our operator spin foam framework can be translated into the language of spin foams and partition functions. Among our natural spin foam models there are the BF spin foam model, the BC model, and a model corresponding to the EPRL intertwiners. Our operator spin foam framework can also be used for more general spin

  8. Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study

    Science.gov (United States)

    Lamb, David W.; Mengersen, Kerrie

    2016-01-01

    Modern soil mapping is characterised by the need to interpolate point referenced (geostatistical) observations and the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized Multiple Linear Regression models, and demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment. PMID:27603135
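
    The efficiency claim comes from the fact that a single LAR pass traces the entire lasso path. A minimal sketch, assuming scikit-learn, with a synthetic matrix of the dimensions quoted above (60 observations, 800 candidate covariates):

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import lars_path

        X, y = make_regression(n_samples=60, n_features=800, n_informative=6,
                               noise=1.0, random_state=12)

        # One LAR pass computes the whole LASSO regularization path -- far cheaper
        # than exhaustively searching subsets of 800 covariate terms.
        alphas, active, coefs = lars_path(X, y, method="lasso")
        print("first covariates to enter the path:", active[:10])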

  9. Comparison between linear and non-parametric regression models for genome-enabled prediction in wheat.

    Science.gov (United States)

    Pérez-Rodríguez, Paulino; Gianola, Daniel; González-Camacho, Juan Manuel; Crossa, José; Manès, Yann; Dreisigacker, Susanne

    2012-12-01

    In genome-enabled prediction, parametric, semi-parametric, and non-parametric regression models have been used. This study assessed the predictive ability of linear and non-linear models using dense molecular markers. The linear models were linear on marker effects and included the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B. The non-linear models (this refers to non-linearity on markers) were reproducing kernel Hilbert space (RKHS) regression, Bayesian regularized neural networks (BRNN), and radial basis function neural networks (RBFNN). These statistical models were compared using 306 elite wheat lines from CIMMYT genotyped with 1717 diversity array technology (DArT) markers and two traits, days to heading (DTH) and grain yield (GY), measured in each of 12 environments. It was found that the three non-linear models had better overall prediction accuracy than the linear regression specification. Results showed a consistent superiority of RKHS and RBFNN over the Bayesian LASSO, Bayesian ridge regression, Bayes A, and Bayes B models.

  10. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of variables which contribute to group classification is a crucial step, especially in metabolomics studies which are focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (RH and PH studies). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA), without and with multiple testing correction, as well as the least absolute shrinkage and selection operator (LASSO), were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant in terms of Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria in terms of Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables with reproducibility between 99.3% and 100%. However, apart from the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings

  11. Improved Sparse Channel Estimation for Cooperative Communication Systems

    Directory of Open Access Journals (Sweden)

    Guan Gui

    2012-01-01

    Full Text Available Accurate channel state information (CSI) is necessary at the receiver for coherent detection in amplify-and-forward (AF) cooperative communication systems. To estimate the channel, traditional methods, i.e., least squares (LS) and the least absolute shrinkage and selection operator (LASSO), are based on assumptions of either a dense channel or a globally sparse channel. However, the LS-based linear method neglects the inherent sparse structure information, while the LASSO-based sparse channel method cannot take full advantage of the prior information. Based on the partial sparse assumption of the cooperative channel model, we propose an improved channel estimation method with a partial sparse constraint. First, by using sparse decomposition theory, channel estimation is formulated as a compressive sensing problem. Second, the cooperative channel is reconstructed by LASSO with a partial sparse constraint. Finally, numerical simulations are carried out to confirm the superiority of the proposed method over global sparse channel estimation methods.
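
    A minimal sketch of the compressive-sensing formulation described above, assuming simulated pilot measurements and an ordinary (globally sparse) LASSO rather than the paper's partial-sparse variant:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n_taps, n_pilots = 128, 48  # sparse channel, under-sampled pilot observations
    h = np.zeros(n_taps)
    h[rng.choice(n_taps, 5, replace=False)] = rng.normal(0, 1, 5)  # 5 active taps

    A = rng.normal(size=(n_pilots, n_taps)) / np.sqrt(n_pilots)  # sensing matrix
    y = A @ h + 0.01 * rng.normal(size=n_pilots)  # noisy pilot measurements

    h_hat = Lasso(alpha=0.01, max_iter=10000).fit(A, y).coef_
    print("support recovered:", sorted(np.flatnonzero(np.abs(h_hat) > 0.05)))
    print("true support:     ", sorted(np.flatnonzero(h)))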

  12. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  13. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    Science.gov (United States)

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
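
    As a hedged illustration of one competitor method named above, the regularized Gaussian graphical model on log-transformed counts can be sketched with scikit-learn's graphical lasso; the hierarchical Poisson log-normal model itself is not reproduced here, and the simulated counts are placeholders:

    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.default_rng(3)
    n_samples, n_genes = 60, 20
    cov = 0.5 * np.eye(n_genes) + 0.5  # constant-correlation latent structure
    latent = rng.multivariate_normal(np.zeros(n_genes), cov, size=n_samples)
    counts = rng.poisson(np.exp(latent))  # overdispersed RNA-seq-like counts

    X = np.log1p(counts)  # log-transform before fitting the Gaussian graphical model
    model = GraphicalLassoCV().fit(X)
    edges = np.abs(model.precision_) > 1e-4
    np.fill_diagonal(edges, False)
    print("inferred edges:", int(edges.sum() / 2))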

  14. Evaluation of the Achieve Mapping Catheter in cryoablation for atrial fibrillation: a prospective randomized trial.

    Science.gov (United States)

    Gang, Yi; Gonna, Hanney; Domenichini, Giulia; Sampson, Michael; Aryan, Niloufar; Norman, Mark; Behr, Elijah R; Zuberi, Zia; Dhillon, Paramdeep; Gallagher, Mark M

    2016-03-01

    The purpose of this study was to establish the role of the Achieve Mapping Catheter in cryoablation for paroxysmal atrial fibrillation (PAF) in a randomized trial. A total of 102 patients undergoing their first ablation for PAF were randomized 2:1 to an Achieve- or Lasso-guided procedure. Study patients were systematically followed up for 12 months with Holter monitoring. The primary study endpoint was acute procedural success; the secondary endpoint was clinical outcome, assessed by freedom from AF at 6 and 12 months after the procedure. Acute procedural success was achieved in 99% of the 102 participants. In a subgroup analysis of procedures performed by experienced operators, the procedure duration was significantly shorter in the Achieve-guided group than in the Lasso-guided group (118 ± 18 vs. 129 ± 21 min, p < 0.05), as was the fluoroscopy duration (17 ± 5 vs. 20 ± 7 min, p < 0.05). Across the whole study population, procedure and fluoroscopic durations were similar in the Achieve- (n = 68) and Lasso-guided (n = 34) groups. Transient phrenic nerve weakening was equally prevalent with the Achieve and Lasso. No association was found between clinical outcomes and the mapping catheter used. The use of the second-generation cryoballoon (n = 68) reduced procedure time significantly compared to the first-generation balloon (n = 34); more patients were free of AF in the former than in the latter group during follow-up. The use of the Achieve Mapping Catheter can reduce procedure and fluoroscopic durations compared with Lasso catheters in cryoablation for PAF once operators have gained sufficient experience. The type of mapping catheter used does not affect procedure efficiency or safety for either cryoballoon model.

  15. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

    Science.gov (United States)

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2016-07-23

    This study focuses on multiple linear regression models relating six climate indices (temperature-humidity index THI, environmental stress index ESI, equivalent temperature index ETI, heat load index HLI, modified HLI (HLInew), and respiratory rate predictor RRP) with three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC) techniques are applied to select the best model for the milk predictands with the smallest number of climate predictors. Uncertainty estimation is performed by bootstrapping through resampling, and cross-validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010, are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² of 0.50 and 0.49, respectively. In summer, milk yield with the independent variables THI, ETI, and ESI shows the strongest relationship (p-value < 0.001), with R² of 0.69. For fat and protein the results are only marginal. This method is suggested for impact studies of climate variability/change on agriculture and food science when short time series or data with large uncertainty are available.
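
    A small sketch of pairing the LASSO with AIC-based selection, in the spirit of the study above; scikit-learn's LassoLarsIC chooses the penalty minimizing AIC along the LARS path, and the climate-index predictors here are random stand-ins:

    import numpy as np
    from sklearn.linear_model import LassoLarsIC
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    names = ["THI", "ESI", "ETI", "HLI", "HLI_new", "RRP"]  # the six climate indices
    X = StandardScaler().fit_transform(rng.normal(size=(120, len(names))))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 1, 120)  # synthetic milk yield

    model = LassoLarsIC(criterion="aic").fit(X, y)  # AIC picks the penalty strength
    kept = [n for n, c in zip(names, model.coef_) if c != 0]
    print("AIC-selected predictors:", kept)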

  16. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    International Nuclear Information System (INIS)

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence, with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts, Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features alone. For all models, the performance was found to be significantly higher when image attributes were based solely on the solid components rather than on the entire tumor (p < 3.08 × 10⁻⁵). Conclusions: This study
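
    As a hedged sketch of the LASSO branch of the comparison above, an L1-penalized logistic regression over concatenated intensity and texture attributes can be scored by cross-validated AUC; the features below are simulated, and the Riesz-wavelet pipeline is not reproduced:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    n_patients = 101
    intensity = rng.normal(size=(n_patients, 10))  # CT intensity attributes
    texture = rng.normal(size=(n_patients, 40))    # steered texture responses
    X = np.hstack([intensity, texture])
    risk = X[:, 0] + X[:, 10] + rng.normal(0, 1, n_patients)
    y = (risk > np.quantile(risk, 0.83)).astype(int)  # ~17/101 recurrences

    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC (simulated data): {auc:.2f}")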

  17. Modelling arithmetic operations

    Energy Technology Data Exchange (ETDEWEB)

    Shabanov-Kushnarenko, Yu P

    1981-01-01

    The possibility of modelling finite alphabetic operators using formal intelligence theory is explored, with the setting up of models of a 3-digit adder and a multi-digit subtractor as examples. 2 references.

  18. Mental models of the operator

    International Nuclear Information System (INIS)

    Stary, I.

    2004-01-01

    A brief explanation is presented of the mental model concept, properties of mental models and fundamentals of mental models theory. Possible applications of such models in nuclear power plants are described in more detail. They include training of power plant operators, research into their behaviour and design of the operator-control process interface. The design of a mental model of an operator working in abnormal conditions due to power plant malfunction is outlined as an example taken from the literature. The model has been created based on analysis of experiments performed on a nuclear power plant simulator, run by a training center. (author)

  19. Academic Education Chain Operation Model

    OpenAIRE

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling the educational processes as a value added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes towards reference models and suggest an Academic Education Chain Operation Model. The model can be used to develop an Academic Chain Operation Reference Model.

  20. Integrative Modeling and Inference in High Dimensional Genomic and Metabolic Data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper

    in Manuscript I preserves the attributes of the compounds found in LC–MS samples while identifying genes highly associated with these. The main obstacles that must be overcome with this approach are dimension reduction and variable selection, here done with PARAFAC and the LASSO, respectively. One important drawback...... of the LASSO has been the lack of inference: the variables selected could potentially just be the most important from a set of non-important variables. Manuscript II addresses this problem with a permutation-based significance test for the variables chosen by the LASSO. Once a set of relevant variables has......, particularly it scales to many lists and it provides an intuitive interpretation of the measure....
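
    A minimal sketch of such a permutation-based significance test for LASSO-selected variables, under the assumption that refitting on permuted responses yields the null distribution of each coefficient's magnitude (all data and the penalty are invented):

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(11)
    n, p = 60, 100
    X = rng.normal(size=(n, p))
    y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)

    alpha = 0.1
    obs = np.abs(Lasso(alpha=alpha).fit(X, y).coef_)  # observed coefficient magnitudes

    n_perm = 200
    null = np.zeros((n_perm, p))
    for b in range(n_perm):
        # Refit on a permuted response to break any X-y association.
        null[b] = np.abs(Lasso(alpha=alpha).fit(X, rng.permutation(y)).coef_)

    pvals = (null >= obs).mean(axis=0)  # permutation p-value per coefficient
    print("selected with p < 0.05:", np.flatnonzero((obs > 0) & (pvals < 0.05)))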

  1. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  2. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    International Nuclear Information System (INIS)

    Krafft, S; Briere, T; Court, L; Martel, M

    2015-01-01

    Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade ≥ 3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing the area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC ≈ 0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC = 0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC = 0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP

  3. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL.

    Science.gov (United States)

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-06-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can also be proved by our method. However, the presented oracle inequalities are sharper, since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities.
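
    For reference, the estimator studied above can be written in a standard form (the notation is assumed, not quoted from the paper): with event indicators \delta_i, risk sets R(t_i), and possibly time-dependent covariates x_i(t),

    \hat{\beta}(\lambda) = \arg\min_{\beta \in \mathbb{R}^p}
      \Big\{ -\frac{1}{n} \sum_{i:\, \delta_i = 1} \Big[ x_i(t_i)^\top \beta
        - \log \sum_{j \in R(t_i)} \exp\big( x_j(t_i)^\top \beta \big) \Big]
        + \lambda \, \| \beta \|_1 \Big\}

    i.e., the L1-penalized negative log partial likelihood, whose minimizer the oracle inequalities control.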

  4. Robust estimation of the expected survival probabilities from high-dimensional Cox models with biomarker-by-treatment interactions in randomized clinical trials

    Directory of Open Access Journals (Sweden)

    Nils Ternès

    2017-05-01

    Full Text Available Abstract Background Thanks to the advances in genomics and targeted treatments, more and more prediction models based on biomarkers are being developed to predict potential benefit from treatments in a randomized clinical trial. Although the methodological framework for the development and validation of prediction models in a high-dimensional setting is becoming more and more established, no clear guidance exists yet on how to estimate expected survival probabilities in a penalized model with biomarker-by-treatment interactions. Methods Based on a parsimonious biomarker selection in a penalized high-dimensional Cox model (lasso or adaptive lasso), we propose a unified framework to: estimate internally the predictive accuracy metrics of the developed model (using double cross-validation); estimate the individual survival probabilities at a given timepoint; construct confidence intervals thereof (analytical or bootstrap); and visualize them graphically (pointwise or smoothed with splines). We compared these strategies through a simulation study covering scenarios with or without biomarker effects. We applied the strategies to a large randomized phase III clinical trial that evaluated the effect of adding trastuzumab to chemotherapy in 1574 early breast cancer patients, for which the expression of 462 genes was measured. Results In our simulations, penalized regression models using the adaptive lasso estimated the survival probability of new patients with low bias and standard error; bootstrapped confidence intervals had empirical coverage probability close to the nominal level across very different scenarios. The double cross-validation performed on the training data set closely mimicked the predictive accuracy of the selected models in external validation data. We also propose a useful visual representation of the expected survival probabilities using splines. In the breast cancer trial, the adaptive lasso penalty selected a prediction model with 4

  5. Relationships Between the External and Internal Training Load in Professional Soccer: What Can We Learn From Machine Learning?

    Science.gov (United States)

    Jaspers, Arne; Beéck, Tim Op De; Brink, Michel S; Frencken, Wouter G P; Staes, Filip; Davis, Jesse J; Helsen, Werner F

    2017-12-28

    Machine learning may contribute to understanding the relationship between the external load and internal load in professional soccer. Therefore, the relationship between external load indicators and the rating of perceived exertion (RPE) was examined using machine learning techniques on a group and individual level. Training data were collected from 38 professional soccer players over two seasons. The external load was measured using global positioning system technology and accelerometry. The internal load was obtained using the RPE. Predictive models were constructed using two machine learning techniques, artificial neural networks (ANNs) and least absolute shrinkage and selection operator (LASSO), and one naive baseline method. The predictions were based on a large set of external load indicators. Using each technique, one group model involving all players and one individual model for each player was constructed. These models' performance on predicting the reported RPE values for future training sessions was compared to the naive baseline's performance. Both the ANN and LASSO models outperformed the baseline. Additionally, the LASSO model made more accurate predictions for the RPE than the ANN model. Furthermore, decelerations were identified as important external load indicators. Regardless of the applied machine learning technique, the group models resulted in equivalent or better predictions for the reported RPE values than the individual models. Machine learning techniques may have added value in predicting the RPE for future sessions to optimize training design and evaluation. Additionally, these techniques may be used in conjunction with expert knowledge to select key external load indicators for load monitoring.
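
    A hedged sketch of the group-level comparison described above: a LASSO regression of session RPE on external-load indicators versus a naive baseline that always predicts the training-set mean; the load features are simulated placeholders:

    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.dummy import DummyRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(6)
    n_sessions, n_indicators = 500, 25  # GPS/accelerometry-derived indicators
    X = rng.normal(size=(n_sessions, n_indicators))
    rpe = 5 + 1.2 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(0, 1, n_sessions)

    X_tr, X_te, y_tr, y_te = train_test_split(X, rpe, random_state=0)
    for name, model in [("naive mean baseline", DummyRegressor()), ("LASSO", LassoCV(cv=5))]:
        model.fit(X_tr, y_tr)
        mae = mean_absolute_error(y_te, model.predict(X_te))
        print(f"{name}: test MAE = {mae:.2f}")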

  6. Computer-aided operations engineering with integrated models of systems and operations

    Science.gov (United States)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  7. Modeling Operating Modes during Plant Life Cycle

    DEFF Research Database (Denmark)

    Jørgensen, Sten Bay; Lind, Morten

    2012-01-01

    Modelling process plants during normal operation requires a set of basic assumptions to define the desired functionalities which lead to fulfillment of the operational goal(s) for the plant. However, during start-up and shut-down, as well as during batch operation, an ensemble of interrelated modes is required to cover the whole operational window of a process plant, including intermediary operating modes. Development of such a model ensemble for a plant would constitute a systematic way of defining the possible plant operating modes and thus provide a platform for also defining a set of candidate control structures. The present contribution focuses on the development of a model ensemble for a plant, with an illustrative example for a bioreactor. Starting from a functional model, a process plant may be conceptually designed and qualitative operating models may be developed to cover the different...

  8. Operating cost model for local service airlines

    Science.gov (United States)

    Anderson, J. L.; Andrastek, D. A.

    1976-01-01

    Several mathematical models now exist which determine the operating economics for a United States trunk airline. These models are valuable in assessing the impact of new aircraft into an airline's fleet. The use of a trunk airline cost model for the local service airline does not result in representative operating costs. A new model is presented which is representative of the operating conditions and resultant costs for the local service airline. The calculated annual direct and indirect operating costs for two multiequipment airlines are compared with their actual operating experience.

  9. T2L2 on JASON-2: First Evaluation of the Flying Model

    Science.gov (United States)

    2007-01-01

    Para, J.-M. Torre (R&D Metrology, CNRS/GEMINI, Observatoire de la Côte d'Azur, Caussol, France; contact e-mail: philippe.guillemot@cnes.fr). Abstract: The "Time Transfer by Laser Link" experiment T2L2 [1], under development at OCA (Observatoire de la Côte d'Azur) and CNES (Centre National d'Etudes Spatiales), France, will be... References: [1] Experimental Astronomy, 7, 191-207. [2] P. Fridelance and C. Veillet, 1995, "Operation and data analysis in the LASSO experiment," Metrologia.

  10. A proposal for operator team behavior model and operator's thinking mechanism

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Takano, Kenichi; Sasou, Kunihide

    1995-01-01

    The operating environment of huge systems like nuclear power plants or airplanes is changing rapidly with the advance of computer technology. It is necessary to elucidate the thinking process of operators and the decision-making process of an operator team in abnormal situations, in order to prevent human errors in such an environment. The Central Research Institute of Electric Power Industry is promoting a research project to establish human error prevention countermeasures by modeling and simulating the thinking process of operators and the decision-making process of an operator team. In a previous paper, the application of multilevel flow modeling to a mental model that performs future prediction and cause identification was proposed, and its characteristics were verified by experienced plant operators. In this paper, an operator team behavior model and a fundamental operator thinking mechanism, especially 'situation understanding', are proposed, and the proposals are evaluated by experiments using a full-scale simulator. The results reveal that some assumptions, such as 'communication is done between a leader and a follower', are almost appropriate, and that situation understanding can be represented by 'probable candidates for cause, determination of a parameter which changes when an event occurs, determination of parameters which are influenced by the change of the previous parameter, determination of a principal parameter, and future prediction of the principal parameter'. (author)

  11. Visualization study of operators' plant knowledge model

    International Nuclear Information System (INIS)

    Kanno, Tarou; Furuta, Kazuo; Yoshikawa, Shinji

    1999-03-01

    Nuclear plants are typically very complicated systems, and extremely high levels of safety are required in their operation. Since it is never possible to include all possible anomaly scenarios in an education/training curriculum, operators are expected to form plant knowledge that enables them to act against unexpected anomalies through knowledge-based decision making. The authors have been conducting a study on operators' plant knowledge models for the purpose of supporting operators' efforts in forming this kind of plant knowledge. In this report, an integrated plant knowledge model consisting of a configuration space, a causality space, a goal space and a status space is proposed. The authors examined the appropriateness of this model and developed a prototype system to support knowledge formation by visualizing the operators' knowledge model and the decision-making process of knowledge-based actions with this model in a software system. Finally, the feasibility of this prototype as a supportive method in operator education/training, enhancing operators' ability in knowledge-based performance, was evaluated. (author)

  12. Glass operational file. Operational models and integration calculations

    International Nuclear Information System (INIS)

    Ribet, I.

    2004-01-01

    This document presents the operational choices of dominating phenomena, hypotheses, equations and numerical data of the parameters used in the two operational models elaborated for the calculation of the glass source terms with respect to the waste packages considered: existing packages (R7T7, AVM and CEA glasses) and future ones (UOX2, UOX3, UMo, others). The overall operational choices are justified and demonstrated, and a critical analysis of the approach is systematically proposed. The use of the operational model (OPM) V0 → Vr, realistic, conservative and robust, is recommended for glasses with a high thermal and radioactive load, which represent the main part of the vitrified wastes. The OPM V0S, much more overestimating but faster to parameterize, can be used for long-term behaviour forecasting of glasses with a low thermal and radioactive load, considering today's lack of knowledge for the parameterization of a V0 → Vr type OPM. Efficiency estimations have been made for R7T7 glasses (OPM V0 → Vr) and AVM glasses (OPM V0S), which correspond to more than 99.9% of the vitrified waste packages' activity. The very contrasted results obtained illustrate the importance of the choice of operational models: in conditions representative of a geologic disposal, the estimated lifetime of R7T7-type packages exceeds several hundred thousand years. Even if the estimated lifetime of AVM packages is much shorter (because of the overestimating character of the OPM V0S), the potential released radiotoxicity is of the same order as that of R7T7 packages. (J.S.)

  13. Design of a job description, evaluation, classification and remuneration model for the company Novacero S.A., Lasso plant

    OpenAIRE

    Cajas Garzón, Alexandra Maribel

    2012-01-01

    208 leaves : illustrations, 29 x 21 cm. The objective of this degree project is to design a Job Description, Evaluation, Classification and Remuneration Model, applying the HAY methodology of job evaluation by profiles and scales, for the company NOVACERO S.A., Lasso plant. A process map was defined, considering the processes oriented toward satisfying the needs of internal and external customers, which is a basic input for proceeding with the identification ...

  14. Oracle Inequalities for High Dimensional Vector Autoregressions

    DEFF Research Database (Denmark)

    Callot, Laurent; Kock, Anders Bredahl

    This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order...

  15. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  16. The Launch Systems Operations Cost Model

    Science.gov (United States)

    Prince, Frank A.; Hamaker, Joseph W. (Technical Monitor)

    2001-01-01

    One of NASA's primary missions is to reduce the cost of access to space while simultaneously increasing safety. A key component, and one of the least understood, is the recurring operations and support cost for reusable launch systems. In order to predict these costs, NASA, under the leadership of the Independent Program Assessment Office (IPAO), has commissioned the development of a Launch Systems Operations Cost Model (LSOCM). LSOCM is a tool to predict the operations & support (O&S) cost of new and modified reusable (and partially reusable) launch systems. The requirements are to predict the non-recurring cost for the ground infrastructure and the recurring cost of maintaining that infrastructure, performing vehicle logistics, and performing the O&S actions to return the vehicle to flight. In addition, the model must estimate the time required to cycle the vehicle through all of the ground processing activities. The current version of LSOCM is an amalgamation of existing tools, leveraging our understanding of shuttle operations cost with a means of predicting how the maintenance burden will change as the vehicle becomes more aircraft like. The use of the Conceptual Operations Manpower Estimating Tool/Operations Cost Model (COMET/OCM) provides a solid point of departure based on shuttle and expendable launch vehicle (ELV) experience. The incorporation of the Reliability and Maintainability Analysis Tool (RMAT) as expressed by a set of response surface model equations gives a method for estimating how changing launch system characteristics affects cost and cycle time as compared to today's shuttle system. Plans are being made to improve the model. The development team will be spending the next few months devising a structured methodology that will enable verified and validated algorithms to give accurate cost estimates. To assist in this endeavor the LSOCM team is part of an Agency wide effort to combine resources with other cost and operations professionals to

  17. Risk Prediction Using Genome-Wide Association Studies on Type 2 Diabetes

    Directory of Open Access Journals (Sweden)

    Sungkyoung Choi

    2016-12-01

    Full Text Available The success of genome-wide association studies (GWASs has enabled us to improve risk assessment and provide novel genetic variants for diagnosis, prevention, and treatment. However, most variants discovered by GWASs have been reported to have very small effect sizes on complex human diseases, which has been a big hurdle in building risk prediction models. Recently, many statistical approaches based on penalized regression have been developed to solve the “large p and small n” problem. In this report, we evaluated the performance of several statistical methods for predicting a binary trait: stepwise logistic regression (SLR, least absolute shrinkage and selection operator (LASSO, and Elastic-Net (EN. We first built a prediction model by combining variable selection and prediction methods for type 2 diabetes using Affymetrix Genome-Wide Human SNP Array 5.0 from the Korean Association Resource project. We assessed the risk prediction performance using area under the receiver operating characteristic curve (AUC for the internal and external validation datasets. In the internal validation, SLR-LASSO and SLR-EN tended to yield more accurate predictions than other combinations. During the external validation, the SLR-SLR and SLR-EN combinations achieved the highest AUC of 0.726. We propose these combinations as a potentially powerful risk prediction model for type 2 diabetes.
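
    Two of the penalized classifiers compared above can be sketched with scikit-learn as follows; stepwise logistic regression has no direct equivalent there and is omitted, and the simulated SNP dosages, penalties, and sample sizes are all assumptions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n, p = 400, 1000  # subjects x SNPs (far smaller than a real GWAS)
    X = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # additive 0/1/2 genotype coding
    logit = 0.4 * X[:, 0] - 0.4 * X[:, 1] + 0.3 * X[:, 2] - 1.0
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    models = {
        "LASSO": LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000),
        "Elastic-Net": LogisticRegression(penalty="elasticnet", solver="saga",
                                          l1_ratio=0.5, C=0.1, max_iter=5000),
    }
    for name, m in models.items():
        auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: CV AUC = {auc:.3f}")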

  18. Academic Education Chain Operation Model

    NARCIS (Netherlands)

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling the educational processes as a value added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes towards reference models and suggest an Academic Education Chain Operation Model. The model

  19. Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.

    Science.gov (United States)

    Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan

    2011-11-01

    When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
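
    The instability highlighted above is easy to reproduce in a sketch: repeat cross-validated LASSO fits under different 10-fold splits and count the surviving variables (SCAD and the Adaptive Lasso are not in scikit-learn, so the plain LASSO stands in):

    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(8)
    n, p = 100, 300  # sparse truth with deliberately weak signals
    X = rng.normal(size=(n, p))
    y = 0.4 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n)

    sizes = []
    for seed in range(20):  # 20 independent runs of 10-fold cross-validation
        cv = KFold(n_splits=10, shuffle=True, random_state=seed)
        fit = LassoCV(cv=cv).fit(X, y)
        sizes.append(int((fit.coef_ != 0).sum()))
    print("selected-model sizes across CV runs:", sorted(sizes))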

  20. Implementation of an operator model with error mechanisms for nuclear power plant control room operation

    International Nuclear Information System (INIS)

    Suh, Sang Moon; Cheon, Se Woo; Lee, Yong Hee; Lee, Jung Woon; Park, Young Taek

    1996-01-01

    SACOM (Simulation Analyser with Cognitive Operator Model) is being developed at the Korea Atomic Energy Research Institute to simulate human operators' cognitive characteristics during emergency situations in nuclear power plants. An operator model with error mechanisms has been developed and incorporated into SACOM to simulate human operators' cognitive information processing, based on Rasmussen's decision ladder model. The operational logic for five different cognitive activities (Agents), the operator's attentional control (Controller), short-term memory (Blackboard), and long-term memory (Knowledge Base) has been developed and implemented on a blackboard architecture. A trial simulation with an emergency operation scenario has been performed to verify the operational logic. It was found that the operator model with error mechanisms is suitable for the simulation of operators' cognitive behavior in emergency situations.

  1. Genome-Wide Association Studies and Comparison of Models and Cross-Validation Strategies for Genomic Prediction of Quality Traits in Advanced Winter Wheat Breeding Lines

    Directory of Open Access Journals (Sweden)

    Peter S. Kristensen

    2018-02-01

    Full Text Available The aim of this study was to identify SNP markers associated with five important wheat quality traits (grain protein content, Zeleny sedimentation, test weight, thousand-kernel weight, and falling number), and to investigate the predictive abilities of GBLUP and Bayesian Power Lasso models for genomic prediction of these traits. In total, 635 winter wheat lines from two breeding cycles in the Danish plant breeding company Nordic Seed A/S were phenotyped for the quality traits and genotyped for 10,802 SNPs. GWAS were performed using single-marker regression and Bayesian Power Lasso models. SNPs with large effects on Zeleny sedimentation were found on chromosomes 1B, 1D, and 5D. However, GWAS failed to identify single SNPs with significant effects on the other traits, indicating that these traits are controlled by many QTL with small effects. The predictive abilities of the models for genomic prediction were studied using different cross-validation strategies. Leave-One-Out cross-validation resulted in correlations between observed phenotypes corrected for fixed effects and genomic estimated breeding values of 0.50 for grain protein content, 0.66 for thousand-kernel weight, 0.70 for falling number, 0.71 for test weight, and 0.79 for Zeleny sedimentation. Alternative cross-validations showed that the genetic relationship between lines in the training and validation sets had a bigger impact on predictive ability than the number of lines included in the training set. Using Bayesian Power Lasso instead of GBLUP models gave similar or slightly higher predictive abilities. Genomic prediction based on all SNPs was more effective than prediction based on a few associated SNPs.
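
    An illustrative Leave-One-Out evaluation in the spirit of the study, using ridge regression on markers (RR-BLUP, which is equivalent to GBLUP under standard assumptions); the Bayesian Power Lasso is not reproduced, and the simulated data are far smaller than the real 635 × 10,802 set:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(9)
    n_lines, n_snps = 200, 1000
    X = rng.binomial(2, 0.4, size=(n_lines, n_snps)).astype(float)  # 0/1/2 genotypes
    beta = rng.normal(0, 0.05, n_snps)  # many QTL with small effects
    y = X @ beta + rng.normal(0, 1, n_lines)  # e.g. a Zeleny-like trait

    gebv = cross_val_predict(Ridge(alpha=100.0), X, y, cv=LeaveOneOut())
    print(f"LOO predictive ability (correlation): {np.corrcoef(y, gebv)[0, 1]:.2f}")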

  2. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF-weights). This procedure directly shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing the influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF-weights) and call the combination of weights the doubly weighted SVR. We illustrate the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...

  3. Risk management model of winter navigation operations

    International Nuclear Information System (INIS)

    Valdez Banda, Osiris A.; Goerlandt, Floris; Kuzmin, Vladimir; Kujala, Pentti; Montewka, Jakub

    2016-01-01

    The wintertime maritime traffic operations in the Gulf of Finland are managed through the Finnish–Swedish Winter Navigation System. This establishes the requirements and limitations for vessels navigating when ice covers this area. During winter navigation in the Gulf of Finland, the largest risk stems from accidental ship collisions, which may also trigger oil spills. In this article, a model for managing the risk of winter navigation operations is presented. The model analyses the probability of oil spills derived from collisions involving oil tanker vessels and other vessel types. The model structure is based on the steps provided in the Formal Safety Assessment (FSA) by the International Maritime Organization (IMO) and adapted into a Bayesian Network model. The results indicate that independent ship navigation and convoys are the operations with the highest probability of oil spills. Minor spills are most probable, while major oil spills are found to be very unlikely but possible. - Highlights: •A model to assess and manage the risk of winter navigation operations is proposed. •The risks of oil spills in winter navigation in the Gulf of Finland are analysed. •The model assesses and prioritizes actions to control the risk of the operations. •The model suggests navigational training as the most efficient risk control option.
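
    The chain at the heart of such a Bayesian Network risk model can be sketched numerically; every probability below is an invented placeholder, not a value from the study:

    # Factorized chain: P(spill of given severity) = P(collision) * P(spill | collision).
    p_collision = {"convoy": 0.010, "independent navigation": 0.030}
    p_spill_given_collision = {"minor": 0.150, "major": 0.005}  # severity split

    for op, pc in p_collision.items():
        for severity, ps in p_spill_given_collision.items():
            print(f"{op:22s} {severity} spill: P = {pc * ps:.5f}")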

  4. Predictive Modeling in Race Walking

    Directory of Open Access Journals (Sweden)

    Krzysztof Wiktorowicz

    2015-01-01

    Full Text Available This paper presents the use of linear and nonlinear multivariable models as tools to support the training process of race walkers. These models are calculated using data collected from race walkers' training events, and they are used to predict the result over a 3 km race based on training loads. The material consists of 122 training plans for 21 athletes. In order to choose the best model, the leave-one-out cross-validation method is used. The main contribution of the paper is to propose nonlinear modifications of linear models in order to achieve a smaller prediction error. It is shown that the best model is a modified LASSO regression with quadratic terms in the nonlinear part. This model has the smallest prediction error and a simplified structure obtained by eliminating some of the predictors.
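
    A hedged sketch of the winning model type reported above: a LASSO regression augmented with quadratic terms and tuned by leave-one-out cross-validation; the training-load variables and 3 km results are simulated placeholders:

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(10)
    n_plans, n_loads = 122, 6  # 122 training plans, a handful of load predictors
    X = rng.normal(size=(n_plans, n_loads))
    y = 780 - 8 * X[:, 0] + 3 * X[:, 1] ** 2 + rng.normal(0, 5, n_plans)  # 3 km time [s]

    model = make_pipeline(
        PolynomialFeatures(degree=2, include_bias=False),  # adds the quadratic terms
        StandardScaler(),
        LassoCV(cv=LeaveOneOut(), max_iter=50000),
    )
    model.fit(X, y)
    n_active = int((model[-1].coef_ != 0).sum())
    print(f"active terms after LASSO shrinkage: {n_active}")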

  5. Operational characteristics of nuclear power plants - modelling of operational safety

    International Nuclear Information System (INIS)

    Studovic, M.

    1984-01-01

    Drawing on the operational experience of nuclear power plants and the realized levels of plant availability, system and component reliability, operational safety and public protection, and on the nature of disturbances in power plant systems and the lessons drawn from TMI-2, the paper discusses: examination of design safety for ultimately ensuring safe operational conditions of the nuclear power plant; the significance of adequate action for keeping process parameters within prescribed limits and meeting reactor cooling requirements; developed systems for the measurement, detection and monitoring of all critical parameters in the nuclear steam supply system; the contents of theoretical investigations and mathematical modeling of physical phenomena and processes in nuclear power plant systems and components, as software support for ensuring operational safety and as a new approach to staff education; and the program and progress of the investigation of some physical phenomena and the mathematical modeling of nuclear plant transients, prepared at the Faculty of Mechanical Engineering in Belgrade. (author)

  6. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  7. Modeling Control Situations in Power System Operations

    DEFF Research Database (Denmark)

    Saleem, Arshad; Lind, Morten; Singh, Sri Niwas

    2010-01-01

    Increased interconnection and loading of the power system along with deregulation has brought new challenges for electric power system operation, control and automation. Traditional power system models used in intelligent operation and control are highly dependent on the task purpose. Thus, a model for intelligent operation and control must represent system features, so that information from measurements can be related to possible system states and to control actions. These general modeling requirements are well understood, but it is, in general, difficult to translate them into a model because of the lack of explicit principles for model construction. This paper presents a work on using explicit means-ends model-based reasoning about complex control situations, which results in maintaining consistent perspectives and selecting appropriate control actions for goal-driven agents. An example of power system...

  8. Fuzzy rule-based model for hydropower reservoirs operation

    Energy Technology Data Exchange (ETDEWEB)

    Moeini, R.; Afshar, A.; Afshar, M.H. [School of Civil Engineering, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2011-02-15

    Real-time hydropower reservoir operation is a continuous decision-making process of determining the water level of a reservoir or the volume of water released from it. Hydropower operation is usually based on operating policies and rules defined and decided upon in strategic planning. This paper presents a fuzzy rule-based model for the operation of hydropower reservoirs. The proposed fuzzy rule-based model presents a set of suitable operating rules for release from the reservoir based on ideal or target storage levels. The model operates on an 'if-then' principle, in which the 'if' is a vector of fuzzy premises and the 'then' is a vector of fuzzy consequences. In this paper, reservoir storage, inflow, and period are used as premises and the release as the consequence. The steps involved in the development of the model include construction of membership functions for the inflow, storage and release, formulation of fuzzy rules, implication, aggregation and defuzzification. The knowledge bases required for the formulation of the fuzzy rules are obtained from a stochastic dynamic programming (SDP) model with a steady-state policy. The proposed model is applied to the hydropower operation of the Dez reservoir in Iran and the results are presented and compared with those of the SDP model. The results indicate the ability of the method to solve hydropower reservoir operation problems. (author)
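
    The 'if-then' fuzzy machinery described above can be sketched in pure Python with triangular membership functions, two rules, and centroid defuzzification; all shapes, limits, and rules are invented for illustration:

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    release = np.linspace(0, 100, 501)  # candidate release values [m^3/s]

    def recommend_release(storage_frac):
        # Premises: how 'low' or 'high' the current storage is (fraction of capacity).
        low = tri(storage_frac, 0.0, 0.2, 0.6)
        high = tri(storage_frac, 0.4, 0.8, 1.2)
        # Rules: IF storage is low THEN release is small; IF high THEN release is large.
        small = np.minimum(low, tri(release, 0, 20, 50))    # implication (min)
        large = np.minimum(high, tri(release, 40, 80, 100))
        agg = np.maximum(small, large)                      # aggregation (max)
        return (release * agg).sum() / agg.sum()            # centroid defuzzification

    print(f"storage 30% full -> release {recommend_release(0.3):.1f} m^3/s")
    print(f"storage 90% full -> release {recommend_release(0.9):.1f} m^3/s")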

  9. Business Intelligence Modeling in Launch Operations

    Science.gov (United States)

    Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.

    2005-01-01

    This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long-term sustainability. A planning and analysis test bed is needed for evaluation of enterprise-level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB, integrating operations process models, systems and environment models, and cost models as a comprehensive disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root causes from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program. The proposed technology will produce

  10. Business intelligence modeling in launch operations

    Science.gov (United States)

    Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.

    2005-05-01

    The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns of set of processes rather than organizational units leading to end-to-end automation is becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems. This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations, process models, systems and environment models, and cost models as a comprehensive disciplined

  11. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever-increasing computing power available at affordable prices. This paper describes this process from several angles: first, the operational requirements which are most critical from the point of view of model performance, both for normal and off-normal operating conditions; a second section discusses core model characteristics in the light of the solutions implemented by Thomson Training and Simulation (TT and S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants; finally, we consider the model validation procedures, which are of course an integral part of model development, and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modeling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is a remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy, as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  12. Modeling Operations Costs for Human Exploration Architectures

    Science.gov (United States)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  13. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost incurred during operation. For this reason, based on the assumption that vibration directly relates to operation reliability and the degree of wear, operation reliability can be expressed as the normalization of the vibration level. The characteristic of vibration with respect to the operating point was studied, and it was concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape. There is a narrow sweet spot (80 to 100 percent of BEP in which low vibration levels are obtained, and vibration also follows a similar law with the square of the rotation speed in the absence of resonance phenomena. Operation reliability could then be modeled as a function of the capacity and rotation speed of the pump, and this function added to the traditional model to form the new one. In contrast with the traditional method, the results showed that the new model corrects the result produced by the traditional one and makes the pump operate with low vibration, so that operation reliability increases and maintenance cost decreases.
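
    The reliability term described above can be sketched as follows, assuming an invented bathtub curve for vibration versus flow (relative to the best-efficiency point, BEP) scaled by the square of the relative rotation speed:

    def vibration(q_frac_bep, speed_frac):
        """Relative vibration vs. flow (fraction of BEP) and speed (fraction of rated)."""
        bathtub = 1.0 + 4.0 * (q_frac_bep - 0.9) ** 2  # sweet spot near 80-100% of BEP
        return bathtub * speed_frac ** 2               # vibration ~ speed squared

    v_max = vibration(0.4, 1.0)  # a worst case used to normalize into [0, 1]

    def reliability(q_frac_bep, speed_frac):
        # Operation reliability as the normalized complement of the vibration level.
        return 1.0 - vibration(q_frac_bep, speed_frac) / v_max

    for q in (0.6, 0.9, 1.2):
        print(f"flow {q:.0%} of BEP: reliability = {reliability(q, 1.0):.2f}")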

  14. Operational Plan Ontology Model for Interconnection and Interoperability

    Science.gov (United States)

    Long, F.; Sun, Y. K.; Shi, H. Q.

    2017-03-01

    To address the bottleneck that assistant decision-making systems face in processing operational plan data and information, this paper starts from an analysis of the shortcomings of traditional representations and the technical advantages of ontology. It then defines the elements of the operational plan ontology model and establishes the basis for its construction. It next builds a semi-knowledge-level operational plan ontology model. Finally, it examines the expression of operational plans based on the ontology model and the use of the accompanying application software. The work thus has theoretical significance and application value for improving the interconnection and interoperability of operational plans among assistant decision-making systems.

  15. Development of operator thinking model and its application to nuclear reactor plant operation system

    International Nuclear Information System (INIS)

    Miki, Tetsushi; Endou, Akira; Himeno, Yoshiaki

    1992-01-01

    First, this paper presents the method used to develop an operator thinking model and outlines the resulting model. Next, it describes the nuclear reactor plant operation system that has been developed based on this model. Finally, it is confirmed that the method described in this paper is very effective for constructing expert systems that replace the reactor operator's role with AI (artificial intelligence) systems. (author)

  16. Why operational risk modelling creates inverse incentives

    NARCIS (Netherlands)

    Doff, R.

    2015-01-01

    Operational risk modelling has become commonplace in large international banks and is gaining popularity in the insurance industry as well. This is partly due to financial regulation (Basel II, Solvency II). This article argues that operational risk modelling is fundamentally flawed, despite efforts

  17. Modelling of Batch Process Operations

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    Here a batch cooling crystalliser is modelled and simulated as is a batch distillation system. In the batch crystalliser four operational modes of the crystalliser are considered, namely: initial cooling, nucleation, crystal growth and product removal. A model generation procedure is shown that s...

  18. An approach to modeling operator's cognitive behavior using artificial intelligence techniques in emergency operating event sequences

    International Nuclear Information System (INIS)

    Cheon, Se Woo; Sur, Sang Moon; Lee, Yong Hee; Park, Young Taeck; Moon, Sang Joon

    1994-01-01

    Computer modeling of an operator's cognitive behavior is a promising approach for human factors studies and man-machine systems assessment. In this paper, the state of the art in modeling operator behavior and the current status of the development of an operator model (MINERVA-NPP) are presented. The model is constructed as a knowledge-based system within a blackboard framework and is simulated based on emergency operating procedures.

  19. Modeling operators' emergency response time for chemical processing operations.

    Science.gov (United States)

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect that the time required to take action in emergency situations will differ from that at a normal production pace. It is possible that in an emergency, operators will act faster compared to a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times in emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (a predetermined time standard system) and adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency-pace and the normal-working-pace operations.

  20. Operations and Modeling Analysis

    Science.gov (United States)

    Ebeling, Charles

    2005-01-01

    The Reliability and Maintainability Analysis Tool (RMAT) provides NASA the capability to estimate reliability and maintainability (R&M) parameters and operational support requirements for proposed space vehicles based upon relationships established from both aircraft and Shuttle R&M data. RMAT has matured both in its underlying database and in its level of sophistication in extrapolating this historical data to satisfy proposed mission requirements, maintenance concepts and policies, and type of vehicle (i.e., ranging from aircraft-like to Shuttle-like). However, a companion analysis tool, the Logistics Cost Model (LCM), has not reached the same level of maturity as RMAT due, in large part, to nonexistent or outdated cost estimating relationships and underlying cost databases, and its almost exclusive dependence on Shuttle operations and logistics cost input parameters. As a result, the full capability of the RMAT/LCM suite of analysis tools to take a conceptual vehicle and derive its operations and support requirements along with the resulting operating and support costs has not been realized.

  1. Inference in partially identified models with many moment inequalities using Lasso

    DEFF Research Database (Denmark)

    Bugni, Federico A.; Caner, Mehmet; Kock, Anders Bredahl

    This paper considers the problem of inference in a partially identified moment (in)equality model with possibly many moment inequalities. Our contribution is to propose a novel two-step inference method based on the combination of two ideas. On the one hand, our test statistic and critical...

  2. Operator expansion in σ-model

    International Nuclear Information System (INIS)

    Terent'ev, M.V.

    1986-01-01

    The operator expansion is studied in the two-dimensional σ-model with O(N) symmetry group at large values of N for the Green function at x² → 0 (here n(x) is the dynamical field of the σ-model). As a preliminary step, the renormalization scheme is formulated in the framework of the 1/N expansion, in which an intermediate scale μ² is introduced and the regions of large (p > μ) and small (p < μ) momenta are separated. The corrections of order f(μ²)/N in the composite operators (here f(μ²) is the effective coupling constant at the point μ²) and the corrections of order m²x²f(μ²)/N in the coefficient functions (here m is the dynamical mass-scale factor of the σ-model) decisively depend on the recipe for factorizing the small- and large-momentum regions. Due to the analogy between the σ-model and quantum chromodynamics (QCD), the obtained result indicates theoretical limitations to the accuracy of the sum-rule method in QCD
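
    The expansion studied in this record has the generic short-distance form (a schematic sketch in standard OPE notation, not the paper's exact expressions):

        G(x) \;\sim\; \sum_i C_i(x^2, \mu^2)\,\langle O_i(\mu^2)\rangle, \qquad x^2 \to 0,

    where the coefficient functions C_i absorb the hard momenta (p > μ) and the averages of the composite operators O_i carry the soft momenta (p < μ); the dependence on the intermediate scale μ must cancel between the two factors, which is exactly where the factorization ambiguities discussed above enter.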

  3. Launch and Landing Effects Ground Operations (LLEGO) Model

    Science.gov (United States)

    2008-01-01

    LLEGO is a model for understanding recurring launch and landing operations costs at Kennedy Space Center for human space flight. Launch and landing operations are often referred to as ground processing, or ground operations. Currently, this function is specific to the ground operations for the Space Shuttle Space Transportation System within the Space Shuttle Program. The Constellation system to follow the Space Shuttle consists of the crewed Orion spacecraft atop an Ares I launch vehicle and the uncrewed Ares V cargo launch vehicle. The Constellation flight and ground systems build upon many elements of the existing Shuttle flight and ground hardware, as well as upon existing organizations and processes. In turn, the LLEGO model builds upon past ground operations research, modeling, data, and experience in estimating for future programs. Rather than simply providing estimates, the LLEGO model's main purpose is to improve expenses by relating complex relationships among functions (ground operations contractor, subcontractors, civil service technical, center management, operations, etc.) to tangible drivers. Drivers include flight system complexity and reliability, as well as operations and supply chain management processes and technology. Together these factors define the operability and potential improvements for any future system, from the most direct to the least direct expenses.

  4. Detection of Independent Associations of Plasma Lipidomic Parameters with Insulin Sensitivity Indices Using Data Mining Methodology.

    Directory of Open Access Journals (Sweden)

    Steffi Kopprasch

    Full Text Available Glucolipotoxicity is a major pathophysiological mechanism in the development of insulin resistance and type 2 diabetes mellitus (T2D). We aimed to detect subtle changes in the circulating lipid profile by shotgun lipidomics analyses and to associate them with four different insulin sensitivity indices. The cross-sectional study comprised 90 men with a broad range of insulin sensitivity, including normal glucose tolerance (NGT, n = 33), impaired glucose tolerance (IGT, n = 32), and newly detected T2D (n = 25). Prior to an oral glucose challenge, plasma was obtained and quantitatively analyzed for 198 lipid molecular species from 13 different lipid classes, including triacylglycerols (TAGs), phosphatidylcholine plasmalogens/ethers (PC O-s), sphingomyelins (SMs), and lysophosphatidylcholines (LPCs). To identify a lipidomic signature of individual insulin sensitivity we applied three data mining approaches, namely least absolute shrinkage and selection operator (LASSO), Support Vector Regression (SVR), and Random Forests (RF), to the following insulin sensitivity indices: homeostasis model of insulin resistance (HOMA-IR), glucose insulin sensitivity index (GSI), insulin sensitivity index (ISI), and disposition index (DI). The LASSO procedure offers high prediction accuracy and easier interpretability than SVR and RF. After LASSO selection, the plasma lipidome explained from 3% (DI) to at most 53% (HOMA-IR) of the variability in the sensitivity indices. Among the lipid species with the highest positive LASSO regression coefficients were TAG 54:2 (HOMA-IR), PC O-32:0 (GSI), and SM 40:3:1 (ISI). The highest negative regression coefficients were obtained for LPC 22:5 (HOMA-IR), TAG 51:1 (GSI), and TAG 58:6 (ISI). Although a substantial part of the lipid molecular species showed a significant correlation with insulin sensitivity indices, we were able to identify a limited number of lipid metabolites of particular importance based on the LASSO approach. These few selected lipids with the closest
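
    As an illustration of the selection step described above, the following is a minimal sketch of LASSO-based feature selection in Python with scikit-learn; the data, shapes, and variable names are hypothetical stand-ins for the lipid matrix and an insulin sensitivity index, not the study's data.

        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.preprocessing import StandardScaler

        # Hypothetical data: rows = subjects, columns = lipid species
        # (e.g., 90 men x 198 species as in the study); y = an insulin
        # sensitivity index such as HOMA-IR.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(90, 198))
        y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=90)

        X_std = StandardScaler().fit_transform(X)  # LASSO is scale-sensitive
        model = LassoCV(cv=5).fit(X_std, y)        # penalty chosen by cross-validation

        selected = np.flatnonzero(model.coef_)     # species with nonzero weights
        print(f"{selected.size} lipid species selected")

    The sign of each retained coefficient plays the role of the positive and negative LASSO regression coefficients reported in the record.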

  5. Operations planning simulation: Model study

    Science.gov (United States)

    1974-01-01

    The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternative system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation that offer potential advantages to the system user and the operator. These advantages are measured in terms of system efficiency: (1) the ability to meet specific schedules for operations, mission or mission-readiness requirements, or performance standards; and (2) the ability to accomplish those objectives within cost-effective limits.

  6. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    Full Text Available would be needed by a Cyber Security Operations Centre in order to perform offensive cyber operations?". The analysis was performed, using as a springboard seven models of cyber-attack, and resulted in the development of what is described as a canonical...

  7. Operational risk quantification and modelling within Romanian insurance industry

    Directory of Open Access Journals (Sweden)

    Tudor Răzvan

    2017-07-01

    Full Text Available This paper aims at covering and describing the shortcomings of the various models used to quantify and model operational risk within the insurance industry, with a particular focus on a Romania-specific regulation: Norm 6/2015 concerning operational risk arising from IT systems. While most of the local insurers are focusing on implementing the standard model to compute the solvency capital required for operational risk, the local regulator has issued a norm that requires the identification and assessment of IT-based operational risks from an ISO 27001 perspective. The challenges raised by the correlations assumed in the standard model are substantially increased by this new regulation, which requires only the identification and quantification of IT operational risks. The solvency capital requirement stipulated by the implementation of Solvency II does not prescribe a model or formula for integrating the newly identified risks into the operational risk capital requirements. In this context we assess the academic and practitioners' understanding of the Frequency-Severity approach, Bayesian estimation techniques, Scenario Analysis, and Risk Accounting based on risk units, and of how they could support the modelling of IT-based operational risks. Developing an internal model only for the operational risk capital requirement has proved, so far, costly and not necessarily beneficial for the local insurers. As the IT component will play a key role in the future of the insurance industry, the result of this analysis provides a specific approach to operational risk modelling that can be implemented in the context of Solvency II, in the particular situation where (internal or external) operational risk databases are scarce or not available.

  8. An operator calculus for surface and volume modeling

    Science.gov (United States)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
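
    A concrete instance of the Boolean-sum construction described above is the classical bilinearly blended Coons patch; the Python sketch below (with illustrative boundary curves, not taken from the paper) shows how the Boolean sum P1 ⊕ P2 = P1 + P2 − P1P2 of two univariate blending operators interpolates all four boundary curves.

        import numpy as np

        # Boundary curves of a patch, parameterized on [0, 1]; these
        # specific curves are illustrative only.
        c0 = lambda u: np.array([u, 0.0, np.sin(np.pi * u)])   # v = 0 edge
        c1 = lambda u: np.array([u, 1.0, 0.0])                 # v = 1 edge
        d0 = lambda v: np.array([0.0, v, 0.0])                 # u = 0 edge
        d1 = lambda v: np.array([1.0, v, 0.0])                 # u = 1 edge

        def coons(u, v):
            """Boolean sum P1 + P2 - P1P2 of linear blends in u and v."""
            p1 = (1 - v) * c0(u) + v * c1(u)                   # blend across v
            p2 = (1 - u) * d0(v) + u * d1(v)                   # blend across u
            corners = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
                       + (1 - u) * v * c1(0) + u * v * c1(1))  # P1P2 term
            return p1 + p2 - corners

        print(coons(0.5, 0.0))   # lies exactly on the c0 boundary curve

    Evaluating at v = 0 returns c0(u) exactly, which is the interpolation property that the operator calculus formalizes and generalizes to volumes.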

  9. Proposal for operator's mental model using the concept of multilevel flow modeling

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Takano, Kenichi; Sasou, Kunihide

    1995-01-01

    It is necessary to analyze an operator's thinking process and an operator team's intention-forming process in order to prevent human errors in a highly advanced, huge system like a nuclear power plant. The Central Research Institute of Electric Power Industry is promoting a research project to establish human error prevention countermeasures by modeling the thinking and intention-forming processes. What is important is future prediction and cause identification when abnormal situations occur in a nuclear power plant. The concept of Multilevel Flow Modeling (MFM) appears effective as an operator's mental model for performing such future prediction and cause identification. MFM is a concept which qualitatively describes plant functions in terms of energy and mass flows, and which also describes the plant status by hierarchically breaking down the goals that a plant should achieve. In this paper, an operator's mental model using the concept of MFM is proposed, and a nuclear power plant diagnosis support system using MFM was developed. A system evaluation test by personnel with operational experience in nuclear power plants revealed that MFM was superior in future prediction and cause identification to a traditional nuclear power plant status display system using mimics and trends. The test proved MFM to be useful as an operator's mental model. (author)

  10. Multivariate operational risk: dependence modelling with Lévy copulas

    OpenAIRE

    Böcker, K. and Klüppelberg, C.

    2015-01-01

    Simultaneous modelling of operational risks occurring in different event type/business line cells poses a challenge for operational risk quantification. Invoking the new concept of Lévy copulas for dependence modelling yields simple approximations of high quality for multivariate operational VaR.

  11. Regularized rare variant enrichment analysis for case-control exome sequencing data.

    Science.gov (United States)

    Larson, Nicholas B; Schaid, Daniel J

    2014-02-01

    Rare variants have recently garnered an immense amount of attention in genetic association analysis. However, unlike methods traditionally used for single marker analysis in GWAS, rare variant analysis often requires some method of aggregation, since single marker approaches are poorly powered for typical sequencing study sample sizes. Advancements in sequencing technologies have rendered next-generation sequencing platforms a realistic alternative to traditional genotyping arrays. Exome sequencing in particular not only provides base-level resolution of genetic coding regions, but also a natural paradigm for aggregation via genes and exons. Here, we propose the use of penalized regression in combination with variant aggregation measures to identify rare variant enrichment in exome sequencing data. In contrast to marginal gene-level testing, we simultaneously evaluate the effects of rare variants in multiple genes, focusing on gene-based least absolute shrinkage and selection operator (LASSO) and exon-based sparse group LASSO models. By using gene membership as a grouping variable, the sparse group LASSO can be used as a gene-centric analysis of rare variants while also providing a penalized approach toward identifying specific regions of interest. We apply extensive simulations to evaluate the performance of these approaches with respect to specificity and sensitivity, comparing these results to multiple competing marginal testing methods. Finally, we discuss our findings and outline future research. © 2013 WILEY PERIODICALS, INC.
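
    scikit-learn does not ship a sparse group LASSO (specialized packages are needed for that), but the gene-based LASSO component can be sketched as an L1-penalized logistic regression on per-gene aggregation scores; everything below (the burden-score aggregation, sizes, and names) is a simplified, hypothetical stand-in for the models in this record.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical setup: rare variants are aggregated per gene into
        # burden scores (e.g., count of rare alleles per gene per subject);
        # the gene-based LASSO then shrinks most gene effects to exactly zero.
        rng = np.random.default_rng(1)
        n_subjects, n_genes = 500, 200
        X_burden = rng.poisson(0.3, size=(n_subjects, n_genes)).astype(float)
        logit = 1.2 * X_burden[:, 0] - 1.0          # one truly associated gene
        y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        # L1-penalized (LASSO) logistic regression; C is the inverse penalty
        # strength and would normally be tuned by cross-validation.
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        clf.fit(X_burden, y)
        print("genes retained:", np.flatnonzero(clf.coef_[0]))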

  12. Operator formulation of the droplet model

    International Nuclear Information System (INIS)

    Lee, B.W.

    1987-01-01

    We study in detail the implications of the operator formulation of the droplet model. The picture of high-energy scattering that emerges from this model attributes the interaction between two colliding particles at high energies to an instantaneous, multiple exchange between two extended charge distributions. Thus the study of charge correlation functions becomes the most important problem in the droplet model. We find that in order for the elastic cross section to have a finite limit at infinite energy, the charge must be a conserved one. In quantum electrodynamics the charge in question is the electric charge. In hadronic physics, we conjecture, it is the baryonic charge. Various arguments for and implications of this hypothesis are presented. We study formal properties of the charge correlation functions that follow from microcausality, T, C, P invariances, and charge conservation. Perturbation expansion of the correlation functions is studied, and their cluster properties are deduced. A cluster expansion of the high-energy T matrix is developed, and the exponentiation of the interaction potential in this scheme is noted. The operator droplet model is put to the test of reproducing the high-energy limit of elastic scattering in quantum electrodynamics found by Cheng and Wu in perturbation theory. We find that the droplet model reproduces exactly the results of Cheng and Wu as to the impact factor. In fact, the ''impact picture'' of Cheng and Wu is completely equivalent to the droplet model in the operator version. An appraisal is made of the possible limitation of the model. (author). 13 refs

  13. Systems Integration Operations/Logistics Model (SOLMOD)

    International Nuclear Information System (INIS)

    Vogel, L.W.; Joy, D.S.

    1990-01-01

    SOLMOD is a discrete event simulation model written in FORTRAN 77 and operates in a VAX or PC environment. The model emulates the movement and interaction of equipment and radioactive waste as it is processed through the FWMS. SOLMOD can be used to measure the impacts of different operating schedules and rules, system configurations, reliability, availability, maintainability (RAM) considerations, and equipment and other resource availabilities on the performance of processes comprising the FWMS and how these factors combine to determine overall system performance. Model outputs are a series of measurements of the amount and characteristics of waste at selected points in the FWMS and the utilization of resources needed to transport and process the waste. The model results may be reported on a yearly, monthly, weekly, or daily basis to facilitate analysis. 3 refs., 3 figs., 2 tabs

  14. Improved intact soil-core carbon determination applying regression shrinkage and variable selection techniques to complete spectrum laser-induced breakdown spectroscopy (LIBS).

    Science.gov (United States)

    Bricklemyer, Ross S; Brown, David J; Turk, Philip J; Clegg, Sam M

    2013-10-01

    Laser-induced breakdown spectroscopy (LIBS) provides a potential method for rapid, in situ soil C measurement. In previous research on the application of LIBS to intact soil cores, we hypothesized that ultraviolet (UV) spectrum LIBS (200-300 nm) might not provide sufficient elemental information to reliably discriminate between soil organic C (SOC) and inorganic C (IC). In this study, using a custom complete spectrum (245-925 nm) core-scanning LIBS instrument, we analyzed 60 intact soil cores from six wheat fields. Predictive multi-response partial least squares (PLS2) models using full and reduced spectrum LIBS were compared for directly determining soil total C (TC), IC, and SOC. Two regression shrinkage and variable selection approaches, the least absolute shrinkage and selection operator (LASSO) and sparse multivariate regression with covariance estimation (MRCE), were tested for soil C predictions and the identification of wavelengths important for soil C prediction. Using complete spectrum LIBS for PLS2 modeling reduced the calibration standard error of prediction (SEP) 15 and 19% for TC and IC, respectively, compared to UV spectrum LIBS. The LASSO and MRCE approaches provided significantly improved calibration accuracy and reduced SEP 32-55% over UV spectrum PLS2 models. We conclude that (1) complete spectrum LIBS is superior to UV spectrum LIBS for predicting soil C for intact soil cores without pretreatment; (2) LASSO and MRCE approaches provide improved calibration prediction accuracy over PLS2 but require additional testing with increased soil and target analyte diversity; and (3) measurement errors associated with analyzing intact cores (e.g., sample density and surface roughness) require further study and quantification.
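
    A rough sketch of the LASSO workflow for spectra like these follows, using simulated data; the wavelength grid, sample sizes, and the convention of computing SEP as the standard deviation of held-out residuals are assumptions for illustration, not the paper's exact protocol.

        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.model_selection import train_test_split

        # Hypothetical spectra: rows = soil cores, columns = intensities at
        # LIBS wavelength channels (245-925 nm); y = total C concentration.
        rng = np.random.default_rng(2)
        wavelengths = np.linspace(245, 925, 1000)
        X = rng.normal(size=(60, wavelengths.size))       # 60 cores
        y = 2.0 * X[:, 100] + 1.5 * X[:, 400] + rng.normal(scale=0.5, size=60)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = LassoCV(cv=5).fit(X_tr, y_tr)

        # Standard error of prediction (SEP) on held-out cores, plus the
        # wavelengths the LASSO deems informative for soil C.
        resid = y_te - model.predict(X_te)
        print(f"SEP = {resid.std(ddof=1):.3f}")
        print("selected wavelengths (nm):", wavelengths[model.coef_ != 0])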

  15. The DIAMOND Model of Peace Support Operations

    National Research Council Canada - National Science Library

    Bailey, Peter

    2005-01-01

    DIAMOND (Diplomatic And Military Operations in a Non-warfighting Domain) is a high-level stochastic simulation developed at Dstl as a key centerpiece within the Peace Support Operations (PSO) 'modelling jigsaw...

  16. Study on modeling of operator's learning mechanism

    International Nuclear Information System (INIS)

    Yoshimura, Seichi; Hasegawa, Naoko

    1998-01-01

    One effective method for analyzing the causes of human errors is to model human behavior and simulate it. The Central Research Institute of Electric Power Industry (CRIEPI) has developed an operator team behavior simulation system called SYBORG (Simulation System for the Behavior of an Operating Group) to analyze human errors and to establish countermeasures for them. Because the operator behavior model that composes SYBORG has no learning mechanism and its plant knowledge is fixed, it can neither take suitable actions when unknown situations occur nor learn anything from experience. For actual operators, however, learning is an essential human factor that enhances their ability to diagnose plant anomalies. In this paper, Q learning with 1/f fluctuation is proposed as an operator's learning mechanism, and a simulation using the mechanism was conducted. The results showed the effectiveness of the learning mechanism. (author)
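
    A minimal tabular Q-learning sketch is given below; the AR(1) noise term merely stands in for the paper's 1/f fluctuation, and the toy plant dynamics and reward are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        n_states, n_actions = 5, 3
        Q = np.zeros((n_states, n_actions))
        alpha, gamma = 0.1, 0.9      # learning rate, discount factor

        noise, state = 0.0, 0
        for step in range(1000):
            noise = 0.95 * noise + 0.05 * rng.normal()   # stand-in for 1/f fluctuation
            # Fluctuation-perturbed greedy choice instead of plain epsilon-greedy.
            action = int(np.argmax(Q[state] + noise * rng.normal(size=n_actions)))
            next_state = int(rng.integers(n_states))     # toy plant dynamics
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                         - Q[state, action])
            state = next_state
        print(Q.round(2))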

  17. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality is of growing concern to grid enterprises. First, an evaluation indicator system covering network, business, and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers the advantages of fast convergence and high prediction accuracy in contrast with the regularized BP neural network alone, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  18. Model Based Autonomy for Robust Mars Operations

    Science.gov (United States)

    Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require the crew to have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.

  19. Renormalizations and operator expansion in sigma model

    International Nuclear Information System (INIS)

    Terentyev, M.V.

    1988-01-01

    The operator expansion (OPE) is studied for the Green function at x² → 0 (n(x) is the dynamical field of the σ-model) in the framework of the two-dimensional σ-model with the O(N) symmetry group at large N. As a preliminary step we formulate the renormalization scheme, which permits the introduction of an arbitrary intermediate scale μ² in the framework of the 1/N expansion, and discuss the factorization (separation) of the small (p < μ) and large (p > μ) momentum regions. It is shown that the definition of the composite local operators and coefficient functions figuring in the OPE is unambiguous only in the leading order of the 1/N expansion, where the solutions with an extremum of the action are dominant. Corrections of order f(μ²)/N (here f(μ²) is the effective interaction constant at the point μ²) in the composite operators and coefficient functions essentially depend on the method of factorizing the high- and low-momentum regions. It is shown also that contributions to the power corrections of order m²x²f(μ²)/N in the Green function (here m is the dynamical mass-scale factor in the σ-model) arise simultaneously from two sources: from the mean vacuum value of the composite operator n∂²n and from the hard-particle contributions in the coefficient function of the unit operator. Due to the analogy between the σ-model and QCD, the obtained result indicates theoretical limitations to the sum rule method in QCD. (author)

  20. An operator model-based filtering scheme

    International Nuclear Information System (INIS)

    Sawhney, R.S.; Dodds, H.L.; Schryer, J.C.

    1990-01-01

    This paper presents a diagnostic model developed at Oak Ridge National Laboratory (ORNL) for off-normal nuclear power plant events. The diagnostic model is intended to serve as an embedded module of a cognitive model of the human operator, one application of which could be to assist control room operators in correctly responding to off-normal events by providing a rapid and accurate assessment of alarm patterns and parameter trends. The sequential filter model comprises two distinct subsystems: an alarm analysis followed by an analysis of interpreted plant signals. During the alarm analysis phase, the alarm pattern is evaluated to generate hypotheses of possible initiating events in order of likelihood of occurrence. Each hypothesis is further evaluated through analysis of the current trends of state variables in order to validate or reject (in the form of an increased or decreased certainty factor) the given hypothesis. 7 refs., 4 figs

  1. Advancing reservoir operation description in physically based hydrological models

    Science.gov (United States)

    Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo

    2016-04-01

    Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision-making processes in response, e.g., to the fluctuations of energy prices and demands, the temporal unavailability of power plants or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision-making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternative modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production. Results show to which extent the hydrological regime in the catchment is affected by different behavioural models and reservoir

  2. The national operational environment model (NOEM)

    Science.gov (United States)

    Salerno, John J.; Romano, Brian; Geiler, Warren

    2011-06-01

    The National Operational Environment Model (NOEM) is a strategic analysis/assessment tool that provides insight into the complex state space (as a system) that is today's modern operational environment. The NOEM supports baseline forecasts by generating plausible futures based on the current state. It supports what-if analysis by forecasting ramifications of potential "Blue" actions on the environment. The NOEM also supports sensitivity analysis by identifying possible pressure (leverage) points in support of the Commander for resolving forecasted instabilities, and by ranking sensitivities in a list for each leverage point and response. The NOEM can be used to assist Decision Makers, Analysts and Researchers with understanding the inner workings of a region or nation state, the consequences of implementing specific policies, and the ability to plug in new operational environment theories/models as they mature. The NOEM is built upon an open-source, license-free set of capabilities, and aims to provide support for pluggable modules that make up a given model. The NOEM currently has an extensive number of modules (e.g. economic, security & social well-being pieces such as critical infrastructure) completed along with a number of tools to exercise them. The focus this year is on modeling the social and behavioral aspects of a populace within their environment, primarily the formation of various interest groups, their beliefs, their requirements, their grievances, their affinities, and the likelihood of a wide range of their actions, depending on their perceived level of security and happiness. As such, several research efforts are currently underway to model human behavior from a group perspective, in the pursuit of eventual integration and balance of populace needs/demands within their respective operational environment and the capacity to meet those demands. In this paper we will provide an overview of the NOEM, the need for, and a description of its main components.

  3. Ecole d'été de probabilités de Saint-Flour XLV

    CERN Document Server

    van de Geer, Sara

    2016-01-01

    Taking the Lasso method as its starting point, this book describes the main ingredients needed to study general loss functions and sparsity-inducing regularizers. It also provides a semi-parametric approach to establishing confidence intervals and tests. Sparsity-inducing methods have proven to be very useful in the analysis of high-dimensional data. Examples include the Lasso and group Lasso methods, and the least squares method with other norm-penalties, such as the nuclear norm. The illustrations provided include generalized linear models, density estimation, matrix completion and sparse principal components. Each chapter ends with a problem section. The book can be used as a textbook for a graduate or PhD course.

  4. An Optimal DEM Reconstruction Method for Linear Array Synthetic Aperture Radar Based on Variational Model

    Directory of Open Access Journals (Sweden)

    Shi Jun

    2015-02-01

    Full Text Available Downward-looking Linear Array Synthetic Aperture Radar (LASAR) has many potential applications in topographic mapping, disaster monitoring and reconnaissance, especially in mountainous areas. However, limited by the size of its platform, its resolution in the linear array direction is always far lower than those in the range and azimuth directions. This disadvantage leads to the blurring of Three-Dimensional (3D) images in the linear array direction, and restricts the application of LASAR. To date, research on 3D SAR image enhancement has focused on sparse recovery techniques. In this case, the one-to-one mapping of the Digital Elevation Model (DEM) breaks down. To overcome this, an optimal DEM reconstruction method for LASAR based on a variational model is discussed, in an effort to optimize the DEM and the associated scattering coefficient map, and to minimize the Mean Square Error (MSE). Using simulation experiments, it is found that the variational model is more suitable for DEM enhancement applications on all kinds of terrains compared with the Orthogonal Matching Pursuit (OMP) and Least Absolute Shrinkage and Selection Operator (LASSO) methods.
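
    The OMP and LASSO baselines mentioned here are standard sparse-recovery methods; a toy comparison on a simulated underdetermined problem (sizes, noise level, and penalties are illustrative, not the paper's setup) can be sketched as follows.

        import numpy as np
        from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

        # Recover a sparse elevation/scattering profile x from compressed
        # measurements y = A @ x + noise.
        rng = np.random.default_rng(4)
        n_meas, n_cells, k = 80, 200, 5
        A = rng.normal(size=(n_meas, n_cells)) / np.sqrt(n_meas)
        x_true = np.zeros(n_cells)
        x_true[rng.choice(n_cells, k, replace=False)] = rng.normal(size=k)
        y = A @ x_true + 0.01 * rng.normal(size=n_meas)

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y)
        lasso = Lasso(alpha=0.01).fit(A, y)

        for name, est in [("OMP", omp.coef_), ("LASSO", lasso.coef_)]:
            print(f"{name}: MSE = {np.mean((est - x_true) ** 2):.2e}")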

  5. Role of cognitive models of operators in the design, operation and licensing of nuclear power plants

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1982-01-01

    Cognitive models of the behavior of nuclear power plant operators - that is, models developed in terms of human properties rather than external task characteristics - are assuming increasingly important roles in plant design, operation and licensing. This is partly due to an increased concern for human decision making during unfamiliar plant conditions, and partly due to problems that arise when modern information technology is used to support operators in complex situations. Some of the problems identified during work on interface design and risk analysis are described. First, the question of categories of models is raised. Next, the use of cognitive models for system design is discussed. The use of the available cognitive models for more effective operator training is also advocated. The need for using cognitive models in risk analysis is also emphasized. Finally, the sources of human performance data, that is, event reports, incident analysis, experiments, and training simulators are mentioned, and the need for a consistent framework for data analysis based on cognitive models is discussed

  6. Role of conceptual models in nuclear power plant operation

    International Nuclear Information System (INIS)

    Williams, M.D.; Moran, T.P.; Brown, J.S.

    1982-01-01

    A crucial objective in plant operation (and perhaps licensing) ought to be to explicitly train operators to develop, perhaps with computer aids, robust conceptual models of the plants they control. The question is whether we are actually able to develop robust conceptual models and validate their robustness. Cognitive science is just beginning to come to grips with this problem. This paper describes some of the evolving technology for building conceptual models of physical mechanisms and some of the implications of such models in the context of nuclear power plant operation

  7. Relaxed memory models: an operational approach

    OpenAIRE

    Boudol , Gérard; Petri , Gustavo

    2009-01-01

    Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...

  8. An Economic Model of U.S. Airline Operating Expenses

    Science.gov (United States)

    Harris, Franklin D.

    2005-01-01

    This report presents a new economic model of operating expenses for 67 airlines. The model is based on data that the airlines reported to the United States Department of Transportation in 1999. The model incorporates expense-estimating equations that capture direct and indirect expenses of both passenger and cargo airlines. The variables and business factors included in the equations are detailed enough to calculate expenses at the flight equipment reporting level. Total operating expenses for a given airline are then obtained by summation over all aircraft operated by the airline. The model's accuracy is demonstrated by correlation with the DOT Form 41 data from which it was derived. Passenger airlines are more accurately modeled than cargo airlines. An appendix presents a concise summary of the expense estimating equations with explanatory notes. The equations include many operational and aircraft variables, which accommodate any changes that airline and aircraft manufacturers might make to lower expenses in the future. In 1999, total operating expenses of the 67 airlines included in this study amounted to slightly over $100.5 billion. The economic model reported herein estimates $109.3 billion.

  9. A computational model for biosonar echoes from foliage.

    Directory of Open Access Journals (Sweden)

    Chen Ming

    Full Text Available Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals' sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats.

  10. Use of an operational model evaluation system for model intercomparison

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K. T., LLNL

    1998-03-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response system used to assess the impact from atmospheric releases of hazardous materials. As part of an ongoing development program, new three-dimensional diagnostic windfield and Lagrangian particle dispersion models will soon replace ARAC's current operational windfield and dispersion codes. A prototype model performance evaluation system has been implemented to facilitate the study of the capabilities and performance of early development versions of these new models relative to ARAC's current operational codes. This system provides tools for both objective statistical analysis using common performance measures and for more subjective visualization of the temporal and spatial relationships of model results relative to field measurements. Supporting this system is a database of processed field experiment data (source terms and meteorological and tracer measurements) from over 100 individual tracer releases.

  11. The role of personality, disability and physical activity in the development of medication-overuse headache: a prospective observational study.

    Science.gov (United States)

    Mose, Louise S; Pedersen, Susanne S; Debrabant, Birgit; Jensen, Rigmor H; Gram, Bibi

    2018-05-25

    Factors associated with the development of medication-overuse headache (MOH) in migraine patients are not fully understood, but with respect to prevention, the ability to predict the onset of MOH is clinically important. The aims were to examine whether personality characteristics, disability and physical activity level are associated with the onset of MOH in a group of migraine patients, and to explore to what extent these factors combined can predict the onset of MOH. The study was a single-center prospective observational study of migraine patients. At inclusion, all patients completed questionnaires evaluating (1) personality (NEO Five-Factor Inventory), (2) disability (Migraine Disability Assessment), and (3) physical activity level (Physical Activity Scale 2.1). Diagnostic codes from patients' electronic health records confirmed whether they had developed MOH during the study period of 20 months. Analyses of associations were performed, and to identify which of the variables predict the onset of MOH, a multivariable least absolute shrinkage and selection operator (LASSO) logistic regression model was fitted to predict the presence or absence of MOH. Out of 131 participants, 12% (n=16) developed MOH. Migraine disability score (OR=1.02, 95% CI: 1.00 to 1.04), intensity of headache (OR=1.49, 95% CI: 1.03 to 2.15) and headache frequency (OR=1.02, 95% CI: 1.00 to 1.04) were associated with the onset of MOH, adjusting for age and gender. The predictive performance of the LASSO model (containing the predictors MIDAS score, MIDAS intensity and frequency, neuroticism score, time with moderate physical activity, educational level, hours of sleep daily and number of contacts with the headache clinic), evaluated in terms of area under the curve (AUC), was weak (apparent AUC=0.62, 95% CI: 0.41-0.82). Disability, headache intensity and frequency were associated with the onset of MOH, whereas personality and the
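
    The gap between an apparent AUC and a validated one, which is central to interpreting this record's result, can be illustrated with a small simulated sketch (all data, settings and the chosen penalty strength are hypothetical):

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_score

        # Hypothetical predictor matrix mirroring the paper's candidates
        # (MIDAS scores, neuroticism, activity, sleep, etc.) and a rare
        # binary outcome (~12% developed MOH). All data are simulated.
        rng = np.random.default_rng(5)
        X = rng.normal(size=(131, 8))
        y = rng.binomial(1, 0.12, size=131)

        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        clf.fit(X, y)
        apparent_auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])

        # Cross-validated AUC guards against the optimism of apparent AUC.
        cv_auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
        print(f"apparent AUC = {apparent_auc:.2f}, cross-validated AUC = {cv_auc:.2f}")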

  12. Making Deformable Template Models Operational

    DEFF Research Database (Denmark)

    Fisker, Rune

    2000-01-01

    for estimation of the model parameters, which applies a combination of a maximum likelihood and a minimum distance criterion. Another contribution is a very fast search-based initialization algorithm using a filter interpretation of the likelihood model. These two methods can be applied to most deformable template ... Deformable template models are a very popular and powerful tool within the field of image processing and computer vision. This thesis treats this type of model extensively, with special focus on handling their common difficulties, i.e. model parameter selection, initialization and optimization. A proper handling of the common difficulties is essential for making the models operational by a non-expert user, which is a requirement for intensifying and commercializing the use of deformable template models. The thesis is organized as a collection of the most important articles, which has been

  13. Operator regularization in the Weinberg-Salam model

    International Nuclear Information System (INIS)

    Chowdhury, A.M.; McKeon, D.G.C.

    1987-01-01

    The technique of operator regularization is applied to the Weinberg-Salam model. By directly regulating operators that arise in the course of evaluating path integrals in the background-field formalism, we preserve all symmetries of the theory. An expansion due to Schwinger is employed to compute amplitudes perturbatively, thereby avoiding Feynman diagrams. No explicitly divergent quantities arise in this approach. The general features of the method are outlined with particular attention paid to the problem of simultaneously regulating functions of an operator A and inverse functions upon which A itself depends. Specific application is made to computation of the one-loop contribution to the muon-photon vertex in the Weinberg-Salam model in the limit of zero momentum transfer to the photon

  14. Modeling of HVAC operational faults in building performance simulation

    International Nuclear Information System (INIS)

    Zhang, Rongpeng; Hong, Tianzhen

    2017-01-01

    Highlights: •Discuss significance of capturing operational faults in existing buildings. •Develop a novel feature in EnergyPlus to model operational faults of HVAC systems. •Compare three approaches to faults modeling using EnergyPlus. •A case study demonstrates the use of the fault-modeling feature. •Future developments of new faults are discussed. -- Abstract: Operational faults are common in the heating, ventilating, and air conditioning (HVAC) systems of existing buildings, leading to a decrease in energy efficiency and occupant comfort. Various fault detection and diagnostic methods have been developed to identify and analyze HVAC operational faults at the component or subsystem level. However, current methods lack a holistic approach to predicting the overall impacts of faults at the building level—an approach that adequately addresses the coupling between various operational components, the synchronized effect between simultaneous faults, and the dynamic nature of fault severity. This study introduces the novel development of a fault-modeling feature in EnergyPlus which fills in the knowledge gap left by previous studies. This paper presents the design and implementation of the new feature in EnergyPlus and discusses in detail the fault-modeling challenges faced. The new fault-modeling feature enables EnergyPlus to quantify the impacts of faults on building energy use and occupant comfort, thus supporting the decision making of timely fault corrections. Including actual building operational faults in energy models also improves the accuracy of the baseline model, which is critical in the measurement and verification of retrofit or commissioning projects. As an example, EnergyPlus version 8.6 was used to investigate the impacts of a number of typical operational faults in an office building across several U.S. climate zones. The results demonstrate that the faults have significant impacts on building energy performance as well as on occupant

  15. Rotorwash Operational Footprint Modeling

    Science.gov (United States)

    2014-07-01

    I-13. Francis, J. K., and Gillespie, A., "Relating Gust Speed to Tree Damage in Hurricane Hugo, 1989," Journal of Arboriculture, November 1993...statement has been found to be correct. In many parts of the United States, the requirements for hurricane ...On August 18, 1983, Hurricane Alicia struck downtown Houston, Texas. Researchers were allowed into downtown Houston the following day to help survey

  16. Categorical model of structural operational semantics for imperative language

    Directory of Open Access Journals (Sweden)

    William Steingartner

    2016-12-01

    Full Text Available The definition of a programming language consists of the formal definition of its syntax and semantics. One of the most popular semantic methods used in various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after the execution of elementary steps of the program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states (an abstraction of computer memory) and morphisms model state changes (the execution of a program in elementary steps). The advantage of using a categorical model is its exact mathematical structure, with many useful proved properties, and its graphical illustration of program behavior as a path, i.e. a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. With its visualization and facility, our model is not only a new model of the structural operational semantics of imperative programming languages but can also serve educational purposes.
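
    The idea of states as objects and program steps as composable morphisms can be sketched directly in code; the following minimal Python rendering (the names and the toy assignment command are illustrative, not the paper's formalism) shows program behavior as a composition of state-changing functions.

        from typing import Callable, Dict

        State = Dict[str, int]                 # an object: abstract memory
        Morphism = Callable[[State], State]    # a morphism: one program step

        def assign(var: str, value: int) -> Morphism:
            """Elementary step [var := value] as a state-changing morphism."""
            def step(s: State) -> State:
                out = dict(s)
                out[var] = value
                return out
            return step

        def compose(*steps: Morphism) -> Morphism:
            """Program behavior as the composition (path) of its steps."""
            def run(s: State) -> State:
                for f in steps:
                    s = f(s)
                return s
            return run

        program = compose(assign("x", 1), assign("y", 2))
        print(program({}))   # {'x': 1, 'y': 2}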

  17. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    Science.gov (United States)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation to the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  18. Quantifying predictive capability of electronic health records for the most harmful breast cancer

    Science.gov (United States)

    Wu, Yirong; Fan, Jun; Peissig, Peggy; Berg, Richard; Tafti, Ahmad Pahlavan; Yin, Jie; Yuan, Ming; Page, David; Cox, Jennifer; Burnside, Elizabeth S.

    2018-03-01

    Improved prediction of the "most harmful" breast cancers that cause the most substantive morbidity and mortality would enable physicians to target more intense screening and preventive measures at those women who have the highest risk; however, such prediction models for the "most harmful" breast cancers have rarely been developed. Electronic health records (EHRs) represent an underused data source that has great research and clinical potential. Our goal was to quantify the value of EHR variables in "most harmful" breast cancer risk prediction. We identified 794 subjects who had breast cancer with primary non-benign tumors with their earliest diagnosis on or after 1/1/2004 from an existing personalized medicine data repository, including 395 "most harmful" breast cancer cases and 399 "least harmful" breast cancer cases. For these subjects, we collected EHR data comprising 6 components: demographics, diagnoses, symptoms, procedures, medications, and laboratory results. We developed two regularized prediction models, Ridge Logistic Regression (Ridge-LR) and Lasso Logistic Regression (Lasso-LR), to predict the "most harmful" breast cancer one year in advance. The area under the ROC curve (AUC) was used to assess model performance. We observed that the AUCs of the Ridge-LR and Lasso-LR models were 0.818 and 0.839, respectively. For both the Ridge-LR and Lasso-LR models, the predictive performance of the whole set of EHR variables was significantly higher than that of each individual component (p ...), indicating that EHR variables can predict the "most harmful" breast cancer, providing the possibility to personalize care for those women at the highest risk in clinical practice.
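
    The Ridge-LR versus Lasso-LR comparison amounts to swapping the penalty of a logistic regression; a simulated sketch (dimensions loosely mirroring the cohort, data entirely synthetic, penalty strengths arbitrary) follows.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Simulated stand-in for the EHR design matrix (demographics,
        # diagnoses, symptoms, procedures, medications, labs) with a
        # roughly balanced case mix, as in the 395 vs 399 split.
        rng = np.random.default_rng(6)
        X = rng.normal(size=(794, 300))
        y = rng.binomial(1, 0.5, size=794)

        ridge_lr = LogisticRegression(penalty="l2", solver="liblinear", C=1.0)
        lasso_lr = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)

        for name, clf in [("Ridge-LR", ridge_lr), ("Lasso-LR", lasso_lr)]:
            auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
            print(f"{name}: mean cross-validated AUC = {auc:.3f}")

    The L1 penalty additionally zeroes out uninformative EHR variables, which is why Lasso-LR doubles as a variable selection step.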

  19. Following an Optimal Batch Bioreactor Operations Model

    DEFF Research Database (Denmark)

    Ibarra-Junquera, V.; Jørgensen, Sten Bay; Virgen-Ortíz, J.J.

    2012-01-01

    The problem of following an optimal batch operation model for a bioreactor in the presence of uncertainties is studied. The optimal batch bioreactor operation model (OBBOM) refers to the bioreactor trajectory for nominal cultivation to be optimal. A multiple-variable dynamic optimization of fed...... as the master system which includes the optimal cultivation trajectory for the feed flow rate and the substrate concentration. The “real” bioreactor, the one with unknown dynamics and perturbations, is considered as the slave system. Finally, the controller is designed such that the real bioreactor...

  20. Representing Operational Knowledge of PWR Plant by Using Multilevel Flow Modelling

    DEFF Research Database (Denmark)

    Zhang, Xinxin; Lind, Morten; Jørgensen, Sten Bay

    2014-01-01

    The aim of this paper is to explore the capability of representing operational knowledge by using the Multilevel Flow Modelling (MFM) methodology. The paper demonstrates how operational knowledge can be inserted into MFM models and used to evaluate the plant state, identify the current situation and support operational decisions. This paper will provide a general MFM model of the primary side in a standard Westinghouse Pressurized Water Reactor (PWR) system, including the sub-systems of the Reactor Coolant System, Rod Control System, Chemical and Volume Control System, and emergency heat removal systems. The sub-systems' functions will be decomposed into sub-models according to different operational situations. An operational model will be developed based on the operating procedure by using MFM symbols, and this model can be used to implement coordination rules for organizing the utilization…

  1. Dynamic and adaptive policy models for coalition operations

    Science.gov (United States)

    Verma, Dinesh; Calo, Seraphin; Chakraborty, Supriyo; Bertino, Elisa; Williams, Chris; Tucker, Jeremy; Rivera, Brian; de Mel, Geeth R.

    2017-05-01

    It is envisioned that the success of future military operations depends on better integration, organizationally and operationally, among allies, coalition members, inter-agency partners, and so forth. However, this leads to a challenging and complex environment where the heterogeneity and dynamism of the operating environment intertwine with the evolving situational factors that affect the decision-making life cycle of the war fighter. Therefore, the users in such environments need secure, accessible, and resilient information infrastructures where policy-based mechanisms adapt the behaviours of the systems to meet end-user goals. By specifying and enforcing a policy-based model and framework for operations and security which accommodates heterogeneous coalitions, high levels of agility can be enabled to allow rapid assembly and restructuring of system and information resources. However, current prevalent policy models (e.g., the rule-based event-condition-action model and its variants) are not sufficient to deal with the highly dynamic and plausibly non-deterministic nature of these environments. Therefore, to address the above challenges, in this paper we present a new approach for policies which enables managed systems to take more autonomic decisions regarding their operations.

  2. Preliminary Hybrid Modeling of the Panama Canal: Operations and Salinity Diffusion

    Directory of Open Access Journals (Sweden)

    Luis Rabelo

    2012-01-01

    Full Text Available This paper deals with the initial modeling of water salinity and its diffusion into the lakes during lock operation on the Panama Canal. A hybrid operational model was implemented using the AnyLogic software simulation environment. This was accomplished by generating an operational discrete-event simulation model and a continuous simulation model based on differential equations, which modeled the salinity diffusion in the lakes. This paper presents that unique application, including the effective integration of lock operations and their impact on the environment.

  3. Neutron field control cybernetics model of RBMK reactor operator

    International Nuclear Information System (INIS)

    Polyakov, V.V.; Postnikov, V.V.; Sviridenkov, A.N.

    1992-01-01

    Results on parameter optimization for a cybernetic model of the RBMK reactor operator, based on the power-release control function, are presented. Convolutions of various criteria, applied previously in the algorithms of the program 'Adviser to reactor operator', formed the basis of the model. 7 refs.; 4 figs

  4. Novel high-resolution computed tomography-based radiomic classifier for screen-identified pulmonary nodules in the National Lung Screening Trial.

    Science.gov (United States)

    Peikert, Tobias; Duan, Fenghai; Rajagopalan, Srinivasan; Karwoski, Ronald A; Clay, Ryan; Robb, Richard A; Qin, Ziling; Sicks, JoRean; Bartholmai, Brian J; Maldonado, Fabien

    2018-01-01

    Optimization of the clinical management of screen-detected lung nodules is needed to avoid unnecessary diagnostic interventions. Herein we demonstrate the potential value of a novel radiomics-based approach for the classification of screen-detected indeterminate nodules. Independent quantitative variables assessing various radiologic nodule features such as sphericity, flatness, elongation, spiculation, lobulation and curvature were developed from the NLST dataset using 726 indeterminate nodules (all ≥ 7 mm; benign, n = 318 and malignant, n = 408). Multivariate analysis was performed using the least absolute shrinkage and selection operator (LASSO) method for variable selection and regularization in order to enhance the prediction accuracy and interpretability of the multivariate model. The bootstrapping method was then applied for the internal validation and the optimism-corrected AUC was reported for the final model. Eight of the originally considered 57 quantitative radiologic features were selected by LASSO multivariate modeling. These 8 features include variables capturing Location: vertical location (Offset carina centroid z); Size: volume estimate (Minimum enclosing brick); Shape: flatness; Density: texture analysis (Score Indicative of Lesion/Lung Aggression/Abnormality (SILA) texture); and surface characteristics: surface complexity (Maximum shape index and Average shape index) and estimates of surface curvature (Average positive mean curvature and Minimum mean curvature). This radiomics-based approach to the characterization of screen-detected nodules appears extremely promising; however, independent external validation is needed.
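
    The internal-validation recipe described here (LASSO selection plus a bootstrap optimism correction of the AUC) can be outlined in a few lines. The sketch below uses simulated nodule features and scikit-learn; it is an illustration of the general technique, not the NLST pipeline itself.

        # Sketch: LASSO-penalized logistic model with bootstrap
        # optimism-corrected AUC, on simulated data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        n, p = 726, 57                       # 726 nodules, 57 candidate features
        X = rng.normal(size=(n, p))
        y = (X[:, :8].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

        def fit_auc(X_fit, y_fit, X_eval, y_eval):
            m = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_fit, y_fit)
            return roc_auc_score(y_eval, m.decision_function(X_eval))

        apparent = fit_auc(X, y, X, y)
        optimism = []
        for _ in range(200):                 # bootstrap resamples
            idx = rng.integers(0, n, n)
            Xb, yb = X[idx], y[idx]
            # optimism = (AUC on bootstrap sample) - (AUC of same model on original data)
            optimism.append(fit_auc(Xb, yb, Xb, yb) - fit_auc(Xb, yb, X, y))

        print(f"apparent AUC = {apparent:.3f}, "
              f"optimism-corrected AUC = {apparent - np.mean(optimism):.3f}")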

  5. Operations management research methodologies using quantitative modeling

    NARCIS (Netherlands)

    Bertrand, J.W.M.; Fransoo, J.C.

    2002-01-01

    Gives an overview of quantitative model-based research in operations management, focusing on research methodology. Distinguishes between empirical and axiomatic research, and furthermore between descriptive and normative research. Presents guidelines for doing quantitative model-based research in

  6. THE HANFORD WASTE FEED DELIVERY OPERATIONS RESEARCH MODEL

    International Nuclear Information System (INIS)

    Berry, J.; Gallaher, B.N.

    2011-01-01

    Washington River Protection Solutions (WRPS), the Hanford tank farm contractor, is tasked with the long-term planning of the cleanup mission. Cleanup plans do not explicitly reflect the mission effects associated with tank farm operating equipment failures. EnergySolutions, a subcontractor to WRPS, has developed, in conjunction with WRPS tank farm staff, an Operations Research (OR) model to assess and identify areas to improve the performance of the Waste Feed Delivery Systems. This paper provides an example of how OR modeling can be used to help identify and mitigate operational risks at the Hanford tank farms.

  7. Model improves oil field operating cost estimates

    International Nuclear Information System (INIS)

    Glaeser, J.L.

    1996-01-01

    A detailed operating cost model that forecasts operating cost profiles toward the end of a field's life should be constructed for testing depletion strategies and plans for major oil fields. Developing a good understanding of future operating cost trends is important: incorrectly forecasting the trend can result in bad decision making regarding investments and reservoir operating strategies. Recent projects show that significant operating expense reductions can be made in the latter stages of field depletion without significantly reducing the expected ultimate recoverable reserves. Predicting future operating cost trends is especially important for operators who are currently producing a field and must forecast the economic limit of the property. For reasons presented in this article, it is usually not correct to assume either that operating expense stays fixed in dollar terms throughout the lifetime of a field or that operating costs stay fixed on a dollar-per-barrel basis.

  8. Sparse Additive Ordinary Differential Equations for Dynamic Gene Regulatory Network Modeling.

    Science.gov (United States)

    Wu, Hulin; Lu, Tao; Xue, Hongqi; Liang, Hua

    2014-04-02

    The gene regulation network (GRN) is a high-dimensional complex system, which can be represented by various mathematical or statistical models. The ordinary differential equation (ODE) model is one of the popular dynamic GRN models. High-dimensional linear ODE models have been proposed to identify GRNs, but with a limitation of the linear regulation effect assumption. In this article, we propose a sparse additive ODE (SA-ODE) model, coupled with ODE estimation methods and adaptive group LASSO techniques, to model dynamic GRNs that could flexibly deal with nonlinear regulation effects. The asymptotic properties of the proposed method are established and simulation studies are performed to validate the proposed approach. An application example for identifying the nonlinear dynamic GRN of T-cell activation is used to illustrate the usefulness of the proposed method.
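
    The core idea, regressing estimated derivatives on candidate regulators under a sparsity penalty, can be illustrated compactly. The sketch below uses plain LASSO on finite-difference derivative estimates as a simplified stand-in for the paper's SA-ODE estimation with adaptive group LASSO; the three-gene network is invented.

        # Sketch: sparse ODE identification via LASSO on finite-difference
        # derivatives (plain LASSO stands in for adaptive group LASSO).
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(2)
        t = np.linspace(0, 10, 400)
        dt = t[1] - t[0]

        # Simulate a toy network: gene 0 activates gene 1; gene 2 decays alone.
        X = np.zeros((len(t), 3)); X[0] = [1.0, 0.2, 1.0]
        for k in range(len(t) - 1):
            dx = np.array([-0.5 * X[k, 0],
                            0.8 * X[k, 0] - 0.3 * X[k, 1],
                           -0.6 * X[k, 2]])
            X[k + 1] = X[k] + dt * dx
        X += 0.01 * rng.normal(size=X.shape)     # measurement noise

        dXdt = np.gradient(X, dt, axis=0)        # derivative estimates
        for j in range(3):                       # one sparse regression per gene
            fit = LassoCV(cv=5).fit(X, dXdt[:, j])
            print(f"gene {j}: regulators = {np.flatnonzero(np.abs(fit.coef_) > 0.05)}")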

  9. Modeling for operational event risk assessment

    International Nuclear Information System (INIS)

    Sattison, M.B.

    1997-01-01

    The U.S. Nuclear Regulatory Commission has been using risk models to evaluate the risk significance of operational events in U.S. commercial nuclear power plants for more seventeen years. During that time, the models have evolved in response to the advances in risk assessment technology and insights gained with experience. Evaluation techniques fall into two categories, initiating event assessments and condition assessments. The models used for these analyses have become uniquely specialized for just this purpose

  10. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of selecting true ODE models, the efficiency of the parameter estimation compared to simply using the full and true models, and coverage probabilities of the estimated confidence intervals for ODE parameters, all of which have satisfactory performances. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among some well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.
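
    A sketch of the LSA step under simplifying assumptions: given the full-model estimate and a covariance estimate for it, the LSA objective is quadratic in the parameters, so the adaptive Lasso can be run on a transformed least-squares problem. The Cholesky trick below is one standard way to do this; the numbers are invented and this is not the authors' code.

        # Sketch: least squares approximation (LSA) + adaptive Lasso.
        # Minimize (b - bhat)' V (b - bhat) + lam * sum(w_j |b_j|), V = Sigma^{-1}.
        # Writing V = L L' turns this into an ordinary Lasso with design L'.
        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(3)
        p = 6
        bhat = np.array([1.5, 0.0, -0.8, 0.0, 0.0, 0.4]) + 0.05 * rng.normal(size=p)
        Sigma = np.eye(p) * 0.01                 # covariance of the full-model estimate
        L = np.linalg.cholesky(np.linalg.inv(Sigma))

        w = 1.0 / np.abs(bhat)                   # adaptive weights from the full model
        design = L.T / w                         # column scaling absorbs the weights
        response = L.T @ bhat

        fit = Lasso(alpha=0.5, fit_intercept=False).fit(design, response)
        b_lsa = fit.coef_ / w                    # undo the scaling
        print("selected parameters:", np.flatnonzero(np.abs(b_lsa) > 1e-8))

    Because the weights grow as the full-model estimates shrink toward zero, near-zero parameters are penalized heavily and dropped, which is what gives the adaptive Lasso its oracle behavior in this setting.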

  11. Operator modeling of a loss-of-pumping accident using MicroSAINT

    International Nuclear Information System (INIS)

    Olsen, L.M.

    1992-01-01

    The Savannah River Laboratory (SRL) human factors group has been developing methods for analyzing nuclear reactor operator actions during hypothetical design-basis accident scenarios. The SRL reactors operate at a lower temperature and pressure than power reactors resulting in accident sequences that differ from those of power reactors. Current methodology development is focused on modeling control room operator response times dictated by system event times specified in the Savannah River Site Reactor Safety Analysis Report (SAR). The modeling methods must be flexible enough to incorporate changes to hardware, procedures, or postulated system event times and permit timely evaluation. The initial model developed was for the loss-of-pumping accident (LOPA) because a significant number of operator actions are required to respond to this postulated event. Human factors engineers had been researching and testing a network modeling simulation language called MicroSAINT to simulate operators' personal and interpersonal actions relative to operating system events. The LOPA operator modeling project demonstrated the versatility and flexibility of MicroSAINT for modeling control room crew interactions

  12. Modelling of innovative SANEX process mal-operations

    International Nuclear Information System (INIS)

    McLachlan, F.; Taylor, R.; Whittaker, D.; Woodhead, D.; Geist, A.

    2016-01-01

    The innovative (i-) SANEX process for the separation of minor actinides from PUREX highly active raffinate is expected to employ a solvent phase comprising 0.2 M TODGA with 5 v/v% 1-octanol in an inert diluent. An initial extract / scrub section would be used to extract trivalent actinides and lanthanides from the feed whilst leaving other fission products in the aqueous phase, before the loaded solvent is contacted with a low acidity aqueous phase containing a sulphonated bis-triazinyl pyridine ligand (BTP) to effect a selective strip of the actinides, so yielding separate actinide (An) and lanthanide (Ln) product streams. This process has been demonstrated in lab scale trials at Juelich (FZJ). The SACSESS (Safety of Actinide Separation processes) project is focused on the evaluation and improvement of the safety of such future systems. A key element of this is the development of an understanding of the response of a process to upsets (mal-operations). It is only practical to study a small subset of possible mal-operations experimentally and consideration of the majority of mal-operations entails the use of a validated dynamic model of the process. Distribution algorithms for HNO_3, Am, Cm and the lanthanides have been developed and incorporated into a dynamic flowsheet model that has, so far, been configured to correspond to the extract-scrub section of the i-SANEX flowsheet trial undertaken at FZJ in 2013. Comparison is made between the steady state model results and experimental results. Results from modelling of low acidity and high temperature mal-operations are presented. (authors)

  13. Operator model-based design and evaluation of advanced systems

    International Nuclear Information System (INIS)

    Schryver, J.C.

    1988-01-01

    A multi-level operator modeling approach is recommended to provide broad support for the integrated design of advanced control and protection systems for new nuclear power plants. Preliminary design should address the symbiosis of automated systems and the human operator by giving careful attention to the roles assigned to these two system elements. A conceptual model of the operator role is developed in the context of a command-control-communication problem. According to this approach, joint responsibility can be realized in at least two ways: sharing or allocation. The inherent stabilities of different regions of the operator role space are considered.

  14. Do Red Edge and Texture Attributes from High-Resolution Satellite Data Improve Wood Volume Estimation in a Semi-Arid Mountainous Region?

    DEFF Research Database (Denmark)

    Schumacher, Paul; Mislimshoeva, Bunafsha; Brenning, Alexander

    2016-01-01

    to overcome this issue. However, clear recommendations on the suitability of specific proxies to provide accurate biomass information in semi-arid to arid environments are still lacking. This study contributes to the understanding of using multispectral high-resolution satellite data (RapidEye), specifically...... red edge and texture attributes, to estimate wood volume in semi-arid ecosystems characterized by scarce vegetation. LASSO (Least Absolute Shrinkage and Selection Operator) and random forest were used as predictive models relating in situ-measured aboveground standing wood volume to satellite data...
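
    The two predictive models named here can be compared in a few lines; the following is a minimal sketch assuming scikit-learn, with simulated spectral and texture predictors in place of the RapidEye variables.

        # Sketch: LASSO vs. random forest for wood-volume regression
        # on simulated red-edge/texture predictors.
        import numpy as np
        from sklearn.linear_model import LassoCV
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(150, 30))           # e.g. red-edge indices, texture metrics
        y = 5 * X[:, 0] - 3 * X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(size=150)

        models = [("LASSO", LassoCV(cv=5)),
                  ("random forest", RandomForestRegressor(n_estimators=300, random_state=0))]
        for name, model in models:
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
            print(f"{name}: cross-validated R^2 = {r2:.2f}")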

  15. Stochastic and simulation models of maritime intercept operations capabilities

    OpenAIRE

    Sato, Hiroyuki

    2005-01-01

    The research formulates and exercises stochastic and simulation models to assess the Maritime Intercept Operations (MIO) capabilities. The models focus on the surveillance operations of the Maritime Patrol Aircraft (MPA). The analysis using the models estimates the probability with which a terrorist vessel (Red) is detected, correctly classified, and escorted for intensive investigation and neutralization before it leaves an area of interest (AOI). The difficulty of obtaining adequate int...

  16. Multiple operating models for data linkage: A privacy positive

    Directory of Open Access Journals (Sweden)

    Katrina Irvine

    2017-04-01

    Our data linkage centre will implement new operating models with cascading levels of data handling on behalf of custodians. Sharing or publication of empirical evidence on timeframes, efficiency and quality can provide useful inputs in the design of new operating models and assist with the development of stakeholder and public confidence.

  17. Multiobjective Optimization Modeling Approach for Multipurpose Single Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Iosvany Recio Villa

    2018-04-01

    Full Text Available The water resources planning and management discipline recognizes the importance of a reservoir's carryover storage. However, mathematical models for reservoir operation that include carryover storage are scarce. This paper presents a novel multiobjective optimization modeling framework that uses the ε-constraint method and genetic algorithms as optimization techniques for the operation of multipurpose simple reservoirs, including carryover storage. The carryover storage was conceived by modifying Kritsky and Menkel's method for reservoir design at the operational stage. The main objective function minimizes the cost of the total annual water shortage for irrigation areas connected to a reservoir, while the secondary one maximizes its energy production. The model includes operational constraints for the reservoir, Kritsky and Menkel's method, irrigation areas, and the hydropower plant. The study is applied to the Carlos Manuel de Céspedes reservoir, establishing a 12-month planning horizon and an annual reliability of 75%. The results clearly demonstrate the applicability of the model, obtaining monthly releases from the reservoir that include the carryover storage, the degree of reservoir inflow regulation, water shortages in irrigation areas, and the energy generated by the hydroelectric plant. The main product is an operational graph that includes zones as well as rule and guide curves, which are used as triggers for long-term reservoir operation.

  18. Prostate cancer detection from model-free T1-weighted time series and diffusion imaging

    Science.gov (United States)

    Haq, Nandinee F.; Kozlowski, Piotr; Jones, Edward C.; Chang, Silvia D.; Goldenberg, S. Larry; Moradi, Mehdi

    2015-03-01

    The combination of Dynamic Contrast Enhanced (DCE) images with diffusion MRI has shown great potential in prostate cancer detection. The parameterization of DCE images to generate cancer markers is traditionally performed based on pharmacokinetic modeling. However, pharmacokinetic models make simplistic assumptions about the tissue perfusion process, require the knowledge of contrast agent concentration in a major artery, and the modeling process is sensitive to noise and fitting instabilities. We address this issue by extracting features directly from the DCE T1-weighted time course without modeling. In this work, we employed a set of data-driven features generated by mapping the DCE T1 time course to its principal component space, along with diffusion MRI features to detect prostate cancer. The optimal set of DCE features is extracted with sparse regularized regression through a Least Absolute Shrinkage and Selection Operator (LASSO) model. We show that when our proposed features are used within the multiparametric MRI protocol to replace the pharmacokinetic parameters, the area under ROC curve is 0.91 for peripheral zone classification and 0.87 for whole gland classification. We were able to correctly classify 32 out of 35 peripheral tumor areas identified in the data when the proposed features were used with support vector machine classification. The proposed feature set was used to generate cancer likelihood maps for the prostate gland.
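
    The model-free pipeline described (principal-component features from the T1-weighted time course, LASSO selection, then classification) can be outlined roughly as follows; the time courses and labels below are simulated, and scikit-learn is assumed.

        # Sketch: model-free DCE features via PCA, LASSO selection, then SVM.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LassoCV
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n_vox, n_t = 400, 40
        curves = rng.normal(size=(n_vox, n_t)).cumsum(axis=1)   # stand-in T1 time courses
        y = (curves[:, -1] + 0.5 * rng.normal(size=n_vox) > 0).astype(int)  # toy labels

        pca_feats = PCA(n_components=10).fit_transform(curves)  # data-driven DCE features
        diff_feats = rng.normal(size=(n_vox, 2))                # stand-in diffusion features
        X = np.hstack([pca_feats, diff_feats])

        keep = np.abs(LassoCV(cv=5).fit(X, y).coef_) > 1e-8     # sparse feature selection
        acc = cross_val_score(SVC(kernel="rbf"), X[:, keep], y, cv=5).mean()
        print(f"{keep.sum()} features kept, CV accuracy = {acc:.2f}")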

  19. Individual-Tree Diameter Growth Models for Mixed Nothofagus Second Growth Forests in Southern Chile

    Directory of Open Access Journals (Sweden)

    Paulo C. Moreno

    2017-12-01

    Full Text Available Second growth forests of Nothofagus obliqua (roble), N. alpina (raulí), and N. dombeyi (coihue), known locally as RORACO, are among the most important native mixed forests in Chile. To improve the sustainable management of these forests, managers need adequate information and models regarding not only existing forest conditions, but their future states under varying alternative silvicultural activities. In this study, an individual-tree diameter growth model was developed for the full geographical distribution of the RORACO forest type. This was achieved by fitting a complete model and comparing two variable selection procedures: cross-validation (CV) and least absolute shrinkage and selection operator (LASSO) regression. A small set of predictors successfully explained a large portion of the annual increment in diameter at breast height (DBH) growth, particularly variables associated with competition at both the tree and stand level. Goodness-of-fit statistics for this final model showed an empirical coefficient of correlation (R2emp) of 0.56, a relative root mean square error of 44.49%, and a relative bias of −1.96% for annual DBH growth predictions, and R2emp of 0.98 and 0.97 for DBH projection at 6 and 12 years, respectively. This model constitutes a simple and useful tool to support management plans for these forest ecosystems.

  20. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components to problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.

  1. Channel selection for simultaneous and proportional myoelectric prosthesis control of multiple degrees-of-freedom

    Science.gov (United States)

    Hwang, Han-Jeong; Hahne, Janne Mathias; Müller, Klaus-Robert

    2014-10-01

    Objective. Recent studies have shown the possibility of simultaneous and proportional control of electrically powered upper-limb prostheses, but there has been little investigation on optimal channel selection. The objective of this study is to find a robust channel selection method and the channel subsets most suitable for simultaneous and proportional myoelectric prosthesis control of multiple degrees-of-freedom (DoFs). Approach. Ten able-bodied subjects and one person with congenital upper-limb deficiency took part in this study, and performed wrist movements with various combinations of two DoFs (flexion/extension and radial/ulnar deviation). During the experiment, high density electromyographic (EMG) signals and the actual wrist angles were recorded with an 8 × 24 electrode array and a motion tracking system, respectively. The wrist angles were estimated from EMG features with ridge regression using the subsets of channels chosen by three different channel selection methods: (1) least absolute shrinkage and selection operator (LASSO), (2) sequential feature selection (SFS), and (3) uniform selection (UNI). Main results. SFS generally showed higher estimation accuracy than LASSO and UNI, but LASSO always outperformed SFS in terms of robustness, such as noise addition, channel shift and training data reduction. It was also confirmed that about 95% of the original performance obtained using all channels can be retained with only 12 bipolar channels individually selected by LASSO and SFS. Significance. From the analysis results, it can be concluded that LASSO is a promising channel selection method for accurate simultaneous and proportional prosthesis control. We expect that our results will provide a useful guideline to select optimal channel subsets when developing clinical myoelectric prosthesis control systems based on continuous movements with multiple DoFs.
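
    A rough sketch of the LASSO-based channel selection followed by ridge regression on the selected channels is shown below; synthetic signals replace the high-density EMG array, and the channel cap of 12 mirrors the figure reported in the abstract.

        # Sketch: LASSO channel selection, then ridge regression per DoF,
        # on synthetic single-feature-per-channel data.
        import numpy as np
        from sklearn.linear_model import LassoCV, Ridge
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        n, n_ch = 2000, 96                       # samples x EMG channels
        X = rng.normal(size=(n, n_ch))
        angle = X[:, [3, 17, 42, 60]].sum(axis=1) + 0.1 * rng.normal(size=n)  # one wrist DoF

        sel = LassoCV(cv=5).fit(X, angle)
        channels = np.flatnonzero(np.abs(sel.coef_) > 1e-8)[:12]   # keep at most 12 channels
        print("selected channels:", channels)

        r2 = cross_val_score(Ridge(alpha=1.0), X[:, channels], angle,
                             cv=5, scoring="r2").mean()
        print(f"ridge on selected channels: R^2 = {r2:.2f}")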

  2. Quark shell model using projection operators

    International Nuclear Information System (INIS)

    Ullah, N.

    1988-01-01

    Using the projection operators in the quark shell model, the wave functions for proton are calculated and expressions for calculating the wave function of neutron and also magnetic moment of proton and neutron are derived. (M.G.B.)

  3. Analysis and Modeling of Ground Operations at Hub Airports

    Science.gov (United States)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
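
    A minimal illustration of the taxi-out queuing idea: departures form a FIFO queue in front of the runway, and taxi-out time is the gap between pushback and takeoff. All parameters below are invented for the sketch.

        # Sketch: taxi-out as a simple FIFO departure queue (invented parameters).
        import numpy as np

        rng = np.random.default_rng(7)
        n = 200
        pushback = np.cumsum(rng.exponential(90, n))     # pushback times, ~1 per 90 s
        service = rng.exponential(75, n)                 # runway occupancy per departure

        depart = np.zeros(n)
        free_at = 0.0
        for i in range(n):
            start = max(pushback[i], free_at)            # wait until the runway is free
            depart[i] = start + service[i]
            free_at = depart[i]

        taxi_out = depart - pushback
        print(f"mean taxi-out: {taxi_out.mean()/60:.1f} min, "
              f"95th pct: {np.percentile(taxi_out, 95)/60:.1f} min")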

  4. Multilevel flow models studio: human-centralized development for operation support system

    International Nuclear Information System (INIS)

    Zhou Yangping; Hidekazu Yoshikawa; Liu Jingquan; Yang Ming; Ouyang Jun

    2005-01-01

    Computerized Operation Support Systems (COSS), integrating Artificial Intelligence, Multimedia and Network Technology, are now being proposed for reducing operators' cognitive load in process operation. This study proposes Human-Centralized Development (HCD), in which a COSS can be developed and maintained independently, conveniently and flexibly by the operators and experts of the industrial system, with little expertise in software development. A graphical interface system for HCD, Multilevel Flow Models Studio (MFMS), is proposed to assist the development of COSS. An Extensible Markup Language-based file structure is designed to represent the Multilevel Flow Models (MFM) model of the target system. With a friendly graphical interface, MFMS mainly consists of two components: 1) an editor to intelligently assist the user in establishing and maintaining the MFM model; 2) an executor to implement the application for monitoring, diagnosis and operational instruction in terms of the established MFM model. A prototype MFMS system has been developed and applied to construct a trial operation support system for a Nuclear Power Plant simulated by RELAP5/MOD2. (authors)

  5. Model of environmental life cycle assessment for coal mining operations

    Energy Technology Data Exchange (ETDEWEB)

    Burchart-Korol, Dorota, E-mail: dburchart@gig.eu; Fugiel, Agata, E-mail: afugiel@gig.eu; Czaplicka-Kolarz, Krystyna, E-mail: kczaplicka@gig.eu; Turek, Marian, E-mail: mturek@gig.eu

    2016-08-15

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. - Highlights: • A computational LCA model for assessment of coal mining operations • Identification of

  6. Model of environmental life cycle assessment for coal mining operations

    International Nuclear Information System (INIS)

    Burchart-Korol, Dorota; Fugiel, Agata; Czaplicka-Kolarz, Krystyna; Turek, Marian

    2016-01-01

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. - Highlights: • A computational LCA model for assessment of coal mining operations • Identification of

  7. A High-Speed Train Operation Plan Inspection Simulation Model

    Directory of Open Access Journals (Sweden)

    Yang Rui

    2018-01-01

    Full Text Available We developed a train operation simulation tool to inspect a train operation plan. Applying an improved Petri net, the train was regarded as a token, and the lines and stations were regarded as places, in accordance with high-speed train operation characteristics and network function. Location changes and running-information transfer of the high-speed train were realized by customizing a variety of transitions. The model was built based on the concept of component combination, considering random disturbances in the process of train running. The simulation framework can be generated quickly and the system operation can be completed according to the different test requirements and the required network data. We tested the simulation tool on the real-world Wuhan-Guangzhou high-speed line. The results showed that the proposed model is workable, that the simulation results basically coincide with objective reality, and that the tool can not only test the feasibility of a high-speed train operation plan, but also serve as a support model for a simulation platform with more capabilities.
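
    The token/place mapping translates naturally into a small data structure. The toy sketch below (invented, and far simpler than the authors' tool) shows trains as tokens moving through places when transitions fire.

        # Toy Petri-net sketch: trains as tokens, stations/line segments as
        # places, transitions moving tokens along a route.
        from collections import defaultdict

        transitions = [("depart_A", "station_A", "segment_AB"),
                       ("arrive_B", "segment_AB", "station_B")]
        marking = defaultdict(list)
        marking["station_A"] = ["train_1", "train_2"]    # two tokens (trains) at A

        def fire(name):
            """Fire the named transition if its input place holds a token."""
            for t, src, dst in transitions:
                if t == name and marking[src]:
                    train = marking[src].pop(0)
                    marking[dst].append(train)
                    print(f"{name}: {train} -> {dst}")
                    return True
            return False

        fire("depart_A"); fire("arrive_B"); fire("depart_A")
        print(dict(marking))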

  8. Cognitive modeling and dynamic probabilistic simulation of operating crew response to complex system accidents. Part 4: IDAC causal model of operator problem-solving response

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.H.J. [Center for Risk and Reliability, University of Maryland, College Park, MD 20742 (United States) and Paul Scherrer Institute, 5232 Villigen PSI (Switzerland)]. E-mail: yhc@umd.edu; Mosleh, A. [Center for Risk and Reliability, University of Maryland, College Park, MD 20742 (United States)

    2007-08-15

    This is the fourth in a series of five papers describing the Information, Decision, and Action in Crew context (IDAC) operator response model for human reliability analysis. An example application of this modeling technique is also discussed in this series. The model has been developed to probabilistically predict the responses of a nuclear power plant control room operating crew in accident conditions. The operator response spectrum includes cognitive, emotional, and physical activities during the course of an accident. This paper assesses the effects of the performance-influencing factors (PIFs) affecting the operators' problem-solving responses including information pre-processing (I), diagnosis and decision making (D), and action execution (A). Literature support and justifications are provided for the assessment on the influences of PIFs.

  9. Cognitive modeling and dynamic probabilistic simulation of operating crew response to complex system accidents. Part 4: IDAC causal model of operator problem-solving response

    International Nuclear Information System (INIS)

    Chang, Y.H.J.; Mosleh, A.

    2007-01-01

    This is the fourth in a series of five papers describing the Information, Decision, and Action in Crew context (IDAC) operator response model for human reliability analysis. An example application of this modeling technique is also discussed in this series. The model has been developed to probabilistically predict the responses of a nuclear power plant control room operating crew in accident conditions. The operator response spectrum includes cognitive, emotional, and physical activities during the course of an accident. This paper assesses the effects of the performance-influencing factors (PIFs) affecting the operators' problem-solving responses including information pre-processing (I), diagnosis and decision making (D), and action execution (A). Literature support and justifications are provided for the assessment on the influences of PIFs.

  10. Marine Vessel Models in Changing Operational Conditions - A Tutorial

    DEFF Research Database (Denmark)

    Perez, Tristan; Sørensen, Asgeir; Blanke, Mogens

    2006-01-01

    This tutorial paper provides an introduction, from a systems perspective, to the topic of ship motion dynamics of surface ships. It presents a classification of parametric models currently used for monitoring and control of marine vessels. These models are valid for certain vessel operational conditions (VOC). However, since marine systems operate in changing VOCs, there is a need to adapt the models. To date, there is no theory available to describe a general model valid across different VOCs due to the complexity of the hydrodynamics involved. It is believed that system identification could…

  11. A toy model for higher spin Dirac operators

    International Nuclear Information System (INIS)

    Eelbode, D.; Van de Voorde, L.

    2010-01-01

    This paper deals with the higher spin Dirac operator Q_{2,1} acting on functions taking values in an irreducible representation space for so(m) with highest weight (5/2, 3/2, 1/2, ..., 1/2). This operator acts as a toy model for generalizations of the classical Rarita-Schwinger equations in Clifford analysis. Polynomial null solutions for this operator are studied in particular.

  12. Modeling and Simulation for Mission Operations Work System Design

    Science.gov (United States)

    Sierhuis, Maarten; Clancey, William J.; Seah, Chin; Trimble, Jay P.; Sims, Michael H.

    2003-01-01

    Work system analysis and design is complex and non-deterministic. In this paper we describe Brahms, a multiagent modeling and simulation environment for designing complex interactions in human-machine systems. Brahms was originally conceived as a business process design tool that simulates work practices, including social systems of work. We describe our modeling and simulation method for mission operations work systems design, based on a research case study in which we used Brahms to design mission operations for a proposed discovery mission to the Moon. We then describe the results of an actual method application project: the Brahms Mars Exploration Rover. Space mission operations are similar to operations of traditional organizations; we show that the application of Brahms to space mission operations design is relevant and transferable to other types of business processes in organizations.

  13. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
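
    The adjustable-model idea, estimating the operating point from the motor-side speed estimate plus a parameterized pump curve, can be sketched with the affinity laws. The curve and system coefficients below are invented examples, not measured pump data.

        # Sketch: estimate pump flow from rotational speed via a QH curve and
        # the affinity laws (invented coefficients).
        from scipy.optimize import brentq

        H0 = lambda q: 40.0 - 25.0 * q**2        # pump head curve at nominal speed [m]
        H_static, c_sys = 10.0, 60.0             # system curve: H = H_static + c_sys * q^2

        def flow_at_speed(k):
            """Operating point at relative speed k (affinity: H = k^2 * H0(q/k))."""
            f = lambda q: k**2 * H0(q / k) - (H_static + c_sys * q**2)
            return brentq(f, 1e-6, 2.0)          # intersection of pump and system curves

        for k in (1.0, 0.9, 0.8):
            q = flow_at_speed(k)
            print(f"speed {k:.1f}: flow = {q:.3f} m3/s, "
                  f"head = {H_static + c_sys * q**2:.1f} m")

    Given a speed estimate from the frequency converter, such a model yields flow rate and head without external flow or pressure sensors, which is the replacement the paper describes.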

  14. Bayesian network modeling of operator's state recognition process

    International Nuclear Information System (INIS)

    Hatakeyama, Naoki; Furuta, Kazuo

    2000-01-01

    Nowadays we are facing the difficult problem of establishing a good relation between humans and machines. To solve this problem, we suppose that machine systems need to have a model of human behavior. In this study we model the state cognition process of a PWR plant operator as an example. We use a Bayesian network as an inference engine. We incorporate the knowledge hierarchy in the Bayesian network and confirm its validity using the example of a PWR plant operator. (author)

  15. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
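
    One standard way to identify such a linear Koopman-style model form directly from data is dynamic mode decomposition (DMD). The bare-bones numpy sketch below recovers the continuous-time eigenvalues of a damped oscillation from snapshot pairs; it illustrates the general technique rather than the paper's specific framework.

        # Sketch: dynamic mode decomposition as a finite-dimensional
        # Koopman-operator estimate from time-series snapshots.
        import numpy as np

        rng = np.random.default_rng(8)
        t = np.linspace(0, 20, 500)
        # Two-dimensional observable of a damped oscillation plus noise.
        Y = np.vstack([np.exp(-0.05 * t) * np.cos(t),
                       np.exp(-0.05 * t) * np.sin(t)]) + 0.001 * rng.normal(size=(2, len(t)))

        X0, X1 = Y[:, :-1], Y[:, 1:]             # snapshot pairs (x_k, x_{k+1})
        A = X1 @ np.linalg.pinv(X0)              # best linear one-step operator
        eigvals = np.linalg.eigvals(A)

        dt = t[1] - t[0]
        cont = np.log(eigvals) / dt              # continuous-time eigenvalues
        print("decay rates ~", cont.real.round(3), " frequencies ~", cont.imag.round(3))

    The recovered eigenvalues (about -0.05 +/- 1j here) are exactly the kind of spectral invariants the framework uses as features for classification and as a basis for forecasting.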

  16. Improving traffic signal management and operations : a basic service model.

    Science.gov (United States)

    2009-12-01

    This report provides a guide for achieving a basic service model for traffic signal management and : operations. The basic service model is based on simply stated and defensible operational objectives : that consider the staffing level, expertise and...

  17. Modelling the basic error tendencies of human operators

    Energy Technology Data Exchange (ETDEWEB)

    Reason, J.

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance.
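
    A toy rendering of the two retrieval 'primitives': candidate knowledge structures are scored by similarity to the current cues, and ties are broken by gambling on the most frequently used structure. All names and numbers below are invented for illustration.

        # Toy sketch of similarity-matching plus frequency-gambling retrieval.
        # Knowledge structures: (cue set, usage frequency), invented values.
        knowledge = {
            "routine_startup": ({"alarm_A", "pump_on", "valve_open"}, 120),
            "rare_fault_X":    ({"alarm_A", "pump_on", "sensor_drift"}, 3),
        }

        def retrieve(cues):
            def score(item):
                cue_set, freq = item[1]
                similarity = len(cues & cue_set) / len(cue_set)   # similarity-matching
                return (similarity, freq)                         # frequency breaks ties
            return max(knowledge.items(), key=score)[0]

        # Both structures match the two available cues equally well, so the
        # frequency gamble picks the routine schema -- a "strong-but-wrong"
        # response whenever the rare fault is the true situation.
        print(retrieve({"alarm_A", "pump_on"}))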

  18. Modelling the basic error tendencies of human operators

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in total, simulate the general character of operator performance. (author)

  19. Modelling the basic error tendencies of human operators

    International Nuclear Information System (INIS)

    Reason, James

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance. (author)

  20. Prediction of genetic values of quantitative traits with epistatic effects in plant breeding populations.

    Science.gov (United States)

    Wang, D; Salah El-Basyoni, I; Stephen Baenziger, P; Crossa, J; Eskridge, K M; Dweikat, I

    2012-11-01

    Though epistasis has long been postulated to have a critical role in genetic regulation of important pathways as well as provide a major source of variation in the process of speciation, the importance of epistasis for genomic selection in the context of plant breeding is still being debated. In this paper, we report the results on the prediction of genetic values with epistatic effects for 280 accessions in the Nebraska Wheat Breeding Program using adaptive mixed least absolute shrinkage and selection operator (LASSO). The development of adaptive mixed LASSO, originally designed for association mapping, for the context of genomic selection is reported. The results show that adaptive mixed LASSO can be successfully applied to the prediction of genetic values while incorporating both marker main effects and epistatic effects. Especially, the prediction accuracy is substantially improved by the inclusion of two-locus epistatic effects (more than onefold in some cases as measured by cross-validation correlation coefficient), which is observed for multiple traits and planting locations. This points to significant potential in using non-additive genetic effects for genomic selection in crop breeding practices.
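
    Including two-locus epistatic effects amounts to augmenting the marker matrix with pairwise products before the penalized regression. The compact sketch below uses plain cross-validated LASSO as a stand-in for the adaptive mixed LASSO (which additionally carries random effects); genotypes and trait values are simulated.

        # Sketch: genomic prediction with main-effect markers plus two-locus
        # epistatic (pairwise product) terms, via LASSO on simulated data.
        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import LassoCV
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(9)
        n, m = 280, 40                            # accessions x markers (toy sizes)
        G = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 genotype codes
        y = G[:, 0] - G[:, 1] + 1.5 * G[:, 2] * G[:, 3] + rng.normal(size=n)

        pairs = list(combinations(range(m), 2))
        X = np.hstack([G] + [G[:, [i]] * G[:, [j]] for i, j in pairs])

        pred_full = cross_val_predict(LassoCV(cv=5), X, y, cv=5)
        pred_main = cross_val_predict(LassoCV(cv=5), G, y, cv=5)
        print(f"CV correlation, main + epistatic terms: {np.corrcoef(pred_full, y)[0, 1]:.2f}")
        print(f"CV correlation, main effects only:      {np.corrcoef(pred_main, y)[0, 1]:.2f}")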

  1. Operations and support cost modeling of conceptual space vehicles

    Science.gov (United States)

    Ebeling, Charles

    1994-01-01

    The University of Dayton is pleased to submit this annual report to the National Aeronautics and Space Administration (NASA) Langley Research Center, which documents the development of an operations and support (O&S) cost model as part of a larger life cycle cost (LCC) structure. It is intended for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years, in which a reliability and maintainability model was developed, to the initial development of an operations and support life cycle cost model. Cost categories were initially patterned after NASA's three-axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. A revised cost element structure (CES), which is currently under study by NASA, was used to establish the basic cost elements used in the model. While the focus of the effort was on operations and maintenance costs and other recurring costs, the computerized model allowed for other cost categories such as RDT&E and production costs to be addressed. Secondary tasks performed concurrent with the development of the costing model included support and upgrades to the reliability and maintainability (R&M) model. The primary result of the current research has been a methodology and a computer implementation of the methodology to provide for timely operations and support cost analysis during the conceptual design activities.

  2. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was : developed based upon three modules including a jet exhaust model, jet centerline decay : model and aircraft motion model. The final analysis was compared with d...

  3. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  4. Spectral decomposition of model operators in de Branges spaces

    International Nuclear Information System (INIS)

    Gubreev, Gennady M; Tarasenko, Anna A

    2011-01-01

    The paper is devoted to studying a class of completely continuous nonselfadjoint operators in de Branges spaces of entire functions. Among other results, a class of unconditional bases of de Branges spaces consisting of values of their reproducing kernels is constructed. The operators that are studied are model operators in the class of completely continuous non-dissipative operators with two-dimensional imaginary parts. Bibliography: 22 titles.

  5. Life Modeling for Nickel-Hydrogen Batteries in Geosynchronous Satellite Operation

    National Research Council Canada - National Science Library

    Zimmerman, A. H; Ang, V. J

    2005-01-01

    .... The model has been used to predict how properly designed and operated nickel-hydrogen battery lifetimes should depend on the operating environments and charge control methods typically used in GEO operation...

  6. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages to the similar methodology for the proportional hazards model. One complication compared...... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number...... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...

  7. Development of a Multicomponent Prediction Model for Acute Esophagitis in Lung Cancer Patients Receiving Chemoradiotherapy

    International Nuclear Information System (INIS)

    De Ruyck, Kim; Sabbe, Nick; Oberije, Cary; Vandecasteele, Katrien; Thas, Olivier; De Ruysscher, Dirk; Lambin, Phillipe; Van Meerbeeck, Jan; De Neve, Wilfried; Thierens, Hubert

    2011-01-01

    Purpose: To construct a model for the prediction of acute esophagitis in lung cancer patients receiving chemoradiotherapy by combining clinical data, treatment parameters, and genotyping profile. Patients and Methods: Data were available for 273 lung cancer patients treated with curative chemoradiotherapy. Clinical data included gender, age, World Health Organization performance score, nicotine use, diabetes, chronic disease, tumor type, tumor stage, lymph node stage, tumor location, and medical center. Treatment parameters included chemotherapy, surgery, radiotherapy technique, tumor dose, mean fractionation size, mean and maximal esophageal dose, and overall treatment time. A total of 332 genetic polymorphisms were considered in 112 candidate genes. The prediction model was built by lasso logistic regression for predictor selection, followed by classical logistic regression for unbiased estimation of the coefficients. Performance of the model was expressed as the area under the receiver operating characteristic curve and as the false-negative rate at the optimal point on the receiver operating characteristic curve. Results: A total of 110 patients (40%) developed acute esophagitis Grade ≥2 (Common Terminology Criteria for Adverse Events v3.0). The final model contained chemotherapy treatment, lymph node stage, mean esophageal dose, gender, overall treatment time, radiotherapy technique, rs2302535 (EGFR), rs16930129 (ENG), rs1131877 (TRAF3), and rs2230528 (ITGB2). The area under the curve was 0.87, and the false-negative rate was 16%. Conclusion: Prediction of acute esophagitis can be improved by combining clinical, treatment, and genetic factors. A multicomponent prediction model for acute esophagitis with a sensitivity of 84% was constructed with two clinical parameters, four treatment parameters, and four genetic polymorphisms.
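
    The two-stage recipe (lasso logistic regression to pick predictors, then an unpenalized refit for unbiased coefficients) looks roughly like the sketch below in scikit-learn terms (penalty=None requires a recent scikit-learn); simulated data replaces the patient records.

        # Sketch: lasso logistic regression for predictor selection, followed
        # by an unpenalized logistic refit on the selected variables.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(10)
        n, p = 273, 60                            # patients x candidate predictors
        X = rng.normal(size=(n, p))
        y = (X[:, 0] + 0.8 * X[:, 1] - X[:, 2] + rng.normal(size=n) > 0).astype(int)

        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(X, y)
        keep = np.flatnonzero(lasso.coef_[0] != 0)
        print("selected predictors:", keep)

        refit = LogisticRegression(penalty=None, max_iter=5000).fit(X[:, keep], y)
        auc = roc_auc_score(y, refit.predict_proba(X[:, keep])[:, 1])
        print(f"apparent AUC of refit model = {auc:.2f}")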

  8. Dynamic Computation of Change Operations in Version Management of Business Process Models

    Science.gov (United States)

    Küster, Jochen Malte; Gerth, Christian; Engels, Gregor

    Version management of business process models requires that changes can be resolved by applying change operations. In order to give a user maximal freedom concerning the application order of change operations, position parameters of change operations must be computed dynamically during change resolution. In such an approach, change operations with computed position parameters must be applicable on the model and dependencies and conflicts of change operations must be taken into account because otherwise invalid models can be constructed. In this paper, we study the concept of partially specified change operations where parameters are computed dynamically. We provide a formalization for partially specified change operations using graph transformation and provide a concept for their applicability. Based on this, we study potential dependencies and conflicts of change operations and show how these can be taken into account within change resolution. Using our approach, a user can resolve changes of business process models without being unnecessarily restricted to a certain order.

  9. A multi-stage intelligent approach based on an ensemble of two-way interaction model for forecasting the global horizontal radiation of India

    International Nuclear Information System (INIS)

    Jiang, He; Dong, Yao; Xiao, Ling

    2017-01-01

    Highlights: • An ensemble learning system is proposed to forecast global solar radiation. • LASSO is utilized as the feature selection method for the subset models. • GSO is used to select the weight vector aggregating the responses of the subset models. • A simple and efficient algorithm is designed based on a thresholding function. • Theoretical analysis focusing on the error rate is provided. - Abstract: Forecasting of effective solar irradiation has attracted huge interest in recent decades, mainly due to its various applications in grid-connected photovoltaic installations. This paper develops and investigates an ensemble-learning-based multistage intelligent approach to forecast 5 days of global horizontal radiation at four given locations in India. The two-way interaction model is considered with the purpose of detecting the associated correlations between the features. The main structure of the novel method is ensemble learning, based on the divide-and-conquer principle, applied to enhance the forecasting accuracy and model stability. An efficient feature selection method, LASSO, is performed in the input space with the regularization parameter selected by cross-validation. A weight vector which best represents the importance of each individual model in the ensemble system is provided by glowworm swarm optimization. The combination of feature selection and parameter selection is helpful in creating the diversity of the ensemble learning. In order to illustrate the validity of the proposed method, the datasets at four different locations in India are split into training and test datasets. The results of the real data experiments demonstrate the efficiency and efficacy of the proposed method compared with other competitors.
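
    A rough sketch of the pipeline this abstract outlines, with assumed arrays X_train, y_train, X_val, y_val of radiation features and targets; glowworm swarm optimization is replaced here by non-negative least squares for the ensemble weights, which is a simplification rather than the authors' method:

        import numpy as np
        from scipy.optimize import nnls
        from sklearn.linear_model import LassoCV, LinearRegression

        # LASSO feature selection, regularization parameter chosen by CV.
        sel = LassoCV(cv=5).fit(X_train, y_train)
        keep = np.flatnonzero(sel.coef_)

        # Divide and conquer: train subset models on bootstrap resamples.
        rng = np.random.default_rng(0)
        preds = []
        for _ in range(10):
            idx = rng.integers(0, len(y_train), len(y_train))
            m = LinearRegression().fit(X_train[idx][:, keep], y_train[idx])
            preds.append(m.predict(X_val[:, keep]))
        P = np.column_stack(preds)

        # Weight vector aggregating the subset-model responses (GSO in the
        # paper; NNLS here as a stand-in).
        w, _ = nnls(P, y_val)
        forecast = P @ w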

  10. Simulation of nuclear plant operation into a stochastic energy production model

    International Nuclear Information System (INIS)

    Pacheco, R.L.

    1983-04-01

    A simulation model of nuclear plant operation is developed to fit into a stochastic energy production model. In order to improve the stochastic model used, and also to reduce the computational time burdened by the aggregation of the nuclear plant operation model, a study of tail truncation of the unsupplied demand distribution function has been performed. (E.G.)

  11. River and Reservoir Operations Model, Truckee River basin, California and Nevada, 1998

    Science.gov (United States)

    Berris, Steven N.; Hess, Glen W.; Bohman, Larry R.

    2001-01-01

    The demand for all uses of water in the Truckee River Basin, California and Nevada, commonly is greater than can be supplied. Storage reservoirs in the system have a maximum effective total capacity equivalent to less than two years of average river flows, so longer-term droughts can result in substantial water-supply shortages for irrigation and municipal users and may stress fish and wildlife ecosystems. Title II of Public Law (P.L.) 101-618, the Truckee–Carson–Pyramid Lake Water Rights Settlement Act of 1990, provides a foundation for negotiating and developing operating criteria, known as the Truckee River Operating Agreement (TROA), to balance interstate and interbasin allocation of water rights among the many interests competing for water from the Truckee River. In addition to TROA, the Truckee River Water Quality Settlement Agreement (WQSA), signed in 1996, provides for acquisition of water rights to resolve water-quality problems during low flows along the Truckee River in Nevada. Efficient execution of many of the planning, management, or environmental assessment requirements of TROA and WQSA will require detailed water-resources data coupled with sound analytical tools. Analytical modeling tools constructed and evaluated with such data could help assess effects of alternative operational scenarios related to reservoir and river operations, water-rights transfers, and changes in irrigation practices. The Truckee–Carson Program of the U.S. Geological Survey, to support U.S. Department of the Interior implementation of P.L. 101-618, is developing a modeling system to support efficient water-resources planning, management, and allocation. The daily operations model documented herein is a part of the modeling system that includes a database management program, a graphical user interface program, and a program with modules that simulate river/reservoir operations and a variety of hydrologic processes. The operations module is capable of simulating lake

  12. Aircraft operational reliability—A model-based approach and a case study

    International Nuclear Information System (INIS)

    Tiassou, Kossi; Kanoun, Karama; Kaâniche, Mohamed; Seguin, Christel; Papadopoulos, Chris

    2013-01-01

    The success of an aircraft mission is subject to the fulfillment of some operational requirements before and during each flight. As these requirements depend essentially on the aircraft system components and the mission profile, the effects of failures can be very severe if they are not anticipated. Hence, one should be able to assess the aircraft operational reliability with regard to its missions in order to be able to cope with failures. We address aircraft operational reliability modeling to support maintenance planning during mission execution. We develop a modeling approach, based on a meta-model that is used as a basis: (i) to structure the information needed to assess aircraft operational reliability and (ii) to build a stochastic model that can be tuned dynamically, in order to take into account the aircraft system operational state, a mission profile and the maintenance facilities available at the flight stop locations involved in the mission. The aim is to enable operational reliability assessment online. A case study, based on an aircraft subsystem, is considered for illustration using the Stochastic Activity Networks (SANs) formalism.

  13. The use of flow models for design of plant operating procedures

    International Nuclear Information System (INIS)

    Lind, M.

    1982-03-01

    The report describes a systematic approach to the design of operating procedures or sequence automatics for process plant control. It is shown how flow models representing the topology of mass and energy flows on different levels of function provide plant information which is important for the considered design problem. The modelling methodology leads to the definition of three categories of control tasks. Two tasks relate to the regulation and control of changes of levels and flows of mass and energy in a system within a defined mode of operation. The third type relates to the control actions necessary for switching operations involved in changes of operating mode. These control tasks are identified for a given plant as part of the flow modelling activity. It is discussed how the flow model deals with the problem of assigning control task precedence in time, e.g. during start-up or shut-down operations. The method may be a basis for providing automated procedure support to the operator in unforeseen situations or may be a tool for control design. (auth.)

  14. Diffusion Indexes With Sparse Loadings

    DEFF Research Database (Denmark)

    Kristensen, Johannes Tang

    2017-01-01

    The use of large-dimensional factor models in forecasting has received much attention in the literature with the consensus being that improvements on forecasts can be achieved when comparing with standard models. However, recent contributions in the literature have demonstrated that care needs...... to the problem by using the least absolute shrinkage and selection operator (LASSO) as a variable selection method to choose between the possible variables and thus obtain sparse loadings from which factors or diffusion indexes can be formed. This allows us to build a more parsimonious factor model...... in forecasting accuracy and thus find it to be an important alternative to PC. Supplementary materials for this article are available online....
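
    The two-step idea of this abstract, LASSO screening followed by principal-component factors, can be sketched as follows; X_panel (observations by candidate predictor series), y_target, and the number of factors are assumptions for illustration:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LassoCV

        # Step 1: LASSO picks the subset of series allowed to load on the
        # factors, which is what makes the resulting loadings sparse.
        lasso = LassoCV(cv=5).fit(X_panel, y_target)
        keep = np.flatnonzero(lasso.coef_)

        # Step 2: diffusion indexes = principal components of the selected
        # series, to be used as regressors in the forecasting equation.
        factors = PCA(n_components=2).fit_transform(X_panel[:, keep])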

  15. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  16. A flexible model for economic operational management of grid battery energy storage

    International Nuclear Information System (INIS)

    Fares, Robert L.; Webber, Michael E.

    2014-01-01

    To connect energy storage operational planning with real-time battery control, this paper integrates a dynamic battery model with an optimization program. First, we transform a behavioral circuit model designed to describe a variety of battery chemistries into a set of coupled nonlinear differential equations. Then, we discretize the differential equations to integrate the battery model with a GAMS (General Algebraic Modeling System) optimization program, which decides when the battery should charge and discharge to maximize its operating revenue. We demonstrate the capabilities of our model by applying it to lithium-ion (Li-ion) energy storage operating in Texas' restructured electricity market. By simulating 11 years of operation, we find that our model can robustly compute an optimal charge-discharge schedule that maximizes daily operating revenue without violating a battery's operating constraints. Furthermore, our results show there is significant variation in potential operating revenue from one day to the next. The revenue potential of Li-ion storage varies from approximately $0–1800/MWh of energy discharged, depending on the volatility of wholesale electricity prices during an operating day. Thus, it is important to consider the material degradation-related “cost” of performing a charge-discharge cycle in battery operational management, so that the battery only operates when revenue exceeds cost. - Highlights: • A flexible, dynamic battery model is integrated with an optimization program. • Electricity price data is used to simulate 11 years of Li-ion operation on the grid. • The optimization program robustly computes an optimal charge-discharge schedule. • Variation in daily Li-ion battery revenue potential from 2002 to 2012 is shown. • We find it is important to consider the cost of a grid duty cycle
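
    The core scheduling decision in this record can be illustrated with a far simpler linear program than the paper's GAMS model with a dynamic circuit battery; a sketch assuming an hourly price array (in $/MWh), energy capacity E, power limit P, charging efficiency eta, and no degradation cost:

        import numpy as np
        from scipy.optimize import linprog

        T = len(price)                      # price: assumed hourly price array
        # Variables: hourly charge c[t] >= 0 and discharge d[t] >= 0,
        # stacked as [c, d]. Maximize revenue sum(price * (d - c)) by
        # minimizing its negative.
        cost = np.concatenate([price, -price])
        # State of charge after hour t: soc[t] = sum_{s<=t} (eta*c[s] - d[s]).
        L = np.tril(np.ones((T, T)))
        A_ub = np.block([[-eta * L,  L],    # soc >= 0
                         [ eta * L, -L]])   # soc <= E
        b_ub = np.concatenate([np.zeros(T), np.full(T, E)])
        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, P)] * (2 * T))
        charge, discharge = res.x[:T], res.x[T:]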

  17. Nordic Model of Subregional Co-Operation

    Directory of Open Access Journals (Sweden)

    Grzela Joanna

    2017-12-01

    Nordic co-operation is renowned throughout the world and perceived as the collaboration of a group of countries which are similar in their views and activities. The main pillars of the Nordic model of co-operation are the tradition of constitutional principles, activity of public movements and organisations, freedom of speech, equality, solidarity, and respect for the natural environment. In connection with labour and entrepreneurship, these elements are the features of a society which favours efficiency, a sense of security and balance between an individual and a group. Currently, the collaboration is a complex process, including many national, governmental and institutional connections which form the “Nordic family”.

  18. Tracing the breeding farm of domesticated pig using feature selection (Sus scrofa)

    Directory of Open Access Journals (Sweden)

    Taehyung Kwon

    2017-11-01

    Objective: Increasing food safety demands in the animal product market have created a need for a system to trace the food distribution process, from the manufacturer to the retailer, and genetic traceability is an effective method to trace the origin of animal products. In this study, we successfully achieved the farm tracing of 6,018 multi-breed pigs, using single nucleotide polymorphism (SNP) markers strictly selected through least absolute shrinkage and selection operator (LASSO) feature selection. Methods: We performed farm tracing of domesticated pig (Sus scrofa) from SNP markers and selected the most relevant features for accurate prediction. Considering the multi-breed composition of our data, we performed feature selection using LASSO penalization on 4,002 SNPs that are shared between breeds, which also includes 179 SNPs with small between-breed differences. The 100 highest-scored features were extracted from iterative simulations and then evaluated using machine-learning based classifiers. Results: We selected 1,341 SNPs from over 45,000 SNPs through iterative LASSO feature selection, to minimize between-breed differences. We subsequently selected the 100 highest-scored SNPs from iterative scoring, and observed high statistical measures in the classification of breeding farms by cross-validation using only these SNPs. Conclusion: The study represents a successful application of LASSO feature selection on multi-breed pig SNP data to trace farm information, which provides a valuable method and possibility for further research on genetic traceability.
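
    A sketch of the iterative LASSO scoring the record describes, under assumed inputs geno (an animals-by-SNPs allele dose matrix) and farm (farm labels); the number of iterations and the penalty strength are placeholders, not the authors' values:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        scores = np.zeros(geno.shape[1])
        for _ in range(20):                  # iterative LASSO simulations
            idx = rng.choice(len(farm), size=len(farm), replace=True)
            l1 = LogisticRegression(penalty="l1", solver="saga", C=0.1,
                                    max_iter=500)
            l1.fit(geno[idx], farm[idx])
            scores += (np.abs(l1.coef_).sum(axis=0) > 0)  # count selections

        top = np.argsort(scores)[-100:]      # 100 highest-scored SNPs
        acc = cross_val_score(LogisticRegression(max_iter=500),
                              geno[:, top], farm, cv=5).mean()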

  19. Knowledge-enhanced network simulation modeling of the nuclear power plant operator

    International Nuclear Information System (INIS)

    Schryver, J.C.; Palko, L.E.

    1988-01-01

    Simulation models of the human operator of advanced control systems must provide an adequate account of the cognitive processes required to control these systems. The Integrated Reactor Operator/System (INTEROPS) prototype model was developed at Oak Ridge National Laboratory (ORNL) to demonstrate the feasibility of dynamically integrating a cognitive operator model and a continuous plant process model (ARIES-P) to provide predictions of the total response of a nuclear power plant during upset/emergency conditions. The model consists of a SAINT network of cognitive tasks enhanced with expertise provided by a knowledge-based fault diagnosis model. The INTEROPS prototype has been implemented in both closed and open loop modes. The prototype model is shown to be cognitively relevant by accounting for cognitive tunneling, confirmation bias, evidence chunking, intentional error, and forgetting

  20. Model of the naval base logistic interoperability within the multinational operations

    Directory of Open Access Journals (Sweden)

    Bohdan Pac

    2011-12-01

    The paper concerns the model of naval base logistic interoperability within multinational operations conducted at sea by NATO or EU nations. The model includes the set of logistic requirements that NATO and the EU expect from the contributing nations within the area of the logistic support provided to forces operating out of their home bases. The model may reflect the scheme configuration, the set of requirements and its mathematical description for a naval base supporting multinational forces within maritime operations.

  1. Equivalence of the super Lax and local Dunkl operators for Calogero-like models

    International Nuclear Information System (INIS)

    Neelov, A I

    2004-01-01

    Following Shastry and Sutherland I construct the super Lax operators for the Calogero model in the oscillator potential. These operators can be used for the derivation of the eigenfunctions and integrals of motion of the Calogero model and its supersymmetric version. They allow us to infer several relations involving the Lax matrices for this model in a fast way. It is shown that the super Lax operators for the Calogero and Sutherland models can be expressed in terms of the supercharges and so-called local Dunkl operators constructed in our recent paper with M Ioffe. Several important relations involving Lax matrices and Hamiltonians of the Calogero and Sutherland models are easily derived from the properties of Dunkl operators

  2. Fuzzy multiobjective models for optimal operation of a hydropower system

    Science.gov (United States)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for evaluation of reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.

  3. Model of environmental life cycle assessment for coal mining operations.

    Science.gov (United States)

    Burchart-Korol, Dorota; Fugiel, Agata; Czaplicka-Kolarz, Krystyna; Turek, Marian

    2016-08-15

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources, based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment, with the potential to mitigate the impact of the coal industry on the environment. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.

  5. Overall feature of EAST operation space by using simple Core-SOL-Divertor model

    International Nuclear Information System (INIS)

    Hiwatari, R.; Hatayama, A.; Zhu, S.; Takizuka, T.; Tomita, Y.

    2005-01-01

    We have developed a simple Core-SOL-Divertor (C-S-D) model to investigate qualitatively the overall features of the operational space for the integrated core and edge plasma. To construct the simple C-S-D model, a simple core plasma model of the ITER physics guidelines and a two-point SOL-divertor model are used. The simple C-S-D model is applied to the study of the EAST operational space with lower hybrid current drive experiments under various trade-offs of the basic plasma parameters. Effective methods for extending the operation space are also presented. As shown by this study of the EAST operation space, it is evident that the C-S-D model is a useful tool for understanding qualitatively the overall features of the plasma operation space. (author)

  6. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L

    2005-07-01

    The purpose of the workshop was to present the experience of practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: aging management and aging PSA (Probabilistic Safety Assessment); modeling; operating experience; and accelerated aging tests. In order to introduce the time aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example) calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it was demonstrated that a combination of operating experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for continuous operation of instrumentation and control systems.

  8. Inclusive zero-angle neutron spectra at the ISR and OPER-model

    International Nuclear Information System (INIS)

    Grigoryan, A.A.

    1977-01-01

    The inclusive zero-angle neutron spectra in pp-collisions measured at the ISR are compared with the OPER-model predictions. The OPER-model describes the experimental data rather well. Some features of the spectra behaviour at fixed transverse momentum and large x are considered.

  9. A simplified thermal model for a clothed human operator with thermoregulation

    Directory of Open Access Journals (Sweden)

    Zahid Akhtar khan

    2010-09-01

    This paper presents a simplified yet comprehensive mathematical model to predict the steady state temperature distribution in various regions of clothed male human operators who are healthy, passive/active and lean/obese, under the influence of different environmental conditions, using the thermoregulatory control concept. The present model is able to predict the core temperature, close to 37 °C, for a healthy, passive/active and lean/obese operator at normal ambient temperatures. It is observed that due to an increase in body fat (BF), the skin temperature of the operator decreases by a small amount. However, the effect of the operator's age on skin temperature is found to be insignificant. The present model has been validated against experimental data available in the literature.

  10. Upcrowding energy co-operatives - Evaluating the potential of crowdfunding for business model innovation of energy co-operatives.

    Science.gov (United States)

    Dilger, Mathias Georg; Jovanović, Tanja; Voigt, Kai-Ingo

    2017-08-01

    Practice and theory have proven the relevance of energy co-operatives for civic participation in the energy turnaround. However, due to still low awareness and changing regulation, there seems to be an unexploited potential in utilizing the legal form 'co-operative' in this context. The aim of this study is therefore to investigate the implementation of crowdfunding in the business model of energy co-operatives in order to cope with the mentioned challenges. Based on a theoretical framework, we derive a Business Model Innovation (BMI) through crowdfunding, including synergies and differences. A qualitative study design, particularly a multiple-case study of energy co-operatives, was chosen to prove the BMI and to reveal barriers. The results show that although most co-operatives are not familiar with crowdfunding, there is strong potential in opening up predominantly local structures to a broader group of members. Building on this, equity-based crowdfunding is revealed to be suitable for energy co-operatives as BMI and to accompany other challenges in the same way. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Identifying Associations Between Brain Imaging Phenotypes and Genetic Factors via A Novel Structured SCCA Approach.

    Science.gov (United States)

    Du, Lei; Zhang, Tuo; Liu, Kefei; Yan, Jingwen; Yao, Xiaohui; Risacher, Shannon L; Saykin, Andrew J; Han, Junwei; Guo, Lei; Shen, Li

    2017-06-01

    Brain imaging genetics attracts more and more attention since it can reveal associations between genetic factors and the structures or functions of human brain. Sparse canonical correlation analysis (SCCA) is a powerful bi-multivariate association identification technique in imaging genetics. There have been many SCCA methods which could capture different types of structured imaging genetic relationships. These methods either use the group lasso to recover the group structure, or employ the graph/network guided fused lasso to find out the network structure. However, the group lasso methods have limitation in generalization because of the incomplete or unavailable prior knowledge in real world. The graph/network guided methods are sensitive to the sign of the sample correlation which may be incorrectly estimated. We introduce a new SCCA model using a novel graph guided pairwise group lasso penalty, and propose an efficient optimization algorithm. The proposed method has a strong upper bound for the grouping effect for both positively and negatively correlated variables. We show that our method performs better than or equally to two state-of-the-art SCCA methods on both synthetic and real neuroimaging genetics data. In particular, our method identifies stronger canonical correlations and captures better canonical loading profiles, showing its promise for revealing biologically meaningful imaging genetic associations.
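
    The abstract does not spell the penalty out, but a form consistent with its description (edge-wise grouping that ignores the sign of the sample correlation) is a pairwise group lasso over the edges E of the guiding graph, written here in LaTeX as an assumed reconstruction rather than the paper's exact definition:

        \Omega(u) \;=\; \lambda \sum_{(i,j)\in E} \omega_{ij}\,\sqrt{u_i^{2} + u_j^{2}}

    Since each term depends only on the magnitudes of u_i and u_j, positively and negatively correlated pairs are penalized identically, which matches the claimed grouping-effect bound for both cases.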

  12. Running scenarios using the Waste Tank Safety and Operations Hanford Site model

    International Nuclear Information System (INIS)

    Stahlman, E.J.

    1995-11-01

    Management of the Waste Tank Safety and Operations (WTS&O) at Hanford is a large and complex task encompassing 177 tanks and having a budget of over $500 million per year. To assist managers in this task, a model based on system dynamics was developed by the Massachusetts Institute of Technology. The model simulates the WTS&O at the Hanford Tank Farms by modeling the planning, control, and flow of work conducted by Managers, Engineers, and Crafts. The model is described in Policy Analysis of Hanford Tank Farm Operations with System Dynamics Approach (Kwak 1995b) and Management Simulator for Hanford Tank Farm Operations (Kwak 1995a). This document provides guidance for users of the model in developing, running, and analyzing the results of management scenarios. The reader is assumed to have an understanding of the model and its operation. Important parameters and variables in the model are described, and two scenarios are formulated as examples.

  13. Standard model baryogenesis through four-fermion operators in braneworlds

    International Nuclear Information System (INIS)

    Chung, Daniel J.H.; Dent, Thomas

    2002-01-01

    We study a new baryogenesis scenario in a class of braneworld models with low fundamental scale, which typically have difficulty with baryogenesis. The scenario is characterized by its minimal nature: the field content is that of the standard model and all interactions consistent with the gauge symmetry are admitted. Baryon number is violated via a dimension-6 proton decay operator, suppressed today by the mechanism of quark-lepton separation in extra dimensions; we assume that this operator was unsuppressed in the early Universe due to a time-dependent quark-lepton separation. The source of CP violation is the CKM matrix, in combination with the dimension-6 operators. We find that almost independently of cosmology, sufficient baryogenesis is nearly impossible in such a scenario if the fundamental scale is above 100 TeV, as required by an unsuppressed neutron-antineutron oscillation operator. The only exception producing sufficient baryon asymmetry is a scenario involving out-of-equilibrium c quarks interacting with equilibrium b quarks

  14. Activating Global Operating Models: The bridge from organization design to performance

    Directory of Open Access Journals (Sweden)

    Amy Kates

    2015-07-01

    This article introduces the concept of activation and discusses its use in the implementation of global operating models by large multinational companies. We argue that five particular activators help set in motion the complex strategies and organizations required by global operating models.

  15. A practical model for sustainable operational performance

    International Nuclear Information System (INIS)

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

    By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system is presented and discussed.

  16. Operative and diagnostic hysteroscopy: A novel learning model combining new animal models and virtual reality simulation.

    Science.gov (United States)

    Bassil, Alfred; Rubod, Chrystèle; Borghesi, Yves; Kerbage, Yohan; Schreiber, Elie Servan; Azaïs, Henri; Garabedian, Charles

    2017-04-01

    Hysteroscopy is one of the most common gynaecological procedures. Training for diagnostic and operative hysteroscopy can be achieved through numerous previously described models, such as animal models or virtual reality simulation. We present our novel combined model associating virtual reality and bovine uteruses and bladders. Final-year residents in obstetrics and gynaecology attended a full-day workshop. The workshop was divided into theoretical courses from senior surgeons and hands-on training in operative hysteroscopy and virtual reality Essure® procedures using the EssureSim™ and Pelvicsim™ simulators with multiple scenarios. Theoretical and operative knowledge was evaluated before and after the workshop, and General Points Averages (GPAs) were calculated and compared using a Student's t-test. GPAs were significantly higher after the workshop was completed. The biggest difference was observed in operative knowledge (0.28 GPA before the workshop versus 0.55 after the workshop). This combination of animal models and virtual reality simulation is an efficient model not described before. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. High-gradient operators in the psl(2|2) Gross–Neveu model

    Directory of Open Access Journals (Sweden)

    Alessandra Cagnazzo

    2015-03-01

    It was observed more than 25 years ago that sigma model perturbation theory suffers from strongly RG-relevant high-gradient operators. The phenomenon was first seen in 1-loop calculations for the O(N) vector model and it is known to persist at least to two loops. More recently, Ryu et al. suggested that a certain deformation of the psl(N|N) WZNW model at level k=1, or equivalently the psl(N|N) Gross–Neveu model, could be free of RG-relevant high-gradient operators, and they tested their suggestion to leading order in perturbation theory. In this note we establish the absence of strongly RG-relevant high-gradient operators in the psl(2|2) Gross–Neveu model to all loops. In addition, we determine the spectrum for a large subsector of the model at infinite coupling and observe that all scaling weights become half-integer. Evidence for a conjectured relation with the CP(1|2) sigma model is not found.

  18. Modeling of a dependence between human operators in advanced main control rooms

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol; Shin, Yeong Cheol

    2009-01-01

    For the human reliability analysis of main control room (MCR) operations, not only parameters such as the given situation and capability of the operators but also the dependence between the actions of the operators should be considered because MCR operations are team operations. The dependence between operators might be more prevalent in an advanced MCR in which operators share the same information using a computerized monitoring system or a computerized procedure system. Therefore, this work focused on the computerized operation environment of advanced MCRs and proposed a model to consider the dependence representing the recovery possibility of an operator error by another operator. The proposed model estimates human error probability values by considering adjustment values for a situation and dependence values for operators during the same operation using independent event trees. This work can be used to quantitatively calculate a more reliable operation failure probability for an advanced MCR. (author)

  19. Trajectory-based morphological operators: a model for efficient image processing.

    Science.gov (United States)

    Jimeno-Morenilla, Antonio; Pujol, Francisco A; Molina-Carmona, Rafael; Sánchez-Romero, José L; Pujol, Mar

    2014-01-01

    Mathematical morphology has been an area of intensive research over the last few years. Although many remarkable advances have been achieved throughout these years, there is still great interest in accelerating morphological operations so that they can be implemented in real-time systems. In this work, we present a new model for computing mathematical morphology operations, the so-called morphological trajectory model (MTM), in which a morphological filter is divided into a sequence of basic operations. A trajectory-based morphological operation (such as dilation and erosion) is then defined as the set of points resulting from the ordered application of the instant basic operations. The MTM approach allows working with different structuring elements, such as disks, and the experiments show that our method is independent of the structuring element size and can be easily applied to industrial systems and high-resolution images.
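
    The decomposition idea, a large morphological operation expressed as an ordered sequence of basic ones, can be illustrated with standard tools; this is not the authors' MTM, just a sketch of the underlying property that k dilations by a 3x3 cross equal one dilation by a radius-k diamond (a rough disk approximation):

        import numpy as np
        from scipy.ndimage import binary_dilation

        # Elementary structuring element (a 3x3 cross).
        small = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]], dtype=bool)

        def dilate_decomposed(img, k):
            """Apply k basic dilations in order instead of one big one."""
            out = img.astype(bool)
            for _ in range(k):
                out = binary_dilation(out, structure=small)
            return out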

  20. Quantitative, steady-state properties of Catania's computational model of the operant reserve.

    Science.gov (United States)

    Berg, John P; McDowell, J J

    2011-05-01

    Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.
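
    For readers unfamiliar with the reserve idea, a minimal simulation loop in the spirit of (but not identical to) Catania's model: responses deplete a bounded reserve, reinforcers on a variable-interval schedule replenish it, and momentary response probability tracks the reserve. All constants below are arbitrary:

        import numpy as np

        rng = np.random.default_rng(1)
        reserve, depletion, increment = 0.5, 0.005, 0.1
        vi = 30.0                                # mean variable interval (steps)
        next_rft, responses, steps = rng.exponential(vi), 0, 100_000
        for t in range(steps):
            if rng.random() < reserve:           # emit a response
                responses += 1
                reserve = max(reserve - depletion, 0.0)
                if t >= next_rft:                # reinforcer collected
                    reserve = min(reserve + increment, 1.0)
                    next_rft = t + rng.exponential(vi)
        print("response rate:", responses / steps)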

  1. Eigentumors for prediction of treatment failure in patients with early-stage breast cancer using dynamic contrast-enhanced MRI: a feasibility study

    Science.gov (United States)

    Chan, H. M.; van der Velden, B. H. M.; E Loo, C.; Gilhuijs, K. G. A.

    2017-08-01

    We present a radiomics model to discriminate between patients at low risk and those at high risk of treatment failure at long-term follow-up, based on eigentumors: principal components computed from volumes encompassing tumors in washin and washout images of pre-treatment dynamic contrast-enhanced (DCE-)MR images. Eigentumors were computed from the images of 563 patients from the MARGINS study. Subsequently, a least absolute shrinkage and selection operator (LASSO) selected candidates from the components that contained 90% of the variance of the data. The model for prediction of survival after treatment (median follow-up time 86 months) was based on logistic regression. Receiver operating characteristic (ROC) analysis was applied and area-under-the-curve (AUC) values were computed as measures of training and cross-validated performances. The discriminating potential of the model was confirmed using Kaplan-Meier survival curves and log-rank tests. From the 322 principal components that explained 90% of the variance of the data, the LASSO selected 28 components. The ROC curves of the model yielded AUC values of 0.88, 0.77 and 0.73 for the training, leave-one-out cross-validated and bootstrapped performances, respectively. The bootstrapped Kaplan-Meier survival curves confirmed significant separation for all tumors (P < 0.0001). Survival analysis on immunohistochemical subgroups shows significant separation for the estrogen-receptor subtype tumors (P < 0.0001) and the triple-negative subtype tumors (P = 0.0039), but not for tumors of the HER2 subtype (P = 0.41). The results of this retrospective study show the potential of early-stage pre-treatment eigentumors for use in prediction of treatment failure of breast cancer.
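
    The analysis pipeline reads as PCA, then LASSO-based component selection, then logistic regression with leave-one-out validation; a compact sketch with an assumed feature matrix X (wash-in/wash-out voxel data per patient) and outcome labels y:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LassoCV, LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        pca = PCA(n_components=0.90)        # components covering 90% of variance
        Z = pca.fit_transform(X)            # "eigentumor" scores per patient

        sel = LassoCV(cv=5).fit(Z, y)       # LASSO keeps candidate components
        keep = np.flatnonzero(sel.coef_)

        proba = cross_val_predict(LogisticRegression(max_iter=1000),
                                  Z[:, keep], y, cv=LeaveOneOut(),
                                  method="predict_proba")[:, 1]
        print("LOO AUC:", roc_auc_score(y, proba))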

  2. eWaterCycle: A global operational hydrological forecasting model

    Science.gov (United States)

    van de Giesen, Nick; Bierkens, Marc; Donchyts, Gennadii; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin

    2015-04-01

    Development of an operational hyper-resolution hydrological global model is a central goal of the eWaterCycle project (www.ewatercycle.org). This operational model includes ensemble forecasts (14 days) to predict water-related stress around the globe. Assimilation of near-real-time satellite data is part of the intended product that will be launched at EGU 2015. The challenges come from several directions. First, there are challenges that are mainly computer-science oriented but have direct practical hydrological implications. For example, we aim to make use as much as possible of existing standards and open-source software: different parts of our system are coupled through the Basic Model Interface (BMI) developed in the framework of the Community Surface Dynamics Modeling System (CSDMS). The PCR-GLOBWB model, built by Utrecht University, is the basic hydrological model that is the engine of the eWaterCycle project. Re-engineering of parts of the software was needed for it to run efficiently in a High Performance Computing (HPC) environment, to be able to interface using BMI, and to run on multiple compute nodes in parallel. The final aim is a spatial resolution of 1 km x 1 km, which is currently 10 km x 10 km. This high resolution is computationally not too demanding but very memory intensive. The memory bottleneck becomes especially apparent for data assimilation, for which we use OpenDA. OpenDA allows for different data assimilation techniques without the need to build these from scratch. We have developed a BMI adaptor for OpenDA, allowing OpenDA to use any BMI-compatible model. To circumvent the memory shortages which would result from standard applications of the Ensemble Kalman Filter, we have developed a variant that does not need to keep all ensemble members in working memory. At EGU, we will present this variant and how it fits well in HPC environments. An important step in the eWaterCycle project was the coupling between the hydrological and
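
    The BMI coupling mentioned above boils down to a small, fixed method set that any model can expose; a toy adaptor shaped like that interface (the state update here is a placeholder, not PCR-GLOBWB):

        class BmiHydroModel:
            """Sketch of the BMI methods a coupler such as OpenDA can drive."""

            def initialize(self, config_file: str) -> None:
                # A real model would parse config_file; a toy state suffices.
                self.time = 0.0
                self.state = {"discharge": 0.0}

            def update(self) -> None:
                # Advance the model one time step (placeholder dynamics).
                self.time += 1.0
                self.state["discharge"] = 0.9 * self.state["discharge"] + 1.0

            def get_value(self, name: str) -> float:
                return self.state[name]

            def set_value(self, name: str, value: float) -> None:
                # Entry point a data-assimilation driver uses to correct state.
                self.state[name] = value

            def finalize(self) -> None:
                self.state = {}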

  3. MODELING THE FLIGHT TRAJECTORY OF OPERATIONAL-TACTICAL BALLISTIC MISSILES

    Directory of Open Access Journals (Sweden)

    I. V. Filipchenko

    2018-01-01

    The article gives the basic approaches to updating systems for modeling combat operations, in particular the simulation of enemy missile attacks, taking into account the possibility of tactical ballistic missile maneuvering during flight. The results of simulating combat tactical missile defense operations are given.

  4. Effective operator treatment of the Lipkin model

    International Nuclear Information System (INIS)

    Abraham, K.J.; Vary, J.P.

    2004-01-01

    We analyze the Lipkin model in the strong coupling limit using effective operator techniques. We present both analytical and numerical results for low energy effective Hamiltonians. We investigate the reliability of various approximations used to simplify the nuclear many body problem, such as the cluster approximation. We demonstrate, in explicit examples, certain limits to the validity of the cluster approximation but caution that these limits may be particular to this model where the interactions are of unlimited range

  5. Aviation Shipboard Operations Modeling and Simulation (ASOMS) Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: It is the mission of the Aviation Shipboard Operations Modeling and Simulation (ASOMS) Laboratory to provide a means by which to virtually duplicate products...

  6. Wake meandering of a model wind turbine operating in two different regimes

    Science.gov (United States)

    Foti, Daniel; Yang, Xiaolei; Campagnolo, Filippo; Maniaci, David; Sotiropoulos, Fotis

    2018-05-01

    The flow behind a model wind turbine under two different turbine operating regimes (region 2, with the turbine operating at the optimal condition with the maximum power coefficient and a 1.4-deg pitch angle, and region 3, with the turbine operating at a suboptimal condition with a lower power coefficient and a 7-deg pitch angle) is investigated using wind tunnel experiments and numerical experiments using large-eddy simulation (LES) with actuator surface models for the turbine blades and nacelle. Measurements from the model wind turbine experiment reveal that the power coefficient and turbine wake are affected by the operating regime. Simulations with and without a nacelle model are carried out for each operating condition to study the influence of the operating regime and nacelle on the formation of the hub vortex and wake meandering. Statistics and energy spectra of the simulated wakes are in good agreement with the measurements. For simulations with a nacelle model, the mean flow field is composed of an outer wake, caused by energy extraction by the turbine blades, and an inner wake directly behind the nacelle, while for the simulations without a nacelle model, the central region of the wake is occupied by a jet. The simulations with the nacelle model reveal an unstable helical hub vortex expanding outward toward the outer wake, while the simulations without a nacelle model show a stable and columnar hub vortex. Because of the different interactions of the inner region of the wake with the outer region of the wake, a region with higher turbulence intensity is observed in the tip shear layer for the simulation with a nacelle model. The hub vortex for the turbine operating in region 3 remains in a tight helical spiral and intercepts the outer wake a few diameters further downstream than for the turbine operating in region 2. Wake meandering, a low-frequency large-scale motion of the wake, commences in the region of high turbulence intensity for all simulations with and without a nacelle model.

  7. Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bonvini, Marco [Whisker Labs, Oakland, CA (United States); Piette, Mary Ann [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Page, Janie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lin, Guanjing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hu, R. Lilly [Univ. of California, Berkeley, CA (United States)

    2017-08-11

    Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.

  8. Modeling the wind-fields of accidental releases with an operational regional forecast model

    International Nuclear Information System (INIS)

    Albritton, J.R.; Lee, R.L.; Sugiyama, G.

    1995-01-01

    The Atmospheric Release Advisory Capability (ARAC) is an operational emergency preparedness and response organization supported primarily by the Departments of Energy and Defense. ARAC can provide real-time assessments of atmospheric releases of radioactive materials at any location in the world. ARAC uses robust three-dimensional atmospheric transport and dispersion models, extensive geophysical and dose-factor databases, meteorological data-acquisition systems, and an experienced staff. Although it was originally conceived and developed as an emergency response and assessment service for nuclear accidents, the ARAC system has been adapted to also simulate non-radiological hazardous releases. For example, in 1991 ARAC responded to three major events: the oil fires in Kuwait, the eruption of Mt. Pinatubo in the Philippines, and the herbicide spill into the upper Sacramento River in California. ARAC's operational simulation system includes two three-dimensional finite-difference models: a diagnostic wind-field scheme and a Lagrangian particle-in-cell transport and dispersion scheme. The meteorological component of ARAC's real-time response system employs models using real-time data from all available stations near the accident site to generate a wind-field for input to the transport and dispersion model. Here we report on simulation studies of past and potential release sites to show that even in the absence of local meteorological observational data, readily available gridded analysis and forecast data and a prognostic model, the Navy Operational Regional Atmospheric Prediction System, applied at an appropriate grid resolution can successfully simulate complex local flows.

  9. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    To address the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. The operating posture of the upper limb is described by joint angles, and the joint angles are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based operating comfort prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.
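
    A sketch of the two numerical steps, factor analysis to compress 16 joint angles into 4 factors, then an evolved symbolic fit; gplearn's genetic-programming regressor is used here as a stand-in for GEP (related but not identical), with angles and comfort as assumed arrays:

        import numpy as np
        from sklearn.decomposition import FactorAnalysis
        from gplearn.genetic import SymbolicRegressor

        fa = FactorAnalysis(n_components=4)    # 16 joint angles -> 4 factors
        F = fa.fit_transform(angles)

        # Evolve a symbolic expression comfort = f(factors); hyperparameters
        # are placeholders, not the paper's settings.
        gp = SymbolicRegressor(population_size=500, generations=30,
                               function_set=("add", "sub", "mul", "div"),
                               random_state=0)
        gp.fit(F, comfort)
        print(gp._program)                     # best symbolic expression found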

  10. Model selection emphasises the importance of non-chromosomal information in genetic studies.

    Directory of Open Access Journals (Sweden)

    Reda Rawi

    Ever since the case of the missing heritability was highlighted some years ago, scientists have been investigating various possible explanations for the issue. However, none of these explanations include non-chromosomal genetic information. Here we describe explicitly how chromosomal and non-chromosomal modifiers collectively influence the heritability of a trait, in this case, the growth rate of yeast. Our results show that the non-chromosomal contribution can be large, adding another dimension to the estimation of heritability. We also discovered, combining the strength of LASSO with model selection, that the interaction of chromosomal and non-chromosomal information is essential in describing phenotypes.

  11. Operator realization of the SU(2) WZNW model

    International Nuclear Information System (INIS)

    Furlan, P.; Hadjiivanov, L.K.; Todorov, I.T.

    1996-01-01

    Decoupling the chiral dynamics in the canonical approach to the WZNW model requires an extended phase space that includes left and right monodromy variables M and M̄. Earlier work on the subject, which traced back the quantum group symmetry of the model to the Lie-Poisson symmetry of the chiral symplectic form, left some open questions: - How to reconcile the necessity to set M M̄⁻¹ = 1 (in order to recover the monodromy invariance of the local 2D group-valued field g = u ū) with the fact that M and M̄ obey different exchange relations? - What is the status of the quantum symmetry in the 2D theory in which the chiral fields u(x−t) and ū(x+t) commute? - Is there a consistent operator formalism in the chiral (and the extended 2D) theory in the continuum limit? We propose a constructive affirmative answer to these questions for G=SU(2) by presenting the quantum fields u and ū as sums of products of chiral vertex operators and q-Bose creation and annihilation operators. (orig.)

  12. A Stochastic Operational Planning Model for Smart Power Systems

    Directory of Open Access Journals (Sweden)

    Sh. Jadid

    2014-12-01

    Smart grids are the result of utilizing novel technologies such as distributed energy resources and communication technologies in the power system to compensate for some of its defects. Various power resources provide benefits for the operation domain; however, the power system operator should use a powerful methodology to manage them. Renewable resources and loads add uncertainty to the problem, so the independent system operator should use a stochastic method to manage them. A stochastic unit commitment is presented in this paper to schedule various power resources such as distributed generation units, conventional thermal generation units, wind and PV farms, and demand response resources. Demand response resources, interruptible loads, distributed generation units, and conventional thermal generation units are used to provide the reserve required to compensate for the stochastic nature of the various resources and loads. In the presented model, resources connected to the distribution network can participate in the wholesale market through aggregators. Moreover, a novel three-program model which can be used by aggregators is presented in this article. Loads and distributed generation can contract with aggregators through these programs. A three-bus test system and the IEEE RTS are used to illustrate the usefulness of the presented model. The results show that the ISO can manage the system effectively by using this model.

  13. Joint effect of unlinked genotypes: application to type 2 diabetes in the EPIC-Potsdam case-cohort study.

    Science.gov (United States)

    Knüppel, Sven; Meidtner, Karina; Arregui, Maria; Holzhütter, Hermann-Georg; Boeing, Heiner

    2015-07-01

    Analyzing multiple single nucleotide polymorphisms (SNPs) is a promising approach to finding genetic effects beyond single-locus associations. We proposed the use of multilocus stepwise regression (MSR) to screen for allele combinations as a method to model joint effects, and compared the results with the often used genetic risk score (GRS), conventional stepwise selection, and the shrinkage method LASSO. In contrast to MSR, the GRS, conventional stepwise selection, and LASSO model each genotype by the risk allele doses. We reanalyzed 20 unlinked SNPs related to type 2 diabetes (T2D) in the EPIC-Potsdam case-cohort study (760 cases, 2193 noncases). No SNP-SNP interactions and no nonlinear effects were found. Two SNP combinations selected by MSR (Nagelkerke's R² = 0.050 and 0.048) included eight SNPs with a mean allele combination frequency of 2%. GRS and stepwise selection selected nearly the same SNP combinations, consisting of 12 and 13 SNPs (Nagelkerke's R² ranged from 0.020 to 0.029). LASSO showed similar results. The MSR method showed the best model fit as measured by Nagelkerke's R², suggesting that further improvement may render this method a useful tool in genetic research. However, our comparison suggests that the GRS is a simple way to model genetic effects when, as here, linkage, SNP-SNP interactions, and nonlinear effects play no role. © 2015 John Wiley & Sons Ltd/University College London.
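
    For context, the GRS referred to above is just a sum of risk-allele doses that then enters the model as a single covariate. A minimal sketch with simulated 0/1/2 dosage data (all names and data synthetic):

```python
# Genetic risk score (GRS) sketch: sum risk-allele doses per subject,
# then use the score as a single covariate in logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_snps = 500, 20
dosage = rng.integers(0, 3, size=(n_subjects, n_snps))  # 0/1/2 risk alleles
y = rng.integers(0, 2, size=n_subjects)                 # case/control (toy)

grs = dosage.sum(axis=1, keepdims=True)    # unweighted allele-dose sum

model = LogisticRegression().fit(grs, y)
print("log-odds change per extra risk allele:", model.coef_[0][0])
```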

  14. Structured sparse canonical correlation analysis for brain imaging genetics: an improved GraphNet method.

    Science.gov (United States)

    Du, Lei; Huang, Heng; Yan, Jingwen; Kim, Sungeun; Risacher, Shannon L; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2016-05-15

    Structured sparse canonical correlation analysis (SCCA) models have been used to identify imaging genetic associations. These models either use group lasso or graph-guided fused lasso to conduct feature selection and feature grouping simultaneously. The group lasso based methods require prior knowledge to define the groups, which limits the capability when prior knowledge is incomplete or unavailable. The graph-guided methods overcome this drawback by using the sample correlation to define the constraint. However, they are sensitive to the sign of the sample correlation, which could introduce undesirable bias if the sign is wrongly estimated. We introduce a novel SCCA model with a new penalty, and develop an efficient optimization algorithm. Our method has a strong upper bound for the grouping effect for both positively and negatively correlated features. We show that our method performs better than or comparably to three competing SCCA models on both synthetic and real data. In particular, our method identifies stronger canonical correlations and better canonical loading patterns, showing its promise for revealing interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/angscca/. Contact: shenli@iu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. Computer-Aided Transformation of PDE Models: Languages, Representations, and a Calculus of Operations

    Science.gov (United States)

    2016-01-05

    A domain-specific embedded language called ibvp was developed to model initial-boundary-value problems, supporting the computer-aided transformation of PDE models through defined languages, representations, and a calculus of operations.

  16. Hysteresis modeling based on saturation operator without constraints

    International Nuclear Information System (INIS)

    Park, Y.W.; Seok, Y.T.; Park, H.J.; Chung, J.Y.

    2007-01-01

    This paper proposes a simple way to model complex hysteresis in a magnetostrictive actuator by employing saturation operators without constraints. Having no constraints causes a singularity problem, i.e., the inverse matrix cannot be obtained when calculating the weights. To overcome this, a pseudoinverse concept is introduced. Simulation results are compared with experimental data from a Terfenol-D actuator. It is clear that the proposed model is much closer to the experimental data than the modified PI model: the relative error is calculated as 12% with the modified PI model and less than 1% with the proposed model.
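
    The pseudoinverse step mentioned above amounts to solving a rank-deficient least-squares problem for the operator weights. A minimal numerical sketch of that idea, with made-up operator outputs standing in for the actual saturation operators:

```python
# Fit hysteresis-model weights when the design matrix is singular:
# the Moore-Penrose pseudoinverse gives the minimum-norm least-squares
# solution where a plain matrix inverse does not exist.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_operators = 200, 12

# Columns = outputs of individual (here: made-up) saturation operators;
# a duplicated column makes Phi.T @ Phi singular on purpose.
phi = rng.standard_normal((n_samples, n_operators))
phi[:, -1] = phi[:, 0]                 # exact collinearity -> singular

y = phi[:, :3].sum(axis=1)             # measured actuator response (toy)

weights = np.linalg.pinv(phi) @ y      # pseudoinverse instead of inv()
print("fit residual:", np.linalg.norm(phi @ weights - y))
```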

  17. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize the economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns/turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of such analysis tools for biorefinery planning and operations.
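
    The feedstock-slate question in item (1) is a classic blending LP: minimize delivered feedstock cost subject to throughput and availability limits. A toy sketch of that structure (hypothetical feedstocks, prices, and limits, not the NREL/INL models):

```python
# Toy feedstock-slate LP: choose tons of each feedstock to meet plant
# throughput at minimum cost, capped by the availability of each feedstock.
from scipy.optimize import linprog

cost = [55.0, 70.0, 40.0]              # $/dry ton: stover, wood, MSW (assumed)
availability = [800.0, 500.0, 300.0]   # dry tons/day available (assumed)
throughput = 1000.0                    # dry tons/day the plant must process

# Equality: x1 + x2 + x3 == throughput; bounds encode availability caps.
res = linprog(c=cost,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[throughput],
              bounds=[(0, a) for a in availability])

print("optimal slate (tons/day):", res.x)
print("feedstock cost ($/day):", res.fun)
```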

  18. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle cost (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future, due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
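
    The underlying computation is simple: attach a cost to each operations-and-support state and accumulate expected cost as the state distribution evolves under the transition matrix. A minimal sketch with invented states and numbers:

```python
# Expected operations & support (O&S) cost over a mission horizon,
# modeled as a discrete-time Markov chain over vehicle states.
import numpy as np

# States: 0=operational, 1=scheduled maintenance, 2=unscheduled repair.
P = np.array([[0.90, 0.07, 0.03],      # per-period transition
              [0.80, 0.15, 0.05],      # probabilities (illustrative)
              [0.60, 0.10, 0.30]])
cost_per_period = np.array([1.0, 5.0, 20.0])   # $M in each state (assumed)

dist = np.array([1.0, 0.0, 0.0])       # start fully operational
total = 0.0
for _ in range(100):                   # 100 operating periods
    total += dist @ cost_per_period    # expected cost this period
    dist = dist @ P                    # propagate state distribution

print(f"expected 100-period O&S cost: ${total:.1f}M")
```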

  19. A_N-type Dunkl operators and new spin Calogero-Sutherland models

    International Nuclear Information System (INIS)

    Finkel, F.; Gomez-Ullate, D.; Gonzalez-Lopez, A.; Rodriguez, M.A.; Zhdanov, R.

    2001-01-01

    A new family of A_N-type Dunkl operators preserving a polynomial subspace of finite dimension is constructed. Using a general quadratic combination of these operators and the usual Dunkl operators, several new families of exactly and quasi-exactly solvable quantum spin Calogero-Sutherland models are obtained. These include, in particular, three families of quasi-exactly solvable elliptic spin Hamiltonians. (orig.)

  20. EnergySolutions' Clive Disposal Facility Operational Research Model - 13475

    Energy Technology Data Exchange (ETDEWEB)

    Nissley, Paul; Berry, Joanne [EnergySolutions, 2345 Stevens Dr. Richland, WA 99354 (United States)

    2013-07-01

    EnergySolutions owns and operates a licensed, commercial low-level radioactive waste disposal facility located in Clive, Utah. The Clive site receives low-level radioactive waste from various locations within the United States via bulk truck, containerised truck, enclosed truck, bulk rail-cars, rail boxcars, and rail inter-modals. Waste packages are unloaded, characterized, processed, and disposed of at the Clive site. Examples of low-level radioactive waste arriving at Clive include, but are not limited to, contaminated soil/debris, spent nuclear power plant components, and medical waste. Generators of low-level radioactive waste typically include nuclear power plants, hospitals, national laboratories, and various United States government operated waste sites. Over the past few years, poor economic conditions have significantly reduced the number of shipments to Clive. With less revenue coming in from processing shipments, Clive needed to keep its expenses down if it was going to maintain past levels of profitability. The Operational Research group of EnergySolutions was asked to develop a simulation model to help identify any improvement opportunities that would increase overall operating efficiency and reduce costs at the Clive facility. The Clive operations research model simulates the receipt, movement, and processing requirements of shipments arriving at the facility. The model includes shipment schedules, processing times of various waste types, labor requirements, shift schedules, and site equipment availability. The Clive operations research model has been developed using the WITNESS™ process simulation software, which is developed by the Lanner Group. The major goals of this project were to: - identify processing bottlenecks that could reduce the turnaround time from shipment arrival to disposal; - evaluate the use (or idle time) of labor and equipment; - project future operational requirements under different forecasted scenarios. By identifying

  1. Transparent settlement model between mobile network operator and mobile voice over Internet protocol operator

    Directory of Open Access Journals (Sweden)

    Luzango Pangani Mfupe

    2014-12-01

    Full Text Available Advances in technology have enabled a network-less mobile voice over internet protocol operator (MVoIPO) to offer data services (i.e. voice, text and video) to a mobile network operator's (MNO's) subscribers through an application enabled on the subscriber's user equipment, using the MNO's packet-based cellular network infrastructure. However, this raises the problem of how to handle interconnection settlements between the two types of operators, particularly how to deal with users who now have the ability to make ‘free’ on-net MVoIP calls among themselves within the MNO's network. This study proposes a service level agreement-based transparent settlement model (TSM) to solve this problem. The model is based on concepts of achievement and reward, not violation and punishment. The TSM calculates the MVoIPO's throughput distribution by monitoring the variations of peaks and troughs at the edge of the network. This facilitates the determination of conformance and non-conformance levels with respect to pre-set throughput thresholds and, subsequently, the issuing of compensation to the MVoIPO by the MNO when an economically acceptable volume of data traffic is generated.

  2. Modeling operational risks of the nuclear industry with Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Wieland, Patricia [Pontificia Univ. Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Industrial; Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)], e-mail: pwieland@cnen.gov.br; Lustosa, Leonardo J. [Pontificia Univ. Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Industrial], e-mail: ljl@puc-rio.br

    2009-07-01

    Basically, planning a new industrial plant requires information on industrial management, regulations, site selection, definition of initial and planned capacity, and estimation of the potential demand. However, this is far from enough to assure the success of an industrial enterprise. Unexpected and extremely damaging events that deviate from the original plan may occur. The so-called operational risks lie not only in system, equipment, process, or human (technical or managerial) failures. They also lie in intentional events such as fraud and sabotage, in extreme events like terrorist attacks or radiological accidents, and even in public reaction to perceived impacts on the environment or future generations. For the nuclear industry, it is a challenge to identify and assess the operational risks and their various sources. Early identification of operational risks can help in preparing contingency plans and in delaying the decision to invest in or approve a project that could, at an extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of internal data, which are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example, fuzzy logic, neural networks, system dynamics, or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility of including expert opinions and variables of interest, structuring the model via causal dependencies among these variables, and specifying subjective prior and conditional probability distributions at each step or network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian networks approach to modeling those risks. (author)

  3. Modeling operational risks of the nuclear industry with Bayesian networks

    International Nuclear Information System (INIS)

    Wieland, Patricia; Lustosa, Leonardo J.

    2009-01-01

    Basically, planning a new industrial plant requires information on industrial management, regulations, site selection, definition of initial and planned capacity, and estimation of the potential demand. However, this is far from enough to assure the success of an industrial enterprise. Unexpected and extremely damaging events that deviate from the original plan may occur. The so-called operational risks lie not only in system, equipment, process, or human (technical or managerial) failures. They also lie in intentional events such as fraud and sabotage, in extreme events like terrorist attacks or radiological accidents, and even in public reaction to perceived impacts on the environment or future generations. For the nuclear industry, it is a challenge to identify and assess the operational risks and their various sources. Early identification of operational risks can help in preparing contingency plans and in delaying the decision to invest in or approve a project that could, at an extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of internal data, which are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example, fuzzy logic, neural networks, system dynamics, or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility of including expert opinions and variables of interest, structuring the model via causal dependencies among these variables, and specifying subjective prior and conditional probability distributions at each step or network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian networks approach to modeling those risks. (author)
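
    The mechanics of such a network can be illustrated on a single parent-child fragment: an expert-elicited prior on a risk event, a conditional probability of a loss given that event, and a Bayesian update once the loss is observed. All numbers below are invented:

```python
# Tiny Bayesian-network fragment for operational risk: one parent node
# ("sabotage attempt") with an expert prior, one child node ("major
# outage") with conditional probabilities. Bayes' rule updates the
# sabotage belief once an outage is observed.
p_sab = 0.02                          # expert prior P(sabotage)
p_out_given_sab = 0.70                # P(outage | sabotage)
p_out_given_no = 0.05                 # P(outage | no sabotage)

p_out = p_sab * p_out_given_sab + (1 - p_sab) * p_out_given_no
p_sab_given_out = p_sab * p_out_given_sab / p_out

print(f"P(outage) = {p_out:.3f}")
print(f"P(sabotage | outage observed) = {p_sab_given_out:.3f}")
```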

  4. Integrated model of port oil piping transportation system safety including operating environment threats

    Directory of Open Access Journals (Sweden)

    Kołowrocki Krzysztof

    2017-06-01

    Full Text Available The paper presents an integrated general model of a complex technical system, linking its multistate safety model with the model of its operation process, including operating environment threats and considering its safety structures and its components' safety parameters, which vary at the different operation states. Under the assumption that the system has an exponential safety function, the safety characteristics of the port oil piping transportation system are determined.

  5. Integrated model of port oil piping transportation system safety including operating environment threats

    OpenAIRE

    Kołowrocki, Krzysztof; Kuligowska, Ewa; Soszyńska-Budny, Joanna

    2017-01-01

    The paper presents an integrated general model of a complex technical system, linking its multistate safety model with the model of its operation process, including operating environment threats and considering its safety structures and its components' safety parameters, which vary at the different operation states. Under the assumption that the system has an exponential safety function, the safety characteristics of the port oil piping transportation system are determined.
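
    Under the exponential safety assumption mentioned in both records, each component survives past time t with probability exp(-λt), and for a series safety structure the intensities simply add. A minimal sketch with made-up intensities; the paper's multistate, operation-state-dependent model is much richer:

```python
# Exponential safety functions: component i stays in the safe state past
# time t with probability exp(-lam[i] * t); a series system is safe only
# if all components are, so the system safety is the product.
import numpy as np

lam = np.array([0.002, 0.005, 0.001])   # failure intensities [1/h] (assumed)
t = np.linspace(0.0, 1000.0, 5)

s_comp = np.exp(-np.outer(t, lam))      # per-component safety functions
s_system = s_comp.prod(axis=1)          # series structure: product of S_i(t)

print("system safety S(t):", np.round(s_system, 3))
print("mean safe lifetime [h]:", 1.0 / lam.sum())   # 1 / sum of intensities
```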

  6. Application of multi-SNP approaches Bayesian LASSO and AUC-RF to detect main effects of inflammatory-gene variants associated with bladder cancer risk.

    Directory of Open Access Journals (Sweden)

    Evangelina López de Maturana

    Full Text Available The relationship between inflammation and cancer is well established in several tumor types, including bladder cancer. We performed an association study between 886 inflammatory-gene variants and bladder cancer risk in 1,047 cases and 988 controls from the Spanish Bladder Cancer (SBC/EPICURO) Study. A preliminary exploration with the widely used univariate logistic regression approach did not identify any significant SNP after correcting for multiple testing. We further applied two more comprehensive methods to capture the complexity of bladder cancer genetic susceptibility: Bayesian Threshold LASSO (BTL), a regularized regression method, and AUC-Random Forest (AUC-RF), a machine-learning algorithm. Both approaches explore the joint effect of markers. BTL analysis identified a signature of 37 SNPs in 34 genes showing an association with bladder cancer. AUC-RF detected an optimal predictive subset of 56 SNPs. Thirteen SNPs were identified by both methods in the total population. Using resources from the Texas Bladder Cancer study, we were able to replicate 30% of the SNPs assessed. The associations between inflammatory SNPs and bladder cancer were reexamined among non-smokers to eliminate the effect of tobacco, one of the strongest and most prevalent environmental risk factors for this tumor. A 9-SNP signature was detected by BTL. Here we report, for the first time, a set of SNPs in inflammatory genes jointly associated with bladder cancer risk. These results highlight the importance of the complex structure of genetic susceptibility associated with cancer risk.
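
    Both approaches share the idea of selecting a sparse joint SNP signature rather than testing one variant at a time. A rough frequentist analogue using an L1-penalized (lasso) logistic regression, not the Bayesian Threshold LASSO used in the study, on synthetic allele-dose data:

```python
# Sparse joint SNP selection via L1-penalized logistic regression
# (a frequentist stand-in for the Bayesian Threshold LASSO).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p = 2000, 200                       # subjects, SNP variants (toy scale)
X = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 allele doses
beta = np.zeros(p)
beta[:10] = 0.4                        # only 10 SNPs truly associated
logit = X @ beta - 4.0                 # centre the linear predictor
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# The L1 penalty shrinks most coefficients exactly to zero,
# leaving a sparse signature of jointly associated SNPs.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)
print("SNPs in the signature:", np.flatnonzero(lasso.coef_[0]))
```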

  7. Intelligent control for modeling of real-time reservoir operation, part II: artificial neural network with operating rule curves

    Science.gov (United States)

    Chang, Ya-Ting; Chang, Li-Chiu; Chang, Fi-John

    2005-04-01

    To bridge the gap between academic research and actual operation, we propose an intelligent control system for reservoir operation. The methodology includes two major processes: knowledge acquisition and implementation, and the inference system. In this study, a genetic algorithm (GA) and a fuzzy rule base (FRB) are used to extract knowledge, based on the historical inflow data with a design objective function and on the operating rule curves, respectively. The adaptive network-based fuzzy inference system (ANFIS) is then used to implement the knowledge, to create the fuzzy inference system, and to estimate the optimal reservoir operation. To investigate its applicability and practicability, the Shihmen reservoir, Taiwan, is used as a case study. For the purpose of comparison, a simulation of the currently used M-5 operating rule curve is also performed. The results demonstrate that (1) the GA is an efficient way to search for the optimal input-output patterns, (2) the FRB can extract the knowledge from the operating rule curves, and (3) the ANFIS models built on different types of knowledge can produce much better performance than the traditional M-5 curves in real-time reservoir operation. Moreover, we show that the model can be made more intelligent for reservoir operation if more information (or knowledge) is involved.

  8. Design and modeling of reservoir operation strategies for sediment management

    NARCIS (Netherlands)

    Sloff, C.J.; Omer, A.Y.A.; Heynert, K.V.; Mohamed, Y.A.

    2015-01-01

    Appropriate operation strategies that allow for sediment flushing and sluicing (sediment routing) can reduce rapid storage losses of (hydropower and water-supply) reservoirs. In this study we have shown, using field observations and computational models, that the efficiency of these operations

  9. Variance Component Selection With Applications to Microbiome Taxonomic Data

    Directory of Open Access Journals (Sweden)

    Jing Zhai

    2018-03-01

    Full Text Available High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effects model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis by treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated based on distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods, for example, the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method using the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.

  10. Bernstein approximations in glasso-based estimation of biological networks

    NARCIS (Netherlands)

    Purutcuoglu, Vilda; Agraz, Melih; Wit, Ernst

    The Gaussian graphical model (GGM) is one of the common dynamic modelling approaches in the construction of gene networks. In inference for this model, the interactions between genes can be detected mainly via the graphical lasso (glasso) or coordinate descent-based approaches. Although these methods
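
    For reference, a sparse GGM of this kind can be estimated with the graphical lasso as implemented in scikit-learn; nonzero off-diagonal entries of the estimated precision matrix correspond to edges in the gene network. The sketch below uses toy data and is not the Bernstein-approximation variant the record refers to:

```python
# Estimate a sparse gene network with the graphical lasso: zeros in the
# precision (inverse covariance) matrix mean conditional independence.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
n_samples, n_genes = 300, 8
# Toy expression data with one built-in dependency: gene 1 tracks gene 0.
X = rng.standard_normal((n_samples, n_genes))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * rng.standard_normal(n_samples)

model = GraphicalLasso(alpha=0.1).fit(X)
edges = np.abs(model.precision_) > 1e-6
np.fill_diagonal(edges, False)          # ignore self-edges
print("inferred adjacency matrix:\n", edges.astype(int))
```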

  11. Development of an inpatient operational pharmacy productivity model.

    Science.gov (United States)

    Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M

    2015-02-01

    An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models: one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  12. Sequence Tree Modeling for Combined Accident and Feed-and-Bleed Operation

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Yoon, Ho Joon

    2016-01-01

    In order to address this issue, this study suggests a sequence tree model to analyze accident sequences systematically. Using the sequence tree model, all possible scenarios that need a specific safety action to prevent core damage can be identified, and the success conditions of the safety action under complicated situations, such as a combined accident, can also be identified. A sequence tree is a branch model that divides the plant condition in a way that accounts for plant dynamics. Since the sequence tree model can reflect the plant dynamics arising from the interaction of different accident timings and plant conditions, and from the interaction between the operator action, the mitigation system, and the indicators for operation, it can easily be used to develop a dynamic event tree model. The target safety action for this study is a feed-and-bleed (F and B) operation. A F and B operation directly cools down the reactor cooling system (RCS) using the primary cooling system when residual heat removal by the secondary cooling system is not available. In this study, a TLOFW accident and a TLOFW accident with LOCA were the target accidents. Based on the conventional PSA model and indicators, the sequence tree model for a TLOFW accident was developed. If sampling analysis is performed, practical accident sequences can be identified based on the sequence analysis. If a realistic distribution for the variables can be obtained for sampling analysis, much more realistic accident sequences can be described. Moreover, if the initiating event frequency under a combined accident can be quantified, the sequence tree model can be translated into a dynamic event tree model based on the sampling analysis results.

  13. Sequence Tree Modeling for Combined Accident and Feed-and-Bleed Operation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bo Gyung; Kang, Hyun Gook [KAIST, Daejeon (Korea, Republic of); Yoon, Ho Joon [Khalifa University of Science, Abu Dhabi (United Arab Emirates)

    2016-05-15

    In order to address this issue, this study suggests a sequence tree model to analyze accident sequences systematically. Using the sequence tree model, all possible scenarios that need a specific safety action to prevent core damage can be identified, and the success conditions of the safety action under complicated situations, such as a combined accident, can also be identified. A sequence tree is a branch model that divides the plant condition in a way that accounts for plant dynamics. Since the sequence tree model can reflect the plant dynamics arising from the interaction of different accident timings and plant conditions, and from the interaction between the operator action, the mitigation system, and the indicators for operation, it can easily be used to develop a dynamic event tree model. The target safety action for this study is a feed-and-bleed (F and B) operation. A F and B operation directly cools down the reactor cooling system (RCS) using the primary cooling system when residual heat removal by the secondary cooling system is not available. In this study, a TLOFW accident and a TLOFW accident with LOCA were the target accidents. Based on the conventional PSA model and indicators, the sequence tree model for a TLOFW accident was developed. If sampling analysis is performed, practical accident sequences can be identified based on the sequence analysis. If a realistic distribution for the variables can be obtained for sampling analysis, much more realistic accident sequences can be described. Moreover, if the initiating event frequency under a combined accident can be quantified, the sequence tree model can be translated into a dynamic event tree model based on the sampling analysis results.

  14. Chemical agnostic hazard prediction: Statistical inference of toxicity pathways - data for Figure 2

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset comprises one SigmaPlot 13 file containing measured survival data and survival data predicted from the model coefficients selected by the LASSO...

  15. Developing a dengue forecast model using machine learning: A case study in China.

    Science.gov (United States)

    Guo, Pi; Liu, Tao; Zhang, Qin; Wang, Li; Xiao, Jianpeng; Zhang, Qingying; Luo, Ganfeng; Li, Zhihao; He, Jianfeng; Zhang, Yonghui; Ma, Wenjun

    2017-10-01

    In China, dengue remains an important public health issue with expanded areas and increased incidence recently. Accurate and timely forecasts of dengue incidence in China are still lacking. We aimed to use the state-of-the-art machine learning algorithms to develop an accurate predictive model of dengue. Weekly dengue cases, Baidu search queries and climate factors (mean temperature, relative humidity and rainfall) during 2011-2014 in Guangdong were gathered. A dengue search index was constructed for developing the predictive models in combination with climate factors. The observed year and week were also included in the models to control for the long-term trend and seasonality. Several machine learning algorithms, including the support vector regression (SVR) algorithm, step-down linear regression model, gradient boosted regression tree algorithm (GBM), negative binomial regression model (NBM), least absolute shrinkage and selection operator (LASSO) linear regression model and generalized additive model (GAM), were used as candidate models to predict dengue incidence. Performance and goodness of fit of the models were assessed using the root-mean-square error (RMSE) and R-squared measures. The residuals of the models were examined using the autocorrelation and partial autocorrelation function analyses to check the validity of the models. The models were further validated using dengue surveillance data from five other provinces. The epidemics during the last 12 weeks and the peak of the 2014 large outbreak were accurately forecasted by the SVR model selected by a cross-validation technique. Moreover, the SVR model had the consistently smallest prediction error rates for tracking the dynamics of dengue and forecasting the outbreaks in other areas in China. The proposed SVR model achieved a superior performance in comparison with other forecasting techniques assessed in this study. The findings can help the government and community respond early to dengue epidemics.
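
    As a schematic of the winning approach, the sketch below turns a weekly case series and covariates into lagged features and fits a support vector regression; in the study the hyperparameters were chosen by cross-validation, and all data here are synthetic:

```python
# SVR-based dengue forecasting sketch: predict next week's cases from
# lagged case counts, a search index, and a climate covariate.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
weeks = 200
cases = np.abs(50 + 30 * np.sin(np.arange(weeks) * 2 * np.pi / 52)
               + rng.normal(0, 5, weeks))          # synthetic weekly cases
search = cases * 0.8 + rng.normal(0, 5, weeks)     # synthetic search index
temp = 20 + 8 * np.sin(np.arange(weeks) * 2 * np.pi / 52)

lag = 4                                            # weeks of history used
X = np.column_stack([np.column_stack([cases[i:weeks - lag + i]
                                      for i in range(lag)]),
                     search[lag - 1:-1], temp[lag - 1:-1]])
y = cases[lag:]                                    # next week's case count

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=1.0)).fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"in-sample RMSE: {rmse:.2f} cases/week")
```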

  16. Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis

    OpenAIRE

    Arato, Hiroki

    2009-01-01

    This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and the optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than that in the standard exogenous growth New Keynesian model with physical capital accumulation. The optimal operational monetary policy rule makes the nominal interest rate respond s...

  17. Lean waste classification model to support the sustainable operational practice

    Science.gov (United States)

    Sutrisno, A.; Vanany, I.; Gunawan, I.; Asjad, M.

    2018-04-01

    Driven by growing pressure for a more sustainable operational practice, improvement on the classification of non-value added (waste) is one of the prerequisites to realize sustainability of a firm. While the use of the 7 (seven) types of the Ohno model now becoming a versatile tool to reveal the lean waste occurrence. In many recent investigations, the use of the Seven Waste model of Ohno is insufficient to cope with the types of waste occurred in industrial practices at various application levels. Intended to a narrowing down this limitation, this paper presented an improved waste classification model based on survey to recent studies discussing on waste at various operational stages. Implications on the waste classification model to the body of knowledge and industrial practices are provided.

  18. Theory model and experiment research about the cognition reliability of nuclear power plant operators

    International Nuclear Information System (INIS)

    Fang Xiang; Zhao Bingquan

    2000-01-01

    In order to improve the reliability of NPP operation, simulation research on the reliability of nuclear power plant operators is needed. Using a nuclear power plant simulator as the research platform, and taking the present international reliability research model, human cognition reliability (HCR), as a reference, part of the model was modified according to the actual status of Chinese nuclear power plant operators, and a research model for Chinese nuclear power plant operators was obtained based on the two-parameter Weibull distribution. Experiments on the reliability of nuclear power plant operators were carried out using the two-parameter Weibull distribution research model, and the results agree with those achieved internationally. The research should benefit the operational safety of nuclear power plants.
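
    HCR-style models of this kind express the probability that the crew has not yet responded by time t as a Weibull law in normalized time. A minimal sketch with purely illustrative parameter values; the calibrated constants depend on task type and, in this study, on the Chinese operator data:

```python
# HCR-style non-response probability: P(t) = exp(-(((t/T50) - g) / h)**b),
# where T50 is the median crew response time and (g, h, b) are task-type
# shape parameters (values below are purely illustrative).
import numpy as np

def non_response_prob(t, t50, g=0.7, h=0.4, b=1.2):
    """Probability the operator has NOT completed the action by time t."""
    z = np.maximum((t / t50 - g) / h, 0.0)   # clamp before response onset
    return np.exp(-z ** b)

t = np.array([60.0, 120.0, 300.0, 600.0])    # seconds available
print(np.round(non_response_prob(t, t50=120.0), 4))
```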

  19. Hadron matrix elements of quark operators in the relativistic quark model, 2. Model calculation

    Energy Technology Data Exchange (ETDEWEB)

    Arisue, H.; Bando, M.; Toya, M. [Kyoto Univ. (Japan). Dept. of Physics]; Sugimoto, H.

    1979-11-01

    Phenomenological studies of the matrix elements of two- and four-quark operators are made on the basis of the relativistic independent quark model for three typical cases of potentials: rigid wall, linearly rising, and Coulomb-like. The values of the matrix elements of two-quark operators are relatively well reproduced in each case, but those of four-quark operators prove to be too small in the independent-particle treatment. It is suggested that short-range two-quark correlations must be taken into account in order to improve the values of the matrix elements of the four-quark operators.

  20. The operable modeling of simultaneous saccharification and fermentation of ethanol production from cellulose.

    Science.gov (United States)

    Shen, Jiacheng; Agblevor, Foster A

    2010-03-01

    An operable batch model of simultaneous saccharification and fermentation (SSF) for ethanol production from cellulose has been developed. The model includes four ordinary differential equations that describe the changes of cellobiose, glucose, yeast, and ethanol concentrations with respect to time. These equations were used to simulate the experimental data for the four main components in the SSF process of ethanol production from microcrystalline cellulose (Avicel PH101). The model parameters at 95% confidence intervals were determined by a MATLAB program based on the batch experimental data of the SSF. Both experimental data and model simulations showed that cell growth was the rate-controlling step in the initial period of the series of reactions from cellulose to ethanol, and that later the conversion of cellulose to cellobiose controlled the process. The batch model was extended to continuous and fed-batch operating models. For continuous operation in the SSF, the ethanol productivity increased with increasing dilution rate until a maximum value was attained, and rapidly decreased as the dilution rate approached the washout point. The model also predicted a higher ethanol mass for the fed-batch operation than for the batch operation.
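
    The structure described, four coupled ODEs for cellobiose, glucose, cells, and ethanol, can be integrated directly. A minimal sketch with invented Monod-style rate expressions; the paper's actual kinetic forms and fitted parameters are not reproduced here:

```python
# Batch SSF sketch: integrate four ODEs for cellobiose (B), glucose (G),
# yeast (X) and ethanol (E). Rate laws and constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def ssf_rhs(t, y, k1=0.08, k2=0.5, mu_max=0.3, Ks=0.5, Yx=0.1, Ye=0.45):
    B, G, X, E = y
    G = max(G, 0.0)                              # guard against undershoot
    r_cellulose = k1 * 50.0 * np.exp(-0.05 * t)  # cellulose -> cellobiose
    r_cellobiase = k2 * B                        # cellobiose -> glucose
    mu = mu_max * G / (Ks + G)                   # Monod growth on glucose
    dB = r_cellulose - r_cellobiase
    dG = 1.05 * r_cellobiase - (mu / Yx) * X     # 1.05: hydrolysis mass gain
    dX = mu * X
    dE = Ye * (mu / Yx) * X                      # ethanol tied to uptake
    return [dB, dG, dX, dE]

sol = solve_ivp(ssf_rhs, (0.0, 72.0), [0.0, 1.0, 0.1, 0.0], max_step=0.5)
print("final ethanol (g/L):", round(float(sol.y[3, -1]), 2))
```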

  1. Real-time x-ray fluoroscopy-based catheter detection and tracking for cardiac electrophysiology interventions

    Energy Technology Data Exchange (ETDEWEB)

    Ma Yingliang; Housden, R. James; Razavi, Reza; Rhode, Kawal S. [Division of Imaging Sciences and Biomedical Engineering, King's College London, London SE1 7EH (United Kingdom); Gogin, Nicolas; Cathier, Pascal [Medisys Research Group, Philips Healthcare, Paris 92156 (France); Gijsbers, Geert [Interventional X-ray, Philips Healthcare, Best 5680 DA (Netherlands); Cooklin, Michael; O'Neill, Mark; Gill, Jaswinder; Rinaldi, C. Aldo [Department of Cardiology, Guys and St. Thomas' Hospitals NHS Foundation Trust, London SE1 7EH (United Kingdom)

    2013-07-15

    Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction. Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real-time. Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 ± 0.29, 0.92 ± 0.61, and 0.63 ± 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to 100%, 99
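
    The fast blob detection stage underlying all three methods can be approximated with an off-the-shelf Laplacian-of-Gaussian detector, since catheter electrodes appear as small, dark, roughly circular blobs in fluoroscopy. A hedged sketch on a synthetic image using scikit-image; the paper's custom detector and model-based stages are not reproduced:

```python
# Blob detection sketch for catheter electrodes: electrodes show up as
# small dark blobs in x-ray fluoroscopy, so invert the image and run
# Laplacian-of-Gaussian (LoG) detection.
import numpy as np
from skimage.feature import blob_log

rng = np.random.default_rng(5)
image = 0.8 + 0.02 * rng.standard_normal((256, 256))   # bright background
yy, xx = np.mgrid[:256, :256]
for cy, cx in [(100, 80), (110, 95), (120, 110)]:      # fake electrodes
    image -= 0.5 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)

blobs = blob_log(1.0 - image, min_sigma=2, max_sigma=6,
                 num_sigma=5, threshold=0.1)
print("detected electrode candidates (y, x, sigma):")
print(np.round(blobs, 1))
```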

  2. EMMA model: an advanced operational mesoscale air quality model for urban and regional environments

    International Nuclear Information System (INIS)

    Jose, R.S.; Rodriguez, M.A.; Cortes, E.; Gonzalez, R.M.

    1999-01-01

    Mesoscale air quality models are an important tool to forecast and analyse the air quality in regional and urban areas. In recent years an increased interest has been shown by decision makers in these types of software tools. The complexity of such a model has grown exponentially with the increase of computer power. Nowadays, medium workstations can run operational versions of these modelling systems successfully. Presents a complex mesoscale air quality model which has been installed in the Environmental Office of the Madrid community (Spain) in order to forecast accurately the ozone, nitrogen dioxide and sulphur dioxide air concentrations in a 3D domain centred on Madrid city. Describes the challenging scientific matters to be solved in order to develop an operational version of the atmospheric mesoscale numerical pollution model for urban and regional areas (ANA). Some encouraging results have been achieved in the attempts to improve the accuracy of the predictions made by the version already installed. (Author)

  3. Model based decision support system of operating settings for MMAT nozzles

    Directory of Open Access Journals (Sweden)

    Fritz Bradley Keith

    2016-04-01

    Full Text Available Droplet size, which is affected by nozzle type, nozzle setup and operation, and spray solution, is one of the most critical factors influencing spray performance, environmental pollution, and food safety, and must be considered as part of any application scenario. Characterizing spray nozzles can be a time-consuming and expensive proposition if the entire operational space (all combinations of spray pressure and orifice size, which determine flow rate) is to be evaluated. This research proposes a structured experimental design that allows for the development of computational models of droplet size based on any combination of a nozzle's potential operational settings. The developed droplet size determination model can be used as a Decision Support System (DSS) for the precise selection of sprayer working parameters to adapt to local field scenarios. Five nozzle types (designs) were evaluated across their complete range of orifice size (flow rate) and spray pressure using a response surface experimental design. Several of the models showed high-level fits of the modeled to the measured data, while several did not, as a result of the lack of a significant effect from either orifice size (flow rate) or spray pressure. The computational models were integrated into a spreadsheet-based user interface for ease of use. The proposed experimental design provides for efficient nozzle evaluations and the development of computational models that allow for the determination of the droplet size spectrum and spraying classification for any combination of a given nozzle's operating settings. The proposed DSS will allow for the ready assessment and modification of a sprayer's performance based on the operational settings, to ensure the application is made following the recommendations on plant protection product (PPP) labels.
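
    A response-surface model of the kind described is essentially a quadratic regression in spray pressure and orifice size. A minimal sketch of fitting and querying one on synthetic data; real coefficients would come from droplet-size measurements:

```python
# Quadratic response-surface sketch: predict volume median diameter
# (DV0.5, microns) from spray pressure and relative orifice size.
import numpy as np

rng = np.random.default_rng(6)
pressure = rng.uniform(1.5, 6.0, 40)       # bar (synthetic design points)
orifice = rng.uniform(0.2, 0.6, 40)        # relative orifice size
dv05 = 400 - 30 * pressure + 250 * orifice + rng.normal(0, 8, 40)

# Design matrix with linear, quadratic and interaction terms.
A = np.column_stack([np.ones_like(pressure), pressure, orifice,
                     pressure**2, orifice**2, pressure * orifice])
coef, *_ = np.linalg.lstsq(A, dv05, rcond=None)

def predict(p, o):
    """Evaluate the fitted response surface at pressure p, orifice o."""
    return coef @ np.array([1.0, p, o, p**2, o**2, p * o])

print(f"predicted DV0.5 at 3 bar, 0.4 orifice: {predict(3.0, 0.4):.0f} um")
```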

  4. Operator realization of the SU(2) WZNW model

    International Nuclear Information System (INIS)

    Furlan, P.; Todorov, I.T.

    1995-12-01

    Decoupling the chiral dynamics in the canonical approach to the WZNW model requires an extended phase space that includes left and right monodromy variables M and M-bar. Earlier work on the subject, which traced back the quantum group symmetry of the model to the Lie-Poisson symmetry of the chiral symplectic form, left some open questions: How to reconcile the necessity to set M M-bar^(-1) = 1 (in order to recover the monodromy invariance of the local 2D group valued field g = u u-bar) with the fact that M and M-bar obey different exchange relations? What is the status of the quantum symmetry in the 2D theory in which the chiral fields u(x-t) and u-bar(x+t) commute? Is there a consistent operator formalism in the chiral (and the extended 2D) theory in the continuum limit? We propose a constructive affirmative answer to these questions for G = SU(2) by presenting the quantum fields u and u-bar as sums of products of chiral vertex operators and q-Bose creation and annihilation operators. (author). 17 refs

  5. DL-ADR: a novel deep learning model for classifying genomic variants into adverse drug reactions.

    Science.gov (United States)

    Liang, Zhaohui; Huang, Jimmy Xiangji; Zeng, Xing; Zhang, Gang

    2016-08-10

    Genomic variations are associated with the metabolism and the occurrence of adverse reactions of many therapeutic agents. The polymorphisms at over 2000 locations of cytochrome P450 (CYP) enzymes, due to many factors such as ethnicity, mutation, and inheritance, contribute to the diversity of responses and side effects of various drugs. The associations among single nucleotide polymorphisms (SNPs), internal pharmacokinetic patterns, and the vulnerability to specific adverse reactions have become one of the research interests of pharmacogenomics. Conventional genome-wide association studies (GWAS) mainly focus on the relation of single or multiple SNPs to a specific risk factor, which is a one-to-many relation. However, there are no robust methods to establish a many-to-many network that can combine the direct and indirect associations between multiple SNPs and a series of events (e.g., adverse reactions, metabolic patterns, prognostic factors, etc.). In this paper, we present a novel deep learning model based on generative stochastic networks and a hidden Markov chain to classify observed samples, with SNPs on five loci of two genes (CYP2D6 and CYP1A2), into the vulnerable populations of 14 types of adverse reactions. A supervised deep learning model is proposed in this study: a revised generative stochastic network (GSN) model whose transitions are governed by a hidden Markov chain. The data of the training set were collected from clinical observation. The training set is composed of 83 observations of blood samples genotyped on CYP2D6*2, *10, *14 and CYP1A2*1C, *1F by the polymerase chain reaction (PCR) method. A hidden Markov chain is used as the transition operator to simulate the probabilistic distribution. The model can perform learning at lower cost compared to the conventional maximum likelihood method because the transition distribution is conditional on the previous state of the hidden Markov chain.

  6. Mapping Haplotype-haplotype Interactions with Adaptive LASSO

    Directory of Open Access Journals (Sweden)

    Li Ming

    2010-08-01

    Full Text Available Background: The genetic etiology of complex diseases in humans has been commonly viewed as a complex process involving both genetic and environmental factors functioning in a complicated manner. Quite often the interactions among genetic variants play major roles in determining the susceptibility of an individual to a particular disease. Statistical methods for modeling interactions underlying complex diseases between single genetic variants (e.g., single nucleotide polymorphisms, or SNPs) have been extensively studied. Recently, haplotype-based analysis has gained popularity among genetic association studies. When multiple sequence or haplotype interactions are involved in determining an individual's susceptibility to a disease, they present daunting challenges in statistical modeling and testing of the interaction effects, largely due to the complicated higher-order epistatic complexity. Results: In this article, we propose a new strategy for modeling haplotype-haplotype interactions under the penalized logistic regression framework with an adaptive L1-penalty. We consider interactions of sequence variants between haplotype blocks. The adaptive L1-penalty allows simultaneous effect estimation and variable selection in a single model. We propose a new parameter estimation method which estimates and selects parameters by the modified Gauss-Seidel method nested within the EM algorithm. Simulation studies show that it has a low false positive rate and reasonable power in detecting haplotype interactions. The method is applied to test haplotype interactions involving the mother and offspring genomes in a small-for-gestational-age (SGA) neonates data set, and significant interactions between the different genomes are detected. Conclusions: As demonstrated by the simulation studies and real data analysis, the approach developed provides an efficient tool for the modeling and testing of haplotype interactions. The implementation of the method in R codes can be
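
    The adaptive L1 penalty gives each coefficient its own weight, typically the inverse of an initial estimate, so strong effects are shrunk less. One standard way to realize it is to rescale the design by the initial estimates and run an ordinary lasso. A sketch on synthetic binary data; this is not the Gauss-Seidel-within-EM estimator the record proposes:

```python
# Adaptive lasso via feature rescaling: with weights w_j = |beta_ridge_j|,
# fitting an ordinary L1 model on X * w and multiplying the coefficients
# back by w is equivalent to penalizing sum(|beta_j| / w_j).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, p = 400, 30
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 1.5                              # three true effects (toy)
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta))).astype(int)

ridge = LogisticRegression(penalty="l2", C=1.0).fit(X, y)
w = np.abs(ridge.coef_[0]) + 1e-8           # adaptive weights (avoid /0)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X * w, y)                         # rescaled design
beta_adaptive = lasso.coef_[0] * w          # map back to original scale
print("selected terms:", np.flatnonzero(beta_adaptive))
```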

  7. An operations model of psychosocial structure and function and of psychotherapy.

    Science.gov (United States)

    Davidson, S

    2000-11-01

    Currently, personality theory and clinical psychology have a fairly substantial tradition of promoting a strongly scientific basis for clinical work and theorizing. However, an appropriate foundation model has been difficult to identify and establish. A theory of human operations, here proposed, may provide such an elementary model. The theory is rooted in the organizational and industrial field known as operations, which is a highly systematic, precise, flexible, scientific approach to the understanding and management of human goal-seeking action in the broadest sense. The proposed model includes the classical humanistic, clinical, and decision theoretic notions of values, cognition, emotions, ego, behavior, objectives, outcomes, feedback, and defenses. These notions are placed within an overall operations frame of reference and developed in such a manner that they can be used to assess human clinical problems and to design therapeutic interventions. The strengths and limitations of the model are discussed.

  8. A hypothesis generation model of initiating events for nuclear power plant operators

    International Nuclear Information System (INIS)

    Sawhney, R.S.; Dodds, H.L.; Schryver, J.C.; Knee, H.E.

    1989-01-01

    The goal of existing alarm-filtering models is to provide the operator with the most accurate assessment of patterns of annunciated alarms. Some models are based on event-tree analysis, such as DuPont's Diagnosis of Multiple Alarms. Other models focus on improving hypothesis generation by deemphasizing alarms not relevant to the current plant scenario. Many such models utilize an alarm-filtering system (AFS) as a basis for dynamic prioritization. The Lisp-based alarm analysis model presented in this paper was developed for the Advanced Controls Program at Oak Ridge National Laboratory to dynamically prioritize hypotheses via an AFS by incorporating an unannunciated alarm analysis with other plant-based concepts. The objective of this effort is to develop an alarm analysis model that allows greater flexibility and more accurate hypothesis generation than the prototype fault diagnosis model utilized in the Integrated Reactor Operator/System (INTEROPS) model. INTEROPS is a time-based predictive model of the nuclear power plant operator, which utilizes alarm information in a manner similar to the human operator. This is achieved by recoding the knowledge base from the personal computer-based expert system shell to a Common Lisp structure, providing the ability to easily modify both the manner in which the knowledge is structured and the logic by which the program performs fault diagnosis.

  9. Designing visual displays and system models for safe reactor operations

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S.A.

    1995-12-31

    The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The content of this paper focuses on the studies and how they are applicable to the safety of operating reactors.

  10. Snow model design for operational purposes

    Science.gov (United States)

    Kolberg, Sjur

    2017-04-01

    A parsimonious distributed energy-balance snow model intended for operational use is evaluated using discharge, snow-covered area, and grain size, the latter two as observed by the MODIS sensor. The snow model is an improvement of the existing GamSnow model, which is part of the Enki modelling framework. Core requirements for the new version have been: 1. reduction of calibration freedom, motivated by previous experience of non-identifiable parameters in the existing version; 2. improvement of process representation based on recent advances in physically based snow modelling; 3. limited sensitivity to forcing data which are poorly known over the spatial domain of interest (often mountainous areas); 4. preference for observable states, and the ability to improve from updates. The albedo calculation is completely revised, now based on grain size through an emulation of the SNICAR model (Flanner and Zender, 2006; Gardener and Sharp, 2010). The number of calibration parameters in the albedo model is reduced from 6 to 2. The wind function governing turbulent energy fluxes has been reduced from 2 parameters to 1. Following Raleigh et al. (2011), the snow surface radiant temperature is split from the top-layer thermodynamic temperature, using bias-corrected wet-bulb temperature to model the former. Analyses are ongoing, and the poster will present evaluation results from 16 years of MODIS observations and more than 25 catchments in southern Norway.

  11. Prospects and requirements for an operational modelling unit in flood crisis situations

    Directory of Open Access Journals (Sweden)

    Anders Katharina

    2016-01-01

    Full Text Available Dike failure events pose severe flood crisis situations for areas in the hinterland of dikes. In recent decades the importance of being prepared for dike breaches has been increasingly recognized. However, the pre-assessment of inundation resulting from dike breaches can only be based on scenarios, which might not reflect the situation of a real event. This paper presents a setup and workflow that allow dike-breach-induced inundation to be modelled operationally, i.e. when an event is imminent or occurring. A comprehensive system setup of an operational modelling unit has been developed and implemented in the frame of a federal project in Saxony-Anhalt, Germany. The modelling unit setup comprises a powerful flood-modelling methodology and elaborated operational guidelines for crisis situations. Nevertheless, it is of fundamental importance that the modelling unit is established prior to flood events as a permanent system. Moreover, the unit needs to be fully integrated into flood crisis management. If these crucial requirements are met, a modelling unit is capable of fundamentally supporting flood management with operational prognoses of adequate quality, even in the limited timeframe of crisis situations.

  12. Cognitive model of the power unit operator activity

    International Nuclear Information System (INIS)

    Chachko, S.A.

    1992-01-01

    Basic notions making it possible to study and simulate the peculiarities of operator activity, in particular the operator's way of thinking, are considered. Special attention is paid to cognitive models based on the concept of the decisive role of knowledge (its acquisition, storage and application) in human mental processes and activity. The models are based on three basic notions: the professional world image, the activity strategy and spontaneous decisions.

  13. A framework for modelling the behaviour of a process control operator under stress

    International Nuclear Information System (INIS)

    Kan, C-C.F.; Roberts, P.D.; Smith, I.C.

    1990-01-01

    This paper proposes the basis for a framework for modelling the effects of stress on the behaviour of a process control plant operator. The qualitative effects of stress on the cognitive processing ability of the operator are discussed. Stress is thought mainly to decrease the reasoning ability of the operator. The operator will experience increased rigidity in problem solving and a narrowing of his attention and perceptual field. At the same time, the operator will be increasingly reluctant to admit that wrong decisions have been made. Furthermore, he will revert to skill-based behaviours. The direct consequence of stress on the decision-making mechanism of the operator is the selection of an inappropriate choice of action. A formal representation of decision errors is proposed and various techniques are suggested for representing the mechanisms of decision error making. The degree of experience possessed by the operator is also an important factor in the operator's tolerance of stress, and the framework allows the experience of the operator to be integrated into the model. Such an operator model can be linked to a plant simulator, and the complete behaviour of the plant can then be simulated.

  14. A model to predict productivity of different chipping operations ...

    African Journals Online (AJOL)

    Additional international case studies from North America, South America, and central and northern Europe were used to test the accuracy of the model, in which 15 studies confirmed the model's validity and two failed to pass the test. Keywords: average piece size, chipper, power, sensitivity analysis, type of operation, unit ...

  15. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    Directory of Open Access Journals (Sweden)

    Li Deng

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension: the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built with the CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good prediction performance, and can improve design efficiency.
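
    The dimension-reduction step described above (from 16 joint angles to 4 comfort impact factors) can be illustrated with an ordinary factor analysis. The sketch below uses scikit-learn on synthetic data; the sample size and value ranges are placeholders, since the study's 22 evaluation groups are not available.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in for the study's data: rows are evaluated postures,
# columns are 16 upper-limb joint angles (degrees). Real data would come
# from posture measurements; these values are random placeholders.
rng = np.random.default_rng(0)
joint_angles = rng.uniform(0, 120, size=(22, 16))

# Reduce the 16 correlated joint angles to 4 latent "comfort impact factors",
# mirroring the dimensionality reduction reported in the abstract.
fa = FactorAnalysis(n_components=4, random_state=0)
comfort_factors = fa.fit_transform(joint_angles)  # shape (22, 4)

print(comfort_factors.shape)  # the 4 factor scores would feed the GEP model
```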

  16. A cost prediction model for machine operation in multi-field production systems

    Directory of Open Access Journals (Sweden)

    Alessandro Sopegno

    Capacity planning in agricultural field operations needs to give consideration to the operational system design, which involves the selection and dimensioning of production components such as machinery and equipment. Capacity planning models currently in use are generally based on average norm data and not on specific farm data, which may vary from year to year. In this paper a model is presented for predicting the cost of in-field and transport operations for multiple-field and multiple-crop production systems. A case study from a real production system is presented in order to demonstrate the model's functionalities and its sensitivity to parameters known to be somewhat imprecise. It was shown that the proposed model can provide operation cost predictions for complex cropping systems where labor and machinery are shared between the various operations, which can be formulated individually for each crop. In so doing, the model can be used as a decision support system at the strategic level of management of agricultural production systems, and specifically for the mid-term design of systems in terms of labor/machinery and crop selection conforming to the criterion of profitability.

  17. River Stream-Flow and Zayanderoud Reservoir Operation Modeling Using the Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Saeed Jamali

    2007-12-01

    The Zayanderoud basin is located in the central plateau of Iran. As a result of population increase and agricultural and industrial development, water demand in this basin has increased extensively. Given the importance of reservoir operation in water resource and management studies, the performance of a fuzzy inference system (FIS) for Zayanderoud reservoir operation is investigated in this paper. The operation model consists of two parts. In the first part, the seasonal river stream-flow is forecasted using the fuzzy rule-based system; the Southern Oscillation Index, rain, snow, and discharge are the inputs of the model, and the seasonal river stream-flow is its output. In the second part, the operation model is constructed. The releases are first optimized by a nonlinear optimization model and then the rule curves are extracted using the fuzzy inference system. The model operates on an "if-then" principle, where the "if" is a vector of fuzzy premises and the "then" is the fuzzy consequence. Reservoir storage capacity, inflow, demand, and a year-condition factor are used as premises, and the monthly release is taken as the consequence. The Zayanderoud basin is investigated as a case study. Different performance indices such as reliability, resiliency, and vulnerability are calculated. According to the results, the FIS works more effectively than traditional reservoir operation methods such as standard operation policy (SOP) or linear regression.
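
    As a rough illustration of the "if-then" fuzzy mechanism described above, the sketch below implements a two-rule Mamdani-style inference for a release decision, with triangular memberships and weighted-average defuzzification. The membership breakpoints and the rule set are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def release_fraction(storage, inflow):
    """Two illustrative rules (all breakpoints are assumptions):
    R1: IF storage is LOW  AND inflow is LOW  THEN release is SMALL
    R2: IF storage is HIGH AND inflow is HIGH THEN release is LARGE
    Inputs are normalized to [0, 1]."""
    w1 = min(tri(storage, -0.5, 0.0, 0.5), tri(inflow, -0.5, 0.0, 0.5))
    w2 = min(tri(storage, 0.5, 1.0, 1.5), tri(inflow, 0.5, 1.0, 1.5))
    small, large = 0.2, 0.8  # representative release fractions per rule
    if w1 + w2 == 0.0:
        return 0.5  # fall back to a neutral release when no rule fires
    return (w1 * small + w2 * large) / (w1 + w2)  # weighted-average defuzzification

print(release_fraction(storage=0.3, inflow=0.4))
```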

  18. Development of a subway operation incident delay model using accelerated failure time approaches.

    Science.gov (United States)

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study aims to develop a subway operation incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models, including log-logistic, lognormal and Weibull models with fixed and random parameters, are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, a Weibull model with gamma heterogeneity is considered for comparison of model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating subway incident delay. The results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption and crashes involving a casualty. Vehicle failure has the least impact on the increase of subway operation incident delay. Based on these results, several possible measures, such as the use of short-distance wireless communication technology (e.g., WiFi and ZigBee), are suggested to shorten the delays caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time.
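
    A log-logistic AFT specification of this kind can be fitted with the lifelines library. The sketch below uses a fabricated incident table, since the Hong Kong data are not public; column names and values are assumptions, and the fixed-parameter fit shown here omits the paper's random-parameter extension.

```python
import pandas as pd
from lifelines import LogLogisticAFTFitter

# Toy stand-in for the incident records: delay in minutes, whether the
# delay duration was fully observed, and two illustrative covariates.
df = pd.DataFrame({
    "delay_min":        [12, 45, 8, 60, 25, 33, 90, 15],
    "observed":         [1,  1,  1, 0,  1,  1,  1,  1],   # 0 = censored
    "power_cable_fail": [0,  1,  0, 1,  0,  1,  1,  0],
    "vehicle_fail":     [1,  0,  1, 0,  1,  0,  0,  1],
})

# Fixed-parameter log-logistic AFT fit; the paper's preferred specification
# additionally allows random (heterogeneous) parameters.
aft = LogLogisticAFTFitter()
aft.fit(df, duration_col="delay_min", event_col="observed")
aft.print_summary()  # coefficients act multiplicatively on incident delay
```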

  19. Simulation Modeling of a Facility Layout in Operations Management Classes

    Science.gov (United States)

    Yazici, Hulya Julie

    2006-01-01

    Teaching quantitative courses can be challenging. Similarly, layout modeling and lean production concepts can be difficult to grasp in an introductory OM (operations management) class. This article describes a simulation model developed in PROMODEL to facilitate the learning of layout modeling and lean manufacturing. Simulation allows for the…

  20. Contribution to multi-agents modeling of the operation of industrial processes: application to the operation of a pressurized water reactor under accidental situation

    International Nuclear Information System (INIS)

    Elias, P.

    1996-01-01

    This work is related to the CEA 'Escrime' project, which concerns the reliability and functional safety of nuclear reactors, in particular the operation and supervision of nuclear installations. Its aim is the analysis and formalization of PWR operation in order to define the collaboration and optimum sharing of tasks between human operators and automated systems for improved functional safety. Chapter 1 describes the operation of nuclear reactors and the instrumentation and control activities. It focuses on the weaknesses of current automated systems and examines the interest of the multi-agent approach for building an improved automated system. Chapter 2 presents the current state of the art of multi-agent systems and their application to reactor operation. Chapter 3 is devoted to the definition of the conceptual model of automated systems developed in this work (distribution of operation activities, competition between agents, hierarchy, arbitration). Chapter 4 describes the computer model of the essential operating system elaborated according to the conceptual model defined above. Modeling is performed using Spirit, and an application is described in chapter 5. (J.S.)

  1. A Model for Resource Allocation Using Operational Knowledge Assets

    Science.gov (United States)

    Andreou, Andreas N.; Bontis, Nick

    2007-01-01

    Purpose: The paper seeks to develop a business model that shows the impact of operational knowledge assets on intellectual capital (IC) components and business performance and use the model to show how knowledge assets can be prioritized in driving resource allocation decisions. Design/methodology/approach: Quantitative data were collected from 84…

  2. Exploring Machine Learning to Correct Satellite-Derived Sea Surface Temperatures

    Directory of Open Access Journals (Sweden)

    Stéphane Saux Picart

    2018-02-01

    Machine learning techniques are attractive tools for establishing statistical models with a high degree of non-linearity. They require a large amount of data to be trained and are therefore particularly suited to analysing remote sensing data. This work is an attempt at using advanced statistical methods of machine learning to predict the bias between Sea Surface Temperature (SST) derived from infrared remote sensing and ground "truth" from drifting buoy measurements. A large dataset of collocations between satellite SST and in situ SST is explored. Four regression models are used: simple multi-linear regression, Least Absolute Shrinkage and Selection Operator (LASSO), Generalised Additive Model (GAM) and random forest. In the case of geostationary satellites, for which a large number of collocations is available, results show that the random forest model is the best model for predicting the systematic errors, and it is computationally fast, making it a good candidate for operational processing. It is able to explain nearly 31% of the total variance of the bias (in comparison to about 24% for the multi-linear regression model).
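
    The model comparison described above can be reproduced in miniature with scikit-learn: fit a LASSO and a random forest to the same regression problem and compare explained variance. The synthetic data below stand in for the satellite/buoy collocation dataset and carry no physical meaning.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import explained_variance_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the collocation dataset: features might be viewing
# angle, water vapour, wind speed, etc.; here they are random placeholders.
X, y = make_regression(n_samples=2000, n_features=8, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("LASSO", Lasso(alpha=0.1)),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    score = explained_variance_score(y_te, model.predict(X_te))
    print(f"{name}: explained variance = {score:.2f}")
```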

  3. MAESTRO -- A Model and Expert System Tuning Resource for Operators

    International Nuclear Information System (INIS)

    Lager, D.L.; Brand, H.R.; Maurer, W.J.; Coffield, F.E.; Chambers, F.

    1989-01-01

    We have developed MAESTRO, a Model And Expert System Tuning Resource for Operators. It provides a unified software environment for optimizing the performance of large, complex machines, in particular the Advanced Test Accelerator and Experimental Test Accelerator at Lawrence Livermore National Laboratory. The system incorporates three approaches to tuning: a mouse-based manual interface to select and control magnets and to view displays of machine performance; an automation based on 'cloning the operator' by implementing the strategies and reasoning used by the operator; and an automation based on a simulator model which, when accurately matched to the machine, allows downloading of optimal sets of parameters and permits diagnosing errors in the beamline. The latter two approaches are based on the artificial intelligence technique known as Expert Systems. 4 refs., 4 figs

  4. Mathematical modelling of unglazed solar collectors under extreme operating conditions

    DEFF Research Database (Denmark)

    Bunea, M.; Perers, Bengt; Eicher, S.

    2015-01-01

    Combined heat pumps and solar collectors got a renewed interest on the heating system market worldwide. Connected to the heat pump evaporator, unglazed solar collectors can considerably increase their efficiency, but they also raise the coefficient of performance of the heat pump with higher average temperature levels at the evaporator. Simulation of these systems requires a collector model that can take into account operation at very low temperatures (below freezing) and under various weather conditions, particularly operation without solar irradiation. A solar collector mathematical model … Based on experiments carried out at a test facility, every heat flux on the absorber was separately … was found due to the condensation phenomenon and up to 40% due to frost under no solar irradiation. This work also points out the influence of the operating conditions on the collector's characteristics.

  5. MAESTRO - a model and expert system tuning resource for operators

    International Nuclear Information System (INIS)

    Lager, D.L.; Brand, H.R.; Maurer, W.J.; Coffield, F.; Chambers, F.

    1990-01-01

    We have developed MAESTRO, a model and expert system tuning resource for operators. It provides a unified software environment for optimizing the performance of large, complex machines, in particular the Advanced Test Accelerator and Experimental Test Accelerator at Lawrence Livermore National Laboratory. The system incorporates three approaches to tuning: a mouse-based manual interface to select and control magnets and to view displays of machine performance; an automation based on 'cloning the operator' by implementing the strategies and reasoning used by the operator; and an automation based on a simulator model which, when accurately matched to the machine, allows downloading of optimal sets of parameters and permits diagnosing errors in the beam line. The latter two approaches are based on the artificial-intelligence technique known as Expert Systems. (orig.)

  6. INTELLECTUAL MODEL FORMATION OF RAILWAY STATION WORK DURING THE TRAIN OPERATION EXECUTION

    Directory of Open Access Journals (Sweden)

    O. V. Lavrukhin

    2014-11-01

    Purpose. The aim of this research work is to develop an intelligent technology for determining the optimal route of freight train administration on the basis of technical and technological parameters. This will allow the station duty officer to receive operationally informed decisions regarding train operation execution within the railway station. Methodology. The main elements of the research are the technical and technological parameters of the train station during train operation. Neural network methods form the basis of the generated model of train operation execution, in order to obtain a self-teaching automated system. Findings. The presented model of train operation execution at the railway station is realized on the basis of artificial neural networks, using a learning algorithm with a «teacher» in the Matlab environment. Matlab is also used for the immediate implementation of the intelligent automated control system of train operation designed for integration into the automated workplace of the station duty officer. The developed system is also useful for integration into the workplace of the traffic controller; this proposal is viable where centralized traffic control is available on the separate section of railway track. Originality. A model of train station operation during train operation execution with elements of artificial intelligence was formed. It provides informed decisions to the station duty officer concerning the choice of a rational and safe option for reception and non-stop run of trains, with the ability of self-learning and adaptation to changing conditions. This is achieved by the principles of neural network functioning. Practical value. A model of the intelligent management system for process control that determines the optimal reception route for different categories of trains was formed. In the operational mode it offers the possibility

  7. Towards The Operational Oceanographic Model System In Estonian Coastal Sea, Baltic Sea

    Science.gov (United States)

    Kõuts, T.; Elken, J.; Raudsepp, U.

    An integrated system of nested 2D and 3D hydrodynamic models together with real-time forcing data acquisition is designed and set up in pre-operational mode in the Gulf of Finland and Gulf of Riga, the Baltic Sea. Along the Estonian coast, implicit time-stepping 3D models are used in the deep bays and 2D models in the shallow bays, with a horizontal grid step of about 200 m. Specific model setups have been verified by in situ current measurements. An optimum configuration of initial parameters has been found for certain critical locations, usually ports, oil terminals, etc. The operational system also integrates a section of the historical database of the most important hydrologic parameters in the region, allowing certain statistical analyses and a proper setup of initial conditions for the oceanographic models. There is a large variety of applications for such a model system, ranging from environmental impact assessment of local coastal sea pollution problems to forecasts of offshore blue-green algal blooms. The most probable risk factor in coastal sea engineering is oil pollution, therefore the current operational model system has a direct custom-oriented output: the oil spill forecast for critical locations. The oil spill module of the operational system consists of an automatic weather and hydrometric station (distributed in real time to the internet) and a prognostic model of sea surface currents. The system is run using the last 48 hours of wind data together with the wind forecast, and estimates probable oil deposition areas on the shoreline under given weather conditions. The calculated evolution of oil pollution has been compared with some real accidents in the past, and good agreement was found between model and measurements. The graphical user interface of the oil spill model is currently installed at port authorities (e.g. Muuga port), so in case of accidents it can be used in real time to support rescue operations. In 2000 the current pre-operational oceanographic model system was successfully used to

  8. Facility Will Help Transition Models Into Operations

    Science.gov (United States)

    Kumar, Mohi

    2009-02-01

    The U.S. National Oceanic and Atmospheric Administration's Space Weather Prediction Center (NOAA SWPC), in partnership with the U.S. Air Force Weather Agency (AFWA), is establishing a center to promote and facilitate the transition of space weather models to operations. The new facility, called the Developmental Testbed Center (DTC), will take models used by researchers and rigorously test them to see if they can withstand continued use as viable warning systems. If a model used in a space weather warning system crashes or fails to perform well, severe consequences can result. These include increased radiation risks to astronauts and people traveling on high-altitude flights, national security vulnerabilities from the loss of military satellite communications, and the cost of replacing damaged military and commercial spacecraft.

  9. The development of a model of control room operator cognition

    International Nuclear Information System (INIS)

    Harrison, C. Felicity

    1998-01-01

    The nuclear generating station CRO (control room operator) is one of the main contributors to plant performance and safety. In the past, studies of operator behaviour have been made under emergency or abnormal situations, with little consideration given to the more routine aspects of plant operation. One of the tasks of the operator is to detect the early signs of a problem and to take steps to prevent a transition to an abnormal plant state. In order to do this, the CRO must determine that plant indications are no longer in the normal range and take action to prevent a further move away from normal. This task is made more difficult by the extreme complexity of the control room, and by the many hindrances that the operator must face. It would therefore be of great benefit to understand CRO cognitive performance, especially under normal operating conditions. Through research carried out at several Canadian nuclear facilities, the consultants were asked to develop a deeper understanding of CRO monitoring of highly automated systems during normal operations, and specifically to investigate the contributions of cognitive skills to monitoring performance. The overall objective of this research was to develop and validate a model of CRO monitoring. The findings of this research have practical implications for systems integration, training, and interface design. The result of this work was a model of operator monitoring activities. (author)

  10. Dynamic modeling of temperature change in outdoor operated tubular photobioreactors.

    Science.gov (United States)

    Androga, Dominic Deo; Uyar, Basar; Koku, Harun; Eroglu, Inci

    2017-07-01

    In this study, a one-dimensional transient model was developed to analyze the temperature variation of tubular photobioreactors operated outdoors and the validity of the model was tested by comparing the predictions of the model with the experimental data. The model included the effects of convection and radiative heat exchange on the reactor temperature throughout the day. The temperatures in the reactors increased with increasing solar radiation and air temperatures, and the predicted reactor temperatures corresponded well to the measured experimental values. The heat transferred to the reactor was mainly through radiation: the radiative heat absorbed by the reactor medium, ground radiation, air radiation, and solar (direct and diffuse) radiation, while heat loss was mainly through the heat transfer to the cooling water and forced convection. The amount of heat transferred by reflected radiation and metabolic activities of the bacteria and pump work was negligible. Counter-current cooling was more effective in controlling reactor temperature than co-current cooling. The model developed identifies major heat transfer mechanisms in outdoor operated tubular photobioreactors, and accurately predicts temperature changes in these systems. This is useful in determining cooling duty under transient conditions and scaling up photobioreactors. The photobioreactor design and the thermal modeling were carried out and experimental results obtained for the case study of photofermentative hydrogen production by Rhodobacter capsulatus, but the approach is applicable to photobiological systems that are to be operated under outdoor conditions with significant cooling demands.
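
    A minimal version of such a transient energy balance can be written as a single ODE for the medium temperature, with a radiative gain term and convective/cooling loss terms, and integrated with SciPy as sketched below. All coefficients, areas and forcing profiles are invented placeholders, not the paper's calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative constants (assumptions, not the paper's values)
M_CP = 4.2e5      # heat capacity of reactor medium, J/K
AREA = 2.0        # irradiated tube area, m^2
ALPHA = 0.8       # absorptivity of the medium
H_CONV = 15.0     # convective exchange coefficient with air, W/(m^2 K)
UA_COOL = 50.0    # cooling-water heat exchange, W/K
T_COOL = 293.0    # cooling water temperature, K

def t_air(t):
    """Diurnal air temperature cycle, K (placeholder forcing)."""
    return 293.0 + 8.0 * np.sin(2 * np.pi * t / 86400.0)

def solar(t):
    """Solar irradiance, W/m^2 (daylight half-sine, placeholder forcing)."""
    return max(0.0, 800.0 * np.sin(2 * np.pi * t / 86400.0))

def dTdt(t, T):
    gain = ALPHA * AREA * solar(t)            # absorbed solar radiation
    conv = H_CONV * AREA * (t_air(t) - T[0])  # convective exchange with air
    cool = UA_COOL * (T_COOL - T[0])          # cooling-water heat removal
    return [(gain + conv + cool) / M_CP]

sol = solve_ivp(dTdt, (0.0, 86400.0), [293.0], max_step=60.0)
print(f"peak medium temperature: {sol.y[0].max() - 273.15:.1f} C")
```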

  11. Mathematical basis for the process of model simulation of drilling operations

    Energy Technology Data Exchange (ETDEWEB)

    Lipovetskiy, G M; Lebedinskiy, G L

    1979-01-01

    The authors describe the application of a method for the model simulation of drilling operations and for the solution of problems concerned with the planning and management of such operations. A description is offered of an approach to the simulation process when the drilling operations are part of a large system. An algorithm is provided for calculating complex events.

  12. The Role of a Mental Model in Learning to Operate a Device.

    Science.gov (United States)

    Kieras, David E.; Bovair, Susan

    1984-01-01

    Describes three studies concerned with learning to operate a control panel device and how this learning is affected by understanding a device model that describes its internal mechanism. Results indicate benefits of a device model depend on whether it supports direct inference of exact steps required to operate the device. (Author/MBR)

  13. Operational characteristics of nuclear power plants - modelling of operational safety; Pogonske karakteristike nuklearnih elektrana - modelsko izucavanje pogonske sigurnosti

    Energy Technology Data Exchange (ETDEWEB)

    Studovic, M [Masinski fakultet, Beograd (Yugoslavia)]

    1984-07-01

    Drawing on the operational experience of nuclear power plants and the realized levels of plant availability, system and component reliability, operational safety and public protection, as well as on the nature of disturbances in power plant systems and the lessons drawn from TMI-2, the paper discusses: examination of design safety for the ultimate assurance of safe operating conditions of the nuclear power plant; the significance of adequate action for keeping process parameters within prescribed limits and meeting reactor cooling requirements; developed systems for measurement, detection and monitoring of all critical parameters in the nuclear steam supply system; the content of theoretical investigation and mathematical modeling of the physical phenomena and processes in nuclear power plant systems and components as software support for ensuring operational safety and a new approach to staff education; and the program and progress of the investigation of some physical phenomena and mathematical modeling of nuclear plant transients, prepared at the Faculty of Mechanical Engineering in Belgrade. (author)

  14. Modeling the Environmental Impact of Air Traffic Operations

    Science.gov (United States)

    Chen, Neil

    2011-01-01

    There is increased interest in understanding and mitigating the impacts of air traffic on the climate, since greenhouse gases, nitrogen oxides, and contrails generated by air traffic can have adverse impacts on the climate. The models described in this presentation are useful for quantifying these impacts and for studying alternative environmentally aware operational concepts. These models have been developed by leveraging and building upon existing simulation and optimization techniques developed for the design of efficient traffic flow management strategies. Specific enhancements to the existing simulation and optimization techniques include new models that simulate aircraft fuel flow, emissions and contrails. To ensure that these new models are beneficial to the larger climate research community, their outputs are compatible with existing global climate modeling tools like the FAA's Aviation Environmental Design Tool.

  15. VERIFICATION OF GEAR DYNAMIC MODEL IN DIFFERENT OPERATING CONDITIONS

    Directory of Open Access Journals (Sweden)

    Grzegorz PERUŃ

    2014-09-01

    The article presents the results of verification of a dynamic model of a drive system with gears. Tests were carried out on the real object in different operating conditions, and simulation studies were carried out for the same assumed conditions. Comparison of the results obtained from the two series of tests helped determine the suitability of the model and verify the possibility of replacing experimental research with simulations using the dynamic model.

  16. A model technology transfer program for independent operators

    Energy Technology Data Exchange (ETDEWEB)

    Schoeling, L.G.

    1996-08-01

    In August 1992, the Energy Research Center (ERC) at the University of Kansas was awarded a contract by the US Department of Energy (DOE) to develop a technology transfer regional model. This report describes the development and testing of the Kansas Technology Transfer Model (KTTM) which is to be utilized as a regional model for the development of other technology transfer programs for independent operators throughout oil-producing regions in the US. It describes the linkage of the regional model with a proposed national technology transfer plan, an evaluation technique for improving and assessing the model, and the methodology which makes it adaptable on a regional basis. The report also describes management concepts helpful in managing a technology transfer program.

  17. Structure of Poincare covariant tensor operators in quantum mechanical models

    International Nuclear Information System (INIS)

    Polyzou, W.N.; Klink, W.H.

    1988-01-01

    The structure of operators that transform covariantly in Poincare invariant quantum mechanical models is analyzed. These operators are shown to have an interaction dependence that comes from the geometry of the Poincare group. The operators can be expressed in terms of matrix elements in a complete set of eigenstates of the mass and spin operators associated with the dynamical representation of the Poincare group. The matrix elements are factored into geometrical coefficients (Clebsch-Gordan coefficients for the Poincare group) and invariant matrix elements. The geometrical coefficients are fixed by the transformation properties of the operator and the eigenvalue spectrum of the mass and spin. The invariant matrix elements, which distinguish between different operators with the same transformation properties, are given in terms of a set of invariant form factors.

  18. Modeling lift operations with SAS® Simulation Studio

    Science.gov (United States)

    Kar, Leow Soo

    2016-10-01

    Lifts or elevators are an essential part of multistorey buildings, providing vertical transportation for their occupants. In large high-rise apartment buildings the occupants are permanent, while in buildings like hospitals or office blocks the occupants are temporary users of the buildings. They come in to work or to visit, and thus the population of such buildings is much higher than that of residential apartments. It is common these days for large office blocks or hospitals to have at least 8 to 10 lifts serving their population. In order to optimize the level of service performance, different transportation schemes are devised to control the lift operations. For example, one lift may be assigned to solely service the even floors and another solely the odd floors, etc. In this paper, a basic lift system is modelled using SAS Simulation Studio to study the effect of factors such as the number of floors, capacity of the lift car, arrival rate and exit rate of passengers at each floor, and peak and off-peak periods on the system performance. The simulation is applied to a real lift operation in Sunway College's North Building to validate the model.

  19. Sleep duration, daytime napping, markers of obstructive sleep apnea and stroke in a population of southern China

    Science.gov (United States)

    Wen, Ye; Pi, Fu-Hua; Guo, Pi; Dong, Wen-Ya; Xie, Yu-Qing; Wang, Xiang-Yu; Xia, Fang-Fang; Pang, Shao-Jie; Wu, Yan-Chun; Wang, Yuan-Yuan; Zhang, Qing-Ying

    2016-01-01

    Sleep habits are associated with stroke in western populations, but this relation has rarely been investigated in China. Moreover, the differences among stroke subtypes remain unclear. This study aimed to explore the associations of total stroke, including the ischemic and hemorrhagic types, with the sleep habits of a population in southern China. We performed a case-control study in patients admitted to the hospital with first stroke and community control subjects. A total of 333 patients (n = 223, 67.0%, with ischemic stroke; n = 110, 33.0%, with hemorrhagic stroke) and 547 controls were enrolled in the study. Participants completed a structured questionnaire to identify sleep habits and other stroke risk factors. Least absolute shrinkage and selection operator (Lasso) and multiple logistic regression were performed to identify risk factors for disease. Incidence of stroke and its subtypes was significantly associated with snorting/gasping, snoring, sleep duration, and daytime napping. Snorting/gasping was identified as an important risk factor in the Lasso logistic regression model (Lasso β = 0.84), and the result was proven to be robust. This study showed the association between stroke and sleep habits in the southern Chinese population and might help in better detecting important sleep-related factors for stroke risk. PMID:27698374
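
    A Lasso-penalized logistic regression of the kind used in this study can be fitted with scikit-learn's L1 penalty, which drives uninformative coefficients to exactly zero. The snippet below runs on fabricated case-control data; the feature names mirror the abstract, but the values and the regularization strength are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["snorting_gasping", "snoring", "sleep_duration", "daytime_napping"]

# Fabricated case-control data: 0/1 exposures plus hours of sleep.
X = np.column_stack([
    rng.integers(0, 2, 880),          # snorting/gasping
    rng.integers(0, 2, 880),          # snoring
    rng.normal(7.0, 1.2, 880),        # sleep duration (hours)
    rng.integers(0, 2, 880),          # daytime napping
])
y = rng.integers(0, 2, 880)           # 1 = stroke case, 0 = control

# The L1 penalty performs the variable selection that Lasso is used for.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
for name, beta in zip(features, model.coef_[0]):
    print(f"{name}: beta = {beta:+.2f}")
```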

  20. Computer-Aided Model Based Analysis for Design and Operation of a Copolymerization Process

    DEFF Research Database (Denmark)

    Lopez-Arenas, Maria Teresa; Sales-Cruz, Alfonso Mauricio; Gani, Rafiqul

    2006-01-01

    The advances in computer science and computational algorithms for process modelling, process simulation, numerical methods and design/synthesis algorithms make it advantageous and helpful to employ computer-aided modelling systems and tools for integrated process analysis. This is illustrated … In this work, through the computer-aided modeling system ICAS-MoT, two first-principles models have been investigated with respect to design and operational issues for solution copolymerization reactors in general, and for the methyl methacrylate/vinyl acetate system in particular. Model 1 is taken from the literature and is commonly used for the low conversion region, while Model 2 has … This will allow analysis of the process behaviour, contribute to a better understanding of the polymerization process, help to avoid unsafe conditions of operation, and support the development of operational and optimizing control strategies.

  1. Multiscale Data Assimilation for Large-Eddy Simulations

    Science.gov (United States)

    Li, Z.; Cheng, X.; Gustafson, W. I., Jr.; Xiao, H.; Vogelmann, A. M.; Endo, S.; Toto, T.

    2017-12-01

    Large-eddy simulation (LES) is a powerful tool for understanding atmospheric turbulence, boundary layer physics and cloud development, and there is a great need for developing data assimilation methodologies that can constrain LES models. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) User Facility has been developing the capability to routinely generate ensembles of LES. The LES ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/modeling/lasso) is generating simulations for shallow convection days at the ARM Southern Great Plains site in Oklahoma. One of the major objectives of LASSO is to develop the capability to observationally constrain LES using a hierarchy of ARM observations. We have implemented a multiscale data assimilation (MSDA) scheme, which allows data assimilation to be implemented separately for distinct spatial scales, so that localized observations can be effectively assimilated to constrain the mesoscale fields in the LES area of about 15 km in width. The MSDA analysis is used to produce forcing data that drive the LES. With this LES workflow, we have examined 13 days with shallow convection selected from the period May-August 2016. We will describe the implementation of MSDA, present LES results, and address challenges and opportunities for applying data assimilation to LES studies.

  2. Methodology of synchronization among strategy and operation. A standards-based modeling approach

    Directory of Open Access Journals (Sweden)

    VICTOR EDWIN COLLAZOS

    2017-05-01

    Enterprise Architecture (EA) has gained importance in recent years, mainly for its concept of "alignment" between the strategic and operational levels of organizations. Such alignment occurs when Information Technology (IT) is applied correctly and timely, working in synergy and harmony with strategy and the operation to achieve their mutual goals and satisfy organizational needs. Both the strategic and operational levels have standards that help model the elements necessary to obtain the desired results. In this sense, BMM and BPMN were selected because both have the support of the OMG and are fairly well known for modelling the strategic level and operational level, respectively. In addition, i* goal modeling can be used to reduce the gap between these two standards. This proposal may contribute both to the high-level design of the information system and to the appropriate identification of the business processes that will support it. This paper presents a methodology for aligning strategy and the operation based on standards and heuristics. We have made a classification of the elements of the models and, for some specific cases, an extension of the heuristics associated between them. This allows us to propose a methodology which uses the above-mentioned standards and combines mappings, transformations and actions to be considered in the alignment process.

  3. System Dynamics Modeling of Multipurpose Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Ebrahim Momeni

    2006-03-01

    System dynamics, a feedback-based object-oriented simulation approach, not only represents complex dynamic systems in a realistic way but also allows the involvement of end users in model development, increasing their confidence in the modeling process. The increased speed of model development, the possibility of group model development, the effective communication of model results, and the trust developed in the model due to user participation are the main strengths of this approach. The ease of model modification in response to changes in the system and the ability to perform sensitivity analysis make this approach more attractive than systems analysis techniques for modeling water management systems. In this study, a system dynamics model was developed for the Zayandehrud basin in central Iran. This model contains the river basin, dam reservoir, plains, irrigation systems, and groundwater. The current operation rule is conjunctive use of ground and surface water. The allocation factor for each irrigation system is computed based on feedback from groundwater storage in its zone, and deficit water is extracted from groundwater. The results show that applying better rules can not only satisfy all demands, such as the Gawkhuni swamp environmental demand, but can also prevent groundwater level drawdown in the future.

  4. Modeling Battery Behavior on Sensory Operations for Context-Aware Smartphone Sensing

    Directory of Open Access Journals (Sweden)

    Ozgur Yurur

    2015-05-01

    Energy consumption is a major concern in context-aware smartphone sensing. This paper first studies mobile device-based battery modeling, which adopts the kinetic battery model (KiBaM), under the scope of battery non-linearities with respect to variant loads. Second, this paper models the energy consumption behavior of accelerometers analytically and then provides extensive simulation results and a smartphone application to examine the proposed sensor model. Third, a Markov reward process is integrated to create energy consumption profiles, linking with sensory operations and their effects on battery non-linearity. Energy consumption profiles consist of different pairs of duty cycles and sampling frequencies during sensory operations. Furthermore, the total energy cost by each profile is represented by an accumulated reward in this process. Finally, three different methods are proposed on the evolution of the reward process, to present the linkage between different usage patterns on the accelerometer sensor through a smartphone application and the battery behavior. By doing this, this paper aims at achieving a fine efficiency in power consumption caused by sensory operations, while maintaining the accuracy of smartphone applications based on sensor usages. More importantly, this study intends that modeling the battery non-linearities together with investigating the effects of different usage patterns in sensory operations in terms of the power consumption and the battery discharge may lead to discovering optimal energy reduction strategies to extend the battery lifetime and help a continual improvement in context-aware mobile services.

  5. Modeling battery behavior on sensory operations for context-aware smartphone sensing.

    Science.gov (United States)

    Yurur, Ozgur; Liu, Chi Harold; Moreno, Wilfrido

    2015-05-26

    Energy consumption is a major concern in context-aware smartphone sensing. This paper first studies mobile device-based battery modeling, which adopts the kinetic battery model (KiBaM), under the scope of battery non-linearities with respect to variant loads. Second, this paper models the energy consumption behavior of accelerometers analytically and then provides extensive simulation results and a smartphone application to examine the proposed sensor model. Third, a Markov reward process is integrated to create energy consumption profiles, linking with sensory operations and their effects on battery non-linearity. Energy consumption profiles consist of different pairs of duty cycles and sampling frequencies during sensory operations. Furthermore, the total energy cost by each profile is represented by an accumulated reward in this process. Finally, three different methods are proposed on the evolution of the reward process, to present the linkage between different usage patterns on the accelerometer sensor through a smartphone application and the battery behavior. By doing this, this paper aims at achieving a fine efficiency in power consumption caused by sensory operations, while maintaining the accuracy of smartphone applications based on sensor usages. More importantly, this study intends that modeling the battery non-linearities together with investigating the effects of different usage patterns in sensory operations in terms of the power consumption and the battery discharge may lead to discovering optimal energy reduction strategies to extend the battery lifetime and help a continual improvement in context-aware mobile services.
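
    The kinetic battery model (KiBaM) adopted above splits the charge into an available well and a bound well that exchange charge at a fixed rate, which is what produces the load-dependent non-linearity. Below is a minimal discrete-time sketch; the parameter values and the load are illustrative assumptions.

```python
def kibam_discharge(y_avail, y_bound, load, k=4.5e-4, c=0.62, dt=1.0, steps=3600):
    """Step the two KiBaM charge wells forward under a constant load (A).
    y_avail: available charge (A*s), y_bound: bound charge (A*s),
    c: capacity fraction of the available well, k: rate constant (1/s)."""
    for _ in range(steps):
        # Flow between wells is driven by the difference in their "heights".
        h_avail, h_bound = y_avail / c, y_bound / (1.0 - c)
        flow = k * (h_bound - h_avail)
        y_avail += (flow - load) * dt
        y_bound -= flow * dt
        if y_avail <= 0.0:  # battery appears empty once the available well drains
            break
    return y_avail, y_bound

# One hour at a 0.5 A sensing load from a 2000 mAh (7200 A*s) battery.
avail, bound = kibam_discharge(0.62 * 7200, 0.38 * 7200, load=0.5)
print(f"available: {avail:.0f} A*s, bound: {bound:.0f} A*s")
```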

  6. Capturing the musical brain with Lasso

    DEFF Research Database (Denmark)

    Toiviainen, Petri; Alluri, Vinoo; Brattico, Elvira

    2014-01-01

    … accuracy using a leave-one-out cross-validation scheme. The method was applied to functional magnetic resonance imaging (fMRI) data that were collected using a naturalistic paradigm, in which participants' brain responses were recorded while they were continuously listening to pieces of real music. Of the six musical features considered, five could be significantly predicted for the majority of participants. The areas significantly contributing to the optimal decoding models agreed … to be consistent with areas of significant activation observed in previous research using a naturalistic paradigm with fMRI.

  7. The co-operative model as a means of stakeholder management: An exploratory qualitative analysis

    Directory of Open Access Journals (Sweden)

    Darrell Hammond

    2016-11-01

    The South African economy has for some time been characterised by high unemployment, income inequality and a skills mismatch, all of which have contributed to conflict between business, government and labour. The co-operative model of stakeholder management is examined as a possible mitigating organisational form in this high-conflict environment. International experience indicates some success with co-operative models but they are not easy to implement effectively and face severe obstacles. Trust and knowledge sharing are critical for enabling a co-operative model of stakeholder management, which requires strong governance and adherence to strict rules. The model must balance the tension between optimisation of governance structures and responsiveness to members' needs. Furthermore, support from social and political institutions is necessary. We find barriers to scalability which manifest in the lack of depth of business skills, negative perception of the co-operative model by external stakeholders, government ambivalence, and a lack of willingness on the part of workers to co-operate for mutual benefit.

  8. A Knowledge-Based Expert System Using MFM Model for Operator Supporting

    International Nuclear Information System (INIS)

    Mo, Kun; Seong, Poong Hyun

    2005-01-01

    In this paper, a knowledge-based expert system using MFM (Multilevel Flow Modeling) is proposed for enhancing the operators' ability to cope with various situations in a nuclear power plant. There are many complicated situations in which regular and suitable operations should be carried out by operators accordingly. In order to help operators assess situations promptly and accurately, and to regulate their operations according to these situations, it is necessary to develop an expert system that helps the operator with fault diagnosis, alarm analysis, and estimation of operation results for each operation. Many kinds of operator support systems focusing on different functions have been developed; most of them use various methodologies for a single diagnosis function or operation permission function. The proposed system integrates the functions of fault diagnosis, alarm analysis and operation result estimation through the basic MFM algorithm for operator support.

  9. A Coupled Snow Operations-Skier Demand Model for the Ontario (Canada) Ski Region

    Science.gov (United States)

    Pons, Marc; Scott, Daniel; Steiger, Robert; Rutty, Michelle; Johnson, Peter; Vilella, Marc

    2016-04-01

    The multi-billion dollar global ski industry is one of the tourism subsectors most directly impacted by climate variability and change. In the decades ahead, the scholarly literature consistently projects decreased reliability of natural snow cover, shortened and more variable ski seasons, as well as increased reliance on snowmaking with associated increases in operational costs. In order to develop the coupled snow, ski operations and demand model for the Ontario ski region (which represents approximately 18% of Canada's ski market), the research utilized multiple methods, including: an in situ survey of over 2400 skiers, daily operations data from ski resorts over the last 10 years, climate station data (1981-2013), a climate change scenario ensemble (AR5 - RCP 8.5), an updated SkiSim model (building on Scott et al. 2003; Steiger 2010), and an agent-based model (building on Pons et al. 2014). Daily snow conditions and ski operations for all ski areas in southern Ontario were modeled with the updated SkiSim model, which utilized the current differential snowmaking capacity of individual resorts, as determined from daily ski area operations data. Snowmaking capacities and decision rules were informed by interviews with ski area managers and daily operations data. Model outputs were validated with local climate station and ski operations data. The coupled SkiSim-ABM model was run with historical weather data for seasons representative of an average winter for the 1981-2010 period, as well as an anomalously cold winter (2012-13) and the record warm winter in the region (2011-12). The impacts on total skier visits and revenues, and on the geographic and temporal distribution of skier visits, were compared. The implications of further climate adaptation (i.e., improving the snowmaking capacity of all ski areas to the level of leading resorts in the region) were also explored. This research advances system modelling, especially improving the integration of snow and ski operations models with

  10. A Dynamic Operation Permission Technique Based on an MFM Model and Numerical Simulation

    International Nuclear Information System (INIS)

    Akio, Gofuku; Masahiro, Yonemura

    2011-01-01

    It is important to support operator activities in an abnormal plant situation where many counter actions are taken in a relatively short time. The authors proposed a technique called dynamic operation permission to decrease human errors without eliminating the creative ideas of operators in coping with an abnormal plant situation, by checking whether the counter action taken is consistent with the emergency operation procedure. If the counter action is inconsistent, a dynamic operation permission system warns the operators. It also explains how and why the counter action is inconsistent and what influence will appear in the future plant behavior, using a qualitative influence inference technique based on an MFM (Multilevel Flow Modeling) model. However, the previous dynamic operation permission is not able to explain quantitative effects on future plant behavior. Moreover, many possible influence paths are derived, because qualitative reasoning does not give a solution when positive and negative influences are propagated to the same node. This study extends dynamic operation permission by combining qualitative reasoning and a numerical simulation technique. The qualitative reasoning based on an MFM model of the plant derives all possible influence propagation paths. A numerical simulation then gives a prediction of the future plant behavior in the case of taking a counter action. Influence propagations that do not coincide with the simulation results are excluded from the possible influence paths. The extended technique is implemented in a dynamic operation permission system for an oil refinery plant. An MFM model and a static numerical simulator were developed. The results of dynamic operation permission for some abnormal plant situations show the improvement of the accuracy of dynamic operation permission and of the quality of the explanation of the effects of the counter action taken.

  11. Operational freight carrier planning basic concepts, optimization models and advanced memetic algorithms

    CERN Document Server

    Schönberger, Jörn

    2005-01-01

    The modern freight carrier business requires a sophisticated automatic decision support in order to ensure the efficiency and reliability and therefore the survival of transport service providers. This book addresses these challenges and provides generic decision models for the short-term operations planning as well as advanced metaheuristics to obtain efficient operation plans. After a thorough analysis of the operations planning in the freight carrier business, decision models are derived. Their suitability is proven within a large number of numerical experiments, in which a new class of hybrid genetic search approaches demonstrate their appropriateness.

  12. Modeling Multioperator Multi-UAV Operator Attention Allocation Problem Based on Maximizing the Global Reward

    Directory of Open Access Journals (Sweden)

    Yuhang Wu

    2016-01-01

    This paper focuses on the attention allocation problem (AAP) in modeling multioperator multi-UAV (MOMU) operations, with the operator model and task properties taken into consideration. A model of MOMU operator AAP based on maximizing the global reward is established and used to allocate tasks to all operators as well as to set the work time and rest time of each task simultaneously. The proposed model is validated in the Matlab simulation environment, using an immune algorithm and a dynamic programming algorithm to evaluate the performance of the model in terms of the reward value with regard to work time, rest time, and task allocation. The results show that the total reward of the proposed model is larger than that obtained from previously published methods using local maximization, and that the total reward of our method has an exponent-like relation with the task arrival rate. The proposed model can improve the operators' task processing efficiency in MOMU command and control scenarios.

  13. Framework for modeling supervisory control behavior of operators of nuclear power plants

    International Nuclear Information System (INIS)

    Baron, S.; Feehrer, C.; Muralidharan, R.; Pew, R.; Horwitz, P.

    1982-01-01

    The importance of modeling the human-machine system has long been recognized, and many attempts have been made to estimate the operator's effect on system performance and reliability. The development of reliability models has been aimed at providing the means for exploring the physical consequences of specific classes of human error. However, the total impact of human performance on system operation and the adequacy of existing design and operating standards cannot be adequately captured or assessed by simple error probabilities, or even by the combination of such probabilities. The behaviors of relevance are supervisory in nature, with a substantial cognitive component. The broad requirements for a model of human supervisory control are extensive and suggest that a highly sophisticated computer model will be needed. The purpose of this paper is to provide a brief overview of the approach employed in developing such supervisory control models, of some proposed specializations and extensions to adapt them to the nuclear power plant case, and of the potential utility of such a model.

  14. Modelling of Reservoir Operations using Fuzzy Logic and ANNs

    Science.gov (United States)

    Van De Giesen, N.; Coerver, B.; Rutten, M.

    2015-12-01

    Today, almost 40,000 large reservoirs, containing approximately 6,000 km3 of water and inundating an area of almost 400,000 km2, can be found on earth. Since these reservoirs have a storage capacity of almost one-sixth of the global annual river discharge, they have a large impact on the timing, volume and peaks of river discharges. Global Hydrological Models (GHMs) are thus significantly influenced by these anthropogenic changes in river flows. We developed a parametrically parsimonious method to extract operational rules based on historical reservoir storage and inflow time-series. Managing a reservoir is an imprecise and vague undertaking. Operators always face uncertainties about inflows, evaporation, seepage losses and various water demands to be met. They often base their decisions on experience and on available information, like reservoir storage and the previous period's inflow. We modeled this decision-making process through a combination of fuzzy logic and artificial neural networks in an Adaptive-Network-based Fuzzy Inference System (ANFIS). In a sensitivity analysis, we compared results for reservoirs in Vietnam, Central Asia and the USA. ANFIS can indeed capture reservoir operations adequately when fed with a historical monthly time-series of inflows and storage. It was shown that using ANFIS, the operational rules of existing reservoirs can be derived without much prior knowledge about the reservoirs. Their validity was tested by comparing actual and simulated releases with each other. For the eleven reservoirs modelled, the normalised outflow was predicted with an MSE of 0.002 to 0.044. The rules can be incorporated into GHMs. After a network for a specific reservoir has been trained, the inflow calculated by the hydrological model can be combined with the release and initial storage to calculate the storage for the next time-step using a mass balance. Subsequently, the release can be predicted one time-step ahead using the inflow and storage.
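
    The last step described above, chaining the trained network with a mass balance, is straightforward to express in code. The sketch below assumes a hypothetical predict_release(inflow, storage) callable standing in for the trained ANFIS and steps the storage forward one month at a time.

```python
def simulate_storage(inflows, storage0, predict_release):
    """Propagate reservoir storage with a simple mass balance:
    S[t+1] = S[t] + I[t] - R[t], where R comes from the trained model.
    predict_release is a stand-in for the ANFIS (hypothetical interface)."""
    storage, releases = storage0, []
    for inflow in inflows:  # monthly time-series
        release = min(predict_release(inflow, storage), storage + inflow)
        storage = storage + inflow - release  # losses ignored in this sketch
        releases.append(release)
    return releases

# Toy rule standing in for the trained network: release more when fuller.
releases = simulate_storage(
    inflows=[120.0, 80.0, 150.0, 60.0],
    storage0=500.0,
    predict_release=lambda i, s: 0.1 * s + 0.5 * i,
)
print(releases)
```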

  15. Upper Rio Grande water operations model: A tool for enhanced system management

    Science.gov (United States)

    Gail Stockton; D. Michael Roark

    1999-01-01

    The Upper Rio Grande Water Operations Model (URGWOM) under development through a multi-agency effort has demonstrated capability to represent the physical river/reservoir system, to track and account for Rio Grande flows and imported San Juan flows, and to forecast flows at various points in the system. Testing of the Rio Chama portion of the water operations model was...

  16. A computational model for knowledge-driven monitoring of nuclear power plant operators based on information theory

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2006-01-01

    To develop operator behavior models such as IDAC, quantitative models for the cognitive activities of nuclear power plant (NPP) operators in abnormal situations are essential. Among them, only a few quantitative models for monitoring and detection have been developed. In this paper, we propose a computational model for the knowledge-driven monitoring, also known as model-driven monitoring, of NPP operators in abnormal situations, based on information theory. The basic assumption of the proposed model is that the probability that an operator shifts his or her attention to an information source is proportional to the expected information from that information source. A small experiment performed to evaluate the feasibility of the proposed model shows that the predictions made by the proposed model have high correlations with the experimental results. Even though it has been argued that heuristics might play an important role in human reasoning, we believe that the proposed model can provide part of the mathematical basis for developing quantitative models for knowledge-driven monitoring of NPP operators when NPP operators are assumed to behave very logically.
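
    The model's core assumption, that the probability of shifting attention to a source is proportional to its expected information, can be sketched numerically with Shannon entropy over hypothetical indicator-state distributions:

```python
import numpy as np

def attention_probabilities(source_distributions):
    """Probability of shifting attention to each information source,
    taken proportional to its expected information (Shannon entropy)."""
    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))   # expected information in bits

    h = np.array([entropy(p) for p in source_distributions])
    return h / h.sum()                   # normalise to probabilities

# Three hypothetical indicators; the most uncertain one (closest to a
# uniform distribution over its states) attracts the most attention.
sources = [[0.5, 0.5], [0.9, 0.1], [0.4, 0.3, 0.3]]
print(attention_probabilities(sources))
```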

  17. A computational model for evaluating the effects of attention, memory, and mental models on situation assessment of nuclear power plant operators

    International Nuclear Information System (INIS)

    Lee, Hyun-Chul; Seong, Poong-Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, as failures of situation assessment may cause wrong decisions for process control and finally errors of commission in nuclear power plants. A few computational models that can be used to predict and quantify the situation awareness of operators have been suggested. However, these models do not sufficiently consider human characteristics for nuclear power plant operators. In this paper, we propose a computational model for situation assessment of nuclear power plant operators using a Bayesian network. This model incorporates human factors significantly affecting operators' situation assessment, such as attention, working memory decay, and mental model. As this proposed model provides quantitative results of situation assessment and diagnostic performance, we expect that this model can be used in the design and evaluation of human system interfaces as well as the prediction of situation awareness errors in the human reliability analysis.

  18. A computational model for evaluating the effects of attention, memory, and mental models on situation assessment of nuclear power plant operators

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyun-Chul [Instrumentation and Control/Human Factors Division, Korea Atomic Energy Research Institute, 1045 Daedeok-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of)], E-mail: leehc@kaeri.re.kr; Seong, Poong-Hyun [Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, 373-1, Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)

    2009-11-15

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, as failures of situation assessment may cause wrong decisions for process control and finally errors of commission in nuclear power plants. A few computational models that can be used to predict and quantify the situation awareness of operators have been suggested. However, these models do not sufficiently consider human characteristics for nuclear power plant operators. In this paper, we propose a computational model for situation assessment of nuclear power plant operators using a Bayesian network. This model incorporates human factors significantly affecting operators' situation assessment, such as attention, working memory decay, and mental model. As this proposed model provides quantitative results of situation assessment and diagnostic performance, we expect that this model can be used in the design and evaluation of human system interfaces as well as the prediction of situation awareness errors in the human reliability analysis.
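
    The Bayesian updating at the heart of such a situation-assessment model can be illustrated by a single evidence update in which the likelihood is blended toward an uninformative value by an attention/memory factor; the states, probabilities, and blending scheme below are hypothetical, not the authors' network:

```python
# Hypothetical plant states and the operator's prior belief.
priors = {"normal": 0.90, "sgtr": 0.06, "loca": 0.04}

# P(alarm observed | plant state): likelihood of one HSI indication.
likelihood = {"normal": 0.02, "sgtr": 0.85, "loca": 0.60}

# Attention/working-memory factor in [0, 1]: evidence that is poorly
# attended to (or has decayed) is blended toward an uninformative 0.5.
attention = 0.8
effective = {s: attention * likelihood[s] + (1 - attention) * 0.5
             for s in priors}

# Bayes rule: posterior is proportional to prior times likelihood.
unnormalized = {s: priors[s] * effective[s] for s in priors}
z = sum(unnormalized.values())
posterior = {s: v / z for s, v in unnormalized.items()}
print(posterior)
```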

  19. PWR plant operator training used full scope simulator incorporated MAAP model

    International Nuclear Information System (INIS)

    Matsumoto, Y.; Tabuchi, T.; Yamashita, T.; Komatsu, Y.; Tsubouchi, K.; Banka, T.; Mochizuki, T.; Nishimura, K.; Iizuka, H.

    2015-01-01

    NTC works to build understanding of plant behavior during core damage accidents as part of our advanced training. In response to the Fukushima Daiichi Nuclear Power Station accident, we introduced the MAAP model into the PWR operator training full-scope simulator and also built a Severe Accident Visual Display unit. From 2014, we will introduce a new training program for core damage accidents using the PWR operator training full-scope simulator incorporating the MAAP model and the Severe Accident Visual Display unit. (author)

  20. Pre-operative simulation of pediatric mastoid surgery with 3D-printed temporal bone models.

    Science.gov (United States)

    Rose, Austin S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Rawal, Rounak B; Iseli, Claire E

    2015-05-01

    As the process of additive manufacturing, or three-dimensional (3D) printing, has become more practical and affordable, a number of applications for the technology in the field of pediatric otolaryngology have been considered. One area of promise is temporal bone surgical simulation. Having previously developed a model for temporal bone surgical training using 3D printing, we sought to produce a patient-specific model for pre-operative simulation in pediatric otologic surgery. Our hypothesis was that the creation and pre-operative dissection of such a model were possible, and would demonstrate potential benefits in cases of abnormal temporal bone anatomy. In the case presented, an 11-year-old boy underwent a planned canal-wall-down (CWD) tympano-mastoidectomy for recurrent cholesteatoma preceded by a pre-operative surgical simulation using 3D-printed models of the temporal bone. The models were based on the child's pre-operative clinical CT scan and printed using multiple materials to simulate both bone and soft tissue structures. To help confirm the models as accurate representations of the child's anatomy, distances between various anatomic landmarks were measured and compared to the temporal bone CT scan and the 3D model. The simulation allowed the surgical team to appreciate the child's unusual temporal bone anatomy as well as any challenges that might arise in the safety of the temporal bone laboratory, prior to actual surgery in the operating room (OR). There was minimal variability, in terms of absolute distance (mm) and relative distance (%), in measurements between anatomic landmarks obtained from the patient intra-operatively, the pre-operative CT scan and the 3D-printed models. Accurate 3D temporal bone models can be rapidly produced based on clinical CT scans for pre-operative simulation of specific challenging otologic cases in children, potentially reducing medical errors and improving patient safety. Copyright © 2015 Elsevier Ireland Ltd. All rights

  1. Stability of the matrix model in operator interpretation

    Directory of Open Access Journals (Sweden)

    Katsuta Sakai

    2017-12-01

    Full Text Available The IIB matrix model is one of the candidates for a nonperturbative formulation of string theory, and it is believed that the model contains gravitational degrees of freedom in some manner. In some preceding works, it was proposed that the matrix model describes curved space when the matrices represent differential operators that are defined on a principal bundle. In this paper, we study the dynamics of the model in this interpretation, and point out the necessity of the principal bundle from the viewpoint of stability and diffeomorphism invariance. We also compute the one-loop correction, which yields a mass term for each field due to the principal bundle. We find that the stability is not violated.

  2. Objective ARX Model Order Selection for Multi-Channel Human Operator Identification

    NARCIS (Netherlands)

    Roggenkämper, N; Pool, D.M.; Drop, F.M.; van Paassen, M.M.; Mulder, M.

    2016-01-01

    In manual control, the human operator primarily responds to visual inputs but may elect to make use of other available feedback paths such as physical motion, adopting a multi-channel control strategy. Human operator identification procedures generally require a priori selection of the model

  3. On the usability of quantitative modelling in operations strategy decision making

    NARCIS (Netherlands)

    Akkermans, H.A.; Bertrand, J.W.M.

    1997-01-01

    Quantitative modelling seems admirably suited to help managers in their strategic decision making on operations management issues, but in practice models are rarely used for this purpose. Investigates the reasons why, based on a detailed cross-case analysis of six cases of modelling-supported

  4. Operator-based linearization for efficient modeling of geothermal processes

    OpenAIRE

    Khait, M.; Voskov, D.V.

    2018-01-01

    Numerical simulation is one of the most important tools required for financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical issue. Geothermal reservoir modeling requires the solution of governing equations describing the conservation of mass and energy. The robust, accurate and computationally efficient implementation of ...

  5. Modeling the design and operations of the federal radioactive waste management system

    International Nuclear Information System (INIS)

    Joy, D.S.; Nehls, J.W. Jr.; Harrison, I.G.; Miller, C.; Vogel, L.W.; Martin, J.D.; Capone, R.L.; Dougherty, L.

    1989-04-01

    Many configuration, transportation and operating alternatives are available to the Office of Civilian Radioactive Waste Management (OCRWM) in the design and operation of the Federal Radioactive Waste Management System (FWMS). Each alternative has different potential impacts on system throughput, efficiency and the thermal and radiological characteristics of the waste to be shipped, stored and emplaced. A need therefore exists for a quantitative means of assessing the ramifications of alternative system designs and operating strategies. We developed the Systems integration Operations/Logistics Model (SOLMOD). That model is used to replicate a user-specified system configuration and simulate the operation of that system -- from waste pickup at reactors to emplacement in a repository -- under a variety of operating strategies. The model can thus be used to assess system performance with or without Monitored Retrievable Storage (MRS), with or without consolidation at the repository, with varying shipping cask availability and so forth. This simulation capability is also intended to provide a tool for examining the impact of facility and equipment capacity and redundancy on overall waste processing capacity and system performance. SOLMOD can measure the impacts on system performance of certain operating contingencies. It can be used to test effects on transportation and waste pickup schedules resulting from a shut-down of one or more hot cells in the waste handling building at the repository or MRS. Simulation can also be used to study operating procedures and rules such as fuel pickup schedules, general freight vs. dedicated freight. 3 refs., 2 figs., 2 tabs
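
    A toy discrete-event simulation in the same spirit, showing how reduced hot-cell capacity stretches total waste-processing time; the event loop, parameters, and time units are illustrative and not taken from SOLMOD:

```python
import heapq

def simulate(n_casks, pickup_interval, handling_time, n_hot_cells):
    """Tiny discrete-event sketch: casks arrive from reactors and queue
    for a limited number of hot cells at the repository/MRS."""
    events = [(i * pickup_interval, "arrival") for i in range(n_casks)]
    heapq.heapify(events)
    free_at = [0.0] * n_hot_cells    # time at which each hot cell frees up
    finish = 0.0
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            # Assign the cask to the hot cell that frees up earliest.
            cell = min(range(n_hot_cells), key=lambda c: free_at[c])
            start = max(t, free_at[cell])
            free_at[cell] = start + handling_time
            finish = max(finish, free_at[cell])
    return finish    # time the last cask is emplaced

# Shutting down one of three hot cells stretches total processing time.
print(simulate(20, 1.0, 2.5, 3), simulate(20, 1.0, 2.5, 2))
```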

  6. Investigation of factors affecting the intelligence quotient (IQ) of ...

    African Journals Online (AJOL)

    ... and social risk factors play a role in the development of intellectual disability. ... Results: The optimal model was obtained with the Lasso approach and ... of attention-deficit hyperactivity disorder, family income, and number of siblings, ...

  7. Task Analytic Models to Guide Analysis and Design: Use of the Operator Function Model to Represent Pilot-Autoflight System Mode Problems

    Science.gov (United States)

    Degani, Asaf; Mitchell, Christine M.; Chappell, Alan R.; Shafto, Mike (Technical Monitor)

    1995-01-01

    Task-analytic models structure essential information about operator interaction with complex systems, in this case pilot interaction with the autoflight system. Such models serve two purposes: (1) they allow researchers and practitioners to understand pilots' actions; and (2) they provide a compact, computational representation needed to design 'intelligent' aids, e.g., displays, assistants, and training systems. This paper demonstrates the use of the operator function model to trace the process of mode engagements while a pilot is controlling an aircraft via the autoflight system. The operator function model is a normative and nondeterministic model of how a well-trained, well-motivated operator manages multiple concurrent activities for effective real-time control. For each function, the model links the pilot's actions with the required information. Using the operator function model, this paper describes several mode engagement scenarios. These scenarios were observed and documented during a field study that focused on mode engagements and mode transitions during normal line operations. Data including time, ATC clearances, altitude, system states, active modes and sub-modes, and mode engagements were recorded during sixty-six flights. Using these data, seven prototypical mode engagement scenarios were extracted. One scenario details the decision of the crew to disengage a fully automatic mode in favor of a semi-automatic mode, and the consequences of this action. Another describes a mode error involving updating aircraft speed following the engagement of a speed submode. Other scenarios detail mode confusion at various phases of the flight. This analysis uses the operator function model to identify three aspects of mode engagement: (1) the progress of pilot-aircraft-autoflight system interaction; (2) control/display information required to perform mode management activities; and (3) the potential cause(s) of mode confusion. The goal of this paper is twofold

  8. Towards assimilation of InSAR data in operational weather models

    Science.gov (United States)

    Mulder, Gert; van Leijen, Freek; Barkmeijer, Jan; de Haan, Siebren; Hanssen, Ramon

    2017-04-01

    InSAR signal delays due to the varying atmospheric refractivity are a potential data source to improve weather models [1]. Especially with the launch of the new Sentinel-1 satellites, which increases data coverage, latency and accessibility, it may become possible to operationalize the assimilation of differential integrated refractivity (DIR) values in numerical weather models. Although studies exist on the comparison between InSAR data and weather models [2], the impact of assimilating DIR values in an operational weather model has never been assessed. In this study we present different ways to assimilate DIR values in an operational weather model and show the first forecast results. There are different possibilities to assimilate InSAR data in a weather model. For example, (i) absolute DIR values can be derived using additional GNSS zenith or slant delay values, (ii) DIR values can be converted to water vapor pressures, or (iii) water vapor pressures can be derived for different heights by combining GNSS and InSAR data. However, an increasing number of assumptions in these processing steps will increase the uncertainty in the final results. Therefore, we chose to insert the InSAR-derived DIR values after minimal additional processing. In this study we use the HARMONIE model [3], which is a spectral, non-hydrostatic model with a resolution of about 2.5 km. Currently, this is the operational model in 11 European countries and is based on the AROME model [4]. To assimilate the DIR values in the weather model we use a simple adjustment of the weather parameters over the full slant column to match the DIR values. This is a first step towards a more sophisticated approach based on the 3D-VAR or 4D-VAR schemes [5]. Both assimilation schemes can correct for different weather parameters simultaneously, and 4D-VAR allows us to assimilate DIR values at the exact moment of satellite overpass instead of at the start of the forecast window. The approach will be demonstrated

  9. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    Full Text Available In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed based on statistical methods applied to their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced with the confidence level, and the calculation model of NEPG's credible capacity is proposed. Based on this, taking the minimum production costs or the best energy-saving and emission-reduction effect as the optimization objective, a power system operation model with large-scale integration of NEPG is established considering the power balance, the electricity balance and the peak balance. Besides, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reserve, the emergency reserve, the water abandonment and the transmitting capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out based on the actual Northwest power grid of China, which resolves the new energy power accommodation considering different system operating conditions. The simulation results verify the validity of the proposed power system operation model in the accommodation analysis for a power system penetrated with large-scale NEPG.

  10. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-09-14

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic questions generation problem as a LASSO optimization, and also propose a large scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.

  11. Robustness Analysis of Visual QA Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong; Alfadly, Modar; Ghanem, Bernard

    2017-01-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and then outputs the ranked basic questions, with similarity scores, of the main given question. The second module takes the main question, image and these basic questions as input and then outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic questions generation problem as a LASSO optimization, and also propose a large scale Basic Question Dataset (BQD) and Rscore (a novel robustness measure) for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, so as to help the community build more robust and accurate VQA models.
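
    The LASSO formulation named in the abstract can be sketched as a sparse reconstruction of the main question's embedding from a dictionary of basic-question embeddings, with the nonzero coefficients ranking the most related basic questions; the embeddings and the alpha value below are synthetic:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
basic_q = rng.normal(size=(50, 300))   # 50 basic-question embeddings
# Synthetic main question: built from the first three basic questions.
main_q = basic_q[:3].sum(axis=0) + 0.01 * rng.normal(size=300)

# Sparse reconstruction: min ||main - B^T w||^2 + alpha * ||w||_1
model = Lasso(alpha=0.05, max_iter=10000)
model.fit(basic_q.T, main_q)

# The largest coefficients rank the most related basic questions.
ranked = np.argsort(-np.abs(model.coef_))[:5]
print(ranked, np.round(model.coef_[ranked], 3))
```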

  12. Testing and Implementation of the Navy's Operational Circulation Model for the Mediterranean Sea

    Science.gov (United States)

    Farrar, P. D.; Mask, A. C.

    2012-04-01

    The US Naval Oceanographic Office (NAVOCEANO) has the responsibility for running ocean models in support of Navy operations. NAVOCEANO delivers Navy-relevant global, regional, and coastal ocean forecast products on a 24 hour/7 day a week schedule. In 2011, NAVOCEANO implemented an operational version of the RNCOM (Regional Navy Coastal Ocean Model) for the Mediterranean Sea (MedSea), replacing an older variation of the Princeton Ocean Model originally set up for this area in the mid-1990s. RNCOM is a gridded model that assimilates both satellite data and in situ profile data in near real time. This 3 km MedSea RNCOM is nested within a lower-resolution global NCOM in the Atlantic at 12.5 degrees West longitude. Before being accepted as a source of operational products, a Navy ocean model must pass a series of validation tests; once in service, its skill is monitored by software and regional specialists. This presentation will provide a brief summary of the initial evaluation results. Because of the oceanographic peculiarities of this basin, the MedSea implementation posed a set of new problems for an RNCOM operation. One problem was that the present Navy satellite altimetry model assimilation techniques do not improve Mediterranean NCOM forecasts, so they have been turned off, pending improvements. Another problem was that since most in-situ observations were profiling floats with short five-day profiling intervals, there was a problem with temporal aliasing when comparing these observations to the NCOM predictions. Because of the time and spatial correlations in the MedSea and in the model, the observation/model comparisons would give an unrealistically optimistic estimate of model accuracy for the Mediterranean's temperature/salinity structure. Careful pre-selection of profiles for comparison during the evaluation stage, based on spatial distribution and novelty, was used to minimize this effect. NAVOCEANO's operational customers are interested primarily in

  13. Real-time x-ray fluoroscopy-based catheter detection and tracking for cardiac electrophysiology interventions

    International Nuclear Information System (INIS)

    Ma Yingliang; Housden, R. James; Razavi, Reza; Rhode, Kawal S.; Gogin, Nicolas; Cathier, Pascal; Gijsbers, Geert; Cooklin, Michael; O'Neill, Mark; Gill, Jaswinder; Rinaldi, C. Aldo

    2013-01-01

    Purpose: X-ray fluoroscopically guided cardiac electrophysiology (EP) procedures are commonly carried out to treat patients with arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of a three-dimensional (3D) roadmap derived from preprocedural volumetric images can be used to add anatomical information. It is useful to know the position of the catheter electrodes relative to the cardiac anatomy, for example, to record ablation therapy locations during atrial fibrillation therapy. Also, the electrode positions of the coronary sinus (CS) catheter or lasso catheter can be used for road map motion correction.Methods: In this paper, the authors present a novel unified computational framework for image-based catheter detection and tracking without any user interaction. The proposed framework includes fast blob detection, shape-constrained searching and model-based detection. In addition, catheter tracking methods were designed based on the customized catheter models input from the detection method. Three real-time detection and tracking methods are derived from the computational framework to detect or track the three most common types of catheters in EP procedures: the ablation catheter, the CS catheter, and the lasso catheter. Since the proposed methods use the same blob detection method to extract key information from x-ray images, the ablation, CS, and lasso catheters can be detected and tracked simultaneously in real-time.Results: The catheter detection methods were tested on 105 different clinical fluoroscopy sequences taken from 31 clinical procedures. Two-dimensional (2D) detection errors of 0.50 ± 0.29, 0.92 ± 0.61, and 0.63 ± 0.45 mm as well as success rates of 99.4%, 97.2%, and 88.9% were achieved for the CS catheter, ablation catheter, and lasso catheter, respectively. With the tracking method, accuracies were increased to 0.45 ± 0.28, 0.64 ± 0.37, and 0.53 ± 0.38 mm and success rates increased to 100%, 99.2%, and 96
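
    The blob-detection step such a framework builds on can be illustrated with an off-the-shelf detector finding dark, electrode-like blobs in a synthetic frame; this uses OpenCV's generic SimpleBlobDetector as a stand-in, not the authors' customized detection method:

```python
import cv2
import numpy as np

# Synthetic fluoroscopy-like frame: dark "electrodes" on a brighter field.
frame = np.full((256, 256), 200, dtype=np.uint8)
for x in (60, 90, 120, 150):
    cv2.circle(frame, (x, 128), 5, 0, -1)

params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 20
params.maxArea = 400
params.filterByCircularity = True
params.minCircularity = 0.7

detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(frame)     # electrode-like blob candidates
print([kp.pt for kp in keypoints])
```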

  14. Development of synchronous generator saturation model from steady-state operating data

    Energy Technology Data Exchange (ETDEWEB)

    Jadric, Martin; Despalatovic, Marin; Terzic, Bozo [FESB University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, Split (Croatia)

    2010-11-15

    A new method to estimate and model the saturated synchronous reactances of hydroturbine generators from operating data is presented. For the estimation process, measurements of only the generator steady-state variables are required. First, using a specific procedure, the field-to-armature turns ratio is estimated from measured steady-state variables at constant power generation and various excitation conditions. Subsequently, for each set of steady-state operating data, the saturated synchronous reactances are identified. Fitting surfaces, defined as polynomial functions in two variables, are then used to model these saturated reactances. It is shown that simpler polynomial functions may be used to model saturation at steady state than under dynamic conditions. The developed steady-state model is validated with measurements performed on a 34 MVA hydroturbine generator. (author)
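
    The surface-fitting step can be sketched as a two-variable polynomial least-squares fit; the choice of operating variables and the synthetic data are illustrative assumptions:

```python
import numpy as np

def fit_poly_surface(x, y, z, deg=2):
    """Least-squares fit of z = f(x, y) as a bivariate polynomial, e.g.
    a saturated reactance as a function of two operating variables."""
    cols = [x**i * y**j for i in range(deg + 1)
                        for j in range(deg + 1 - i)]
    a = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(a, z, rcond=None)
    return coef

# Hypothetical operating points (e.g. active/reactive power, p.u.) and
# a synthetic "measured" saturated reactance for the fit to recover.
rng = np.random.default_rng(1)
p = rng.uniform(0.2, 1.0, 100)
q = rng.uniform(-0.3, 0.5, 100)
xd_sat = 1.8 - 0.4 * p - 0.2 * q + 0.1 * p * q
print(np.round(fit_poly_surface(p, q, xd_sat), 3))
```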

  15. Designing visual displays and system models for safe reactor operations

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S.A.

    1995-01-01

    The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The contents of this paper focus on the studies and how they are applicable to the safety of operating reactors.

  16. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...
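
    As one concrete member of the aggregation-operator family the book covers, the ordered weighted averaging (OWA) operator interpolates between minimum, maximum, and the arithmetic mean depending on its weight vector; a minimal sketch:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: weights apply to values sorted in
    descending order, not to fixed sources."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert np.isclose(w.sum(), 1.0), "weights must sum to one"
    return float(v @ w)

scores = [0.9, 0.4, 0.7, 0.6]
print(owa(scores, [1, 0, 0, 0]))              # acts as max
print(owa(scores, [0, 0, 0, 1]))              # acts as min
print(owa(scores, [0.25, 0.25, 0.25, 0.25]))  # arithmetic mean
```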

  17. Teaching Model Innovation of Production Operation Management Engaging in ERP Sandbox Simulation

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2014-05-01

    Full Text Available In light of the current status of the production operation management course, this article proposes innovation and reform of the teaching model from three aspects: reform of the curriculum syllabus, simulation of a typical teaching organization model, and application of the enterprise resource planning (ERP) sandbox in course practice. An exhaustive implementation procedure is given, along with a further discussion of the outcomes. The results indicate that the innovation of the teaching model and case-studying practice in production operation management based on ERP sandbox simulation is feasible.

  18. System Dynamics Modeling for Emergency Operating System Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Eng, Ang Wei; Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-10-15

    The purpose of this paper is to present a causal model which explains the human error cause-effect relationships of the emergency operating system (EOS) by using a system dynamics (SD) approach. The causal model is further quantified by analyzing nuclear power plant incident/accident data in Korea for simulation modeling. The emergency operating system (EOS) is generally defined as a system which consists of personnel, human-machine interfaces and procedures, and of how these components interact and coordinate to respond to an incident or accident. Understanding the behavior of the EOS, especially personnel behavior and the factors influencing it during an accident, will contribute to human reliability evaluation. Human reliability analysis (HRA) is a method which assesses how human decisions and actions affect system risk and is further used to reduce the probability of human errors. Many HRA methods use performance influencing factors (PIFs) to identify the causes of human errors. However, these methods have several limitations: in HRA, PIFs are assumed to be independent of each other, and the relationships between them have not been studied. Through SD simulation, users are able to simulate various situations in which a nuclear power plant responds to an emergency, from human and organizational aspects. The simulation also provides users a comprehensive view on how to improve safety in plants. This paper presents a causal model that explains the cause-effect relationships of EOS human errors. Through SD simulation, users are able to identify the main contributors to human error easily, and can also use SD simulation to predict when and how a human error occurs over time. In future work, the SD model can be expanded with more low-level factors. The relationships among low-level factors can be investigated by using correlation methods and further included in the model. This enables users to study more detailed human error cause-effect relationships and the behavior of the EOS. Another improvement can be made on the EOS factors

  19. System Dynamics Modeling for Emergency Operating System Resilience

    International Nuclear Information System (INIS)

    Eng, Ang Wei; Kim, Jong Hyun

    2014-01-01

    The purpose of this paper is to present a causal model which explains the human error cause-effect relationships of the emergency operating system (EOS) by using a system dynamics (SD) approach. The causal model is further quantified by analyzing nuclear power plant incident/accident data in Korea for simulation modeling. The emergency operating system (EOS) is generally defined as a system which consists of personnel, human-machine interfaces and procedures, and of how these components interact and coordinate to respond to an incident or accident. Understanding the behavior of the EOS, especially personnel behavior and the factors influencing it during an accident, will contribute to human reliability evaluation. Human reliability analysis (HRA) is a method which assesses how human decisions and actions affect system risk and is further used to reduce the probability of human errors. Many HRA methods use performance influencing factors (PIFs) to identify the causes of human errors. However, these methods have several limitations: in HRA, PIFs are assumed to be independent of each other, and the relationships between them have not been studied. Through SD simulation, users are able to simulate various situations in which a nuclear power plant responds to an emergency, from human and organizational aspects. The simulation also provides users a comprehensive view on how to improve safety in plants. This paper presents a causal model that explains the cause-effect relationships of EOS human errors. Through SD simulation, users are able to identify the main contributors to human error easily, and can also use SD simulation to predict when and how a human error occurs over time. In future work, the SD model can be expanded with more low-level factors. The relationships among low-level factors can be investigated by using correlation methods and further included in the model. This enables users to study more detailed human error cause-effect relationships and the behavior of the EOS. Another improvement can be made on the EOS factors

  20. One-carbon metabolism, cognitive impairment and CSF measures of Alzheimer pathology: homocysteine and beyond.

    Science.gov (United States)

    Dayon, Loïc; Guiraud, Seu Ping; Corthésy, John; Da Silva, Laeticia; Migliavacca, Eugenia; Tautvydaitė, Domilė; Oikonomidi, Aikaterini; Moullet, Barbara; Henry, Hugues; Métairon, Sylviane; Marquis, Julien; Descombes, Patrick; Collino, Sebastiano; Martin, François-Pierre J; Montoliu, Ivan; Kussmann, Martin; Wojcik, Jérôme; Bowman, Gene L; Popp, Julius

    2017-06-17

    Hyperhomocysteinemia is a risk factor for cognitive decline and dementia, including Alzheimer disease (AD). Homocysteine (Hcy) is a sulfur-containing amino acid and metabolite of the methionine pathway. The interrelated methionine, purine, and thymidylate cycles constitute the one-carbon metabolism that plays a critical role in the synthesis of DNA, neurotransmitters, phospholipids, and myelin. In this study, we tested the hypothesis that one-carbon metabolites beyond Hcy are relevant to cognitive function and cerebrospinal fluid (CSF) measures of AD pathology in older adults. Cross-sectional analysis was performed on matched CSF and plasma collected from 120 older community-dwelling adults with (n = 72) or without (n = 48) cognitive impairment. Liquid chromatography-mass spectrometry was performed to quantify one-carbon metabolites and their cofactors. Least absolute shrinkage and selection operator (LASSO) regression was initially applied to clinical and biomarker measures that generate the highest diagnostic accuracy of a priori-defined cognitive impairment (Clinical Dementia Rating-based) and AD pathology (i.e., CSF tau phosphorylated at threonine 181 [p-tau181]/β-Amyloid 1-42 peptide chain [Aβ1-42] >0.0779) to establish a reference benchmark. Two other LASSO-determined models were generated that included the one-carbon metabolites in CSF and then plasma. Correlations of CSF and plasma one-carbon metabolites with CSF amyloid and tau were explored. LASSO-determined models were stratified by apolipoprotein E (APOE) ε4 carrier status. The diagnostic accuracy of cognitive impairment for the reference model was 80.8% and included age, years of education, Aβ1-42, tau, and p-tau181. A model including CSF cystathionine, methionine, S-adenosyl-L-homocysteine (SAH), S-adenosylmethionine (SAM), serine, cysteine, and 5-methyltetrahydrofolate (5-MTHF) improved the diagnostic accuracy to 87.4%. A second model derived from plasma included cystathionine
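
    The LASSO-style variable selection can be sketched with an L1-penalized logistic regression on synthetic stand-in data; the feature count, labels, and regularization strength are illustrative, not the study's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: candidate biomarker features and a binary
# cognitive-impairment label (sizes echo the study, values do not).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=120) > 0).astype(int)

# L1-penalized (LASSO-style) logistic regression performs variable
# selection: uninformative features get exactly-zero coefficients.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X, y)
print("selected features:", np.flatnonzero(clf.coef_[0]))
print("apparent AUC:", round(roc_auc_score(y, clf.decision_function(X)), 3))
```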

  1. Object-oriented process dose modeling for glovebox operations

    International Nuclear Information System (INIS)

    Boerigter, S.T.; Fasel, J.H.; Kornreich, D.E.

    1999-01-01

    The Plutonium Facility at Los Alamos National Laboratory supports several defense and nondefense-related missions for the country by performing fabrication, surveillance, and research and development for materials and components that contain plutonium. Most operations occur in rooms with one or more arrays of gloveboxes connected to each other via trolley gloveboxes. Minimizing the effective dose equivalent (EDE) is a growing concern as a result of steadily declining allowable dose limits and a growing general awareness of safety in the workplace. In general, the authors distinguish three components of a worker's total EDE: the primary EDE, the secondary EDE, and the background EDE. A particular background source of interest is the nuclear materials vault. The distinction between sources inside and outside of a particular room is arbitrary, with the underlying assumption that building walls and floors provide significant shielding to justify including sources in other rooms in the background category. Los Alamos has developed the Process Modeling System (ProMoS) primarily for performing process analyses of nuclear operations. ProMoS is an object-oriented, discrete-event simulation package that has been used to analyze operations at Los Alamos and proposed facilities such as the new fabrication facilities for the Complex-21 effort. In the past, crude estimates of the process dose (the EDE received when a particular process occurred), room dose (the EDE received when a particular process occurred in a given room), and facility dose (the EDE received when a particular process occurred in the facility) were used to obtain an integrated EDE for a given process. Modifications to the ProMoS package were made to utilize secondary dose information in the dose modeling and thereby enhance the process modeling efforts

  2. On the Use of Variability Operations in the V-Modell XT Software Process Line

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; Méndez Fernández, Daniel; Ternité, Thomas

    2016-01-01

    Software process lines provide a systematic approach to develop and manage software processes. A process line defines a reference process containing general process assets, whereas a well-defined customization approach allows process engineers to create new process variants, e.g., by extending or modifying ... In this article, we present a study on the feasibility of variability operations to support the development of software process lines in the context of the V-Modell XT. We analyze which variability operations are defined and practically used. We provide an initial catalog of variability operations ... as an improvement proposal for other process models. Our findings show that 69 variability operation types are defined across several metamodel versions, of which, however, 25 remain unused. The found variability operations allow for systematically modifying the content of process model elements and the process ...

  3. Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models

    Science.gov (United States)

    Ruiz-Torres, Alex J.; McCleskey, Carey

    2000-01-01

    The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages to current approaches to vehicle architecture assessment including easier validation and allowing vehicle designers to understand the cost and cycle time drivers.

  4. Using social network analysis and agent-based modelling to explore information flow using common operational pictures for maritime search and rescue operations.

    Science.gov (United States)

    Baber, C; Stanton, N A; Atkinson, J; McMaster, R; Houghton, R J

    2013-01-01

    The concept of common operational pictures (COPs) is explored through the application of social network analysis (SNA) and agent-based modelling to a generic search and rescue (SAR) scenario. Comparing the command structure that might arise from standard operating procedures with the sort of structure that might arise from examining information-in-common, using SNA, shows how one structure could be more amenable to 'command' with the other being more amenable to 'control' - which is potentially more suited to complex multi-agency operations. An agent-based model is developed to examine the impact of information sharing with different forms of COPs. It is shown that networks using common relevant operational pictures (which provide subsets of relevant information to groups of agents based on shared function) could result in better sharing of information and a more resilient structure than networks that use a COP. SNA and agent-based modelling are used to compare different forms of COPs for maritime SAR operations. Different forms of COP change the communications structures in the socio-technical systems in which they operate, which has implications for future design and development of a COP.
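
    A minimal sketch of the SNA comparison, contrasting a hub-and-spoke 'command' structure with a meshed 'control' structure on density and degree centrality; the networks below are hypothetical and not the paper's SAR data:

```python
import networkx as nx

# Hypothetical SAR nets: hub-and-spoke "command" vs meshed "control".
command = nx.star_graph(5)                    # one coordinator, 5 units
control = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5),
                    (5, 0), (0, 2), (1, 4)])  # redundant shared links

for name, g in [("command", command), ("control", control)]:
    print(name,
          "density=%.2f" % nx.density(g),
          "max degree centrality=%.2f" % max(nx.degree_centrality(g).values()))
```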

  5. Communicating Sustainability: An Operational Model for Evaluating Corporate Websites

    Directory of Open Access Journals (Sweden)

    Alfonso Siano

    2016-09-01

    Full Text Available The interest in corporate sustainability has increased rapidly in recent years and has encouraged organizations to adopt appropriate digital communication strategies, in which the corporate website plays a key role. Despite this growing attention in both the academic and business communities, models for the analysis and evaluation of online sustainability communication have not been developed to date. This paper aims to develop an operational model to identify and assess the requirements of sustainability communication in corporate websites. It has been developed from a literature review on corporate sustainability and digital communication and the analysis of the websites of the organizations included in the “Global CSR RepTrak 2015” by the Reputation Institute. The model identifies the core dimensions of online sustainability communication (orientation, structure, ergonomics, content—OSEC); sub-dimensions, such as stakeholder engagement and governance tools; communication principles; and measurable items (e.g., presence of the materiality matrix, interactive graphs). A pilot study on the websites of the energy and utilities companies included in the Dow Jones Sustainability World Index 2015 confirms the applicability of the OSEC framework. Thus, the model can provide managers and digital communication consultants with an operational tool that is useful for developing an industry ranking and assessing best practices. The model can also help practitioners to identify corrective actions in the critical areas of digital sustainability communication and avoid greenwashing.

  6. Testing one model of family role in the development of formal operations

    Directory of Open Access Journals (Sweden)

    Stepanović Ivana

    2008-01-01

    Full Text Available Contemporary authors emphasise the importance of viewing the family as a specific educational context and of studying its role in cognitive development. In this paper, we tested a model that postulates the way in which different forms of parental mediation and various means of the family cultural-supportive system affect the development of formal operations. We assumed that the education of parents and the financial status of the family form a wider context that influences the general dimensions of family interaction (emotional exchange and democratism), but also the cultural-pedagogical status of the family, and that their connection with formal operations is mediated by the above-mentioned variables. We expected the education of parents and the general dimensions of family interaction to influence the parental mediation characteristic of the development of formal operations, operationalised by the CSS scale, and to mediate, via this variable, the development of that form of thinking. The direct link with formal operations is postulated in the case of the variables of cultural-pedagogical status and the CSS scale. The sample consists of 305 pupils aged 15 to 19. Structural equation modeling was used for testing the postulated model. The results show that there is a direct influence of cultural-pedagogical status and the CSS scale on formal operations, as well as of mother's education. Some relations between other predictors were confirmed, and some were not, which suggests that the proposed explanatory model must be revised to some degree.

  7. Verification of some numerical models for operationally predicting mesoscale winds aloft

    International Nuclear Information System (INIS)

    Cornett, J.S.; Randerson, D.

    1977-01-01

    Four numerical models are described for predicting mesoscale winds aloft for a 6 h period. These models are all tested statistically against persistence as the control forecast and against predictions made by operational forecasters. Mesoscale winds-aloft data were used to initialize the models and to verify the predictions on an hourly basis. The model yielding the smallest root-mean-square vector errors (RMSVEs) was the one based on the most physics, which included advection, ageostrophic acceleration, vertical mixing and friction. Horizontal advection was found to be the most important term in reducing the RMSVEs, followed by ageostrophic acceleration, vertical advection, surface friction and vertical mixing. From a comparison of the mean absolute errors based on up to 72 independent wind-profile predictions made by operational forecasters, by the most complete model, and by persistence, we conclude that the model is the best wind predictor in the free air. In the boundary layer, the results tend to favor the forecaster for direction predictions. The speed predictions showed no overall superiority in any of these three models

  8. Virtual age model for equipment aging plant based on operation environment and service state

    International Nuclear Information System (INIS)

    Zhang Liming; Cai Qi; Zhao Xinwen; Chen Ling

    2010-01-01

    An accelerated life model based on operation environment and service state was established, taking virtual age as the equipment aging index. The effects of different operation environments and service states on reliability and virtual age under continuous operation conditions and cyclic operation conditions were analyzed, and the sensitivities of virtual age to operation environments and service states were studied. The results of the example application show that the effects on NPP equipment lifetime and on the key parameters related to reliability can be quantified by this model, and that the result is in accordance with reality. (authors)
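
    A virtual-age calculation of this kind can be sketched by accumulating calendar time scaled by an environment-dependent acceleration factor; the factors and operating intervals below are hypothetical:

```python
def virtual_age(history, acceleration):
    """Accumulate virtual age over operating intervals.

    history      : list of (environment, hours) operating intervals
    acceleration : environment -> acceleration factor (1.0 = nominal
                   aging rate); all values here are hypothetical
    """
    v = 0.0
    for env, hours in history:
        v += acceleration[env] * hours   # harsher conditions age faster
    return v

factors = {"nominal": 1.0, "high_temp": 1.8, "standby": 0.3}
intervals = [("nominal", 4000), ("high_temp", 1200), ("standby", 2800)]
# 8000 calendar hours map to a different number of equivalent hours.
print(virtual_age(intervals, factors), "equivalent hours")
```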

  9. Operational Modelling of High Temperature Electrolysis (HTE)

    International Nuclear Information System (INIS)

    Patrick Lovera; Franck Blein; Julien Vulliet

    2006-01-01

    Solid Oxide Fuel Cells (SOFC) and High Temperature Electrolysis (HTE) rely on two opposite processes. The basic equations (the Nernst equation, corrected by an over-voltage term) are thus very similar; only a few signs differ. An operational model, based on measurable quantities, was developed for the HTE process and adapted to SOFCs. The model is analytical, which requires some complementary assumptions (proportionality of over-voltages to the current density, linearization of the logarithmic term in the Nernst equation). It allows determining hydrogen production by HTE using a limited number of parameters. At a given temperature, only one macroscopic parameter, related to over-voltages, is needed to adjust the model to the experimental results (SOFC) over a wide range of hydrogen flow-rates. For a given cell, this parameter follows an Arrhenius law with satisfactory precision. The predictions for the HTE process are compared to the available experimental results. (authors)
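
    A sketch of such an operational model under the stated assumptions (a Nernst potential plus an over-voltage linear in current density, with the coefficient following an Arrhenius law); all parameter values are illustrative, not the paper's fitted ones:

```python
import numpy as np

R, F = 8.314, 96485.0            # J/(mol K), C/mol

def cell_voltage(j, T, p_h2=0.5, p_o2=0.21, p_h2o=0.5,
                 e0=0.94, k0=4e-10, ea=1.1e5):
    """HTE operating voltage: Nernst potential plus an over-voltage
    taken linear in current density j (A/m^2) at temperature T (K).
    e0, k0 and ea are illustrative fitting values; the over-voltage
    coefficient k(T) follows an Arrhenius law."""
    e_nernst = e0 + (R * T) / (2 * F) * np.log(p_h2 * np.sqrt(p_o2) / p_h2o)
    k = k0 * np.exp(ea / (R * T))    # ohm m^2, larger at low temperature
    return e_nernst + k * j

for T in (1023.0, 1123.0):       # higher temperature, lower over-voltage
    print(T, round(cell_voltage(j=3000.0, T=T), 3))
```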

  10. A Novel Stress-Diathesis Model to Predict Risk of Post-operative Delirium: Implications for Intra-operative Management

    Directory of Open Access Journals (Sweden)

    Renée El-Gabalawy

    2017-08-01

    Full Text Available Introduction: Risk assessment for post-operative delirium (POD) is poorly developed. Improved metrics could greatly facilitate peri-operative care, as costs associated with POD are staggering. In this preliminary study, we develop a novel stress-diathesis model based on comprehensive pre-operative psychiatric and neuropsychological testing, a blood oxygenation level-dependent (BOLD) magnetic resonance imaging (MRI) carbon dioxide (CO2) stress test, and high-fidelity measures of intra-operative parameters that may interact to facilitate POD. Methods: The study was approved by the ethics board at the University of Manitoba and registered at clinicaltrials.gov as NCT02126215. Twelve patients were studied. Pre-operative psychiatric symptom measures and neuropsychological testing preceded MRI featuring a BOLD MRI CO2 stress test whereby BOLD scans were conducted while exposing participants to a rigorously controlled CO2 stimulus. During surgery the patient's hemodynamics and end-tidal gases were recorded at 0.5 Hz. Post-operatively, the presence and severity of POD were comprehensively assessed using the Confusion Assessment Method-Severity (CAM-S) scoring instrument from day 0 (surgery) through post-operative day 5, and patients were followed up at least 1 month post-operatively. Results: Six of 12 patients had no evidence of POD (non-POD). Three patients had POD and 3 had clinically significant confusional states (referred to as subthreshold POD, ST-POD; score ≥ 5/19 on the CAM-S). Average severity for delirium was 1.3 in the non-POD group, 3.2 in ST-POD, and 6.1 in POD (F-statistic = 15.4, p < 0.001). Depressive symptoms, and cognitive measures of semantic fluency and executive functioning/processing speed, were significantly associated with POD. Second-level analysis revealed an increased inverse BOLD responsiveness to CO2 pre-operatively in ST-POD and a marked increase in the POD group when compared to the non-POD group. An association was also noted for

  11. An expert system for modelling operators' behaviour in control of a steam generator

    International Nuclear Information System (INIS)

    Cacciabue, P.C.; Guida, G.; Pace, A.

    1987-01-01

    Modelling the mental processes of an operator in charge of controlling a complex industrial plant is a challenging issue currently tackled by several research projects, both in the area of artificial intelligence and in cognitive psychology. Progress in this field could greatly contribute not only to a deeper understanding of operator behaviour, but also to the design of intelligent operator support systems. In this paper the authors report the preliminary results of an experimental research effort devoted to modelling the behaviour of a plant operator by means of knowledge-based techniques. The main standpoint of their work is that the cognitive processes underlying an operator's behaviour can be of three main different types, according to the actual situation where the operator works. In normal situations, or during training sessions, the operator is free to develop deep reasoning, using knowledge about plant structure and function and relying on the first physical principles that govern its behaviour

  12. Modelling and operation strategies of DLR's large scale thermocline test facility (TESIS)

    Science.gov (United States)

    Odenthal, Christian; Breidenbach, Nils; Bauer, Thomas

    2017-06-01

    In this work an overview of the TESIS:store thermocline test facility and its current construction status will be given. Based on this, the TESIS:store facility using sensible solid filler material is modelled with a fully transient model implemented in MATLAB®. Results in terms of the impact of filler size and operation strategies will be presented. While low porosity and small particle diameters for the filler material are beneficial, the operation strategy is one key element with potential for optimization. It is shown that plant operators have to weigh utilization against exergetic efficiency. Different durations of the charging and discharging periods enable further potential for optimization.

  13. High-gradient operators in the psl(2 vertical stroke 2) Gross-Neveu model

    International Nuclear Information System (INIS)

    Cagnazzo, Alessandra; Schomerus, Volker; Tlapak, Vaclav

    2014-10-01

    It was observed more than 25 years ago that sigma model perturbation theory suffers from strongly RG-relevant high-gradient operators. The phenomenon was first seen in 1-loop calculations for the O(N) vector model and it is known to persist at least to two loops. More recently, Ryu et al. suggested that a certain deformation of the psl(N vertical stroke N) WZNW model at level k=1, or equivalently the psl(N vertical stroke N) Gross-Neveu model, could be free of RG-relevant high-gradient operators, and they tested their suggestion to leading order in perturbation theory. In this note we establish the absence of strongly RG-relevant high-gradient operators in the psl(2 vertical stroke 2) Gross-Neveu model to all loops. In addition, we determine the spectrum for a large subsector of the model at infinite coupling and observe that all scaling weights become half-integer. Evidence for a conjectured relation with the CP(1 vertical stroke 2) sigma model is not found.

  14. Yanqing solar field: Dynamic optical model and operational safety analysis

    International Nuclear Information System (INIS)

    Zhao, Dongming; Wang, Zhifeng; Xu, Ershu; Zhu, Lingzhi; Lei, Dongqiang; Xu, Li; Yuan, Guofeng

    2017-01-01

    Highlights: • A dynamic optical model of the Yanqing solar field was built. • Tracking angle characteristics were studied with different SCA layouts and times. • The average energy flux was simulated across four clear days. • Influences of defocus angles on energy flux were analyzed. - Abstract: A dynamic optical model was established for the Yanqing solar field at the parabolic trough solar thermal power plant, and a simulation was conducted on four separate days of clear weather (March 3rd, June 2nd, September 25th, December 17th). The solar collector assemblies (SCAs) were arranged in North-South and East-West layouts. The model consisted of the following modules: DNI, SCA operational, and SCA optical. The tracking angle characteristics were analyzed and the results showed that the East-West layout of the tracking system was the most viable. The average energy flux was simulated for a given time period and different SCA layouts, yielding an average flux of 6 kW/m2, which was then used as the design and operational standard of the Yanqing parabolic trough plant. The mass flow of the North-South layout was relatively stable. The influences of the defocus angles on both the average energy flux and the circumferential flux distribution were also studied. The results provided a theoretical basis for the following components: solar field design, mass flow control of the heat transfer fluid, design and operation of the tracking system, operational safety of SCAs, and power production prediction in the Yanqing 1 MW parabolic trough plant.

  15. NATO Operational Record: Collective Analytical Exploitation to Inform Operational Analysis Models and Common Operational Planning Factors (Archives operationnelles de l’OTAN: Exploitation analytique collective visant a alimenter les modeles d’analyse operationnelle et les facteurs de planification operationnelle commune)

    Science.gov (United States)

    2014-05-01

    [Fragmentary record; French and English snippets consolidated:] The planning of current and future NATO operations is positively influenced by operational analysis methods, models, and tools that rely on quantitative and qualitative data from operational records of past and current operations.

  16. New spin Calogero-Sutherland models related to BN-type Dunkl operators

    International Nuclear Information System (INIS)

    Finkel, F.; Gomez-Ullate, D.; Gonzalez-Lopez, A.; Rodriguez, M.A.; Zhdanov, R.

    2001-01-01

    We construct several new families of exactly and quasi-exactly solvable BC_N-type Calogero-Sutherland models with internal degrees of freedom. Our approach is based on the introduction of a new family of Dunkl operators of B_N type which, together with the original B_N-type Dunkl operators, are shown to preserve certain polynomial subspaces of finite dimension. We prove that a wide class of quadratic combinations involving these three sets of Dunkl operators always yields a spin Calogero-Sutherland model, which is (quasi-)exactly solvable by construction. We show that all the spin Calogero-Sutherland models obtainable within this framework can be expressed in a unified way in terms of a Weierstrass ζ function with suitable half-periods. This provides a natural spin counterpart of the well-known general formula for a scalar completely integrable potential of BC_N type due to Olshanetsky and Perelomov. As an illustration of our method, we exactly compute several energy levels and their corresponding wavefunctions of an elliptic quasi-exactly solvable potential for two and three particles of spin 1/2.

  17. A predictive model for diagnosing stroke-related apraxia of speech.

    Science.gov (United States)

    Ballard, Kirrie J; Azizi, Lamiae; Duffy, Joseph R; McNeil, Malcolm R; Halaki, Mark; O'Dwyer, Nicholas; Layfield, Claire; Scholl, Dominique I; Vogel, Adam P; Robin, Donald A

    2016-01-29

    Diagnosis of the speech motor planning/programming disorder, apraxia of speech (AOS), has proven challenging, largely due to its common co-occurrence with the language-based impairment of aphasia. Currently, diagnosis is based on perceptually identifying and rating the severity of several speech features. It is not known whether all, or a subset of, the features are required for a positive diagnosis. The purpose of this study was to assess predictor variables for the presence of AOS after left-hemisphere stroke, with the goal of increasing diagnostic objectivity and efficiency. This population-based case-control study involved a sample of 72 cases, using the outcome measure of expert judgment on the presence of AOS and including a large number of independently collected candidate predictors representing behavioral measures of linguistic, cognitive, nonspeech oral motor, and speech motor ability. We constructed a predictive model using multiple imputation to deal with missing data, the Least Absolute Shrinkage and Selection Operator (Lasso) technique for variable selection to define the most relevant predictors, and bootstrapping to check the model stability and quantify the optimism of the developed model. Two measures were sufficient to distinguish between participants with AOS plus aphasia and those with aphasia alone: (1) a measure of speech errors with words of increasing length and (2) a measure of relative vowel duration in three-syllable words with weak-strong stress pattern (e.g., banana, potato). The model has high discriminative ability to distinguish between cases with and without AOS (c-index=0.93) and good agreement between observed and predicted probabilities (calibration slope=0.94). Some caution is warranted, given the relatively small sample specific to left-hemisphere stroke, and the limitations of imputing missing data. These two speech measures are straightforward to collect and analyse, facilitating use in research and clinical settings.
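
    As a rough illustration of this modeling pipeline, the sketch below combines simple mean imputation (a stand-in for the study's multiple imputation), L1-penalized (Lasso-type) logistic regression, and a bootstrap check of how often each predictor is selected. The data and feature count are synthetic placeholders, not the study's clinical measures.

```python
# Hedged sketch of a Lasso-style diagnostic pipeline: imputation,
# L1-penalized logistic regression, and bootstrap stability of the
# selected predictors. Data are synthetic stand-ins, not study data.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 20))            # 72 cases, 20 candidate predictors
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=72) > 0).astype(int)
X[rng.random(X.shape) < 0.05] = np.nan   # inject some missingness

model = make_pipeline(
    SimpleImputer(strategy="mean"),
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)

selected = np.zeros(X.shape[1])
for _ in range(200):                     # bootstrap selection frequency
    Xb, yb = resample(X, y)
    model.fit(Xb, yb)
    coefs = model.named_steps["logisticregression"].coef_.ravel()
    selected += (np.abs(coefs) > 1e-8)

print("selection frequency per predictor:", selected / 200)
```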

  18. Operator-based linearization for efficient modeling of geothermal processes

    NARCIS (Netherlands)

    Khait, M.; Voskov, D.V.

    2018-01-01

    Numerical simulation is one of the most important tools required for the financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical issue.

  19. Optimization of Operations Resources via Discrete Event Simulation Modeling

    Science.gov (United States)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for the operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solving such optimization problems with integer-valued decision variables are pattern search and statistical methods. However, in a simulation environment characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we explore the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
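
    The following sketch shows the pattern in miniature: a genetic algorithm searching over integer resource levels, with a toy cost function standing in for the discrete event simulation run (which in practice is stochastic and expensive). All names and parameter values are illustrative assumptions.

```python
# Hedged sketch: genetic algorithm over integer resource levels, with a
# toy surrogate in place of a discrete event simulation evaluation.
import random

random.seed(1)
N_RES, LO, HI = 5, 1, 20          # five resource types, allowed levels

def simulate_cost(levels):
    # Stand-in for a stochastic discrete event simulation: cost grows with
    # resources, plus a heavy penalty when total capacity is too low.
    shortfall = max(0, 35 - sum(levels))
    noise = random.gauss(0, 0.5)
    return sum(levels) + 10 * shortfall + noise

def evolve(pop_size=30, gens=60, p_mut=0.2):
    pop = [[random.randint(LO, HI) for _ in range(N_RES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=simulate_cost)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_RES)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:            # integer mutation
                i = random.randrange(N_RES)
                child[i] = min(HI, max(LO, child[i] + random.choice((-1, 1))))
            children.append(child)
        pop = parents + children
    return min(pop, key=simulate_cost)

print("best resource levels found:", evolve())
```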

  20. Cognitive modeling and dynamic probabilistic simulation of operating crew response to complex system accidents

    International Nuclear Information System (INIS)

    Chang, Y.H.J.; Mosleh, A.

    2007-01-01

    This is the third in a series of five papers describing the IDAC (Information, Decision, and Action in Crew context) model for human reliability analysis. An example application of this modeling technique is also discussed in the series. The model is developed to probabilistically predict the responses of a nuclear power plant control room operating crew in accident conditions. The operator response spectrum includes cognitive, emotional, and physical activities during the course of an accident. This paper discusses the modeling components and their process rules. An operator's problem-solving process is divided into three types: information pre-processing (I), diagnosis and decision-making (D), and action execution (A). Explicit and context-dependent behavior rules for each type of operator are developed in the form of tables and logical or mathematical relations; these regulate the process and activities of each of the three types of response. The behavior rules are developed for three generic types of operator: Decision Maker, Action Taker, and Consultant. This paper also provides a simple approach to calculating normalized probabilities of alternative behaviors given a context.

  1. Robustness Analysis of Visual Question Answering Models by Basic Questions

    KAUST Repository

    Huang, Jia-Hong

    2017-11-01

    Visual Question Answering (VQA) models should have both high robustness and accuracy. Unfortunately, most current VQA research focuses only on accuracy, because there is a lack of proper methods to measure the robustness of VQA models. There are two main modules in our algorithm. Given a natural language question about an image, the first module takes the question as input and outputs the ranked basic questions, with similarity scores, of the given main question. The second module takes the main question, the image, and these basic questions as input and outputs the text-based answer to the main question about the given image. We claim that a robust VQA model is one whose performance does not change much when related basic questions are also made available to it as input. We formulate the basic question generation problem as a LASSO optimization problem, and also propose a large-scale Basic Question Dataset (BQD) and Rscore, a novel robustness measure, for analyzing the robustness of VQA models. We hope our BQD will be used as a benchmark to evaluate the robustness of VQA models, helping the community build more robust and accurate VQA models.
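
    The LASSO formulation here is essentially sparse reconstruction: express the main question's feature vector as a sparse combination of candidate basic-question vectors, then rank candidates by coefficient magnitude. A minimal sketch of that idea with random placeholder embeddings (the paper's actual features and solver settings are not reproduced):

```python
# Hedged sketch: rank "basic questions" by expressing the main question's
# feature vector as a sparse (LASSO) combination of basic-question vectors.
# Embeddings are random placeholders, not the paper's actual features.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
D, N_BASIC = 64, 200
basic = rng.normal(size=(N_BASIC, D))            # candidate basic questions
main = 0.6 * basic[3] + 0.4 * basic[17] + 0.05 * rng.normal(size=D)

# Solve min_w ||main - basic^T w||^2 + alpha * ||w||_1
lasso = Lasso(alpha=0.05, max_iter=10000)
lasso.fit(basic.T, main)                         # columns = basic questions

scores = np.abs(lasso.coef_)
top = np.argsort(scores)[::-1][:5]
print("top-ranked basic questions:", top, "scores:", scores[top])
```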

  3. Cost Model for Risk Assessment of Company Operation in Audit

    Directory of Open Access Journals (Sweden)

    S. V.

    2017-12-01

    Full Text Available This article explores an approach to assessing the risk of termination of company activities by building a cost model. This model gives auditors information on managers' understanding of the factors influencing changes in the value of assets and liabilities, and on methods to identify such changes more effectively and reliably. Based on this information, the auditor can assess the adequacy of management's use of the assumption of continuity of company operation when preparing financial statements. Financial uncertainty entails factors that create risks of costs and revenue losses which, in the long run, can be a reason for termination of company operation and therefore need to be foreseen in the auditor's assessment of the adequacy of the continuity assumption in the preparation of financial statements by company management. The purpose of the study is to explore and develop a methodology for the use of cost models to assess the risk of termination of company operation in audit. The issue of a methodology for assessing audit risk through the analysis of company valuation methods has not previously been dealt with. The review of methodologies for assessing the risks of termination of company operation in the course of an audit gives grounds for the conclusion that the use of cost models can be an effective methodology for the identification and assessment of such risks. The analysis of the above methods gives an understanding of the existing system for company valuation, integrated into the management system, and the consequences of its use, i.e., comparison of asset price data with accounting data and market-value data. Overvalued or undervalued company assets may be a sign of future sale or liquidation of a company, which may signal a high probability of termination of company operation. A wrong choice or application of valuation methods can be indicative of the risk of non

  4. Evaluation of OPPS model for plant operator's task simulation with Micro-SAINT

    International Nuclear Information System (INIS)

    Yoshida, Kazuo

    1991-03-01

    The development of a computer simulation method for the cognitive behavior of operators under emergency conditions in nuclear power plants is being conducted at the Japan Atomic Energy Research Institute (JAERI). As one of the activities in this project, the task network modeling and simulation method has been evaluated with a reproduced OPPS model using Micro-SAINT, a PC software package for task network analysis. OPPS is an operator task simulation model developed by Oak Ridge National Laboratory. Operator tasks under the condition of a stuck-open safety relief valve in a BWR power plant were analyzed as a sample problem with the Micro-SAINT version of OPPS, to evaluate the task network analysis method. Furthermore, the fundamental capabilities of Micro-SAINT were evaluated, and the task network in the OPPS model was also examined. As a result of this study, it was clarified that random seed numbers in Micro-SAINT affect the probabilistic branching ratios and the distribution of task execution times calculated by Monte Carlo simulations, and that the expression of the network for a repeated task in the OPPS model leads to an incorrect standard deviation when a task execution time has a distribution. (author)

  5. Operation room tool handling and miscommunication scenarios: an object-process methodology conceptual model.

    Science.gov (United States)

    Wachs, Juan P; Frenkel, Boaz; Dori, Dov

    2014-11-01

    Errors in the delivery of medical care are the principal cause of inpatient mortality and morbidity, accounting for around 98,000 deaths annually in the United States of America (USA). Ineffective team communication, especially in the operating room (OR), is a major root cause of these errors. This miscommunication can be reduced by analyzing and constructing a conceptual model of communication and miscommunication in the OR. We introduce the principles underlying Object-Process Methodology (OPM)-based modeling of the intricate interactions between the surgeon and the surgical technician while handling surgical instruments in the OR. This model is a software- and hardware-independent description of the agents engaged in communication events, their physical activities, and their interactions. The model enables assessing whether the task-related objectives of the surgical procedure were achieved and completed successfully, and what errors can occur during the communication. The facts used to construct the model were gathered from observations of miscommunication during various types of operations in the operating room and of its outcomes. The model takes advantage of the compact ontology of OPM, which comprises stateful objects - things that exist physically or informatically - and processes - things that transform objects by creating them, consuming them or changing their state. The modeled communication modalities are verbal and non-verbal, and errors are modeled as processes that deviate from the "sunny day" scenario. Using the OPM refinement mechanism of in-zooming, key processes are drilled into and elaborated, along with the objects that are required as agents or instruments, or that these processes transform. The model was developed through an iterative process of observation, modeling, group discussions, and simplification. The model faithfully represents the processes related to tool handling that take place in an OR during an operation. The specification is at

  6. Ethical Issues in Engineering Models: An Operations Researcher's Reflections

    OpenAIRE

    Kleijnen, J.

    2010-01-01

    This article starts with an overview of the author's personal involvement - as an Operations Research consultant - in several engineering case studies that may raise ethical questions; e.g., case studies on nuclear waste, water management, sustainable ecology, military tactics, and animal welfare. All these case studies employ computer simulation models. In general, models are meant to solve practical problems, which may have ethical implications for the various stakeholders; namely, the modelers...

  7. Towards operational modeling and forecasting of the Iberian shelves ecosystem.

    Directory of Open Access Journals (Sweden)

    Martinho Marta-Almeida

    Full Text Available There is a growing interest in physical and biogeochemical oceanic hindcasts and forecasts from a wide range of users and businesses. In this contribution we present an operational biogeochemical forecast system for the Portuguese and Galician oceanographic regions, in which atmospheric, hydrodynamic and biogeochemical variables are integrated. The ocean model ROMS, with a horizontal resolution of 3 km, is forced by the atmospheric model WRF and includes a Nutrients-Phytoplankton-Zooplankton-Detritus (NPZD) biogeochemical module. In addition to oceanographic variables, the system predicts the concentrations of nitrate, phytoplankton, zooplankton and detritus (mmol N m⁻³). Model results are compared against radar-derived currents and remotely sensed SST and chlorophyll. Quantitative skill assessment during a summer upwelling period shows that our modelling system adequately represents the surface circulation over the shelf, including the observed spatial variability and trends of temperature and chlorophyll concentration. The skill assessment also reveals some deficiencies, such as the overestimation of the upwelling circulation and, consequently, of the duration and intensity of the phytoplankton blooms. These and other departures from the observations are discussed, their origins identified and future improvements suggested. The forecast system is the first of its kind in the region and provides free online distribution of model input and output, as well as comparisons of model results with satellite imagery for qualitative operational assessment of model skill.
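
    For readers unfamiliar with NPZD modules, the core is a set of four coupled equations exchanging nitrogen among compartments. A minimal zero-dimensional sketch follows; the rate constants are generic textbook-style assumptions, not the coefficients of the ROMS module described above.

```python
# Hedged sketch: zero-dimensional NPZD (Nutrients-Phytoplankton-Zooplankton-
# Detritus) box model. Rate constants are generic assumptions, not the
# coefficients of the ROMS NPZD module described above.
import numpy as np
from scipy.integrate import solve_ivp

def npzd(t, y, vm=1.0, ks=0.5, g=0.4, kp=0.3, mp=0.05, mz=0.05, rd=0.1):
    N, P, Z, D = y
    uptake = vm * N / (ks + N) * P            # Michaelis-Menten uptake
    grazing = g * P / (kp + P) * Z            # zooplankton grazing
    return [
        -uptake + rd * D,                     # nutrients
        uptake - grazing - mp * P,            # phytoplankton
        0.7 * grazing - mz * Z,               # zooplankton (30% egested)
        0.3 * grazing + mp * P + mz * Z - rd * D,   # detritus
    ]

sol = solve_ivp(npzd, (0.0, 120.0), [8.0, 0.1, 0.05, 0.0],  # mmol N m^-3
                dense_output=True, max_step=0.1)
print("final N, P, Z, D:", np.round(sol.y[:, -1], 3))
```

    Note that the four right-hand sides sum to zero, so total nitrogen is conserved, which is a standard consistency check for such modules.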

  8. Creation and annihilation operators for SU(3) in an SO(6,2) model

    International Nuclear Information System (INIS)

    Bracken, A.J.; MacGibbon, J.H.

    1984-01-01

    Creation and annihilation operators are defined which are Wigner operators (tensor shift operators) for SU(3). While the annihilation operators are simply boson operators, the creation operators are cubic polynomials in boson operators. Together they generate under commutation the Lie algebra of SO(6,2). A model for SU(3) is defined. The different SU(3) irreducible representations appear explicitly as manifestly covariant, irreducible tensors, whose orthogonality and normalisation properties are examined. Other Wigner operators for SU(3) can be constructed simply as products of the new creation and annihilation operators, or sums of such products. (author)

  9. PSOLA: A Heuristic Land-Use Allocation Model Using Patch-Level Operations and Knowledge-Informed Rules.

    Directory of Open Access Journals (Sweden)

    Yaolin Liu

    Full Text Available Optimizing land-use allocation is important to regional sustainable development, as it promotes the social equality of public services, increases the economic benefits of land-use activities, and reduces the ecological risk of land-use planning. Most land-use optimization models allocate land use through cell-level operations that fragment land-use patches. These models also do not incorporate land-use planning knowledge well, leading to irrational land-use patterns. This study focuses on building a heuristic land-use allocation model (PSOLA) using particle swarm optimization. The model allocates land use with patch-level operations to avoid fragmentation. The patch-level operations include a patch-edge operator, a patch-size operator, and a patch-compactness operator that constrain the size and shape of land-use patches. The model is also integrated with knowledge-informed rules that provide auxiliary land-use planning knowledge during optimization. The knowledge-informed rules consist of suitability, accessibility, land-use policy, and stakeholders' preferences. To validate the PSOLA model, a case study was performed in Gaoqiao Town in Zhejiang Province, China. The results demonstrate that the PSOLA model outperforms a basic PSO (Particle Swarm Optimization) in terms of the social, economic, ecological, and overall benefits by 3.60%, 7.10%, 1.53% and 4.06%, respectively, which confirms the effectiveness of our improvements. Furthermore, the model has an open architecture, enabling its extension as a generic tool to support decision making in land-use planning.
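
    For context, the particle swarm optimization loop that PSOLA extends is compact. The sketch below shows a basic PSO on a generic objective; the patch-level operators and knowledge-informed rules described above are not reproduced.

```python
# Hedged sketch of the basic particle swarm optimization loop that models
# such as PSOLA extend; objective and parameters are generic placeholders.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                 # stand-in benefit function (to minimize)
    return np.sum((x - 0.3) ** 2, axis=1)

n_particles, dim, iters = 40, 10, 200
w, c1, c2 = 0.7, 1.5, 1.5         # inertia, cognitive, social weights

x = rng.uniform(0, 1, (n_particles, dim))       # positions
v = np.zeros_like(x)                            # velocities
pbest = x.copy(); pbest_f = objective(x)
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    f = objective(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best objective:", pbest_f.min())
```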

  10. A "Toy" Model for Operational Risk Quantification using Credibility Theory

    OpenAIRE

    Hans B\\"uhlmann; Pavel V. Shevchenko; Mario V. W\\"uthrich

    2009-01-01

    To meet the Basel II regulatory requirements for the Advanced Measurement Approaches in operational risk, the bank's internal model should make use of internal data, relevant external data, scenario analysis and factors reflecting the business environment and internal control systems. One of the unresolved challenges in operational risk is how to combine these data sources appropriately. In this paper we focus on the quantification of low-frequency, high-impact losses exceeding some high thr...

  11. A shared-world conceptual model for integrating space station life sciences telescience operations

    Science.gov (United States)

    Johnson, Vicki; Bosley, John

    1988-01-01

    Mental models of the Space Station and its ancillary facilities will be employed by users of the Space Station as they draw upon past experiences, perform tasks, and collectively plan for future activities. The operational environment of the Space Station will incorporate telescience, a new set of operational modes. To investigate properties of the operational environment, distributed users, and the mental models they employ to manipulate resources while conducting telescience, an integrating shared-world conceptual model of Space Station telescience is proposed. The model comprises distributed users and resources (active elements); agents who mediate interactions among these elements on the basis of intelligent processing of shared information; and telescience protocols which structure the interactions of agents as they engage in cooperative, responsive interactions on behalf of users and resources distributed in space and time. Examples from the life sciences are used to instantiate and refine the model's principles. Implications for transaction management and autonomy are discussed. Experiments employing the model are described which the authors intend to conduct using the Space Station Life Sciences Telescience Testbed currently under development at Ames Research Center.

  12. Adapting Modeling & SImulation for Network Enabled Operations

    Science.gov (United States)

    2011-03-01

    Awareness in Aerospace Operations (AGARD-CP-478; pp. 5/1-5/8), Neuilly-sur-Seine, France: NATO-AGARD. 243 Chapter 8 Shaping UK Defence Policy...Chapter 3 73 Increasing the Maturity of Command to Deal with Complex, Information Age Environments • Players could concentrate on their own areas; they...The results are shown in figure 4.16, which shows the fit for the first four serials. The model still explains 73% of the variability, down from 82

  13. "A model co-operative country": Irish-Finnish co-operative contacts at the turn of the twentieth century

    DEFF Research Database (Denmark)

    Hilson, Mary

    2017-01-01

    Agricultural co-operative societies were widely discussed across late nineteenth-century Europe as a potential solution to the problems of agricultural depression, land reform and rural poverty. In Finland, the agronomist Hannes Gebhard drew inspiration from examples across Europe in founding the… …between Irish and Finnish co-operators around the turn of the century, and examines the ways in which the parallels between the two countries were constructed and presented by those involved in these exchanges. I will also consider the reasons for the divergence in the development of cooperation, so that even before the First World War it was Finland, not Ireland, that had begun to be regarded as 'a model co-operative country'.

  14. A cognitive model of human behaviour for simulating operators of complex plants

    International Nuclear Information System (INIS)

    Cacciabue, P.C.; Mancini, G.; Bersini, U.

    1988-01-01

    This paper discusses the need for a 'deterministic' representation of the operator's reasoning and sensory-motor behaviour in order to approach correctly the overall problem of Man-Machine Interaction (MMI). This type of modelling represents a fundamental complement to the merely probabilistic quantification of operator performance, for safety as well as for design purposes. A cognitive model, formally based on a hierarchical goal-oriented approach and driven by fuzzy logic methodology, is then presented and briefly discussed, including the psychological criteria by which the content of operators' knowledge is exploited for the instantiation of strategies during emergencies. Finally, the potential applications of this methodology are reviewed, identifying limits and advantages in comparison with more classical and mechanistic approaches. (author)

  15. Simulation Model for Dynamic Operation of Double-Effect Absorption Chillers

    Directory of Open Access Journals (Sweden)

    Ahmed Mojahid Sid Ahmed Mohammed Salih

    2014-07-01

    Full Text Available The field of refrigeration and air conditioning systems driven by absorption cycles has recently acquired considerable importance. For commercial absorption chillers, an essential challenge in creating a chiller model is the shortage of component technical specifications, which are usually proprietary to chiller manufacturers. In this paper, a model of a double-effect, parallel-flow-type steam absorption chiller based on thermodynamic and energy equations is presented. The chiller studied is a lithium bromide-water machine with a capacity of 1250 RT (refrigeration tons). The governing equations of the dynamic operation of the chiller are developed. From available design information, the values of the overall heat transfer coefficients multiplied by the surface area (UA) are computed. The dynamic operation of the absorption chiller is simulated to study the performance of the system. The model is able to provide essential details of the temperature, concentration, and flow rate at each state point in the chiller.
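
    The step of recovering UA values from design data is commonly done through the heat duty and the log-mean temperature difference of each component. A minimal sketch with invented design-point numbers (not the 1250 RT chiller's data):

```python
# Hedged sketch: back out UA (overall heat-transfer coefficient x area) of a
# heat exchanger from design-point data via the log-mean temperature
# difference. Numbers are illustrative, not the 1250 RT chiller's data.
import math

def ua_from_design(q_kw, t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """UA in kW/K for a counter-flow exchanger, using Q = UA * LMTD."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    lmtd = (dt1 - dt2) / math.log(dt1 / dt2) if abs(dt1 - dt2) > 1e-9 else dt1
    return q_kw / lmtd

# Example: assumed evaporator design point (chilled water 12->7 deg C,
# refrigerant evaporating at roughly 4-5 deg C)
ua = ua_from_design(q_kw=4400.0, t_hot_in=12.0, t_hot_out=7.0,
                    t_cold_in=4.0, t_cold_out=5.0)
print(f"evaporator UA ~ {ua:.0f} kW/K")
```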

  16. The Operational Planning Model of Transhipment Processes in the Port

    Directory of Open Access Journals (Sweden)

    Mia Jurjević

    2016-04-01

    Full Text Available Modelling of a traffic system addresses the efficiency of the operations needed to establish successful business performance, by examining the possibilities for its improvement. The main purpose of each container terminal is to ensure the continuity and dynamics of the flow of containers. The objective of this paper is to present a method for determining the amount of each type of container that can be transhipped at each berth, with the proper cargo handling, while minimizing the total costs of transhipment. A mathematical model of planning the transhipment and transportation of containers at the terminal is presented. The optimal solution, obtained with the method of linear programming, represents a container deployment plan that ensures an effective ongoing transhipment process at the lowest transhipment cost. The proposed model, tested in the port of Rijeka, should be the basis for making adequate business decisions in the operational planning of the container terminal.
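
    In outline, such a model is a transportation-style linear program: minimize total handling cost subject to berth capacities and per-type transhipment demand. A minimal sketch with invented costs and capacities (not the Rijeka terminal's data):

```python
# Hedged sketch: transportation-style LP for allocating container types to
# berths at minimum transhipment cost. Costs and capacities are invented,
# not the port of Rijeka's data.
import numpy as np
from scipy.optimize import linprog

# cost[i][j]: cost of handling one container of type i at berth j
cost = np.array([[4.0, 5.0, 6.5],
                 [5.5, 4.5, 5.0]])
demand = np.array([300.0, 200.0])           # containers per type
capacity = np.array([250.0, 250.0, 150.0])  # berth capacities

n_types, n_berths = cost.shape
c = cost.ravel()                            # variables x[i, j], flattened

# equality: each type's demand is fully allocated across berths
A_eq = np.zeros((n_types, n_types * n_berths))
for i in range(n_types):
    A_eq[i, i * n_berths:(i + 1) * n_berths] = 1.0

# inequality: berth capacity summed over all container types
A_ub = np.zeros((n_berths, n_types * n_berths))
for j in range(n_berths):
    A_ub[j, j::n_berths] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
              bounds=(0, None))
print("minimum cost:", res.fun)
print("allocation (type x berth):\n", res.x.reshape(n_types, n_berths))
```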

  17. An information theory-based approach to modeling the information processing of NPP operators

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model of the information processing of NPP operators and ii) quantifying the model. To resolve the problems of previous approaches based on information theory, i.e., the limitations of single-channel approaches, we first develop an information processing model with multiple stages, which contains information flows. Then the uncertainty of the information is quantified using Conant's model, a kind of information theory.
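
    To give a flavour of this kind of quantification: Shannon entropy and mutual information between plant indications and operator responses are the basic quantities such models build on. The sketch below computes them from a toy joint distribution; Conant's staged decomposition itself is not reproduced here.

```python
# Hedged sketch: Shannon entropy and mutual information between plant
# indications (X) and operator responses (Y), the basic quantities behind
# information-theoretic operator models. Counts are toy values.
import numpy as np

# joint counts of (indication, response) from an imagined task log
joint = np.array([[30.0, 5.0, 1.0],
                  [4.0, 40.0, 6.0],
                  [1.0, 5.0, 28.0]])
p_xy = joint / joint.sum()
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

def H(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()     # entropy in bits

mutual_info = H(p_x) + H(p_y) - H(p_xy.ravel())
print(f"H(X) = {H(p_x):.2f} bits, H(Y) = {H(p_y):.2f} bits")
print(f"I(X;Y) = {mutual_info:.2f} bits per decision")
```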

  18. Analysis of Operating Principles with S-system Models

    Science.gov (United States)

    Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.

    2011-01-01

    Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others, and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
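
    The key property used here is that an S-system dX_i/dt = α_i ∏_j X_j^(g_ij) − β_i ∏_j X_j^(h_ij) has steady states satisfying the linear system (G − H)·y = ln(β/α) with y = ln X. A minimal sketch with toy parameters (not the yeast heat-stress model):

```python
# Hedged sketch: S-system steady states reduce to linear equations in
# log-space: (G - H) y = ln(beta/alpha), y = ln(X). Toy 2-variable system.
import numpy as np

alpha = np.array([2.0, 1.0])     # production rate constants
beta = np.array([1.0, 1.5])      # degradation rate constants
G = np.array([[0.0, 0.5],        # production kinetic orders g_ij
              [1.0, 0.0]])
H = np.array([[1.0, 0.0],        # degradation kinetic orders h_ij
              [0.0, 0.8]])

A = G - H
b = np.log(beta / alpha)
y = np.linalg.solve(A, b)        # square, invertible case; otherwise
X_ss = np.exp(y)                 # use lstsq or a pseudo-inverse
print("steady state X:", X_ss)

# verify: production equals degradation at the steady state
prod = alpha * np.prod(X_ss ** G, axis=1)
degr = beta * np.prod(X_ss ** H, axis=1)
print("production:", prod, "degradation:", degr)
```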

  19. Preliminary Findings of the South Africa Power System Capacity Expansion and Operational Modelling Study: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Reber, Timothy J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Chartan, Erol Kevin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brinkman, Gregory L [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-01

    Wind and solar power contract prices have recently become cheaper than many conventional new-build alternatives in South Africa, and trends suggest a continued increase in the share of variable renewable energy (vRE) on South Africa's power system, with coal technology seeing the greatest reduction in capacity (see 'Figure 6: Percentage share by Installed Capacity (MW)' in [1]). Hence it is essential to perform a state-of-the-art grid integration study examining the effects of these high penetrations of vRE on South Africa's power system. Under the 21st Century Power Partnership (21CPP), funded by the U.S. Department of Energy, the National Renewable Energy Laboratory (NREL) has significantly augmented existing models of the South African power system to investigate future vRE scenarios. NREL, in collaboration with Eskom's Planning Department, further developed, tested and ran a combined capacity expansion and operational model of the South African power system, including spatially disaggregated detail and geographical representation of system resources. New software to visualize and interpret modelling outputs has been developed, and scenario analysis of stepwise vRE build targets reveals new insight into the associated planning and operational impacts and costs. The model, built using PLEXOS, is split into two components: a capacity expansion model, and a unit commitment and economic dispatch model. The capacity expansion model optimizes new generation decisions to achieve the lowest cost, with a full understanding of capital costs and an approximated understanding of operational costs. The operational model has a greater set of detailed operational constraints and is run at daily resolution. Both are run from 2017 through 2050. This investigation suggests that running both models in tandem may be the most effective means to plan the least-cost South African power system, as build plans seen to be more expensive than optimal by the

  20. A model technology transfer program for independent operators: Kansas Technology Transfer Model (KTTM)

    Energy Technology Data Exchange (ETDEWEB)

    Schoeling, L.G.

    1993-09-01

    This report describes the development and testing of the Kansas Technology Transfer Model (KTTM) which is to be utilized as a regional model for the development of other technology transfer programs for independent operators throughout oil-producing regions in the US. It describes the linkage of the regional model with a proposed national technology transfer plan, an evaluation technique for improving and assessing the model, and the methodology which makes it adaptable on a regional basis. The report also describes management concepts helpful in managing a technology transfer program. The original Tertiary Oil Recovery Project (TORP) activities, upon which the KTTM is based, were developed and tested for Kansas and have proved to be effective in assisting independent operators in utilizing technology. Through joint activities of TORP and the Kansas Geological Survey (KGS), the KTTM was developed and documented for application in other oil-producing regions. During the course of developing this model, twelve documents describing the implementation of the KTTM were developed as deliverables to DOE. These include: (1) a problem identification (PI) manual describing the format and results of six PI workshops conducted in different areas of Kansas, (2) three technology workshop participant manuals on advanced waterflooding, reservoir description, and personal computer applications, (3) three technology workshop instructor manuals which provides instructor material for all three workshops, (4) three technologies were documented as demonstration projects which included reservoir management, permeability modification, and utilization of a liquid-level acoustic measuring device, (5) a bibliography of all literature utilized in the documents, and (6) a document which describes the KTTM.

  1. Assessing the operation rules of a reservoir system based on a detailed modelling-chain

    Science.gov (United States)

    Bruwier, M.; Erpicum, S.; Pirotton, M.; Archambeau, P.; Dewals, B.

    2014-09-01

    According to available climate change scenarios for Belgium, drier summers and wetter winters are expected. In this study, we focus on two multi-purpose reservoirs located in the Vesdre catchment, which is part of the Meuse basin. The current operation rules of the reservoirs are first analysed. Next, the impacts of two climate change scenarios are assessed and enhanced operation rules are proposed to mitigate these impacts. For this purpose, an integrated model of the catchment was used. It includes a hydrological model, one-dimensional and two-dimensional hydraulic models of the river and its main tributaries, a model of the reservoir system and a flood damage model. Five performance indicators of the reservoir system have been defined, reflecting its ability to provide sufficient drinking water, to control floods, to produce hydropower and to support low-flow augmentation. As shown by the results, enhanced operation rules may improve the drinking water potential and the low-flow augmentation, while the existing operation rules are efficient for flood control and for hydropower production.

  2. Assessing the operation rules of a reservoir system based on a detailed modelling chain

    Science.gov (United States)

    Bruwier, M.; Erpicum, S.; Pirotton, M.; Archambeau, P.; Dewals, B. J.

    2015-03-01

    According to available climate change scenarios for Belgium, drier summers and wetter winters are expected. In this study, we focus on two multi-purpose reservoirs located in the Vesdre catchment, which is part of the Meuse basin. The current operation rules of the reservoirs are first analysed. Next, the impacts of two climate change scenarios are assessed and enhanced operation rules are proposed to mitigate these impacts. For this purpose, an integrated model of the catchment was used. It includes a hydrological model, one-dimensional and two-dimensional hydraulic models of the river and its main tributaries, a model of the reservoir system and a flood damage model. Five performance indicators of the reservoir system have been defined, reflecting its ability to provide sufficient drinking water, to control floods, to produce hydropower and to reduce low-flow conditions. As shown by the results, enhanced operation rules may improve the drinking water potential and the low-flow augmentation while the existing operation rules are efficient for flood control and for hydropower production.

  3. A method for aggregating external operating conditions in multi-generation system optimization models

    DEFF Research Database (Denmark)

    Lythcke-Jørgensen, Christoffer Ernst; Münster, Marie; Ensinas, Adriano Viana

    2016-01-01

    This paper presents a novel, simple method for reducing the external operating condition datasets used in multi-generation system optimization models. The method, called the Characteristic Operating Pattern (CHOP) method, is a visually based aggregation method that clusters reference data based on parameter values rather than time of occurrence, thereby preserving important information on short-term relations between the relevant operating parameters. This is opposed to commonly used methods where data are averaged over chronological periods (months or years) and extreme conditions are hidden in the averaged values. The CHOP method is tested in a case study where the operation of a fictive Danish combined heat and power plant is optimized over a historical 5-year period. The optimization model is solved using the full external operating condition dataset, a reduced dataset obtained using the CHOP
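
    In spirit, the aggregation bins the joint distribution of the operating parameters and weights each bin by its frequency, instead of averaging over chronological periods. A minimal sketch of that idea on synthetic data (two parameters, bin edges picked by inspection; this is not the CHOP implementation itself):

```python
# Hedged sketch of parameter-value aggregation in the spirit of the CHOP
# method: bin joint operating conditions and weight bins by frequency,
# rather than averaging over chronological periods. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
hours = 5 * 8760
heat_demand = np.clip(rng.normal(50, 20, hours), 0, None)   # MW (synthetic)
elec_price = np.clip(rng.normal(40, 15, hours), 0, None)    # EUR/MWh

# bin edges chosen by inspecting the data (the visually based step)
demand_edges = [0, 30, 60, np.inf]
price_edges = [0, 25, 50, np.inf]

d_idx = np.digitize(heat_demand, demand_edges) - 1
p_idx = np.digitize(elec_price, price_edges) - 1

for i in range(3):
    for j in range(3):
        mask = (d_idx == i) & (p_idx == j)
        if mask.any():
            print(f"bin({i},{j}): weight={mask.mean():.3f}, "
                  f"demand={heat_demand[mask].mean():5.1f} MW, "
                  f"price={elec_price[mask].mean():5.1f} EUR/MWh")
```

    Each bin then becomes one representative operating condition in the optimization model, weighted by its share of the total duration.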

  4. Pre-operative simulation of periacetabular osteotomy via a three-dimensional model constructed from salt

    Directory of Open Access Journals (Sweden)

    Fukushima Kensuke

    2017-01-01

    Full Text Available Introduction: Periacetabular osteotomy (PAO) is an effective joint-preserving procedure for young adults with developmental dysplasia of the hip. Although PAO provides excellent radiographic and clinical results, it is a technically demanding procedure with a distinct learning curve that requires careful 3D planning and, above all, has a number of potential complications. We therefore developed a pre-operative simulation method for PAO via the creation of a new full-scale model. Methods: The model was prepared from the patient's Digital Imaging and Communications in Medicine (DICOM) formatted computed tomography (CT) data, for construction and assembly using 3D printing technology. A major feature of our model is that it is constructed from salt. In contrast to conventional models, our model provides a more accurate representation, at a lower manufacturing cost, and requires a shorter production time. Furthermore, our model allowed the simulated operation to be performed normally using a chisel and drill, without easy breakage or fissuring. We were able to easily simulate the line of osteotomy and confirm acetabular version and coverage after moving the osteotomized fragment. Additionally, this model allowed a dynamic assessment that avoided anterior impingement following the osteotomy. Results: Our models clearly reflected the anatomical shape of the patient's hip and allowed for surgical simulation with realistic use of the chisel and drill. Our method of pre-operative simulation for PAO allowed the assessment of an accurate osteotomy line, determination of the position of the osteotomized fragment, and prevention of anterior impingement after the operation. Conclusion: Our method of pre-operative simulation might improve the safety, accuracy, and results of PAO.

  5. EDM - A model for optimising the short-term power operation of a complex hydroelectric network

    International Nuclear Information System (INIS)

    Tremblay, M.; Guillaud, C.

    1996-01-01

    In order to optimize the short-term power operation of a complex hydroelectric network, a new model called EDM was added to PROSPER, a water management analysis system developed by SNC-Lavalin. PROSPER is now divided into three parts: an optimization model (DDDP), a simulation model (ESOLIN), and an economic dispatch model (EDM) for the short-term operation. The operation of the KSEB hydroelectric system (located in southern India) with PROSPER was described. The long-term analysis with monthly time steps is assisted by the DDDP, and the daily analysis with hourly or half-hourly time steps is performed with the EDM model. 3 figs

  6. Hypothetical operation model for the multi-bed system of the Tritium plant based on the scheduling approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae-Uk, E-mail: eslee@dongguk.edu [Department of Chemical Engineering, Pohang University of Science and Technology, San 31, Hyoja-Dong, Pohang 790-784 (Korea, Republic of); Chang, Min Ho; Yun, Sei-Hun [National Fusion Research Institute, 169-148-gil Kwahak-ro, Yusong-gu, Daejon 34133 (Korea, Republic of); Lee, Euy Soo [Department of Chemical & Biochemical Engineering, Dongguk University, Seoul 100-715 (Korea, Republic of); Lee, In-Beum [Department of Chemical Engineering and Graduate School of Engineering Mastership, Pohang University of Science and Technology, San 31, Hyoja-Dong, Pohang 790-784 (Korea, Republic of); Lee, Kun-Hong [Department of Chemical Engineering, Pohang University of Science and Technology, San 31, Hyoja-Dong, Pohang 790-784 (Korea, Republic of)

    2016-11-01

    Highlights: • We introduce a mathematical model for the multi-bed storage system in the tritium plant. • We obtain details of operation by solving the model. • The model assesses diverse operation scenarios with respect to risk. - Abstract: In this paper, we describe our hypothetical operation model (HOM) for the multi-bed system of the storage and delivery system (SDS) of the ITER tritium plant. The multi-bed system consists of multiple getter beds (i.e., for batch operation) and buffer vessels (i.e., for continuous operation). Our newly developed HOM is formulated as a mixed-integer linear programming (MILP) model, an approach that has been extensively investigated for optimizing chemical and petrochemical production planning and scheduling. Our model determines the timing, duration, and size of the tasks corresponding to each set of equipment. Further, inventory levels for each set of equipment are calculated. Our proposed model considers the operation of one cycle of one set of getter beds, and is implemented and assessed as a case study problem.
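
    To indicate the flavour of such a formulation, the sketch below states a tiny MILP of the same timing/assignment type: one regeneration per getter bed over a short horizon, at most one bed offline per slot, finishing as early as possible. The bed count, horizon, and rules are illustrative assumptions, not the SDS design; the sketch uses the PuLP modelling library.

```python
# Hedged sketch: a tiny MILP of the timing/assignment flavour used for
# multi-bed batch operation. Bed/slot counts and rules are illustrative.
import pulp

beds, slots = range(3), range(8)
prob = pulp.LpProblem("bed_regeneration", pulp.LpMinimize)

# r[b][t] = 1 if bed b regenerates during time slot t
r = pulp.LpVariable.dicts("r", (beds, slots), cat="Binary")

# each bed regenerates exactly once in the horizon
for b in beds:
    prob += pulp.lpSum(r[b][t] for t in slots) == 1

# shared regeneration heater: at most one bed offline per slot
for t in slots:
    prob += pulp.lpSum(r[b][t] for b in beds) <= 1

# objective: finish all regenerations as early as possible
prob += pulp.lpSum(t * r[b][t] for b in beds for t in slots)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for b in beds:
    slot = next(t for t in slots if pulp.value(r[b][t]) > 0.5)
    print(f"bed {b} regenerates in slot {slot}")
```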

  7. Addressing drug adherence using an operations management model.

    Science.gov (United States)

    Nunlee, Martin; Bones, Michelle

    2014-01-01

    OBJECTIVE To provide a model that enables health systems and pharmacy benefit managers to provide medications reliably and test for reliability and validity in the analysis of adherence to drug therapy of chronic disease. SUMMARY The quantifiable model described here can be used in conjunction with behavioral designs of drug adherence assessments. The model identifies variables that can be reproduced and expanded across the management of chronic diseases with drug therapy. By creating a reorder point system for reordering medications, the model uses a methodology commonly seen in operations research. The design includes a safety stock of medication and current supply of medication, which increases the likelihood that patients will have a continuous supply of medications, thereby positively affecting adherence by removing barriers. CONCLUSION This method identifies an adherence model that quantifies variables related to recommendations from health care providers; it can assist health care and service delivery systems in making decisions that influence adherence based on the expected order cycle days and the expected daily quantity of medication administered. This model addresses the possession of medication as a barrier to adherence.
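
    The operations-management core referred to here is the classic reorder-point rule: reorder when on-hand supply falls to the expected demand during the resupply lead time plus a safety stock. A minimal sketch with invented numbers (not values from the article):

```python
# Hedged sketch of a reorder-point (ROP) rule applied to medication supply:
# ROP = expected demand during resupply lead time + safety stock.
# All values are invented, not taken from the article.
import math

daily_dose = 2.0            # tablets per day (prescribed regimen)
lead_time_days = 7.0        # expected order cycle / resupply delay
sigma_lead = 3.0            # std. dev. of lead-time demand (tablets)
z = 1.64                    # ~95% service level

safety_stock = z * sigma_lead
reorder_point = daily_dose * lead_time_days + safety_stock
print(f"reorder when on-hand supply <= {math.ceil(reorder_point)} tablets")

# daily check: trigger a refill order once stock crosses the threshold
on_hand = 18
if on_hand <= reorder_point:
    print("refill order triggered")
```

    The safety stock is what keeps a continuous supply despite variable lead times, which is the adherence barrier the model aims to remove.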

  8. Modeling spatial-temporal operations with context-dependent associative memories.

    Science.gov (United States)

    Mizraji, Eduardo; Lin, Juan

    2015-10-01

    We organize our behavior and store structured information with many procedures that require the coding of spatial and temporal order in specific neural modules. In the simplest cases, spatial and temporal relations are condensed in prepositions like "below" and "above", "behind" and "in front of", or "before" and "after", etc. Neural operators lie beneath these words, sharing some similarities with logical gates that compute spatial and temporal asymmetric relations. We show how these operators can be modeled by means of neural matrix memories acting on Kronecker tensor products of vectors. The complexity of these memories is further enhanced by their ability to store episodes unfolding in space and time. How does the brain scale up from the raw plasticity of contingent episodic memories to the apparent stable connectivity of large neural networks? We clarify this transition by analyzing a model that flexibly codes episodic spatial and temporal structures into contextual markers capable of linking different memory modules.
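
    A minimal sketch of such a context-dependent matrix memory follows: the memory stores outputs against Kronecker products of a cue with a context vector, so the same cue retrieves different outputs under different contexts (e.g., "before" versus "after"). Vectors are random toys, not a fitted neural model.

```python
# Hedged sketch of a context-dependent matrix memory (in the spirit of
# Mizraji's models): M = sum_k out_k (cue_k (x) ctx_k)^T, so one cue
# retrieves different outputs under different contexts. Toy vectors.
import numpy as np

rng = np.random.default_rng(0)

def unit(n):
    v = rng.normal(size=n)
    return v / np.linalg.norm(v)

d = 50
cue = unit(d)                        # e.g. "A relative to B"
ctx_before, ctx_after = unit(d), unit(d)
out_before, out_after = unit(d), unit(d)

# store two associations: (cue, context) -> output
M = (np.outer(out_before, np.kron(cue, ctx_before))
     + np.outer(out_after, np.kron(cue, ctx_after)))

# recall under each context and compare with the stored outputs
for name, ctx, target in [("before", ctx_before, out_before),
                          ("after", ctx_after, out_after)]:
    recalled = M @ np.kron(cue, ctx)
    sim = recalled @ target / np.linalg.norm(recalled)
    print(f"context '{name}': similarity to stored output = {sim:.3f}")
```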

  9. A Review of Quantitative Situation Assessment Models for Nuclear Power Plant Operators

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Situation assessment is the process of developing situation awareness, and situation awareness is defined as 'the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future.' Situation awareness is an important element influencing human actions, because human decision making is based on the result of situation assessment or situation awareness. There are many models of situation awareness, and those models can be categorized as qualitative or quantitative. Because the effects of input factors on situation awareness can be investigated through quantitative models, quantitative models are more useful than qualitative ones for the design of operator interfaces, automation strategies, training programs, and so on. This study presents a review of two quantitative models of situation assessment (SA) for nuclear power plant operators.

  10. Time-dependent reliability analysis of nuclear reactor operators using probabilistic network models

    International Nuclear Information System (INIS)

    Oka, Y.; Miyata, K.; Kodaira, H.; Murakami, S.; Kondo, S.; Togo, Y.

    1987-01-01

    Human factors are very important for the reliability of a nuclear power plant, and human behavior is essentially time-dependent in nature. The details of thinking and decision-making processes are important for a detailed analysis of human reliability; they have, however, not been well considered by conventional methods of human reliability analysis. The present paper describes models for time-dependent and detailed human reliability analysis. Recovery by an operator is taken into account, and two-operator models are also presented.

  11. Geometrical aspects of operator ordering terms in gauge invariant quantum models

    International Nuclear Information System (INIS)

    Houston, P.J.

    1990-01-01

    Finite-dimensional quantum models with both boson and fermion degrees of freedom, and which have a gauge invariance, are studied here as simple versions of gauge invariant quantum field theories. The configuration space of these finite-dimensional models has the structure of a principal fibre bundle and has defined on it a metric which is invariant under the action of the bundle or gauge group. When the gauge-dependent degrees of freedom are removed, thereby defining the quantum models on the base of the principal fibre bundle, extra operator ordering terms arise. By making use of dimensional reduction methods in removing the gauge dependence, expressions are obtained here for the operator ordering terms which show clearly their dependence on the geometry of the principal fibre bundle structure. (author)

  12. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    Science.gov (United States)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that was previously modeled to demonstrate the strengths of the method.
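
    In the same spirit, the sketch below fits a discrete ARX time-series model to synthetic input-output data by least squares; this shows the general mechanics of time-series identification only, not the paper's multi-channel method or its validity tests.

```python
# Hedged sketch: least-squares fit of a discrete ARX time-series model
# y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j] + e[k], the kind of
# input-output model used to identify operator dynamics. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.normal(size=N)                       # tracking-error input
y = np.zeros(N)
for k in range(2, N):                        # "true" operator dynamics
    y[k] = 1.2*y[k-1] - 0.5*y[k-2] + 0.8*u[k-1] + 0.1*rng.normal()

na, nb = 2, 2                                # assumed model orders
rows = []
for k in range(max(na, nb), N):
    rows.append(np.concatenate((y[k-na:k][::-1], u[k-nb:k][::-1])))
Phi = np.array(rows)
Y = y[max(na, nb):]

theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print("estimated [a1 a2 b1 b2]:", np.round(theta, 3))  # ~[1.2 -0.5 0.8 0]
```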

  13. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    Science.gov (United States)

    Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas

    2018-01-01

    The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. RF tended to over
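
    As a small illustration of methods (1), (5) and (6) above, the sketch below fits a lasso (which prunes a large covariate set) and a random forest, then combines them by inverse-MSE weighted model averaging on synthetic data; it is not the study's pipeline (no grouped lasso, kriging, or boosting).

```python
# Hedged sketch: lasso covariate selection vs. random forest, combined by
# validation-weighted model averaging. Data are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=300, n_informative=10,
                       noise=5.0, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

lasso = LassoCV(cv=5).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

pred = {"lasso": lasso.predict(X_va), "rf": rf.predict(X_va)}
mse = {m: mean_squared_error(y_va, p) for m, p in pred.items()}

# weight each model by inverse validation MSE (one simple MA scheme)
w = {m: 1 / e for m, e in mse.items()}
total = sum(w.values())
ma = sum(w[m] / total * pred[m] for m in pred)

print("covariates kept by lasso:", int((lasso.coef_ != 0).sum()), "of 300")
for m, e in mse.items():
    print(f"{m}: MSE = {e:.1f}")
print("model average MSE:", round(mean_squared_error(y_va, ma), 1))
```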

  14. An Effect of the Co-Operative Network Model for Students' Quality in Thai Primary Schools

    Science.gov (United States)

    Khanthaphum, Udomsin; Tesaputa, Kowat; Weangsamoot, Visoot

    2016-01-01

    This research aimed: 1) to study the current and desirable states of the co-operative network in developing the learners' quality in Thai primary schools, 2) to develop a model of the co-operative network in developing the learners' quality, and 3) to examine the results of implementation of the co-operative network model in the primary school.…

  15. Model validity and frequency band selection in operational modal analysis

    Science.gov (United States)

    Au, Siu-Kui

    2016-12-01

    Experimental modal analysis aims at identifying the modal properties (e.g., natural frequencies, damping ratios, mode shapes) of a structure using vibration measurements. Two basic questions are encountered when operating in the frequency domain: Is there a mode near a particular frequency? If so, how much spectral data near the frequency can be included for modal identification without incurring significant modeling error? For data with high signal-to-noise (s/n) ratios these questions can be addressed using empirical tools such as the singular value spectrum. Otherwise they are generally open and can be challenging, e.g., for modes with low s/n ratios or close modes. In this work these questions are addressed using a Bayesian approach. The focus is on operational modal analysis, i.e., with 'output-only' ambient data, where identification uncertainty and modeling error can be significant and their control is most demanding. The approach leads to 'evidence ratios' quantifying the relative plausibility of competing sets of modeling assumptions. The latter involves modeling the 'what-if-not' situation, which is non-trivial but is resolved by systematic consideration of alternative models and use of the maximum entropy principle. Synthetic and field data are considered to investigate the behavior of evidence ratios and how they should be interpreted in practical applications.
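
    For orientation, the evidence ratio between two competing sets of modeling assumptions M1 and M2, given spectral data D, takes the standard Bayesian form (a generic statement, not the paper's specific derivation):

```latex
\frac{P(M_1 \mid D)}{P(M_2 \mid D)}
  = \frac{P(D \mid M_1)}{P(D \mid M_2)} \times \frac{P(M_1)}{P(M_2)}
```

    The first factor on the right is the evidence (Bayes) ratio; with equal prior plausibilities it alone quantifies the relative plausibility of the two sets of assumptions, e.g., "a mode is present in the band" versus the what-if-not alternative.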

  16. Island operation - modelling of a small hydro power system

    Energy Technology Data Exchange (ETDEWEB)

    Skarp, Stefan

    2000-02-01

    Simulation is a useful tool for investigating a system's behaviour. It is a way to examine operating situations without having to perform them in reality. If, for example, someone wants to test an operating situation in which the system might be destroyed, a computer simulation can be both cheaper and safer than performing the test in reality. This master's thesis performs and analyses a simulation modelling an electric power system. The system consists of a small hydro power station, a wood-refining industry, and interconnecting power system components. In the simulated situation the system works in so-called island operation. The thesis aims at making a capacity analysis of the current system. Above all, the goal is to find restrictions on the consumer's load power profile under given circumstances. The computer software used in the simulations is Matlab and its add-on PSB (Power System Blockset). The work has been carried out in co-operation with the power supplier Skellefteaa Kraft, where the problem formulation of this master's thesis originated.

  17. Knowledge model of trainee for training support system of plant operation

    Energy Technology Data Exchange (ETDEWEB)

    Furuhama, Yutaka; Furuta, Kazuo; Kondo, Shunsuke [Tokyo Univ. (Japan). Faculty of Engineering

    1996-10-01

    We have previously proposed a knowledge model of a trainee, which consists of two layers: hierarchical function and qualitative structure. We developed a method to generate normative operator knowledge based on this knowledge model structure, and to identify the trainee's intention by means of truth maintenance. The methods were tested in a cognitive experiment using a prototype of the training support system. (author)

  18. Operator models for delivering municipal solid waste management services in developing countries. Part A: The evidence base.

    Science.gov (United States)

    Wilson, David C; Kanjogera, Jennifer Bangirana; Soós, Reka; Briciu, Cosmin; Smith, Stephen R; Whiteman, Andrew D; Spies, Sandra; Oelz, Barbara

    2017-08-01

    This article presents the evidence base for 'operator models' - that is, how to deliver a sustainable service through the interaction of the 'client', 'revenue collector' and 'operator' functions - for municipal solid waste management in emerging and developing countries. The companion article addresses a selection of locally appropriate operator models. The evidence shows that no 'standard' operator model is effective in all developing countries and circumstances. Each city uses a mix of different operator models; 134 cases showed on average 2.5 models per city, each applying to different elements of municipal solid waste management - that is, street sweeping, primary collection, secondary collection, transfer, recycling, resource recovery and disposal or a combination. Operator models were analysed in detail for 28 case studies; the article summarises evidence across all elements and in more detail for waste collection. Operators fall into three main groups: the public sector, the formal private sector, and micro-service providers including micro-, community-based and informal enterprises. Micro-service providers emerge as a common group; they are effective in expanding primary collection service coverage into poor or peri-urban neighbourhoods and in delivering recycling. Both public and private sector operators can deliver effective services in the appropriate situation; what matters more is a strong client organisation responsible for municipal solid waste management within the municipality, with stable political and financial backing and the capacity to manage service delivery. Revenue collection is also integral to operator models: generally the municipality pays the operator from direct charges and/or indirect taxes, rather than the operator collecting fees directly from the service user.

  19. Modeling Methodologies for Representing Urban Cultural Geographies in Stability Operations

    National Research Council Canada - National Science Library

    Ferris, Todd P

    2008-01-01

    ... 2.0.0, in an effort to provide modeling methodologies for a single simulation tool capable of exploring the complex world of urban cultural geographies undergoing Stability Operations in an irregular warfare (IW) environment...

  20. Modeling the Operation of a Platoon of Amphibious Vehicles for Support of Operational Test and Evaluation (OT&E)

    National Research Council Canada - National Science Library

    Gaver, Donald

    2001-01-01

    ...) of the Marine Corps' prospective Advanced Amphibious Assault Vehicle (AAAV). The model's emphasis is on suitability issues such as Operational Availability in an on-land (after ocean transit) mission region...

  1. Stochastic Modelling of Linear Programming Application to Brewing Operational Systems

    Directory of Open Access Journals (Sweden)

    Akanbi O.P.

    2014-07-01

    In a system where a large number of interrelated operations exists, a technically based operational mechanism is required to achieve its full potential. An intuitive solution, which is common practice in most breweries, may not uncover the optimal solution, as there is hardly any guarantee that it satisfies the best policy. Procurement of imported raw materials involves considerable foreign exchange and thus increases the cost of production, while available locally sourced raw materials are abandoned or poorly utilized. This study focuses on approaches that highlight the steps and mechanisms involved in optimizing the wort extract by the use of different types of adjuncts, and in formulating wort production models that are useful in proffering the expected solutions. Optimization techniques, the generalized models and an overview of typical brewing processes are considered.
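    As an illustration of the kind of linear program involved (a minimal sketch with invented numbers, not the study's actual model), one can maximize total wort extract over a malt/adjunct grist subject to a budget and a minimum-malt constraint:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: tonnes of extract per tonne of malt and two local adjuncts
extract = np.array([0.78, 0.70, 0.65])
cost = np.array([900.0, 400.0, 350.0])     # currency units per tonne
budget = 50_000.0

# Maximise extract  <=>  minimise -extract . x
# Constraints: cost . x <= budget, and malt >= 60% of the grist,
# i.e. -0.4*x0 + 0.6*x1 + 0.6*x2 <= 0
res = linprog(c=-extract,
              A_ub=[cost, [-0.4, 0.6, 0.6]],
              b_ub=[budget, 0.0],
              bounds=[(0, None)] * 3)
print(res.x, -res.fun)   # optimal grist and total extract
```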

  2. Data Envelopment Analysis (DEA) Model in Operation Management

    Science.gov (United States)

    Malik, Meilisa; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    Quality management is an effective system in operations management for developing, maintaining, and improving quality across groups of companies, allowing marketing, production, and service at the most economical level while ensuring customer satisfaction. Many companies practice quality management to improve their business performance. One form of performance measurement is the measurement of efficiency, and one of the tools that can be used to assess the efficiency of company performance is Data Envelopment Analysis (DEA). The aim of this paper is to use Data Envelopment Analysis (DEA) models to assess the efficiency of quality management. The paper explains the CCR, BCC, and SBM models for assessing the efficiency of quality management.
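    For concreteness, the input-oriented CCR envelopment model can be posed as a small linear program per decision-making unit (DMU); a sketch with SciPy, where the example inputs and outputs are invented:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: inputs (m x n DMUs), Y: outputs (s x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]            # minimise theta
    A_in = np.c_[-X[:, [o]], X]            # X @ lam <= theta * x_o
    A_out = np.c_[np.zeros((s, 1)), -Y]    # Y @ lam >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                         # theta in (0, 1]; 1.0 = efficient

X = np.array([[2.0, 4.0, 8.0],             # e.g. staff per unit
              [3.0, 2.0, 1.0]])            # e.g. rework hours
Y = np.array([[1.0, 1.0, 1.0]])            # e.g. conforming output
print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])
```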

  3. Conformal operator product expansion in the Yukawa model

    International Nuclear Information System (INIS)

    Prati, M.C.

    1983-01-01

    Conformal techniques are applied to the Yukawa model, as an example of a theory with spinor fields. The partial-wave analysis of the 4-point function of two scalars and two spinors in the channel φψ → φψ is written in terms of spinor tensor representations of the conformal group. Using this conformal expansion, the Bethe-Salpeter equation is diagonalized and reduced to algebraic relations among the partial waves. It is shown that in the γ5-invariant model, but not in the general case, the vacuum operator product ⟨φψ⟩ can be derived dynamically from the expansions of the 4-point function.

  4. Phased mission modelling of systems with maintenance-free operating periods using simulated Petri nets

    Energy Technology Data Exchange (ETDEWEB)

    Chew, S.P.; Dunnett, S.J. [Department of Aeronautical and Automotive Engineering, Loughborough University, Loughborough, Leics (United Kingdom); Andrews, J.D. [Department of Aeronautical and Automotive Engineering, Loughborough University, Loughborough, Leics (United Kingdom)], E-mail: j.d.andrews@lboro.ac.uk

    2008-07-15

    A common scenario in engineering is that of a system which operates throughout several sequential and distinct periods of time, during which the modes and consequences of failure differ from one another. This type of operation is known as a phased mission, and for the mission to be a success the system must successfully operate throughout all of the phases. Examples include a rocket launch and an aeroplane flight. Component or sub-system failures may occur at any time during the mission, yet not affect the system performance until the phase in which their condition is critical. This may mean that the transition from one phase to the next is a critical event that leads to phase and mission failure, with the root cause being a component failure in a previous phase. A series of phased missions with no maintenance may be considered as a maintenance-free operating period (MFOP). This paper describes the use of a Petri net (PN) to model the reliability of the MFOP and phased missions scenario. The model uses Monte-Carlo simulation to obtain its results, and due to the modelling power of PNs, can consider complexities such as component failure rate interdependencies and mission abandonment. The model operates three different types of PN which interact to provide the overall system reliability modelling. The model is demonstrated and validated by considering two simple examples that can be solved analytically.
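    The phase-dependent criticality described here is easy to reproduce with a plain Monte-Carlo sketch (rates and criticality table invented; the paper's Petri-net machinery, failure-rate interdependencies and mission abandonment are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
phase_len = np.array([2.0, 5.0, 1.0])          # hours per phase
rates = np.array([1e-3, 5e-4, 2e-3])           # component failure rates (1/h)
critical = np.array([[1, 1, 0],                # row = phase, col = component:
                     [1, 0, 1],                # is the component critical
                     [1, 1, 1]], dtype=bool)   # in this phase?

def mission_reliability(n_trials=200_000):
    t_fail = rng.exponential(1.0 / rates, size=(n_trials, rates.size))
    ok = np.ones(n_trials, dtype=bool)
    for p, t_end in enumerate(np.cumsum(phase_len)):
        # A component that failed in ANY earlier phase causes loss as soon
        # as a phase in which it is critical is reached -- the 'root cause
        # in a previous phase' effect described in the abstract.
        ok &= ~((t_fail <= t_end) & critical[p]).any(axis=1)
    return ok.mean()

print(mission_reliability())
```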

  6. Operator models for delivering municipal solid waste management services in developing countries: Part B: Decision support.

    Science.gov (United States)

    Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard

    2017-08-01

    This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models (COMs); decision support tools here are a detailed catalogue of the COMs, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular COM known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.

  7. The master T-operator for the Gaudin model and the KP hierarchy

    International Nuclear Information System (INIS)

    Alexandrov, Alexander; Leurent, Sebastien; Tsuboi, Zengo; Zabrodin, Anton

    2014-01-01

    Following the approach of [1], we construct the master T-operator for the quantum Gaudin model with twisted boundary conditions and show that it satisfies the bilinear identity and Hirota equations for the classical KP hierarchy. We also characterize the class of solutions to the KP hierarchy that correspond to eigenvalues of the master T-operator and study dynamics of their zeros as functions of the spectral parameter. This implies a remarkable connection between the quantum Gaudin model and the classical Calogero–Moser system of particles

  8. An improved cellular automata model for train operation simulation with dynamic acceleration

    Science.gov (United States)

    Li, Wen-Jun; Nie, Lei

    2018-03-01

    Urban rail transit plays an important role in urban public transport because of its fast speed, large transport capacity, high safety, reliability and low pollution. This study proposes an improved cellular automaton (CA) model that accounts for the dynamic characteristics of train acceleration in order to analyze energy consumption and train running time. Constructing an effective model for calculating energy consumption to aid train operation improvement is the basis for studying and analyzing energy-saving measures in urban rail transit system operation.
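    A toy single-train update illustrates what 'dynamic acceleration' changes relative to a standard CA rule (all numbers invented; the paper's model, track layout and energy bookkeeping are richer):

```python
import numpy as np

V_MAX = 8          # maximum speed, in cells per time step

def dyn_acc(v):
    # dynamic acceleration: large at low speed, tapering off near V_MAX,
    # instead of the classical CA rule v -> v + 1
    return max(1, int(np.ceil((V_MAX - v) / 3)))

def step(pos, v, v_limit):
    v = min(v + dyn_acc(v), V_MAX, v_limit)    # accelerate, then cap
    return pos + v, v

pos, v, energy = 0, 0, 0.0
for t in range(20):
    v_old = v
    pos, v = step(pos, v, v_limit=6)           # e.g. a 6-cell/step speed limit
    energy += max(v**2 - v_old**2, 0) / 2.0    # kinetic-energy proxy
print(pos, v, energy)
```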

  9. Design, Operation and Control Modelling of SOFC/GT Hybrid Systems

    OpenAIRE

    Stiller, Christoph

    2006-01-01

    This thesis focuses on modelling-based design, operation and control of solid oxide fuel cell (SOFC) and gas turbine (GT) hybrid systems. Fuel cells are a promising approach to high-efficiency power generation, as they directly convert chemical energy to electric work. High-temperature fuel cells such as the SOFC can be integrated in gas turbine processes, which further increases the electrical efficiency to values up to 70%. However, there are a number of obstacles for safe operation of such...

  10. Updating of states in operational hydrological models

    Science.gov (United States)

    Bruland, O.; Kolberg, S.; Engeland, K.; Gragne, A. S.; Liston, G.; Sand, K.; Tøfte, L.; Alfredsen, K.

    2012-04-01

    Operationally, the main purpose of hydrological models is to provide runoff forecasts. The quality of the model state and the accuracy of the weather forecast, together with the model quality, define the runoff forecast quality. Input and model errors accumulate over time and may leave the model in a poor state. Usually model states can be related to observable conditions in the catchment, and updating these states, knowing their relation to observable catchment conditions, directly influences forecast quality. Norway is internationally at the forefront of hydropower scheduling on both short and long terms, and inflow forecasts are fundamental to this scheduling. Their quality directly influences the producers' profit as they optimize hydropower production to market demand while minimizing spill of water and maximizing available hydraulic head. The quality of the inflow forecasts strongly depends on the quality of the models applied and of the information they use. In this project the focus has been on improving the quality of the model states on which the forecast is based. Runoff and snow storage are two observable quantities that reflect the model state and are used in this project for updating. Generally the methods used can be divided into three groups: the first re-estimates the forcing data in the updating period; the second alters the weights in the forecast ensemble; and the third directly changes the model states. The uncertainty in the forcing data through the updating period is due both to uncertainty in the actual observations and to how well the gauging stations represent the catchment with respect to temperature and precipitation. The project looks at methodologies that automatically re-estimate the forcing data and tests the results against the observed response. Model uncertainty is reflected in a joint distribution of model parameters estimated using the DREAM algorithm.

  11. A review of operational, regional-scale, chemical weather forecasting models in Europe

    Directory of Open Access Journals (Sweden)

    J. Kukkonen

    2012-01-01

    Numerical models that combine weather forecasting and atmospheric chemistry are here referred to as chemical weather forecasting models. Eighteen operational chemical weather forecasting models on regional and continental scales in Europe are described and compared in this article. Topics discussed in this article include how weather forecasting and atmospheric chemistry models are integrated into chemical weather forecasting systems, how physical processes are incorporated into the models through parameterization schemes, how the model architecture affects the predicted variables, and how air chemistry and aerosol processes are formulated. In addition, we discuss sensitivity analysis and evaluation of the models, user operational requirements, such as model availability and documentation, and output availability and dissemination. In this manner, this article allows for the evaluation of the relative strengths and weaknesses of the various modelling systems and modelling approaches. Finally, this article highlights the most prominent gaps of knowledge for chemical weather forecasting models and suggests potential priorities for future research directions, for the following selected focus areas: emission inventories, the integration of numerical weather prediction and atmospheric chemical transport models, boundary conditions and nesting of models, data assimilation of the various chemical species, improved understanding and parameterization of physical processes, better evaluation of models against data and the construction of model ensembles.

  12. Safe design and operation of fluidized-bed reactors: Choice between reactor models

    NARCIS (Netherlands)

    Westerink, E.J.; Westerterp, K.R.

    1990-01-01

    For three different catalytic fluidized bed reactor models, two models presented by Werther and a model presented by van Deemter, the region of safe and unique operation for a chosen reaction system was investigated. Three reaction systems were used: the oxidation of benzene to maleic anhydride, the

  13. Twist operator correlation functions in O(n) loop models

    International Nuclear Information System (INIS)

    Simmons, Jacob J H; Cardy, John

    2009-01-01

    Using conformal field theoretic methods we calculate correlation functions of geometric observables in the loop representation of the O(n) model at the critical point. We focus on correlation functions containing twist operators, combining these with anchored loops, boundaries with SLE processes and with double SLE processes. We focus further upon n = 0, representing self-avoiding loops, which corresponds to a logarithmic conformal field theory (LCFT) with c = 0. In this limit the twist operator plays the role of a 0-weight indicator operator, which we verify by comparison with known examples. Using the additional conditions imposed by the twist operator null states, we derive a new explicit result for the probabilities that an SLE_{8/3} winds in various ways about two points in the upper half-plane, e.g. that the SLE passes to the left of both points. The collection of c = 0 logarithmic CFT operators that we use in deriving the winding probabilities is novel, highlighting a potential incompatibility caused by the presence of two distinct logarithmic partners to the stress tensor within the theory. We argue that both partners do appear in the theory, one in the bulk and one on the boundary, and that the incompatibility is resolved by restrictive bulk-boundary fusion rules.

  14. A simple rule based model for scheduling farm management operations in SWAT

    Science.gov (United States)

    Schürz, Christoph; Mehdi, Bano; Schulz, Karsten

    2016-04-01

    For many interdisciplinary questions at the watershed scale, the Soil and Water Assessment Tool (SWAT; Arnold et al., 1998) has become an accepted and widely used tool. Despite its flexibility, the model is highly demanding when it comes to input data. At SWAT's core, the water balance and the modeled nutrient cycles are plant-growth driven (implemented with the EPIC crop growth model). Therefore, land use and crop data with high spatial and thematic resolution, as well as detailed information on cultivation and farm management practices, are required. For many applications of the model, however, these data are unavailable. In order to meet these requirements, SWAT offers the option to trigger scheduled farm management operations by applying the Potential Heat Unit (PHU) concept. The PHU concept takes into account only the accumulation of daily mean temperature for management scheduling. Hence, it contradicts several farming strategies that occur in reality: i) planting and harvesting dates are set much too early or too late, as the PHU concept is strongly sensitive to inter-annual temperature fluctuations; ii) fertilizer application in SWAT often occurs simultaneously on the same date in each field; iii) and it can also coincide with precipitation events. The latter two in particular can lead to strong peaks in modeled nutrient loads. To cope with these shortcomings we propose a simple rule-based model (RBM) to schedule management operations according to realistic farmer management practices in SWAT; a minimal sketch of the idea follows below. The RBM involves simple strategies requiring only data that are initially input into the SWAT model, such as temperature and precipitation data. The user provides the boundaries of the time periods in which operation schedules may take place for all crops in the model; these data are readily available from the literature or from crop variety trials. The RBM applies the dates by complying with the following rules: i) Operations scheduled in the
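    One plausible reading of such a rule set (window bounds, a temperature threshold and a no-rain rule; the thresholds here are invented, not the authors'):

```python
import numpy as np

def schedule_operation(doy, tmean, precip, window, t_min=10.0, rain_max=1.0):
    """Return the first day-of-year inside `window` that is warm enough
    and not a rain day; fall back to the window's end otherwise.
    doy, tmean [deg C] and precip [mm] are aligned daily arrays."""
    lo, hi = window
    for d, t, p in zip(doy, tmean, precip):
        if lo <= d <= hi and t >= t_min and p <= rain_max:
            return d
    return hi

rng = np.random.default_rng(0)
doy = np.arange(90, 131)                        # synthetic spring weather
tmean = 8 + 8 * rng.random(doy.size)
precip = rng.exponential(1.5, doy.size)
print(schedule_operation(doy, tmean, precip, window=(100, 125)))
```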

  15. Operational experience with model-based steering in the SLC linac

    International Nuclear Information System (INIS)

    Thompson, K.A.; Himel, T.; Moore, S.; Sanchez-Chopitea, L.; Shoaee, H.

    1989-03-01

    Operational experience with model-driven steering in the linac of the Stanford Linear Collider is discussed. Important issues include two-beam steering, sensitivity of algorithms to faulty components, sources of disagreement with the model, and the effects of the finite resolution of beam position monitors. Methods developed to make the steering algorithms more robust in the presence of such complications are also presented. 5 refs., 1 fig

  16. Analysis of operational events by ATHEANA framework for human factor modelling

    International Nuclear Information System (INIS)

    Bedreaga, Luminita; Constntinescu, Cristina; Doca, Cezar; Guzun, Basarab

    2007-01-01

    In the area of human reliability assessment, experts recognise that current methods have not correctly represented the role of humans in preventing, initiating and mitigating accidents in nuclear power plants. This deficiency arises because the current methods used in modelling the human factor have not taken into account human performance and reliability as actually observed in operational events. ATHEANA - A Technique for Human Error ANAlysis - is a new methodology for human analysis that incorporates the specific data of operational events as well as psychological models of human behaviour. The method includes new elements such as unsafe actions and error mechanisms. In this paper we present the application of the ATHEANA framework to the analysis of operational events that occurred in different nuclear power plants during 1979-2002. The analysis of operational events consisted of: - identification of the unsafe actions; - placing the unsafe actions into a category, omission or commission; - establishing the type of error corresponding to the unsafe action: slip, lapse, mistake or circumvention; - establishing the influence of performance shaping factors and some corrective actions. (authors)

  17. The development of human behavior analysis techniques - A study on knowledge representation methods for operator cognitive model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Woon; Park, Young Tack [Soongsil University, Seoul (Korea, Republic of)

    1996-07-01

    The main objective of this project is the modeling of the human operator in the main control room of a nuclear power plant. For this purpose, we carried out research on knowledge representation and inference methods based on Rasmussen's decision ladder structure, and we have developed SACOM (Simulation Analyzer with a Cognitive Operator Model) using the G2 shell on Sun workstations. SACOM consists of an Operator Model, an Interaction Analyzer, and a Situation Generator. The cognitive model aims to model human operators in more detail in an effective way, and SACOM is designed to model the knowledge-based behavior of human operators more easily. The following are the main research topics carried out this year. First, in order to model the knowledge-based behavior of human operators, more detailed scenarios were constructed, and knowledge representation and inference methods were developed to support them. Second, meta-knowledge structures were studied to support the human operator's four types of diagnosis. This work includes a study of meta and scheduler knowledge structures for generate-and-test, topographic, decision tree and case-based approaches. Third, domain knowledge structures were improved to support meta-knowledge; in particular, domain knowledge structures were developed to model topographic diagnosis. Fourth, a more applicable interaction analyzer and situation generator were designed and implemented. The new version is implemented in G2 on Sun workstations. 35 refs., 49 figs. (author)

  18. Modeling and operation optimization of a proton exchange membrane fuel cell system for maximum efficiency

    International Nuclear Information System (INIS)

    Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock

    2016-01-01

    Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operating conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful in substantially increasing the efficiency of the fuel cell system.
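    In outline, the optimization amounts to maximizing an empirical efficiency model over bounded operating variables. A toy sketch with a made-up quadratic surrogate (the paper's actual models are neural-network and semi-empirical component models):

```python
from scipy.optimize import minimize

def efficiency(x):
    # Hypothetical surrogate: efficiency vs cathode stoichiometry s
    # and stack temperature T (deg C), peaking near s = 2, T = 65
    s, T = x
    return 0.52 - 0.015 * (s - 2.0) ** 2 - 1.2e-4 * (T - 65.0) ** 2

res = minimize(lambda x: -efficiency(x), x0=[1.5, 55.0],
               bounds=[(1.2, 3.0), (50.0, 80.0)])   # operating constraints
print(res.x, -res.fun)    # optimal (s, T) and the efficiency there
```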

  19. A Three-Phase Microgrid Restoration Model Considering Unbalanced Operation of Distributed Generation

    International Nuclear Information System (INIS)

    Wang, Zeyu; Wang, Jianhui; Chen, Chen

    2016-01-01

    Recent severe outages highlight the urgency of improving grid resiliency in the U.S. Microgrid formation schemes are proposed to restore critical loads after outages occur. Most distribution networks have unbalanced configurations that are not represented in sufficient detail by single-phase models. This study provides a microgrid formation plan that adopts a three-phase network model to represent unbalanced distribution networks. The problem formulation has a quadratic objective function with mixed-integer linear constraints. The three-phase network model enables us to examine the three-phase power outputs of distributed generators (DGs), preventing unbalanced operation that might trip DGs. Because the DG unbalanced operation constraint is non-convex, an iterative process is presented that checks whether the unbalanced operation limits for DGs are satisfied after each iteration of optimization. We also develop a relatively conservative linear approximation of the unbalanced operation constraint to handle larger networks. Compared with the iterative solution process, the conservative linear approximation is able to accelerate the solution process at the cost of sacrificing optimality to a limited extent. Simulations on the IEEE 34-node and IEEE 123-node test feeders indicate that the proposed method yields more practical microgrid formation results. In addition, this paper explores the coordinated operation of DGs and energy storage (ES) installations: the unbalanced three-phase outputs of ESs combined with the relatively balanced outputs of DGs can supply unbalanced loads. The case study also validates the DG-ES coordination.
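    The unbalanced-operation check at the heart of the iterative process can be pictured as follows (the 10% figure and the deviation-from-mean definition are assumptions for illustration; the paper's exact constraint may differ):

```python
import numpy as np

def dg_unbalance_ok(p_abc, limit=0.10):
    """True if no phase output of a DG deviates from the mean
    phase output by more than `limit` (fractional)."""
    p = np.asarray(p_abc, dtype=float)
    return bool(np.all(np.abs(p - p.mean()) <= limit * p.mean()))

print(dg_unbalance_ok([100.0, 95.0, 103.0]))   # True: within 10%
print(dg_unbalance_ok([100.0, 60.0, 140.0]))   # False: would trip the DG
```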

  20. Fires involving radioactive materials : transference model; operative recommendations

    International Nuclear Information System (INIS)

    Rodriguez, C.E.; Puntarulo, L.J.; Canibano, J.A.

    1988-01-01

    In all aspects related to nuclear activity, the occurrence of an explosion, fire or burst-type accident, with or without victims, is directly related to the characteristics of the site. The present work analyses the different parameters involved, describing a transference model and recommendations for the evaluation and control of the radiological risk to firemen. Special emphasis is placed on the measurement of the variables present in this kind of operation.

  1. Expert System Models for Forecasting Forklifts Engagement in a Warehouse Loading Operation: A Case Study

    Directory of Open Access Journals (Sweden)

    Dejan Mirčetić

    2016-08-01

    The paper focuses on the problem of forklift engagement in warehouse loading operations. Two expert system (ES) models are created using several machine learning (ML) models; the models try to mimic expert decisions when determining the forklift engagement in a loading operation. Different ML models are evaluated, and an adaptive neuro-fuzzy inference system (ANFIS) and classification and regression trees (CART) are chosen as the ones which showed the best results for the research purpose. As a case study, the central warehouse of a beverage company was used. In a beverage distribution chain, the proper engagement of forklifts in the loading operation is crucial for maintaining the defined customer service level. The created ES models represent a new approach to the rationalization of forklift usage, particularly for solving the problem of forklift engagement in cargo loading. They are a simple, easy to understand, reliable, and practically applicable tool for deciding on the engagement of forklifts in a loading operation.
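    The CART part of such an ES can be sketched in a few lines (the features, data and target counts below are invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical features: pallets to load, trucks waiting, orders per hour
X = np.array([[20, 1, 10], [80, 3, 35], [45, 2, 20],
              [120, 4, 50], [60, 2, 25], [15, 1, 8]])
y = np.array([1, 3, 2, 4, 2, 1])    # forklifts the expert actually engaged

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(tree.predict([[70, 3, 30]]))  # mimic the expert for a new situation
```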

  2. Modeling of human operator dynamics in simple manual control utilizing time series analysis. [tracking (position)

    Science.gov (United States)

    Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter-constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second-order dynamic system in both pursuit and compensatory tracking modes. In comparing data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.
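    A second-order model of the kind identified here can be fitted by ordinary least squares on lagged input-output data; a compact sketch (the study's sampling and noise-model details are omitted, and the coefficients below are synthetic):

```python
import numpy as np

def fit_arx2(u, y):
    """Least-squares fit of y[k] = a1*y[k-1] + a2*y[k-2]
                                 + b1*u[k-1] + b2*u[k-2]."""
    Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
    theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
    return theta   # [a1, a2, b1, b2]

rng = np.random.default_rng(0)
u = rng.standard_normal(1000)                   # tracking-target input
y = np.zeros(1000)
for k in range(2, 1000):                        # synthetic 'operator'
    y[k] = 1.2*y[k-1] - 0.5*y[k-2] + 0.4*u[k-1] + 0.1*u[k-2]
print(fit_arx2(u, y))                           # recovers the coefficients
```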

  3. A concessionaire model for food and beverage operations in South African National Parks

    Directory of Open Access Journals (Sweden)

    T Taylor

    2014-01-01

    In recent years, protected areas have come under pressure due to government budget cuts. As a result, national parks have had to devise strategies by means of which they can generate additional revenue in order to remain competitive. One such strategy is the introduction of public-private partnerships, which allow the private sector to operate certain lodging facilities, restaurants and shops within parks. SANParks introduced its commercialization strategy in 2000 and overall it has been a success. However, despite the much-needed revenue earned, there are many complaints and overall dissatisfaction from tourists with the restaurant and shop facilities operated by concessionaires in SANParks. A survey capturing more than 5000 questionnaires was conducted to explore SANParks concessions in terms of food and beverages and to identify factors relating to the consumption of food and beverages by tourists. The data were analysed to provide the information needed to construct a model for concessionaire food and beverage operations in SANParks: the data provided a demographic profile of respondents, factor analysis provided food consumption factors, and structural equation modelling provided goodness-of-fit indices for the concessionaire model. The purpose of this study was to construct a model for concessionaire food and beverage operations at SANParks.

  4. Systemic model for the aid for operating of the reactor Siloe

    International Nuclear Information System (INIS)

    Royer, J.C.; Moulin, V.; Monge, F.

    1995-01-01

    The Service of the Reactor Siloe (CEA/DRN/DRE/SRS), fully aware of the abilities and knowledge of its teams in the field of research reactor operation, has undertaken a knowledge engineering project in this domain. The following aims have been defined: capitalization of knowledge about the installation in order to ensure its preservation and exploitation, and elaboration of a project for aiding the reactor operators. This article deals with the different actions taken by the SRS to reach these aims: realization of a technical model of the operation of the Siloe reactor, and development of a knowledge-based system for operating support. These actions, based on a knowledge engineering methodology, SAGACE, and using industrial tools, will lead to an improvement in the safety and operation of the Siloe reactor. (authors). 13 refs., 7 figs.

  5. An operator basis for the Standard Model with an added scalar singlet

    Energy Technology Data Exchange (ETDEWEB)

    Gripaios, Ben [Cavendish Laboratory, J.J. Thomson Avenue, Cambridge (United Kingdom); Sutherland, Dave [Cavendish Laboratory, J.J. Thomson Avenue, Cambridge (United Kingdom); Kavli Institute for Theoretical Physics, UCSB Kohn Hall, Santa Barbara CA (United States)

    2016-08-17

    Motivated by the possible di-gamma resonance at 750 GeV, we present a basis of effective operators for the Standard Model plus a scalar singlet at dimensions 5, 6, and 7. We point out that an earlier list at dimensions 5 and 6 contains two redundant operators at dimension 5.

  6. MATHEMATICAL MODELS OF PROCESSES AND SYSTEMS OF TECHNICAL OPERATION FOR ONBOARD COMPLEXES AND FUNCTIONAL SYSTEMS OF AVIONICS

    Directory of Open Access Journals (Sweden)

    Sergey Viktorovich Kuznetsov

    2017-01-01

    Modern aircraft are equipped with complicated avionics systems and complexes. The technical operation process of an aircraft and its avionics is observed as a process with changing operation states. Mathematical models of avionics technical operation processes and systems are represented as Markov chains, Markov and semi-Markov processes. The purpose is to develop graph-models of avionics technical operation processes, describing their work in flight, as well as during maintenance on the ground, in the various systems of technical operation. Graph-models of the processes and systems of on-board complexes and functional avionics systems in flight are proposed, based on state tables. The models are specified for the various technical operation systems: the system with control of the reliability level, the system with parameter control and the system with resource control. The events which cause the avionics complexes and functional systems to change their technical state are failures and faults of built-in test equipment. The avionics technical operation system with reliability level control is applicable to objects with a constant or slowly varying failure rate. The avionics technical operation system with resource control is mainly used for objects with a failure rate that increases over time. The avionics technical operation system with parameter control is used for objects with a failure rate that increases over time and with generalized parameters that allow forecasting and assigning the boundaries of pre-failure technical states. The proposed formal graphical approach to designing models of avionics complexes and systems is the basis for the construction of models of complex systems and facilities, both for a single aircraft and for an airline aircraft fleet, or even for the entire fleet of some specific aircraft type. The ultimate graph-models for avionics in various systems of technical operation permit the beginning of
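    As a minimal illustration of the Markov machinery behind such graph-models (the states, rates and transitions below are invented, not those of the article), steady-state availability follows directly from the generator matrix:

```python
import numpy as np

# States: 0 operating, 1 in maintenance, 2 latent fault present
lam, rho, mu, delta = 1e-3, 5e-4, 1e-1, 2e-2    # assumed rates (1/h)
Q = np.array([[-(lam + rho), lam,    rho   ],   # failure / latent fault
              [ mu,          -mu,    0.0   ],   # repair completes
              [ 0.0,         delta, -delta ]])  # built-in test detects fault

# Solve pi @ Q = 0 with sum(pi) = 1 for the stationary distribution
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi, "availability ~", pi[0])
```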

  7. NOAA Operational Model Archive Distribution System (NOMADS): High Availability Applications for Reliable Real Time Access to Operational Model Data

    Science.gov (United States)

    Alpert, J. C.; Wang, J.

    2009-12-01

    To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert environmental prediction services and represent a critical national resource to operational and research communities affected by climate, weather and water. NOMADS is now delivering high-availability services as part of NOAA's official real-time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The client executes what is efficient to execute on the client, and the server efficiently provides format-independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. This paradigm lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions, so that collections of data in disparate places can be accessed simultaneously, with results processed on servers and clients to produce the needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real-time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks including
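    From the client side, the OPeNDAP/GDS pattern described here boils down to opening a remote dataset lazily and pulling only the needed slab. A sketch (the URL and variable names are placeholders for illustration, not a real NOMADS endpoint):

```python
import xarray as xr

# Placeholder GDS/OPeNDAP endpoint; real NOMADS URLs differ by model and date
url = "https://nomads.example.gov/dods/gfs/gfs20240101/gfs_00z"

ds = xr.open_dataset(url)        # only metadata crosses the network here
t2m = (ds["tmp2m"]
       .sel(lat=slice(25, 50), lon=slice(235, 290))   # area sub-setting
       .isel(time=0))
data = t2m.load()                # the server sends just this subset
```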

  8. A hybrid spatiotemporal drought forecasting model for operational use

    Science.gov (United States)

    Vasiliades, L.; Loukas, A.

    2010-09-01

    Drought forecasting plays an important role in the planning and management of natural resources and water resource systems in a river basin. Early and timely forecasting of a drought event can help in taking proactive measures and setting out drought mitigation strategies to alleviate the impacts of drought. Spatiotemporal data mining is the extraction of unknown and implicit knowledge, structures, spatiotemporal relationships, or patterns not explicitly stored in spatiotemporal databases. As one such data mining technique, forecasting is widely used to predict the unknown future based upon the patterns hidden in current and past data. This study develops a hybrid spatiotemporal scheme for integrated spatial and temporal forecasting. Temporal forecasting is achieved using feed-forward neural networks, and the temporal forecasts are extended to the spatial dimension using a spatial recurrent neural network model. The methodology is demonstrated for an operational meteorological drought index, the Standardized Precipitation Index (SPI), calculated at multiple timescales. 48 precipitation stations and 18 independent precipitation stations, located in the Pinios river basin in the Thessaly region, Greece, were used for the development and spatiotemporal validation of the hybrid scheme. Several quantitative temporal and spatial statistical indices were considered for the performance evaluation of the models, and qualitative statistical criteria based on contingency tables between observed and forecasted drought episodes were calculated. The results show that the lead time of forecasting for operational use depends on the SPI timescale. The hybrid spatiotemporal drought forecasting model could be used operationally for forecasting up to three months ahead for short SPI timescales (e.g. 3-6 months) and up to six months ahead for large SPI timescales (e.g. 24 months). These findings could be useful in developing a drought preparedness plan in the region.
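    For reference, the SPI itself is computed by fitting a gamma distribution to aggregated precipitation and transforming to standard-normal deviates. A common simplified recipe (zero-rain handling reduced to its essentials; not the study's forecasting models):

```python
import numpy as np
from scipy.stats import gamma, norm

def spi(precip_sums):
    """SPI for one calendar month and timescale: precip_sums holds the
    n-month precipitation totals for that month across the years."""
    x = np.asarray(precip_sums, dtype=float)
    q = np.mean(x == 0)                          # probability of zero rain
    a, loc, scale = gamma.fit(x[x > 0], floc=0)  # fit the wet part
    cdf = q + (1 - q) * gamma.cdf(x, a, loc=loc, scale=scale)
    return norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

print(spi([30.0, 55.0, 12.0, 80.0, 41.0, 0.0, 66.0, 25.0]))
```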

  9. A Scalable Version of the Navy Operational Global Atmospheric Prediction System Spectral Forecast Model

    Directory of Open Access Journals (Sweden)

    Thomas E. Rosmond

    2000-01-01

    The Navy Operational Global Atmospheric Prediction System (NOGAPS) includes a state-of-the-art spectral forecast model similar to models run at several major operational numerical weather prediction (NWP) centers around the world. The model, developed by the Naval Research Laboratory (NRL) in Monterey, California, has run operationally at the Fleet Numerical Meteorological and Oceanographic Center (FNMOC) since 1982, and most recently has been run on a Cray C90 in a multi-tasked configuration. Typically the multi-tasked code runs on 10 to 15 processors with an overall parallel efficiency of about 90%. The operational resolution is T159L30, but other operational and research applications run at significantly lower resolutions. A scalable NOGAPS forecast model has been developed by NRL in anticipation of an FNMOC C90 replacement in about 2001, as well as for current NOGAPS research requirements to run on DOD High-Performance Computing (HPC) scalable systems. The model is designed to run with message passing (MPI). Model design criteria include bit reproducibility for different processor numbers and reasonably efficient performance on fully shared memory, distributed memory, and distributed shared memory systems for a wide range of model resolutions. Results for a wide range of processor numbers, model resolutions, and different vendor architectures are presented. Single-node performance has been disappointing on RISC-based systems, at least compared to vector processor performance. This is a common complaint, and will require careful re-examination of traditional NWP model software design and data organization to fully exploit future scalable architectures.

  10. Modeling and validating the grabbing forces of hydraulic log grapples used in forest operations

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux; Lihai Wang

    2003-01-01

    The grabbing forces of log grapples were modeled and analyzed mathematically under operating conditions when grabbing logs from compact log piles and from bunch-like log piles. The grabbing forces are closely related to the structural parameters of the grapple, the weight of the grapple, and the weight of the log grabbed. An operational model grapple was designed and...

  11. Numerical modelling of multi-vane expander operating conditions in ORC system

    Science.gov (United States)

    Rak, Józef; Błasiak, Przemysław; Kolasiński, Piotr

    2017-11-01

    Multi-vane expanders are positive displacement volumetric machines which are nowadays considered for application in micro-power domestic ORC systems as a promising alternative to micro-turbines and other volumetric expanders. The multi-vane expander features a very simple design, low gas flow capacity, low expansion ratios, and an advantageous ratio of power output to external dimensions, and it is insensitive to the negative influence of gas-liquid mixture expansion. Moreover, the multi-vane expander can be easily hermetically sealed, which is one of the key issues in ORC system design. A literature review indicates that issues concerning the application of multi-vane expanders in such systems, especially those related to operating the multi-vane expander with different low-boiling working fluids, are innovative, not fully scientifically described, and have potential for practical implementation. In this paper the results of numerical investigations of multi-vane expander operating conditions are presented. The analyses were performed on a three-dimensional numerical model of the expander in ANSYS CFX software. The numerical model was validated using data obtained from an experiment carried out on a lab test stand. Then a series of computational analyses were performed using the expander's numerical model in order to determine its operating conditions under various flow conditions for different working fluids.

  12. The systems integration operations/logistics model as a decision-support tool

    International Nuclear Information System (INIS)

    Miller, C.; Vogel, L.W.; Joy, D.S.

    1989-01-01

    Congress has enacted legislation specifying Yucca Mountain, Nevada, for characterization as the candidate site for the disposal of spent fuel and high-level wastes and has authorized a monitored retrievable storage (MRS) facility if one is warranted. Nevertheless, the exact configuration of the facilities making up the Federal Waste Management System (FWMS) was not specified. This has left the Office of Civilian Radioactive Waste Management (OCRWM) with the responsibility for assuring the design of a safe and reliable disposal system. In order to assist in the analysis of potential configuration alternatives, operating strategies, and other factors for the FWMS and its various elements, a decision-support tool known as the systems integration operations/logistics model (SOLMOD) was developed. SOLMOD is a discrete event simulation model that emulates the movement and interaction of equipment and radioactive waste as they are processed through the FWMS - from pickup at reactor pools to emplacement. The model can be used to measure the impacts of different operating schedules and rules, system configurations, and equipment and other resource availabilities on the performance of the processes comprising the FWMS, and how these factors combine to determine overall system performance. SOLMOD can assist in identifying bottlenecks and can be used to assess capacity utilization of specific equipment and staff as well as overall system resilience.
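    The discrete-event style SOLMOD embodies is easy to convey with a small queueing sketch (SimPy; the casks, the single crane and the 6-hour service time are invented for illustration):

```python
import simpy

def shipment(env, name, crane):
    with crane.request() as req:        # queue for the cask-handling crane
        yield req
        yield env.timeout(6.0)          # hours to unload (assumed)
        print(f"{name} emplaced at t = {env.now:.1f} h")

env = simpy.Environment()
crane = simpy.Resource(env, capacity=1) # the bottleneck resource
for i in range(4):
    env.process(shipment(env, f"cask-{i}", crane))
env.run()
```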

  13. Renormalization Group Evolution of the Standard Model Dimension Six Operators I: Formalism and lambda Dependence

    CERN Document Server

    Jenkins, Elizabeth E; Trott, Michael

    2013-01-01

    We calculate the order λ, λ² and λy² terms of the 59 × 59 one-loop anomalous dimension matrix of dimension-six operators, where λ and y are the Standard Model Higgs self-coupling and a generic Yukawa coupling, respectively. The dimension-six operators modify the running of the Standard Model parameters themselves, and we compute the complete one-loop result for this. We discuss how there is mixing between operators for which no direct one-particle-irreducible diagram exists, due to operator replacements by the equations of motion.

  14. Advanced autonomous model-based operation of industrial process systems (Autoprofit) : technological developments and future perspectives

    NARCIS (Netherlands)

    Ozkan, L.; Bombois, X.J.A.; Ludlage, J.H.A.; Rojas, C.R.; Hjalmarsson, H.; Moden, P.E.; Lundh, M.; Backx, A.C.P.M.; Van den Hof, P.M.J.

    2016-01-01

    Model-based operation support technology such as Model Predictive Control (MPC) is a proven and accepted technology for multivariable and constrained large scale control problems in process industry. Despite the growing number of successful implementations, the low level of operational efficiency of

  15. A survey on the technologies and cases for the cognitive models of nuclear power plant operators

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Chun, Se Woo; Seo, Sang Moon; Lee, Hyun Chul

    1993-04-01

    To enhance the safety and availability of nuclear power plants, it is necessary to develop methodologies which can systematically analyze the interrelationships between plant operators and the main process systems. Operator cognitive models provide an explicit method to analyze how an operator's cognitive behavior reacts to system changes. However, because no adequate model has been developed up to now, it is difficult to take an effective approach to the review, assessment and improvement of human factors. In this study, we have surveyed the techniques and cases of operator model development, aiming to develop an operator model as one of the human engineering application methodologies. We have analyzed the cognitive characteristics of decision-making, which is one of the principal factors in modeling, and reviewed the methodologies and implementation techniques used in existing model developments. We investigated the trends in model development by reviewing ten cases, in particular the CES, INTEROPS and COSIMO models, which have been developed or are under development in the nuclear field. We also summarized the cognitive characteristics to be considered when modeling an operator's decision-making. Regarding modeling methodologies, we found a trend toward software simulations based on artificial intelligence technologies, especially knowledge representation methods. Based on the results of our survey, we proposed a development approach and several urgent research subjects. We suggested the development of simulation tools which can be applied to the review, assessment and improvement of human factors, by implementing them as software using expert system development tools. The results of this study have been applied to our long-term project named 'The Development of Human Engineering Technologies.' (Author)

  16. QEDMOD: Fortran program for calculating the model Lamb-shift operator

    Science.gov (United States)

    Shabaev, V. M.; Tupitsyn, I. I.; Yerokhin, V. A.

    2018-02-01

    We present the Fortran package QEDMOD for computing the model QED operator h_QED that can be used to account for the Lamb shift in accurate atomic-structure calculations. The package routines calculate the matrix elements of h_QED with the user-specified one-electron wave functions. The operator can be used to calculate the Lamb shift in many-electron atomic systems with a typical accuracy of a few percent, either by evaluating the matrix element of h_QED with the many-electron wave function, or by adding h_QED to the Dirac-Coulomb-Breit Hamiltonian.

  17. Modeling of biomass-to-energy supply chain operations: Applications, challenges and research directions

    International Nuclear Information System (INIS)

    Mafakheri, Fereshteh; Nasiri, Fuzhan

    2014-01-01

    Reducing dependency on fossil fuels and mitigating their environmental impacts are among the most promising aspects of utilizing renewable energy sources. The availability of various biomass resources has made biomass an appealing source of renewable energy. Given the variability of biomass supply and sources, supply chains play an important role in the efficient provisioning of biomass resources for energy production. This paper provides a comprehensive review and classification of the existing literature on modeling of biomass supply chain operations, linking it to the wider strategic challenges and issues in the design, planning and management of biomass supply chains. On that basis, we present an analysis of the existing gaps and potential future directions for research in modeling of biomass supply chain operations. - Highlights: • An extensive review of biomass supply chain operations management models in the literature is provided. • The models are classified in line with biomass supply chain activities from harvesting to conversion. • The issues surrounding biomass supply chains are investigated, manifesting the need for novel modeling approaches. • Our gap analysis identifies a number of existing shortcomings and opportunities for future research

  18. Modelling Vessel Traffic Service to understand resilience in everyday operations

    International Nuclear Information System (INIS)

    Praetorius, Gesa; Hollnagel, Erik; Dahlman, Joakim

    2015-01-01

    Vessel Traffic Service (VTS) is a service to promote traffic fluency and safety at the entrance to ports. The purpose of this article is to explore the everyday operations of the VTS system to gain insight into how it contributes to safe and efficient traffic movements. Interviews, focus groups and an observation were conducted to collect data about everyday operations, as well as to grasp how the VTS system adapts to changing operational conditions. The results show that work within the VTS domain is highly complex and that the two systems modelled realise their services vastly differently, which in turn affects the systems' ability to monitor, respond and anticipate. This is of great importance to consider whenever changes are planned and implemented within the VTS domain. Only if everyday operations are properly analysed and understood can it be estimated how alterations to technology and organisation will affect overall system performance.

  19. Architecture-based Model for Preventive and Operative Crisis Management

    National Research Council Canada - National Science Library

    Jungert, Erland; Derefeldt, Gunilla; Hallberg, Jonas; Hallberg, Niklas; Hunstad, Amund; Thuren, Ronny

    2004-01-01

    .... A system that should support activities of this type must not only have a high capacity, with respect to the dataflow, but also have suitable tools for decision support. To overcome these problems, an architecture for preventive and operative crisis management is proposed. The architecture is based on models for command and control, but also for risk analysis.

  20. A simple operational gas release and swelling model. Pt. 1

    International Nuclear Information System (INIS)

    Wood, M.H.; Matthews, J.R.

    1980-01-01

    A new and simple model of fission gas release and swelling has been developed for oxide nuclear fuel under operational conditions. The model, which is to be incorporated into a fuel element behaviour code, is physically based and applicable to fuel at both thermal and fast reactor ratings. In this paper we present the part of the model describing the behaviour of intragranular gas; a future paper will detail the treatment of grain boundary gas. The results of model calculations are compared with recent experimental observations of intragranular bubble concentrations and sizes, and of gas release from fuel irradiated under isothermal conditions. Good agreement is found between experiment and theory. (orig.)