WorldWideScience

Sample records for operator lasso model

  1. Using Multivariate Regression Model with Least Absolute Shrinkage and Selection Operator (LASSO) to Predict the Incidence of Xerostomia after Intensity-Modulated Radiotherapy for Head and Neck Cancer

    Science.gov (United States)

    Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Wu, Jia-Ming; Wang, Hung-Yu; Horng, Mong-Fong; Chang, Chun-Ming; Lan, Jen-Hong; Huang, Ya-Yu; Fang, Fu-Min; Leung, Stephen Wan

    2014-01-01

    Purpose The aim of this study was to develop a multivariate logistic regression model with the least absolute shrinkage and selection operator (LASSO) to make valid predictions about the incidence of moderate-to-severe patient-rated xerostomia among head and neck cancer (HNC) patients treated with IMRT. Methods and Materials Quality-of-life questionnaire datasets from 206 patients with HNC were analyzed. The European Organization for Research and Treatment of Cancer QLQ-H&N35 and QLQ-C30 questionnaires were used for endpoint evaluation. The primary endpoint (grade 3+ xerostomia) was defined as moderate-to-severe xerostomia at 3 (XER3m) and 12 months (XER12m) after the completion of IMRT. Normal tissue complication probability (NTCP) models were developed. The optimal and suboptimal numbers of prognostic factors for a multivariate logistic regression model were determined using LASSO with a bootstrapping technique. Statistical analysis was performed using the scaled Brier score, Nagelkerke R2, chi-squared test, Omnibus test, Hosmer-Lemeshow test, and the AUC. Results Eight prognostic factors were selected by LASSO for the 3-month time point: Dmean-c, Dmean-i, age, financial status, T stage, AJCC stage, smoking, and education. Nine prognostic factors were selected for the 12-month time point: Dmean-i, education, Dmean-c, smoking, T stage, baseline xerostomia, alcohol abuse, family history, and node classification. In the selection of the suboptimal number of prognostic factors by LASSO, three suboptimal prognostic factors were fine-tuned by the Hosmer-Lemeshow test and the AUC, i.e., Dmean-c, Dmean-i, and age for the 3-month time point. Five suboptimal prognostic factors were also selected for the 12-month time point, i.e., Dmean-i, education, Dmean-c, smoking, and T stage. The overall performance for both time points of the NTCP model in terms of scaled Brier score, Omnibus, and Nagelkerke R2 was satisfactory and corresponded well with the expected values. Conclusions ...
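
The core of the approach above, an L1-penalized logistic regression whose selections are stabilized by bootstrapping, can be sketched as follows. All data here are synthetic, and the feature names are stand-ins for the study's prognostic factors; this is not the published model.

```python
# Sketch of LASSO-penalized logistic regression with bootstrap
# selection frequencies, loosely following the approach described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 206
names = ["Dmean_c", "Dmean_i", "age", "financial", "T_stage",
         "AJCC", "smoking", "education"]
X = rng.normal(size=(n, len(names)))
# only the first three factors actually drive the synthetic outcome
logit = 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.4 * X[:, 2] - 0.5
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# L1 penalty performs shrinkage and variable selection in one step
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Bootstrap: refit on resampled data and count how often each
# coefficient survives the penalty (its selection frequency)
counts = np.zeros(len(names))
for _ in range(200):
    idx = rng.integers(0, n, n)
    model.fit(X[idx], y[idx])
    counts += (model.coef_[0] != 0)

for name, c in sorted(zip(names, counts), key=lambda t: -t[1]):
    print(f"{name:10s} selected in {c / 200:.0%} of bootstraps")
```

Factors with high selection frequency across bootstrap resamples are the candidates a LASSO-with-bootstrapping procedure would retain.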

  2. Building interpretable predictive models for pediatric hospital readmission using Tree-Lasso logistic regression.

    Science.gov (United States)

    Jovanovic, Milos; Radovanovic, Sandro; Vukicevic, Milan; Van Poucke, Sven; Delibasic, Boris

    2016-09-01

    Quantification and early identification of unplanned readmission risk have the potential to improve the quality of care during hospitalization and after discharge. However, the high dimensionality, sparsity, and class imbalance of electronic health data, together with the complexity of risk quantification, challenge the development of accurate predictive models. Predictive models require a certain level of interpretability in order to be applicable in real settings and create actionable insights. This paper aims to develop accurate and interpretable predictive models for readmission in a general pediatric patient population by integrating a data-driven model (sparse logistic regression) and domain knowledge based on the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM) hierarchy of diseases. Additionally, we propose a way to quantify the interpretability of a model and inspect the stability of alternative solutions. The analysis was conducted on >66,000 pediatric hospital discharge records from the California State Inpatient Databases, Healthcare Cost and Utilization Project, between 2009 and 2011. We incorporated domain knowledge based on the ICD-9-CM hierarchy in a data-driven, Tree-Lasso regularized logistic regression model, providing the framework for model interpretation. This approach was compared with traditional Lasso logistic regression, resulting in models that are easier to interpret, using fewer high-level diagnoses, with comparable prediction accuracy. The results revealed that the Tree-Lasso model was as competitive in terms of accuracy (measured by the area under the receiver operating characteristic curve, AUC) as the traditional Lasso logistic regression, but integration with the ICD-9-CM hierarchy of diseases provided models that are more interpretable in terms of high-level diagnoses. Additionally, the interpretations of the models are in accordance with the existing medical understanding of pediatric readmission. Best performing models have ...

  3. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.
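
The workflow described above, cross-validate the penalty, then bootstrap the fitted coefficients to obtain confidence intervals, can be sketched on synthetic data. The paper's vector bootstrap, designed for correlated observations, is more involved; this sketch uses the ordinary pairs bootstrap.

```python
# Percentile-bootstrap confidence intervals for Lasso coefficients,
# with the penalty chosen once by K-fold cross-validation.
import numpy as np
from sklearn.linear_model import Lasso, LassoCV

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

# Step 1: pick the penalty by K-fold cross-validation
alpha = LassoCV(cv=5).fit(X, y).alpha_

# Step 2: percentile bootstrap of the coefficient vector
B = 200
boot = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)

# A variable is "selected" if its interval excludes zero
selected = (lo > 0) | (hi < 0)
print(selected)
```

Basing selection on the bootstrap interval rather than on a single cross-validated fit is what improves the precision of the selections.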

  4. Implementations of geographically weighted lasso in spatial data with multicollinearity (Case study: Poverty modeling of Java Island)

    Science.gov (United States)

    Setiyorini, Anis; Suprijadi, Jadi; Handoko, Budhi

    2017-03-01

    Geographically Weighted Regression (GWR) is a regression model that takes into account spatial heterogeneity. In applications of GWR, inference on the regression coefficients is often of interest, as is estimation and prediction of the response variable. Empirical research has demonstrated that local correlation between explanatory variables can lead to estimated regression coefficients in GWR that are strongly correlated, a condition named multicollinearity. This results in large standard errors on the estimated regression coefficients and is hence problematic for inference on relationships between variables. Geographically Weighted Lasso (GWL) is a method capable of dealing with both spatial heterogeneity and local multicollinearity in spatial data sets. GWL is a further development of the GWR method that adds a LASSO (Least Absolute Shrinkage and Selection Operator) constraint to the parameter estimation. In this study, GWL is applied with a fixed exponential kernel weights matrix to establish a poverty model of Java Island, Indonesia. The results of applying GWL to the poverty datasets show that this method stabilizes the regression coefficients in the presence of multicollinearity and produces lower prediction and estimation error of the response variable than GWR does.
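
A minimal sketch of the GWL idea: at each focal location, observations are weighted by a fixed exponential kernel of distance and a Lasso is fit with those weights. Coordinates, bandwidth, and data below are synthetic stand-ins, not the Java Island poverty data.

```python
# Geographically weighted Lasso sketch with a fixed exponential kernel.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, p, bandwidth = 120, 6, 1.0
coords = rng.uniform(0, 5, size=(n, 2))
X = rng.normal(size=(n, p))
# the coefficient on x0 drifts smoothly over space (spatial heterogeneity)
beta0 = 1.0 + coords[:, 0] / 5.0
y = beta0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=n)

def gwl_fit(i):
    """Fit a Lasso at focal point i with exponential kernel weights."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-d / bandwidth)          # fixed exponential kernel
    return Lasso(alpha=0.05).fit(X, y, sample_weight=w).coef_

local = np.array([gwl_fit(i) for i in range(n)])
print(local[:, 0].min(), local[:, 0].max())
```

The recovered local coefficient on x0 varies across locations, tracking the spatial drift that a single global Lasso would average away.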

  5. ORACLE INEQUALITIES FOR THE LASSO IN THE COX MODEL.

    Science.gov (United States)

    Huang, Jian; Sun, Tingni; Ying, Zhiliang; Yu, Yi; Zhang, Cun-Hui

    2013-06-01

    We study the absolute penalized maximum partial likelihood estimator in sparse, high-dimensional Cox proportional hazards regression models where the number of time-dependent covariates can be larger than the sample size. We establish oracle inequalities based on natural extensions of the compatibility and cone invertibility factors of the Hessian matrix at the true regression coefficients. Similar results based on an extension of the restricted eigenvalue can also be proved by our method. However, the presented oracle inequalities are sharper since the compatibility and cone invertibility factors are always greater than the corresponding restricted eigenvalue. In the Cox regression model, the Hessian matrix is based on time-dependent covariates in censored risk sets, so that the compatibility and cone invertibility factors, and the restricted eigenvalue as well, are random variables even when they are evaluated for the Hessian at the true regression coefficients. Under mild conditions, we prove that these quantities are bounded from below by positive constants for time-dependent covariates, including cases where the number of covariates is of greater order than the sample size. Consequently, the compatibility and cone invertibility factors can be treated as positive constants in our oracle inequalities.
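
The estimator under study is the Lasso-penalized partial likelihood; a sketch of the usual formulation (not necessarily the paper's exact notation) is:

```latex
\hat{\beta} = \arg\min_{\beta}\Big\{-\tfrac{1}{n}\,\ell_n(\beta) + \lambda\|\beta\|_1\Big\},
\qquad
\ell_n(\beta) = \sum_{i:\,\delta_i = 1}\Big[\beta^\top Z_i(t_i)
  - \log\!\sum_{j \in R(t_i)} \exp\big(\beta^\top Z_j(t_i)\big)\Big],
```

where \(\delta_i\) is the event indicator, \(Z_i(t)\) the time-dependent covariates, and \(R(t_i)\) the risk set at time \(t_i\). The Hessian of \(-\ell_n\) is the matrix whose compatibility and cone invertibility factors enter the oracle inequalities above.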

  6. Genetic risk prediction using a spatial autoregressive model with adaptive lasso.

    Science.gov (United States)

    Wen, Yalu; Shen, Xiaoxi; Lu, Qing

    2018-05-31

    With rapidly evolving high-throughput technologies, studies are being initiated to accelerate the process toward precision medicine. The collection of vast amounts of sequencing data provides us with great opportunities to systematically study the role of a deep catalog of sequencing variants in risk prediction. Nevertheless, the massive amount of noise signals and the low frequencies of rare variants in sequencing data pose great analytical challenges for risk prediction modeling. Motivated by developments in spatial statistics, we propose a spatial autoregressive model with adaptive lasso (SARAL) for risk prediction modeling using high-dimensional sequencing data. The SARAL is a set-based approach, and thus it reduces the data dimension and accumulates genetic effects within a single-nucleotide variant (SNV) set. Moreover, it allows different SNV sets to have various magnitudes and directions of effect sizes, which reflects the nature of complex diseases. With the adaptive lasso implemented, SARAL can shrink the effects of noise SNV sets to zero and thus further improve prediction accuracy. Through simulation studies, we demonstrate that, overall, SARAL is comparable to, if not better than, the genomic best linear unbiased prediction method. The method is further illustrated by an application to the sequencing data from the Alzheimer's Disease Neuroimaging Initiative. Copyright © 2018 John Wiley & Sons, Ltd.

  7. Introduction to the LASSO

    Indian Academy of Sciences (India)

    the LASSO method as a constrained quadratic programming problem, and ... solve the LASSO problem. We also ... The problem (2) is equivalent to the best subset selection. ... erator (LASSO), which is based on the following key concepts: ...
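
The constrained quadratic program alluded to in the snippet, and its equivalent penalized (Lagrangian) form, are the standard LASSO formulations:

```latex
\min_{\beta}\ \|y - X\beta\|_2^2 \quad \text{subject to} \quad \sum_{j=1}^{p}|\beta_j| \le t
\qquad\Longleftrightarrow\qquad
\min_{\beta}\ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1,
```

with a one-to-one correspondence between the constraint level \(t\) and the penalty \(\lambda\) along the solution path.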

  8. Spatio Temporal EEG Source Imaging with the Hierarchical Bayesian Elastic Net and Elitist Lasso Models.

    Science.gov (United States)

    Paz-Linares, Deirel; Vega-Hernández, Mayrim; Rojas-López, Pedro A; Valdés-Hernández, Pedro A; Martínez-Montes, Eduardo; Valdés-Sosa, Pedro A

    2017-01-01

    The estimation of EEG generating sources constitutes an Inverse Problem (IP) in Neuroscience. This problem is ill-posed due to the non-uniqueness of the solution, and regularization or prior information is needed to undertake Electrophysiology Source Imaging. Structured Sparsity priors can be attained through combinations of L1-norm-based and L2-norm-based constraints, such as the Elastic Net (ENET) and Elitist Lasso (ELASSO) models. The former model is used to find solutions with a small number of smooth nonzero patches, while the latter imposes different degrees of sparsity simultaneously along different dimensions of the spatio-temporal matrix solutions. Both models have been addressed within the penalized regression approach, where the regularization parameters are selected heuristically, usually leading to non-optimal and computationally expensive solutions. The existing Bayesian formulation of ENET allows hyperparameter learning, but relies on the computationally intensive Monte Carlo/Expectation Maximization methods, which makes its application to the EEG IP impractical, while ELASSO has not previously been considered in a Bayesian context. In this work, we attempt to solve the EEG IP using a Bayesian framework for the ENET and ELASSO models. We propose a Structured Sparse Bayesian Learning algorithm that combines Empirical Bayes and iterative coordinate descent procedures to estimate both the parameters and hyperparameters. Using realistic simulations and avoiding the inverse crime, we illustrate that our methods recover complicated source setups more accurately, with a more robust estimation of the hyperparameters and better behavior under different sparsity scenarios, than classical LORETA, ENET, and LASSO Fusion solutions. We also solve the EEG IP using data from a visual attention experiment, finding more interpretable neurophysiological patterns with our methods. The Matlab codes used in this work, including Simulations, Methods ...
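
The Elastic Net's L1+L2 mixture, and why it is suited to the "small number of smooth nonzero patches" mentioned above, can be illustrated on synthetic data. This toy uses sklearn rather than the paper's Bayesian formulation; the correlated columns play the role of a spatial patch.

```python
# Lasso vs. Elastic Net on a block of strongly correlated features.
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

rng = np.random.default_rng(3)
n, p = 80, 40
X = rng.normal(size=(n, p))
# a "smooth nonzero patch": three nearly identical active features
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=n)
X[:, 2] = X[:, 0] + 0.05 * rng.normal(size=n)
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=n)

lasso = Lasso(alpha=0.1, max_iter=10_000).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=10_000).fit(X, y)

# Lasso tends to pick one representative from a correlated patch;
# the L2 term of the Elastic Net spreads weight across the whole patch.
print((lasso.coef_[:3] != 0).sum(), (enet.coef_[:3] != 0).sum())
```

The grouping behavior of the L2 term is what lets ENET-type priors recover patches rather than isolated sources.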

  9. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
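
The spike-and-slab mixture double-exponential prior described above typically takes the following form (a sketch of the usual spike-and-slab lasso prior; the paper's exact hyperparameterization may differ):

```latex
\beta_j \mid \gamma_j \;\sim\; (1-\gamma_j)\,\mathrm{DE}(0, s_0) \;+\; \gamma_j\,\mathrm{DE}(0, s_1),
\qquad
\mathrm{DE}(\beta; 0, s) = \frac{1}{2s}\, e^{-|\beta|/s},
\qquad s_0 \ll s_1,
```

so that irrelevant coefficients fall under the sharp spike \(\mathrm{DE}(0, s_0)\) and are shrunk strongly, while large coefficients fall under the diffuse slab \(\mathrm{DE}(0, s_1)\) and are shrunk only weakly.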

  10. Recommendations for the Implementation of the LASSO Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [National University of Defense Technology, China; Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Zhijin [California Inst. of Technology (CalTech), La Canada Flintridge, CA (United States). Jet Propulsion Lab.; University of California, Los Angeles; Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, Heng [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-11-15

    The U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability, envisioned in the ARM Decadal Vision (U.S. Department of Energy 2014), subsequently has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project, and it has an initial focus of shallow convection at the ARM Southern Great Plains (SGP) atmospheric observatory. This report documents the recommendations resulting from the pilot project to be considered by ARM for implementation into routine operations. During the pilot phase, LASSO has evolved from the initial vision outlined in the pilot project white paper (Gustafson and Vogelmann 2015) to what is recommended in this report. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso. Feedback regarding LASSO and the recommendations in this report can be directed to William Gustafson, the project principal investigator (PI), and Andrew Vogelmann, the co-principal investigator (Co-PI), via lasso@arm.gov.

  11. Consistent and Conservative Model Selection with the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    2016-01-01

    We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...

  12. Statistically Modeling I-V Characteristics of CNT-FET with LASSO

    Science.gov (United States)

    Ma, Dongsheng; Ye, Zuochang; Wang, Yan

    2017-08-01

    With the advent of the Internet of Things (IoT), the need to study new materials and devices for various applications is increasing. Traditionally, we build compact models for transistors on the basis of physics, but physical models are expensive to develop and need a very long time to adjust for non-ideal effects. When the envisioned application of a novel device is not yet certain, or the manufacturing process is not mature, deriving generalized, accurate physical models for such devices is very strenuous, whereas statistical modeling is becoming a viable alternative because it is data oriented and fast to implement. In this paper, one classical statistical regression method, LASSO, is used to model the I-V characteristics of a CNT-FET, and a pseudo-PMOS inverter simulation based on the trained model is implemented in Cadence. The normalized relative mean square prediction error of the trained model versus experimental sample data, together with the simulation results, shows that the model is acceptable for digital circuit static simulation, and such a modeling methodology can be extended to other devices.
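
Data-driven I-V modeling in the spirit described above can be sketched as a polynomial basis expansion in (Vgs, Vds) from which LASSO keeps only the useful terms. The "device" below is a synthetic square-law-like surrogate, not measured CNT-FET data, and the basis and penalty are illustrative choices.

```python
# LASSO over a polynomial basis as a statistical I-V surrogate model.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
n = 400
vgs = rng.uniform(0.0, 1.2, n)
vds = rng.uniform(0.0, 1.2, n)
V = np.column_stack([vgs, vds])
# toy square-law-like drain current (in mA) plus measurement noise
i_d = np.maximum(vgs - 0.3, 0.0) ** 2 * np.tanh(3.0 * vds)
i_d += 0.001 * rng.normal(size=n)

# degree-5 polynomial basis in (Vgs, Vds); L1 penalty prunes the terms
model = make_pipeline(PolynomialFeatures(degree=5),
                      Lasso(alpha=1e-4, max_iter=200_000))
model.fit(V, i_d)

nmse = np.mean((model.predict(V) - i_d) ** 2) / np.var(i_d)
nz = int((model.named_steps["lasso"].coef_ != 0).sum())
print(f"normalized MSE: {nmse:.4f}, {nz} of 21 basis terms kept")
```

The normalized mean-square error plays the role of the acceptance criterion mentioned in the abstract: the fitted surrogate can then be exported for circuit-level simulation.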

  13. Geographically weighted lasso (GWL) study for modeling the diarrheic to achieve open defecation free (ODF) target

    Science.gov (United States)

    Arumsari, Nurvita; Sutidjo, S. U.; Brodjol; Soedjono, Eddy S.

    2014-03-01

    Diarrhea is one of the main causes of morbidity and mortality among children around the world, especially in developing countries. Available data indicate that sanitary and healthy lifestyle practices among inhabitants are still inadequate. Environmental deficiencies and the limited availability of health services are suspected factors influencing the occurrence of diarrhea cases and the heightened percentage of diarrheic patients. This research aims at modeling the diarrheic using the Geographically Weighted Lasso (GWL) method. Spatial heterogeneity, tested with the Breusch-Pagan test, indicated that diarrheic modeling with weighted regression, in particular GWR and GWL, can explain the variation in each location. However, because there was no multicollinearity among the predictor variables affecting the diarrheic, the GWR and GWL models turned out to be essentially identical, as shown by the resulting MSE values; the R2 value, which is usually higher for the GWL model, indicated significant predictor variables selected through parameter shrinkage.

  14. Efficient methods for overlapping group lasso.

    Science.gov (United States)

    Yuan, Lei; Liu, Jun; Ye, Jieping

    2013-09-01

    The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the l(q) norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
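
For non-overlapping groups, the proximal operator discussed above has a closed form, block soft-thresholding, sketched below; the paper's contribution is computing this operator when groups overlap, via a smooth convex dual problem.

```python
# Closed-form proximal operator of the non-overlapping group Lasso
# penalty: block soft-thresholding of each group of coordinates.
import numpy as np

def group_prox(v, groups, lam):
    """prox_{lam * sum_g ||x_g||_2}(v) for disjoint index groups."""
    x = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            x[g] = (1.0 - lam / norm) * v[g]  # shrink the whole block
        # else: the whole group is set exactly to zero
    return x

v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
out = group_prox(v, groups, lam=1.0)
print(out)  # first group shrunk toward 0, second group zeroed out
```

Here the first group has norm 5 > 1 and is shrunk by the factor (1 - 1/5), while the second group's norm falls below the threshold and is zeroed as a block, which is exactly the group-level sparsity the penalty is designed to produce.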

  15. Identifying the Prognosis Factors in Death after Liver Transplantation via Adaptive LASSO in Iran

    Directory of Open Access Journals (Sweden)

    Hadi Raeisi Shahraki

    2016-01-01

    Despite the widespread use of liver transplantation as a routine therapy for liver diseases, the factors affecting its outcomes are still controversial. This study attempted to identify the factors most strongly associated with death after liver transplantation. For this purpose, a modified least absolute shrinkage and selection operator (LASSO), called the Adaptive LASSO, was utilized. One advantage of this method is its ability to handle a large number of factors. Accordingly, in a historical cohort study from 2008 to 2013, the clinical findings of 680 patients undergoing liver transplant surgery were considered. Ridge and Adaptive LASSO regression methods were then implemented to identify the most effective factors on death. To compare the performance of these two models, the receiver operating characteristic (ROC) curve was used. According to the results, 12 factors in Ridge regression and nine in Adaptive LASSO regression were significant. The area under the ROC curve (AUC) of the Adaptive LASSO was 89% (95% CI: 86%–91%), significantly greater than that of Ridge regression (64%; 95% CI: 61%–68%; p<0.001). In conclusion, the significant factors and the performance criteria revealed the superiority of the Adaptive LASSO method, as a penalized model, over the traditional regression model in the present study.
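
The Adaptive LASSO two-step used above can be sketched on synthetic data: a pilot ridge fit supplies per-feature weights, and a weighted L1 fit (implemented here by rescaling the columns) shrinks weakly supported features harder. This is a generic illustration, not the study's clinical model.

```python
# Adaptive LASSO: ridge pilot fit -> data-driven weights -> weighted L1.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(5)
n, p = 300, 15
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)

# Step 1: ridge pilot fit; adaptive weights w_j = 1 / |beta_ridge_j|
pilot = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / np.abs(pilot)

# Step 2: Lasso on rescaled columns X_j / w_j, then undo the scaling,
# which is equivalent to penalizing each |beta_j| by w_j
fit = Lasso(alpha=0.05).fit(X / w, y)
beta = fit.coef_ / w

print(np.flatnonzero(beta))  # indices of the retained factors
```

Because the pilot coefficients of noise features are small, their weights (and hence their effective penalties) are large, which is what gives the Adaptive LASSO its sharper selection compared with ridge regression.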

  16. Description of the LASSO Alpha 2 Release

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [Univ. of California, Los Angeles, CA (United States); Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Z. [Univ. of California, Los Angeles, CA (United States); Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, H. [Univ. of California, Los Angeles, CA (United States)

    2017-09-01

    The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project. The initial focus of LASSO is on shallow convection at the ARM Southern Great Plains (SGP) Climate Research Facility. The availability of LES simulations with concurrent observations will serve many purposes. LES helps bridge the scale gap between DOE ARM observations and models, and the use of routine LES adds value to observations. It provides a self-consistent representation of the atmosphere and a dynamical context for the observations. Further, it elucidates unobservable processes and properties. LASSO will generate a simulation library for researchers that enables statistical approaches beyond a single-case mentality. It will also provide tools necessary for modelers to reproduce the LES and conduct their own sensitivity experiments. Many different uses are envisioned for the combined LASSO LES and observational library. For an observationalist, LASSO can help inform instrument remote sensing retrievals, conduct Observation System Simulation Experiments (OSSEs), and test implications of radar scan strategies or flight paths. For a theoretician, LASSO will help calculate estimates of fluxes and co-variability of values, and test relationships without having to run the model yourself. For a modeler, LASSO will help one know ahead of time which days have good forcing, have co-registered observations at high-resolution scales, and have simulation inputs and corresponding outputs to test parameterizations. Further details on the overall LASSO project are available at https://www.arm.gov/capabilities/modeling/lasso.

  17. EPS-LASSO: Test for High-Dimensional Regression Under Extreme Phenotype Sampling of Continuous Traits.

    Science.gov (United States)

    Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen

    2018-01-25

    Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, although many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, statistical inference and testing for the sparse model under EPS remain open problems. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS, based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO can provide consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO. Supplementary data are available at Bioinformatics online. © The Author (2018). Published by Oxford University Press. All rights reserved.
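
The EPS design itself, keeping only samples in the tails of the trait before fitting an L1 model, can be sketched on synthetic genotypes. The paper's EPS-LASSO additionally provides valid tests via a decorrelated score function; none of that inference is attempted here.

```python
# Extreme phenotype sampling followed by a Lasso fit (toy example).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p = 2000, 50
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)  # genotypes 0/1/2
y = 0.3 * G[:, 0] + 0.3 * G[:, 1] + rng.normal(size=n)

# EPS design: analyze only the lower and upper 10% of the trait
lo, hi = np.quantile(y, [0.1, 0.9])
keep = (y <= lo) | (y >= hi)

full = Lasso(alpha=0.02).fit(G, y).coef_
eps = Lasso(alpha=0.02).fit(G[keep], y[keep]).coef_
print(np.flatnonzero(full)[:5], np.flatnonzero(eps)[:5])
```

With only a fifth of the samples, the extreme-tail subset retains the causal variants because the sampling enriches their signal, which is the power advantage EPS is designed to exploit.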

  18. Description of the LASSO Alpha 1 Release

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson, William I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Vogelmann, Andrew M. [Brookhaven National Lab. (BNL), Upton, NY (United States); Cheng, Xiaoping [Univ. of California, Los Angeles, CA (United States); Endo, Satoshi [Brookhaven National Lab. (BNL), Upton, NY (United States); Krishna, Bhargavi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Zhijin [Univ. of California, Los Angeles, CA (United States); Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Xiao, H [Univ. of California, Los Angeles, CA (United States)

    2017-07-31

    The Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility began a pilot project in May 2015 to design a routine, high-resolution modeling capability to complement ARM’s extensive suite of measurements. This modeling capability has been named the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) project. The availability of LES simulations with concurrent observations will serve many purposes. LES helps bridge the scale gap between DOE ARM observations and models, and the use of routine LES adds value to observations. It provides a self-consistent representation of the atmosphere and a dynamical context for the observations. Further, it elucidates unobservable processes and properties. LASSO will generate a simulation library for researchers that enables statistical approaches beyond a single-case mentality. It will also provide tools necessary for modelers to reproduce the LES and conduct their own sensitivity experiments. Many different uses are envisioned for the combined LASSO LES and observational library. For an observationalist, LASSO can help inform instrument remote-sensing retrievals, conduct Observation System Simulation Experiments (OSSEs), and test implications of radar scan strategies or flight paths. For a theoretician, LASSO will help calculate estimates of fluxes and co-variability of values, and test relationships without having to run the model yourself. For a modeler, LASSO will help one know ahead of time which days have good forcing, have co-registered observations at high-resolution scales, and have simulation inputs and corresponding outputs to test parameterizations. Further details on the overall LASSO project are available at http://www.arm.gov/science/themes/lasso.

  19. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaël

    2017-08-30

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island, Japan. This island is characterized by deep-seated landslides, which we assume can be only partially explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 with a step of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of Elevation and the 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performances. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is here made available for the community to replicate analogous experiments.
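
The slope-unit pipeline described above, summarizing each continuous predictor inside a unit by its quantiles and feeding the enlarged predictor set to an L1-penalized GLM, can be sketched as follows. The units, the two predictors, and the landslide labels are synthetic stand-ins; the paper's analysis is in R (LUDARA), whereas this sketch uses Python.

```python
# Quantile summaries per slope unit -> L1-penalized logistic GLM.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_units, cells = 300, 40
qs = np.arange(0.05, 1.0, 0.05)          # quantiles 0.05 .. 0.95

# per-cell values of two continuous predictors within each slope unit
elev = rng.normal(500.0, 100.0, size=(n_units, cells))
curv = rng.normal(0.0, 1.0, size=(n_units, cells))

# quantile summaries per unit -> 2 * 19 = 38 predictors, standardized
feats = np.hstack([np.quantile(elev, qs, axis=1).T,
                   np.quantile(curv, qs, axis=1).T])
Z = (feats - feats.mean(axis=0)) / feats.std(axis=0)

# synthetic susceptibility driven by the tails named in the abstract:
# the 95% elevation quantile (col 18) and 5% curvature quantile (col 19)
lin = 1.5 * Z[:, 18] - 1.5 * Z[:, 19]
y = (rng.random(n_units) < 1.0 / (1.0 + np.exp(-lin))).astype(int)

# LASSO penalty shrinks the enlarged predictor set back down
glm = LogisticRegression(penalty="l1", solver="liblinear", C=0.2).fit(Z, y)
kept = int((glm.coef_ != 0).sum())
print(f"{kept} of {Z.shape[1]} predictors kept; "
      f"training accuracy {glm.score(Z, y):.2f}")
```

The L1 penalty leaves only a handful of the 38 quantile predictors active, mirroring how the paper shrinks its high-dimensional slope-unit characterization back to an interpretable set of covariates.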

  20. Handling high predictor dimensionality in slope-unit-based landslide susceptibility models through LASSO-penalized Generalized Linear Model

    KAUST Repository

    Camilo, Daniela Castro; Lombardo, Luigi; Mai, Paul Martin; Dou, Jie; Huser, Raphaël

    2017-01-01

    Grid-based landslide susceptibility models at regional scales are computationally demanding when using a fine grid resolution. Conversely, Slope-Unit (SU) based susceptibility models allow the same areas to be investigated while offering two main advantages: 1) a smaller computational burden and 2) a more geomorphologically oriented interpretation. In this contribution, we generate SU-based landslide susceptibility for Sado Island in Japan. This island is characterized by deep-seated landslides, which we assume can be only partly explained by the first two statistical moments (mean and variance) of a set of predictors within each slope unit. As a consequence, in a nested experiment, we first analyse the distributions of a set of continuous predictors within each slope unit, computing the standard deviation and quantiles from 0.05 to 0.95 in steps of 0.05. These are then used as predictors for landslide susceptibility. In addition, we combine shape indices for polygon features and the normalized extent of each class belonging to the outcropping lithology in a given SU. This procedure significantly enlarges the size of the predictor hyperspace, thus producing a high level of slope-unit characterization. In a second step, we adopt a LASSO-penalized Generalized Linear Model to shrink the predictor set back to a sensible and interpretable number, carrying only the most significant covariates in the models. As a result, we are able to document the geomorphic features (e.g., the 95% quantile of Elevation and the 5% quantile of Plan Curvature) that primarily control the SU-based susceptibility within the test area while producing high predictive performance. The implementation of the statistical analyses is included in a parallelized R script (LUDARA), which is made available here for the community to replicate analogous experiments.
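
    Both records above rely on the generic LASSO shrinkage step. As a minimal, self-contained sketch (pure Python, toy data; the function names and the tiny example are illustrative assumptions, not part of the LUDARA script), coordinate descent with soft-thresholding drives irrelevant coefficients exactly to zero:

```python
def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO, minimizing (1/2n)*||y - X beta||^2 + lam*||beta||_1.

    Assumes each column of X is standardized (mean 0, variance 1 with the
    1/n convention) so each coordinate update has the closed form below.
    """
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of predictor j with the partial residual excluding j
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            beta[j] = soft_threshold(rho, lam)
    return beta
```

    With orthogonal standardized predictors, the relevant coefficient is shrunk by exactly lam and the irrelevant one is set to zero, which is the behavior the abstract exploits to prune the predictor hyperspace.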

  1. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces a self-adaptive amount of shrinkage for different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method in several simulated scenarios by varying the overlap among groups, group size, number of non-null groups, and the correlation within groups. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.

  2. Fine-mapping additive and dominant SNP effects using group-LASSO and Fractional Resample Model Averaging

    Science.gov (United States)

    Sabourin, Jeremy; Nobel, Andrew B.; Valdar, William

    2014-01-01

    Genomewide association studies sometimes identify loci at which both the number and identities of the underlying causal variants are ambiguous. In such cases, statistical methods that model effects of multiple SNPs simultaneously can help disentangle the observed patterns of association and provide information about how those SNPs could be prioritized for follow-up studies. Current multi-SNP methods, however, tend to assume that SNP effects are well captured by additive genetics; yet when genetic dominance is present, this assumption translates to reduced power and faulty prioritizations. We describe a statistical procedure for prioritizing SNPs at GWAS loci that efficiently models both additive and dominance effects. Our method, LLARRMA-dawg, combines a group LASSO procedure for sparse modeling of multiple SNP effects with a resampling procedure based on fractional observation weights; it estimates for each SNP the robustness of association with the phenotype both to sampling variation and to competing explanations from other SNPs. In producing a SNP prioritization that best identifies underlying true signals, we show that: our method easily outperforms a single marker analysis; when additive-only signals are present, our joint model for additive and dominance is equivalent to or only slightly less powerful than modeling additive-only effects; and, when dominance signals are present, even in combination with substantial additive effects, our joint model is unequivocally more powerful than a model assuming additivity. We also describe how performance can be improved through calibrated randomized penalization, and discuss how dominance in ungenotyped SNPs can be incorporated through either heterozygote dosage or multiple imputation. PMID:25417853
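
    The fractional-resampling idea can be sketched generically: rather than drawing bootstrap samples, each refit reweights every observation with a smooth random weight, and a SNP's robustness is the fraction of reweighted fits that select it. This is a hedged illustration of the general idea; the exponential weights, the normalization, and the function names below are assumptions, not the exact LLARRMA-dawg scheme:

```python
import random

def fractional_weights(n, rng):
    """Draw one set of fractional observation weights (a smooth analogue of
    bootstrap resampling): i.i.d. exponential draws rescaled to sum to n."""
    w = [rng.expovariate(1.0) for _ in range(n)]
    s = sum(w)
    return [n * wi / s for wi in w]

def selection_stability(select_fn, n, n_resamples, seed=0):
    """Estimate, per predictor, how often it is selected across fractionally
    reweighted fits; select_fn(weights) must return a set of selected indices."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(n_resamples):
        for j in select_fn(fractional_weights(n, rng)):
            counts[j] = counts.get(j, 0) + 1
    return {j: c / n_resamples for j, c in counts.items()}
```

    In the paper's setting, select_fn would be one penalized group-LASSO fit of the reweighted data; SNPs with high stability are prioritized for follow-up.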

  3. EM Adaptive LASSO – A Multilocus Modeling Strategy for Detecting SNPs Associated With Zero-inflated Count Phenotypes

    Directory of Open Access Journals (Sweden)

    Himel Mallick

    2016-03-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts compared to what is expected under standard assumptions. For instance, in rheumatology, data are usually collected from multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon for the phenotypes to contain an enormous number of zeros due to excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data, along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely
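
    For reference, the zero-inflated Poisson distribution at the heart of the ZIP model mixes a point mass at zero with an ordinary Poisson count; a minimal sketch of the standard pmf in pure Python:

```python
import math

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson pmf: with probability pi the count is a
    'structural' zero, otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson
    return (1.0 - pi) * poisson
```

    The excess zeros the abstract describes correspond to the pi component; the EM algorithm alternates between attributing each zero to the structural or the Poisson part and refitting the penalized regression coefficients.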

  4. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    The “least absolute shrinkage and selection operator” (Lasso) method has recently been adapted for network-structured datasets. In particular, this network Lasso method makes it possible to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts from compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee that the network Lasso, for a particular loss function, delivers an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by the network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
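
    The regularizer in question is easy to state concretely: the total variation of a graph signal sums the (weighted) absolute differences across edges, so it vanishes for signals that are constant on connected clusters. A minimal sketch, with an edge-list representation assumed purely for illustration:

```python
def graph_total_variation(edges, x):
    """Total variation of a graph signal x: the sum of w * |x[i] - x[j]|
    over weighted edges (i, j, w); this is the network-Lasso regularizer."""
    return sum(w * abs(x[i] - x[j]) for i, j, w in edges)
```

    Minimizing a loss plus this penalty favors piecewise-constant graph signals, which is what makes recovery from a few sampled nodes possible on well-connected sampling sets.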

  5. Inference in partially identified models with many moment inequalities using Lasso

    DEFF Research Database (Denmark)

    Bugni, Federico A.; Caner, Mehmet; Kock, Anders Bredahl

    This paper considers the problem of inference in a partially identified moment (in)equality model with possibly many moment inequalities. Our contribution is to propose a novel two-step inference method based on the combination of two ideas. On the one hand, our test statistic and critical...

  6. The Bayesian Covariance Lasso.

    Science.gov (United States)

    Khondker, Zakaria S; Zhu, Hongtu; Chu, Haitao; Lin, Weili; Ibrahim, Joseph G

    2013-04-01

    Estimation of sparse covariance matrices and their inverse subject to positive definiteness constraints has drawn a lot of attention in recent years. The abundance of high-dimensional data, where the sample size ( n ) is less than the dimension ( d ), requires shrinkage estimation methods since the maximum likelihood estimator is not positive definite in this case. Furthermore, when n is larger than d but not sufficiently larger, shrinkage estimation is more stable than maximum likelihood as it reduces the condition number of the precision matrix. Frequentist methods have utilized penalized likelihood methods, whereas Bayesian approaches rely on matrix decompositions or Wishart priors for shrinkage. In this paper we propose a new method, called the Bayesian Covariance Lasso (BCLASSO), for the shrinkage estimation of a precision (covariance) matrix. We consider a class of priors for the precision matrix that leads to the popular frequentist penalties as special cases, develop a Bayes estimator for the precision matrix, and propose an efficient sampling scheme that does not precalculate boundaries for positive definiteness. The proposed method is permutation invariant and performs shrinkage and estimation simultaneously for non-full rank data. Simulations show that the proposed BCLASSO performs similarly to frequentist methods for non-full rank data.

  7. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    Science.gov (United States)

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross-validation in order to assess prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
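
    The reported figures can be reproduced from a confusion matrix; a small sketch in pure Python, assuming (as is common in the going-concern literature, though not stated in this record) that a Type I error misclassifies a GCD firm as non-GCD and a Type II error does the reverse:

```python
def classification_report(y_true, y_pred):
    """Accuracy plus Type I / Type II error rates for a binary screen.
    Convention assumed here: label 1 = GCD firm; a Type I error is a GCD
    firm predicted non-GCD, a Type II error is a non-GCD firm predicted GCD."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    acc = (tp + tn) / len(y_true)
    type1 = fn / (tp + fn) if tp + fn else 0.0
    type2 = fp / (tn + fp) if tn + fp else 0.0
    return acc, type1, type2
```

    In fivefold cross-validation these quantities would be computed on each held-out fold and averaged.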

  8. Bayesian LASSO, scale space and decision making in association genetics.

    Science.gov (United States)

    Pasanen, Leena; Holmström, Lasse; Sillanpää, Mikko J

    2015-01-01

    LASSO is a penalized regression method that facilitates model fitting in situations where there are as many, or even more explanatory variables than observations, and only a few variables are relevant in explaining the data. We focus on the Bayesian version of LASSO and consider four problems that need special attention: (i) controlling false positives, (ii) multiple comparisons, (iii) collinearity among explanatory variables, and (iv) the choice of the tuning parameter that controls the amount of shrinkage and the sparsity of the estimates. The particular application considered is association genetics, where LASSO regression can be used to find links between chromosome locations and phenotypic traits in a biological organism. However, the proposed techniques are relevant also in other contexts where LASSO is used for variable selection. We separate the true associations from false positives using the posterior distribution of the effects (regression coefficients) provided by Bayesian LASSO. We propose to solve the multiple comparisons problem by using simultaneous inference based on the joint posterior distribution of the effects. Bayesian LASSO also tends to distribute an effect among collinear variables, making detection of an association difficult. We propose to solve this problem by considering not only individual effects but also their functionals (i.e. sums and differences). Finally, whereas in Bayesian LASSO the tuning parameter is often regarded as a random variable, we adopt a scale space view and consider a whole range of fixed tuning parameters, instead. The effect estimates and the associated inference are considered for all tuning parameters in the selected range and the results are visualized with color maps that provide useful insights into data and the association problem considered. The methods are illustrated using two sets of artificial data and one real data set, all representing typical settings in association genetics.

  9. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the adaptive lasso's performance becomes identical to that of the lasso.
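
    The adaptive-weight construction discussed above can be sketched in two steps: compute weights from an initial fit, then fold them into the predictors so that an ordinary lasso solver realizes the adaptive penalty. The epsilon guard and function names are illustrative assumptions:

```python
def adaptive_weights(beta_init, gamma=1.0, eps=1e-8):
    """Adaptive-lasso weights w_j = 1/|beta_j|^gamma from an initial
    (e.g. least-squares) fit; large initial effects are penalized less.
    eps guards against division by zero for exactly-zero initial estimates."""
    return [1.0 / (abs(b) ** gamma + eps) for b in beta_init]

def rescale_predictors(X, weights):
    """Fold the weights into the design matrix (x_ij / w_j) so that a plain
    LASSO solver applied to the rescaled matrix realizes the adaptive penalty;
    the fitted coefficients must be divided by w_j afterwards."""
    return [[xij / wj for xij, wj in zip(row, weights)] for row in X]
```

    The paper's point concerns the order of operations: normalising the rescaled predictors cancels the adaptive weights, reducing the procedure to the ordinary lasso.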

  10. Pierced Lasso Proteins

    Science.gov (United States)

    Jennings, Patricia

    Entanglement and knots occur naturally; in the microscopic world, knots in DNA and homopolymers are well characterized. The most complex knots are observed in proteins, which are harder to investigate, as proteins are heteropolymers composed of a combination of 20 different amino acids with different individual biophysical properties. As new knotted topologies and new proteins containing knots continue to be discovered and characterized, the investigation of knots in proteins has gained intense interest. Thus far, the principal focus has been on the evolutionary origin of tying a knot, with the questions of how a protein chain `self-ties' into a knot, what mechanism(s) contribute to threading, and the biological relevance and functional implications of a knotted topology in vivo gaining the most insight. Efforts to study the fully untied and unfolded chain indicate that the knot is highly stable, remaining intact in the unfolded state orders of magnitude longer than first anticipated. The persistence of ``stable'' knots in the unfolded state, together with the challenge of distinguishing an unfolded and untied chain from an unfolded and knotted chain, complicates the study of fully untied proteins in vitro. Our discovery of a new class of knotted proteins, the Pierced Lasso (PL) loop topology, simplifies the knotting approach. While PLs are not easily recognizable by the naked eye, they have now been identified in many proteins in the PDB through the use of computational tools. PL topologies are found in diverse proteins from all kingdoms of life, performing a large variety of biological roles such as cell signaling, immune responses, transport, and inhibition (http://lassoprot.cent.uw.edu.pl/). Many of these PL topologies are secreted or extracellular proteins, as well as redox sensors, enzymes, and metal- and co-factor-binding proteins, all of which provide a favorable environment for the formation of the disulphide bridge. In the PL

  11. Performance Analysis of Hospitals Affiliated to Mashhad University of Medical Sciences Using the Pabon Lasso Model: A Six-Year-Trend Study

    Directory of Open Access Journals (Sweden)

    Kalhor

    2016-08-01

    Background: Nowadays, productivity and efficiency are considered a culture and a perspective in both life and work environments. This is the starting point of human development. Objectives: The aim of the present study was to investigate the performance of hospitals affiliated to Mashhad University of Medical Sciences using the Pabon Lasso Model. Methods: The present study was a descriptive-analytic research with a cross-sectional design, conducted over six years (2009-2014) at selected hospitals. The study covered 21 public hospitals affiliated to Mashhad University of Medical Sciences. The data were obtained from the Treatment Deputy of Khorasan Razavi Province. Results: The results showed that only 19% of the studied hospitals were located in zone 3 of the diagram, indicating perfect performance. Twenty-eight percent were in zone 1, 19% in zone 2, and 28% in zone 4. Conclusions: According to the findings, only a few hospitals are in the desirable zone (zone 3); the rest fell in other zones, which could be a result of poor performance and poor management of hospital resources. Most of the hospitals were in zones 1 and 4, characterized by low bed turnover and longer stays, indicating a higher bed supply than demand for healthcare services or longer hospitalization, less outpatient equipment use, and higher costs.
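
    The Pabon Lasso zones follow from two indicators compared against their group means; a small pure-Python sketch of the usual four-zone classification, with the zone numbering assumed to match the convention used in the study:

```python
def pabon_lasso_zone(occupancy, turnover, mean_occupancy, mean_turnover):
    """Assign a hospital to one of the four Pabon Lasso zones by comparing
    its bed occupancy rate and bed turnover with the group averages.
    Zone 3 (high occupancy, high turnover) indicates efficient performance;
    zones 1 and 4 (low turnover) suggest excess bed supply or long stays."""
    high_occ = occupancy >= mean_occupancy
    high_turn = turnover >= mean_turnover
    if not high_occ and not high_turn:
        return 1
    if not high_occ and high_turn:
        return 2
    if high_occ and high_turn:
        return 3
    return 4
```
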

  12. Structural Graphical Lasso for Learning Mouse Brain Connectivity

    KAUST Repository

    Yang, Sen

    2015-08-07

    Investigations into brain connectivity aim to recover networks of brain regions connected by anatomical tracts or by functional associations. The inference of brain networks has recently attracted much interest due to the increasing availability of high-resolution brain imaging data. Sparse inverse covariance estimation with lasso and group lasso penalty has been demonstrated to be a powerful approach to discover brain networks. Motivated by the hierarchical structure of the brain networks, we consider the problem of estimating a graphical model with tree-structural regularization in this paper. The regularization encourages the graphical model to exhibit a brain-like structure. Specifically, in this hierarchical structure, hundreds of thousands of voxels serve as the leaf nodes of the tree. A node in the intermediate layer represents a region formed by voxels in the subtree rooted at that node. The whole brain is considered as the root of the tree. We propose to apply the tree-structural regularized graphical model to estimate the mouse brain network. However, the dimensionality of whole-brain data, usually on the order of hundreds of thousands, poses significant computational challenges. Efficient algorithms that are capable of estimating networks from high-dimensional data are highly desired. To address the computational challenge, we develop a screening rule which can quickly identify many zero blocks in the estimated graphical model, thereby dramatically reducing the computational cost of solving the proposed model. It is based on a novel insight on the relationship between screening and the so-called proximal operator that we first establish in this paper. We perform experiments on both synthetic data and real data from the Allen Developing Mouse Brain Atlas; results demonstrate the effectiveness and efficiency of the proposed approach.
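
    The flavor of such screening rules can be shown with the basic (unstructured) graphical-lasso rule: an off-diagonal block of the estimated precision matrix is exactly zero whenever every corresponding entry of the sample covariance is at most the penalty level. The tree-structured rule in the paper is more involved; this sketch only illustrates the principle:

```python
def screen_zero_block(S, rows, cols, lam):
    """Screening rule for the (unstructured) l1-penalized graphical lasso:
    if every entry of the off-diagonal block of the sample covariance S
    indexed by rows x cols satisfies |S_ij| <= lam, the corresponding block
    of the estimated precision matrix is zero and can be dropped before
    running the optimization."""
    return all(abs(S[i][j]) <= lam for i in rows for j in cols)
```

    Applying this check to candidate partitions of the variables identifies zero blocks cheaply, which is what makes whole-brain problems with hundreds of thousands of voxels tractable.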

  13. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer

    2017-11-09

    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of different areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it found its way into many important applications, including face recognition, tracking, super-resolution, and image denoising, to name a few. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, people tend to downsample and subsample the problem (e.g. via dimensionality reduction) to maintain a manageably sized LASSO, which usually comes at the cost of losing solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver nor matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g. on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.

  14. FFTLasso: Large-Scale LASSO in the Fourier Domain

    KAUST Repository

    Bibi, Adel Aamer; Itani, Hani; Ghanem, Bernard

    2017-01-01

    In this paper, we revisit the LASSO sparse representation problem, which has been studied and used in a variety of different areas, ranging from signal processing and information theory to computer vision and machine learning. In the vision community, it found its way into many important applications, including face recognition, tracking, super-resolution, and image denoising, to name a few. Despite advances in efficient sparse algorithms, solving large-scale LASSO problems remains a challenge. To circumvent this difficulty, people tend to downsample and subsample the problem (e.g. via dimensionality reduction) to maintain a manageably sized LASSO, which usually comes at the cost of losing solution accuracy. This paper proposes a novel circulant reformulation of the LASSO that lifts the problem to a higher dimension, where ADMM can be efficiently applied to its dual form. Because of this lifting, all optimization variables are updated using only basic element-wise operations, the most computationally expensive of which is a 1D FFT. In this way, there is no need for a linear system solver nor matrix-vector multiplication. Since all operations in our FFTLasso method are element-wise, the subproblems are completely independent and can be trivially parallelized (e.g. on a GPU). The attractive computational properties of FFTLasso are verified by extensive experiments on synthetic and real data and on the face recognition task. They demonstrate that FFTLasso scales much more effectively than a state-of-the-art solver.
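
    The circulant structure that FFTLasso exploits can be made concrete: multiplying by a circulant matrix is a circular convolution, which the FFT turns into a purely element-wise product in the Fourier domain. A direct (non-FFT) sketch of the matrix-vector product, for illustration only:

```python
def circulant_matvec(c, x):
    """Multiply the circulant matrix C (whose first column is c, so that
    C[i][j] = c[(i - j) mod n]) by the vector x via circular convolution.
    In the Fourier domain this becomes an element-wise product, which is
    what makes the lifted ADMM updates in FFTLasso cheap."""
    n = len(c)
    return [sum(c[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]
```

    Replacing the O(n^2) sum above with forward FFT, element-wise multiplication, and inverse FFT yields the O(n log n) element-wise updates described in the abstract.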

  15. Comparison of partial least squares and lasso regression techniques as applied to laser-induced breakdown spectroscopy of geological samples

    International Nuclear Information System (INIS)

    Dyar, M.D.; Carmosino, M.L.; Breves, E.A.; Ozanne, M.V.; Clegg, S.M.; Wiens, R.C.

    2012-01-01

    A remote laser-induced breakdown spectrometer (LIBS) designed to simulate the ChemCam instrument on the Mars Science Laboratory Rover Curiosity was used to probe 100 geologic samples at a 9-m standoff distance. ChemCam consists of an integrated remote LIBS instrument that will probe samples up to 7 m from the mast of the rover and a remote micro-imager (RMI) that will record context images. The elemental compositions of 100 igneous and highly-metamorphosed rocks are determined with LIBS using three variations of multivariate analysis, with a goal of improving the analytical accuracy. Two forms of partial least squares (PLS) regression are employed with finely-tuned parameters: PLS-1 regresses a single response variable (elemental concentration) against the observation variables (spectra, or intensity at each of 6144 spectrometer channels), while PLS-2 simultaneously regresses multiple response variables (concentrations of the ten major elements in rocks) against the observation predictor variables, taking advantage of natural correlations between elements. Those results are contrasted with those from the multivariate regression technique of the least absolute shrinkage and selection operator (lasso), which is a penalized shrunken regression method that selects the specific channels for each element that explain the most variance in the concentration of that element. To make this comparison, we use results of cross-validation and of held-out testing, and employ unscaled and uncentered spectral intensity data because all of the input variables are already in the same units. Results demonstrate that the lasso, PLS-1, and PLS-2 all yield comparable results in terms of accuracy for this dataset. However, the interpretability of these methods differs greatly in terms of fundamental understanding of LIBS emissions. PLS techniques generate principal components, linear combinations of intensities at any number of spectrometer channels, which explain as much variance in the

  16. Comparison of partial least squares and lasso regression techniques as applied to laser-induced breakdown spectroscopy of geological samples

    Energy Technology Data Exchange (ETDEWEB)

    Dyar, M.D., E-mail: mdyar@mtholyoke.edu [Dept. of Astronomy, Mount Holyoke College, 50 College St., South Hadley, MA 01075 (United States); Carmosino, M.L.; Breves, E.A.; Ozanne, M.V. [Dept. of Astronomy, Mount Holyoke College, 50 College St., South Hadley, MA 01075 (United States); Clegg, S.M.; Wiens, R.C. [Los Alamos National Laboratory, P.O. Box 1663, MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    A remote laser-induced breakdown spectrometer (LIBS) designed to simulate the ChemCam instrument on the Mars Science Laboratory Rover Curiosity was used to probe 100 geologic samples at a 9-m standoff distance. ChemCam consists of an integrated remote LIBS instrument that will probe samples up to 7 m from the mast of the rover and a remote micro-imager (RMI) that will record context images. The elemental compositions of 100 igneous and highly-metamorphosed rocks are determined with LIBS using three variations of multivariate analysis, with a goal of improving the analytical accuracy. Two forms of partial least squares (PLS) regression are employed with finely-tuned parameters: PLS-1 regresses a single response variable (elemental concentration) against the observation variables (spectra, or intensity at each of 6144 spectrometer channels), while PLS-2 simultaneously regresses multiple response variables (concentrations of the ten major elements in rocks) against the observation predictor variables, taking advantage of natural correlations between elements. Those results are contrasted with those from the multivariate regression technique of the least absolute shrinkage and selection operator (lasso), which is a penalized shrunken regression method that selects the specific channels for each element that explain the most variance in the concentration of that element. To make this comparison, we use results of cross-validation and of held-out testing, and employ unscaled and uncentered spectral intensity data because all of the input variables are already in the same units. Results demonstrate that the lasso, PLS-1, and PLS-2 all yield comparable results in terms of accuracy for this dataset. However, the interpretability of these methods differs greatly in terms of fundamental understanding of LIBS emissions. PLS techniques generate principal components, linear combinations of intensities at any number of spectrometer channels, which explain as much variance in the

  17. Toward Probabilistic Diagnosis and Understanding of Depression Based on Functional MRI Data Analysis with Logistic Group LASSO.

    Directory of Open Access Journals (Sweden)

    Yu Shimizu

    Diagnosis of psychiatric disorders based on brain imaging data is highly desirable in clinical applications. However, a common problem in applying machine learning algorithms is that the number of imaging data dimensions often greatly exceeds the number of available training samples. Furthermore, interpretability of the learned classifier with respect to brain function and anatomy is an important, but non-trivial, issue. We propose the use of logistic regression with a least absolute shrinkage and selection operator (LASSO) to capture the most critical input features. In particular, we consider application of group LASSO to select brain areas relevant to diagnosis. An additional advantage of LASSO is its probabilistic output, which allows evaluation of diagnosis certainty. To verify our approach, we obtained semantic and phonological verbal fluency fMRI data from 31 depression patients and 31 control subjects, and compared the performances of group LASSO (gLASSO) and sparse group LASSO (sgLASSO) to those of standard LASSO (sLASSO), Support Vector Machine (SVM), and Random Forest. Over 90% classification accuracy was achieved with gLASSO, sgLASSO, and SVM; however, in contrast to SVM, the LASSO approaches allow identification of the most discriminative weights and estimation of prediction reliability. Semantic task data revealed contributions to the classification from the left precuneus, left precentral gyrus, left inferior frontal cortex (pars triangularis), and left cerebellum (crus I). Weights for the phonological task indicated contributions from the left inferior frontal operculum, left postcentral gyrus, left insula, left middle frontal cortex, bilateral middle temporal cortices, bilateral precuneus, left inferior frontal cortex (pars triangularis), and left precentral gyrus. The distribution of normalized odds ratios further showed that predictions with absolute odds ratios higher than 0.2 could be regarded as certain.
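
    The group-LASSO selection mentioned here rests on block soft-thresholding: each group of coefficients (e.g. the voxels of one brain area) is shrunk in Euclidean norm and either kept or zeroed out as a unit. A minimal sketch of that proximal operator:

```python
import math

def group_soft_threshold(v, t):
    """Proximal operator of the group-LASSO penalty: shrink the whole
    coefficient block v toward zero by t in Euclidean norm. The block is
    either rescaled or zeroed out as a unit, which is what selects or
    discards an entire brain area at once."""
    norm = math.sqrt(sum(vi * vi for vi in v))
    if norm <= t:
        return [0.0] * len(v)
    scale = 1.0 - t / norm
    return [scale * vi for vi in v]
```

    Sparse group LASSO adds an element-wise l1 term on top, so individual voxels within a retained area can still be zeroed.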

  18. LASSO NTCP predictors for the incidence of xerostomia in patients with head and neck squamous cell carcinoma and nasopharyngeal carcinoma

    Science.gov (United States)

    Lee, Tsair-Fwu; Liou, Ming-Hsiang; Huang, Yu-Jie; Chao, Pei-Ju; Ting, Hui-Min; Lee, Hsiao-Yi

    2014-01-01

    To predict the incidence of moderate-to-severe patient-reported xerostomia among head and neck squamous cell carcinoma (HNSCC) and nasopharyngeal carcinoma (NPC) patients treated with intensity-modulated radiotherapy (IMRT). Multivariable normal tissue complication probability (NTCP) models were developed using quality of life questionnaire datasets from 152 patients with HNSCC and 84 patients with NPC. The primary endpoint was defined as moderate-to-severe xerostomia after IMRT. The numbers of predictive factors for a multivariable logistic regression model were determined using the least absolute shrinkage and selection operator (LASSO) with a bootstrapping technique. LASSO yielded four predictive models with the smallest numbers of factors while preserving predictive value, as measured by AUC. For all models, the dosimetric factors for the mean dose given to the contralateral and ipsilateral parotid glands were selected as the most significant predictors, followed by different clinical and socio-economic factors, namely age, financial status, T stage, and education, chosen for the different models. The predicted incidence of xerostomia for HNSCC and NPC patients can be improved by using multivariable logistic regression models with the LASSO technique. The predictive model developed in HNSCC cannot be generalized to the NPC cohort treated with IMRT without validation, and vice versa. PMID:25163814
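
    The LASSO-with-bootstrapping idea used above, keeping only predictors that survive selection across resamples, can be sketched on synthetic data. The predictor names (mean parotid doses, age, etc.) are illustrative labels only, not the study's data.

```python
# Sketch: bootstrap the data, refit an L1 logistic model each time, and
# keep predictors selected in a large fraction of resamples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
names = ["Dmean_c", "Dmean_i", "age", "T_stage", "education",
         "noise1", "noise2"]
X = rng.standard_normal((n, len(names)))
logit = 1.2 * X[:, 0] + 1.0 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

B = 100
counts = np.zeros(len(names))
for _ in range(B):
    idx = rng.integers(0, n, n)                 # bootstrap resample
    m = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
    m.fit(X[idx], y[idx])
    counts += (m.coef_.ravel() != 0)

stable = [nm for nm, c in zip(names, counts) if c / B >= 0.8]
```

    Predictors with high selection frequency form the small, stable factor set the abstract describes.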

  19. The joint graphical lasso for inverse covariance estimation across multiple classes.

    Science.gov (United States)

    Danaher, Patrick; Wang, Pei; Witten, Daniela M

    2014-03-01

    We consider the problem of estimating multiple related Gaussian graphical models from a high-dimensional data set with observations belonging to distinct classes. We propose the joint graphical lasso, which borrows strength across the classes in order to estimate multiple graphical models that share certain characteristics, such as the locations or weights of nonzero edges. Our approach is based upon maximizing a penalized log likelihood. We employ generalized fused lasso or group lasso penalties, and implement a fast ADMM algorithm to solve the corresponding convex optimization problems. The performance of the proposed method is illustrated through simulated and real data examples.
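
    The joint graphical lasso with fused or group penalties is implemented in the authors' R package; as a simpler sketch of the underlying per-class estimation problem, one can fit a separate graphical lasso per class with scikit-learn and compare the estimated sparsity patterns (what the joint penalty would explicitly couple).

```python
# Sketch: per-class graphical lasso on two classes sharing a chain-
# structured precision matrix; the joint penalties of the paper are
# not reproduced here.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
p = 8
# Shared tridiagonal (chain) precision matrix, positive definite
prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov = np.linalg.inv(prec)
X1 = rng.multivariate_normal(np.zeros(p), cov, size=400)
X2 = rng.multivariate_normal(np.zeros(p), cov, size=400)

g1 = GraphicalLasso(alpha=0.1).fit(X1)
g2 = GraphicalLasso(alpha=0.1).fit(X2)
edges1 = np.abs(g1.precision_) > 1e-4
edges2 = np.abs(g2.precision_) > 1e-4
shared = int(np.sum(edges1 & edges2))   # entries nonzero in both classes
```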

  20. The Los Alamos Space Science Outreach (LASSO) Program

    Science.gov (United States)

    Barker, P. L.; Skoug, R. M.; Alexander, R. J.; Thomsen, M. F.; Gary, S. P.

    2002-12-01

    The Los Alamos Space Science Outreach (LASSO) program features summer workshops in which K-14 teachers spend several weeks at LANL learning space science from Los Alamos scientists and developing methods and materials for teaching this science to their students. The program is designed to provide hands-on space science training to teachers as well as assistance in developing lesson plans for use in their classrooms. The program supports an instructional model based on education research and cognitive theory. Students and teachers engage in activities that encourage critical thinking and a constructivist approach to learning. LASSO is run through the Los Alamos Science Education Team (SET). SET personnel have many years of experience in teaching, education research, and science education programs. Their involvement ensures that the teacher workshop program is grounded in sound pedagogical methods and meets current educational standards. Lesson plans focus on current LANL satellite projects to study the solar wind and the Earth's magnetosphere. LASSO is an umbrella program for space science education activities at Los Alamos National Laboratory (LANL) that was created to enhance the science and math interests and skills of students from New Mexico and the nation. The LASSO umbrella allows maximum leveraging of EPO funding from a number of projects (and thus maximum educational benefits to both students and teachers), while providing a format for the expression of the unique science perspective of each project.

  1. Matlab implementation of LASSO, LARS, the elastic net and SPCA

    DEFF Research Database (Denmark)

    2005-01-01

    There are a number of interesting variable selection methods available beside the regular forward selection and stepwise selection methods. Such approaches include LASSO (Least Absolute Shrinkage and Selection Operator), least angle regression (LARS) and elastic net (LARS-EN) regression. There also exists a method for calculating principal components with sparse loadings. This software package contains Matlab implementations of these functions. The standard implementations of these functions are available as add-on packages in S-Plus and R.
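
    The routines this Matlab toolbox provides also have scikit-learn counterparts: `lars_path` computes the full LARS/LASSO solution path and `ElasticNet` fits the mixed L1/L2 penalty. A small sketch on synthetic data:

```python
# Sketch: LARS/LASSO path and elastic net via scikit-learn, as Python
# equivalents of the Matlab routines described above.
import numpy as np
from sklearn.linear_model import ElasticNet, lars_path

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
beta = np.array([3.0, -2.0] + [0.0] * 8)   # two informative features
y = X @ beta + 0.1 * rng.standard_normal(100)

alphas, active, coefs = lars_path(X, y, method="lasso")  # full LASSO path
enet = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)     # L1/L2 mix

first_two = set(active[:2])   # variables entering the path first
```

    The strongly informative variables enter the path before any noise variable, which is the behavior LARS-type algorithms are designed to expose.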

  2. Logistic LASSO regression for the diagnosis of breast cancer using clinical demographic data and the BI-RADS lexicon for ultrasonography.

    Science.gov (United States)

    Kim, Sun Mi; Kim, Yongdai; Jeong, Kuhwan; Jeong, Heeyeong; Kim, Jiyoung

    2018-01-01

    The aim of this study was to compare the performance of image analysis for predicting breast cancer using two distinct regression models and to evaluate the usefulness of incorporating clinical and demographic data (CDD) into the image analysis in order to improve the diagnosis of breast cancer. This study included 139 solid masses from 139 patients who underwent an ultrasonography-guided core biopsy and had available CDD between June 2009 and April 2010. Three breast radiologists retrospectively reviewed the 139 breast masses and described each lesion using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We applied and compared two regression methods, stepwise logistic (SL) regression and logistic least absolute shrinkage and selection operator (LASSO) regression, in which the BI-RADS descriptors and CDD were used as covariates. We investigated the performances of these regression methods and the agreement of radiologists in terms of test misclassification error and the area under the curve (AUC) of the tests. Logistic LASSO regression was superior to SL regression, and the AUC obtained without CDD was comparable to the AUC with CDD (0.873 vs. 0.880, P=0.141). Logistic LASSO regression based on BI-RADS descriptors and CDD showed better performance than SL in predicting the presence of breast cancer. The use of CDD as a supplement to the BI-RADS descriptors significantly improved the prediction of breast cancer using logistic LASSO regression.

  3. Oracle Efficient Estimation and Forecasting with the Adaptive LASSO and the Adaptive Group LASSO in Vector Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Callot, Laurent

    We show that the adaptive Lasso (aLasso) and the adaptive group Lasso (agLasso) are oracle efficient in stationary vector autoregressions where the number of parameters per equation is smaller than the number of observations. In particular, this means that the parameters are estimated consistently.
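
    The adaptive Lasso weights each coefficient's penalty by the inverse of a first-stage estimate. A common implementation trick, sketched below on plain regression data rather than a vector autoregression, is to rescale the columns of X by the first-stage weights, run an ordinary LASSO, and transform back.

```python
# Sketch: adaptive LASSO via OLS first stage and column rescaling.
# Penalizing sum(w_j * |b_j|) equals an ordinary LASSO on X / w.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.standard_normal((n, p))
beta = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0])
y = X @ beta + 0.5 * rng.standard_normal(n)

ols = LinearRegression().fit(X, y)
w = 1.0 / np.abs(ols.coef_)          # adaptive weights (first stage)
Xw = X / w                            # column j scaled by 1/w_j
alasso = Lasso(alpha=0.05).fit(Xw, y)
beta_hat = alasso.coef_ / w           # back to the original scale

selected = np.flatnonzero(beta_hat != 0)
```

    Variables with small first-stage estimates receive large penalties and are driven to zero, which is what yields the oracle-type selection behavior.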

  4. LES ARM Symbiotic Simulation and Observation (LASSO) Implementation Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Gustafson Jr., WI [Pacific Northwest National Laboratory; Vogelmann, AM [Brookhaven National Laboratory

    2015-09-01

    This document illustrates the design of the Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) workflow to provide a routine, high-resolution modeling capability to augment the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility’s high-density observations. LASSO will create a powerful new capability for furthering ARM’s mission to advance understanding of cloud, radiation, aerosol, and land-surface processes. The combined observational and modeling elements will enable a new level of scientific inquiry by connecting processes and context to observations and providing needed statistics for details that cannot be measured. The result will be improved process understanding that facilitates concomitant improvements in climate model parameterizations. The initial LASSO implementation will be for ARM’s Southern Great Plains site in Oklahoma and will focus on shallow convection, which is poorly simulated by climate models due in part to clouds’ typically small spatial scale compared to model grid spacing, and because the convection involves complicated interactions of microphysical and boundary layer processes.

  5. Controlling the local false discovery rate in the adaptive Lasso

    KAUST Repository

    Sampson, J. N.

    2013-04-09

    The Lasso shrinkage procedure achieved its popularity, in part, by its tendency to shrink estimated coefficients to zero, and its ability to serve as a variable selection procedure. Using data-adaptive weights, the adaptive Lasso modified the original procedure to increase the penalty terms for those variables estimated to be less important by ordinary least squares. Although this modified procedure attained the oracle properties, the resulting models tend to include a large number of "false positives" in practice. Here, we adapt the concept of local false discovery rates (lFDRs) so that it applies to the sequence, λn, of smoothing parameters for the adaptive Lasso. We define the lFDR for a given λn to be the probability that the variable added to the model by decreasing λn to λn-δ is not associated with the outcome, where δ is a small value. We derive the relationship between the lFDR and λn, show lFDR =1 for traditional smoothing parameters, and show how to select λn so as to achieve a desired lFDR. We compare the smoothing parameters chosen to achieve a specified lFDR and those chosen to achieve the oracle properties, as well as their resulting estimates for model coefficients, with both simulation and an example from a genetic study of prostate specific antigen.
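
    The lFDR above keys on which variable enters the model as the smoothing parameter decreases. That entry sequence along the LASSO path can be inspected with scikit-learn's LARS implementation; the lFDR computation itself is not reproduced in this sketch, and the data are synthetic.

```python
# Sketch: the order in which variables enter the LASSO path as the
# penalty (alphas) shrinks; a late entry is the kind of event whose
# false-discovery probability the lFDR quantifies.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p = 150, 12
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[0, 1, 2]] = [3.0, 2.0, 1.0]     # three true signals
y = X @ beta + 0.3 * rng.standard_normal(n)

alphas, active, _ = lars_path(X, y, method="lasso")
entry_order = [int(i) for i in active]  # variables in order of entry
```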

  6. Fast empirical Bayesian LASSO for multiple quantitative trait locus mapping

    Directory of Open Access Journals (Sweden)

    Xu Shizhong

    2011-05-01

    Full Text Available Abstract Background The Bayesian shrinkage technique has been applied to multiple quantitative trait locus (QTL) mapping to estimate the genetic effects of QTLs on quantitative traits from a very large set of possible effects, including the main and epistatic effects of QTLs. Although the recently developed empirical Bayes (EB) method significantly reduces computation compared with the fully Bayesian approach, its speed and accuracy are limited by the fact that numerical optimization is required to estimate the variance components in the QTL model. Results We developed a fast empirical Bayesian LASSO (EBLASSO) method for multiple QTL mapping. The fact that the EBLASSO can estimate the variance components in a closed form, along with other algorithmic techniques, renders the EBLASSO method more efficient and accurate. Compared with the EB method, our simulation study demonstrated that the EBLASSO method could substantially improve the computational speed and detect more QTL effects without increasing the false positive rate. In particular, the EBLASSO algorithm running on a personal computer could easily handle a linear QTL model with more than 100,000 variables in our simulation study. Real data analysis also demonstrated that the EBLASSO method detected more reasonable effects than the EB method. Compared with the LASSO, our simulation showed that the current version of the EBLASSO implemented in Matlab had similar speed to the LASSO implemented in Fortran, and that the EBLASSO detected the same number of true effects as the LASSO but a much smaller number of false positive effects. Conclusions The EBLASSO method can handle a large number of effects, possibly including both the main and epistatic QTL effects, environmental effects and the effects of gene-environment interactions. It will be a very useful tool for multiple QTL mapping.

  7. The Bayesian group lasso for confounded spatial data

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.; Hanks, Ephraim M.; Russell, Robin E.; Walsh, Daniel P.

    2017-01-01

    Generalized linear mixed models for spatial processes are widely used in applied statistics. In many applications of the spatial generalized linear mixed model (SGLMM), the goal is to obtain inference about regression coefficients while achieving optimal predictive ability. When implementing the SGLMM, multicollinearity among covariates and the spatial random effects can make computation challenging and influence inference. We present a Bayesian group lasso prior with a single tuning parameter that can be chosen to optimize predictive ability of the SGLMM and jointly regularize the regression coefficients and spatial random effect. We implement the group lasso SGLMM using efficient Markov chain Monte Carlo (MCMC) algorithms and demonstrate how multicollinearity among covariates and the spatial random effect can be monitored as a derived quantity. To test our method, we compared several parameterizations of the SGLMM using simulated data and two examples from plant ecology and disease ecology. In all examples, problematic levels of multicollinearity occurred and influenced sampling efficiency and inference. We found that the group lasso prior resulted in roughly twice the effective sample size for MCMC samples of regression coefficients and can have higher and less variable predictive accuracy based on out-of-sample data when compared to the standard SGLMM.

  8. Supervised group Lasso with applications to microarray data analysis

    Directory of Open Access Journals (Sweden)

    Huang Jian

    2007-02-01

    Full Text Available Abstract Background A tremendous amount of effort has been devoted to identifying genes for diagnosis and prognosis of diseases using microarray gene expression data. It has been demonstrated that gene expression data have cluster structure, where the clusters consist of co-regulated genes which tend to have coordinated functions. However, most available statistical methods for gene selection do not take into consideration the cluster structure. Results We propose a supervised group Lasso approach that takes into account the cluster structure in gene expression data for gene selection and predictive model building. For gene expression data without biological cluster information, we first divide genes into clusters using the K-means approach and determine the optimal number of clusters using the Gap method. The supervised group Lasso consists of two steps. In the first step, we identify important genes within each cluster using the Lasso method. In the second step, we select important clusters using the group Lasso. Tuning parameters are determined using V-fold cross validation at both steps to allow for further flexibility. Prediction performance is evaluated using leave-one-out cross validation. We apply the proposed method to disease classification and survival analysis with microarray data. Conclusion We analyze four microarray data sets using the proposed approach: two cancer data sets with binary cancer occurrence as outcomes and two lymphoma data sets with survival outcomes. The results show that the proposed approach is capable of identifying a small number of influential gene clusters and important genes within those clusters, and has better prediction performance than existing methods.
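
    The first step of the two-step procedure, clustering genes with K-means and then running a LASSO within each cluster, can be sketched as follows. The group-Lasso step across clusters is omitted (scikit-learn has no group penalty), and the expression data are synthetic.

```python
# Sketch: step 1 of the supervised group Lasso - K-means on genes
# (columns), then a within-cluster LASSO screen of genes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 80, 60                         # 80 samples, 60 genes
X = rng.standard_normal((n, p))
y = X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(n)

# Cluster genes, as the paper does when no biological annotation exists
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X.T)

kept = []
for g in range(4):                    # LASSO within each gene cluster
    cols = np.flatnonzero(labels == g)
    fit = Lasso(alpha=0.2).fit(X[:, cols], y)
    kept.extend(int(c) for c in cols[fit.coef_ != 0])
```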

  9. Breast cancer tumor classification using LASSO method selection approach

    International Nuclear Information System (INIS)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M.

    2016-10-01

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed to automatically classify tumor lesions into malign or benign as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile-rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with a high probability of malignant findings, aiding in the detection of potential malign cases as a second opinion to the radiologist. (Author)
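
    The evaluation pipeline described, percentile-rank (p-rank) standardization, an L1-penalized classifier, and a leave-one-out AUC estimate, can be sketched on synthetic stand-in data (the BCDR features themselves are not used here).

```python
# Sketch: p-rank standardization + L1 logistic classifier + leave-one-
# out cross-validated AUC, mirroring the CADx evaluation scheme.
import numpy as np
from scipy.stats import rankdata
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n, p = 143, 20                        # 143 lesions, 20 image features
X = rng.standard_normal((n, p))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.standard_normal(n) > 0).astype(int)

X_prank = rankdata(X, axis=0) / n     # percentile-rank standardization

clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
proba = cross_val_predict(clf, X_prank, y, cv=LeaveOneOut(),
                          method="predict_proba")[:, 1]
auc = roc_auc_score(y, proba)
```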

  10. Breast cancer tumor classification using LASSO method selection approach

    Energy Technology Data Exchange (ETDEWEB)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed to automatically classify tumor lesions into malign or benign as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile-rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with a high probability of malignant findings, aiding in the detection of potential malign cases as a second opinion to the radiologist. (Author)

  11. YM2: Continuum expectations, lattice convergence, and lassos

    International Nuclear Information System (INIS)

    Driver, B.K.

    1989-01-01

    The two-dimensional Yang-Mills theory (YM2) is analyzed in both the continuum and the lattice. In the complete axial gauge the continuum theory may be defined in terms of a Lie algebra valued white noise, and parallel translation may be defined by stochastic differential equations. This machinery is used to compute the expectations of gauge invariant functions of the parallel translation operators along a collection of curves C. The expectation values are expressed as finite dimensional integrals with densities that are products of the heat kernel on the structure group. The time parameters of the heat kernels are determined by the areas enclosed by the collection C, and the arguments are determined by the crossing topologies of the curves in C. The expectations for the Wilson lattice models have a similar structure, and from this it follows that in the limit of small lattice spacing the lattice expectations converge to the continuum expectations. It is also shown that the lasso variables advocated by L. Gross exist and are sufficient to generate all the measurable functions on the YM2-measure space. (orig.)

  12. Non-Asymptotic Oracle Inequalities for the High-Dimensional Cox Regression via Lasso.

    Science.gov (United States)

    Kong, Shengchun; Nan, Bin

    2014-01-01

    We consider finite sample properties of the regularized high-dimensional Cox regression via lasso. Existing literature focuses on linear models or generalized linear models with Lipschitz loss functions, where the empirical risk functions are the summations of independent and identically distributed (iid) losses. The summands in the negative log partial likelihood function for censored survival data, however, are neither iid nor Lipschitz. We first approximate the negative log partial likelihood function by a sum of iid non-Lipschitz terms, then derive non-asymptotic oracle inequalities for the lasso-penalized Cox regression, using pointwise arguments to tackle the difficulties caused by the lack of iid Lipschitz losses.

  13. Multivariate sparse group lasso for the multivariate multiple linear regression with an arbitrary group structure.

    Science.gov (United States)

    Li, Yanming; Nan, Bin; Zhu, Ji

    2015-06-01

    We propose a multivariate sparse group lasso variable selection and estimation method for data with high-dimensional predictors as well as high-dimensional response variables. The method is carried out through a penalized multivariate multiple linear regression model with an arbitrary group structure for the regression coefficient matrix. It suits many biology studies well in detecting associations between multiple traits and multiple predictors, with each trait and each predictor embedded in some biological functional groups such as genes, pathways or brain regions. The method is able to effectively remove unimportant groups as well as unimportant individual coefficients within important groups, particularly for large p small n problems, and is flexible in handling various complex group structures such as overlapping or nested or multilevel hierarchical structures. The method is evaluated through extensive simulations with comparisons to the conventional lasso and group lasso methods, and is applied to an eQTL association study. © 2015, The International Biometric Society.

  14. Inference for feature selection using the Lasso with high-dimensional data

    DEFF Research Database (Denmark)

    Brink-Jensen, Kasper; Ekstrøm, Claus Thorn

    2014-01-01

    Penalized regression models such as the Lasso have proved useful for variable selection in many fields - especially for situations with high-dimensional data where the number of predictors far exceeds the number of observations. These methods identify and rank variables of importance but do not generally provide any inference on the selected variables. Thus, the variables selected might be the "most important" but need not be significant. We propose a significance test for the selection found by the Lasso. We introduce a procedure that computes inference and p-values for features chosen by the Lasso. This method rephrases the null hypothesis and uses a randomization approach which ensures that the error rate is controlled even for small samples. We demonstrate the ability of the algorithm to compute p-values of the expected magnitude with simulated data using a multitude of scenarios.
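
    A randomization-style significance check for a Lasso-selected feature can be sketched as follows: compare the feature's fitted coefficient against its null distribution when that feature's values are permuted. This is a simplified stand-in for the paper's procedure, not its exact algorithm; the data are synthetic.

```python
# Sketch: permutation-based p-value for the feature the Lasso selects.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 30
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] + rng.standard_normal(n)

j = int(np.argmax(np.abs(Lasso(alpha=0.1).fit(X, y).coef_)))  # selected

def lasso_coef(Xm):
    """Absolute fitted coefficient of feature j."""
    return float(np.abs(Lasso(alpha=0.1).fit(Xm, y).coef_[j]))

obs = lasso_coef(X)
null = []
for _ in range(200):                  # permute column j only
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    null.append(lasso_coef(Xp))
p_value = (1 + np.sum(np.array(null) >= obs)) / (1 + 200)
```

    Permuting only the candidate column breaks its link to the outcome while preserving the rest of the design, so the permuted coefficients approximate the null distribution of the selection statistic.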

  15. Association between biomarkers and clinical characteristics in chronic subdural hematoma patients assessed with lasso regression.

    Directory of Open Access Journals (Sweden)

    Are Hugo Pripp

    Full Text Available Chronic subdural hematoma (CSDH) is characterized by an "old" encapsulated collection of blood and blood breakdown products between the brain and its outermost covering (the dura). Recognized risk factors for the development of CSDH are head injury, old age and use of anticoagulation medication, but its underlying pathophysiological processes are still unclear. It is assumed that a complex local process of interrelated mechanisms including inflammation, neomembrane formation, angiogenesis and fibrinolysis could be related to its development and propagation. However, the associations between the biomarkers of inflammation and angiogenesis and the clinical and radiological characteristics of CSDH patients need further investigation. The high number of biomarkers compared to the number of observations, the correlation between biomarkers, missing data and skewed distributions may limit the usefulness of classical statistical methods. We therefore explored lasso regression to assess the association between 30 biomarkers of inflammation and angiogenesis at the site of lesions and selected clinical and radiological characteristics in a cohort of 93 patients. Lasso regression performs both variable selection and regularization to improve the predictive accuracy and interpretability of the statistical model. The lasso regression analysis showed a lack of robust statistical association between the biomarkers in hematoma fluid and age, gender, brain infarct, neurological deficiencies and volume of hematoma. However, there were associations between several of the biomarkers and postoperative recurrence requiring reoperation. The statistical analysis with lasso regression supported previous findings that the immunological characteristics of CSDH are local. The relationships between biomarkers, the radiological appearance of lesions and recurrence requiring reoperation had been inconclusive using classical statistical methods on these data.

  16. Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    Science.gov (United States)

    Ndiaye, Eugene; Fercoq, Olivier; Gramfort, Alexandre; Leclère, Vincent; Salmon, Joseph

    2017-10-01

    In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation and performance. It is customary to consider an ℓ1 penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimension. For efficiency, they rely on tuning a parameter trading data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature: Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, for instance, and could be of interest for uncertainty quantification. In this work, after illustrating numerical difficulties with the Concomitant Lasso formulation, we propose a modification we coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver leading to a computational cost no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules, to achieve speed efficiency by eliminating early irrelevant features.
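
    The joint optimization over the regression parameter and the noise level can be sketched as a toy fixed-point loop: alternate a Lasso fit whose penalty is proportional to the current noise estimate with a noise update from the residuals, keeping a lower bound on sigma for numerical stability (the role the paper's smoothing plays). This is an illustrative loop under those assumptions, not the authors' coordinate-descent solver.

```python
# Sketch: concomitant-style alternating estimation of coefficients and
# noise level, with a lower bound on sigma for stability.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.0, 1.5]
sigma_true = 0.5
y = X @ beta + sigma_true * rng.standard_normal(n)

lam0 = np.sqrt(2 * np.log(p) / n)    # universal-style base penalty
sigma, sigma_min = 1.0, 1e-3         # lower bound plays the smoothing role
for _ in range(20):
    fit = Lasso(alpha=lam0 * sigma).fit(X, y)
    resid = y - fit.predict(X)
    sigma = max(float(np.sqrt(np.mean(resid ** 2))), sigma_min)
```

    At the fixed point the penalty is proportional to the estimated noise level, which is exactly the scaling the Lasso theory asks for.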

  17. Detection of radionuclides from weak and poorly resolved spectra using Lasso and subsampling techniques

    International Nuclear Information System (INIS)

    Bai, Er-Wei; Chan, Kung-sik; Eichinger, William; Kump, Paul

    2011-01-01

    We consider the problem of identifying nuclides from weak and poorly resolved spectra. A two-stage algorithm is proposed and tested based on the principle of majority voting. The idea is to model gamma-ray counts as Poisson processes. The average part is taken to be the model, and the difference between the observed gamma-ray counts and the average is treated as random noise. In the linear part, the unknown coefficients correspond to whether isotopes of interest are present or absent. Lasso-type algorithms are applied to find the non-vanishing coefficients. Since Lasso, or any prediction-error-based algorithm, is inconsistent for variable selection at finite data length, an estimate of the parameter distribution based on subsampling techniques is added on top of Lasso. Simulation examples are provided in which traditional peak detection algorithms fail while the proposed two-stage algorithm performs well in terms of both False Negative and False Positive errors. - Highlights: → Identification of nuclides from weak and poorly resolved spectra. → An algorithm is proposed and tested based on the principle of majority voting. → Lasso-type algorithms are applied to find non-vanishing coefficients. → An estimate of parameter distribution based on sub-sampling techniques is included. → Simulations compare the results of the proposed method with those of peak detection.
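
    The two-stage idea can be sketched as follows: model the observed counts as a nonnegative mix of known isotope template spectra plus Poisson noise, run a Lasso on random subsamples of the channels, and declare an isotope present only if it is selected in a majority of subsamples. The templates here are synthetic Gaussian peaks, not real isotope libraries.

```python
# Sketch: Lasso + channel subsampling + majority voting for isotope
# identification on a weak synthetic Poisson spectrum.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_chan, n_iso = 256, 6
chan = np.arange(n_chan)
centers = np.linspace(30, 220, n_iso)
T = np.exp(-0.5 * ((chan[:, None] - centers[None, :]) / 6.0) ** 2)

present = np.array([1, 0, 1, 0, 0, 0], float)    # isotopes 0 and 2
counts = rng.poisson(40 * T @ present + 2.0)     # weak spectrum + background

B = 50
votes = np.zeros(n_iso)
for _ in range(B):
    idx = rng.choice(n_chan, size=150, replace=False)  # subsample channels
    fit = Lasso(alpha=0.5, positive=True).fit(T[idx], counts[idx])
    votes += (fit.coef_ > 1e-6)
detected = np.flatnonzero(votes / B > 0.5)       # majority voting
```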

  18. Detection of radionuclides from weak and poorly resolved spectra using Lasso and subsampling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Er-Wei, E-mail: er-wei-bai@uiowa.edu [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States); Chan, Kung-sik, E-mail: kung-sik-chan@uiowa.edu [Department of Statistical and Actuarial Science, University of Iowa, Iowa City, IA 52242 (United States); Eichinger, William, E-mail: william-eichinger@uiowa.edu [Department of Civil and Environmental Engineering, University of Iowa, Iowa City, IA 52242 (United States); Kump, Paul [Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242 (United States)

    2011-10-15

    We consider the problem of identifying nuclides from weak and poorly resolved spectra. A two-stage algorithm is proposed and tested based on the principle of majority voting. The idea is to model gamma-ray counts as Poisson processes. The average part is taken to be the model, and the difference between the observed gamma-ray counts and the average is treated as random noise. In the linear part, the unknown coefficients correspond to whether isotopes of interest are present or absent. Lasso-type algorithms are applied to find the non-vanishing coefficients. Since Lasso, or any prediction-error-based algorithm, is inconsistent for variable selection at finite data length, an estimate of the parameter distribution based on subsampling techniques is added on top of Lasso. Simulation examples are provided in which traditional peak detection algorithms fail while the proposed two-stage algorithm performs well in terms of both False Negative and False Positive errors. - Highlights: → Identification of nuclides from weak and poorly resolved spectra. → An algorithm is proposed and tested based on the principle of majority voting. → Lasso-type algorithms are applied to find non-vanishing coefficients. → An estimate of parameter distribution based on sub-sampling techniques is included. → Simulations compare the results of the proposed method with those of peak detection.

  19. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van' t [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands)

    2012-03-15

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  20. Impact of statistical learning methods on the predictive power of multivariate normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Schilstra, Cornelis; Langendijk, Johannes A; van't Veld, Aart A

    2012-03-15

    To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Impact of Statistical Learning Methods on the Predictive Power of Multivariate Normal Tissue Complication Probability Models

    International Nuclear Information System (INIS)

    Xu Chengjian; Schaaf, Arjen van der; Schilstra, Cornelis; Langendijk, Johannes A.; Veld, Aart A. van’t

    2012-01-01

    Purpose: To study the impact of different statistical learning methods on the prediction performance of multivariate normal tissue complication probability (NTCP) models. Methods and Materials: In this study, three learning methods, stepwise selection, least absolute shrinkage and selection operator (LASSO), and Bayesian model averaging (BMA), were used to build NTCP models of xerostomia following radiotherapy treatment for head and neck cancer. Performance of each learning method was evaluated by a repeated cross-validation scheme in order to obtain a fair comparison among methods. Results: It was found that the LASSO and BMA methods produced models with significantly better predictive power than that of the stepwise selection method. Furthermore, the LASSO method yields an easily interpretable model as the stepwise method does, in contrast to the less intuitive BMA method. Conclusions: The commonly used stepwise selection method, which is simple to execute, may be insufficient for NTCP modeling. The LASSO method is recommended.

  2. Validating the LASSO algorithm by unmixing spectral signatures in multicolor phantoms

    Science.gov (United States)

    Samarov, Daniel V.; Clarke, Matthew; Lee, Ji Yoon; Allen, David; Litorja, Maritoni; Hwang, Jeeseong

    2012-03-01

    As hyperspectral imaging (HSI) sees increased implementation into the biological and medical fields it becomes increasingly important that the algorithms being used to analyze the corresponding output be validated. While certainly important under any circumstance, as this technology begins to see a transition from benchtop to bedside, ensuring that the measurements being given to medical professionals are accurate and reproducible is critical. In order to address these issues, work has been done in generating a collection of datasets which could act as a test bed for algorithm validation. Using a microarray spot printer, a collection of three food color dyes, acid red 1 (AR), brilliant blue R (BBR) and erioglaucine (EG), are mixed together at different concentrations in varying proportions at different locations on a microarray chip. With the concentration and mixture proportions known at each location, using HSI an algorithm should in principle, based on estimates of abundances, be able to determine the concentrations and proportions of each dye at each location on the chip. These types of data are particularly important in the context of medical measurements as the resulting estimated abundances will be used to make critical decisions which can have a serious impact on an individual's health. In this paper we present a novel algorithm for processing and analyzing HSI data based on the LASSO algorithm (similar to "basis pursuit"). The LASSO is a statistical method for simultaneously performing model estimation and variable selection. In the context of estimating abundances in an HSI scene, these so-called "sparse" representations provided by the LASSO are appropriate as not every pixel will be expected to contain every endmember. The algorithm we present takes the general framework of the LASSO algorithm a step further and incorporates the rich spatial information which is available in HSI to further improve the estimates of abundance. We show our algorithm's improvement
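    The per-pixel unmixing step can be sketched as below. This is a hedged toy version, not the authors' spatially regularized algorithm: Gaussian bumps stand in for the real dye spectra, and the penalty level is invented.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
wavelengths = np.linspace(400, 700, 60)

def peak(center, width):                        # toy Gaussian endmember spectrum
    return np.exp(-((wavelengths - center) / width) ** 2)

E = np.column_stack([peak(450, 30), peak(530, 25), peak(620, 35)])  # 3 "dyes"
true_abund = np.array([0.7, 0.0, 0.3])          # sparse: the middle dye is absent
pixel = E @ true_abund + rng.normal(0, 0.01, size=wavelengths.size)

# positive=True keeps abundances physically nonnegative; the L1 penalty
# drives endmembers that are absent from the pixel to exactly zero.
fit = Lasso(alpha=1e-3, positive=True, fit_intercept=False,
            max_iter=50000).fit(E, pixel)
print(np.round(fit.coef_, 2))
```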

  3. Modelling arithmetic operations

    Energy Technology Data Exchange (ETDEWEB)

    Shabanov-Kushnarenk, Yu P

    1981-01-01

    The possibility of modelling finite alphabetic operators using formal intelligence theory is explored, with models of a 3-digit adder and a multidigit subtractor set up as examples. 2 references.

  4. Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

    KAUST Repository

    Sun, Ying

    2015-09-01

    Quantile functions are important in characterizing the entire probability distribution of a random variable, especially when the tail of a skewed distribution is of interest. This article introduces new quantile function estimators for spatial and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without replicated observations. The theoretical properties are investigated and the performances of the proposed methods are evaluated by simulations. The proposed method is applied to particulate matter (PM) data from the Community Multiscale Air Quality (CMAQ) model to characterize the upper quantiles, which are crucial for studying spatial association between PM concentrations and adverse human health effects. © 2016 American Statistical Association and the American Society for Quality.
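    The core fused-penalty idea (penalizing differences among neighbors so that estimates are piecewise constant in space or time) can be sketched in a much simpler setting than the paper's quantile estimator. The sketch below, with invented data and penalty level, uses the standard reduction of 1-D fused-lasso signal approximation to an ordinary Lasso via a cumulative-sum change of variables.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
true = np.r_[np.zeros(30), np.ones(30) * 2.0, np.ones(30) * 0.5]  # piecewise constant
y = true + rng.normal(0, 0.3, size=true.size)

# min ||y - b||^2 + lam * sum_j |b_j - b_{j-1}| becomes an ordinary Lasso
# after the change of variables b = cumsum(theta): differences of b are
# exactly the entries of theta.
n = y.size
L = np.tril(np.ones((n, n)))          # cumulative-sum design: b = L @ theta
# theta_1 (the overall level) is absorbed by the intercept and left unpenalized.
fit = Lasso(alpha=0.02, max_iter=100000).fit(L[:, 1:], y)
b_hat = fit.intercept_ + L[:, 1:] @ fit.coef_

jumps = np.flatnonzero(np.abs(fit.coef_) > 1e-6) + 1  # estimated change points
print(len(jumps))
```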

  5. Operator spin foam models

    International Nuclear Information System (INIS)

    Bahr, Benjamin; Hellmann, Frank; Kaminski, Wojciech; Kisielowski, Marcin; Lewandowski, Jerzy

    2011-01-01

    The goal of this paper is to introduce a systematic approach to spin foams. We define operator spin foams, that is foams labelled by group representations and operators, as our main tool. A set of moves we define in the set of the operator spin foams (among other operations) allows us to split the faces and the edges of the foams. We assign to each operator spin foam a contracted operator, by using the contractions at the vertices and suitably adjusted face amplitudes. The emergence of the face amplitudes is the consequence of assuming the invariance of the contracted operator with respect to the moves. Next, we define spin foam models and consider the class of models assumed to be symmetric with respect to the moves we have introduced, and assuming their partition functions (state sums) are defined by the contracted operators. Briefly speaking, those operator spin foam models are invariant with respect to the cellular decomposition, and are sensitive only to the topology and colouring of the foam. Imposing an extra symmetry leads to a family we call natural operator spin foam models. This symmetry, combined with assumed invariance with respect to the edge splitting move, determines a complete characterization of a general natural model. It can be obtained by applying arbitrary (quantum) constraints on an arbitrary BF spin foam model. In particular, imposing suitable constraints on a spin(4) BF spin foam model is exactly the way we tend to view 4D quantum gravity, starting with the BC model and continuing with the Engle-Pereira-Rovelli-Livine (EPRL) or Freidel-Krasnov (FK) models. That makes our framework directly applicable to those models. Specifically, our operator spin foam framework can be translated into the language of spin foams and partition functions. Among our natural spin foam models there are the BF spin foam model, the BC model, and a model corresponding to the EPRL intertwiners. 
Our operator spin foam framework can also be used for more general spin

  6. Sparse inverse covariance estimation with the graphical lasso.

    Science.gov (United States)

    Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert

    2008-07-01

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
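    scikit-learn ships an implementation of this algorithm as `GraphicalLasso`. A minimal sketch on synthetic data (the 4-variable chain graph and penalty level are invented for illustration):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(3)
# Precision (inverse covariance) matrix of a chain graph 1-2-3-4:
# zero entries correspond to missing edges.
prec = np.array([[2.0, 0.6, 0.0, 0.0],
                 [0.6, 2.0, 0.6, 0.0],
                 [0.0, 0.6, 2.0, 0.6],
                 [0.0, 0.0, 0.6, 2.0]])
X = rng.multivariate_normal(np.zeros(4), np.linalg.inv(prec), size=2000)

model = GraphicalLasso(alpha=0.05).fit(X)      # L1 penalty on the precision matrix
est = model.precision_
print(np.round(est, 2))
```

The L1 penalty shrinks the entries corresponding to absent edges toward zero while leaving the chain edges clearly nonzero.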

  7. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: simulation and application

    Directory of Open Access Journals (Sweden)

    Monica M. Vasquez

    2016-11-01

    Full Text Available Abstract Background The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. The Least Absolute Shrinkage and Selection Operator (LASSO) is a data analysis method that may be utilized for biomarker selection in these high dimensional data. However, it is unclear which LASSO-type method is preferable when considering data scenarios that may be present in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and sparse number of true signals. The goal of this study was to compare the LASSO to five LASSO-type methods given these scenarios. Methods A simulation study was performed to compare the LASSO, Adaptive LASSO, Elastic Net, Iterated LASSO, Bootstrap-Enhanced LASSO, and Weighted Fusion for the binary logistic regression model. The simulation study was designed to reflect the data structure of the population-based Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD), specifically the sample size (N = 1000 for total population, 500 for sub-analyses), correlation of biomarkers (0.20, 0.50, 0.80), prevalence of overweight (40%) and obese (12%) outcomes, and the association of outcomes with standardized serum biomarker concentrations (log-odds ratio = 0.05–1.75). Each LASSO-type method was then applied to the TESAOD data of 306 overweight, 66 obese, and 463 normal-weight subjects with a panel of 86 serum biomarkers. Results Based on the simulation study, no method had an overall superior performance. The Weighted Fusion correctly identified more true signals, but incorrectly included more noise variables. The LASSO and Elastic Net correctly identified many true signals and excluded more noise variables. In the application study, biomarkers of overweight and obesity selected by all methods were Adiponectin, Apolipoprotein H, Calcitonin, CD

  8. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: simulation and application.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Chen, Zhao; Halonen, Marilyn; Guerra, Stefano

    2016-11-14

    The study of circulating biomarkers and their association with disease outcomes has become progressively complex due to advances in the measurement of these biomarkers through multiplex technologies. The Least Absolute Shrinkage and Selection Operator (LASSO) is a data analysis method that may be utilized for biomarker selection in these high dimensional data. However, it is unclear which LASSO-type method is preferable when considering data scenarios that may be present in serum biomarker research, such as high correlation between biomarkers, weak associations with the outcome, and sparse number of true signals. The goal of this study was to compare the LASSO to five LASSO-type methods given these scenarios. A simulation study was performed to compare the LASSO, Adaptive LASSO, Elastic Net, Iterated LASSO, Bootstrap-Enhanced LASSO, and Weighted Fusion for the binary logistic regression model. The simulation study was designed to reflect the data structure of the population-based Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD), specifically the sample size (N = 1000 for total population, 500 for sub-analyses), correlation of biomarkers (0.20, 0.50, 0.80), prevalence of overweight (40%) and obese (12%) outcomes, and the association of outcomes with standardized serum biomarker concentrations (log-odds ratio = 0.05-1.75). Each LASSO-type method was then applied to the TESAOD data of 306 overweight, 66 obese, and 463 normal-weight subjects with a panel of 86 serum biomarkers. Based on the simulation study, no method had an overall superior performance. The Weighted Fusion correctly identified more true signals, but incorrectly included more noise variables. The LASSO and Elastic Net correctly identified many true signals and excluded more noise variables. In the application study, biomarkers of overweight and obesity selected by all methods were Adiponectin, Apolipoprotein H, Calcitonin, CD14, Complement 3, C-reactive protein, Ferritin
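    Two of the compared penalties, the LASSO and the Elastic Net, can be contrasted directly in scikit-learn. The sketch below is illustrative only: the correlation level (0.5) mirrors one of the study's simulation scenarios, but the sample size, number of "biomarkers", and penalty strengths are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, p, rho = 500, 30, 0.5
# Equicorrelated "biomarkers": shared factor induces pairwise correlation rho.
base = rng.normal(size=(n, 1))
X = np.sqrt(rho) * base + np.sqrt(1 - rho) * rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = 1.0                     # three true signals
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ beta)))).astype(int)

lasso = LogisticRegression(penalty="l1", solver="saga", C=0.3,
                           max_iter=10000).fit(X, y)
enet = LogisticRegression(penalty="elasticnet", solver="saga", C=0.3,
                          l1_ratio=0.5, max_iter=10000).fit(X, y)
print((lasso.coef_ != 0).sum(), (enet.coef_ != 0).sum())
```

With correlated predictors the Elastic Net tends to spread weight over correlated groups, while the plain LASSO tends to pick representatives.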

  9. Fused Adaptive Lasso for Spatial and Temporal Quantile Function Estimation

    KAUST Repository

    Sun, Ying; Wang, Huixia J.; Fuentes, Montserrat

    2015-01-01

    and temporal data with a fused adaptive Lasso penalty to accommodate the dependence in space and time. This method penalizes the difference among neighboring quantiles, hence it is desirable for applications with features ordered in time or space without

  10. Factors associated with performing tuberculosis screening of HIV-positive patients in Ghana: LASSO-based predictor selection in a large public health data set

    Directory of Open Access Journals (Sweden)

    Susanne Mueller-Using

    2016-07-01

    Full Text Available Abstract Background The purpose of this study is to propose the Least Absolute Shrinkage and Selection Operators procedure (LASSO) as an alternative to conventional variable selection models, as it allows for easy interpretation and handles multicollinearities. We developed a model on the basis of LASSO-selected parameters in order to link associated demographical, socio-economical, clinical and immunological factors to performing tuberculosis screening in HIV-positive patients in Ghana. Methods Applying the LASSO method and multivariate logistic regression analysis on a large public health data set, we selected relevant predictors related to tuberculosis screening. Results One thousand ninety-five patients infected with HIV were enrolled into this study, with 691 (63.2 %) of them having tuberculosis screening documented in their patient folders. Predictors found to be significantly associated with performance of tuberculosis screening can be classified into factors related to the clinician’s perception of the clinical state, as well as those related to PLHIV’s awareness. These factors include newly diagnosed HIV infections (n = 354, 32.42 %; aOR 1.84), current CD4+ T cell count (aOR 0.92), non-availability of HIV type (n = 787, 72.07 %; aOR 0.56), chronic cough (n = 32, 2.93 %; aOR 5.07), intake of co-trimoxazole (n = 271, 24.82 %; aOR 2.31), vitamin supplementation (n = 220, 20.15 %; aOR 2.64), as well as the use of mosquito bed nets (n = 613, 56.14 %; aOR 1.53). Conclusions Accelerated TB screening among newly diagnosed HIV-patients indicates that application of the WHO screening form for intensifying tuberculosis case finding among HIV-positive individuals in resource-limited settings is increasingly adopted. However, screening for TB in PLHIV is still impacted by clinician’s perception of patient’s health state and PLHIV’s health awareness. Education of staff, counselling of PLHIV and sufficient financing are
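    The two-stage workflow described (LASSO screening followed by a multivariate logistic model reporting adjusted odds ratios) can be sketched as follows. All data and variable positions are synthetic; only the cohort size echoes the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n, p = 1095, 15                        # sample size echoes the study's cohort
X = rng.normal(size=(n, p))
logit = 0.8 * X[:, 0] - 0.6 * X[:, 1]  # two genuinely associated predictors
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Stage 1: L1-penalized logistic fit screens the candidate predictors.
screen = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
kept = np.flatnonzero(screen.coef_[0])

# Stage 2: unpenalized multivariate refit on the survivors gives
# adjusted odds ratios (aOR) per unit increase of each predictor.
refit = LogisticRegression(C=1e6, max_iter=5000).fit(X[:, kept], y)
aOR = np.exp(refit.coef_[0])
print(kept, np.round(aOR, 2))
```

Refitting without the penalty removes the LASSO's shrinkage bias from the reported odds ratios.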

  11. Logistic LASSO regression for the diagnosis of breast cancer using clinical demographic data and the BI-RADS lexicon for ultrasonography

    Directory of Open Access Journals (Sweden)

    Sun Mi Kim

    2018-01-01

    Full Text Available Purpose The aim of this study was to compare the performance of image analysis for predicting breast cancer using two distinct regression models and to evaluate the usefulness of incorporating clinical and demographic data (CDD) into the image analysis in order to improve the diagnosis of breast cancer. Methods This study included 139 solid masses from 139 patients who underwent an ultrasonography-guided core biopsy and had available CDD between June 2009 and April 2010. Three breast radiologists retrospectively reviewed 139 breast masses and described each lesion using the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We applied and compared two regression methods, stepwise logistic (SL) regression and logistic least absolute shrinkage and selection operator (LASSO) regression, in which the BI-RADS descriptors and CDD were used as covariates. We investigated the performances of these regression methods and the agreement of radiologists in terms of test misclassification error and the area under the curve (AUC) of the tests. Results Logistic LASSO regression was superior (P<0.05) to SL regression, regardless of whether CDD was included in the covariates, in terms of test misclassification error (0.234 vs. 0.253 without CDD; 0.196 vs. 0.258 with CDD) and AUC (0.785 vs. 0.759 without CDD; 0.873 vs. 0.735 with CDD). However, it was inferior (P<0.05) to the agreement of the three radiologists in terms of test misclassification error (0.234 vs. 0.168 without CDD; 0.196 vs. 0.088 with CDD) and the AUC without CDD (0.785 vs. 0.844, P<0.001), but was comparable to the AUC with CDD (0.873 vs. 0.880, P=0.141). Conclusion Logistic LASSO regression based on BI-RADS descriptors and CDD showed better performance than SL in predicting the presence of breast cancer. The use of CDD as a supplement to the BI-RADS descriptors significantly improved the prediction of breast cancer using logistic LASSO regression.
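    The two evaluation metrics used above (test misclassification error and AUC, with and without clinical covariates) can be computed as in this hedged sketch; the binary "descriptor" features and coefficient values are simulated, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 400
birads = rng.integers(0, 2, size=(n, 8)).astype(float)   # descriptor dummies
age = rng.normal(50, 10, size=(n, 1))                    # clinical covariate
logit = 1.5 * birads[:, 0] + 1.0 * birads[:, 1] + 0.04 * (age[:, 0] - 50) - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

errs, aucs = [], []
for X in (birads, np.hstack([birads, age])):             # without / with CDD
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(Xtr, ytr)
    errs.append(float(np.mean(clf.predict(Xte) != yte)))          # misclassification
    aucs.append(roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])) # discrimination
print([round(e, 3) for e in errs], [round(a, 3) for a in aucs])
```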

  12. Rotorwash Operational Footprint Modeling

    Science.gov (United States)

    2014-07-01

    Francis, J. K., and Gillespie, A., “Relating Gust Speed to Tree Damage in Hurricane Hugo, 1989,” Journal of Arboriculture, November 1993... statement has been found to be correct. In many parts of the United States, the requirements for hurricane... On August 18, 1983, Hurricane Alicia struck downtown Houston, Texas. Researchers were allowed into downtown Houston the following day to help survey

  13. Operations and Modeling Analysis

    Science.gov (United States)

    Ebeling, Charles

    2005-01-01

    The Reliability and Maintainability Analysis Tool (RMAT) provides NASA the capability to estimate reliability and maintainability (R&M) parameters and operational support requirements for proposed space vehicles based upon relationships established from both aircraft and Shuttle R&M data. RMAT has matured both in its underlying database and in its level of sophistication in extrapolating this historical data to satisfy proposed mission requirements, maintenance concepts and policies, and type of vehicle (i.e., ranging from aircraft-like to Shuttle-like). However, a companion analysis tool, the Logistics Cost Model (LCM), has not reached the same level of maturity as RMAT due, in large part, to nonexistent or outdated cost estimating relationships and underlying cost databases, and its almost exclusive dependence on Shuttle operations and logistics cost input parameters. As a result, the full capability of the RMAT/LCM suite of analysis tools to take a conceptual vehicle and derive its operations and support requirements, along with the resulting operating and support costs, has not been realized.

  14. Lasso and probabilistic inequalities for multivariate point processes

    DEFF Research Database (Denmark)

    Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent

    2015-01-01

    Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select...... for multivariate Hawkes processes are proven, which allows us to check these assumptions by considering general dictionaries based on histograms, Fourier or wavelet bases. Motivated by problems of neuronal activity inference, we finally carry out a simulation study for multivariate Hawkes processes and compare our...... methodology with the adaptive Lasso procedure proposed by Zou in (J. Amer. Statist. Assoc. 101 (2006) 1418–1429). We observe an excellent behavior of our procedure. We rely on theoretical aspects for the essential question of tuning our methodology. Unlike adaptive Lasso of (J. Amer. Statist. Assoc. 101 (2006...
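    The adaptive Lasso referenced above (Zou 2006) can be sketched in the simpler i.i.d. regression setting rather than for point processes: a pilot fit supplies data-driven weights that rescale the L1 penalty, so weak coefficients are penalized harder. Penalty levels and data below are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(7)
n, p = 200, 20
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[[0, 3]] = [2.0, -1.5]    # two true signals
y = X @ beta + rng.normal(0, 1.0, size=n)

pilot = Ridge(alpha=1.0).fit(X, y).coef_          # pilot estimate
w = 1.0 / (np.abs(pilot) + 1e-8)                  # large weight = harsh penalty
# Weighted Lasso via column rescaling: fit on X / w, then undo the scaling,
# which is equivalent to penalizing sum_j w_j * |beta_j|.
fit = Lasso(alpha=0.1, max_iter=50000).fit(X / w, y)
beta_hat = fit.coef_ / w
selected = np.flatnonzero(np.abs(beta_hat) > 1e-8)
print(selected)
```

Unlike the plain Lasso, the weighted version enjoys selection consistency under suitable conditions, which is the motivation for the data-driven weights in the paper.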

  15. Academic Education Chain Operation Model

    OpenAIRE

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling the educational processes as a value added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes towards reference models and suggest an Academic Education Chain Operation Model. The model can be used to develop an Academic Chain Operation Reference Model.

  16. OPTIMAL WAVELENGTH SELECTION ON HYPERSPECTRAL DATA WITH FUSED LASSO FOR BIOMASS ESTIMATION OF TROPICAL RAIN FOREST

    Directory of Open Access Journals (Sweden)

    T. Takayama

    2016-06-01

    Full Text Available Above-ground biomass prediction of tropical rain forest using remote sensing data is of paramount importance to continuous large-area forest monitoring. Hyperspectral data can provide rich spectral information for biomass prediction; however, the prediction accuracy is affected by a small-sample-size problem, which widely exists as overfitting when using high dimensional data where the number of training samples is smaller than the dimensionality of the samples, owing to the limited time, cost, and human resources available for field surveys. A common approach to addressing this problem is reducing the dimensionality of the dataset. In addition, acquired hyperspectral data usually have a low signal-to-noise ratio due to narrow bandwidths, and exhibit local or global shifts of peaks due to instrumental instability or small differences in practical measurement conditions. In this work, we propose a methodology based on fused lasso regression that selects optimal bands for the biomass prediction model while encouraging sparsity and grouping; the sparsity provides the dimensionality reduction that addresses the small-sample-size problem, and the grouping addresses the noise and peak-shift problems. The prediction model provided higher accuracy, with a root-mean-square error (RMSE) of 66.16 t/ha in cross-validation, than the other methods: multiple linear analysis, partial least squares regression, and lasso regression. Furthermore, fusion of spectral and spatial information derived from a texture index increased the prediction accuracy, with an RMSE of 62.62 t/ha. This analysis demonstrates the efficiency of the fused lasso and image texture in biomass estimation of tropical forests.

  17. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are firstly obtained, which include the power spectrum, time-domain statistics, AR model, and the wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses the logistic regression model with a Sparse Group Lasso penalized function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent methods. The best parameters and feature subset are selected by using 10-fold cross-validation. Finally, the test data is classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from international BCI Competition IV reached 84.72%.
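    The group-level mechanism that lets whole channels drop out together can be sketched with a plain group Lasso (without the additional within-group sparsity of the Sparse Group Lasso, and solved here by proximal gradient rather than the paper's blockwise coordinate descent). Group sizes, penalty level, and data are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n, n_groups, gsize = 300, 6, 4                 # e.g. 6 channels x 4 features each
p = n_groups * gsize
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:gsize] = 1.0         # only "channel" 0 is informative
y = X @ beta + rng.normal(0, 0.5, size=n)

lam = 20.0                                     # group-penalty strength (invented)
step = 1.0 / np.linalg.norm(X, 2) ** 2         # safe gradient step size
b = np.zeros(p)
for _ in range(2000):
    z = b - step * (X.T @ (X @ b - y))         # gradient step on 0.5*||y - Xb||^2
    for k in range(n_groups):                  # block soft-threshold each group
        blk = z[k * gsize:(k + 1) * gsize]     # in-place view of group k
        nrm = np.linalg.norm(blk)
        blk *= max(0.0, 1.0 - step * lam * np.sqrt(gsize) / max(nrm, 1e-12))
    b = z
active = [k for k in range(n_groups)
          if np.linalg.norm(b[k * gsize:(k + 1) * gsize]) > 1e-6]
print(active)
```

Block soft-thresholding zeroes an entire group at once, which is exactly the behavior that performs channel (rather than single-feature) selection.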

  18. Lasso and probabilistic inequalities for multivariate point processes

    OpenAIRE

    Hansen, Niels Richard; Reynaud-Bouret, Patricia; Rivoirard, Vincent

    2012-01-01

    Due to its low computational cost, Lasso is an attractive regularization method for high-dimensional statistical settings. In this paper, we consider multivariate counting processes depending on an unknown function parameter to be estimated by linear combinations of a fixed dictionary. To select coefficients, we propose an adaptive $\\ell_{1}$-penalization methodology, where data-driven weights of the penalty are derived from new Bernstein type inequalities for martingales. Oracle inequalities...

  19. Models of human operators

    International Nuclear Information System (INIS)

    Knee, H.E.; Schryver, J.C.

    1991-01-01

    Models of human behavior and cognition (HB and C) are necessary for understanding the total response of complex systems. Many such models have become available over the past thirty years for various applications. Unfortunately, many potential model users remain skeptical about their practicality, acceptability, and usefulness. Such hesitancy stems in part from disbelief in the ability to model complex cognitive processes and from a belief that relevant human behavior can be adequately accounted for through the use of commonsense heuristics. This paper highlights several models of HB and C and identifies existing and potential applications in an attempt to dispel such notions. (author)

  20. Sparse EEG/MEG source estimation via a group lasso.

    Directory of Open Access Journals (Sweden)

    Michael Lim

    Full Text Available Non-invasive recordings of human brain activity through electroencephalography (EEG) or magnetoencephalography (MEG) are of value for both basic science and clinical applications in sensory, cognitive, and affective neuroscience. Here we introduce a new approach to estimating the intra-cranial sources of EEG/MEG activity measured from extra-cranial sensors. The approach is based on the group lasso, a sparse-prior inverse that has been adapted to take advantage of functionally-defined regions of interest for the definition of physiologically meaningful groups within a functionally-based common space. Detailed simulations using realistic source geometries and data from a human Visual Evoked Potential experiment demonstrate that the group-lasso method has improved performance over traditional ℓ2 minimum-norm methods. In addition, we show that pooling source estimates across subjects over functionally defined regions of interest results in improvements in the accuracy of the source estimates for both the group-lasso and minimum-norm approaches.

  1. Mental models of the operator

    International Nuclear Information System (INIS)

    Stary, I.

    2004-01-01

    A brief explanation is presented of the mental model concept, properties of mental models and fundamentals of mental models theory. Possible applications of such models in nuclear power plants are described in more detail. They include training of power plant operators, research into their behaviour and design of the operator-control process interface. The design of a mental model of an operator working in abnormal conditions due to power plant malfunction is outlined as an example taken from the literature. The model has been created based on analysis of experiments performed on a nuclear power plant simulator, run by a training center. (author)

  2. Academic Education Chain Operation Model

    NARCIS (Netherlands)

    Ruskov, Petko; Ruskov, Andrey

    2007-01-01

    This paper presents an approach for modelling the educational processes as a value added chain. It is an attempt to use a business approach to interpret and compile existing business and educational processes towards reference models and suggest an Academic Education Chain Operation Model. The model

  3. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  4. Capturing the musical brain with Lasso

    DEFF Research Database (Denmark)

    Toiviainen, Petri; Alluri, Vinoo; Brattico, Elvira

    2014-01-01

    accuracy using a leave-one-out cross-validation scheme. The method was applied to functional magnetic resonance imaging (fMRI) data that were collected using a naturalistic paradigm, in which participants' brain responses were recorded while they were continuously listening to pieces of real music...... to be consistent with areas of significant activation observed in previous research using a naturalistic paradigm with fMRI. Of the six musical features considered, five could be significantly predicted for the majority of participants. The areas significantly contributing to the optimal decoding models agreed...

  5. Modelling of Batch Process Operations

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    Here a batch cooling crystalliser is modelled and simulated as is a batch distillation system. In the batch crystalliser four operational modes of the crystalliser are considered, namely: initial cooling, nucleation, crystal growth and product removal. A model generation procedure is shown that s...

  6. Sungsanpin, a lasso peptide from a deep-sea streptomycete.

    Science.gov (United States)

    Um, Soohyun; Kim, Young-Joo; Kwon, Hyuknam; Wen, He; Kim, Seong-Hwan; Kwon, Hak Cheol; Park, Sunghyouk; Shin, Jongheon; Oh, Dong-Chan

    2013-05-24

    Sungsanpin (1), a new 15-amino-acid peptide, was discovered from a Streptomyces species isolated from deep-sea sediment collected off Jeju Island, Korea. The planar structure of 1 was determined by 1D and 2D NMR spectroscopy, mass spectrometry, and UV spectroscopy. The absolute configurations of the stereocenters in this compound were assigned by derivatizations of the hydrolysate of 1 with Marfey's reagents and 2,3,4,6-tetra-O-acetyl-β-d-glucopyranosyl isothiocyanate, followed by LC-MS analysis. Careful analysis of the ROESY NMR spectrum and three-dimensional structure calculations revealed that sungsanpin possesses the features of a lasso peptide: eight amino acids (-Gly(1)-Phe-Gly-Ser-Lys-Pro-Ile-Asp(8)-) that form a cyclic peptide and seven amino acids (-Ser(9)-Phe-Gly-Leu-Ser-Trp-Leu(15)) that form a tail that loops through the ring. Sungsanpin is thus the first example of a lasso peptide isolated from a marine-derived microorganism. Sungsanpin displayed inhibitory activity in a cell invasion assay with the human lung cancer cell line A549.

  7. Operations planning simulation: Model study

    Science.gov (United States)

    1974-01-01

    The use of simulation modeling for the identification of system sensitivities to internal and external forces and variables is discussed. The technique provides a means of exploring alternate system procedures and processes, so that these alternatives may be considered on a mutually comparative basis, permitting the selection of a mode or modes of operation which have potential advantages to the system user and the operator. These advantages are measures of system efficiency: (1) the ability to meet specific schedules for operations, mission, or mission-readiness requirements or performance standards, and (2) the ability to accomplish the objectives within cost-effective limits.

  8. Discovery and replication of gene influences on brain structure using LASSO regression

    Directory of Open Access Journals (Sweden)

    Omid eKohannim

    2012-08-01

    Full Text Available We implemented LASSO (least absolute shrinkage and selection operator) regression to evaluate gene effects in genome-wide association studies (GWAS) of brain images, using an MRI-derived temporal lobe volume measure from 729 subjects scanned as part of the Alzheimer’s Disease Neuroimaging Initiative (ADNI). Sparse groups of SNPs in individual genes were selected by LASSO, which identifies efficient sets of variants influencing the data. These SNPs were considered jointly when assessing their association with neuroimaging measures. We discovered 22 genes that passed genome-wide significance for influencing temporal lobe volume. This was a substantially greater number of significant genes compared to those found with standard, univariate GWAS. These top genes are all expressed in the brain and include genes previously related to brain function or neuropsychiatric disorders such as MACROD2, SORCS2, GRIN2B, MAGI2, NPAS3, CLSTN2, GABRG3, NRXN3, PRKAG2, GAS7, RBFOX1, ADARB2, CHD4 and CDH13. The top genes we identified with this method also displayed significant and widespread post-hoc effects on voxelwise, tensor-based morphometry (TBM) maps of the temporal lobes. The most significantly associated gene was an autism susceptibility gene known as MACROD2. We were able to successfully replicate the effect of the MACROD2 gene in an independent cohort of 564 young, Australian healthy adult twins and siblings scanned with MRI (mean age: 23.8±2.2 SD years). In exploratory analyses, three selected SNPs in the MACROD2 gene were also significantly associated with performance intelligence quotient (PIQ). Our approach powerfully complements univariate techniques in detecting influences of genes on the living brain.
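
    The SNP-group selection described above hinges on the LASSO's soft-thresholding of small coefficients to exactly zero. As a hedged illustration only (synthetic data, not the ADNI pipeline or its genotype coding; `lasso_cd` and the "causal" indices are made up for the sketch), a minimal coordinate-descent LASSO that recovers a sparse set of simulated predictors:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimise 0.5/n * ||y - Xw||^2 + lam * ||w||_1 by cyclic coordinate descent."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j removed
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j / n
            # soft-threshold: small correlations are set exactly to zero
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

rng = np.random.default_rng(0)
n, p = 120, 30
X = rng.standard_normal((n, p))
true_w = np.zeros(p)
true_w[[2, 7, 11]] = [1.5, -2.0, 1.0]       # three simulated "causal" variants
y = X @ true_w + 0.1 * rng.standard_normal(n)

w = lasso_cd(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(w) > 1e-6)  # sparse set of selected predictors
```

    A real GWAS application would additionally choose the penalty by cross-validation and adjust for covariates such as population structure; the sketch only shows the sparsity mechanism.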

  9. Making Deformable Template Models Operational

    DEFF Research Database (Denmark)

    Fisker, Rune

    2000-01-01

    for estimation of the model parameters, which applies a combination of a maximum likelihood and minimum distance criterion. Another contribution is a very fast search based initialization algorithm using a filter interpretation of the likelihood model. These two methods can be applied to most deformable template......Deformable template models are a very popular and powerful tool within the field of image processing and computer vision. This thesis treats this type of models extensively with special focus on handling their common difficulties, i.e. model parameter selection, initialization and optimization....... A proper handling of the common difficulties is essential for making the models operational by a non-expert user, which is a requirement for intensifying and commercializing the use of deformable template models. The thesis is organized as a collection of the most important articles, which has been...

  10. Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso.

    Science.gov (United States)

    Mazumder, Rahul; Hastie, Trevor

    2012-03-01

    We consider the sparse inverse covariance regularization problem or graphical lasso with regularization parameter λ. Suppose the sample covariance graph formed by thresholding the entries of the sample covariance matrix at λ is decomposed into connected components. We show that the vertex-partition induced by the connected components of the thresholded sample covariance graph (at λ) is exactly equal to that induced by the connected components of the estimated concentration graph, obtained by solving the graphical lasso problem for the same λ. This characterizes a very interesting property of a path of graphical lasso solutions. Furthermore, this simple rule, when used as a wrapper around existing algorithms for the graphical lasso, leads to enormous performance gains. For a range of values of λ, our proposal splits a large graphical lasso problem into smaller tractable problems, making it possible to solve an otherwise infeasible large-scale problem. We illustrate the graceful scalability of our proposal via synthetic and real-life microarray examples.
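
    The thresholding rule above is easy to sketch: build a graph with an edge wherever |S_ij| > λ, and its connected components are exactly the blocks into which the graphical lasso solution decomposes. A minimal sketch of the screening step only (the component search, not the per-block graphical lasso solve; function names are my own):

```python
import numpy as np

def thresholded_components(S, lam):
    """Connected components of the graph with an edge (i, j) iff |S_ij| > lam, i != j."""
    p = S.shape[0]
    adj = (np.abs(S) > lam) & ~np.eye(p, dtype=bool)
    labels = -np.ones(p, dtype=int)
    comp = 0
    for seed in range(p):
        if labels[seed] >= 0:
            continue
        stack = [seed]
        labels[seed] = comp
        while stack:                          # depth-first search over the thresholded graph
            i = stack.pop()
            for j in np.flatnonzero(adj[i]):
                if labels[j] < 0:
                    labels[j] = comp
                    stack.append(j)
        comp += 1
    return labels

# Sample covariance with two weakly separated variable groups; at lam = 0.2
# the screening rule splits the problem into two independent sub-problems.
S = np.eye(6)
S[0, 1] = S[1, 0] = 0.9
S[1, 2] = S[2, 1] = 0.8
S[3, 4] = S[4, 3] = 0.7
S[4, 5] = S[5, 4] = 0.6
labels = thresholded_components(S, lam=0.2)
```

    Each component can then be handed to any existing graphical lasso solver independently, which is where the performance gain described in the record comes from.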

  11. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    Science.gov (United States)

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
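
    A hedged sketch of the FIM idea (using a toy exponential-decay model of my own choosing, not the authors' head-tracking model): build the output Jacobian with respect to the parameters by finite differences, form F = JᵀJ/σ², and inspect which parameters carry the most information:

```python
import numpy as np

def model(theta, t):
    """Toy parametric output: a * exp(-b * t) + c (stand-in for a biomechanical model)."""
    a, b, c = theta
    return a * np.exp(-b * t) + c

def fim(theta, t, sigma=1.0, eps=1e-6):
    """Fisher information J^T J / sigma^2, with J from central finite differences."""
    p = len(theta)
    J = np.empty((t.size, p))
    for k in range(p):
        d = np.zeros(p)
        d[k] = eps
        J[:, k] = (model(theta + d, t) - model(theta - d, t)) / (2 * eps)
    return J.T @ J / sigma ** 2

t = np.linspace(0.0, 5.0, 50)
theta = np.array([2.0, 1.5, 0.5])
F = fim(theta, t)
order = np.argsort(np.diag(F))[::-1]   # most informative parameters first
```

    In practice one would select the subset whose FIM submatrix is well conditioned (so the retained parameters are jointly identifiable), rather than ranking by the diagonal alone.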

  12. Operating cost model for local service airlines

    Science.gov (United States)

    Anderson, J. L.; Andrastek, D. A.

    1976-01-01

    Several mathematical models now exist which determine the operating economics for a United States trunk airline. These models are valuable in assessing the impact of introducing new aircraft into an airline's fleet. The use of a trunk airline cost model for a local service airline, however, does not result in representative operating costs. A new model is presented which is representative of the operating conditions and resultant costs for the local service airline. The calculated annual direct and indirect operating costs for two multi-equipment airlines are compared with their actual operating experience.

  13. Dictionary-Based Image Denoising by Fused-Lasso Atom Selection

    Directory of Open Access Journals (Sweden)

    Ao Li

    2014-01-01

    Full Text Available We propose an efficient image denoising scheme based on the fused lasso with dictionary learning. The scheme makes two important contributions. The first is that we learn a patch-based adaptive dictionary by principal component analysis (PCA), clustering the image into many subsets, which better preserves the local geometric structure. The second is that we code the patches in each subset by the fused lasso with the cluster-learned dictionary, and propose an iterative Split Bregman method to solve it rapidly. We demonstrate the capabilities of the scheme with several experiments. The results show that the proposed scheme is competitive with some excellent denoising algorithms.

  14. The DIAMOND Model of Peace Support Operations

    National Research Council Canada - National Science Library

    Bailey, Peter

    2005-01-01

    DIAMOND (Diplomatic And Military Operations in a Non-warfighting Domain) is a high-level stochastic simulation developed at Dstl as a key centerpiece within the Peace Support Operations (PSO) 'modelling jigsaw...

  15. Modeling Operating Modes during Plant Life Cycle

    DEFF Research Database (Denmark)

    Jørgensen, Sten Bay; Lind, Morten

    2012-01-01

    Modelling process plants during normal operation requires a set of basic assumptions to define the desired functionalities which lead to fulfillment of the operational goal(-s) for the plant. However, during start-up and shut-down, as well as during batch operation, an ensemble of interrelated...... modes are required to cover the whole operational window of a process plant including intermediary operating modes. Development of such a model ensemble for a plant would constitute a systematic way of defining the possible plant operating modes and thus provide a platform for also defining a set...... of candidate control structures. The present contribution focuses on development of a model ensemble for a plant with an illustrative example for a bioreactor. Starting from a functional model a process plant may be conceptually designed and qualitative operating models may be developed to cover the different...

  16. Operational characteristics of nuclear power plants - modelling of operational safety

    International Nuclear Information System (INIS)

    Studovic, M.

    1984-01-01

    Based on the operational experience of nuclear power plants, the achieved levels of plant availability, system and component reliability, operational safety and public protection, the nature of disturbances in power plant systems, and the lessons drawn from TMI-2, the paper discusses: examination of design safety for the ultimate assurance of safe operating conditions of the nuclear power plant; the significance of adequate action for keeping process parameters within prescribed limits and meeting reactor cooling requirements; developed systems for measurement, detection and monitoring of all critical parameters in the nuclear steam supply system; the content of theoretical investigations and mathematical modeling of the physical phenomena and processes in nuclear power plant systems and components, as software support for ensuring operational safety and as a new approach in staff education; and the program and progress of the investigation of some physical phenomena and mathematical modeling of nuclear plant transients, prepared at the Faculty of Mechanical Engineering in Belgrade. (author)

  17. Economic sustainability in franchising: a model to predict franchisor success or failure

    OpenAIRE

    Calderón Monge, Esther; Pastor Sanz, Ivan .; Huerta Zavala, Pilar Angélica

    2017-01-01

    As a business model, franchising makes a major contribution to gross domestic product (GDP). A model that predicts franchisor success or failure is therefore necessary to ensure economic sustainability. In this study, such a model was developed by applying Lasso regression to a sample of franchises operating between 2002 and 2013. For franchises with the highest likelihood of survival, the franchise fees and the ratio of company-owned to franchised outlets were suited to the age ...

  18. Operations management research methodologies using quantitative modeling

    NARCIS (Netherlands)

    Bertrand, J.W.M.; Fransoo, J.C.

    2002-01-01

    Gives an overview of quantitative model-based research in operations management, focusing on research methodology. Distinguishes between empirical and axiomatic research, and furthermore between descriptive and normative research. Presents guidelines for doing quantitative model-based research in

  19. LASSO-ligand activity by surface similarity order: a new tool for ligand based virtual screening.

    Science.gov (United States)

    Reid, Darryl; Sadjad, Bashir S; Zsoldos, Zsolt; Simon, Aniko

    2008-01-01

    Virtual Ligand Screening (VLS) has become an integral part of the drug discovery process for many pharmaceutical companies. Ligand similarity searches provide a very powerful method of screening large databases of ligands to identify possible hits. If these hits belong to new chemotypes the method is deemed even more successful. eHiTS LASSO uses a new interacting surface point types (ISPT) molecular descriptor that is generated from the 3D structure of the ligand, but unlike most 3D descriptors it is conformation independent. Combined with a neural network machine learning technique, LASSO screens molecular databases at an ultra fast speed of 1 million structures in under 1 min on a standard PC. The results obtained from eHiTS LASSO trained on relatively small training sets of just 2, 4 or 8 actives are presented using the diverse directory of useful decoys (DUD) dataset. It is shown that over a wide range of receptor families, eHiTS LASSO is consistently able to enrich screened databases and provides scaffold hopping ability.

  20. On the Oracle Property of the Adaptive LASSO in Stationary and Nonstationary Autoregressions

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl

    We show that the Adaptive LASSO is oracle efficient in stationary and non-stationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables at the same asymptotic efficiency...
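
    The Adaptive LASSO reweights the ℓ1 penalty by a consistent pilot estimate, which is what yields the oracle property. A minimal sketch under assumed i.i.d. regression data (not the autoregressive setting of this record): rescale each column by |β̂_OLS,j|, solve an ordinary LASSO on the rescaled design, and map the coefficients back:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=300):
    """Plain LASSO by cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    w = np.zeros(p)
    sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ (y - X @ w + X[:, j] * w[j]) / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / sq[j]
    return w

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[[0, 3]] = [1.0, -0.8]                    # true sparsity pattern
y = X @ beta + 0.2 * rng.standard_normal(n)

# Stage 1: OLS pilot estimate supplies the adaptive weights 1/|beta_ols_j|
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
# Stage 2: the weighted problem reduces to a plain LASSO on rescaled columns
Xs = X * np.abs(beta_ols)
w = lasso_cd(Xs, y, lam=0.05)
beta_alasso = w * np.abs(beta_ols)            # map back to the original scale
support = np.flatnonzero(np.abs(beta_alasso) > 1e-8)
```

    Because small pilot coefficients shrink their columns toward zero, irrelevant variables face an effectively huge penalty, which is what drives the correct sparsity-pattern selection claimed in the record.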

  1. Why operational risk modelling creates inverse incentives

    NARCIS (Netherlands)

    Doff, R.

    2015-01-01

    Operational risk modelling has become commonplace in large international banks and is gaining popularity in the insurance industry as well. This is partly due to financial regulation (Basel II, Solvency II). This article argues that operational risk modelling is fundamentally flawed, despite efforts

  2. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  3. Model improves oil field operating cost estimates

    International Nuclear Information System (INIS)

    Glaeser, J.L.

    1996-01-01

    A detailed operating cost model that forecasts operating cost profiles toward the end of a field's life should be constructed for testing depletion strategies and plans for major oil fields. Developing a good understanding of future operating cost trends is important. Incorrectly forecasting the trend can result in bad decision making regarding investments and reservoir operating strategies. Recent projects show that significant operating expense reductions can be made in the latter stages of field depletion without significantly reducing the expected ultimate recoverable reserves. Predicting future operating cost trends is especially important for operators who are currently producing a field and must forecast the economic limit of the property. For reasons presented in this article, it is usually not correct to assume that operating expense stays fixed in dollar terms throughout the lifetime of a field, nor is it correct to assume that operating costs stay fixed on a dollar-per-barrel basis.

  4. Modeling operators' emergency response time for chemical processing operations.

    Science.gov (United States)

    Murray, Susan L; Harputlu, Emrah; Mentzer, Ray A; Mannan, M Sam

    2014-01-01

    Operators have a crucial role during emergencies at a variety of facilities such as chemical processing plants. When an abnormality occurs in the production process, the operator often has limited time to either take corrective actions or evacuate before the situation becomes deadly. It is crucial that system designers and safety professionals can estimate the time required for a response before procedures and facilities are designed and operations are initiated. There are existing industrial engineering techniques to establish time standards for tasks performed at a normal working pace. However, it is reasonable to expect the time required to take action in emergency situations will be different than working at a normal production pace. It is possible that in an emergency, operators will act faster compared to a normal pace. It would be useful for system designers to be able to establish a time range for operators' response times for emergency situations. This article develops a modeling approach to estimate the time standard range for operators taking corrective actions or following evacuation procedures in emergency situations. This will aid engineers and managers in establishing time requirements for operators in emergency situations. The methodology used for this study combines a well-established industrial engineering technique for determining time requirements (predetermined time standard system) and adjustment coefficients for emergency situations developed by the authors. Numerous videos of workers performing well-established tasks at a maximum pace were studied. As an example, one of the tasks analyzed was pit crew workers changing tires as quickly as they could during a race. The operations in these videos were decomposed into basic, fundamental motions (such as walking, reaching for a tool, and bending over) by studying the videos frame by frame. A comparison analysis was then performed between the emergency pace and the normal working pace operations.

  5. A Novel SCCA Approach via Truncated ℓ1-norm and Truncated Group Lasso for Brain Imaging Genetics.

    Science.gov (United States)

    Du, Lei; Liu, Kefei; Zhang, Tuo; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Han, Junwei; Guo, Lei; Saykin, Andrew J; Shen, Li

    2017-09-18

    Brain imaging genetics, which studies the linkage between genetic variations and structural or functional measures of the human brain, has become increasingly important in recent years. Discovering the bi-multivariate relationship between genetic markers such as single-nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is one major task in imaging genetics. Sparse Canonical Correlation Analysis (SCCA) has been a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants to induce sparsity. The ℓ0-norm penalty is a perfect sparsity-inducing tool which, however, is an NP-hard problem. In this paper, we propose the truncated ℓ1-norm penalized SCCA to improve the performance and effectiveness of the ℓ1-norm based SCCA methods. In addition, we propose an efficient optimization algorithm to solve this novel SCCA problem. The proposed method is an adaptive shrinkage method via tuning τ. It can avoid time-intensive parameter tuning if given a reasonably small τ. Furthermore, we extend it to the truncated group lasso (TGL), and propose the TGL-SCCA model to improve the group-lasso-based SCCA methods. The experimental results, compared with four benchmark methods, show that our SCCA methods identify better or similar correlation coefficients, and better canonical loading profiles than the competing methods. This demonstrates the effectiveness and efficiency of our methods in discovering interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/~shenlab/tools/tlpscca/.

  6. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    Science.gov (United States)

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods including conventional LASSO, Bolasso, stepwise and stability selection models were evaluated using intensive simulation. In addition, methods were compared by using an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. In total, the two newly proposed procedures were stable with respect to various scenarios of simulation, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In empirical analysis, the proposed procedures yielding a sparse set of hepatitis B infection-relevant factors gave the best predictive performance and showed that the procedures were able to select a more stringent set of factors. The individual history of hepatitis B vaccination, family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of

  7. Glass operational file. Operational models and integration calculations

    International Nuclear Information System (INIS)

    Ribet, I.

    2004-01-01

    This document presents the operational choices of dominating phenomena, hypotheses, equations and numerical data of the parameters used in the two operational models elaborated for the calculation of the glass source terms with respect to the waste packages considered: existing packages (R7T7, AVM and CEA glasses) and future ones (UOX2, UOX3, UMo, others). The overall operational choices are justified and demonstrated, and a critical analysis of the approach is systematically proposed. The use of the operational model (OPM) V0→Vr, realistic, conservative and robust, is recommended for glasses with a high thermal and radioactive load, which represent the main part of the vitrified wastes. The OPM V0S, much more overestimating but faster to parameterize, can be used for the long-term behaviour forecasting of glasses with low thermal and radioactive load, considering today's lack of knowledge for the parameterization of a V0→Vr-type OPM. Efficiency estimations have been made for R7T7 glasses (OPM V0→Vr) and AVM glasses (OPM V0S), which correspond to more than 99.9% of the vitrified waste packages' activity. The very contrasted results obtained illustrate the importance of the choice of operational models: in conditions representative of a geologic disposal, the estimated lifetime of R7T7-type packages exceeds several hundred thousand years. Even though the estimated lifetime of AVM packages is much shorter (because of the overestimating character of the OPM V0S), the released potential radiotoxicity is of the same order as that of R7T7 packages. (J.S.)

  8. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2015-10-01

    Full Text Available would be needed by a Cyber Security Operations Centre in order to perform offensive cyber operations?". The analysis was performed, using as a springboard seven models of cyber-attack, and resulted in the development of what is described as a canonical...

  9. Modeling Control Situations in Power System Operations

    DEFF Research Database (Denmark)

    Saleem, Arshad; Lind, Morten; Singh, Sri Niwas

    2010-01-01

    for intelligent operation and control must represent system features, so that information from measurements can be related to possible system states and to control actions. These general modeling requirements are well understood, but it is, in general, difficult to translate them into a model because of the lack...... of explicit principles for model construction. This paper presents a work on using explicit means-ends model based reasoning about complex control situations which results in maintaining consistent perspectives and selecting appropriate control action for goal driven agents. An example of power system......Increased interconnection and loading of the power system along with deregulation has brought new challenges for electric power system operation, control and automation. Traditional power system models used in intelligent operation and control are highly dependent on the task purpose. Thus, a model...

  10. Quark shell model using projection operators

    International Nuclear Information System (INIS)

    Ullah, N.

    1988-01-01

    Using projection operators in the quark shell model, the wave functions for the proton are calculated, and expressions are derived for calculating the wave function of the neutron as well as the magnetic moments of the proton and neutron. (M.G.B.)

  11. Visualization study of operators' plant knowledge model

    International Nuclear Information System (INIS)

    Kanno, Tarou; Furuta, Kazuo; Yoshikawa, Shinji

    1999-03-01

    Nuclear plants are typically very complicated systems, and extremely high levels of safety are required in their operation. Since it is never possible to include all possible anomaly scenarios in an education/training curriculum, plant knowledge formation is desired for operators to enable them to act against unexpected anomalies based on knowledge-based decision making. The authors have conducted a study on operators' plant knowledge models for the purpose of supporting operators' efforts in forming this kind of plant knowledge. In this report, an integrated plant knowledge model consisting of a configuration space, causality space, goal space and status space is proposed. The authors examined the appropriateness of this model and developed a prototype system to support knowledge formation by visualizing the operators' knowledge model and the decision-making process in knowledge-based actions with this model on a software system. Finally, the feasibility of this prototype as a supportive method in operator education/training to enhance operators' ability in knowledge-based performance has been evaluated. (author)

  12. Relaxed memory models: an operational approach

    OpenAIRE

    Boudol , Gérard; Petri , Gustavo

    2009-01-01

    International audience; Memory models define an interface between programs written in some language and their implementation, determining which behaviour the memory (and thus a program) is allowed to have in a given model. A minimal guarantee memory models should provide to the programmer is that well-synchronized, that is, data-race free code has a standard semantics. Traditionally, memory models are defined axiomatically, setting constraints on the order in which memory operations are allow...

  13. Study on modeling of operator's learning mechanism

    International Nuclear Information System (INIS)

    Yoshimura, Seichi; Hasegawa, Naoko

    1998-01-01

    One effective method to analyze the causes of human errors is to model human behavior and to simulate it. The Central Research Institute of Electric Power Industry (CRIEPI) has developed an operator team behavior simulation system called SYBORG (Simulation System for the Behavior of an Operating Group) to analyze human errors and to establish countermeasures for them. Since the operator behavior model that composes SYBORG has no learning mechanism and its plant knowledge is fixed, it can neither take suitable actions when unknown situations occur nor learn anything from experience. For actual operators, however, learning is an essential human factor in enhancing their ability to diagnose plant anomalies. In this paper, Q-learning with 1/f fluctuation was proposed as a learning mechanism for an operator, and a simulation using the mechanism was conducted. The results showed the effectiveness of the learning mechanism. (author)
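
    As an illustration of plain tabular Q-learning only — the 1/f-fluctuation exploration term and the SYBORG plant model are omitted, and the toy task is my own — the following sketch learns a greedy policy on a small chain task. Because Q-learning is off-policy, even a purely random behaviour policy suffices to learn the greedy values:

```python
import numpy as np

# Toy episodic task: states 0..4 on a chain; action 1 moves right toward the
# goal state 4 (reward 1), action 0 moves left. Tabular off-policy Q-learning.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
gamma, alpha = 0.9, 0.5
Q = np.zeros((n_states, n_actions))

for _ in range(300):                           # episodes
    s = 0
    for _ in range(20):                        # step limit per episode
        a = int(rng.integers(n_actions))       # random exploration
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        done = s2 == n_states - 1
        r = 1.0 if done else 0.0
        target = r if done else r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])  # temporal-difference update
        s = s2
        if done:
            break

policy = Q.argmax(axis=1)                      # greedy policy: 1 = move right
```

    In the paper's setting, states would correspond to perceived plant situations and actions to operator responses; the 1/f-fluctuation term replaces the uniform exploration used here.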

  14. Risk management model of winter navigation operations

    International Nuclear Information System (INIS)

    Valdez Banda, Osiris A.; Goerlandt, Floris; Kuzmin, Vladimir; Kujala, Pentti; Montewka, Jakub

    2016-01-01

    The wintertime maritime traffic operations in the Gulf of Finland are managed through the Finnish–Swedish Winter Navigation System. This establishes the requirements and limitations for vessels navigating when ice covers this area. During winter navigation in the Gulf of Finland, the largest risk stems from accidental ship collisions, which may also trigger oil spills. In this article, a model for managing the risk of winter navigation operations is presented. The model analyses the probability of oil spills derived from collisions involving oil tanker vessels and other vessel types. The model structure is based on the steps provided in the Formal Safety Assessment (FSA) by the International Maritime Organization (IMO) and adapted into a Bayesian Network model. The results indicate that independent ship navigation and convoys are the operations with the highest probability of oil spills. Minor spills are most probable, while major oil spills were found to be very unlikely, though possible. - Highlights: •A model to assess and manage the risk of winter navigation operations is proposed. •The risks of oil spills in winter navigation in the Gulf of Finland are analysed. •The model assesses and prioritizes actions to control the risk of the operations. •The model suggests navigational training as the most efficient risk control option.

  15. A COMPARISON OF LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR AND PARTIAL LEAST SQUARES ANALYSES (Case Study: Microarray Data)

    Directory of Open Access Journals (Sweden)

    KADEK DWI FARMANI

    2012-09-01

    Full Text Available Linear regression analysis is a parametric statistical method that exploits the relationship between two or more quantitative variables. In linear regression analysis, several assumptions must be met: the errors are normally distributed, uncorrelated, and of constant (homogeneous) variance. Some conditions prevent these assumptions from being met, for example correlation between the independent variables (multicollinearity) or constraints on the number of observations relative to the number of independent variables. When the number of samples obtained is smaller than the number of independent variables, the data are called microarray data. Least Absolute Shrinkage and Selection Operator (LASSO) and Partial Least Squares (PLS) are statistical methods that can be used to overcome microarray data, overfitting, and multicollinearity. Given this, a study comparing the LASSO and PLS methods is warranted. This study uses data on coronary heart disease and stroke patients, which are microarray data containing multicollinearity. For these data, in which most independent variables are only weakly correlated with one another, the LASSO method produces a better model than PLS as judged by RMSEP.

  16. The Launch Systems Operations Cost Model

    Science.gov (United States)

    Prince, Frank A.; Hamaker, Joseph W. (Technical Monitor)

    2001-01-01

    One of NASA's primary missions is to reduce the cost of access to space while simultaneously increasing safety. A key component, and one of the least understood, is the recurring operations and support cost for reusable launch systems. In order to predict these costs, NASA, under the leadership of the Independent Program Assessment Office (IPAO), has commissioned the development of a Launch Systems Operations Cost Model (LSOCM). LSOCM is a tool to predict the operations & support (O&S) cost of new and modified reusable (and partially reusable) launch systems. The requirements are to predict the non-recurring cost for the ground infrastructure and the recurring cost of maintaining that infrastructure, performing vehicle logistics, and performing the O&S actions to return the vehicle to flight. In addition, the model must estimate the time required to cycle the vehicle through all of the ground processing activities. The current version of LSOCM is an amalgamation of existing tools, leveraging our understanding of shuttle operations cost with a means of predicting how the maintenance burden will change as the vehicle becomes more aircraft-like. The use of the Conceptual Operations Manpower Estimating Tool/Operations Cost Model (COMET/OCM) provides a solid point of departure based on shuttle and expendable launch vehicle (ELV) experience. The incorporation of the Reliability and Maintainability Analysis Tool (RMAT), as expressed by a set of response surface model equations, gives a method for estimating how changing launch system characteristics affects cost and cycle time as compared to today's shuttle system. Plans are being made to improve the model. The development team will be spending the next few months devising a structured methodology that will enable verified and validated algorithms to give accurate cost estimates. To assist in this endeavor the LSOCM team is part of an Agency wide effort to combine resources with other cost and operations professionals to

  17. AN ANALYTIC OUTLOOK OF THE MADRIGAL MORO LASSO AL MIO DUOLO BY GESUALDO DA VENOSA

    Directory of Open Access Journals (Sweden)

    MURARU AUREL

    2015-09-01

    Full Text Available The analysis of the madrigal Moro lasso al mio duolo reveals its melancholic, thoughtful and grieving atmosphere, generating shady, silent, sometimes dark soundscapes. Gesualdo shapes the polyphony through chromatic licenses in order to create a tense musical discourse, permanently yearning for stability and balance amidst a harmonic construction lacking any attempt at resolution. Thus the strange harmonies of Gesualdo take shape, giving birth to a unique musical style, full of dissonances and endless musical tension.

  18. LASSO observations at McDonald and OCA/CERGA: A preliminary analysis

    Science.gov (United States)

    Veillet, CH.; Fridelance, P.; Feraudy, D.; Boudon, Y.; Shelus, P. J.; Ricklefs, R. L.; Wiant, J. R.

    1993-01-01

    The Laser Synchronization from Synchronous Orbit (LASSO) observations between the USA and Europe were made possible by the move of Meteosat 3/P2 toward 50 deg W. Two Lunar Laser Ranging stations participated in the observations: the MLRS at McDonald Observatory (Texas, USA) and OCA/CERGA (Grasse, France). Common sessions have been performed since 30 Apr. 1992 and will continue until the next move of Meteosat 3/P2 further west (planned for January 1993). The preliminary analysis of the data collected by the end of Nov. 1992 shows that the precision obtainable from LASSO is better than 100 ps, the accuracy depending on how well the stations maintain their time metrology as well as on the quality of the calibration (still to be made). To extract such precision from the data, the processing has been changed drastically compared to the initial LASSO data analysis. It takes into account all the measurements made: timings on board and echoes at each station. This complete use of the data dramatically increased confidence in the synchronization results.

  19. Modeling for operational event risk assessment

    International Nuclear Information System (INIS)

    Sattison, M.B.

    1997-01-01

    The U.S. Nuclear Regulatory Commission has been using risk models to evaluate the risk significance of operational events in U.S. commercial nuclear power plants for more than seventeen years. During that time, the models have evolved in response to advances in risk assessment technology and insights gained with experience. Evaluation techniques fall into two categories, initiating event assessments and condition assessments. The models used for these analyses have become uniquely specialized for just this purpose

  20. Renormalizations and operator expansion in sigma model

    International Nuclear Information System (INIS)

    Terentyev, M.V.

    1988-01-01

    The operator expansion (OPE) is studied for the Green function ⟨n(x)n(0)⟩ at x² → 0 (n(x) is the dynamical field of the σ-model) in the framework of the two-dimensional σ-model with the O(N) symmetry group at large N. As a preliminary step we formulate a renormalization scheme which permits the introduction of an arbitrary intermediate scale μ² in the framework of the 1/N expansion and discuss the factorization (separation) of the small (p < μ) and large (p > μ) momentum regions. It is shown that the definition of the composite local operators and coefficient functions figuring in the OPE is unambiguous only in the leading order of the 1/N expansion, where the solutions extremizing the action dominate. Corrections of order f(μ²)/N (here f(μ²) is the effective interaction constant at the point μ²) to the composite operators and coefficient functions depend essentially on the method of factorizing the high and low momentum regions. It is also shown that contributions to the power corrections of order m²x²f(μ²)/N in the Green function (here m is the dynamical mass-scale factor of the σ-model) arise simultaneously from two sources: from the vacuum expectation value of the composite operator n∂²n and from hard-particle contributions to the coefficient function of the unit operator. Owing to the analogy between the σ-model and QCD, the obtained result indicates theoretical limitations of the sum rule method in QCD. (author)
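
    The short-distance expansion discussed in this abstract has the standard schematic OPE form (a generic textbook expression, not copied from the paper):

```latex
\langle n(x)\,n(0)\rangle \;\underset{x^{2}\to 0}{\sim}\; \sum_{i} C_{i}(x^{2},\mu^{2})\,\langle O_{i}(0)\rangle_{\mu},
\qquad
\langle n\,\partial^{2} n\rangle_{\mu} \;\longrightarrow\; \text{power corrections of order } \frac{m^{2}x^{2}f(\mu^{2})}{N},
```

    where μ² is the intermediate scale, the C_i are the coefficient functions, and the ⟨O_i⟩ are vacuum expectation values of the composite local operators; the second relation records the source of the power corrections named in the abstract.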

  1. Modeling Operations Costs for Human Exploration Architectures

    Science.gov (United States)

    Shishko, Robert

    2013-01-01

    Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQ supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.

  2. Following an Optimal Batch Bioreactor Operations Model

    DEFF Research Database (Denmark)

    Ibarra-Junquera, V.; Jørgensen, Sten Bay; Virgen-Ortíz, J.J.

    2012-01-01

    The problem of following an optimal batch operation model for a bioreactor in the presence of uncertainties is studied. The optimal batch bioreactor operation model (OBBOM) refers to the bioreactor trajectory for nominal cultivation to be optimal. A multiple-variable dynamic optimization of fed...... as the master system which includes the optimal cultivation trajectory for the feed flow rate and the substrate concentration. The “real” bioreactor, the one with unknown dynamics and perturbations, is considered as the slave system. Finally, the controller is designed such that the real bioreactor...

  3. Systems Integration Operations/Logistics Model (SOLMOD)

    International Nuclear Information System (INIS)

    Vogel, L.W.; Joy, D.S.

    1990-01-01

    SOLMOD is a discrete event simulation model written in FORTRAN 77 and operates in a VAX or PC environment. The model emulates the movement and interaction of equipment and radioactive waste as it is processed through the FWMS. SOLMOD can be used to measure the impacts of different operating schedules and rules, system configurations, reliability, availability, maintainability (RAM) considerations, and equipment and other resource availabilities on the performance of processes comprising the FWMS and how these factors combine to determine overall system performance. Model outputs are a series of measurements of the amount and characteristics of waste at selected points in the FWMS and the utilization of resources needed to transport and process the waste. The model results may be reported on a yearly, monthly, weekly, or daily basis to facilitate analysis. 3 refs., 3 figs., 2 tabs
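
    The discrete-event core of such a simulation can be illustrated in a few lines of Python. This is a generic event-queue loop for a single processing facility, not SOLMOD's actual FORTRAN 77 logic; all names and times are hypothetical:

```python
import heapq

def simulate(n_casks, process_time=3.0, arrival_gap=2.0):
    """Minimal discrete-event loop: waste casks arrive at fixed intervals,
    queue for one facility, and are processed in order; returns the
    completion time of each cask (an illustrative stand-in for SOLMOD)."""
    events = [(i * arrival_gap, "arrive", i) for i in range(n_casks)]
    heapq.heapify(events)
    facility_free_at = 0.0
    done = {}
    while events:
        t, kind, cask = heapq.heappop(events)
        if kind == "arrive":
            start = max(t, facility_free_at)        # wait if facility is busy
            facility_free_at = start + process_time
            heapq.heappush(events, (facility_free_at, "depart", cask))
        else:
            done[cask] = t                          # record completion time
    return done

completion = simulate(4)
```

    Measuring queue lengths and resource utilization at selected points of such a loop yields exactly the kind of yearly/monthly/weekly output statistics the abstract describes.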

  4. An operator model-based filtering scheme

    International Nuclear Information System (INIS)

    Sawhney, R.S.; Dodds, H.L.; Schryer, J.C.

    1990-01-01

    This paper presents a diagnostic model developed at Oak Ridge National Laboratory (ORNL) for off-normal nuclear power plant events. The diagnostic model is intended to serve as an embedded module of a cognitive model of the human operator, one application of which could be to assist control room operators in correctly responding to off-normal events by providing a rapid and accurate assessment of alarm patterns and parameter trends. The sequential filter model comprises two distinct subsystems: an alarm analysis followed by an analysis of interpreted plant signals. During the alarm analysis phase, the alarm pattern is evaluated to generate hypotheses of possible initiating events in order of likelihood of occurrence. Each hypothesis is further evaluated through analysis of the current trends of state variables in order to validate/reject (in the form of an increased/decreased certainty factor) the given hypothesis. 7 refs., 4 figs
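
    The two-phase scheme can be sketched as follows. The alarm signatures, trend checks, and certainty-factor increments below are invented for illustration and are not ORNL's actual rules:

```python
# Hypothetical alarm signatures for candidate initiating events.
signatures = {
    "loss_of_feedwater": {"low_sg_level", "feed_pump_trip"},
    "steam_line_break":  {"low_sg_pressure", "high_steam_flow"},
}

def rank_hypotheses(active_alarms):
    """Phase 1: score each initiating-event hypothesis by alarm overlap."""
    scores = {h: len(sig & active_alarms) / len(sig)
              for h, sig in signatures.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def validate(hypothesis, trends, cf):
    """Phase 2: raise/lower the certainty factor from parameter trends."""
    expected = {"loss_of_feedwater": ("sg_level", "falling"),
                "steam_line_break":  ("sg_pressure", "falling")}
    var, direction = expected[hypothesis]
    return min(cf + 0.2, 1.0) if trends.get(var) == direction else max(cf - 0.2, 0.0)

ranked = rank_hypotheses({"low_sg_level", "feed_pump_trip"})
cf = validate(ranked[0][0], {"sg_level": "falling"}, cf=ranked[0][1])
```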

  5. Model Based Autonomy for Robust Mars Operations

    Science.gov (United States)

    Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)

    1998-01-01

    Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require the crew to have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to anticipate all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.

  6. Effective operator treatment of the Lipkin model

    International Nuclear Information System (INIS)

    Abraham, K.J.; Vary, J.P.

    2004-01-01

    We analyze the Lipkin model in the strong coupling limit using effective operator techniques. We present both analytical and numerical results for low energy effective Hamiltonians. We investigate the reliability of various approximations used to simplify the nuclear many body problem, such as the cluster approximation. We demonstrate, in explicit examples, certain limits to the validity of the cluster approximation but caution that these limits may be particular to this model where the interactions are of unlimited range

  7. A practical model for sustainable operational performance

    International Nuclear Information System (INIS)

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

    By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system is presented and discussed.

  8. Business Intelligence Modeling in Launch Operations

    Science.gov (United States)

    Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.

    2005-01-01

    This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations process models, systems and environment models, and cost models as a comprehensive disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root cause from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program.
The proposed technology will produce

  9. Business intelligence modeling in launch operations

    Science.gov (United States)

    Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.

    2005-05-01

    The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns of set of processes rather than organizational units leading to end-to-end automation is becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems. This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long term sustainability. A planning and analysis test bed is needed for evaluation of enterprise level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integrating operations, process models, systems and environment models, and cost models as a comprehensive disciplined

  10. The national operational environment model (NOEM)

    Science.gov (United States)

    Salerno, John J.; Romano, Brian; Geiler, Warren

    2011-06-01

    The National Operational Environment Model (NOEM) is a strategic analysis/assessment tool that provides insight into the complex state space (as a system) that is today's modern operational environment. The NOEM supports baseline forecasts by generating plausible futures based on the current state. It supports what-if analysis by forecasting ramifications of potential "Blue" actions on the environment. The NOEM also supports sensitivity analysis by identifying possible pressure (leverage) points that resolve forecasted instabilities in support of the Commander, and by ranking sensitivities in a list for each leverage point and response. The NOEM can be used to assist Decision Makers, Analysts and Researchers with understanding the inner workings of a region or nation state and the consequences of implementing specific policies, and it offers the ability to plug in new operational environment theories/models as they mature. The NOEM is built upon an open-source, license-free set of capabilities, and aims to provide support for pluggable modules that make up a given model. The NOEM currently has an extensive number of modules (e.g. economic, security & social well-being pieces such as critical infrastructure) completed along with a number of tools to exercise them. The focus this year is on modeling the social and behavioral aspects of a populace within their environment, primarily the formation of various interest groups, their beliefs, their requirements, their grievances, their affinities, and the likelihood of a wide range of their actions, depending on their perceived level of security and happiness. As such, several research efforts are currently underway to model human behavior from a group perspective, in the pursuit of eventual integration and balance of populace needs/demands within their respective operational environment and the capacity to meet those demands.
In this paper we will provide an overview of the NOEM, the need for and a description of its main components

  11. IPF-LASSO: Integrative L1-Penalized Regression with Penalty Factors for Prediction Based on Multi-Omics Data

    Directory of Open Access Journals (Sweden)

    Anne-Laure Boulesteix

    2017-01-01

    Full Text Available As modern biotechnologies advance, it has become increasingly frequent that different modalities of high-dimensional molecular data (termed “omics” data in this paper), such as gene expression, methylation, and copy number, are collected from the same patient cohort to predict the clinical outcome. While prediction based on omics data has been widely studied in the last fifteen years, little has been done in the statistical literature on the integration of multiple omics modalities to select a subset of variables for prediction, which is a critical task in personalized medicine. In this paper, we propose a simple penalized regression method to address this problem by assigning different penalty factors to different data modalities for feature selection and prediction. The penalty factors can be chosen in a fully data-driven fashion by cross-validation or by taking practical considerations into account. In simulation studies, we compare the prediction performance of our approach, called IPF-LASSO (Integrative LASSO with Penalty Factors) and implemented in the R package ipflasso, with the standard LASSO and sparse group LASSO. The use of IPF-LASSO is also illustrated through applications to two real-life cancer datasets. All data and codes are available on the companion website to ensure reproducibility.
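
    The penalty-factor idea can be emulated with a standard LASSO by rescaling each column by its penalty factor, since penalizing pf_j·|β_j| is equivalent to a plain L1 penalty on the coefficients of X_j/pf_j. The paper's own implementation is the R package ipflasso; this Python sketch, with invented data and factors, only illustrates that underlying equivalence:

```python
import numpy as np
from sklearn.linear_model import Lasso

def ipf_lasso(X, y, penalty_factors, alpha=0.1):
    """LASSO with per-feature penalty factors, emulated by column rescaling:
    fit a plain Lasso on X_j / pf_j, then map coefficients back."""
    pf = np.asarray(penalty_factors, dtype=float)
    model = Lasso(alpha=alpha).fit(X / pf, y)
    return model.coef_ / pf   # coefficients on the original scale

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
beta = np.array([2.0, 2.0, 2.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(scale=0.5, size=100)

# Modality 1 = columns 0-2 (lightly penalized), modality 2 = columns 3-5.
coef = ipf_lasso(X, y, penalty_factors=[1, 1, 1, 4, 4, 4])
```

    With a larger penalty factor on the second modality, its (truly null) features are driven to zero while the informative first-modality features survive.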

  12. Operator formulation of the droplet model

    International Nuclear Information System (INIS)

    Lee, B.W.

    1987-01-01

    We study in detail the implications of the operator formulation of the droplet model. The picture of high-energy scattering that emerges from this model attributes the interaction between two colliding particles at high energies to an instantaneous, multiple exchange between two extended charge distributions. Thus the study of charge correlation functions becomes the most important problem in the droplet model. We find that in order for the elastic cross section to have a finite limit at infinite energy, the charge must be a conserved one. In quantum electrodynamics the charge in question is the electric charge. In hadronic physics, we conjecture, it is the baryonic charge. Various arguments for and implications of this hypothesis are presented. We study formal properties of the charge correlation functions that follow from microcausality, T, C, P invariances, and charge conservation. Perturbation expansion of the correlation functions is studied, and their cluster properties are deduced. A cluster expansion of the high-energy T matrix is developed, and the exponentiation of the interaction potential in this scheme is noted. The operator droplet model is put to the test of reproducing the high-energy limit of elastic scattering in quantum electrodynamics found by Cheng and Wu in perturbation theory. We find that the droplet model reproduces exactly the results of Cheng and Wu as to the impact factor. In fact, the ''impact picture'' of Cheng and Wu is completely equivalent to the droplet model in the operator version. An appraisal is made of the possible limitation of the model. (author). 13 refs

  13. System Dynamics Modeling of Multipurpose Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Ebrahim Momeni

    2006-03-01

    Full Text Available System dynamics, a feedback-based object-oriented simulation approach, not only represents complex dynamic systems in a realistic way but also allows the involvement of end users in model development, increasing their confidence in the modeling process. The increased speed of model development, the possibility of group model development, the effective communication of model results, and the trust developed in the model due to user participation are the main strengths of this approach. The ease of model modification in response to changes in the system and the ability to perform sensitivity analysis make this approach more attractive than systems analysis techniques for modeling water management systems. In this study, a system dynamics model was developed for the Zayandehrud basin in central Iran. This model contains the river basin, dam reservoir, plains, irrigation systems, and groundwater. The current operation rule is conjunctive use of ground and surface water. The allocation factor for each irrigation system is computed based on feedback from groundwater storage in its zone. Deficit water is extracted from groundwater. The results show that applying better rules can not only satisfy all demands, such as the Gawkhuni swamp environmental demand, but can also prevent groundwater level drawdown in the future.
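
    The feedback structure of such a reservoir model rests on a simple storage mass balance stepped through time. A toy sketch follows; the capacity, demand, and inflows are invented and this is not the Zayandehrud model:

```python
def simulate_reservoir(inflows, demand, capacity=100.0, s0=50.0):
    """Toy monthly mass balance: release meets demand when storage allows,
    and any storage above capacity spills (illustrative only)."""
    storage, releases, spills = s0, [], []
    for q in inflows:
        storage += q                          # inflow
        release = min(demand, storage)        # supply deficit if storage is low
        storage -= release
        spill = max(0.0, storage - capacity)  # overflow above capacity
        storage -= spill
        releases.append(release)
        spills.append(spill)
    return releases, spills, storage

releases, spills, final_storage = simulate_reservoir(
    inflows=[30, 80, 120, 10], demand=40.0)
```

    In a full system dynamics model, the release rule would itself depend on feedback from state variables (e.g. groundwater storage in each irrigation zone), which is what the allocation factor in the abstract encodes.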

  14. Pierced Lasso Bundles are a new class of knot-like motifs.

    Directory of Open Access Journals (Sweden)

    Ellinor Haglund

    2014-06-01

    Full Text Available A four-helix bundle is a well-characterized motif often used as a target for designed pharmaceutical therapeutics and nutritional supplements. Recently, we discovered a new structural complexity within this motif created by a disulphide bridge in the long-chain helical bundle cytokine leptin. When oxidized, leptin contains a disulphide bridge creating a covalent loop through which part of the polypeptide chain is threaded (as seen in knotted proteins). We explored whether other proteins contain a similar intriguing knot-like structure as in leptin and discovered 11 structurally homologous proteins in the PDB. We call this new helical family class the Pierced Lasso Bundle (PLB) and the knot-like threaded structural motif a Pierced Lasso (PL). In the current study, we use structure-based simulation to investigate the threading/folding mechanisms for all the PLBs along with three unthreaded homologs, as the covalent loop (or lasso) in leptin is important in folding dynamics and activity. We find that the presence of a small covalent loop leads to a mechanism where structural elements slipknot to thread through the covalent loop. Larger loops use a piercing mechanism where the free terminal plugs through the covalent loop. Remarkably, the position of the loop as well as its size influences the native state dynamics, which can impact receptor binding and biological activity. This previously unrecognized complexity of knot-like proteins within the helical bundle family comprises a completely new class within the knot family, and the hidden complexity we unraveled in the PLBs is expected to be found in other protein structures outside the four-helix bundles. The insights gained here provide critical new elements for future investigation of this emerging class of proteins, where function and the energetic landscape can be controlled by hidden topology, and should be taken into account in ab initio predictions of newly identified protein targets.

  15. Applying Least Absolute Shrinkage Selection Operator and Akaike Information Criterion Analysis to Find the Best Multiple Linear Regression Models between Climate Indices and Components of Cow's Milk.

    Science.gov (United States)

    Marami Milani, Mohammad Reza; Hense, Andreas; Rahmani, Elham; Ploeger, Angelika

    2016-07-23

    This study focuses on multiple linear regression models relating six climate indices (temperature humidity THI, environmental stress ESI, equivalent temperature index ETI, heat load HLI, modified HLI (HLInew), and respiratory rate predictor RRP) with three main components of cow's milk (yield, fat, and protein) for cows in Iran. The least absolute shrinkage selection operator (LASSO) and the Akaike information criterion (AIC) techniques are applied to select the best model for milk predictands with the smallest number of climate predictors. Uncertainty estimation is employed by applying bootstrapping through resampling. Cross validation is used to avoid over-fitting. Climatic parameters are calculated from the NASA-MERRA global atmospheric reanalysis. Milk data for the months from April to September, 2002 to 2010 are used. The best linear regression models are found in spring between milk yield as the predictand and THI, ESI, ETI, HLI, and RRP as predictors, with p-value < 0.001 and R² values of 0.50 and 0.49, respectively. In summer, milk yield with independent variables of THI, ETI, and ESI shows the strongest relation (p-value < 0.001), with R² = 0.69. For fat and protein the results are only marginal. This method is suggested for impact studies of climate variability/change in the agriculture and food science fields when short time series or data with large uncertainty are available.
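
    LASSO combined with AIC-based model selection is available directly in scikit-learn via LassoLarsIC, which picks the penalty along the LARS path by minimizing AIC. A sketch on synthetic data standing in for the climate indices (all numbers invented, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC

# Synthetic stand-in: 6 "climate index" predictors, only 2 truly drive "yield".
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))                 # e.g. THI, ESI, ETI, HLI, HLInew, RRP
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=1.0, size=120)

# Fit the LASSO path and choose the penalty minimizing the AIC.
model = LassoLarsIC(criterion="aic").fit(X, y)
selected = np.flatnonzero(model.coef_)        # indices of retained predictors
```

    The AIC criterion trades goodness of fit against the number of retained predictors, which matches the study's goal of the best model with the smallest number of climate predictors.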

  16. Controlling the local false discovery rate in the adaptive Lasso

    KAUST Repository

    Sampson, J. N.; Chatterjee, N.; Carroll, R. J.; Muller, S.

    2013-01-01

    FDR. We compare the smoothing parameters chosen to achieve a specified lFDR and those chosen to achieve the oracle properties, as well as their resulting estimates for model coefficients, with both simulation and an example from a genetic study of prostate

  17. Mapping Haplotype-haplotype Interactions with Adaptive LASSO

    Directory of Open Access Journals (Sweden)

    Li Ming

    2010-08-01

    Full Text Available Abstract Background The genetic etiology of complex diseases in humans has been commonly viewed as a complex process involving both genetic and environmental factors functioning in a complicated manner. Quite often the interactions among genetic variants play major roles in determining the susceptibility of an individual to a particular disease. Statistical methods for modeling interactions between single genetic variants (e.g., single nucleotide polymorphisms, or SNPs) underlying complex diseases have been extensively studied. Recently, haplotype-based analysis has gained popularity in genetic association studies. When multiple sequence or haplotype interactions are involved in determining an individual's susceptibility to a disease, this presents daunting challenges in the statistical modeling and testing of the interaction effects, largely due to the complicated higher-order epistatic complexity. Results In this article, we propose a new strategy for modeling haplotype-haplotype interactions under the penalized logistic regression framework with an adaptive L1-penalty. We consider interactions of sequence variants between haplotype blocks. The adaptive L1-penalty allows simultaneous effect estimation and variable selection in a single model. We propose a new parameter estimation method which estimates and selects parameters by the modified Gauss-Seidel method nested within the EM algorithm. Simulation studies show that it has a low false-positive rate and reasonable power in detecting haplotype interactions. The method is applied to test haplotype interactions involving the mother's and offspring's genomes in a small-for-gestational-age (SGA) neonates data set, and significant interactions between the different genomes are detected. Conclusions As demonstrated by the simulation studies and real data analysis, the approach developed provides an efficient tool for the modeling and testing of haplotype interactions.
The implementation of the method in R codes can be
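
    Although the paper implements its estimator in R (a modified Gauss-Seidel method nested within the EM algorithm), the core mechanism of an L1-penalized logistic likelihood driving weak effects exactly to zero can be sketched in a few lines. The example below is a minimal illustration on toy data: plain proximal gradient descent (ISTA) stands in for the paper's estimator, and the adaptive per-coefficient penalty weights are omitted for brevity.

    ```python
    import math
    import random

    def l1_logistic(X, y, lam, lr=0.1, iters=2000):
        """Fit logistic regression with an L1 penalty by proximal gradient
        descent (ISTA): a gradient step on the logistic loss followed by
        soft-thresholding, the proximal operator of lam * |beta|."""
        n, p = len(X), len(X[0])
        beta = [0.0] * p
        for _ in range(iters):
            grad = [0.0] * p
            for xi, yi in zip(X, y):
                z = sum(b * v for b, v in zip(beta, xi))
                prob = 1.0 / (1.0 + math.exp(-z))
                for j in range(p):
                    grad[j] += (prob - yi) * xi[j] / n
            for j in range(p):
                b = beta[j] - lr * grad[j]
                # soft-threshold: small coefficients are set exactly to zero
                beta[j] = math.copysign(max(abs(b) - lr * lam, 0.0), b)
        return beta

    random.seed(0)
    # toy data: the outcome depends on feature 0 only; feature 1 is noise
    X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
    y = [1 if x[0] + 0.3 * random.gauss(0, 1) > 0 else 0 for x in X]
    beta = l1_logistic(X, y, lam=0.1)
    print(beta)  # the noise coefficient is shrunk toward zero
    ```

    An adaptive penalty of the kind used in the paper would replace the constant `lam` with per-coefficient values such as `lam / abs(beta_init[j])` obtained from a first-stage fit.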

  18. Snow model design for operational purposes

    Science.gov (United States)

    Kolberg, Sjur

    2017-04-01

    A parsimonious distributed energy balance snow model intended for operational use is evaluated using discharge, snow covered area and grain size; the latter two as observed from the MODIS sensor. The snow model is an improvement of the existing GamSnow model, which is a part of the Enki modelling framework. Core requirements for the new version have been: 1. reduction of calibration freedom, motivated by previous experience of non-identifiable parameters in the existing version; 2. improvement of process representation based on recent advances in physically based snow modelling; 3. limited sensitivity to forcing data which are poorly known over the spatial domain of interest (often in mountainous areas); 4. preference for observable states, and the ability to improve from updates. The albedo calculation is completely revised, now based on grain size through an emulation of the SNICAR model (Flanner and Zender, 2006; Gardner and Sharp, 2010). The number of calibration parameters in the albedo model is reduced from 6 to 2. The wind function governing turbulent energy fluxes has been reduced from 2 parameters to 1. Following Raleigh et al (2011), the snow surface radiant temperature is split from the top-layer thermodynamic temperature, using bias-corrected wet-bulb temperature to model the former. Analyses are ongoing, and the poster will present evaluation results from 16 years of MODIS observations and more than 25 catchments in southern Norway.

  19. Updating of states in operational hydrological models

    Science.gov (United States)

    Bruland, O.; Kolberg, S.; Engeland, K.; Gragne, A. S.; Liston, G.; Sand, K.; Tøfte, L.; Alfredsen, K.

    2012-04-01

    Operationally, the main purpose of hydrological models is to provide runoff forecasts. The quality of the model state and the accuracy of the weather forecast, together with the model quality, define the runoff forecast quality. Input and model errors accumulate over time and may leave the model in a poor state. Usually model states can be related to observable conditions in the catchment. Updating these states, knowing their relation to observable catchment conditions, directly influences the forecast quality. Norway is internationally at the forefront of hydropower scheduling on both short and long time horizons. Inflow forecasts are fundamental to this scheduling. Their quality directly influences the producers' profit as they optimize hydropower production to market demand while minimizing spill of water and maximizing the available hydraulic head. The quality of the inflow forecasts strongly depends on the quality of the models applied and the quality of the information they use. In this project the focus has been on improving the quality of the model states upon which the forecast is based. Runoff and snow storage are two observable quantities that reflect the model state and are used in this project for updating. Generally, the methods used can be divided into three groups: the first re-estimates the forcing data in the updating period; the second alters the weights in the forecast ensemble; and the third directly changes the model states. The uncertainty related to the forcing data through the updating period is due both to uncertainty in the actual observations and to how well the gauging stations represent the catchment with respect to temperature and precipitation. The project looks at methodologies that automatically re-estimate the forcing data and tests the results against the observed response. Model uncertainty is reflected in a joint distribution of model parameters estimated using the DREAM algorithm.

  20. Operator expansion in σ-model

    International Nuclear Information System (INIS)

    Terent'ev, M.V.

    1986-01-01

    The operator expansion is studied in the two-dimensional σ-model with O(N) symmetry group at large values of N, for the Green function of the dynamical field n(x) of the σ-model in the limit x² → 0. As a preliminary step, the renormalization scheme is formulated in the framework of the 1/N expansion, where an intermediate scale μ² is introduced and the regions of large (p > μ) and small (p < μ) momenta are separated. The corrections of order f(μ²)/N in the composite operators (here f(μ²) is the effective coupling constant at the point μ²) and the corrections of order m²x²f(μ²)/N in the coefficient functions (here m is the dynamical mass-scale factor of the σ-model) depend decisively on the recipe for factorizing the small- and large-momenta regions. Owing to the analogy between the σ-model and quantum chromodynamics (QCD), the obtained result indicates theoretical limitations on the accuracy of the sum-rule method in QCD

  1. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    Science.gov (United States)

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

    To investigate the method of "multi-activity-index evaluation and combination optimization of multiple components" for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm, and a validation experiment, we optimized the combination of Jiangzhi granules based on the activity indexes of blood serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissue, and the liver-to-body weight ratio. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) for multi-activity-index evaluation was more reasonable and objective, as it reflected both the rank information of the activity indexes and the objective sample data. LASSO modeling could accurately reflect the relationship between the different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better values of the comprehensive activity indexes than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.
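
    As a concrete illustration of the CRITIC part of the evaluation scheme, the sketch below computes objective criterion weights from contrast (standard deviation) and conflict (one minus pairwise correlation) on min-max normalized scores. The data are hypothetical, and the subjective AHP weights that the paper combines with CRITIC are omitted.

    ```python
    import math

    def critic_weights(matrix):
        """CRITIC objective weights: a criterion's weight grows with its
        contrast (standard deviation) and its conflict with the other
        criteria (one minus pairwise correlation), both computed on
        min-max normalized scores.  matrix[i][j] is alternative i's score
        on criterion j (higher = better is assumed for simplicity)."""
        n, m = len(matrix), len(matrix[0])
        cols = list(zip(*matrix))
        # min-max normalize each criterion to [0, 1]
        norm = []
        for col in cols:
            lo, hi = min(col), max(col)
            norm.append([(v - lo) / (hi - lo) for v in col])
        mean = [sum(c) / n for c in norm]
        std = [math.sqrt(sum((v - mu) ** 2 for v in c) / n)
               for c, mu in zip(norm, mean)]

        def corr(a, b, ma, mb, sa, sb):
            return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n / (sa * sb)

        info = []
        for j in range(m):
            conflict = sum(1 - corr(norm[j], norm[k], mean[j], mean[k],
                                    std[j], std[k]) for k in range(m))
            info.append(std[j] * conflict)   # information carried by criterion j
        total = sum(info)
        return [v / total for v in info]

    # four hypothetical formula combinations scored on three activity indexes
    scores = [[3.2, 1.1, 0.8],
              [2.9, 1.4, 0.6],
              [3.8, 0.9, 0.9],
              [3.1, 1.2, 0.7]]
    w = critic_weights(scores)
    print(w)  # objective weights summing to 1
    composite = [sum(wi * si for wi, si in zip(w, row)) for row in scores]
    ```

    A combined AHP/CRITIC scheme would average or otherwise merge these objective weights with expert-supplied AHP weights before forming the composite index.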

  2. Operational Modelling of High Temperature Electrolysis (HTE)

    International Nuclear Information System (INIS)

    Patrick Lovera; Franck Blein; Julien Vulliet

    2006-01-01

    Solid Oxide Fuel Cells (SOFC) and High Temperature Electrolysis (HTE) are based on two opposite processes. The basic equations (the Nernst equation, corrected by an over-voltage term) are thus very similar; only a few signs differ. An operational model, based on measurable quantities, was developed for the HTE process and adapted to SOFCs. The model is analytical, which requires some complementary assumptions (proportionality of the over-voltages to the current density, linearization of the logarithmic term in the Nernst equation). It allows hydrogen production by HTE to be determined using a limited number of parameters. At a given temperature, only one macroscopic parameter, related to the over-voltages, is needed to adjust the model to the experimental results (SOFC) over a wide range of hydrogen flow-rates. For a given cell, this parameter follows an Arrhenius law with satisfactory precision. The prediction for the HTE process is compared with the available experimental results. (authors)

  3. Facility Will Help Transition Models Into Operations

    Science.gov (United States)

    Kumar, Mohi

    2009-02-01

    The U.S. National Oceanic and Atmospheric Administration's Space Weather Prediction Center (NOAA SWPC), in partnership with the U.S. Air Force Weather Agency (AFWA), is establishing a center to promote and facilitate the transition of space weather models to operations. The new facility, called the Developmental Testbed Center (DTC), will take models used by researchers and rigorously test them to see if they can withstand continued use as viable warning systems. If a model used in a space weather warning system crashes or fails to perform well, severe consequences can result. These include increased radiation risks to astronauts and people traveling on high-altitude flights, national security vulnerabilities from the loss of military satellite communications, and the cost of replacing damaged military and commercial spacecraft.

  4. Modeling decisions information fusion and aggregation operators

    CERN Document Server

    Torra, Vicenc

    2007-01-01

    Information fusion techniques and aggregation operators produce the most comprehensive, specific datum about an entity using data supplied from different sources, thus enabling us to reduce noise, increase accuracy, summarize and extract information, and make decisions. These techniques are applied in fields such as economics, biology and education, while in computer science they are particularly used in fields such as knowledge-based systems, robotics, and data mining. This book covers the underlying science and application issues related to aggregation operators, focusing on tools used in practical applications that involve numerical information. Starting with detailed introductions to information fusion and integration, measurement and probability theory, fuzzy sets, and functional equations, the authors then cover the following topics in detail: synthesis of judgements, fuzzy measures, weighted means and fuzzy integrals, indices and evaluation methods, model selection, and parameter extraction. The method...

  5. Nordic Model of Subregional Co-Operation

    Directory of Open Access Journals (Sweden)

    Grzela Joanna

    2017-12-01

    Full Text Available Nordic co-operation is renowned throughout the world and perceived as the collaboration of a group of countries which are similar in their views and activities. The main pillars of the Nordic model of co-operation are the tradition of constitutional principles, activity of public movements and organisations, freedom of speech, equality, solidarity, and respect for the natural environment. In connection with labour and entrepreneurship, these elements are the features of a society which favours efficiency, a sense of security and balance between an individual and a group. Currently, the collaboration is a complex process, including many national, governmental and institutional connections which form the “Nordic family”.

  6. Similarity regularized sparse group lasso for cup to disc ratio computation.

    Science.gov (United States)

    Cheng, Jun; Zhang, Zhuo; Tao, Dacheng; Wong, Damon Wing Kee; Liu, Jiang; Baskaran, Mani; Aung, Tin; Wong, Tien Yin

    2017-08-01

    Automatic cup to disc ratio (CDR) computation from color fundus images has been shown to be promising for glaucoma detection. Over the past decade, many algorithms have been proposed. In this paper, we first review the recent work in the area and then present a novel similarity-regularized sparse group lasso method for automated CDR estimation. The proposed method reconstructs the testing disc image from a set of reference disc images by integrating the similarity between the testing and reference disc images with the sparse group lasso constraints. The reconstruction coefficients are then used to estimate the CDR of the testing image. The proposed method has been validated using 650 images with manually annotated CDRs. Experimental results show an average CDR error of 0.0616 and a correlation coefficient of 0.7, outperforming other methods. The areas under curve in the diagnostic test reach 0.843 and 0.837 when manually and automatically segmented discs are used, respectively, better than other methods as well.

  7. Adapting Modeling & SImulation for Network Enabled Operations

    Science.gov (United States)

    2011-03-01


  8. Modeling lift operations with SASmacr Simulation Studio

    Science.gov (United States)

    Kar, Leow Soo

    2016-10-01

    Lifts or elevators are an essential part of multistorey buildings, providing vertical transportation for their occupants. In large and high-rise apartment buildings the occupants are permanent, while in buildings like hospitals or office blocks the occupants are temporary users of the building. They come in to work or to visit, and thus the population of such buildings is much higher than that of residential apartments. It is common these days for large office blocks or hospitals to have at least 8 to 10 lifts serving their population. In order to optimize the level of service performance, different transportation schemes are devised to control the lift operations. For example, one lift may be assigned to serve solely the even floors and another solely the odd floors, etc. In this paper, a basic lift system is modelled using SAS Simulation Studio to study the effect of factors such as the number of floors, the capacity of the lift car, the arrival and exit rates of passengers at each floor, and peak and off-peak periods on the system performance. The simulation is applied to a real lift operation in Sunway College's North Building to validate the model.
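
    A toy version of such a lift model can be written as a short event-driven loop. The sketch below is not the SAS Simulation Studio model from the paper and uses made-up parameters; it captures only a single lift serving Poisson lobby arrivals with a fixed round-trip time, but the same structure extends to multiple lifts and floor-assignment schemes.

    ```python
    import random

    def simulate_lift(capacity=20, arrival_rate=0.5, round_trip=30.0,
                      horizon=5000.0, seed=1):
        """Toy single-lift model: passengers arrive at the lobby as a
        Poisson process; whenever the lift is available it boards up to
        `capacity` of the passengers already waiting and departs on a
        fixed-length round trip.  Returns (mean wait, passengers served)."""
        random.seed(seed)
        arrivals, t = [], random.expovariate(arrival_rate)
        while t < horizon:                    # pre-generate Poisson arrivals
            arrivals.append(t)
            t += random.expovariate(arrival_rate)
        waits, i, lift_ready = [], 0, 0.0
        while i < len(arrivals):
            depart = max(lift_ready, arrivals[i])      # lift or first rider waits
            batch = [a for a in arrivals[i:i + capacity] if a <= depart]
            waits.extend(depart - a for a in batch)    # waiting time per rider
            i += len(batch)
            lift_ready = depart + round_trip
        return sum(waits) / len(waits), len(waits)

    mean_wait, served = simulate_lift()
    print(mean_wait, served)
    ```

    Factors such as peak-period arrival rates, lift-car capacity, or odd/even-floor assignment schemes can then be studied by varying the parameters and comparing the resulting mean waits.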

  9. Use of an operational model evaluation system for model intercomparison

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K. T., LLNL

    1998-03-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response system used to assess the impact from atmospheric releases of hazardous materials. As part of an on-going development program, new three-dimensional diagnostic windfield and Lagrangian particle dispersion models will soon replace ARAC's current operational windfield and dispersion codes. A prototype model performance evaluation system has been implemented to facilitate the study of the capabilities and performance of early development versions of these new models relative to ARAC's current operational codes. This system provides tools for both objective statistical analysis using common performance measures and for more subjective visualization of the temporal and spatial relationships of model results relative to field measurements. Supporting this system is a database of processed field experiment data (source terms and meteorological and tracer measurements) from over 100 individual tracer releases.

  10. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    Science.gov (United States)

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny

    2016-09-01

    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
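
    The variable-selection behaviour that makes the LASSO attractive here, driving the coefficients of uninformative features exactly to zero, can be reproduced with a small cyclic coordinate-descent solver. The sketch below uses synthetic data rather than the paper's video-quality features.

    ```python
    import random

    def lasso_cd(X, y, lam, iters=500):
        """LASSO via cyclic coordinate descent: each coefficient is updated
        by soft-thresholding its univariate least-squares solution, which
        sets uninformative coefficients exactly to zero."""
        n, p = len(X), len(X[0])
        col = [[row[j] for row in X] for j in range(p)]
        sq = [sum(v * v for v in c) for c in col]
        beta = [0.0] * p
        for _ in range(iters):
            for j in range(p):
                # correlation of column j with the partial residual
                rho = 0.0
                for i in range(n):
                    pred = sum(beta[k] * X[i][k] for k in range(p)) \
                           - beta[j] * X[i][j]
                    rho += col[j][i] * (y[i] - pred)
                if rho > n * lam:
                    beta[j] = (rho - n * lam) / sq[j]
                elif rho < -n * lam:
                    beta[j] = (rho + n * lam) / sq[j]
                else:
                    beta[j] = 0.0            # soft-threshold to exact zero
        return beta

    random.seed(0)
    n = 100
    X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(n)]
    # the response uses only the first two features; the rest are noise
    y = [2.0 * x[0] - 1.5 * x[1] + random.gauss(0, 0.1) for x in X]
    beta = lasso_cd(X, y, lam=0.1)
    print(beta)  # the noise coefficients are driven to (typically exactly) zero
    ```

    In a quality-estimation setting, the surviving nonzero coefficients play the role of the few RR or NR features retained by the model.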

  11. Estimating High-Dimensional Time Series Models

    DEFF Research Database (Denmark)

    Medeiros, Marcelo C.; Mendes, Eduardo F.

    We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the number of observations, and that the number of candidate variables is possibly larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency) and has the oracle property, even when the errors are non-Gaussian and conditionally heteroskedastic. A simulation study shows
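
    The adaLASSO differs from the plain LASSO only in its data-dependent penalty weights w_j = 1/|b_init_j|^gamma, which is what yields the oracle property. A minimal sketch (toy i.i.d. data rather than a time series, and marginal least squares as a simple stand-in for the first-stage estimator) shows the standard rescaling trick that reduces it to an ordinary LASSO fit.

    ```python
    import random

    def lasso_cd(X, y, lam, iters=300):
        """Plain LASSO via cyclic coordinate descent with soft-thresholding."""
        n, p = len(X), len(X[0])
        sq = [sum(row[j] ** 2 for row in X) for j in range(p)]
        beta = [0.0] * p
        for _ in range(iters):
            for j in range(p):
                rho = sum(X[i][j] * (y[i]
                                     - sum(beta[k] * X[i][k] for k in range(p))
                                     + beta[j] * X[i][j])
                          for i in range(n))
                beta[j] = max(abs(rho) - n * lam, 0.0) / sq[j] \
                          * (1.0 if rho > 0 else -1.0)
        return beta

    def adaptive_lasso(X, y, lam, gamma=1.0):
        """Adaptive LASSO via rescaling: weight w_j = 1/|b_init_j|^gamma,
        scale column j by 1/w_j, run a plain LASSO, scale the fit back."""
        n, p = len(X), len(X[0])
        # first stage: marginal least squares per feature (illustrative
        # stand-in for a full OLS/ridge initial estimator)
        init = [sum(X[i][j] * y[i] for i in range(n)) /
                sum(X[i][j] ** 2 for i in range(n)) for j in range(p)]
        scale = [abs(b) ** gamma + 1e-8 for b in init]
        Xs = [[row[j] * scale[j] for j in range(p)] for row in X]
        bs = lasso_cd(Xs, y, lam)
        return [bs[j] * scale[j] for j in range(p)]

    random.seed(1)
    n = 120
    X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(n)]
    # only the first two features matter
    y = [1.5 * x[0] - 1.0 * x[1] + random.gauss(0, 0.2) for x in X]
    beta = adaptive_lasso(X, y, lam=0.05)
    print(beta)  # noise coefficients are shrunk to (typically exactly) zero
    ```

    Because weakly relevant features get large penalty weights while strong ones are penalized lightly, the second-stage fit is nearly unbiased on the true support, which is the intuition behind the oracle property.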

  12. An approach to modeling operator's cognitive behavior using artificial intelligence techniques in emergency operating event sequences

    International Nuclear Information System (INIS)

    Cheon, Se Woo; Sur, Sang Moon; Lee, Yong Hee; Park, Young Taeck; Moon, Sang Joon

    1994-01-01

    Computer modeling of an operator's cognitive behavior is a promising approach for human factors studies and man-machine systems assessment. In this paper, the state of the art in modeling operator behavior and the current status of the development of an operator model (MINERVA-NPP) are presented. The model is constructed as a knowledge-based system on a blackboard framework and is simulated based on emergency operating procedures

  13. An Operational Model for the Prediction of Jet Blast

    Science.gov (United States)

    2012-01-09

    This paper presents an operational model for the prediction of jet blast. The model was developed based upon three modules: a jet exhaust model, a jet centerline decay model and an aircraft motion model. The final analysis was compared with d...

  14. A Secure Operational Model for Mobile Payments

    Directory of Open Access Journals (Sweden)

    Tao-Ku Chang

    2014-01-01

    Full Text Available Instead of paying by cash, check, or credit cards, customers can now also use their mobile devices to pay for a wide range of services and both digital and physical goods. However, customers’ security concerns are a major barrier to the broad adoption and use of mobile payments. In this paper we present the design of a secure operational model for mobile payments in which access control is based on a service-oriented architecture. A customer uses his/her mobile device to get authorization from a remote server and generate a two-dimensional barcode as the payment certificate. This payment certificate has a time limit and can be used once only. The system also provides the ability to remotely lock and disable the mobile payment service.

  15. Measurement error correction in the least absolute shrinkage and selection operator model when validation data are available.

    Science.gov (United States)

    Vasquez, Monica M; Hu, Chengcheng; Roe, Denise J; Halonen, Marilyn; Guerra, Stefano

    2017-01-01

    Measurement of serum biomarkers by multiplex assays may be more variable than single-biomarker assays. Measurement error in these data may bias parameter estimates in regression analysis, which could mask true associations of serum biomarkers with an outcome. The Least Absolute Shrinkage and Selection Operator (LASSO) can be used for variable selection in these high-dimensional data. Furthermore, when the distribution of the measurement error is assumed to be known or is estimated from replication data, a simple measurement error correction can be applied to the LASSO method. In practice, however, the distribution of the measurement error is unknown and is expensive to estimate through replication, both in monetary cost and in the amount of sample required, which is often limited. We adapt an existing bias correction approach by estimating the measurement error using validation data, in which a subset of serum biomarkers are re-measured on a random subset of the study sample. We evaluate this method using simulated data and data from the Tucson Epidemiological Study of Airway Obstructive Disease (TESAOD). We show that the bias in parameter estimation is reduced and variable selection is improved.
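
    The essence of such corrections is classical attenuation: with measurement error, the naive regression coefficient is biased toward zero by the reliability ratio, which a validation subset lets us estimate. The sketch below illustrates this in the simplest univariate case (regression calibration with simulated data); the corrected LASSO estimator for high-dimensional biomarker panels in the paper is more involved.

    ```python
    import random

    def corrected_slope(w, y, w_val, x_val):
        """Return (naive, corrected) slope of y on an error-prone
        measurement w.  The naive slope is attenuated toward zero by
        classical measurement error; the correction divides by the
        reliability ratio estimated from a validation subset where the
        re-measured (gold standard) value x is observed alongside w."""
        def slope(a, b):                 # least-squares slope of b on a
            n = len(a)
            ma, mb = sum(a) / n, sum(b) / n
            return (sum((u - ma) * (v - mb) for u, v in zip(a, b)) /
                    sum((u - ma) ** 2 for u in a))
        naive = slope(w, y)
        # regressing x on w in the validation data estimates the
        # reliability ratio var(x) / var(w) under classical error
        reliability = slope(w_val, x_val)
        return naive, naive / reliability

    random.seed(3)
    x = [random.gauss(0, 1) for _ in range(500)]       # true biomarker level
    w = [v + random.gauss(0, 0.5) for v in x]          # error-prone measurement
    y = [2.0 * v + random.gauss(0, 0.3) for v in x]    # true slope is 2
    # validation subset: first 150 subjects also have the gold standard
    naive, corrected = corrected_slope(w, y, w[:150], x[:150])
    print(naive, corrected)  # naive is biased toward zero; corrected is near 2
    ```

    In the multivariable LASSO setting the same idea appears as a correction to the covariance terms entering the objective, rather than a simple division.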

  16. Warehouse operations planning model for Bausch & Lomb

    NARCIS (Netherlands)

    Atilgan, Ceren

    2009-01-01

    Operations planning is a major part of the Sales & Operations Planning (S&OP) process. It provides an overview of the operations capacity requirements by considering the supply and demand plan. However, Bausch & Lomb does not have a structured operations planning process for their warehouse

  17. Implementation of an operator model with error mechanisms for nuclear power plant control room operation

    International Nuclear Information System (INIS)

    Suh, Sang Moon; Cheon, Se Woo; Lee, Yong Hee; Lee, Jung Woon; Park, Young Taek

    1996-01-01

    SACOM (Simulation Analyser with Cognitive Operator Model) is being developed at the Korea Atomic Energy Research Institute to simulate human operators' cognitive characteristics during emergency situations at nuclear power plants. An operator model with error mechanisms has been developed and combined into SACOM to simulate the operator's cognitive information processing based on Rasmussen's decision ladder model. The operational logic for five different cognitive activities (Agents), the operator's attentional control (Controller), short-term memory (Blackboard), and long-term memory (Knowledge Base) has been developed and implemented on a blackboard architecture. A trial simulation with an emergency operation scenario has been performed to verify the operational logic. It was found that the operator model with error mechanisms is suitable for simulating operator cognitive behavior in emergency situations

  18. Improving traffic signal management and operations : a basic service model.

    Science.gov (United States)

    2009-12-01

    This report provides a guide for achieving a basic service model for traffic signal management and operations. The basic service model is based on simply stated and defensible operational objectives that consider the staffing level, expertise and...

  19. Integrative Sparse K-Means With Overlapping Group Lasso in Genomic Applications for Disease Subtype Discovery.

    Science.gov (United States)

    Huo, Zhiguang; Tseng, George

    2017-06-01

    Cancer subtype discovery is the first step toward delivering personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration incorporating rich existing biological knowledge is essential for deciphering the biological mechanisms behind complex diseases. In this manuscript, we propose an integrative sparse K-means (is-Kmeans) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using the alternating direction method of multipliers (ADMM) is applied for fast optimization. Simulation and three real applications in breast cancer and leukemia are used to compare is-Kmeans with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features and computing efficiency.

  20. Modeling and Design of Container Terminal Operations

    NARCIS (Netherlands)

    D. Roy (Debjit); M.B.M. de Koster (René)

    2014-01-01

    Design of container terminal operations is complex because multiple factors affect the operational performance. These factors include: topological constraints, a large number of design parameters and settings, and stochastic interactions that interplay among the quayside, vehicle

  1. Scoring relevancy of features based on combinatorial analysis of Lasso with application to lymphoma diagnosis

    Directory of Open Access Journals (Sweden)

    Zare Habil

    2013-01-01

    Full Text Available Abstract One challenge in applying bioinformatic tools to clinical or biological data is the high number of features that might be provided to the learning algorithm without any prior knowledge of which ones should be used. In such applications, the number of features can drastically exceed the number of training instances, which is often limited by the number of samples available for the study. The Lasso is one of many regularization methods that have been developed to prevent overfitting and improve prediction performance in high-dimensional settings. In this paper, we propose a novel algorithm for feature selection based on the Lasso, and our hypothesis is that defining a scoring scheme that measures the "quality" of each feature can provide a more robust feature selection method. Our approach is to generate several samples from the training data by bootstrapping, determine the best relevance-ordering of the features for each sample, and finally combine these relevance-orderings to select highly relevant features. In addition to a theoretical analysis of our feature scoring scheme, we provide empirical evaluations on six real datasets from different fields to confirm the superiority of our method in exploratory data analysis and prediction performance. For example, we applied FeaLect, our feature scoring algorithm, to a lymphoma dataset, and according to a human expert, our method led to selecting more meaningful features than those commonly used in the clinic. This case study built a basis for discovering interesting new criteria for lymphoma diagnosis. Furthermore, to facilitate the use of our algorithm in other applications, the source code that implements our algorithm has been released as FeaLect, a documented R package on CRAN.
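
    The bootstrap-and-combine idea behind this kind of feature scoring can be sketched compactly: resample the training data, rank features on each resample, and score each feature by how often it ranks among the best. In this illustrative version a simple absolute-correlation ranking stands in for the Lasso relevance-ordering used by the actual FeaLect algorithm, and the data are synthetic.

    ```python
    import random

    def feature_scores(X, y, n_boot=50, top_k=2, seed=0):
        """Score features by their selection frequency across bootstrap
        samples.  Each round ranks features by absolute correlation with
        the outcome (a lightweight stand-in for a Lasso path) and
        'selects' the top_k; a feature's score is the fraction of rounds
        in which it was selected."""
        rng = random.Random(seed)
        n, p = len(X), len(X[0])
        counts = [0] * p
        for _ in range(n_boot):
            idx = [rng.randrange(n) for _ in range(n)]   # bootstrap resample
            xs = [X[i] for i in idx]
            ys = [y[i] for i in idx]
            my = sum(ys) / n
            corr = []
            for j in range(p):
                col = [row[j] for row in xs]
                mj = sum(col) / n
                num = sum((a - mj) * (b - my) for a, b in zip(col, ys))
                den = (sum((a - mj) ** 2 for a in col) *
                       sum((b - my) ** 2 for b in ys)) ** 0.5
                corr.append(abs(num / den) if den else 0.0)
            for j in sorted(range(p), key=lambda j: -corr[j])[:top_k]:
                counts[j] += 1
        return [c / n_boot for c in counts]

    random.seed(0)
    n = 150
    X = [[random.gauss(0, 1) for _ in range(6)] for _ in range(n)]
    # only features 0 and 1 carry signal; the rest are noise
    y = [x[0] + 0.8 * x[1] + random.gauss(0, 0.5) for x in X]
    scores = feature_scores(X, y)
    print(scores)  # the informative features score highest
    ```

    Aggregating over resamples is what makes the scores robust: a feature selected only by chance on one sample rarely survives across many bootstrap rounds.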

  2. Robust Models for Operator Workload Estimation

    Science.gov (United States)

    2015-03-01

    piloted aircraft (RPA) simultaneously, a vast improvement in resource utilization compared to existing operations that require several operators per...into distinct cognitive channels (visual, auditory, spatial, etc.) based on our ability to multitask effectively as long as no one channel is

  3. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  4. Multivariate operational risk: dependence modelling with Lévy copulas

    OpenAIRE

    Böcker, K. and Klüppelberg, C.

    2015-01-01

    Simultaneous modelling of operational risks occurring in different event-type/business-line cells poses a challenge for operational risk quantification. Invoking the new concept of Lévy copulas for dependence modelling yields simple, high-quality approximations of multivariate operational VaR.

  5. Development of operator thinking model and its application to nuclear reactor plant operation system

    International Nuclear Information System (INIS)

    Miki, Tetsushi; Endou, Akira; Himeno, Yoshiaki

    1992-01-01

    This paper first presents the method for developing an operator thinking model and outlines the developed model. Next, it describes the nuclear reactor plant operation system which has been developed based on this model. Finally, it is confirmed that the method described in this paper is very effective for constructing expert systems which replace the reactor operator's role with AI (artificial intelligence) systems. (author)

  6. Applied Geography Internships: Operational Canadian Models.

    Science.gov (United States)

    Foster, L. T.

    1982-01-01

    Anxious to maintain student enrollments, geography departments have placed greater emphasis on the applied nature of the discipline. Described are (1) the advantages of internships in college geography curricula that enable students to gain firsthand knowledge about the usefulness of geography in real world situations and (2) operational models…

  7. A Knowledge-Induced Operator Model

    Directory of Open Access Journals (Sweden)

    M.A. Choudhury

    2007-06-01

    Full Text Available Learning systems are in the forefront of analytical investigation in the sciences. In the social sciences they occupy the study of complexity and strongly interactive world-systems. Sometimes they are diversely referred to as symbiotics and semiotics when studied in conjunction with logical expressions. In the mathematical sciences the methodology underlying learning systems with complex behavior is based on formal logic or systems analysis. In this paper relationally learning systems are shown to transcend the space-time domain of scientific investigation into the knowledge dimension. Such a knowledge domain is explained by pervasive interaction leading to integration and followed by continuous evolution as complementary processes existing between entities and systemic domains in world-systems, thus the abbreviation IIE-processes. This paper establishes a mathematical characterization of the properties of knowledge-induced process-based world-systems in the light of the epistemology of unity of knowledge signified in this paper by extensive complementarities caused by the epistemic and ontological foundation of the text of unity of knowledge, the prime example of which is the realm of the divine laws. The result is formalism in mathematical generalization of the learning phenomenon by means of an operator. This operator summarizes the properties of interaction, integration and evolution (IIE in the continuum domain of knowledge formation signified by universal complementarities across entities, systems and sub-systems in unifying world-systems. The opposite case of ‘de-knowledge’ and its operator is also briefly formalized.

  8. Modeling and optimization of laser cutting operations

    Directory of Open Access Journals (Sweden)

    Gadallah Mohamed Hassan

    2015-01-01

    Full Text Available Laser beam cutting is an important nontraditional machining process. This paper optimizes the laser beam cutting parameters for stainless steel (316L), considering the effect of input parameters such as power, oxygen pressure, frequency and cutting speed. A statistical design of experiments is carried out at three different levels, and process responses such as average kerf taper (Ta), surface roughness (Ra) and heat affected zone are measured accordingly. A response surface model is developed as a function of the process parameters. Responses predicted by the models (as per Taguchi's L27 orthogonal array) are employed to search for an optimal combination to achieve the desired process yield. Response Surface Models (RSMs) are developed for the mean responses, S/N ratios, and standard deviations of the responses. Optimization models are formulated as single-objective optimization problems subject to process constraints, based on Analysis of Variance (ANOVA), and optimized in a Matlab-developed environment. The optimum solutions are compared with the Taguchi methodology results. As such, practicing engineers have means to model, analyze and optimize nontraditional machining processes. Validation experiments are carried out to verify the developed models with success.

  9. An operator calculus for surface and volume modeling

    Science.gov (United States)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.

  10. Spectral decomposition of model operators in de Branges spaces

    International Nuclear Information System (INIS)

    Gubreev, Gennady M; Tarasenko, Anna A

    2011-01-01

    The paper is devoted to studying a class of completely continuous nonselfadjoint operators in de Branges spaces of entire functions. Among other results, a class of unconditional bases of de Branges spaces consisting of values of their reproducing kernels is constructed. The operators that are studied are model operators in the class of completely continuous non-dissipative operators with two-dimensional imaginary parts. Bibliography: 22 titles.

  11. Neutron field control cybernetics model of RBMK reactor operator

    International Nuclear Information System (INIS)

    Polyakov, V.V.; Postnikov, V.V.; Sviridenkov, A.N.

    1992-01-01

    Results of parameter optimization for a cybernetic model of the RBMK reactor operator, based on the power-release control function, are presented. Convolutions of various criteria, applied previously in algorithms of the program 'Adviser to reactor operator', formed the basis of the model. 7 refs.; 4 figs

  12. Identification of human operator performance models utilizing time series analysis

    Science.gov (United States)

    Holden, F. M.; Shinners, S. M.

    1973-01-01

    The results of an effort performed by Sperry Systems Management Division for AMRL in applying time series analysis as a tool for modeling the human operator are presented. This technique is utilized for determining the variation of the human transfer function under various levels of stress. The human operator's model is determined based on actual input and output data from a tracking experiment.

  13. Multiple operating models for data linkage: A privacy positive

    Directory of Open Access Journals (Sweden)

    Katrina Irvine

    2017-04-01

    Our data linkage centre will implement new operating models with cascading levels of data handling on behalf of custodians. Sharing or publication of empirical evidence on timeframes, efficiency and quality can provide useful inputs in the design of new operating models and assist with the development of stakeholder and public confidence.

  14. Modeling of useful operating life of radioelectronics

    Directory of Open Access Journals (Sweden)

    Nevlyudova V. V.

    2014-08-01

    Full Text Available The author considers the possibility of using the laws of nonequilibrium thermodynamics to determine the relationship between the controlled parameters of radioelectronics and the displayed environment, as well as the construction of a deterministic model of the development of manufacturing defects. This possibility is based on the observed patterns of change in the amount of content area, in accordance with the principles of behavior of the thermodynamic parameters characterizing the state of the real environment (entropy, the quantity of heat, etc.). The equation for the evolution of the technical state of radioelectronics is based on a deterministic kinetic model of the processes occurring in the multi-component environment, and on an observation model which takes into account the errors caused by external-influence instability and uncertainty.

  15. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research.

  16. Comparing models of offensive cyber operations

    CSIR Research Space (South Africa)

    Grant, T

    2012-03-01

    Full Text Available [Fragment of the paper's model-comparison table: Damballa, 2008 (crime; case studies; lone actor); Owens et al., 2009 (warfare; literature; group); Croom, 2010 (crime/APT; case studies; group); Dreijer, 2011 (warfare; previous models and case studies; group).] ... be needed by a geographically or functionally distributed group of attackers. While some of the models describe the installation of a backdoor or an advanced persistent threat (APT), none of them describe the behaviour involved in returning to a...

  17. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage and selection operator), …
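The modelling technique named in this record (and in the LASSO xerostomia study at the head of this page), an L1-penalized logistic regression that shrinks weak prognostic factors to exactly zero, can be sketched on synthetic data. This is a hedged stand-in, not the authors' procedure (which involved bootstrapping and several goodness-of-fit tests): the optimizer here is plain proximal gradient descent (ISTA), and all data are simulated.

```python
import numpy as np

# Sketch of an L1-penalized (LASSO) logistic NTCP-style model fitted by
# proximal gradient descent on synthetic data. The six features are
# hypothetical stand-ins for dosimetric and clinical prognostic factors.

rng = np.random.default_rng(0)
n, p = 400, 6
X = rng.normal(size=(n, p))                          # standardized predictors
true_w = np.array([1.5, -1.0, 0.8, 0.0, 0.0, 0.0])   # sparse ground truth
prob = 1.0 / (1.0 + np.exp(-(X @ true_w)))
y = rng.binomial(1, prob)                            # binary complication outcome

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_logistic(X, y, lam, lr=0.1, iters=2000):
    """ISTA: gradient step on the logistic loss, then soft-thresholding."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p_hat = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p_hat - y) / len(y)
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

w = lasso_logistic(X, y, lam=0.05)
selected = np.flatnonzero(w)   # indices of the retained prognostic factors
```

The L1 penalty is what performs the factor selection the abstracts describe: coefficients whose loss gradient never exceeds the threshold stay at exactly zero.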

  18. Fuzzy rule-based model for hydropower reservoirs operation

    Energy Technology Data Exchange (ETDEWEB)

    Moeini, R.; Afshar, A.; Afshar, M.H. [School of Civil Engineering, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2011-02-15

    Real-time hydropower reservoir operation is a continuous decision-making process of determining the water level of a reservoir or the volume of water released from it. Hydropower operation is usually based on operating policies and rules defined and decided upon in strategic planning. This paper presents a fuzzy rule-based model for the operation of hydropower reservoirs. The proposed fuzzy rule-based model presents a set of suitable operating rules for release from the reservoir based on ideal or target storage levels. The model operates on an 'if-then' principle, in which the 'if' is a vector of fuzzy premises and the 'then' is a vector of fuzzy consequences. In this paper, reservoir storage, inflow, and period are used as premises and the release as the consequence. The steps involved in the development of the model include construction of membership functions for the inflow, storage and release, formulation of fuzzy rules, implication, aggregation and defuzzification. The knowledge bases required for the formulation of the fuzzy rules are obtained from a stochastic dynamic programming (SDP) model with a steady-state policy. The proposed model is applied to the hydropower operation of the 'Dez' reservoir in Iran and the results are presented and compared with those of the SDP model. The results indicate the ability of the method to solve hydropower reservoir operation problems. (author)
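The 'if-then' principle described above can be sketched in a few lines: fire every rule with the min operator for the fuzzy AND, then defuzzify by weighted average. All membership functions and rule consequents below are hypothetical, not the Dez reservoir values (which in the paper come from an SDP-derived knowledge base).

```python
# Toy sketch of a fuzzy rule-based release policy, with hypothetical
# membership functions and rule consequents.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Fuzzy sets for storage and inflow (fractions of capacity / mean flow)
storage_sets = {"low": (0.0, 0.1, 0.5), "medium": (0.2, 0.5, 0.8),
                "high": (0.5, 0.9, 1.0)}
inflow_sets = {"low": (0.0, 0.2, 1.0), "high": (0.5, 1.5, 2.5)}

# Rule base: (storage label, inflow label) -> crisp release level (10^6 m3)
rules = {
    ("low", "low"): 10.0,    ("low", "high"): 30.0,
    ("medium", "low"): 40.0, ("medium", "high"): 60.0,
    ("high", "low"): 70.0,   ("high", "high"): 90.0,
}

def release(storage, inflow):
    """Fire all rules (min for AND), defuzzify by weighted average."""
    num = den = 0.0
    for (s_lab, i_lab), r in rules.items():
        w = min(tri(storage, *storage_sets[s_lab]),
                tri(inflow, *inflow_sets[i_lab]))
        num += w * r
        den += w
    return num / den if den > 0 else 0.0
```

For example, a moderately full reservoir with modest inflow fires the low- and medium-storage rules partially, producing a blended release between their consequents.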

  19. Role of conceptual models in nuclear power plant operation

    International Nuclear Information System (INIS)

    Williams, M.D.; Moran, T.P.; Brown, J.S.

    1982-01-01

    A crucial objective in plant operation (and perhaps licensing) ought to be to explicitly train operators to develop, perhaps with computer aids, robust conceptual models of the plants they control. The question is whether we are actually able to develop robust conceptual models and validate their robustness. Cognitive science is just beginning to come to grips with this problem. This paper describes some of the evolving technology for building conceptual models of physical mechanisms and some of the implications of such models in the context of nuclear power plant operation

  20. Computer-aided operations engineering with integrated models of systems and operations

    Science.gov (United States)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.

  1. The Application of Architecture Frameworks to Modelling Exploration Operations Costs

    Science.gov (United States)

    Shishko, Robert

    2006-01-01

    Developments in architectural frameworks and system-of-systems thinking have provided useful constructs for systems engineering. DoDAF concepts, language, and formalisms, in particular, provide a natural way of conceptualizing an operations cost model applicable to NASA's space exploration vision. Not all DoDAF products have meaning or apply to a DoDAF inspired operations cost model, but this paper describes how such DoDAF concepts as nodes, systems, and operational activities relate to the development of a model to estimate exploration operations costs. The paper discusses the specific implementation to the Mission Operations Directorate (MOD) operational functions/activities currently being developed and presents an overview of how this powerful representation can apply to robotic space missions as well.

  2. Operational Plan Ontology Model for Interconnection and Interoperability

    Science.gov (United States)

    Long, F.; Sun, Y. K.; Shi, H. Q.

    2017-03-01

    Aiming at the bottleneck assistant decision-making systems face in processing operational plan data and information, this paper starts from an analysis of the problems of traditional representation and the technical advantages of ontology; it then defines the elements of the operational plan ontology model and determines the basis of its construction. Next, it builds a semi-knowledge-level operational plan ontology model. Finally, it examines the expression of operational plans based on the ontology model and its use in application software. Thus, this paper has theoretical significance and application value for improving the interconnection and interoperability of operational plans among assistant decision-making systems.

  3. Optimizing refiner operation with statistical modelling

    Energy Technology Data Exchange (ETDEWEB)

    Broderick, G [Noranda Research Centre, Pointe Claire, PQ (Canada)

    1997-02-01

    The impact of refining conditions on the energy efficiency of the process and on the handsheet quality of a chemi-mechanical pulp was studied as part of a series of pilot scale refining trials. Statistical models of refiner performance were constructed from these results and non-linear optimization of process conditions were conducted. Optimization results indicated that increasing the ratio of specific energy applied in the first stage led to a reduction of some 15 per cent in the total energy requirement. The strategy can also be used to obtain significant increases in pulp quality for a given energy input. 20 refs., 6 tabs.

  4. A proposal for operator team behavior model and operator's thinking mechanism

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Takano, Kenichi; Sasou, Kunihide

    1995-01-01

    The operating environment in huge systems like nuclear power plants or airplanes is changing rapidly with the advance of computer technology. It is necessary to elucidate the thinking process of operators and the decision-making process of an operator team in abnormal situations, in order to prevent human errors in such an environment. The Central Research Institute of Electric Power Industry is promoting a research project to establish human error prevention countermeasures by modeling and simulating the thinking process of operators and the decision-making process of an operator team. In the previous paper, the application of multilevel flow modeling to a mental model that conducts future prediction and cause identification was proposed, and its characteristics were verified by experienced plant operators. In this paper, an operator team behavior model and a fundamental operator's thinking mechanism, especially 'situation understanding', are proposed, and the proposals are evaluated by experiments using a full-scale simulator. The results reveal that some assumptions, such as 'communication is done between a leader and a follower', are almost appropriate, and that situation understanding can be represented by 'probable candidates for the cause; determination of a parameter which changes when an event occurs; determination of parameters which are influenced by the change of the previous parameter; determination of a principal parameter; and future prediction of the principal parameter'. (author)

  5. Modeling Optimal Scheduling for Pumping System to Minimize Operation Cost and Enhance Operation Reliability

    Directory of Open Access Journals (Sweden)

    Yin Luo

    2012-01-01

    Full Text Available Traditional pump scheduling models neglect operation reliability, which directly relates to the unscheduled maintenance cost and the wear cost during operation. To address this, based on the assumption that vibration directly relates to operation reliability and the degree of wear, operation reliability can be expressed as the normalized vibration level. The characteristic of vibration with respect to the operating point was studied; it can be concluded that the idealized flow-versus-vibration plot has a distinct bathtub shape. There is a narrow sweet spot (80 to 100 percent of BEP) in which low vibration levels are obtained, and, absent resonance phenomena, vibration also scales with the square of the rotation speed. Operation reliability can therefore be modeled as a function of the capacity and rotation speed of the pump, and this function can be added to the traditional model to form a new one. Compared with the traditional method, the results show that the new model corrects the schedules produced by the traditional one and makes the pump operate at low vibration, so that operation reliability increases and maintenance cost decreases.
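The abstract's two assumptions, a bathtub-shaped flow-vibration curve with a sweet spot near 80-100% BEP and vibration scaling with the square of rotation speed, can be written down directly. All constants below are hypothetical placeholders, not the paper's fitted values.

```python
# Illustrative sketch of the abstract's assumptions (numbers hypothetical):
# vibration follows a bathtub curve in q/q_BEP with a sweet spot near
# 80-100% BEP and scales with the square of rotation speed; operation
# reliability is then the normalized (inverted) vibration level.

V_MIN, V_MAX = 1.0, 8.0   # assumed vibration bounds (mm/s) at rated speed

def vibration(q_frac, n_frac):
    """Bathtub in flow fraction q/q_BEP, scaled by (n/n_rated)^2."""
    base = V_MIN + 60.0 * (q_frac - 0.9) ** 4   # flat bottom near 90% BEP
    return base * n_frac ** 2

def reliability(q_frac, n_frac):
    """Map vibration to [0, 1]: low vibration -> reliability near 1."""
    v = min(max(vibration(q_frac, n_frac), V_MIN), V_MAX)
    return (V_MAX - v) / (V_MAX - V_MIN)
```

A scheduling model in the spirit of the abstract would then add `reliability` as a term (or constraint) alongside the energy cost when choosing pump speeds and flows.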

  6. Stochastic and simulation models of maritime intercept operations capabilities

    OpenAIRE

    Sato, Hiroyuki

    2005-01-01

    The research formulates and exercises stochastic and simulation models to assess the Maritime Intercept Operations (MIO) capabilities. The models focus on the surveillance operations of the Maritime Patrol Aircraft (MPA). The analysis using the models estimates the probability with which a terrorist vessel (Red) is detected, correctly classified, and escorted for intensive investigation and neutralization before it leaves an area of interest (AOI). The difficulty of obtaining adequate int...

  7. Operator model-based design and evaluation of advanced systems

    International Nuclear Information System (INIS)

    Schryver, J.C.

    1988-01-01

    A multi-level operator modeling approach is recommended to provide broad support for the integrated design of advanced control and protection systems for new nuclear power plants. Preliminary design should address the symbiosis of automated systems and human operator by giving careful attention to the roles assigned to these two system elements. A conceptual model of the operator role is developed in the context of a command control-communication problem. According to this approach, joint responsibility can be realized in at least two ways: sharing or allocation. The inherent stabilities of different regions of the operator role space are considered

  8. Operational risk quantification and modelling within Romanian insurance industry

    Directory of Open Access Journals (Sweden)

    Tudor Răzvan

    2017-07-01

    Full Text Available This paper aims at covering and describing the shortcomings of various models used to quantify and model operational risk within the insurance industry, with a particular focus on Romanian specific regulation: Norm 6/2015 concerning operational risk arising from IT systems. While most local insurers are focusing on implementing the standard model to compute the operational risk solvency capital required, the local regulator has issued a norm that requires identifying and assessing IT-based operational risks from an ISO 27001 perspective. The challenges raised by the correlations assumed in the standard model are substantially increased by this new regulation, which requires only the identification and quantification of IT operational risks. The solvency capital requirement stipulated by the implementation of Solvency II does not recommend a model or formula for integrating the newly identified risks into the operational risk capital requirements. In this context we assess the academic and practitioners' understanding of the frequency-severity approach, Bayesian estimation techniques, scenario analysis, and risk accounting based on risk units, and how they could support the modelling of operational risks that are IT based. Developing an internal model only for the operational risk capital requirement has proved, so far, costly and not necessarily beneficial for local insurers. As the IT component will play a key role in the future of the insurance industry, the result of this analysis provides a specific approach to operational risk modelling that can be implemented in the context of Solvency II, in the particular situation when (internal or external) operational risk databases are scarce or not available.
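Of the techniques listed, the frequency-severity approach is the most mechanical to sketch: annual operational loss is a Poisson-distributed number of events with lognormal severities, and the capital figure is a high quantile of the simulated aggregate. The parameters below are hypothetical, not calibrated to any insurer.

```python
import math
import random
import statistics

# Hedged sketch of the frequency-severity approach: annual loss is the
# sum of N severities, N ~ Poisson(LAM), severities ~ Lognormal(MU, SIGMA).

random.seed(42)
LAM, MU, SIGMA = 3.0, 10.0, 1.2   # hypothetical frequency / severity params

def poisson(lam):
    """Knuth's algorithm for a Poisson draw using random.random()."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def annual_loss():
    n = poisson(LAM)
    return sum(random.lognormvariate(MU, SIGMA) for _ in range(n))

losses = sorted(annual_loss() for _ in range(20000))
var_995 = losses[int(0.995 * len(losses))]   # 99.5% VaR, as in Solvency II
mean_loss = statistics.fmean(losses)
```

The heavy right tail of the lognormal is what drives the gap between the expected annual loss and the 99.5% quantile used for the capital requirement.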

  9. Launch and Landing Effects Ground Operations (LLEGO) Model

    Science.gov (United States)

    2008-01-01

    LLEGO is a model for understanding recurring launch and landing operations costs at Kennedy Space Center for human space flight. Launch and landing operations are often referred to as ground processing, or ground operations. Currently, this function is specific to the ground operations for the Space Shuttle Space Transportation System within the Space Shuttle Program. The Constellation system to follow the Space Shuttle consists of the crewed Orion spacecraft atop an Ares I launch vehicle and the uncrewed Ares V cargo launch vehicle. The Constellation flight and ground systems build upon many elements of the existing Shuttle flight and ground hardware, as well as upon existing organizations and processes. In turn, the LLEGO model builds upon past ground operations research, modeling, data, and experience in estimating for future programs. Rather than simply providing estimates, the LLEGO model's main purpose is to explain expenses by relating complex relationships among functions (ground operations contractor, subcontractors, civil service technical staff, center management, operations, etc.) to tangible drivers. Drivers include flight system complexity and reliability, as well as operations and supply chain management processes and technology. Together these factors define the operability and potential improvements for any future system, from the most direct to the least direct expenses.

  10. Analysis and Modeling of Ground Operations at Hub Airports

    Science.gov (United States)

    Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.

    2000-01-01

    Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
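A minimal version of the taxi-out queuing idea above can be sketched as a single-server runway queue with Poisson pushbacks and a fixed runway service time. The rates and times below are hypothetical, and this is far simpler than the models in the paper.

```python
import random

# Toy sketch of a taxi-out queue: departures push back, taxi to the
# runway in a fixed unimpeded time, then wait for the departures ahead.

random.seed(7)
SERVICE = 1.0     # minutes per departure at the runway (hypothetical)
UNIMPEDED = 12.0  # unimpeded taxi-out time, minutes (hypothetical)

def simulate(pushback_rate, horizon=600.0):
    """Poisson pushbacks over `horizon` minutes; FIFO runway queue.
    Returns the mean taxi-out time (unimpeded + queueing + service)."""
    t, runway_free, taxi_times = 0.0, 0.0, []
    while t < horizon:
        t += random.expovariate(pushback_rate)   # next pushback
        ready = t + UNIMPEDED                    # reaches the runway queue
        start = max(ready, runway_free)          # may wait for queue ahead
        runway_free = start + SERVICE
        taxi_times.append(start + SERVICE - t)
    return sum(taxi_times) / len(taxi_times)

light = simulate(0.3)   # ~18 departures/hour: little queueing
heavy = simulate(0.9)   # ~54 departures/hour: congestion builds
```

Even this toy model reproduces the qualitative behavior the authors exploit: mean taxi-out time is roughly constant at low demand and grows sharply as runway utilization approaches one.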

  11. Operation quality assessment model for video conference system

    Science.gov (United States)

    Du, Bangshi; Qi, Feng; Shao, Sujie; Wang, Ying; Li, Weijian

    2018-01-01

    Video conference systems have become an important support platform for smart grid operation and management, and their operation quality increasingly concerns grid enterprises. First, an evaluation indicator system covering network, business and operation-maintenance aspects was established on the basis of the video conference system's operation statistics. Then, an operation quality assessment model combining a genetic algorithm with a regularized BP neural network was proposed, which outputs the operation quality level of the system within a time period and provides company managers with optimization advice. The simulation results show that the proposed evaluation model offers the advantages of fast convergence and high prediction accuracy in contrast with a regularized BP neural network alone, and its generalization ability is superior to LM-BP and Bayesian BP neural networks.

  12. Simulation Modeling of a Facility Layout in Operations Management Classes

    Science.gov (United States)

    Yazici, Hulya Julie

    2006-01-01

    Teaching quantitative courses can be challenging. Similarly, layout modeling and lean production concepts can be difficult to grasp in an introductory OM (operations management) class. This article describes a simulation model developed in PROMODEL to facilitate the learning of layout modeling and lean manufacturing. Simulation allows for the…

  13. Designing visual displays and system models for safe reactor operations

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S.A.

    1995-12-31

    The material presented in this paper is based on two studies involving the design of visual displays and the user's prospective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. The contents of this paper focus on the studies and how they are applicable to the safety of operating reactors.

  14. A toy model for higher spin Dirac operators

    International Nuclear Information System (INIS)

    Eelbode, D.; Van de Voorde, L.

    2010-01-01

    This paper deals with the higher spin Dirac operator Q_{2,1} acting on functions taking values in an irreducible representation space for so(m) with highest weight (5/2, 3/2, 1/2, ..., 1/2). This operator acts as a toy model for generalizations of the classical Rarita-Schwinger equations in Clifford analysis. Polynomial null solutions for this operator are studied in particular.

  15. MODELING THE FLIGHT TRAJECTORY OF OPERATIONAL-TACTICAL BALLISTIC MISSILES

    Directory of Open Access Journals (Sweden)

    I. V. Filipchenko

    2018-01-01

    Full Text Available The article presents the basic approaches to updating combat operations modeling systems in the area of enemy missile attack simulation, taking into account the possibility of tactical ballistic missile maneuvering during flight. The results of simulating combat tactical missile defense operations are given.

  16. Design and modeling of reservoir operation strategies for sediment management

    NARCIS (Netherlands)

    Sloff, C.J.; Omer, A.Y.A.; Heynert, K.V.; Mohamed, Y.A.

    2015-01-01

    Appropriate operation strategies that allow for sediment flushing and sluicing (sediment routing) can reduce rapid storage losses of (hydropower and water-supply) reservoirs. In this study we have shown, using field observations and computational models, that the efficiency of these operations

  17. Modeling Methodologies for Representing Urban Cultural Geographies in Stability Operations

    National Research Council Canada - National Science Library

    Ferris, Todd P

    2008-01-01

    ... 2.0.0, in an effort to provide modeling methodologies for a single simulation tool capable of exploring the complex world of urban cultural geographies undergoing Stability Operations in an irregular warfare (IW) environment...

  18. Aviation Shipboard Operations Modeling and Simulation (ASOMS) Laboratory

    Data.gov (United States)

    Federal Laboratory Consortium — Purpose: It is the mission of the Aviation Shipboard Operations Modeling and Simulation (ASOMS) Laboratory to provide a means by which to virtually duplicate products...

  19. Advancing reservoir operation description in physically based hydrological models

    Science.gov (United States)

    Anghileri, Daniela; Giudici, Federico; Castelletti, Andrea; Burlando, Paolo

    2016-04-01

    Recent decades have seen significant advances in our capacity to characterize and reproduce hydrological processes within physically based models. Yet, when the human component is considered (e.g. reservoirs, water distribution systems), the associated decisions are generally modeled with very simplistic rules, which might underperform in reproducing the actual operators' behaviour on a daily or sub-daily basis. For example, reservoir operations are usually described by a target-level rule curve, which represents the level that the reservoir should track during normal operating conditions. The associated release decision is determined by the current state of the reservoir relative to the rule curve. This modeling approach can reasonably reproduce the seasonal water volume shift due to reservoir operation. Still, it cannot capture more complex decision-making processes in response, e.g., to fluctuations of energy prices and demands, the temporal unavailability of power plants, or the varying amount of snow accumulated in the basin. In this work, we link a physically explicit hydrological model with detailed hydropower behavioural models describing the decision-making process of the dam operator. In particular, we consider two categories of behavioural models: explicit or rule-based behavioural models, where reservoir operating rules are empirically inferred from observational data, and implicit or optimization-based behavioural models, where, following a normative economic approach, the decision maker is represented as a rational agent maximising a utility function. We compare these two alternate modelling approaches on the real-world water system of the Lake Como catchment in the Italian Alps. The water system is characterized by the presence of 18 artificial hydropower reservoirs generating almost 13% of the Italian hydropower production. Results show to what extent the hydrological regime in the catchment is affected by different behavioural models and reservoir
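The target-level rule curve that the authors contrast with behavioural models can be stated in a few lines; all targets, bounds and the feedback gain below are hypothetical.

```python
# Minimal sketch of a target-level rule curve (all numbers hypothetical):
# the release responds to the storage deviation from a seasonal target,
# clipped to outlet constraints.

# Seasonal target storage (10^6 m3): higher in May-August for snowmelt
TARGET = {m: 60.0 + 25.0 * (m in (5, 6, 7, 8)) for m in range(1, 13)}
R_MIN, R_MAX = 5.0, 80.0   # environmental minimum / outlet capacity

def rule_curve_release(month, storage, inflow, k=0.5):
    """Pass the inflow plus a fraction k of the excess over target."""
    excess = storage - TARGET[month]
    return min(max(inflow + k * excess, R_MIN), R_MAX)
```

A behavioural model in the paper's sense would replace this fixed feedback on storage with rules inferred from observed operator decisions, or with an agent maximising an economic utility.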

  20. Bayesian network modeling of operator's state recognition process

    International Nuclear Information System (INIS)

    Hatakeyama, Naoki; Furuta, Kazuo

    2000-01-01

    Nowadays we face the difficult problem of establishing a good relationship between humans and machines. To solve this problem, we suppose that machine systems need to have a model of human behavior. In this study we model the state cognition process of a PWR plant operator as an example. We use a Bayesian network as an inference engine. We incorporate the knowledge hierarchy in the Bayesian network and confirm its validity using the example of a PWR plant operator. (author)
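A toy version of such a state-recognition network, two observable indicators conditioned on a hidden plant state, can be inferred by direct enumeration. The structure and every probability below are hypothetical, not taken from the paper.

```python
# Toy Bayesian-network sketch of operator state recognition: infer a
# hidden plant state from two indicator readings. The structure
# (state -> alarm, state -> trend) and all probabilities are hypothetical.

P_STATE = {"normal": 0.9, "leak": 0.1}      # prior over the hidden state
P_ALARM = {"normal": 0.05, "leak": 0.8}     # P(alarm on | state)
P_TREND = {"normal": 0.1, "leak": 0.7}      # P(level falling | state)

def posterior(alarm, falling):
    """P(state | evidence) by direct enumeration over the joint."""
    joint = {}
    for s in P_STATE:
        pa = P_ALARM[s] if alarm else 1 - P_ALARM[s]
        pt = P_TREND[s] if falling else 1 - P_TREND[s]
        joint[s] = P_STATE[s] * pa * pt
    z = sum(joint.values())
    return {s: v / z for s, v in joint.items()}

belief = posterior(alarm=True, falling=True)
```

Two mutually supporting indicators overturn the strong prior for "normal", mirroring how an operator's belief shifts as evidence accumulates.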

  1. Marine Vessel Models in Changing Operational Conditions - A Tutorial

    DEFF Research Database (Denmark)

    Perez, Tristan; Sørensen, Asgeir; Blanke, Mogens

    2006-01-01

    This tutorial paper provides an introduction, from a systems perspective, to the topic of ship motion dynamics of surface ships. It presents a classification of parametric models currently used for monitoring and control of marine vessels. These models are valid for certain vessel operational conditions (VOC). However, since marine systems operate in changing VOCs, there is a need to adapt the models. To date, there is no theory available to describe a general model valid across different VOCs due to the complexity of the hydrodynamics involved. It is believed that system identification could......

  2. THE HANFORD WASTE FEED DELIVERY OPERATIONS RESEARCH MODEL

    International Nuclear Information System (INIS)

    Berry, J.; Gallaher, B.N.

    2011-01-01

    Washington River Protection Solutions (WRPS), the Hanford tank farm contractor, is tasked with the long term planning of the cleanup mission. Cleanup plans do not explicitly reflect the mission effects associated with tank farm operating equipment failures. EnergySolutions, a subcontractor to WRPS, has developed, in conjunction with WRPS tank farm staff, an Operations Research (OR) model to assess and identify areas to improve the performance of the Waste Feed Delivery Systems. This paper provides an example of how OR modeling can be used to help identify and mitigate operational risks at the Hanford tank farms.

  3. Role of cognitive models of operators in the design, operation and licensing of nuclear power plants

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1982-01-01

    Cognitive models of the behavior of nuclear power plant operators - that is, models developed in terms of human properties rather than external task characteristics - are assuming increasingly important roles in plant design, operation and licensing. This is partly due to an increased concern for human decision making during unfamiliar plant conditions, and partly due to problems that arise when modern information technology is used to support operators in complex situations. Some of the problems identified during work on interface design and risk analysis are described. First, the question of categories of models is raised. Next, the use of cognitive models for system design is discussed. The use of the available cognitive models for more effective operator training is also advocated. The need for using cognitive models in risk analysis is also emphasized. Finally, the sources of human performance data, that is, event reports, incident analysis, experiments, and training simulators are mentioned, and the need for a consistent framework for data analysis based on cognitive models is discussed

  4. Modeling and Simulation for Mission Operations Work System Design

    Science.gov (United States)

    Sierhuis, Maarten; Clancey, William J.; Seah, Chin; Trimble, Jay P.; Sims, Michael H.

    2003-01-01

Work System analysis and design is complex and non-deterministic. In this paper we describe Brahms, a multiagent modeling and simulation environment for designing complex interactions in human-machine systems. Brahms was originally conceived as a business process design tool that simulates work practices, including social systems of work. We describe our modeling and simulation method for mission operations work systems design, based on a research case study in which we used Brahms to design mission operations for a proposed discovery mission to the Moon. We then describe the results of an actual method application project: the Brahms Mars Exploration Rover. Space mission operations are similar to operations of traditional organizations; we show that the application of Brahms for space mission operations design is relevant and transferable to other types of business processes in organizations.

  5. Structure of Poincare covariant tensor operators in quantum mechanical models

    International Nuclear Information System (INIS)

    Polyzou, W.N.; Klink, W.H.

    1988-01-01

    The structure of operators that transform covariantly in Poincare invariant quantum mechanical models is analyzed. These operators are shown to have an interaction dependence that comes from the geometry of the Poincare group. The operators can be expressed in terms of matrix elements in a complete set of eigenstates of the mass and spin operators associated with the dynamical representation of the Poincare group. The matrix elements are factored into geometrical coefficients (Clebsch-Gordan coefficients for the Poincare group) and invariant matrix elements. The geometrical coefficients are fixed by the transformation properties of the operator and the eigenvalue spectrum of the mass and spin. The invariant matrix elements, which distinguish between different operators with the same transformation properties, are given in terms of a set of invariant form factors. copyright 1988 Academic Press, Inc

  6. An Economic Model of U.S. Airline Operating Expenses

    Science.gov (United States)

    Harris, Franklin D.

    2005-01-01

    This report presents a new economic model of operating expenses for 67 airlines. The model is based on data that the airlines reported to the United States Department of Transportation in 1999. The model incorporates expense-estimating equations that capture direct and indirect expenses of both passenger and cargo airlines. The variables and business factors included in the equations are detailed enough to calculate expenses at the flight equipment reporting level. Total operating expenses for a given airline are then obtained by summation over all aircraft operated by the airline. The model's accuracy is demonstrated by correlation with the DOT Form 41 data from which it was derived. Passenger airlines are more accurately modeled than cargo airlines. An appendix presents a concise summary of the expense estimating equations with explanatory notes. The equations include many operational and aircraft variables, which accommodate any changes that airline and aircraft manufacturers might make to lower expenses in the future. In 1999, total operating expenses of the 67 airlines included in this study amounted to slightly over $100.5 billion. The economic model reported herein estimates $109.3 billion.

  7. A model to predict productivity of different chipping operations ...

    African Journals Online (AJOL)

    Additional international case studies from North America, South America, and central and northern Europe were used to test the accuracy of the model, in which 15 studies confirmed the model's validity and two failed to pass the test. Keywords: average piece size, chipper, power, sensitivity analysis, type of operation, unit ...

  8. A Model for Resource Allocation Using Operational Knowledge Assets

    Science.gov (United States)

    Andreou, Andreas N.; Bontis, Nick

    2007-01-01

    Purpose: The paper seeks to develop a business model that shows the impact of operational knowledge assets on intellectual capital (IC) components and business performance and use the model to show how knowledge assets can be prioritized in driving resource allocation decisions. Design/methodology/approach: Quantitative data were collected from 84…

  9. The development of a model of control room operator cognition

    International Nuclear Information System (INIS)

    Harrison, C. Felicity

    1998-01-01

    The nuclear generating station CRO is one of the main contributors to plant performance and safety. In the past, studies of operator behaviour have been made under emergency or abnormal situations, with little consideration being given to the more routine aspects of plant operation. One of the tasks of the operator is to detect the early signs of a problem, and to take steps to prevent a transition to an abnormal plant state. In order to do this, the CRO must determine that plant indications are no longer in the normal range, and take action to prevent a further move away from normal. This task is made more difficult by the extreme complexity of the control room, and by the many hindrances that the operator must face. It would therefore be of great benefit to understand CRO cognitive performance, especially under normal operating conditions. Through research carried out at several Canadian nuclear facilities, the consultants were able to develop a deeper understanding of CRO monitoring of highly automated systems during normal operations, and specifically to investigate the contributions of cognitive skills to monitoring performance. The overall objective of this research was to develop and validate a model of CRO monitoring. The findings of this research have practical implications for systems integration, training, and interface design. The result of this work was a model of operator monitoring activities. (author)

  10. Estimation of pump operational state with model-based methods

    International Nuclear Information System (INIS)

    Ahonen, Tero; Tamminen, Jussi; Ahola, Jero; Viholainen, Juha; Aranto, Niina; Kestilae, Juha

    2010-01-01

    Pumps are widely used in industry, and they account for 20% of the industrial electricity consumption. Since the speed variation is often the most energy-efficient method to control the head and flow rate of a centrifugal pump, frequency converters are used with induction motor-driven pumps. Although a frequency converter can estimate the operational state of an induction motor without external measurements, the state of a centrifugal pump or other load machine is not typically considered. The pump is, however, usually controlled on the basis of the required flow rate or output pressure. As the pump operational state can be estimated with a general model having adjustable parameters, external flow rate or pressure measurements are not necessary to determine the pump flow rate or output pressure. Hence, external measurements could be replaced with an adjustable model for the pump that uses estimates of the motor operational state. Besides control purposes, modelling the pump operation can provide useful information for energy auditing and optimization purposes. In this paper, two model-based methods for pump operation estimation are presented. Factors affecting the accuracy of the estimation methods are analyzed. The applicability of the methods is verified by laboratory measurements and tests in two pilot installations. Test results indicate that the estimation methods can be applied to the analysis and control of pump operation. The accuracy of the methods is sufficient for auditing purposes, and the methods can inform the user if the pump is driven inefficiently.
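The adjustable-model idea above can be illustrated with a small numpy sketch: fit a quadratic head curve to a handful of operating points, then invert it to estimate flow rate from a head reading. The curve form H(Q) = a - b*Q^2, the numbers, and the affinity-law scaling are illustrative assumptions, not the authors' actual pump model.

```python
import numpy as np

# Illustrative sketch: identify an adjustable pump model from a few
# operating points, then use it in place of an external flow measurement.
rng = np.random.default_rng(1)
Q = np.array([10.0, 20.0, 30.0, 40.0, 50.0])            # flow rate, l/s
H = 60.0 - 0.012 * Q**2 + rng.normal(0, 0.05, Q.size)   # measured head, m

# Least-squares fit of the adjustable parameters a, b in H(Q) = a - b*Q^2.
A = np.column_stack([np.ones_like(Q), -Q**2])
a, b = np.linalg.lstsq(A, H, rcond=None)[0]

def flow_from_head(h, n_ratio=1.0):
    """Estimate flow from head; n_ratio rescales speed via the affinity laws."""
    return np.sqrt((a * n_ratio**2 - h) / b)

q_est = flow_from_head(60.0 - 0.012 * 30.0**2)  # should recover roughly 30 l/s
```

Once a and b are identified, the frequency converter's own estimates of rotational speed and head suffice to track the operating point, which is the substitution for external sensors that the record describes.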

  11. Multiobjective Optimization Modeling Approach for Multipurpose Single Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Iosvany Recio Villa

    2018-04-01

    Full Text Available The water resources planning and management discipline recognizes the importance of a reservoir’s carryover storage. However, mathematical models for reservoir operation that include carryover storage are scarce. This paper presents a novel multiobjective optimization modeling framework that uses the ε-constraint method and genetic algorithms as optimization techniques for the operation of multipurpose simple reservoirs, including carryover storage. The carryover storage was conceived by modifying Kritsky and Menkel’s method for reservoir design at the operational stage. The main objective function minimizes the cost of the total annual water shortage for irrigation areas connected to a reservoir, while the secondary one maximizes its energy production. The model includes operational constraints for the reservoir, Kritsky and Menkel’s method, irrigation areas, and the hydropower plant. The study is applied to the Carlos Manuel de Céspedes reservoir, establishing a 12-month planning horizon and an annual reliability of 75%. The results clearly demonstrate the applicability of the model, obtaining monthly releases from the reservoir that include the carryover storage, the degree of reservoir inflow regulation, water shortages in irrigation areas, and the energy generated by the hydroelectric plant. The main product is an operational graph that includes zones as well as rule and guide curves, which are used as triggers for long-term reservoir operation.

  12. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  13. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
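The validation recipe in these two records, an L1-penalised (LASSO) logistic model scored by AUC and tested against label permutations, can be sketched on synthetic data. The proximal-gradient fit, the data, and all constants below are illustrative assumptions, not the authors' NTCP code.

```python
import numpy as np

# Synthetic cohort: 6 candidate predictors, only the first two informative.
rng = np.random.default_rng(0)
n, p = 300, 6
X = rng.standard_normal((n, p))
true_w = np.array([2.0, -2.0, 0.0, 0.0, 0.0, 0.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_w))).astype(float)

def soft(z, t):
    # soft-thresholding, the proximal map of the L1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_logistic(X, y, lam=0.05, lr=0.5, iters=400):
    # L1-penalised logistic regression via proximal gradient descent (ISTA)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        ph = 1 / (1 + np.exp(-(X @ w + b)))
        w = soft(w - lr * X.T @ (ph - y) / len(y), lr * lam)
        b -= lr * np.mean(ph - y)
    return w, b

def auc(y, s):
    # rank-based AUC (area under the receiver operating characteristic curve)
    r = np.empty(len(s)); r[np.argsort(s)] = np.arange(1, len(s) + 1)
    n1 = y.sum()
    return (r[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * (len(y) - n1))

w, b = lasso_logistic(X, y)
auc_obs = auc(y, X @ w + b)
# Permutation test: refit on shuffled labels; real performance should
# clearly exceed the null distribution of permuted-label AUCs.
perm = [auc(yp, X @ lasso_logistic(X, yp)[0])
        for yp in (rng.permutation(y) for _ in range(20))]
p_val = np.mean([a >= auc_obs for a in perm])
```

A small permuted-label AUC spread around 0.5, with the observed AUC far outside it, is exactly the "statistical significance by permutation testing" conclusion of the record; in practice one would also wrap the fit in (double) cross-validation rather than score on training data.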

  14. Koopman Operator Framework for Time Series Modeling and Analysis

    Science.gov (United States)

    Surana, Amit

    2018-01-01

    We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which is essential for model comparison and clustering. We employ the space of Koopman model forms equipped with these distances, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting and anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting and anomaly detection in a power grid application.
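The data-driven identification step at the core of this framework can be sketched with exact dynamic mode decomposition (DMD), a standard finite-dimensional approximation of the Koopman operator. The damped-rotation system below is a stand-in example, not the paper's application.

```python
import numpy as np

# Generate snapshots of a linear system x_{k+1} = A x_k (damped rotation),
# then recover the operator and its spectrum from data alone.
rng = np.random.default_rng(0)
theta, rho = 0.3, 0.95
A = rho * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
X = np.empty((2, 50))
X[:, 0] = rng.standard_normal(2)
for k in range(49):
    X[:, k + 1] = A @ X[:, k]

# Exact DMD: least-squares linear operator mapping each snapshot to its
# successor, a finite approximation of the (linear) Koopman operator.
A_dmd = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
eigs = np.linalg.eigvals(A_dmd)   # spectral "fingerprint" of the series
```

The recovered eigenvalues (decay rate rho, rotation frequency theta) are the kind of spectral invariants the record proposes to use as features for classification and for comparing model forms.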

  15. Operations and support cost modeling of conceptual space vehicles

    Science.gov (United States)

    Ebeling, Charles

    1994-01-01

    The University of Dayton is pleased to submit this annual report to the National Aeronautics and Space Administration (NASA) Langley Research Center, which documents the development of an operations and support (O&S) cost model as part of a larger life cycle cost (LCC) structure. It is intended for use during the conceptual design of new launch vehicles and spacecraft. This research is being conducted under NASA Research Grant NAG-1-1327. This research effort changes the focus from that of the first two years, in which a reliability and maintainability model was developed, to the initial development of an operations and support life cycle cost model. Cost categories were initially patterned after NASA's three-axis work breakdown structure consisting of a configuration axis (vehicle), a function axis, and a cost axis. A revised cost element structure (CES), which is currently under study by NASA, was used to establish the basic cost elements used in the model. While the focus of the effort was on operations and maintenance costs and other recurring costs, the computerized model allowed for other cost categories such as RDT&E and production costs to be addressed. Secondary tasks performed concurrently with the development of the costing model included support and upgrades to the reliability and maintainability (R&M) model. The primary result of the current research has been a methodology and a computer implementation of the methodology to provide for timely operations and support cost analysis during the conceptual design activities.

  16. Categorical model of structural operational semantics for imperative language

    Directory of Open Access Journals (Sweden)

    William Steingartner

    2016-12-01

    Full Text Available The definition of a programming language consists of the formal definition of syntax and semantics. One of the most popular semantic methods used in various stages of software engineering is structural operational semantics. It describes program behavior in the form of state changes after execution of elementary steps of the program. This feature makes structural operational semantics useful for the implementation of programming languages and also for verification purposes. In our paper we present a new approach to structural operational semantics. We model the behavior of programs in a category of states, where objects are states, an abstraction of computer memory, and morphisms model state changes, i.e. the execution of a program in elementary steps. The advantage of using a categorical model is its exact mathematical structure with many useful proven properties and its graphical illustration of program behavior as a path, i.e. a composition of morphisms. Our approach is able to accentuate the dynamics of structural operational semantics. For simplicity, we assume that data are intuitively typed. Our model is not only a new model of the structural operational semantics of imperative programming languages; thanks to its visualization and simplicity, it can also serve educational purposes.
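The states-as-objects, statements-as-morphisms view can be rendered as a toy interpreter: each elementary step is a state-to-state function, and sequencing a program is function composition. This is an illustrative rendering of the idea, not the paper's formalism.

```python
# States are dicts modelling memory; morphisms are state -> state functions.
def assign(var, expr):
    # One assignment statement as a morphism on the category of states.
    return lambda state: {**state, var: expr(state)}

def seq(*steps):
    # Sequential composition of morphisms: the program's execution path.
    def run(state):
        for step in steps:
            state = step(state)
        return state
    return run

# x := 1; y := x + 2  as a composite morphism
prog = seq(assign("x", lambda s: 1),
           assign("y", lambda s: s["x"] + 2))
```

Running `prog` on an initial state traces exactly the path of elementary state changes that the categorical model draws as a composition of arrows.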

  17. VERIFICATION OF GEAR DYNAMIC MODEL IN DIFFERENT OPERATING CONDITIONS

    Directory of Open Access Journals (Sweden)

    Grzegorz PERUŃ

    2014-09-01

    Full Text Available The article presents the results of verification of a dynamic model of a drive system with gears. Tests were carried out on the real object in different operating conditions. Simulation studies were also carried out for the same assumed conditions. A comparison of the results obtained from these two series of tests helped determine the suitability of the model and verify the possibility of replacing experimental research with simulations that use the dynamic model.

  18. A "Toy" Model for Operational Risk Quantification using Credibility Theory

    OpenAIRE

    Hans Bühlmann; Pavel V. Shevchenko; Mario V. Wüthrich

    2009-01-01

    To meet the Basel II regulatory requirements for the Advanced Measurement Approaches in operational risk, the bank's internal model should make use of the internal data, relevant external data, scenario analysis and factors reflecting the business environment and internal control systems. One of the unresolved challenges in operational risk is combining these data sources appropriately. In this paper we focus on the quantification of the low frequency high impact losses exceeding some high thr...

  19. Designing visual displays and system models for safe reactor operations

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S.A.

    1995-01-01

    The material presented in this paper is based on two studies involving the design of visual displays and the user's perspective model of a system. The studies involve a methodology known as Neuro-Linguistic Programming and its use in expanding design choices from the operator's perspective image. This paper focuses on the two studies and how they are applicable to the safety of operating reactors

  20. Model of environmental life cycle assessment for coal mining operations

    Energy Technology Data Exchange (ETDEWEB)

    Burchart-Korol, Dorota, E-mail: dburchart@gig.eu; Fugiel, Agata, E-mail: afugiel@gig.eu; Czaplicka-Kolarz, Krystyna, E-mail: kczaplicka@gig.eu; Turek, Marian, E-mail: mturek@gig.eu

    2016-08-15

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. - Highlights: • A computational LCA model for assessment of coal mining operations • Identification of

  1. Model of environmental life cycle assessment for coal mining operations

    International Nuclear Information System (INIS)

    Burchart-Korol, Dorota; Fugiel, Agata; Czaplicka-Kolarz, Krystyna; Turek, Marian

    2016-01-01

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. - Highlights: • A computational LCA model for assessment of coal mining operations • Identification of

  2. Hysteresis modeling based on saturation operator without constraints

    International Nuclear Information System (INIS)

    Park, Y.W.; Seok, Y.T.; Park, H.J.; Chung, J.Y.

    2007-01-01

    This paper proposes a simple way to model complex hysteresis in a magnetostrictive actuator by employing saturation operators without constraints. Having no constraints causes a singularity problem, i.e. the inverse matrix cannot be obtained when calculating the weights. To overcome this, a pseudoinverse concept is introduced. Simulation results are compared with experimental data from a Terfenol-D actuator. It is clear that the proposed model is much closer to the experimental data than the modified PI model. The relative error is calculated as 12% with the modified PI model and less than 1% with the proposed model.
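The weight-identification step can be sketched in numpy: stack the outputs of the saturation operators as columns and solve for the weights with a pseudoinverse, which stays well defined even when the normal equations are singular. The operator form, levels, and target response below are illustrative assumptions, not the paper's actuator data.

```python
import numpy as np

def sat(u, level):
    # A simple symmetric saturation operator (illustrative form).
    return np.clip(u, -level, level)

u = np.linspace(-2.0, 2.0, 81)                   # input sweep
levels = [0.5, 1.0, 1.0, 1.5]                    # duplicated level makes
Phi = np.column_stack([sat(u, s) for s in levels])  # Phi.T @ Phi singular
target = 0.8 * sat(u, 0.5) + 0.4 * sat(u, 1.5)   # synthetic response

# Without constraints a plain inverse of Phi.T @ Phi would fail here;
# the Moore-Penrose pseudoinverse returns the minimum-norm solution.
w = np.linalg.pinv(Phi) @ target
```

Because the two identical columns make the design matrix rank-deficient, `np.linalg.inv(Phi.T @ Phi)` would raise an error, while the pseudoinverse still reproduces the target response exactly, which is the role the record assigns to it.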

  3. Modelling the basic error tendencies of human operators

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance. (author)

  4. Modelling the basic error tendencies of human operators

    International Nuclear Information System (INIS)

    Reason, James

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance. (author)

  5. Modeling of reservoir operation in UNH global hydrological model

    Science.gov (United States)

    Shiklomanov, Alexander; Prusevich, Alexander; Frolking, Steve; Glidden, Stanley; Lammers, Richard; Wisser, Dominik

    2015-04-01

    Climate is changing, and river flow is an integrated characteristic reflecting numerous environmental processes and their changes aggregated over large areas. Anthropogenic impacts on river flow, however, can significantly exceed the changes associated with climate variability. Besides irrigation, reservoirs and dams are among the major anthropogenic factors affecting streamflow. They distort the hydrological regime of many rivers by trapping freshwater runoff, modifying the timing of river discharge and increasing the evaporation rate. Thus, reservoirs are an integral part of the global hydrological system, and their impacts on rivers have to be taken into account for better quantification and understanding of hydrological changes. We developed a new technique, which was incorporated into the WBM-TrANS model (Water Balance Model-Transport from Anthropogenic and Natural Systems), to simulate river routing through large reservoirs and natural lakes based on information available from freely accessible databases such as GRanD (the Global Reservoir and Dam database) or NID (National Inventory of Dams for US). Different formulations were applied for unregulated spillway dams and lakes, and for 4 types of regulated reservoirs, which were subdivided based on main purpose including generic (multipurpose), hydropower generation, irrigation and water supply, and flood control. We also incorporated rules for reservoir fill-up and draining at the times of construction and decommission based on available data. The model was tested for many reservoirs of different sizes and types located in various climatic conditions, using several gridded meteorological data sets as model input and observed daily and monthly discharge data from GRDC (Global Runoff Data Center), USGS Water Data (US Geological Survey), and UNH archives. The best results with Nash-Sutcliffe model efficiency coefficient in the range of 0.5-0.9 were obtained for the temperate zone of the Northern Hemisphere, where most of the large
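A single routing step of the kind described above can be sketched as a mass-balance update with a demand-driven release rule and an uncontrolled spillway. The rule and all names are a simplified illustration, not the WBM-TrANS formulation.

```python
# One monthly mass-balance step for a regulated reservoir (illustrative).
def reservoir_step(storage, inflow, demand, s_max, s_dead):
    # Release what is demanded, but never draw storage below dead storage.
    release = min(demand, max(storage + inflow - s_dead, 0.0))
    storage = storage + inflow - release
    # Anything above capacity leaves as uncontrolled spillway overflow.
    spill = max(storage - s_max, 0.0)
    return storage - spill, release, spill

# Example month: 80 units stored, 50 inflow, 20 demanded, capacity 100.
s, r, sp = reservoir_step(storage=80.0, inflow=50.0, demand=20.0,
                          s_max=100.0, s_dead=10.0)
```

Chaining such steps over the simulation period, with purpose-specific release rules replacing the simple demand rule, is how a gridded hydrological model distorts (and thereby reproduces) the observed downstream discharge regime.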

  6. Model of environmental life cycle assessment for coal mining operations.

    Science.gov (United States)

    Burchart-Korol, Dorota; Fugiel, Agata; Czaplicka-Kolarz, Krystyna; Turek, Marian

    2016-08-15

    This paper presents a novel approach to environmental assessment of coal mining operations, which enables assessment of the factors that are both directly and indirectly affecting the environment and are associated with the production of raw materials and energy used in processes. The primary novelty of the paper is the development of a computational environmental life cycle assessment (LCA) model for coal mining operations and the application of the model for coal mining operations in Poland. The LCA model enables the assessment of environmental indicators for all identified unit processes in hard coal mines with the life cycle approach. The proposed model enables the assessment of greenhouse gas emissions (GHGs) based on the IPCC method and the assessment of damage categories, such as human health, ecosystems and resources based on the ReCiPe method. The model enables the assessment of GHGs for hard coal mining operations in three time frames: 20, 100 and 500 years. The model was used to evaluate the coal mines in Poland. It was demonstrated that the largest environmental impacts in damage categories were associated with the use of fossil fuels, methane emissions and the use of electricity, processing of wastes, heat, and steel supports. It was concluded that an environmental assessment of coal mining operations, apart from direct influence from processing waste, methane emissions and drainage water, should include the use of electricity, heat and steel, particularly for steel supports. Because the model allows the comparison of environmental impact assessment for various unit processes, it can be used for all hard coal mines, not only in Poland but also in the world. This development is an important step forward in the study of the impacts of fossil fuels on the environment with the potential to mitigate the impact of the coal industry on the environment. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Fuzzy multiobjective models for optimal operation of a hydropower system

    Science.gov (United States)

    Teegavarapu, Ramesh S. V.; Ferreira, André R.; Simonovic, Slobodan P.

    2013-06-01

    Optimal operation models for a hydropower system using new fuzzy multiobjective mathematical programming models are developed and evaluated in this study. The models (i) use mixed integer nonlinear programming (MINLP) with binary variables and (ii) integrate a new turbine unit commitment formulation along with water quality constraints used for the evaluation of reservoir downstream impairment. The Reardon method, used in the solution of genetic algorithm optimization problems, forms the basis for the development of a new fuzzy multiobjective hydropower system optimization model with the creation of Reardon-type fuzzy membership functions. The models are applied to a real-life hydropower reservoir system in Brazil. Genetic algorithms (GAs) are used to (i) solve the optimization formulations to avoid computational intractability and combinatorial problems associated with binary variables in unit commitment, (ii) efficiently address the Reardon method formulations, and (iii) deal with local optimal solutions obtained from the use of traditional gradient-based solvers. Decision makers' preferences are incorporated within the fuzzy mathematical programming formulations to obtain compromise operating rules for a multiobjective reservoir operation problem dominated by the conflicting goals of energy production, water quality and conservation releases. Results provide insight into the compromise operation rules obtained using the new Reardon fuzzy multiobjective optimization framework and confirm its applicability to a variety of multiobjective water resources problems.
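The fuzzy-compromise idea underlying such formulations can be sketched with linear membership functions that grade how well each conflicting objective is satisfied, and a max-min rule that picks the operating point maximising the smallest membership. The two toy objectives below (shortage cost vs. carryover storage) and all numbers are illustrative, not the paper's model.

```python
import numpy as np

# Decision variable: release volume over a period (arbitrary units).
release = np.linspace(0.0, 100.0, 1001)
shortage_cost = (100.0 - release) ** 2 / 100.0  # falls as release grows
storage_value = 100.0 - release                 # carryover storage shrinks

def mu(x):
    # Linear fuzzy membership: 0 at the worst observed value, 1 at the best.
    return (x - x.min()) / (x.max() - x.min())

m_shortage = mu(-shortage_cost)   # prefer low shortage cost
m_storage = mu(storage_value)     # prefer high carryover storage
satisfaction = np.minimum(m_shortage, m_storage)  # overall degree lambda
best_release = release[np.argmax(satisfaction)]   # max-min compromise
```

Because one membership rises with release while the other falls, the max-min optimum sits at their crossing, an interior compromise rather than either objective's individual optimum; full formulations replace the grid search with MINLP or a GA.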

  8. Modeling of HVAC operational faults in building performance simulation

    International Nuclear Information System (INIS)

    Zhang, Rongpeng; Hong, Tianzhen

    2017-01-01

    Highlights: •Discuss significance of capturing operational faults in existing buildings. •Develop a novel feature in EnergyPlus to model operational faults of HVAC systems. •Compare three approaches to fault modeling using EnergyPlus. •A case study demonstrates the use of the fault-modeling feature. •Future developments of new faults are discussed. -- Abstract: Operational faults are common in the heating, ventilating, and air conditioning (HVAC) systems of existing buildings, leading to a decrease in energy efficiency and occupant comfort. Various fault detection and diagnostic methods have been developed to identify and analyze HVAC operational faults at the component or subsystem level. However, current methods lack a holistic approach to predicting the overall impacts of faults at the building level—an approach that adequately addresses the coupling between various operational components, the synchronized effect between simultaneous faults, and the dynamic nature of fault severity. This study introduces the novel development of a fault-modeling feature in EnergyPlus which fills the knowledge gap left by previous studies. This paper presents the design and implementation of the new feature in EnergyPlus and discusses in detail the fault-modeling challenges faced. The new fault-modeling feature enables EnergyPlus to quantify the impacts of faults on building energy use and occupant comfort, thus supporting decision making on timely fault corrections. Including actual building operational faults in energy models also improves the accuracy of the baseline model, which is critical in the measurement and verification of retrofit or commissioning projects. As an example, EnergyPlus version 8.6 was used to investigate the impacts of a number of typical operational faults in an office building across several U.S. climate zones. The results demonstrate that the faults have significant impacts on building energy performance as well as on occupant comfort.

  9. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever-increasing computing power available at affordable prices. This paper describes this process from several angles: first, the operational requirements which are most critical from the point of view of model performance, both for normal and off-normal operating conditions; second, core model characteristics, discussed in the light of the solutions implemented by Thomson Training and Simulation (TT and S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants; finally, the model validation procedures, which are of course an integral part of model development, and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modeling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is a remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy, as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  10. A simple operational gas release and swelling model. Pt. 1

    International Nuclear Information System (INIS)

    Wood, M.H.; Matthews, J.R.

    1980-01-01

    A new and simple model of fission gas release and swelling has been developed for oxide nuclear fuel under operational conditions. The model, which is to be incorporated into a fuel element behaviour code, is physically based and applicable to fuel at both thermal and fast reactor ratings. In this paper we present that part of the model describing the behaviour of intragranular gas: a future paper will detail the treatment of the grain boundary gas. The results of model calculations are compared with recent experimental observations of intragranular bubble concentrations and sizes, and gas release from fuel irradiated under isothermal conditions. Good agreement is found between experiment and theory. (orig.)

  11. A High-Speed Train Operation Plan Inspection Simulation Model

    Directory of Open Access Journals (Sweden)

    Yang Rui

    2018-01-01

    Full Text Available We developed a train operation simulation tool to inspect a train operation plan. Applying an improved Petri net, trains were regarded as tokens, and lines and stations as places, in accordance with high-speed train operation characteristics and network function. Location changes and running-information transfer of the high-speed trains were realized by customizing a variety of transitions. The model was built on the concept of component combination, considering random disturbances in the process of train running. The simulation framework can be generated quickly and the system operation completed according to the different test requirements and the required network data. We tested the simulation tool on the real-world Wuhan to Guangzhou high-speed line. The results showed that the simulation outcomes basically coincide with objective reality, and that the tool can not only test the feasibility of a high-speed train operation plan, but also serve as a support model for developing a simulation platform with more capabilities.
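
The token/place/transition idea can be sketched as a minimal Petri net. This is a deliberately tiny toy (one train, two stations, deterministic firing), not the paper's improved Petri net with customized transitions and random disturbances.

```python
# Minimal Petri net sketch: a train token moves station -> line section ->
# next station. Place and transition names are invented for illustration.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:               # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:              # produce one token per output place
            self.marking[p] = self.marking.get(p, 0) + 1

net = PetriNet({"station_A": 1, "section_AB": 0, "station_B": 0})
net.add_transition("depart_A", ["station_A"], ["section_AB"])
net.add_transition("arrive_B", ["section_AB"], ["station_B"])
net.fire("depart_A")
net.fire("arrive_B")
print(net.marking)  # {'station_A': 0, 'section_AB': 0, 'station_B': 1}
```

A plan-inspection tool would fire timetable transitions in timestamp order and flag any that are not enabled (e.g., a section still occupied).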

  12. Dynamic and adaptive policy models for coalition operations

    Science.gov (United States)

    Verma, Dinesh; Calo, Seraphin; Chakraborty, Supriyo; Bertino, Elisa; Williams, Chris; Tucker, Jeremy; Rivera, Brian; de Mel, Geeth R.

    2017-05-01

    It is envisioned that the success of future military operations depends on better integration, organizationally and operationally, among allies, coalition members, inter-agency partners, and so forth. However, this leads to a challenging and complex environment where the heterogeneity and dynamism of the operating environment intertwine with the evolving situational factors that affect the decision-making life cycle of the war fighter. Therefore, the users in such environments need secure, accessible, and resilient information infrastructures where policy-based mechanisms adapt the behaviours of the systems to meet end-user goals. By specifying and enforcing a policy-based model and framework for operations and security which accommodates heterogeneous coalitions, high levels of agility can be enabled to allow rapid assembly and restructuring of system and information resources. However, current prevalent policy models (e.g., the rule-based event-condition-action model and its variants) are not sufficient to deal with the highly dynamic and plausibly non-deterministic nature of these environments. Therefore, to address the above challenges, in this paper we present a new approach for policies which enables managed systems to take more autonomic decisions regarding their operations.

  13. Lean waste classification model to support the sustainable operational practice

    Science.gov (United States)

    Sutrisno, A.; Vanany, I.; Gunawan, I.; Asjad, M.

    2018-04-01

    Driven by growing pressure for more sustainable operational practice, improving the classification of non-value-added activities (waste) is one of the prerequisites to realizing the sustainability of a firm. The seven waste types of the Ohno model have become a versatile tool for revealing the occurrence of lean waste. In many recent investigations, however, the Seven Wastes model of Ohno has proved insufficient to cover the types of waste occurring in industrial practice at various application levels. To narrow this limitation, this paper presents an improved waste classification model based on a survey of recent studies discussing waste at various operational stages. Implications of the waste classification model for the body of knowledge and for industrial practice are provided.

  14. Modelling Vessel Traffic Service to understand resilience in everyday operations

    International Nuclear Information System (INIS)

    Praetorius, Gesa; Hollnagel, Erik; Dahlman, Joakim

    2015-01-01

    Vessel Traffic Service (VTS) is a service to promote traffic fluency and safety in the entrance to ports. This article's purpose has been to explore everyday operations of the VTS system to gain insights into how it contributes to safe and efficient traffic movements. Interviews, focus groups and an observation were conducted to collect data about everyday operations, as well as to grasp how the VTS system adapts to changing operational conditions. The results show that work within the VTS domain is highly complex and that the two systems modelled realise their services vastly differently, which in turn affects the systems' ability to monitor, respond and anticipate. This is of great importance to consider whenever changes are planned and implemented within the VTS domain. Only if everyday operations are properly analysed and understood can it be estimated how alterations to technology and organisation will affect overall system performance

  15. Cognitive model of the power unit operator activity

    International Nuclear Information System (INIS)

    Chachko, S.A.

    1992-01-01

    Basic notions making it possible to study and simulate the peculiarities of operator activity, in particular the operator's way of thinking, are considered. Special attention is paid to cognitive models based on the concept of the decisive role of knowledge (its acquisition, storage and application) in human mental processes and activity. The models are based on three basic notions: the professional world image, the activity strategy and spontaneous decisions

  16. Operator-based linearization for efficient modeling of geothermal processes

    OpenAIRE

    Khait, M.; Voskov, D.V.

    2018-01-01

    Numerical simulation is one of the most important tools required for financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical issue. Geothermal reservoir modeling requires the solution of governing equations describing the conservation of mass and energy. The robust, accurate and computationally efficient implementation of ...

  17. MAESTRO - a model and expert system tuning resource for operators

    International Nuclear Information System (INIS)

    Lager, D.L.; Brand, H.R.; Maurer, W.J.; Coffield, F.; Chambers, F.

    1990-01-01

    We have developed MAESTRO, a model and expert system tuning resource for operators. It provides a unified software environment for optimizing the performance of large, complex machines, in particular the Advanced Test Accelerator and Experimental Test Accelerator at Lawrence Livermore National Laboratory. The system incorporates three approaches to tuning: a mouse-based manual interface to select and control magnets and to view displays of machine performance; an automation based on 'cloning the operator' by implementing the strategies and reasoning used by the operator; and an automation based on a simulator model which, when accurately matched to the machine, allows downloading of optimal sets of parameters and permits diagnosing errors in the beam line. The latter two approaches are based on the artificial-intelligence technique known as Expert Systems. (orig.)

  18. Operator regularization in the Weinberg-Salam model

    International Nuclear Information System (INIS)

    Chowdhury, A.M.; McKeon, D.G.C.

    1987-01-01

    The technique of operator regularization is applied to the Weinberg-Salam model. By directly regulating operators that arise in the course of evaluating path integrals in the background-field formalism, we preserve all symmetries of the theory. An expansion due to Schwinger is employed to compute amplitudes perturbatively, thereby avoiding Feynman diagrams. No explicitly divergent quantities arise in this approach. The general features of the method are outlined with particular attention paid to the problem of simultaneously regulating functions of an operator A and inverse functions upon which A itself depends. Specific application is made to computation of the one-loop contribution to the muon-photon vertex in the Weinberg-Salam model in the limit of zero momentum transfer to the photon

  19. Mathematical modelling of unglazed solar collectors under extreme operating conditions

    DEFF Research Database (Denmark)

    Bunea, M.; Perers, Bengt; Eicher, S.

    2015-01-01

    Combined heat pumps and solar collectors got a renewed interest on the heating system market worldwide. Connected to the heat pump evaporator, unglazed solar collectors can considerably increase their efficiency, but they also raise the coefficient of performance of the heat pump with higher average temperature levels at the evaporator. Simulation of these systems requires a collector model that can take into account operation at very low temperatures (below freezing) and under various weather conditions, particularly operation without solar irradiation. A solar collector mathematical model [...] was found due to the condensation phenomenon and up to 40% due to frost under no solar irradiation. Based on experiments carried out at a test facility, every heat flux on the absorber was separately [...]. This work also points out the influence of the operating conditions on the collector's characteristics.

  20. MAESTRO -- A Model and Expert System Tuning Resource for Operators

    International Nuclear Information System (INIS)

    Lager, D.L.; Brand, H.R.; Maurer, W.J.; Coffield, F.E.; Chambers, F.

    1989-01-01

    We have developed MAESTRO, a Model And Expert System Tuning Resource for Operators. It provides a unified software environment for optimizing the performance of large, complex machines, in particular the Advanced Test Accelerator and Experimental Test Accelerator at Lawrence Livermore National Laboratory. The system incorporates three approaches to tuning: a mouse-based manual interface to select and control magnets and to view displays of machine performance; an automation based on ''cloning the operator'' by implementing the strategies and reasoning used by the operator; an automation based on a simulator model which, when accurately matched to the machine, allows downloading of optimal sets of parameters and permits diagnosing errors in the beamline. The latter two approaches are based on the Artificial Intelligence technique known as Expert Systems. 4 refs., 4 figs

  1. Automated particulate sampler field test model operations guide

    Energy Technology Data Exchange (ETDEWEB)

    Bowyer, S.M.; Miley, H.S.

    1996-10-01

    The Automated Particulate Sampler Field Test Model Operations Guide is a collection of documents which provides a complete picture of the Automated Particulate Sampler (APS) and the Field Test in which it was evaluated. The Pacific Northwest National Laboratory (PNNL) Automated Particulate Sampler was developed for the purpose of radionuclide particulate monitoring for use under the Comprehensive Test Ban Treaty (CTBT). Its design was directed by anticipated requirements of small size, low power consumption, low noise level, fully automatic operation, and most predominantly the sensitivity requirements of the Conference on Disarmament Working Paper 224 (CDWP224). This guide is intended to serve as both a reference document for the APS and to provide detailed instructions on how to operate the sampler. This document provides a complete description of the APS Field Test Model and all the activity related to its evaluation and progression.

  2. Dynamic modeling of temperature change in outdoor operated tubular photobioreactors.

    Science.gov (United States)

    Androga, Dominic Deo; Uyar, Basar; Koku, Harun; Eroglu, Inci

    2017-07-01

    In this study, a one-dimensional transient model was developed to analyze the temperature variation of tubular photobioreactors operated outdoors and the validity of the model was tested by comparing the predictions of the model with the experimental data. The model included the effects of convection and radiative heat exchange on the reactor temperature throughout the day. The temperatures in the reactors increased with increasing solar radiation and air temperatures, and the predicted reactor temperatures corresponded well to the measured experimental values. The heat transferred to the reactor was mainly through radiation: the radiative heat absorbed by the reactor medium, ground radiation, air radiation, and solar (direct and diffuse) radiation, while heat loss was mainly through the heat transfer to the cooling water and forced convection. The amount of heat transferred by reflected radiation and metabolic activities of the bacteria and pump work was negligible. Counter-current cooling was more effective in controlling reactor temperature than co-current cooling. The model developed identifies major heat transfer mechanisms in outdoor operated tubular photobioreactors, and accurately predicts temperature changes in these systems. This is useful in determining cooling duty under transient conditions and scaling up photobioreactors. The photobioreactor design and the thermal modeling were carried out and experimental results obtained for the case study of photofermentative hydrogen production by Rhodobacter capsulatus, but the approach is applicable to photobiological systems that are to be operated under outdoor conditions with significant cooling demands.
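
The energy-balance reasoning above can be sketched as a single lumped temperature equation with solar gain and convective loss. This is an assumption-laden simplification: the paper's one-dimensional transient model also tracks ground, air and reflected radiation and cooling-water exchange, and all parameter values below are invented.

```python
# Lumped transient temperature sketch for a liquid-filled reactor tube.
# All parameters (solar flux, film coefficient, mass, area) are illustrative.
def step_temperature(T, dt, solar=600.0, T_air=298.0, h=10.0,
                     area=1.0, alpha=0.8, mass=50.0, cp=4186.0):
    """Advance reactor temperature T (K) by dt seconds with explicit Euler."""
    q_solar = alpha * solar * area     # absorbed solar radiation, W
    q_conv = h * area * (T - T_air)    # convective loss to ambient air, W
    dT = (q_solar - q_conv) * dt / (mass * cp)
    return T + dT

T = 295.0
for _ in range(3600):                  # one hour of sunshine, 1 s steps
    T = step_temperature(T, 1.0)
print(round(T, 2))                     # rises toward T_air + alpha*solar/h
```

The steady-state temperature here is T_air + alpha*solar/h = 346 K; the time constant mass*cp/(h*area) of about 5.8 h explains why the reactor is still warming after one hour.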

  3. Operator-based linearization for efficient modeling of geothermal processes

    NARCIS (Netherlands)

    Khait, M.; Voskov, D.V.

    2018-01-01

    Numerical simulation is one of the most important tools required for financial and operational management of geothermal reservoirs. The modern geothermal industry is challenged to run large ensembles of numerical models for uncertainty analysis, causing simulation performance to become a critical

  4. Water operator partnerships as a model to achieve the Millennium ...

    African Journals Online (AJOL)

    In the void left by the declining popularity of public-private partnerships, the concept of 'water operator partnerships' (WOPs) has increasingly been promoted as an alternative for improving water services provision in developing countries. This paper assesses the potential of such partnerships as a 'model' for contributing to ...

  5. Architecture-based Model for Preventive and Operative Crisis Management

    National Research Council Canada - National Science Library

    Jungert, Erland; Derefeldt, Gunilla; Hallberg, Jonas; Hallberg, Niklas; Hunstad, Amund; Thuren, Ronny

    2004-01-01

    .... A system that should support activities of this type must not only have a high capacity, with respect to the dataflow, but also have suitable tools for decision support. To overcome these problems, an architecture for preventive and operative crisis management is proposed. The architecture is based on models for command and control, but also for risk analysis.

  6. Modeling operational risks of the nuclear industry with Bayesian networks

    International Nuclear Information System (INIS)

    Wieland, Patricia; Lustosa, Leonardo J.

    2009-01-01

    Basically, planning a new industrial plant requires information on industrial management, regulations, site selection, the definition of initial and planned capacity, and the estimation of potential demand. However, this is far from enough to assure the success of an industrial enterprise. Unexpected and extremely damaging events may occur that deviate from the original plan. The so-called operational risks lie not only in system, equipment, process or human (technical or managerial) failures. They also lie in intentional events such as fraud and sabotage, in extreme events like terrorist attacks or radiological accidents, and even in public reaction to perceived environmental or future-generation impacts. For the nuclear industry, it is a challenge to identify and assess the operational risks and their various sources. Early identification of operational risks can help in preparing contingency plans, in delaying the decision to invest, or in approving a project that can, at an extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of internal data, which are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example, fuzzy logic, neural networks, system dynamics or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility of including expert opinions and variables of interest, structuring the model via causal dependencies among these variables, and specifying subjective prior and conditional probability distributions at each step or network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian networks approach to modeling those risks. (author)
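
The Bayesian-network approach can be illustrated with a tiny two-parent network evaluated by exact enumeration. The structure and every probability below are invented for illustration; a real operational-risk model would elicit these from experts and data.

```python
# Toy Bayesian network: operational loss with two parent causes.
# All probabilities are illustrative assumptions, not elicited values.
P_fault = 0.05        # P(technical fault)
P_intent = 0.01       # P(intentional event, e.g. fraud or sabotage)
# Conditional probability of an operational loss given the two parents:
P_loss = {(True, True): 0.99, (True, False): 0.60,
          (False, True): 0.80, (False, False): 0.01}

def marginal_loss():
    """P(loss) by exact enumeration over the parent states."""
    total = 0.0
    for fault in (True, False):
        for intent in (True, False):
            p_parents = ((P_fault if fault else 1 - P_fault) *
                         (P_intent if intent else 1 - P_intent))
            total += p_parents * P_loss[(fault, intent)]
    return total

print(round(marginal_loss(), 4))  # 0.0472
```

Expert opinion enters exactly here: each conditional table entry can be a subjective probability, and evidence (e.g., "a fault was observed") is incorporated by conditioning instead of summing over that parent.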

  7. Modeling operational risks of the nuclear industry with Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Wieland, Patricia [Pontificia Univ. Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Industrial; Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)], e-mail: pwieland@cnen.gov.br; Lustosa, Leonardo J. [Pontificia Univ. Catolica do Rio de Janeiro (PUC-Rio), RJ (Brazil). Dept. de Engenharia Industrial], e-mail: ljl@puc-rio.br

    2009-07-01

    Basically, planning a new industrial plant requires information on industrial management, regulations, site selection, the definition of initial and planned capacity, and the estimation of potential demand. However, this is far from enough to assure the success of an industrial enterprise. Unexpected and extremely damaging events may occur that deviate from the original plan. The so-called operational risks lie not only in system, equipment, process or human (technical or managerial) failures. They also lie in intentional events such as fraud and sabotage, in extreme events like terrorist attacks or radiological accidents, and even in public reaction to perceived environmental or future-generation impacts. For the nuclear industry, it is a challenge to identify and assess the operational risks and their various sources. Early identification of operational risks can help in preparing contingency plans, in delaying the decision to invest, or in approving a project that can, at an extreme, affect the public perception of nuclear energy. A major problem in modeling operational risk losses is the lack of internal data, which are essential, for example, to apply the loss distribution approach. As an alternative, methods that consider qualitative and subjective information can be applied, for example, fuzzy logic, neural networks, system dynamics or Bayesian networks. An advantage of applying Bayesian networks to model operational risk is the possibility of including expert opinions and variables of interest, structuring the model via causal dependencies among these variables, and specifying subjective prior and conditional probability distributions at each step or network node. This paper suggests a classification of operational risks in industry and discusses the benefits and obstacles of the Bayesian networks approach to modeling those risks. (author)

  8. Modeling the Environmental Impact of Air Traffic Operations

    Science.gov (United States)

    Chen, Neil

    2011-01-01

    There is increased interest to understand and mitigate the impacts of air traffic on the climate, since greenhouse gases, nitrogen oxides, and contrails generated by air traffic can have adverse impacts on the climate. The models described in this presentation are useful for quantifying these impacts and for studying alternative environmentally aware operational concepts. These models have been developed by leveraging and building upon existing simulation and optimization techniques developed for the design of efficient traffic flow management strategies. Specific enhancements to the existing simulation and optimization techniques include new models that simulate aircraft fuel flow, emissions and contrails. To ensure that these new models are beneficial to the larger climate research community, the outputs of these new models are compatible with existing global climate modeling tools like the FAA's Aviation Environmental Design Tool.

  9. Analysis of the operational model of Colombian electronic billing

    Directory of Open Access Journals (Sweden)

    Sérgio Roberto da Silva

    2016-06-01

    Full Text Available Colombia has been one of the first countries to introduce an electronic billing process on a voluntary basis, moving from a traditional to a digital version. In this context, the article analyzes the electronic billing process implemented in Colombia and its advantages. The research is applied, qualitative, descriptive and documentary: the regulatory framework and the conceptualization of the model are identified; the process of adoption of electronic billing is analyzed; and finally the advantages and disadvantages of its implementation are examined. The findings indicate that the model applied in Colombia for issuing electronic bills, in the sending and receiving process, is not complex, but it requires a small adequate infrastructure and trained personnel to reach all sectors, especially the micro and small businesses that constitute the largest business network in the country.

  10. The Operational Planning Model of Transhipment Processes in the Port

    Directory of Open Access Journals (Sweden)

    Mia Jurjević

    2016-04-01

    Full Text Available Modelling of a traffic system refers to the efficiency of operations for establishing successful business performance by examining the possibilities for its improvement. The main purpose of each container terminal is to ensure continuity and dynamics of the flow of containers. The objective of this paper is to present a method for determining the amount of certain types of containers that can be transhipped at each berth, with the proper cargo handling, taking into account the minimum total costs of transhipment. The mathematical model of planning the transhipment and transportation of containers at the terminal is presented. The optimal solution, obtained with the method of linear programming, represents a plan for container deployment that will ensure an effective ongoing transhipment process, providing the lowest transhipment costs. The proposed model, tested in the port of Rijeka, should be the basis for making adequate business decisions in the operational planning of the container terminal.
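
The cost-minimising allocation idea can be sketched on a tiny integer instance. As an assumption for illustration, exhaustive search replaces the paper's linear programming, and the berths, container types, capacities and costs are all invented.

```python
import itertools

# Toy transhipment plan: assign boxes of each type to one of two berths so
# that berth capacities hold and total handling cost is minimal.
demand = {"reefer": 2, "standard": 3}        # boxes to tranship, by type
capacity = {"berth1": 3, "berth2": 3}        # boxes each berth can handle
cost = {("reefer", "berth1"): 4, ("reefer", "berth2"): 6,
        ("standard", "berth1"): 3, ("standard", "berth2"): 2}

def best_plan():
    best = (float("inf"), None)
    # alloc[t] = boxes of type t sent to berth1 (the rest go to berth2)
    ranges = [range(demand[t] + 1) for t in demand]
    for alloc in itertools.product(*ranges):
        load = {b: 0 for b in capacity}
        total = 0
        for t, to_b1 in zip(demand, alloc):
            load["berth1"] += to_b1
            load["berth2"] += demand[t] - to_b1
            total += to_b1 * cost[(t, "berth1")]
            total += (demand[t] - to_b1) * cost[(t, "berth2")]
        if all(load[b] <= capacity[b] for b in capacity) and total < best[0]:
            best = (total, dict(zip(demand, alloc)))
    return best

print(best_plan())  # (14, {'reefer': 2, 'standard': 0})
```

For realistic instance sizes the same objective and constraints would be handed to an LP/MIP solver rather than enumerated.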

  11. eWaterCycle: A global operational hydrological forecasting model

    Science.gov (United States)

    van de Giesen, Nick; Bierkens, Marc; Donchyts, Gennadii; Drost, Niels; Hut, Rolf; Sutanudjaja, Edwin

    2015-04-01

    Development of an operational hyper-resolution hydrological global model is a central goal of the eWaterCycle project (www.ewatercycle.org). This operational model includes ensemble forecasts (14 days) to predict water-related stress around the globe. Assimilation of near-real-time satellite data is part of the intended product that will be launched at EGU 2015. The challenges come from several directions. First, there are challenges that are mainly computer-science oriented but have direct practical hydrological implications; for example, we aim to make use as much as possible of existing standards and open-source software. Different parts of our system are coupled through the Basic Model Interface (BMI) developed in the framework of the Community Surface Dynamics Modeling System (CSDMS). The PCR-GLOBWB model, built by Utrecht University, is the basic hydrological model that is the engine of the eWaterCycle project. Re-engineering of parts of the software was needed for it to run efficiently in a High Performance Computing (HPC) environment, to interface using BMI, and to run on multiple compute nodes in parallel. The final aim is a spatial resolution of 1 km x 1 km, which is currently 10 km x 10 km. This high resolution is computationally not too demanding but very memory intensive. The memory bottleneck becomes especially apparent in data assimilation, for which we use OpenDA. OpenDA allows for different data assimilation techniques without the need to build them from scratch. We have developed a BMI adaptor for OpenDA, allowing OpenDA to use any BMI-compatible model. To circumvent the memory shortages that would result from standard applications of the Ensemble Kalman Filter, we have developed a variant that does not need to keep all ensemble members in working memory. At EGU, we will present this variant and how it fits well in HPC environments.
An important step in the eWaterCycle project was the coupling between the hydrological and
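
The Basic Model Interface coupling mentioned in the eWaterCycle record can be sketched as a minimal BMI-style wrapper. This is a toy linear reservoir standing in for PCR-GLOBWB (an assumption); only the initialize/update/get_value naming convention is borrowed from BMI, not its full signature set.

```python
# Minimal BMI-style model wrapper (toy linear reservoir, illustrative only).
# A framework like OpenDA can drive any model exposing this small interface.
class ToyLinearReservoir:
    def initialize(self, storage=100.0, k=0.1):
        self.storage = storage   # water storage, mm
        self.k = k               # recession coefficient, 1/day
        self.time = 0.0          # days

    def update(self, precip=0.0):
        """Advance one day: add precipitation, release linear outflow."""
        runoff = self.k * self.storage
        self.storage += precip - runoff
        self.time += 1.0
        return runoff

    def get_value(self, name):
        return {"storage": self.storage, "time": self.time}[name]

model = ToyLinearReservoir()
model.initialize(storage=100.0, k=0.1)
q = model.update(precip=0.0)
print(q, model.get_value("storage"))  # 10.0 90.0
```

Because the driver only calls the interface methods, the same assimilation code can swap the toy reservoir for a full hydrological model without changes.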

  12. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
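
The LASSO's sparsity-inducing behaviour described above can be shown with a pure-Python cyclic coordinate descent using soft-thresholding. This is an illustrative sketch on an invented four-sample dataset; practical work would use an optimized implementation such as scikit-learn's Lasso.

```python
# LASSO via cyclic coordinate descent with soft-thresholding (unpenalized
# least squares plus an L1 term with weight lam). Data are invented.
def soft_threshold(rho, lam):
    if rho < -lam:
        return rho + lam
    if rho > lam:
        return rho - lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual excluding j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                      for k in range(p) if k != j)) for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# y depends only on the first feature; LASSO should zero out the second.
X = [[1.0, 0.1], [2.0, -0.2], [3.0, 0.05], [4.0, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
beta = lasso_cd(X, y, lam=1.0)
print([round(b, 2) for b in beta])  # [1.97, 0.0]
```

The second coefficient lands exactly at zero because its correlation with the residual falls inside the soft-threshold dead zone; this exact-zero behaviour is what makes the LASSO a model-selection tool, not just a shrinkage method.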

  13. Towards operational modeling and forecasting of the Iberian shelves ecosystem.

    Directory of Open Access Journals (Sweden)

    Martinho Marta-Almeida

    Full Text Available There is a growing interest in physical and biogeochemical oceanic hindcasts and forecasts from a wide range of users and businesses. In this contribution we present an operational biogeochemical forecast system for the Portuguese and Galician oceanographic regions, where atmospheric, hydrodynamic and biogeochemical variables are integrated. The ocean model ROMS, with a horizontal resolution of 3 km, is forced by the atmospheric model WRF and includes a Nutrients-Phytoplankton-Zooplankton-Detritus biogeochemical module (NPZD). In addition to oceanographic variables, the system predicts the concentrations of nitrate, phytoplankton, zooplankton and detritus (mmol N m-3). Model results are compared against radar-derived currents and remotely sensed SST and chlorophyll. Quantitative skill assessment during a summer upwelling period shows that our modelling system adequately represents the surface circulation over the shelf, including the observed spatial variability and trends of temperature and chlorophyll concentration. Additionally, the skill assessment also shows some deficiencies, like the overestimation of the upwelling circulation and, consequently, of the duration and intensity of the phytoplankton blooms. These and other departures from the observations are discussed, their origins identified and future improvements suggested. The forecast system is the first of its kind in the region and provides free online distribution of model input and output, as well as comparisons of model results with satellite imagery for qualitative operational assessment of model skill.
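
The NPZD module's nitrogen cycle can be sketched as a closed four-box model advanced with explicit Euler steps. All rate constants are invented for illustration and the cycle is closed (no sinking or mixing), unlike the full ROMS module; the useful check is that total nitrogen is conserved.

```python
# Toy closed NPZD step: nutrient (n), phytoplankton (p), zooplankton (z),
# detritus (d), all in mmol N m^-3. Rate constants are illustrative only.
def npzd_step(n, p, z, d, dt=0.1):
    uptake = 0.5 * n * p          # phytoplankton growth on nutrient
    grazing = 0.3 * p * z         # zooplankton grazing on phytoplankton
    p_mort = 0.05 * p             # phytoplankton mortality -> detritus
    z_mort = 0.05 * z             # zooplankton mortality -> detritus
    remin = 0.1 * d               # remineralisation of detritus -> nutrient
    n += dt * (remin - uptake)
    p += dt * (uptake - grazing - p_mort)
    z += dt * (grazing - z_mort)
    d += dt * (p_mort + z_mort - remin)
    return n, p, z, d

state = (4.0, 1.0, 0.5, 0.5)      # initial pools summing to 6.0 mmol N m^-3
for _ in range(100):
    state = npzd_step(*state)
print(round(sum(state), 6))       # total nitrogen stays 6.0
```

Every source term in one pool appears as a sink in another, so the Euler step conserves total nitrogen exactly up to floating-point rounding, a standard sanity check for biogeochemical code.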

  14. Ethical issues in engineering models: an operations researcher's reflections.

    Science.gov (United States)

    Kleijnen, J

    2011-09-01

    This article starts with an overview of the author's personal involvement--as an Operations Research consultant--in several engineering case-studies that may raise ethical questions; e.g., case-studies on nuclear waste, water management, sustainable ecology, military tactics, and animal welfare. All these case studies employ computer simulation models. In general, models are meant to solve practical problems, which may have ethical implications for the various stakeholders; namely, the modelers, the clients, and the public at large. The article further presents an overview of codes of ethics in a variety of disciplines. It discusses the role of mathematical models, focusing on the validation of these models' assumptions. Documentation of these model assumptions needs special attention. Some ethical norms and values may be quantified through the model's multiple performance measures, which might be optimized. The uncertainty about the validity of the model leads to risk or uncertainty analysis and to a search for robust models. Ethical questions may be pressing in military models, including war games. However, computer games and the related experimental economics may also provide a special tool to study ethical issues. Finally, the article briefly discusses whistleblowing. Its many references to publications and websites enable further study of ethical issues in modeling.

  15. A Stochastic Operational Planning Model for Smart Power Systems

    Directory of Open Access Journals (Sweden)

    Sh. Jadid

    2014-12-01

    Full Text Available Smart Grids are the result of utilizing novel technologies, such as distributed energy resources and communication technologies, in the power system to compensate for some of its defects. Various power resources provide benefits for the operation domain; however, the power system operator needs a powerful methodology to manage them. Renewable resources and load add uncertainty to the problem, so the independent system operator should use a stochastic method to manage them. A stochastic unit commitment model is presented in this paper to schedule various power resources such as distributed generation units, conventional thermal generation units, wind and PV farms, and demand response resources. Demand response resources, interruptible loads, distributed generation units, and conventional thermal generation units are used to provide the reserve required to compensate for the stochastic nature of the various resources and loads. In the presented model, resources connected to the distribution network can participate in the wholesale market through aggregators. Moreover, a novel three-program model which can be used by aggregators is presented in this article. Loads and distributed generation can contract with aggregators through these programs. A three-bus test system and the IEEE RTS are used to illustrate the usefulness of the presented model. The results show that the ISO can manage the system effectively by using this model.
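A toy sketch of the scenario-based idea described above (all units, costs, and wind scenarios are invented, and exhaustive enumeration stands in for a real stochastic unit-commitment solver):

```python
import itertools

# Minimize expected cost = fixed commitment cost + expected fuel cost,
# subject to meeting residual demand (load minus wind) in every scenario.
units = [  # (name, fixed cost if committed, marginal cost $/MWh, capacity MW)
    ("coal", 500.0, 20.0, 100.0),
    ("gas",  100.0, 45.0,  80.0),
]
load = 120.0
wind_scenarios = [(0.5, 40.0), (0.5, 10.0)]  # (probability, wind MW)

best = None
for on in itertools.product([0, 1], repeat=len(units)):
    fixed = sum(u[1] for u, f in zip(units, on) if f)
    expected_fuel = 0.0
    feasible = True
    for prob, wind in wind_scenarios:
        residual = max(load - wind, 0.0)
        # dispatch committed units in merit (cheapest-first) order
        cost = 0.0
        for name, fc, mc, cap in sorted(
                (u for u, f in zip(units, on) if f), key=lambda u: u[2]):
            g = min(cap, residual)
            cost += mc * g
            residual -= g
        if residual > 1e-9:          # demand unmet in this scenario
            feasible = False
            break
        expected_fuel += prob * cost
    if feasible:
        total = fixed + expected_fuel
        if best is None or total < best[0]:
            best = (total, on)

print(best)  # -> (2625.0, (1, 1)): only committing both units covers low wind
```

Committing the cheap unit alone is infeasible in the low-wind scenario, so the reserve requirement forces both units on, which is exactly the role the abstract assigns to backup resources.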

  16. Operations and support cost modeling using Markov chains

    Science.gov (United States)

    Unal, Resit

    1989-01-01

    Systems for future missions will be selected with life cycle costs (LCC) as a primary evaluation criterion. This reflects the current realization that only systems which are considered affordable will be built in the future due to national budget constraints. Such an environment calls for innovative cost modeling techniques which address all of the phases a space system goes through during its life cycle, namely: design and development, fabrication, operations and support, and retirement. A significant portion of the LCC for reusable systems is generated during the operations and support (OS) phase. Typically, OS costs can account for 60 to 80 percent of the total LCC. Clearly, OS costs are wholly determined, or at least strongly influenced, by decisions made during the design and development phases of the project. As a result, OS costs need to be considered and estimated early in the conceptual phase. To be effective, an OS cost estimating model needs to account for actual instead of ideal processes by associating cost elements with probabilities. One approach that may be suitable for OS cost modeling is the use of the Markov chain process. Markov chains are an important method of probabilistic analysis for operations research analysts, but they are rarely used for life cycle cost analysis. This research effort evaluates the use of Markov chains in LCC analysis by developing an OS cost model for a hypothetical reusable space transportation vehicle (HSTV) and suggests further uses of the Markov chain process as a design-aid tool.
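The Markov-chain approach can be sketched with a small absorbing chain over hypothetical turnaround states: the fundamental matrix N = (I - Q)^-1 gives the expected number of visits to each transient state, which then prices the OS phase. The states, transition probabilities, and costs below are invented, not from the paper:

```python
import numpy as np

states = ["inspection", "repair", "refurbish", "launch-ready"]
P = np.array([             # P[i, j] = probability of moving from state i to j
    [0.0, 0.3, 0.1, 0.6],  # inspection: may reveal repair/refurbish needs
    [0.0, 0.0, 0.2, 0.8],  # repair
    [0.0, 0.0, 0.0, 1.0],  # refurbish
    [0.0, 0.0, 0.0, 1.0],  # launch-ready (absorbing)
])
cost = np.array([10.0, 120.0, 300.0, 0.0])  # cost per state visit ($k)

# Expected visits to each transient state, starting from "inspection":
# N = (I - Q)^-1, where Q is the transient-to-transient block of P.
Q = P[:3, :3]
N = np.linalg.inv(np.eye(3) - Q)
expected_visits = N[0]                    # row for start state "inspection"
expected_cost = expected_visits @ cost[:3]
print(round(expected_cost, 2))            # -> 94.0 ($k per turnaround)
```

This captures the "actual instead of ideal processes" point: the chance of a rework loop (repair, refurbish) enters the cost estimate through the visit probabilities rather than through a fixed task list.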

  17. Application of online modeling to the operation of SLC

    International Nuclear Information System (INIS)

    Woodley, M.D.; Sanchez-Chopitea, L.; Shoaee, H.

    1987-01-01

    Online computer models of first order beam optics have been developed for the commissioning, control and operation of the entire SLC including Damping Rings, Linac, Positron Return Line and Collider Arcs. A generalized online environment utilizing these models provides the capability for interactive selection of a desired optics configuration and for the study of its properties. Automated procedures have been developed which calculate and load beamline component set-points and which can scale magnet strengths to achieve desired beam properties for any Linac energy profile. Graphic displays facilitate comparison of design, desired and actual optical characteristics of the beamlines. Measured beam properties, such as beam emittance and dispersion, can be incorporated interactively into the models and used for beam matching and optimization of injection and extraction efficiencies and beam transmissions. The online optics modeling facility also serves as the foundation for many model-driven applications such as autosteering, calculation of beam launch parameters, emittance measurement and dispersion correction

  18. A model technology transfer program for independent operators

    Energy Technology Data Exchange (ETDEWEB)

    Schoeling, L.G.

    1996-08-01

    In August 1992, the Energy Research Center (ERC) at the University of Kansas was awarded a contract by the US Department of Energy (DOE) to develop a technology transfer regional model. This report describes the development and testing of the Kansas Technology Transfer Model (KTTM) which is to be utilized as a regional model for the development of other technology transfer programs for independent operators throughout oil-producing regions in the US. It describes the linkage of the regional model with a proposed national technology transfer plan, an evaluation technique for improving and assessing the model, and the methodology which makes it adaptable on a regional basis. The report also describes management concepts helpful in managing a technology transfer program.

  19. Application of online modeling to the operation of SLC

    International Nuclear Information System (INIS)

    Woodley, M.D.; Sanchez-Chopitea, L.; Shoaee, H.

    1987-02-01

    Online computer models of first order beam optics have been developed for the commissioning, control and operation of the entire SLC including Damping Rings, Linac, Positron Return Line and Collider Arcs. A generalized online environment utilizing these models provides the capability for interactive selection of a desired optics configuration and for the study of its properties. Automated procedures have been developed which calculate and load beamline component set-points and which can scale magnet strengths to achieve desired beam properties for any Linac energy profile. Graphic displays facilitate comparison of design, desired and actual optical characteristics of the beamlines. Measured beam properties, such as beam emittance and dispersion, can be incorporated interactively into the models and used for beamline matching and optimization of injection and extraction efficiencies and beam transmission. The online optics modeling facility also serves as the foundation for many model-driven applications such as autosteering, calculation of beam launch parameters, emittance measurement and dispersion correction

  20. Optimizing Biorefinery Design and Operations via Linear Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    Talmadge, Michael; Batan, Liaw; Lamers, Patrick; Hartley, Damon; Biddy, Mary; Tao, Ling; Tan, Eric

    2017-03-28

    The ability to assess and optimize economics of biomass resource utilization for the production of fuels, chemicals and power is essential for the ultimate success of a bioenergy industry. The team of authors, consisting of members from the National Renewable Energy Laboratory (NREL) and the Idaho National Laboratory (INL), has developed simple biorefinery linear programming (LP) models to enable the optimization of theoretical or existing biorefineries. The goal of this analysis is to demonstrate how such models can benefit the developing biorefining industry. It focuses on a theoretical multi-pathway, thermochemical biorefinery configuration and demonstrates how the biorefinery can use LP models for operations planning and optimization in comparable ways to the petroleum refining industry. Using LP modeling tools developed under U.S. Department of Energy's Bioenergy Technologies Office (DOE-BETO) funded efforts, the authors investigate optimization challenges for the theoretical biorefineries such as (1) optimal feedstock slate based on available biomass and prices, (2) breakeven price analysis for available feedstocks, (3) impact analysis for changes in feedstock costs and product prices, (4) optimal biorefinery operations during unit shutdowns / turnarounds, and (5) incentives for increased processing capacity. These biorefinery examples are comparable to crude oil purchasing and operational optimization studies that petroleum refiners perform routinely using LPs and other optimization models. It is important to note that the analyses presented in this article are strictly theoretical and they are not based on current energy market prices. The pricing structure assigned for this demonstrative analysis is consistent with $4 per gallon gasoline, which clearly assumes an economic environment that would favor the construction and operation of biorefineries. The analysis approach and examples provide valuable insights into the usefulness of analysis tools for
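A minimal sketch of the feedstock-slate question, assuming invented prices and yields; because the toy constraints are just availability bounds plus one shared capacity limit, a greedy continuous-knapsack fill is optimal here and stands in for the full LP solver:

```python
# maximize profit = sum_f (fuel_yield_f * fuel_price - cost_f) * tons_f
# s.t. tons_f <= availability_f, sum(tons) <= plant capacity, tons_f >= 0
fuel_price = 3.0              # $/gal (illustrative, not market data)
feeds = {                     # name: (cost $/ton, yield gal/ton, avail tons)
    "pine residue": (60.0, 80.0, 500.0),
    "corn stover":  (80.0, 70.0, 800.0),
}
capacity = 1000.0

# margin per ton; setting it to zero gives the breakeven feedstock cost
margins = {f: y * fuel_price - c for f, (c, y, a) in feeds.items()}

slate, room = {}, capacity
for f in sorted(feeds, key=lambda f: margins[f], reverse=True):
    take = min(feeds[f][2], room) if margins[f] > 0 else 0.0
    slate[f] = take
    room -= take

profit = sum(margins[f] * slate[f] for f in feeds)
print(slate, round(profit, 1))
```

The same structure supports the abstract's breakeven analysis: the price at which a feedstock's margin reaches zero (yield times fuel price) is the most the biorefinery can pay for it.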

  1. Development of an inpatient operational pharmacy productivity model.

    Science.gov (United States)

    Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M

    2015-02-01

    An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models—one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
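A minimal sketch of the WV calculation; the complexity weights, order counts, and staffed hours below are invented, since the article's actual time standards are not reproduced here:

```python
# Each verified order is weighted by its class's medication complexity
# weight; weighted workload divided by worked hours gives productivity.
weights = {"antibiotic": 1.0, "chemotherapy": 3.5, "tpn": 2.8, "prn": 0.5}
verified = {"antibiotic": 420, "chemotherapy": 35, "tpn": 18, "prn": 610}
pharmacist_hours = 80.0

wv = sum(weights[c] * verified[c] for c in verified)   # weighted verifications
productivity = wv / pharmacist_hours                   # WV per worked hour
print(round(wv, 1), round(productivity, 2))            # -> 897.9 11.22
```

The weighting is what distinguishes the WV model from a raw order count: a period heavy in chemotherapy orders registers as more work per verification than one dominated by PRN orders.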

  2. Transparent settlement model between mobile network operator and mobile voice over Internet protocol operator

    Directory of Open Access Journals (Sweden)

    Luzango Pangani Mfupe

    2014-12-01

    Full Text Available Advances in technology have enabled a network-less mobile voice over internet protocol operator (MVoIPO) to offer data services (i.e., voice, text and video) to a mobile network operator's (MNO's) subscribers through an application enabled on the subscriber's user equipment, using the MNO's packet-based cellular network infrastructure. However, this raises the problem of how to handle interconnection settlements between the two types of operators, particularly how to deal with users who now have the ability to make ‘free’ on-net MVoIP calls among themselves within the MNO's network. This study proposes a service level agreement-based transparent settlement model (TSM) to solve this problem. The model is based on concepts of achievement and reward, not violation and punishment. The TSM calculates the MVoIPO's throughput distribution by monitoring the variations of peaks and troughs at the edge of a network. This facilitates the determination of conformance and non-conformance levels relative to the pre-set throughput thresholds and, subsequently, the issuing of compensation to the MVoIPO by the MNO as a result of generating an economically acceptable volume of data traffic.

  3. Standard model baryogenesis through four-fermion operators in braneworlds

    International Nuclear Information System (INIS)

    Chung, Daniel J.H.; Dent, Thomas

    2002-01-01

    We study a new baryogenesis scenario in a class of braneworld models with low fundamental scale, which typically have difficulty with baryogenesis. The scenario is characterized by its minimal nature: the field content is that of the standard model and all interactions consistent with the gauge symmetry are admitted. Baryon number is violated via a dimension-6 proton decay operator, suppressed today by the mechanism of quark-lepton separation in extra dimensions; we assume that this operator was unsuppressed in the early Universe due to a time-dependent quark-lepton separation. The source of CP violation is the CKM matrix, in combination with the dimension-6 operators. We find that almost independently of cosmology, sufficient baryogenesis is nearly impossible in such a scenario if the fundamental scale is above 100 TeV, as required by an unsuppressed neutron-antineutron oscillation operator. The only exception producing sufficient baryon asymmetry is a scenario involving out-of-equilibrium c quarks interacting with equilibrium b quarks

  4. AN OPERATIONAL MANAGEMENT MODEL FOR A COAL MINING PRODUCTION UNIT

    Directory of Open Access Journals (Sweden)

    R. Visser

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: The coal mining industry faces increased pressure for higher quality coal at lower cost and increased volumes. To satisfy these requirements the industry needs technically skilled first line supervisors with operational management skills. Most first line supervisors possess the necessary technical, but not the required operational management skills. Various operational management philosophies, describing world-class operational management practices exist; however, it is not possible to implement these philosophies as-is in a mining environment due to the various differences between manufacturing and mining. The solution is to provide an operational management model, adapted from these philosophies, to first line supervisors in the coal mining industry.

    AFRIKAANSE OPSOMMING (translated): The coal mining industry experiences growing market pressure for higher-quality coal, lower costs and increased volumes. To meet this need, mines require technically trained first-line supervisors with operational management skills. Unfortunately, most supervisors possess the necessary technical knowledge but not the required operational management skills. Various operational management philosophies exist that describe world-class operational management practices. However, these philosophies cannot be implemented as-is in the mining industry because of the differences between manufacturing and mining. The solution is to provide first-line supervisors in the coal industry with an operational management model based on these philosophies.

  5. Analysis of Operating Principles with S-system Models

    Science.gov (United States)

    Lee, Yun; Chen, Po-Wei; Voit, Eberhard O.

    2011-01-01

    Operating principles address general questions regarding the response dynamics of biological systems as we observe or hypothesize them, in comparison to a priori equally valid alternatives. In analogy to design principles, the question arises: Why are some operating strategies encountered more frequently than others and in what sense might they be superior? It is at this point impossible to study operating principles in complete generality, but the work here discusses the important situation where a biological system must shift operation from its normal steady state to a new steady state. This situation is quite common and includes many stress responses. We present two distinct methods for determining different solutions to this task of achieving a new target steady state. Both methods utilize the property of S-system models within Biochemical Systems Theory (BST) that steady states can be explicitly represented as systems of linear algebraic equations. The first method uses matrix inversion, a pseudo-inverse, or regression to characterize the entire admissible solution space. Operations on the basis of the solution space permit modest alterations of the transients toward the target steady state. The second method uses standard or mixed integer linear programming to determine admissible solutions that satisfy criteria of functional effectiveness, which are specified beforehand. As an illustration, we use both methods to characterize alternative response patterns of yeast subjected to heat stress, and compare them with observations from the literature. PMID:21377479
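The property the abstract exploits can be sketched directly: taking logarithms turns the S-system power-law balance a_i * prod_j X_j^g_ij = b_i * prod_j X_j^h_ij into the linear system (G - H) y = ln b - ln a with y = ln X. The 2-variable system below is invented for illustration:

```python
import numpy as np

# rate constants and kinetic orders of a toy 2-variable S-system
a = np.array([2.0, 1.0])          # production rate constants
b = np.array([1.0, 4.0])          # degradation rate constants
G = np.array([[0.0, 0.5],         # production kinetic orders g_ij
              [1.0, 0.0]])
H = np.array([[1.0, 0.0],         # degradation kinetic orders h_ij
              [0.0, 1.0]])

# steady state: (G - H) y = ln(b) - ln(a), then X = exp(y)
y = np.linalg.solve(G - H, np.log(b) - np.log(a))
X = np.exp(y)
print(np.round(X, 4))             # -> [1.   0.25]

# check: production equals degradation at the solution
prod = a * np.prod(X ** G, axis=1)
degr = b * np.prod(X ** H, axis=1)
```

Because the steady state is a linear system in log space, the admissible-solution-space and linear-programming methods the abstract mentions become standard linear-algebra and LP exercises.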

  6. Twist operator correlation functions in O(n) loop models

    International Nuclear Information System (INIS)

    Simmons, Jacob J H; Cardy, John

    2009-01-01

    Using conformal field theoretic methods we calculate correlation functions of geometric observables in the loop representation of the O(n) model at the critical point. We focus on correlation functions containing twist operators, combining these with anchored loops, boundaries with SLE processes and with double SLE processes. We focus further upon n = 0, representing self-avoiding loops, which corresponds to a logarithmic conformal field theory (LCFT) with c = 0. In this limit the twist operator plays the role of a 0-weight indicator operator, which we verify by comparison with known examples. Using the additional conditions imposed by the twist operator null states, we derive a new explicit result for the probabilities that an SLE 8/3 winds in various ways about two points in the upper half-plane, e.g. that the SLE passes to the left of both points. The collection of c = 0 logarithmic CFT operators that we use deriving the winding probabilities is novel, highlighting a potential incompatibility caused by the presence of two distinct logarithmic partners to the stress tensor within the theory. We argue that both partners do appear in the theory, one in the bulk and one on the boundary and that the incompatibility is resolved by restrictive bulk-boundary fusion rules

  7. Optimization of Operations Resources via Discrete Event Simulation Modeling

    Science.gov (United States)

    Joshi, B.; Morris, D.; White, N.; Unal, R.

    1996-01-01

    The resource levels required for operation and support of reusable launch vehicles are typically defined through discrete event simulation modeling. Minimizing these resources constitutes an optimization problem involving discrete variables and simulation. Conventional approaches to solve such optimization problems involving integer valued decision variables are the pattern search and statistical methods. However, in a simulation environment that is characterized by search spaces of unknown topology and stochastic measures, these optimization approaches often prove inadequate. In this paper, we have explored the applicability of genetic algorithms to the simulation domain. Genetic algorithms provide a robust search strategy that does not require continuity and differentiability of the problem domain. The genetic algorithm successfully minimized the operation and support activities for a space vehicle, through a discrete event simulation model. The practical issues associated with simulation optimization, such as stochastic variables and constraints, were also taken into consideration.
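A minimal genetic-algorithm sketch in the spirit described: a noisy toy cost function stands in for the discrete event simulation, and integer resource levels are the decision variables. All parameters and the cost function are invented:

```python
import random

random.seed(1)

def simulate_cost(x):
    """Noisy stand-in for one simulation run: resource cost plus a
    queueing-like delay penalty that falls as resource levels rise."""
    delay = sum(50.0 / (xi + 1) for xi in x)
    noise = random.gauss(0.0, 1.0)            # stochastic measure
    return 10.0 * sum(x) + 5.0 * delay + noise

def evaluate(x, reps=20):                     # average replications
    return sum(simulate_cost(x) for _ in range(reps)) / reps

def mutate(x):
    return tuple(max(0, min(10, xi + random.choice([-1, 0, 1]))) for xi in x)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# evolve integer staffing levels for three support activities
pop = [tuple(random.randint(0, 10) for _ in range(3)) for _ in range(20)]
for gen in range(30):
    pop.sort(key=evaluate)
    parents = pop[:10]                        # truncation selection
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    pop = parents + children

best = min(pop, key=evaluate)
print(best)
```

No gradient or continuity is used anywhere, which is the point the paper makes: the GA only ever asks the simulation for cost estimates, so it tolerates integer variables and stochastic responses.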

  8. Stochastic Modelling of Linear Programming Application to Brewing Operational Systems

    Directory of Open Access Journals (Sweden)

    Akanbi O.P.

    2014-07-01

    Full Text Available In systems where a large number of interrelated operations exist, a technically based operational mechanism is required to achieve full potential. An intuitive solution, which is common practice in most breweries, may not uncover the optimal solution, as there is hardly any guarantee that it satisfies the best policy. High foreign exchange costs are involved in the procurement of imported raw materials, which increases the cost of production and leads to the abandonment and poor utilization of available locally sourced raw materials. This study focuses on approaches which highlight the steps and mechanisms involved in optimizing the wort extract by the use of different types of adjuncts, and on formulating wort production models which are useful in proffering the expected solutions. Optimization techniques, the generalized models and an overview of typical brewing processes are considered.

  9. A practical guide for operational validation of discrete simulation models

    Directory of Open Access Journals (Sweden)

    Fabiano Leal

    2011-04-01

    Full Text Available As the number of simulation experiments increases, the necessity for validation and verification of these models demands special attention on the part of the simulation practitioners. By analyzing the current scientific literature, it is observed that the operational validation description presented in many papers does not agree on the importance assigned to this process or on its applied techniques, whether subjective or objective. With the expectation of orienting professionals, researchers and students in simulation, this article aims to elaborate a practical guide through the compilation of statistical techniques for the operational validation of discrete simulation models. Finally, the guide's applicability was evaluated by using two study objects, which represent two manufacturing cells, one from the automobile industry and the other from a Brazilian tech company. For each application, the guide identified distinct steps, due to the different aspects that characterize the analyzed distributions.

  10. Modelling of innovative SANEX process mal-operations

    International Nuclear Information System (INIS)

    McLachlan, F.; Taylor, R.; Whittaker, D.; Woodhead, D.; Geist, A.

    2016-01-01

    The innovative (i-) SANEX process for the separation of minor actinides from PUREX highly active raffinate is expected to employ a solvent phase comprising 0.2 M TODGA with 5 v/v% 1-octanol in an inert diluent. An initial extract / scrub section would be used to extract trivalent actinides and lanthanides from the feed whilst leaving other fission products in the aqueous phase, before the loaded solvent is contacted with a low acidity aqueous phase containing a sulphonated bis-triazinyl pyridine ligand (BTP) to effect a selective strip of the actinides, so yielding separate actinide (An) and lanthanide (Ln) product streams. This process has been demonstrated in lab scale trials at Juelich (FZJ). The SACSESS (Safety of Actinide Separation processes) project is focused on the evaluation and improvement of the safety of such future systems. A key element of this is the development of an understanding of the response of a process to upsets (mal-operations). It is only practical to study a small subset of possible mal-operations experimentally and consideration of the majority of mal-operations entails the use of a validated dynamic model of the process. Distribution algorithms for HNO_3, Am, Cm and the lanthanides have been developed and incorporated into a dynamic flowsheet model that has, so far, been configured to correspond to the extract-scrub section of the i-SANEX flowsheet trial undertaken at FZJ in 2013. Comparison is made between the steady state model results and experimental results. Results from modelling of low acidity and high temperature mal-operations are presented. (authors)

  11. Modelling of Reservoir Operations using Fuzzy Logic and ANNs

    Science.gov (United States)

    Van De Giesen, N.; Coerver, B.; Rutten, M.

    2015-12-01

    Today, almost 40,000 large reservoirs, containing approximately 6,000 km3 of water and inundating an area of almost 400,000 km2, can be found on earth. Since these reservoirs have a storage capacity of almost one-sixth of the global annual river discharge, they have a large impact on the timing, volume and peaks of river discharges. Global Hydrological Models (GHM) are thus significantly influenced by these anthropogenic changes in river flows. We developed a parametrically parsimonious method to extract operational rules based on historical reservoir storage and inflow time-series. Managing a reservoir is an imprecise and vague undertaking. Operators always face uncertainties about inflows, evaporation, seepage losses and various water demands to be met. They often base their decisions on experience and on available information, such as reservoir storage and the previous period's inflow. We modeled this decision-making process through a combination of fuzzy logic and artificial neural networks in an Adaptive-Network-based Fuzzy Inference System (ANFIS). In a sensitivity analysis, we compared results for reservoirs in Vietnam, Central Asia and the USA. ANFIS can indeed capture reservoir operations adequately when fed with a historical monthly time-series of inflows and storage. It was shown that using ANFIS, operational rules of existing reservoirs can be derived without much prior knowledge about the reservoirs. Their validity was tested by comparing actual and simulated releases with each other. For the eleven reservoirs modelled, the normalised outflow was predicted with an MSE of 0.002 to 0.044. The rules can be incorporated into GHMs. After a network for a specific reservoir has been trained, the inflow calculated by the hydrological model can be combined with the release and initial storage to calculate the storage for the next time-step using a mass balance. Subsequently, the release can be predicted one time-step ahead using the inflow and storage.
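Once a trained network predicts the release, the coupling into a GHM reduces to the mass balance the abstract states. A minimal sketch with invented numbers, clipping storage at empty and at capacity (excess counted as spill):

```python
# S[t+1] = S[t] + inflow[t] - release[t], clipped to [0, S_max]
def step_storage(storage, inflow, release, s_max):
    spill = 0.0
    storage = storage + inflow - release
    if storage > s_max:              # reservoir full: excess spills
        spill = storage - s_max
        storage = s_max
    storage = max(storage, 0.0)      # clamp at empty
    return storage, spill

s, s_max = 500.0, 800.0              # hypothetical volumes, hm^3
for inflow, release in [(120.0, 80.0), (300.0, 90.0), (60.0, 100.0)]:
    s, spill = step_storage(s, inflow, release, s_max)
print(round(s, 1))                   # -> 710.0
```

In the coupled setting, the GHM supplies the inflow each time-step, the trained network supplies the release, and this balance carries the storage state forward for the next prediction.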

  12. Fires involving radioactive materials : transference model; operative recommendations

    International Nuclear Information System (INIS)

    Rodriguez, C.E.; Puntarulo, L.J.; Canibano, J.A.

    1988-01-01

    In all aspects related to the nuclear activity, the occurrence of an explosion, fire or burst type accident, with or without victims, is directly related to the characteristics of the site. The present work analyses the different parameters involved, describing a transference model and recommendations for evaluation and control of the radiological risk for firemen. Special emphasis is placed on the measurement of the variables existing in this kind of operations

  13. Data Envelopment Analysis (DEA) Model in Operation Management

    Science.gov (United States)

    Malik, Meilisa; Efendi, Syahril; Zarlis, Muhammad

    2018-01-01

    Quality management is an effective system in operations management that develops, maintains, and improves quality across groups of companies, allowing marketing, production, and service at the most economical level while ensuring customer satisfaction. Many companies practice quality management to improve their business performance. One form of performance measurement is the measurement of efficiency. One of the tools that can be used to assess the efficiency of company performance is Data Envelopment Analysis (DEA). The aim of this paper is to use Data Envelopment Analysis (DEA) models to assess the efficiency of quality management. This paper explains the CCR, BCC, and SBM models for assessing the efficiency of quality management.
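A minimal CCR-style sketch for the single-input, single-output special case, where each DMU's efficiency reduces to its output/input ratio relative to the best ratio in the sample; the general CCR, BCC, and SBM models with multiple inputs and outputs require solving a linear program per DMU. The plant data are invented:

```python
# hypothetical decision-making units: name -> (input: labor hours, output: units)
dmus = {
    "plant A": (100.0, 300.0),
    "plant B": (120.0, 480.0),
    "plant C": (90.0,  270.0),
}

ratios = {d: y / x for d, (x, y) in dmus.items()}
best = max(ratios.values())                      # efficient frontier ratio
efficiency = {d: r / best for d, r in ratios.items()}
print({d: round(e, 3) for d, e in efficiency.items()})
# -> {'plant A': 0.75, 'plant B': 1.0, 'plant C': 0.75}
```

A score of 1.0 places the DMU on the efficient frontier; a score of 0.75 says the unit could, in principle, produce the same output with 75% of its input if it operated like the best performer.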

  14. Conformal operator product expansion in the Yukawa model

    International Nuclear Information System (INIS)

    Prati, M.C.

    1983-01-01

    Conformal techniques are applied to the Yukawa model, as an example of a theory with spinor fields. The partial-wave analysis of the 4-point function of two scalars and two spinors in the channel phi psi → phi psi is written in terms of spinor tensor representations of the conformal group. Using this conformal expansion, the Bethe-Salpeter equation is diagonalized and reduced to algebraic relations among the partial waves. It is shown that in the γ5-invariant model, but not in the general case, it is possible to derive dynamically from the expansions of the 4-point function the vacuum operator product <phi psi>

  15. Modelling the basic error tendencies of human operators

    Energy Technology Data Exchange (ETDEWEB)

    Reason, J.

    1988-01-01

    The paper outlines the primary structural features of human cognition: a limited, serial workspace interacting with a parallel distributed knowledge base. It is argued that the essential computational features of human cognition - to be captured by an adequate operator model - reside in the mechanisms by which stored knowledge structures are selected and brought into play. Two such computational 'primitives' are identified: similarity-matching and frequency-gambling. These two retrieval heuristics, it is argued, shape both the overall character of human performance (i.e. its heavy reliance on pattern-matching) and its basic error tendencies ('strong-but-wrong' responses, confirmation, similarity and frequency biases, and cognitive 'lock-up'). The various features of human cognition are integrated with a dynamic operator model capable of being represented in software form. This computer model, when run repeatedly with a variety of problem configurations, should produce a distribution of behaviours which, in toto, simulate the general character of operator performance.

  16. Model validity and frequency band selection in operational modal analysis

    Science.gov (United States)

    Au, Siu-Kui

    2016-12-01

    Experimental modal analysis aims at identifying the modal properties (e.g., natural frequencies, damping ratios, mode shapes) of a structure using vibration measurements. Two basic questions are encountered when operating in the frequency domain: Is there a mode near a particular frequency? If so, how much spectral data near the frequency can be included for modal identification without incurring significant modeling error? For data with high signal-to-noise (s/n) ratios these questions can be addressed using empirical tools such as singular value spectrum. Otherwise they are generally open and can be challenging, e.g., for modes with low s/n ratios or close modes. In this work these questions are addressed using a Bayesian approach. The focus is on operational modal analysis, i.e., with 'output-only' ambient data, where identification uncertainty and modeling error can be significant and their control is most demanding. The approach leads to 'evidence ratios' quantifying the relative plausibility of competing sets of modeling assumptions. The latter involves modeling the 'what-if-not' situation, which is non-trivial but is resolved by systematic consideration of alternative models and using maximum entropy principle. Synthetic and field data are considered to investigate the behavior of evidence ratios and how they should be interpreted in practical applications.

  17. Communicating Sustainability: An Operational Model for Evaluating Corporate Websites

    Directory of Open Access Journals (Sweden)

    Alfonso Siano

    2016-09-01

    Full Text Available The interest in corporate sustainability has increased rapidly in recent years and has encouraged organizations to adopt appropriate digital communication strategies, in which the corporate website plays a key role. Despite this growing attention in both the academic and business communities, models for the analysis and evaluation of online sustainability communication have not been developed to date. This paper aims to develop an operational model to identify and assess the requirements of sustainability communication in corporate websites. It has been developed from a literature review on corporate sustainability and digital communication and the analysis of the websites of the organizations included in the “Global CSR RepTrak 2015” by the Reputation Institute. The model identifies the core dimensions of online sustainability communication (orientation, structure, ergonomics, content—OSEC), sub-dimensions, such as stakeholder engagement and governance tools, communication principles, and measurable items (e.g., presence of the materiality matrix, interactive graphs). A pilot study on the websites of the energy and utilities companies included in the Dow Jones Sustainability World Index 2015 confirms the applicability of the OSEC framework. Thus, the model can provide managers and digital communication consultants with an operational tool that is useful for developing an industry ranking and assessing the best practices. The model can also help practitioners to identify corrective actions in the critical areas of digital sustainability communication and avoid greenwashing.

  18. Stability of the matrix model in operator interpretation

    Directory of Open Access Journals (Sweden)

    Katsuta Sakai

    2017-12-01

    Full Text Available The IIB matrix model is one of the candidates for a nonperturbative formulation of string theory, and it is believed that the model contains gravitational degrees of freedom in some manner. In some preceding works, it was proposed that the matrix model describes curved space, where the matrices represent differential operators defined on a principal bundle. In this paper, we study the dynamics of the model in this interpretation, and point out the necessity of the principal bundle from the viewpoint of stability and diffeomorphism invariance. We also compute the one-loop correction, which yields a mass term for each field due to the principal bundle. We find that stability is not violated.

  19. Pyrometer model based on sensor physical structure and thermal operation

    International Nuclear Information System (INIS)

    Sebastian, Eduardo; Armiens, Carlos; Gomez-Elvira, Javier

    2010-01-01

    This paper proposes a new simplified thermal model for pyrometers, which takes into account both their internal and external physical structure and operation. The model is experimentally tested on the REMS GTS, an instrument for measuring ground temperature, which is part of the payload of the NASA MSL mission to Mars. The proposed model is based on an energy balance equation that represents the heat fluxes exchanged between sensor elements through radiation, conduction and convection. Despite being mathematically more complex than the more commonly used model, the proposed model makes it possible to design a methodology to compensate for the effects of sensor spatial thermal gradients. The paper includes a practical methodology for identifying the model constants, which is part of the GTS instrument calibration plan and uses a differential approach to avoid setup errors. Experimental results are reported both for the model identification methodology and for target temperature measurement performance after identification. The results demonstrate the good behaviour of the model, with errors below 0.15 deg. C in target temperature estimates.

  20. Prediction models for solitary pulmonary nodules based on curvelet textural features and clinical parameters.

    Science.gov (United States)

    Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua

    2013-01-01

    Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs), which are hard to diagnose with the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistic regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied. The results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. All in all, the highest accuracy rate was achieved with SVM when the dimensionality-reduced curvelet-based textural features were combined with the clinical predictors. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
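
    The modelling comparison described above can be sketched with standard tools. The data below are a synthetic stand-in for the curvelet textural features, and the hyperparameters are illustrative rather than the study's; an L1-penalised logistic regression plays the role of the LASSO classifier (scikit-learn is assumed to be available).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for curvelet-based textural features of benign/malignant SPNs.
X, y = make_classification(n_samples=200, n_features=60, n_informative=12,
                           random_state=0)

# PCA reduces the features to 12 principal components, as in the study.
svm = make_pipeline(StandardScaler(), PCA(n_components=12), SVC())
# L1-penalised logistic regression stands in for the LASSO selection step.
lasso = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.5))

for name, model in [("SVM+PCA", svm), ("LASSO", lasso)]:
    acc = cross_val_score(model, X, y, cv=10).mean()  # 10-fold cross validation
    print(f"{name}: {acc:.3f}")
```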

  1. Yanqing solar field: Dynamic optical model and operational safety analysis

    International Nuclear Information System (INIS)

    Zhao, Dongming; Wang, Zhifeng; Xu, Ershu; Zhu, Lingzhi; Lei, Dongqiang; Xu, Li; Yuan, Guofeng

    2017-01-01

    Highlights: • A dynamic optical model of the Yanqing solar field was built. • Tracking angle characteristics were studied with different SCA layouts and time. • The average energy flux was simulated across four clear days. • Influences of defocus angles on energy flux were analyzed. - Abstract: A dynamic optical model was established for the Yanqing solar field at the parabolic trough solar thermal power plant and a simulation was conducted on four separate days of clear weather (March 3rd, June 2nd, September 25th, December 17th). The solar collector assemblies (SCAs) were arranged in North-South and East-West layouts. The model consisted of the following modules: DNI, SCA operational, and SCA optical. The tracking angle characteristics were analyzed and the results showed that the East-West layout of the tracking system was the most viable. The average energy flux was simulated for a given time period and different SCA layouts, yielding an average flux of 6 kW/m², which was then used as the design and operational standard of the Yanqing parabolic trough plant. The mass flow of the North-South layout was relatively stable. The influences of the defocus angles on both the average energy flux and the circumferential flux distribution were also studied. The results provided a theoretical basis for the following components: solar field design, mass flow control of the heat transfer fluid, design and operation of the tracking system, operational safety of SCAs, and power production prediction in the Yanqing 1 MW parabolic trough plant.

  2. Dynamic occupational risk model for offshore operations in harsh environments

    International Nuclear Information System (INIS)

    Song, Guozheng; Khan, Faisal; Wang, Hangzhou; Leighton, Shelly; Yuan, Zhi; Liu, Hanwen

    2016-01-01

    The expansion of offshore oil exploitation into remote areas (e.g., the Arctic) with harsh environments has significantly increased occupational risks. Among occupational accidents, slips, trips and falls from height (STFs) account for a significant portion. Thus, a dynamic risk assessment of these three main occupational accidents is meaningful for decreasing offshore occupational risks. Bow-tie models (BTs) were established in this study for the risk analysis of STFs considering extreme environmental factors. To relax the limitations of BTs, Bayesian networks (BNs) were developed based on the BTs to dynamically assess the risks of STFs. The occurrence and consequence probabilities of STFs were calculated using BTs and BNs, respectively, and the obtained probabilities verified the soundness and advantages of the BN approach. Furthermore, probability adaptation for STFs was accomplished in a specific scenario with BNs. Finally, posterior probabilities of basic events were obtained through diagnostic analysis, and critical basic events were analyzed based on their posterior likelihood of causing occupational accidents. The highlight of this study is the systematic analysis of STF accidents for offshore operations and the dynamic assessment of their risks considering harsh environmental factors. This study can guide the allocation of prevention resources and benefit the safety management of offshore operations. - Highlights: • A novel dynamic risk model for occupational accidents. • First time consideration of harsh environment in occupational accident modeling. • A Bayesian network based model for risk management strategies.
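
    The diagnostic analysis described above, in which the network is conditioned on an observed accident to obtain posterior probabilities of the basic events, can be illustrated on a deliberately tiny network evaluated by full enumeration. All probabilities below are invented for illustration and are not the study's data.

```python
from itertools import product

P_harsh = 0.3                            # P(harsh environment)
P_slip_given = {True: 0.6, False: 0.1}   # P(slippery surface | harsh env)
P_fall_given = {True: 0.2, False: 0.02}  # P(fall | slippery surface)

def joint(harsh, slippery, fall):
    """Joint probability of the three binary nodes harsh -> slippery -> fall."""
    p = P_harsh if harsh else 1 - P_harsh
    p *= P_slip_given[harsh] if slippery else 1 - P_slip_given[harsh]
    p *= P_fall_given[slippery] if fall else 1 - P_fall_given[slippery]
    return p

# Prior probability of a fall, marginalising over the other nodes.
prior_fall = sum(joint(h, s, True) for h, s in product([True, False], repeat=2))
# Diagnostic step: posterior of the basic event given that a fall occurred.
post_slip = sum(joint(h, True, True) for h in [True, False]) / prior_fall
print(f"P(fall) = {prior_fall:.4f}, P(slippery | fall) = {post_slip:.3f}")
```

    Conditioning on the accident raises the probability of the slippery-surface basic event above its prior, which is exactly how critical basic events are ranked in such a diagnostic analysis.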

  3. Island operation - modelling of a small hydro power system

    Energy Technology Data Exchange (ETDEWEB)

    Skarp, Stefan

    2000-02-01

    Simulation is a useful tool for investigating system behaviour. It is a way to examine operating situations without having to perform them in reality. If, for example, someone wants to test an operating situation in which the system might be destroyed, a computer simulation can be both cheaper and safer than doing the test in reality. This master thesis performs and analyses a simulation modelling an electric power system. The system consists of a minor hydro power station, a wood refining industry, and interconnecting power system components. In the simulated situation the system works in so-called island operation. The thesis aims at making a capacity analysis of the current system. Above all, the goal is to find restrictions on the consumer's load power profile under given circumstances. The computer software used in the simulations is Matlab and its add-on toolbox PSB (Power System Blockset). The work has been carried out in co-operation with the power supplier Skellefteaa Kraft, where the problem formulation of this master thesis originated.

  4. Operator realization of the SU(2) WZNW model

    International Nuclear Information System (INIS)

    Furlan, P.; Todorov, I.T.

    1995-12-01

    Decoupling the chiral dynamics in the canonical approach to the WZNW model requires an extended phase space that includes left and right monodromy variables M and M-bar. Earlier work on the subject, which traced back the quantum group symmetry of the model to the Lie-Poisson symmetry of the chiral symplectic form, left some open questions: How to reconcile the necessity to set M M-bar -1 = 1 (in order to recover the monodromy invariance of the local 2D group valued field g = uu-bar) with the fact that M and M-bar obey different exchange relations? What is the status of the quantum symmetry in the 2D theory in which the chiral fields u(x-t) and u-bar(x+t) commute? Is there a consistent operator formalism in the chiral (and the extended 2D) theory in the continuum limit? We propose a constructive affirmative answer to these questions for G = SU(2) by presenting the quantum fields u and u-bar as sums of products of chiral vertex operators and q-Bose creation and annihilation operators. (author). 17 refs

  5. Operator realization of the SU(2) WZNW model

    International Nuclear Information System (INIS)

    Furlan, P.; Hadjiivanov, L.K.; Todorov, I.T.

    1996-01-01

    Decoupling the chiral dynamics in the canonical approach to the WZNW model requires an extended phase space that includes left and right monodromy variables M and M-bar. Earlier work on the subject, which traced back the quantum group symmetry of the model to the Lie-Poisson symmetry of the chiral symplectic form, left some open questions: - How to reconcile the necessity to set M M-bar -1 = 1 (in order to recover the monodromy invariance of the local 2D group valued field g = uu-bar) with the fact that M and M-bar obey different exchange relations? - What is the status of the quantum symmetry in the 2D theory in which the chiral fields u(x-t) and u-bar(x+t) commute? - Is there a consistent operator formalism in the chiral (and the extended 2D) theory in the continuum limit? We propose a constructive affirmative answer to these questions for G=SU(2) by presenting the quantum fields u and u-bar as sums of products of chiral vertex operators and q-Bose creation and annihilation operators. (orig.)

  6. Cost Model for Risk Assessment of Company Operation in Audit

    Directory of Open Access Journals (Sweden)

    S. V.

    2017-12-01

    Full Text Available This article explores an approach to assessing the risk of termination of company activities by building a cost model. This model gives auditors information on managers’ understanding of the factors influencing change in the value of assets and liabilities, and the methods to identify it in more effective and reliable ways. Based on this information, the auditor can assess the adequacy of the use of the going-concern assumption by management personnel when preparing financial statements. Financial uncertainty entails real manifestations of factors creating risks of the occurrence of costs and revenue losses due to their manifestations, which in the long run can be a reason for termination of company operation, and therefore need to be foreseen in the auditor’s assessment of the adequacy of the use of the continuity assumption when financial statements are prepared by company management. The purpose of the study is to explore and develop a methodology for the use of cost models to assess the risk of termination of company operation in audit. The issue of a methodology for assessing the audit risk through analyzing methods for company valuation has not been dealt with. The review of methodologies for assessing the risks of termination of company operation in the course of an audit gives grounds for the conclusion that the use of cost models can be an effective methodology for the identification and assessment of such risks. The analysis of the above methods gives an understanding of the existing system for company valuation, integrated into the management system, and the consequences of its use, i.e. comparison of the asset price data with the accounting data and the market value of the asset data. Overvalued or undervalued company assets may be a sign of future sale or liquidation of a company, which may signal a high probability of termination of company operation. A wrong choice or application of valuation methods can be indicative of the risk of non

  7. Fuzzy expert systems models for operations research and management science

    Science.gov (United States)

    Turksen, I. B.

    1993-12-01

    Fuzzy expert systems can be developed for the effective use of management within the domains of concern associated with Operations Research and Management Science. These models are designed with: (1) expressive powers of representation embedded in linguistic variables and their linguistic values in natural language expressions, and (2) improved methods of inference based on fuzzy logic, which is a generalization of multi-valued logic with fuzzy quantifiers. The results of these fuzzy expert system models are either (1) approximately as good as those of their classical counterparts, or (2) much better than their counterparts. Moreover, fuzzy expert system models require only ordinal-scale data, whereas their classical counterparts generally require data on at least a ratio or absolute scale in order to guarantee the additivity and multiplicativity assumptions.
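
    As a toy illustration of the kind of rule evaluation such fuzzy expert system models rest on, the sketch below evaluates a single invented rule ("IF demand IS high AND stock IS low THEN urgency IS high") with triangular membership functions and min as the fuzzy AND. The linguistic variables and breakpoints are assumptions for illustration, not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rule_urgency(demand, stock):
    """Fire the rule: IF demand IS high AND stock IS low THEN urgency IS high."""
    mu_high_demand = tri(demand, 50, 100, 150)
    mu_low_stock = tri(stock, -50, 0, 50)
    return min(mu_high_demand, mu_low_stock)  # min implements the fuzzy AND

print(rule_urgency(100, 0))   # 1.0: both memberships are full, rule fires fully
print(rule_urgency(75, 25))   # 0.5: partial memberships, rule fires at degree 0.5
```

    Note that only the ordering of the inputs along each linguistic scale matters to the memberships, which is why ordinal-scale data suffices for such models.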

  8. Groundwater flow modelling of the excavation and operational phases - Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Urban (Computer-aided Fluid Engineering AB, Lyckeby (Sweden)); Rhen, Ingvar (SWECO Environment AB, Falun (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken a series of groundwater flow modelling studies. These represent time periods with different hydraulic conditions, and the simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. The modelling study reported here presents calculated inflow rates, drawdown of the groundwater table and upconing of deep saline water for different levels of grouting efficiency during the excavation and operational phases of a final repository at Laxemar. The inflow calculations were accompanied by a sensitivity study, which among other matters handled the impact of different deposition hole rejection criteria. The report also presents tentative modelling results for the duration of the saturation phase, which starts once the used parts of the repository are being backfilled.

  9. Ergonomic evaluation model of operational room based on team performance

    Directory of Open Access Journals (Sweden)

    YANG Zhiyi

    2017-05-01

    Full Text Available A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of layout design schemes for the action station in a multitasking operational room. This model was constructed to calculate and compare the theoretical value of team performance across multiple layout schemes by considering such substantial influential factors as frequency of communication, distance, angle, importance, human cognitive characteristics and so on. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experiment results, the proposed approach is conducive to the prediction and ergonomic evaluation of layout design schemes for the action station during early design stages, and provides a new theoretical method for the ergonomic evaluation, selection and optimization of layout design schemes.
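
    A hedged sketch of how such a theoretical team-performance value might be computed for competing layout schemes follows. The weighting below (communication frequency times importance, discounted by distance and viewing angle) is a plausible stand-in for the paper's formula, which the abstract does not give; all names and numbers are illustrative.

```python
def layout_score(pairs):
    """pairs: iterable of dicts giving, for each station pair, communication
    frequency, link importance, inter-station distance (m) and viewing angle (deg)."""
    score = 0.0
    for p in pairs:
        # Closer, more frontal links between frequently communicating,
        # important stations contribute more to team performance.
        angle_penalty = 1 + p["angle_deg"] / 90
        score += p["freq"] * p["importance"] / (p["distance_m"] * angle_penalty)
    return score

scheme_a = [{"freq": 10, "importance": 3, "distance_m": 2.0, "angle_deg": 0},
            {"freq": 4, "importance": 1, "distance_m": 5.0, "angle_deg": 45}]
scheme_b = [{"freq": 10, "importance": 3, "distance_m": 6.0, "angle_deg": 90},
            {"freq": 4, "importance": 1, "distance_m": 3.0, "angle_deg": 0}]
print(layout_score(scheme_a) > layout_score(scheme_b))  # scheme A preferred
```

    Comparing such scores across candidate layouts is the kind of early-design-stage screening the model is intended for, before the experimental verification step.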

  10. System Dynamics Modeling for Emergency Operating System Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Eng, Ang Wei; Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-10-15

    The purpose of this paper is to present a causal model that explains the human error cause-effect relationships of an emergency operating system (EOS) using a system dynamics (SD) approach. The causal model is then quantified by analyzing nuclear power plant incident/accident data in Korea for simulation modeling. An Emergency Operating System (EOS) is generally defined as a system consisting of personnel, the human-machine interface and procedures, together with the ways these components interact and coordinate to respond to an incident or accident. Understanding the behavior of the EOS, especially personnel behavior and the factors influencing it during an accident, contributes to human reliability evaluation. Human Reliability Analysis (HRA) is a method that assesses how human decisions and actions affect system risk and is further used to reduce the probability of human errors. Many HRA methods use performance influencing factors (PIFs) to identify the causes of human errors. However, these methods have several limitations: PIFs are assumed to be independent of each other, and the relationships between them have not been studied. Through SD simulation, users are able to simulate various situations in which a nuclear power plant responds to an emergency from human and organizational aspects. The simulation also provides users with a comprehensive view of how to improve safety in plants. This paper presents a causal model that explains the cause-effect relationships of EOS human errors. Through SD simulation, users can easily identify the main contributors to human error. Users can also use SD simulation to predict when and how a human error occurs over time. In future work, the SD model can be expanded with more low-level factors. The relationships within the low-level factors can be investigated using correlation methods and further included in the model, enabling users to study more detailed human error cause-effect relationships and the behavior of the EOS. Another improvement that can be made concerns the EOS factors

  11. System Dynamics Modeling for Emergency Operating System Resilience

    International Nuclear Information System (INIS)

    Eng, Ang Wei; Kim, Jong Hyun

    2014-01-01

    The purpose of this paper is to present a causal model that explains the human error cause-effect relationships of an emergency operating system (EOS) using a system dynamics (SD) approach. The causal model is then quantified by analyzing nuclear power plant incident/accident data in Korea for simulation modeling. An Emergency Operating System (EOS) is generally defined as a system consisting of personnel, the human-machine interface and procedures, together with the ways these components interact and coordinate to respond to an incident or accident. Understanding the behavior of the EOS, especially personnel behavior and the factors influencing it during an accident, contributes to human reliability evaluation. Human Reliability Analysis (HRA) is a method that assesses how human decisions and actions affect system risk and is further used to reduce the probability of human errors. Many HRA methods use performance influencing factors (PIFs) to identify the causes of human errors. However, these methods have several limitations: PIFs are assumed to be independent of each other, and the relationships between them have not been studied. Through SD simulation, users are able to simulate various situations in which a nuclear power plant responds to an emergency from human and organizational aspects. The simulation also provides users with a comprehensive view of how to improve safety in plants. This paper presents a causal model that explains the cause-effect relationships of EOS human errors. Through SD simulation, users can easily identify the main contributors to human error. Users can also use SD simulation to predict when and how a human error occurs over time. In future work, the SD model can be expanded with more low-level factors. The relationships within the low-level factors can be investigated using correlation methods and further included in the model, enabling users to study more detailed human error cause-effect relationships and the behavior of the EOS. Another improvement that can be made concerns the EOS factors

  12. A model of individualized canonical microcircuits supporting cognitive operations.

    Directory of Open Access Journals (Sweden)

    Tim Kunze

    Full Text Available Major cognitive functions such as language, memory, and decision-making are thought to rely on distributed networks of a large number of basic elements, called canonical microcircuits. In this theoretical study we propose a novel canonical microcircuit model and find that it supports two basic computational operations: a gating mechanism and working memory. By means of bifurcation analysis we systematically investigate the dynamical behavior of the canonical microcircuit with respect to parameters that govern the local network balance, that is, the relationship between excitation and inhibition, and key intrinsic feedback architectures of canonical microcircuits. We relate the local behavior of the canonical microcircuit to cognitive processing and demonstrate how a network of interacting canonical microcircuits enables the establishment of spatiotemporal sequences in the context of syntax parsing during sentence comprehension. This study provides a framework for using individualized canonical microcircuits for the construction of biologically realistic networks supporting cognitive operations.

  13. Addressing drug adherence using an operations management model.

    Science.gov (United States)

    Nunlee, Martin; Bones, Michelle

    2014-01-01

    OBJECTIVE To provide a model that enables health systems and pharmacy benefit managers to provide medications reliably and test for reliability and validity in the analysis of adherence to drug therapy of chronic disease. SUMMARY The quantifiable model described here can be used in conjunction with behavioral designs of drug adherence assessments. The model identifies variables that can be reproduced and expanded across the management of chronic diseases with drug therapy. By creating a reorder point system for reordering medications, the model uses a methodology commonly seen in operations research. The design includes a safety stock of medication and current supply of medication, which increases the likelihood that patients will have a continuous supply of medications, thereby positively affecting adherence by removing barriers. CONCLUSION This method identifies an adherence model that quantifies variables related to recommendations from health care providers; it can assist health care and service delivery systems in making decisions that influence adherence based on the expected order cycle days and the expected daily quantity of medication administered. This model addresses the possession of medication as a barrier to adherence.
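
    The reorder-point logic at the heart of the model can be sketched in a few lines; the parameter names and values below are illustrative, not the article's. A refill is triggered while the remaining supply still covers the expected order cycle plus the safety stock, so the patient is never without medication.

```python
def reorder_point(daily_dose, order_cycle_days, safety_stock):
    """Tablets remaining at which a refill should be ordered: the expected
    consumption over one order cycle plus a safety stock of medication."""
    return daily_dose * order_cycle_days + safety_stock

def needs_refill(tablets_on_hand, daily_dose, order_cycle_days, safety_stock):
    """True once the supply on hand drops to the reorder point."""
    return tablets_on_hand <= reorder_point(daily_dose, order_cycle_days,
                                            safety_stock)

print(reorder_point(2, 5, 4))     # 14 tablets: 2/day over a 5-day cycle, plus 4
print(needs_refill(12, 2, 5, 4))  # True: order now to avoid a gap in supply
```

    Because the safety stock absorbs delays in dispensing, a continuous supply of medication is maintained, removing one practical barrier to adherence as the abstract describes.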

  14. Modelling of the operational behaviour of passive autocatalytic recombiners

    International Nuclear Information System (INIS)

    Schwarz, Ulrich

    2011-01-01

    During severe accidents in nuclear power plants, a significant amount of hydrogen can be produced. In pressurized water reactors, a possible and widespread countermeasure is the use of auto-catalytic recombiners. There are numerous numerical models describing the operational behaviour of recombiners for containment codes. The numerical model REKO-DIREKT was developed at Forschungszentrum Juelich. This model describes the chemical reaction on the catalytic sheets with a physical model, as opposed to the usual codes based on empirical correlations. Additionally, there have been experimental studies concerning the catalytic recombination of hydrogen since the 1990s. The aim of this work is the further development of the program REKO-DIREKT into an independent recombiner model for severe accident and containment codes. To this end, the existing catalyst model has been subjected to a parameter optimization based on an experimental database that was expanded during this work. In addition, a chimney model has been implemented which allows the calculation of the free convection flow through the recombiner housing due to the exothermal reaction. This model has been tested against experimental data obtained from a recently built test facility. The complete recombiner model REKO-DIREKT has been validated with data from the literature. Another aim of this work is the derivation of the reaction kinetics for recombiner designs for future reactor concepts. To this end, experimental studies both on single catalytically coated meshes and on two meshes installed in a row have been performed at laboratory scale. By means of the measured data, a theoretical approach for the determination of the reaction rate has been derived.

  15. Object-oriented process dose modeling for glovebox operations

    International Nuclear Information System (INIS)

    Boerigter, S.T.; Fasel, J.H.; Kornreich, D.E.

    1999-01-01

    The Plutonium Facility at Los Alamos National Laboratory supports several defense and nondefense-related missions for the country by performing fabrication, surveillance, and research and development for materials and components that contain plutonium. Most operations occur in rooms with one or more arrays of gloveboxes connected to each other via trolley gloveboxes. Minimizing the effective dose equivalent (EDE) is a growing concern as a result of steadily declining allowable dose limits and a growing general awareness of safety in the workplace. In general, the authors distinguish three components of a worker's total EDE: the primary EDE, the secondary EDE, and the background EDE. A particular background source of interest is the nuclear materials vault. The distinction between sources inside and outside of a particular room is arbitrary, with the underlying assumption that building walls and floors provide significant shielding to justify including sources in other rooms in the background category. Los Alamos has developed the Process Modeling System (ProMoS) primarily for performing process analyses of nuclear operations. ProMoS is an object-oriented, discrete-event simulation package that has been used to analyze operations at Los Alamos and proposed facilities such as the new fabrication facilities for the Complex-21 effort. In the past, crude estimates of the process dose (the EDE received when a particular process occurred), room dose (the EDE received when a particular process occurred in a given room), and facility dose (the EDE received when a particular process occurred in the facility) were used to obtain an integrated EDE for a given process. Modifications to the ProMoS package were made to utilize secondary dose information, using dose modeling to enhance the process modeling efforts.

  16. A hybrid spatiotemporal drought forecasting model for operational use

    Science.gov (United States)

    Vasiliades, L.; Loukas, A.

    2010-09-01

    Drought forecasting plays an important role in the planning and management of natural resources and water resource systems in a river basin. Early and timely forecasting of a drought event can help to take proactive measures and set out drought mitigation strategies to alleviate the impacts of drought. Spatiotemporal data mining is the extraction of unknown and implicit knowledge, structures, spatiotemporal relationships, or patterns not explicitly stored in spatiotemporal databases. As one such data mining technique, forecasting is widely used to predict the unknown future based upon the patterns hidden in the current and past data. This study develops a hybrid spatiotemporal scheme for integrated spatial and temporal forecasting. Temporal forecasting is achieved using feed-forward neural networks, and the temporal forecasts are extended to the spatial dimension using a spatial recurrent neural network model. The methodology is demonstrated for an operational meteorological drought index, the Standardized Precipitation Index (SPI), calculated at multiple timescales. 48 precipitation stations and 18 independent precipitation stations, located in the Pinios river basin in the Thessaly region, Greece, were used for the development and spatiotemporal validation of the hybrid scheme. Several quantitative temporal and spatial statistical indices were considered for the performance evaluation of the models. Furthermore, qualitative statistical criteria based on contingency tables between observed and forecasted drought episodes were calculated. The results show that the lead time of forecasting for operational use depends on the SPI timescale. The hybrid spatiotemporal drought forecasting model could be operationally used for forecasting up to three months ahead for short SPI timescales (e.g. 3-6 months) and up to six months ahead for large SPI timescales (e.g. 24 months). The above findings could be useful in developing a drought preparedness plan in the region.
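
    The forecast target above, the SPI, can be sketched as follows. This is a simplified version (SciPy is assumed to be available): monthly precipitation is aggregated over the chosen timescale, a gamma distribution is fitted, and the fitted CDF is mapped through the inverse normal CDF. The zero-precipitation correction and the paper's neural-network forecasting stage are omitted, and the data are synthetic.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip, timescale=3):
    """Standardized Precipitation Index at the given timescale (months)."""
    # Aggregate precipitation over a sliding window of `timescale` months.
    x = np.convolve(monthly_precip, np.ones(timescale), mode="valid")
    # Fit a gamma distribution to the aggregates (location fixed at zero).
    shape, loc, scale = stats.gamma.fit(x, floc=0)
    cdf = stats.gamma.cdf(x, shape, loc=loc, scale=scale)
    # Equiprobability transform to a standard normal variate.
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=30.0, size=240)  # 20 years of monthly data
index = spi(precip, timescale=3)
print(index.mean(), index.std())  # roughly 0 and 1 by construction
```

    Values below about -1 would then be flagged as drought episodes; it is these standardized series that the hybrid scheme forecasts ahead in time and extends across stations.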

  17. Operational derivation of Boltzmann distribution with Maxwell's demon model.

    Science.gov (United States)

    Hosoya, Akio; Maruyama, Koji; Shikano, Yutaka

    2015-11-24

    The resolution of Maxwell's demon paradox linked thermodynamics with information theory through the information erasure principle. By considering a demon endowed with a Turing machine consisting of a memory tape and a processor, we attempt to explore the link towards the foundations of statistical mechanics and to derive results therein in an operational manner. Here, we present a derivation of the Boltzmann distribution in equilibrium as an example, without hypothesizing the principle of maximum entropy. Further, since the model can in principle be applied to non-equilibrium processes, we demonstrate the dissipation-fluctuation relation to show the possibilities in this direction.
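
    For reference, the equilibrium distribution derived operationally in the paper is the standard Boltzmann form: for energy levels E_n at inverse temperature β,

```latex
p_n = \frac{e^{-\beta E_n}}{Z}, \qquad Z = \sum_n e^{-\beta E_n}, \qquad \beta = \frac{1}{k_B T}.
```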

  18. Integration of field data into operational snowmelt-runoff models

    International Nuclear Information System (INIS)

    Brandt, M.; Bergström, S.

    1994-01-01

    Conceptual runoff models have become standard tools for operational hydrological forecasting in Scandinavia. These models are normally based on observations from the national climatological networks, but in mountainous areas the stations are few and sometimes not representative. Due to the great economic importance of good hydrological forecasts for the hydro-power industry, attempts have been made to improve the model simulations by support from field observations of the snowpack. The snowpack has been mapped by several methods: airborne gamma-spectrometry, airborne georadar, satellites and conventional snow courses. The studies cover more than ten years of work in Sweden. The conclusion is that field observations of the snow cover have a potential for improvement of the forecasts of inflow to the reservoirs in the mountainous part of the country, where the climatological data coverage is poor. This is especially pronounced during years with unusual snow distribution. The potential for model improvement is smaller in the climatologically more homogeneous forested lowlands, where the climatological network is denser. The costs of introducing airborne observations into the modelling procedure are high and can only be justified in areas of great hydropower potential. (author)

  19. Modeling of a bioethanol combustion engine under different operating conditions

    International Nuclear Information System (INIS)

    Hedfi, Hachem; Jedli, Hedi; Jbara, Abdessalem; Slimi, Khalifa

    2014-01-01

    Highlights: • Bioethanol/gasoline blends’ fuel effects on engine’s efficiency, CO and NOx emissions. • Fuel consumption and EGR optimizations with respect to estimated engine’s work. • Ignition timing and blends’ effects on engine’s efficiency. • Rich mixture, gasoline/bioethanol blends and EGR effects on engine’s efficiency. - Abstract: A physical model based on a thermodynamic analysis was designed to characterize the combustion reaction parameters. The time-variations of pressure and temperature required for the calculation of the specific heat ratio are obtained from the solution of the energy conservation equation. The chemical combustion of the biofuel is modeled by an overall two-step reaction. The rich mixture and EGR were varied to obtain the optimum operating conditions for the engine. The NOx formation is modeled by using an eight-species six-step mechanism. The effect of the various NOx formation steps in combustion is considered via a phenomenological model of combustion speed. This simplified model, which has been validated against the most relevant published results, is used to characterize and control, in real time, the impact of biofuel on engine performance and NOx emissions as well. It has been demonstrated that a delay of the ignition timing leads to an increase of the gas mixture temperature and cylinder pressure. Furthermore, it has been found that CO is lower near stoichiometry. Nevertheless, we notice that lower rich mixture values result in small NOx emission rates.
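
    The abstract does not list the six NOx steps; in engine models of this kind, the thermal-NO core is usually the extended Zeldovich mechanism (an assumption here, for illustration only):

```latex
\mathrm{N_2} + \mathrm{O} \rightleftharpoons \mathrm{NO} + \mathrm{N}, \qquad
\mathrm{N} + \mathrm{O_2} \rightleftharpoons \mathrm{NO} + \mathrm{O}, \qquad
\mathrm{N} + \mathrm{OH} \rightleftharpoons \mathrm{NO} + \mathrm{H}.
```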

  20. Metric versus observable operator representation, higher spin models

    Science.gov (United States)

    Fring, Andreas; Frith, Thomas

    2018-02-01

    We elaborate further on the metric representation that is obtained by transferring the time-dependence from a Hermitian Hamiltonian to the metric operator in a related non-Hermitian system. We provide further insight into the procedure on how to employ the time-dependent Dyson relation and the quasi-Hermiticity relation to solve time-dependent Hermitian Hamiltonian systems. By solving both equations separately we argue here that it is in general easier to solve the former. We solve the mutually related time-dependent Schrödinger equation for Hermitian and non-Hermitian spin 1/2, 1 and 3/2 models with time-independent and time-dependent metric, respectively. In all models the overdetermined coupled system of equations for the Dyson map can be decoupled by algebraic manipulations and reduces to simple linear differential equations and an equation that can be converted into the non-linear Ermakov-Pinney equation.
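
    In the conventions of this literature, the two relations in question are the time-dependent Dyson equation, relating the Hermitian Hamiltonian h(t) to the non-Hermitian H(t) through the Dyson map η(t), and the time-dependent quasi-Hermiticity relation for the metric ρ(t) = η†(t)η(t):

```latex
h(t) = \eta(t)\, H(t)\, \eta^{-1}(t) + i\hbar\, \partial_t \eta(t)\, \eta^{-1}(t), \qquad
H^{\dagger}(t)\, \rho(t) - \rho(t)\, H(t) = i\hbar\, \partial_t \rho(t).
```

    Solving the first equation directly for η, rather than the second for ρ, is the route the authors argue is generally easier.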

  1. Modeling motive activation in the Operant Motives Test

    DEFF Research Database (Denmark)

    Runge, J. Malte; Lang, Jonas W. B.; Engeser, Stefan

    2016-01-01

    The Operant Motive Test (OMT) is a picture-based procedure that asks respondents to generate imaginative verbal behavior that is later coded for the presence of affiliation, power, and achievement-related motive content by trained coders. The OMT uses a larger number of pictures and asks...... on the dynamic model were .52, .62, and .73 for the affiliation, achievement, and power motive in the OMT, respectively. The second contribution of this article is a tutorial and R code that allows researchers to directly apply the dynamic Thurstonian IRT model to their data. The future use of the OMT...... respondents to provide more brief answers than earlier and more traditional picture-based implicit motive measures and has therefore become a frequently used measurement instrument in both research and practice. This article focuses on the psychometric response mechanism in the OMT and builds on recent...

  2. Operational modal analysis modeling, Bayesian inference, uncertainty laws

    CERN Document Server

    Au, Siu-Kui

    2017-01-01

    This book presents operational modal analysis (OMA), employing a coherent and comprehensive Bayesian framework for modal identification and covering stochastic modeling, theoretical formulations, computational algorithms, and practical applications. Mathematical similarities and philosophical differences between Bayesian and classical statistical approaches to system identification are discussed, allowing their mathematical tools to be shared and their results correctly interpreted. Many chapters can be used as lecture notes for the general topic they cover beyond the OMA context. After an introductory chapter (1), Chapters 2–7 present the general theory of stochastic modeling and analysis of ambient vibrations. Readers are first introduced to the spectral analysis of deterministic time series (2) and structural dynamics (3), which do not require the use of probability concepts. The concepts and techniques in these chapters are subsequently extended to a probabilistic context in Chapter 4 (on stochastic pro...

  3. Hadron matrix elements of quark operators in the relativistic quark model, 2. Model calculation

    Energy Technology Data Exchange (ETDEWEB)

    Arisue, H; Bando, M; Toya, M [Kyoto Univ. (Japan). Dept. of Physics; Sugimoto, H

    1979-11-01

    Phenomenological studies of the matrix elements of two- and four-quark operators are made on the basis of a relativistic independent quark model for three typical cases of the potential: rigid-wall, linearly rising and Coulomb-like potentials. The values of the matrix elements of two-quark operators are relatively well reproduced in each case, but those of four-quark operators prove to be too small in the independent particle treatment. It is suggested that short-range two-quark correlations must be taken into account in order to improve the values of the matrix elements of the four-quark operators.

  4. Modelling operator cognitive interactions in nuclear power plant safety evaluation

    International Nuclear Information System (INIS)

    Senders, J.W.; Moray, N.; Smiley, A.; Sellen, A.

    1985-08-01

    The overall objectives of the study were to review methods which are applicable to the analysis of control room operator cognitive interactions in nuclear plant safety evaluations and to indicate where future research effort in this area should be directed. This report is based on an exhaustive search and review of the literature on NPP (Nuclear Power Plant) operator error, human error, human cognitive function, and on human performance. A number of methods which have been proposed for the estimation of data for probabilistic risk analysis have been examined and have been found wanting. None addresses the problem of diagnosis error per se. Virtually all are concerned with the more easily detected and identified errors of action. None addresses underlying cause and mechanism. It is these mechanisms which must be understood if diagnosis errors and other cognitive errors are to be controlled and predicted. We have attempted to overcome the deficiencies of earlier work and have constructed a model/taxonomy, EXHUME, which we consider to be exhaustive. This construct has proved to be fruitful in organizing our thinking about the kinds of error that can occur and the nature of self-correcting mechanisms, and has guided our thinking in suggesting a research program which can provide the data needed for quantification of cognitive error rates and of the effects of mitigating efforts. In addition a preliminary outline of EMBED, a causal model of error, is given based on general behavioural research into perception, attention, memory, and decision making. 184 refs

  5. Classification of NLO operators for composite Higgs models

    Science.gov (United States)

    Alanne, Tommi; Bizot, Nicolas; Cacciapaglia, Giacomo; Sannino, Francesco

    2018-04-01

    We provide a general classification of template operators, up to next-to-leading order, that appear in chiral perturbation theories based on the two flavor patterns of spontaneous symmetry breaking SU(N_F)/Sp(N_F) and SU(N_F)/SO(N_F). All possible explicit-breaking sources parametrized by spurions transforming in the fundamental and in the two-index representations of the flavor symmetry are included. While our general framework can be applied to any model of strong dynamics, we specialize to composite-Higgs models, where the main explicit breaking sources are a current mass, the gauging of flavor symmetries, and the Yukawa couplings (for the top). For the top, we consider both bilinear couplings and linear ones à la partial compositeness. Our templates provide a basis for lattice calculations in specific models. As a special example, we consider the SU(4)/Sp(4) ≅ SO(6)/SO(5) pattern which corresponds to the minimal fundamental composite-Higgs model. We further revisit issues related to the misalignment of the vacuum. In particular, we shed light on the physical properties of the singlet η, showing that it cannot develop a vacuum expectation value without explicit CP violation in the underlying theory.

  6. Proposal for operator's mental model using the concept of multilevel flow modeling

    International Nuclear Information System (INIS)

    Yoshimura, Seiichi; Takano, Kenichi; Sasou, Kunihide

    1995-01-01

    It is necessary to analyze an operator's thinking process and an operator team's intention-forming process in order to prevent human errors in a highly advanced, huge system like a nuclear power plant. The Central Research Institute of Electric Power Industry is promoting a research project to establish human error prevention countermeasures by modeling the thinking and intention-forming process. What is important is future prediction and cause identification when abnormal situations occur in a nuclear power plant. The concept of Multilevel Flow Modeling (MFM) seems to be effective as an operator's mental model which performs future prediction and cause identification. MFM is a concept which qualitatively describes plant functions by energy and mass flows and also describes plant status by breaking down, in a hierarchical manner, the targets which a plant should achieve. In this paper, an operator's mental model using the concept of MFM was proposed and a nuclear power plant diagnosis support system using MFM was developed. A system evaluation test by personnel with operational experience in nuclear power plants revealed that MFM was superior in future prediction and cause identification to a traditional nuclear power plant status display system using mimics and trends. The test thus showed MFM to be useful as an operator's mental model. (author)

  7. Groundwater flow modelling of the excavation and operational phases - Forsmark

    International Nuclear Information System (INIS)

    Svensson, Urban; Follin, Sven

    2010-07-01

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken a series of groundwater flow modelling studies. These represent time periods with different climate conditions and the simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. The modelling study reported here presents calculated inflow rates, drawdown of the groundwater table and upconing of deep saline water for different levels of grouting efficiency during the excavation and operational phases of a final repository at Forsmark. The inflow calculations are accompanied by a sensitivity study, which among other matters handles the impact of parameter heterogeneity, different deposition hole rejection criteria, and the SFR facility (the repository for short-lived radioactive waste located approximately 1 km to the north of the investigated candidate area for a final repository at Forsmark). The report also presents tentative modelling results for the duration of the saturation phase, which starts once the used parts of the repository are being backfilled

  8. Groundwater flow modelling of the excavation and operational phases - Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Urban (Computer-aided Fluid Engineering AB, Lyckeby (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-07-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken a series of groundwater flow modelling studies. These represent time periods with different climate conditions and the simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. The modelling study reported here presents calculated inflow rates, drawdown of the groundwater table and upconing of deep saline water for different levels of grouting efficiency during the excavation and operational phases of a final repository at Forsmark. The inflow calculations are accompanied by a sensitivity study, which among other matters handles the impact of parameter heterogeneity, different deposition hole rejection criteria, and the SFR facility (the repository for short-lived radioactive waste located approximately 1 km to the north of the investigated candidate area for a final repository at Forsmark). The report also presents tentative modelling results for the duration of the saturation phase, which starts once the used parts of the repository are being backfilled.

  9. Modeling the ecological impacts of Flaming Gorge Dam operations

    International Nuclear Information System (INIS)

    Yin, S.C.L.; LaGory, K.E.; Hayse, J.W.; Hlohowskyj, I.; Van Lonkhuyzen, R.A.; Cho, H.E.

    1996-01-01

    Hydropower operations at Flaming Gorge Dam on the Green River in Utah, US, can produce rapid downstream changes in flow and stage during a day. These changes can, in turn, affect ecological resources below the dam, including riparian vegetation, trout, and endangered fish. Four hydropower operational scenarios featuring varying degrees of hydropower-induced flow fluctuation were evaluated with hydrologic models and multispectral aerial videography of the river. Year-round high fluctuations would support the least amount of stable spawning habitat for trout and nursery habitat for endangered fish, and would have the greatest potential for reducing growth and overwinter survival of fish. Seasonally adjusted moderate fluctuation and seasonally adjusted steady flow scenarios could increase food production and overwinter survival and would provide the greatest amount of spawning and nursery habitat for fish. The year-round high fluctuation, seasonally adjusted high fluctuation, and seasonally adjusted moderate fluctuation scenarios would result in a 5% decrease in upper riparian zone habitat. The seasonally adjusted steady flow scenario would result in an 8% increase in upper riparian zone habitat. Lower riparian zone habitat would increase by about 17% for the year-round and seasonally adjusted high fluctuating flow scenarios but decrease by about 24% and 69% for the seasonally adjusted moderate fluctuating and steady flow scenarios, respectively.

  10. Modelling the operation of precipitator with vortex effect

    International Nuclear Information System (INIS)

    Eysseric-Emile, C.

    1994-01-01

    In the Purex process, which is implemented for the processing of irradiated fuels to eliminate fission products and to recover and valorise uranium and plutonium in the form of end products, a precipitation operation is used to prepare the plutonium oxalate. This research thesis aims at analysing the hydrodynamic characteristics of a specific apparatus used for this precipitation, the precipitator with vortex effect. In the first part, the author presents the problems associated with precipitation operations and their implementation in the processing of irradiated fuels, and compares the considered precipitator with other devices used for the precipitation of radioactive compounds. He reviews the literature on the vortex effect in agitated vessels, highlights the key parameter (the forced vortex radius), and reports some preliminary measurements performed on the precipitator. The author then reports the study of liquid-phase flows in the precipitator, measurements of the rate of suspension, and the study of micro-mixing with reactants. He finally reports attempts to validate trends noticed during flow analysis and a first simple model of the precipitator. [fr]

  11. Considerations in modeling fission gas release during normal operation

    International Nuclear Information System (INIS)

    Rumble, E.T.; Lim, E.Y.; Stuart, R.G.

    1977-01-01

    The EPRI LWR fuel rod modeling code evaluation program analyzed seven fuel rods with experimental fission gas release data. In these cases, rod-averaged burnups are less than 20,000 MWD/MTM, while the fission gas release fractions range roughly from 2 to 27%. Code results demonstrate the complexities in calculating fission gas release in certain operating regimes. Beyond this work, the behavior of a pre-pressurized PWR rod is simulated to average burnups of 40,000 MWD/MTM using GAPCON-THERMAL-2. Analysis of the sensitivity of fission gas release to power histories and release correlations indicates the strong impact that LMFBR-type release correlations have at high burnup. 15 refs

  12. Thermal evolution of the Schwinger model with matrix product operators

    International Nuclear Information System (INIS)

    Banuls, M.C.; Cirac, J.I.; Cichy, K.; Jansen, K.; Saito, H.

    2015-10-01

    We demonstrate the suitability of tensor network techniques for describing the thermal evolution of lattice gauge theories. As a benchmark case, we have studied the temperature dependence of the chiral condensate in the Schwinger model, using matrix product operators to approximate the thermal equilibrium states for finite system sizes with non-zero lattice spacings. We show how these techniques allow for reliable extrapolations in bond dimension, step width, system size and lattice spacing, and for a systematic estimation and control of all error sources involved in the calculation. The reached values of the lattice spacing are small enough to capture the most challenging region of high temperatures and the final results are consistent with the analytical prediction by Sachs and Wipf over a broad temperature range.

  13. AMFESYS: Modelling and diagnosis functions for operations support

    Science.gov (United States)

    Wheadon, J.

    1993-01-01

    Packetized telemetry, combined with low station coverage for close-earth satellites, may introduce new problems in presenting to the operator a clear picture of what the spacecraft is doing. A recent ESOC study has gone some way to show, by means of a practical demonstration, how the use of subsystem models combined with artificial intelligence techniques, within a real-time spacecraft control system (SCS), can help to overcome these problems. A spin-off from using these techniques can be an improvement in the reliability of the telemetry (TM) limit-checking function, as well as the telecommand verification function, of the SCS. The paper describes the problem and how it was addressed, including an overview of the 'AMF Expert System' prototype, and proposes further work needed to prove the concept. The Automatic Mirror Furnace (AMF) is part of the payload of the European Retrievable Carrier (EURECA) spacecraft, which was launched in July 1992.

  14. First operation experiences with ITER-FEAT model pump

    International Nuclear Information System (INIS)

    Mack, A.; Day, Chr.; Haas, H.; Murdoch, D.K.; Boissin, J.C.; Schummer, P.

    2001-01-01

    Design and manufacturing of the model cryopump for ITER-FEAT have been finished. After acceptance tests at the contractor's premises, the pump was installed in the TIMO facility, which was prepared for testing the pump under ITER-FEAT-relevant operating conditions. The procedures for the final acceptance tests are described. Travelling time, positioning accuracy and leak rate of the main valve are within the requirements. The heat loads to the 5 and 80 K circuits are a factor of two better than the design values. The maximum pumping speeds for H₂, D₂, He and Ne were measured. The value of 58 m³/s for D₂ is well above the contractually required value of 40 m³/s.

  15. Life Modeling for Nickel-Hydrogen Batteries in Geosynchronous Satellite Operation

    National Research Council Canada - National Science Library

    Zimmerman, A. H; Ang, V. J

    2005-01-01

    .... The model has been used to predict how properly designed and operated nickel-hydrogen battery lifetimes should depend on the operating environments and charge control methods typically used in GEO operation...

  16. EMMA model: an advanced operational mesoscale air quality model for urban and regional environments

    International Nuclear Information System (INIS)

    Jose, R.S.; Rodriguez, M.A.; Cortes, E.; Gonzalez, R.M.

    1999-01-01

    Mesoscale air quality models are an important tool for forecasting and analysing air quality in regional and urban areas. In recent years, increased interest has been shown by decision makers in these types of software tools. The complexity of such models has grown with the increase in computer power; nowadays, medium workstations can successfully run operational versions of these modelling systems. This paper presents a complex mesoscale air quality model which has been installed in the Environmental Office of the Madrid community (Spain) in order to accurately forecast ozone, nitrogen dioxide and sulphur dioxide concentrations in a 3D domain centred on Madrid city. It describes the challenging scientific matters to be solved in order to develop an operational version of the atmospheric mesoscale numerical pollution model for urban and regional areas (ANA). Some encouraging results have been achieved in attempts to improve the accuracy of the predictions made by the version already installed. (Author)

  17. Modeling the Operation of a Platoon of Amphibious Vehicles for Support of Operational Test and Evaluation (OT&E)

    National Research Council Canada - National Science Library

    Gaver, Donald

    2001-01-01

    ...) of the Marine Corps' prospective Advanced Amphibious Assault Vehicle (AAAV). The model's emphasis is on suitability issues such as Operational Availability in an on-land (after ocean transit) mission region...

  18. Operational Leadership in the Information Age: A New Model

    National Research Council Canada - National Science Library

    Monis, Michael

    2000-01-01

    This report examines the effects of significant trends in the operational environment to arrive at a clearer picture of the evolving leadership challenges operational commanders face today and in the...

  19. Remote Sensing and Modeling for Improving Operational Aquatic Plant Management

    Science.gov (United States)

    Bubenheim, Dave

    2016-01-01

    The California Sacramento-San Joaquin River Delta is the hub for California’s water supply, conveying water from Northern to Southern California agriculture and communities while supporting important ecosystem services, agriculture, and communities in the Delta. Changes in climate, long-term drought, water quality changes, and expansion of invasive aquatic plants threaten ecosystems, impede ecosystem restoration, and are economically, environmentally, and sociologically detrimental to the San Francisco Bay/California Delta complex. NASA Ames Research Center and the USDA-ARS partnered with the State of California and local governments to develop science-based, adaptive-management strategies for the Sacramento-San Joaquin Delta. The project combines science, operations, and economics related to integrated management scenarios for aquatic weeds to help land and waterway managers make science-informed decisions regarding management and outcomes. The team provides a comprehensive understanding of agricultural and urban land use in the Delta and the major watersheds (San Joaquin/Sacramento) supplying the Delta, and of their interaction with drought and climate impacts on the environment, water quality, and weed growth. The team recommends conservation and modified land-use practices and aids local Delta stakeholders in developing management strategies. New remote sensing tools have been developed to enhance the ability to assess conditions, inform decision support tools, and monitor management practices. Science gaps in understanding how native and invasive plants respond to altered environmental conditions are being filled and provide critical biological response parameters for Delta-SWAT simulation modeling. Operational agencies such as the California Department of Boating and Waterways provide testing and act as initial adopters of decision support tools. Methods developed by the project can become routine land and water management tools in complex river delta systems.

  20. Modeling Operating Modes for the Monju Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Yoshikawa, Hidekazu; Jørgensen, Sten Bay

    2012-01-01

    The specification of supervision and control tasks in complex processes requires definition of plant states on various levels of abstraction related to plant operation in start-up, normal operation and shut-down. Modes of plant operation are often specified in relation to a plant decomposition in...... for the Japanese fast breeder reactor plant MONJU....

  1. Modeling of a dependence between human operators in advanced main control rooms

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol; Shin, Yeong Cheol

    2009-01-01

    For the human reliability analysis of main control room (MCR) operations, not only parameters such as the given situation and the capability of the operators but also the dependence between the operators' actions should be considered, because MCR operations are team operations. The dependence between operators may be more prevalent in an advanced MCR, in which operators share the same information through a computerized monitoring system or a computerized procedure system. This work therefore focused on the computerized operating environment of advanced MCRs and proposed a model of dependence that represents the possibility that an operator's error is recovered by another operator. The proposed model estimates human error probability values by considering adjustment values for a situation and dependence values for operators during the same operation, using independent event trees. This work can be used to quantitatively calculate a more reliable operation failure probability for an advanced MCR. (author)
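
    The abstract does not give the dependence adjustment itself. For illustration only, here is a THERP-style conditional-probability adjustment (a standard dependence model in human reliability analysis, not necessarily the authors'), sketched in Python:

```python
# THERP-style dependence levels (Swain & Guttmann): conditional
# probability that a second operator also fails, given the first failed.
DEPENDENCE = {
    "zero":     lambda p: p,                  # independent actions
    "low":      lambda p: (1 + 19 * p) / 20,
    "moderate": lambda p: (1 + 6 * p) / 7,
    "high":     lambda p: (1 + p) / 2,
    "complete": lambda p: 1.0,
}

def joint_failure(p_first, p_second, level):
    """P(both operators fail), adjusting the second HEP for dependence."""
    return p_first * DEPENDENCE[level](p_second)

p = 0.01  # nominal human error probability (HEP) per operator
print({lvl: round(joint_failure(p, p, lvl), 6) for lvl in DEPENDENCE})
```

    A shared-information environment of the kind described in the abstract would tend to push the assumed dependence level upward, raising the joint failure probability.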

  2. Integrated model of port oil piping transportation system safety including operating environment threats

    OpenAIRE

    Kołowrocki, Krzysztof; Kuligowska, Ewa; Soszyńska-Budny, Joanna

    2017-01-01

    The paper presents an integrated general model of a complex technical system, linking its multistate safety model with the model of its operation process, including operating environment threats, and allowing its safety structures and its components' safety parameters to vary across operation states. Under the assumption that the system has an exponential safety function, the safety characteristics of the port oil piping transportation system are determined.

  3. NOAA Operational Model Archive Distribution System (NOMADS): High Availability Applications for Reliable Real Time Access to Operational Model Data

    Science.gov (United States)

    Alpert, J. C.; Wang, J.

    2009-12-01

    To reduce the impact of natural hazards and environmental changes, the National Centers for Environmental Prediction (NCEP) provide first-alert environmental prediction services, act as a preferred partner for such services, and represent a critical national resource to operational and research communities affected by climate, weather and water. NOMADS is now delivering high-availability services as part of NOAA’s official real time data dissemination at its Web Operations Center (WOC) server. The WOC is a web service used by organizational units in and outside NOAA, and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value-added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The user (client) executes what is efficient to execute on the client, and the server efficiently provides format-independent access services. Client applications can execute on the server, if desired, but the same program can be executed on the client side with no loss of efficiency. In this way the paradigm lends itself to aggregation servers that act as servers of servers: listing and searching catalogs of holdings, data mining, and updating information from the metadata descriptions, enabling collections of data in disparate places to be accessed simultaneously, with results processed on servers and clients to produce the needed answer. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks including

  4. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street Press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso logistic regression model using predictive accuracy and model generalizability (measured by the area under the curve (AUC) of the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, both L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
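    As a rough illustration of the baseline comparator described above, the sketch below fits an L1-penalized ("lasso") logistic regression to bag-of-words features and scores it by AUC. The data are synthetic stand-ins, not the Alexander Street corpus; all vocabulary and parameters are illustrative assumptions.

    ```python
    # Minimal sketch: lasso logistic regression for session-level code
    # prediction, evaluated by AUC. Toy "transcripts" are generated so that
    # sessions labeled 1 use feeling-words more often (illustrative only).
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    pos_words, neg_words = ["sad", "worry", "fear"], ["work", "travel", "food"]
    docs, labels = [], []
    for _ in range(200):
        y = int(rng.integers(0, 2))
        vocab = pos_words if y else neg_words
        docs.append(" ".join(rng.choice(vocab + ["the", "and"], size=30)))
        labels.append(y)

    X = CountVectorizer().fit_transform(docs)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"test AUC = {auc:.2f}")  # near 1.0 on this separable toy data
    ```

    The L1 penalty drives most word weights to exactly zero, which is why lasso models are a natural baseline when only a few vocabulary items carry the code signal.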

  5. Artificial Systems and Models for Risk Covering Operations

    Directory of Open Access Journals (Sweden)

    Laurenţiu Mihai Treapăt

    2017-04-01

    Full Text Available This paper mainly focuses on the roles of artificial intelligence based systems, especially in risk-covering operations. In this context, the paper provides theoretical explanations of real-life based examples and applications. From a general perspective, the paper enriches its value with a wide discussion of the related subject. The paper aims to revise the volatility estimation models and the correlations between the various time series, and also presents the RiskMetrics methodology, as explained in a case study. The advantages that the VaR estimation offers consist of its ability to quantitatively and numerically express the risk level of a portfolio at a certain moment in time, as well as the risk of an open position (in securities, FX, commodities or granted loans) belonging to an economic agent or even an individual; hence its role in more efficient capital allocation, in delimiting the assumed risk, and also as a performance measurement instrument. In this paper and the case study that completes our work, we aim to show how considerable losses and even bankruptcies can be prevented if VaR is known and applied accordingly. For this reason, universities in Romania should include or expand in their curricula the study of the VaR model as an artificial intelligence tool. The simplicity of the presented case study is most probably the strongest argument of the current work, because it can be understood even by readers who are not very experienced in the risk management field.
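    To make the RiskMetrics-style VaR calculation concrete, here is a minimal parametric sketch (not the article's actual case study): portfolio VaR from position values, daily return volatilities, and a correlation matrix. All figures are made up for illustration.

    ```python
    # Parametric (variance-covariance) VaR in the RiskMetrics style.
    import numpy as np

    positions = np.array([1_000_000.0, 500_000.0])   # exposure per asset
    vols = np.array([0.012, 0.020])                  # daily return std devs
    corr = np.array([[1.0, 0.3],
                     [0.3, 1.0]])
    z = 1.65  # one-sided 95% confidence quantile used by RiskMetrics

    # Covariance of returns, then standard deviation of portfolio P&L.
    cov = np.outer(vols, vols) * corr
    sigma_pnl = np.sqrt(positions @ cov @ positions)
    var_95 = z * sigma_pnl
    print(f"1-day 95% VaR = {var_95:,.0f}")
    ```

    With these illustrative numbers the 1-day 95% VaR comes out near 29,000 currency units: the loss threshold expected to be exceeded on roughly one trading day in twenty.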

  6. Operative and diagnostic hysteroscopy: A novel learning model combining new animal models and virtual reality simulation.

    Science.gov (United States)

    Bassil, Alfred; Rubod, Chrystèle; Borghesi, Yves; Kerbage, Yohan; Schreiber, Elie Servan; Azaïs, Henri; Garabedian, Charles

    2017-04-01

    Hysteroscopy is one of the most common gynaecological procedures. Training for diagnostic and operative hysteroscopy can be achieved through numerous previously described models, such as animal models or virtual reality simulation. We present our novel combined model associating virtual reality and bovine uteruses and bladders. Final-year residents in obstetrics and gynaecology attended a full-day workshop. The workshop was divided into theoretical courses from senior surgeons and hands-on training in operative hysteroscopy and virtual reality Essure ® procedures using the EssureSim™ and Pelvicsim™ simulators with multiple scenarios. Theoretical and operative knowledge was evaluated before and after the workshop, and General Points Averages (GPAs) were calculated and compared using Student's t-test. GPAs were significantly higher after the workshop was completed. The biggest difference was observed in operative knowledge (0.28 GPA before the workshop versus 0.55 after). This combined model of animal models and virtual reality simulation is an efficient training model not described before. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Discovery of salivary gland tumors’ biomarkers via co-regularized sparse-group lasso

    NARCIS (Netherlands)

    Imangaliyev, S.; Matse, J.H.; Bolscher, J.G.M.; Brakenhoff, R.H.; Wong, D.T.W.; Bloemena, E.; Veerman, E.C.I.; Levin, E.; Yamamoto, A.; Kida, T.; Uno, T.; Kuboyama, T.

    2017-01-01

    In this study, we discovered a panel of discriminative microRNAs in salivary gland tumors by application of statistical machine learning methods. We modelled multi-component interactions of salivary microRNAs to detect group-based associations among the features, enabling the distinction of
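    For readers unfamiliar with the sparse-group lasso named in the title, its penalty combines an L1 term (within-feature sparsity) with a group-L2 term (whole feature groups zeroed out). Below is a hedged numpy sketch of its proximal operator, the building block of typical solvers; the function name and toy data are illustrative, not from the study.

    ```python
    # Proximal operator of the sparse-group lasso penalty:
    # element-wise soft-thresholding (L1) followed by group-wise shrinkage (L2).
    import numpy as np

    def prox_sparse_group_lasso(w, groups, lam1, lam2, step=1.0):
        """One proximal step; `groups` maps group id -> array of indices."""
        # L1 part: soft-threshold each coefficient.
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam1, 0.0)
        # Group part: shrink each group's L2 norm, zeroing weak groups.
        for idx in groups.values():
            norm = np.linalg.norm(w[idx])
            scale = max(0.0, 1.0 - step * lam2 / norm) if norm > 0 else 0.0
            w[idx] = scale * w[idx]
        return w

    w = np.array([0.05, 2.0, -1.5, 0.02, 0.03])
    groups = {0: np.array([0, 1, 2]), 1: np.array([3, 4])}
    w_new = prox_sparse_group_lasso(w, groups, lam1=0.1, lam2=0.5)
    print(w_new)  # the weak group (indices 3, 4) is zeroed entirely
    ```

    This group-level zeroing is what lets the method detect group-based associations among features, such as panels of co-regulated microRNAs, rather than isolated predictors.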

  8. Creation and annihilation operators for SU(3) in an SO(6,2) model

    International Nuclear Information System (INIS)

    Bracken, A.J.; MacGibbon, J.H.

    1984-01-01

    Creation and annihilation operators are defined which are Wigner operators (tensor shift operators) for SU(3). While the annihilation operators are simply boson operators, the creation operators are cubic polynomials in boson operators. Together they generate under commutation the Lie algebra of SO(6,2). A model for SU(3) is defined. The different SU(3) irreducible representations appear explicitly as manifestly covariant, irreducible tensors, whose orthogonality and normalisation properties are examined. Other Wigner operators for SU(3) can be constructed simply as products of the new creation and annihilation operators, or sums of such products. (author)

  9. Modeling and Control for Islanding Operation of Active Distribution Systems

    DEFF Research Database (Denmark)

    Cha, Seung-Tae; Wu, Qiuwei; Saleem, Arshad

    2011-01-01

    Along with the increasing penetration of distributed generation (DG) in distribution systems, there are more resources for system operators to improve the operation and control of the whole system and enhance the reliability of electricity supply to customers. Distribution systems with DG are able to operate in islanding mode, intentionally or unintentionally. In order to smooth the transition from grid-connected operation to islanding operation for distribution systems with DG, a multi-agent based controller is proposed to utilize different resources in the distribution systems to stabilize the frequency. Different agents are defined to represent different resources in the distribution systems. A test platform with a real time digital simulator (RTDS), an OPen Connectivity (OPC) protocol server and the multi-agent based intelligent controller is established to test the proposed multi…

  10. Intelligent control for modeling of real-time reservoir operation, part II: artificial neural network with operating rule curves

    Science.gov (United States)

    Chang, Ya-Ting; Chang, Li-Chiu; Chang, Fi-John

    2005-04-01

    To bridge the gap between academic research and actual operation, we propose an intelligent control system for reservoir operation. The methodology includes two major processes, the knowledge acquired and implemented, and the inference system. In this study, a genetic algorithm (GA) and a fuzzy rule base (FRB) are used to extract knowledge based on the historical inflow data with a design objective function and on the operating rule curves respectively. The adaptive network-based fuzzy inference system (ANFIS) is then used to implement the knowledge, to create the fuzzy inference system, and then to estimate the optimal reservoir operation. To investigate its applicability and practicability, the Shihmen reservoir, Taiwan, is used as a case study. For the purpose of comparison, a simulation of the currently used M-5 operating rule curve is also performed. The results demonstrate that (1) the GA is an efficient way to search the optimal input-output patterns, (2) the FRB can extract the knowledge from the operating rule curves, and (3) the ANFIS models built on different types of knowledge can produce much better performance than the traditional M-5 curves in real-time reservoir operation. Moreover, we show that the model can be more intelligent for reservoir operation if more information (or knowledge) is involved.

  11. Optimizing Warehouse Logistics Operations Through Site Selection Models: Istanbul, Turkey

    National Research Council Canada - National Science Library

    Erdemir, Ugur

    2003-01-01

    .... Given the dynamic environment surrounding the military operations, logistic sustainability requirements, rapid information technology developments, and budget-constrained Turkish DoD acquisition...

  12. Tracking of time-varying genomic regulatory networks with a LASSO-Kalman smoother

    OpenAIRE

    Khan, Jehandad; Bouaynaya, Nidhal; Fathallah-Shaykh, Hassan M

    2014-01-01

    It is widely accepted that cellular requirements and environmental conditions dictate the architecture of genetic regulatory networks. Nonetheless, the status quo in regulatory network modeling and analysis assumes an invariant network topology over time. In this paper, we refocus on a dynamic perspective of genetic networks, one that can uncover substantial topological changes in network structure during biological processes such as developmental growth. We propose a novel outlook on the inf...

  13. Exploiting Attribute Correlations: A Novel Trace Lasso-Based Weakly Supervised Dictionary Learning Method.

    Science.gov (United States)

    Wu, Lin; Wang, Yang; Pan, Shirui

    2017-12-01

    It is now well established that sparse representation models work effectively for many visual recognition tasks and have pushed forward the success of dictionary learning therein. Recent studies on dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models which can effectively characterize these subtle differences. However, access to labeled data objects is always limited, making it difficult to learn a monolithic dictionary that can be discriminative enough. To address the above limitations, in this paper we propose a weakly supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries are jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.

  14. Modeling Operating Modes for the Monju Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Yoshikawa, H.; Jørgensen, Sten Bay

    2012-01-01

    of the process plant, its function and its structural elements. The paper explains how the means-end concepts of MFM can be used to provide formalized definitions of plant operation modes. The paper will introduce the mode types defined by MFM and show how selected operation modes can be represented...

  15. Comparison of operation optimization methods in energy system modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2013-01-01

    In areas with large shares of Combined Heat and Power (CHP) production, significant introduction of intermittent renewable power production may lead to an increased number of operational constraints. As the operation pattern of each utility plant is determined by optimization of economics, possibilities for decoupling production constraints may be valuable. Introduction of heat pumps in the district heating network may provide this ability. In order to evaluate whether the introduction of heat pumps is economically viable, we develop calculation methods for the operation patterns of each of the energy technologies used. In the paper, three frequently used operation optimization methods are examined with respect to their impact on operation management of the combined technologies. One of the investigated approaches utilises linear programming for optimisation, one uses linear programming with binary…
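    A linear-programming operation optimization of the kind examined above can be sketched in a few lines: dispatch a CHP unit and a heat pump to meet an hourly heat demand at minimum operating cost. The costs, demand, and capacities below are made up for illustration, not taken from the paper.

    ```python
    # Toy LP dispatch: minimize operating cost subject to meeting heat demand
    # and respecting per-unit capacity limits.
    from scipy.optimize import linprog

    cost = [30.0, 20.0]          # cost per MWh heat: [CHP, heat pump]
    demand = 100.0               # MWh heat to deliver this hour
    A_eq = [[1.0, 1.0]]          # CHP heat + heat-pump heat = demand
    b_eq = [demand]
    bounds = [(0, 80), (0, 60)]  # capacity limits per unit

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    chp, hp = res.x
    print(f"CHP: {chp:.0f} MWh, heat pump: {hp:.0f} MWh, cost: {res.fun:.0f}")
    # Cheaper heat pump runs at its 60 MWh limit; CHP covers the remaining 40.
    ```

    Adding on/off decisions (the "binary" variables mentioned above) would turn this LP into a mixed-integer program, which is where the investigated methods begin to differ.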

  16. Operational characteristics of nuclear power plants - modelling of operational safety; Pogonske karakteristike nuklearnih elektrana - modelsko izucavanje pogonske sigurnosti

    Energy Technology Data Exchange (ETDEWEB)

    Studovic, M [Masinski fakultet, Beograd (Yugoslavia)

    1984-07-01

    Based on operational experience of nuclear power plants and the realized level of plant availability, system and component reliability, operational safety, and public protection, as well as on the nature of disturbances in power plant systems and the lessons drawn from TMI-2, the paper discusses: examination of design safety for ultimate assurance of safe operational conditions of the nuclear power plant; the significance of adequate action for keeping process parameters within prescribed limits and meeting reactor cooling requirements; developed systems for measurement, detection and monitoring of all critical parameters in the nuclear steam supply system; the content of theoretical investigation and mathematical modeling of the physical phenomena and processes in nuclear power plant systems and components, as software support for ensuring operational safety and a new approach to staff education; and the program and progress of the investigation of some physical phenomena and mathematical modeling of nuclear plant transients, prepared at the Faculty of Mechanical Engineering in Belgrade. (author)

  17. Simulation of nuclear plant operation into a stochastic energy production model

    International Nuclear Information System (INIS)

    Pacheco, R.L.

    1983-04-01

    A simulation model of nuclear plant operation is developed to fit into a stochastic energy production model. In order to improve the stochastic model used, and also to reduce its computational time, which is burdened by the aggregation of the nuclear plant operation model, a study of tail truncation of the unsupplied-demand distribution function has been performed. (E.G.) [pt

  18. Operators and representation theory canonical models for algebras of operators arising in quantum mechanics

    CERN Document Server

    Jorgensen, Palle E T

    1987-01-01

    Historically, operator theory and representation theory both originated with the advent of quantum mechanics. The interplay between the subjects has been and still is active in a variety of areas.This volume focuses on representations of the universal enveloping algebra, covariant representations in general, and infinite-dimensional Lie algebras in particular. It also provides new applications of recent results on integrability of finite-dimensional Lie algebras. As a central theme, it is shown that a number of recent developments in operator algebras may be handled in a particularly e

  19. Development of an operator's mental model acquisition system. 1. Estimation of a physical mental model acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Ikeda, Mitsuru; Mizoguchi, Riichirou [Inst. of Scientific and Industrial Research, Osaka Univ., Ibaraki (Japan); Yoshikawa, Shinji; Ozawa, Kenji

    1997-03-01

    This report describes a technical survey of methods for acquiring an operator's understanding of the functions and structures of the target nuclear plant. This method is to play a key role in an information-processing framework that supports operators in training as they form their knowledge of nuclear plants. The framework aims at enhancing the human operator's ability to cope with anomalous plant situations which are difficult to anticipate from preceding experience or engineering surveillance. In such cases, cause identification and the selection of responding operations should be made not only empirically but also based on reasoning about the phenomena that may take place within the nuclear plant. This report focuses on a particular element technique, defined as 'explanation-based knowledge acquisition', as the candidate technique to potentially be extended to meet the requirement above, and discusses its applicability to the learning support system and the necessary improvements, in order to identify future technical developments. (author)

  20. An Effect of the Co-Operative Network Model for Students' Quality in Thai Primary Schools

    Science.gov (United States)

    Khanthaphum, Udomsin; Tesaputa, Kowat; Weangsamoot, Visoot

    2016-01-01

    This research aimed: 1) to study the current and desirable states of the co-operative network in developing the learners' quality in Thai primary schools, 2) to develop a model of the co-operative network in developing the learners' quality, and 3) to examine the results of implementation of the co-operative network model in the primary school.…

  1. Collective operations in a file system based execution model

    Science.gov (United States)

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-19

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.

  2. A generic accounting model to support operations management decisions

    NARCIS (Netherlands)

    Verdaasdonk, P.J.A.; Wouters, M.J.F.

    2001-01-01

    Information systems are generally unable to generate information about the financial consequences of operations management decisions. This is because the procedures for determining the relevant accounting information for decision support are not formalised in ways that can be implemented in

  3. Maintenance and operations cost model for DSN subsystems

    Science.gov (United States)

    Burt, R. W.; Lesh, J. R.

    1977-01-01

    A procedure is described which partitions the recurring costs of the Deep Space Network (DSN) over the individual DSN subsystems. The procedure results in a table showing the maintenance, operations, sustaining engineering, and support costs for each subsystem.

  4. Modelling of elementary computer operations using the intellect method

    Energy Technology Data Exchange (ETDEWEB)

    Shabanov-kushnarenko, Yu P

    1982-01-01

    The formal apparatus of intellect theory is used to describe the functions of machine intelligence. A mathematical description of some simple computer operations is proposed, as well as a machine realization as switching networks. 5 references.

  5. Technology Reference Model (TRM) Reports: Technology/Operating System Report

    Data.gov (United States)

    Department of Veterans Affairs — The One VA Enterprise Architecture (OneVA EA) is a comprehensive picture of the Department of Veterans Affairs' (VA) operations, capabilities and services and the...

  6. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    Science.gov (United States)

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  7. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    International Nuclear Information System (INIS)

    Krafft, S; Briere, T; Court, L; Martel, M

    2015-01-01

    Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP
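    The modeling comparison described above can be sketched as follows: a single-predictor baseline versus a LASSO (L1-penalized) logistic regression over many candidate features, compared by AUC. All data below are synthetic stand-ins for the dosimetric and image features, not the study's cohort, and the effect sizes are invented.

    ```python
    # Hedged sketch: baseline single-feature model vs. L1 logistic regression
    # over a larger candidate feature set, compared by AUC.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 400
    mld = rng.normal(20, 5, n)                    # stand-in "mean lung dose"
    texture = rng.normal(0, 1, (n, 30))           # stand-in "image features"
    logit = 0.05 * (mld - 20) + 1.5 * texture[:, 0] - 1.0 * texture[:, 1]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))  # simulated toxicity outcome

    X_full = np.column_stack([mld, texture])
    X_tr, X_te, y_tr, y_te = train_test_split(X_full, y, random_state=1)

    base = LogisticRegression().fit(X_tr[:, :1], y_tr)      # MLD-style baseline
    lasso = LogisticRegression(penalty="l1", solver="liblinear",
                               C=0.5).fit(X_tr, y_tr)       # LASSO over all

    auc_base = roc_auc_score(y_te, base.predict_proba(X_te[:, :1])[:, 1])
    auc_lasso = roc_auc_score(y_te, lasso.predict_proba(X_te)[:, 1])
    print(f"baseline AUC={auc_base:.2f}, LASSO AUC={auc_lasso:.2f}")
    ```

    On this simulated data the LASSO model recovers the two informative features and clearly outperforms the single-predictor baseline, mirroring the qualitative pattern the abstract reports.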

  8. A Knowledge-Based Expert System Using MFM Model for Operator Supporting

    International Nuclear Information System (INIS)

    Mo, Kun; Seong, Poong Hyun

    2005-01-01

    In this paper, a knowledge-based expert system using MFM (Multi-level Flow Modeling) is proposed for enhancing operators' ability to cope with various situations in a nuclear power plant. There are many complicated situations in which regular and suitable operations should be performed by operators. In order to help operators assess situations promptly and accurately, and to regulate their operations according to these situations, it is necessary to develop expert systems that assist the operator with fault diagnosis, alarm analysis, and estimation of operation results for each operation. Many kinds of operator support systems focusing on different functions have been developed. Most of them used various methodologies for a single diagnosis function or an operation permission function. The proposed system integrates the functions of fault diagnosis, alarm analysis and operation-result estimation through the basic MFM algorithm for operator support.

  9. Vehicular pollution modeling using the operational street pollution model (OSPM) for Chembur, Mumbai (India)

    DEFF Research Database (Denmark)

    Kumar, Awkash; Ketzel, Matthias; Patil, Rashmi S.

    2016-01-01

    Megacities in India such as Mumbai and Delhi are among the most polluted places in the world. In the present study, the widely used operational street pollution model (OSPM) is applied for assessing pollutant loads in the street canyons of Chembur, a suburban area just outside Mumbai city. Chembur...... concentrations from the routine monitoring performed in Mumbai. NOx emissions originate mainly from vehicles which are ground-level sources and are emitting close to where people live. Therefore, those emissions are highly relevant. The modeled NOx concentration compared satisfactorily with observed data...

  10. Dynamic Computation of Change Operations in Version Management of Business Process Models

    Science.gov (United States)

    Küster, Jochen Malte; Gerth, Christian; Engels, Gregor

    Version management of business process models requires that changes can be resolved by applying change operations. In order to give a user maximal freedom concerning the application order of change operations, position parameters of change operations must be computed dynamically during change resolution. In such an approach, change operations with computed position parameters must be applicable on the model and dependencies and conflicts of change operations must be taken into account because otherwise invalid models can be constructed. In this paper, we study the concept of partially specified change operations where parameters are computed dynamically. We provide a formalization for partially specified change operations using graph transformation and provide a concept for their applicability. Based on this, we study potential dependencies and conflicts of change operations and show how these can be taken into account within change resolution. Using our approach, a user can resolve changes of business process models without being unnecessarily restricted to a certain order.

  11. Preliminary Hybrid Modeling of the Panama Canal: Operations and Salinity Diffusion

    Directory of Open Access Journals (Sweden)

    Luis Rabelo

    2012-01-01

    Full Text Available This paper deals with the initial modeling of water salinity and its diffusion into the lakes during lock operation on the Panama Canal. A hybrid operational model was implemented using the AnyLogic software simulation environment. This was accomplished by generating an operational discrete-event simulation model and a continuous simulation model based on differential equations, which modeled the salinity diffusion in the lakes. This paper presents that unique application and includes the effective integration of lock operations and its impact on the environment.
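    The continuous half of such a hybrid model can be sketched as a simple salt mass balance: each lock operation admits a slug of saline water, outflow flushes salt out, and the balance is integrated with forward Euler. All parameters below are illustrative assumptions, not the Panama Canal study's values.

    ```python
    # Minimal continuous salinity-diffusion sketch: forward-Euler integration
    # of a lake's salt mass balance under repeated lock operations.
    V = 5.0e9              # lake volume, m^3 (illustrative)
    c = 0.05               # initial lake salinity, kg/m^3
    c_sea = 35.0           # seawater salinity, kg/m^3
    q_exchange = 2.0e4     # m^3 of seawater admitted per lockage
    lockages_per_day = 30
    outflow_frac = 4.0e-4  # fraction of lake volume flushed per day

    dt = 1.0  # time step, days
    history = []
    for day in range(365):
        salt_in = lockages_per_day * q_exchange * c_sea  # kg/day from lockages
        salt_out = outflow_frac * V * c                  # kg/day flushed out
        c += dt * (salt_in - salt_out) / V
        history.append(c)

    print(f"salinity after one year: {history[-1]:.3f} kg/m^3")
    ```

    In a full hybrid model, the discrete-event side would supply `lockages_per_day` from the simulated vessel schedule instead of treating it as a constant, which is exactly the coupling the AnyLogic implementation provides.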

  12. Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT

    International Nuclear Information System (INIS)

    Depeursinge, Adrien; Yanagawa, Masahiro; Leung, Ann N.; Rubin, Daniel L.

    2015-01-01

    Purpose: To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. Methods: For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. Results: The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). Conclusions: This study
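    The concordance index (C-index) reported above measures how well predicted risks order the observed survival times. A small self-contained sketch of the standard pairwise definition (counting only comparable pairs, with censoring handled by excluding non-comparable pairs) is shown below; the example data are invented.

    ```python
    # Concordance index: fraction of comparable patient pairs whose predicted
    # risk ordering matches their observed survival ordering.
    import numpy as np

    def c_index(time, event, risk):
        conc, total = 0.0, 0
        n = len(time)
        for i in range(n):
            for j in range(n):
                # Pair (i, j) is comparable if i had the event before time j.
                if event[i] and time[i] < time[j]:
                    total += 1
                    if risk[i] > risk[j]:
                        conc += 1.0       # correctly ordered
                    elif risk[i] == risk[j]:
                        conc += 0.5       # tie counts as half
        return conc / total

    time = np.array([5.0, 8.0, 12.0, 20.0])
    event = np.array([1, 1, 0, 1])          # 0 = censored
    risk = np.array([0.9, 0.7, 0.4, 0.1])   # higher risk fails earlier
    print(c_index(time, event, risk))       # 1.0: perfectly concordant
    ```

    A C-index of 0.5 corresponds to random risk ordering, so the 0.81 reported above indicates that roughly four of five comparable patient pairs were ranked correctly.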

  13. Upcrowding energy co-operatives - Evaluating the potential of crowdfunding for business model innovation of energy co-operatives.

    Science.gov (United States)

    Dilger, Mathias Georg; Jovanović, Tanja; Voigt, Kai-Ingo

    2017-08-01

    Practice and theory have proven the relevance of energy co-operatives for civic participation in the energy turnaround. However, due to a still low awareness and changing regulation, there seems an unexploited potential of utilizing the legal form 'co-operative' in this context. The aim of this study is therefore to investigate the crowdfunding implementation in the business model of energy co-operatives in order to cope with the mentioned challenges. Based on a theoretical framework, we derive a Business Model Innovation (BMI) through crowdfunding including synergies and differences. A qualitative study design, particularly a multiple-case study of energy co-operatives, was chosen to prove the BMI and to reveal barriers. The results show that although most co-operatives are not familiar with crowdfunding, there is strong potential in opening up predominantly local structures to a broader group of members. Building on this, equity-based crowdfunding is revealed to be suitable for energy co-operatives as BMI and to accompany other challenges in the same way. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Renewal of operating licenses: the U.S. model

    International Nuclear Information System (INIS)

    Petroll, M.; Tveiten, B.

    2006-01-01

    Nearly half of the American nuclear power plants by now have been granted permits allowing them to be operated for twenty years more than originally planned. Procedures to this effect are under way for one quarter of U.S. nuclear power plants. For the operators, plant life extension, as a rule, is economically preferable to building new baseload plants or buying electricity from other sources. In the licensing procedures, the U.S. regulatory authority examines both environmental aspects and safety aspects of extended operation. The technical basis of assessment is the GALL report (Generic Aging Lessons Learnt) which by now has become the consolidated yardstick used by the authorities for safety assessment. In these procedures, the licensee is required to present updated design documents and, if applicable, extend or create from scratch programs of aging management. The case of the oldest nuclear power plant in operation in the United States is described to show the steps of an American licensing and administrative court procedure. Granting renewed operating permits began before President Bush's term and will continue independent of the change in government in 2008. (orig.)

  15. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent. The theoret...

  16. Modeling the wind-fields of accidental releases with an operational regional forecast model

    International Nuclear Information System (INIS)

    Albritton, J.R.; Lee, R.L.; Sugiyama, G.

    1995-01-01

    The Atmospheric Release Advisory Capability (ARAC) is an operational emergency preparedness and response organization supported primarily by the Departments of Energy and Defense. ARAC can provide real-time assessments of atmospheric releases of radioactive materials at any location in the world. ARAC uses robust three-dimensional atmospheric transport and dispersion models, extensive geophysical and dose-factor databases, meteorological data-acquisition systems, and an experienced staff. Although it was originally conceived and developed as an emergency response and assessment service for nuclear accidents, the ARAC system has been adapted to also simulate non-radiological hazardous releases. For example, in 1991 ARAC responded to three major events: the oil fires in Kuwait, the eruption of Mt. Pinatubo in the Philippines, and the herbicide spill into the upper Sacramento River in California. ARAC's operational simulation system includes two three-dimensional finite-difference models: a diagnostic wind-field scheme, and a Lagrangian particle-in-cell transport and dispersion scheme. The meteorological component of ARAC's real-time response system employs models using real-time data from all available stations near the accident site to generate a wind-field for input to the transport and dispersion model. Here we report on simulation studies of past and potential release sites to show that even in the absence of local meteorological observational data, readily available gridded analysis and forecast data and a prognostic model, the Navy Operational Regional Atmospheric Prediction System, applied at an appropriate grid resolution can successfully simulate complex local flows.

  17. High Altitude Venus Operations Concept Trajectory Design, Modeling and Simulation

    Science.gov (United States)

    Lugo, Rafael A.; Ozoroski, Thomas A.; Van Norman, John W.; Arney, Dale C.; Dec, John A.; Jones, Christopher A.; Zumwalt, Carlie H.

    2015-01-01

    A trajectory design and analysis that describes aerocapture, entry, descent, and inflation of manned and unmanned High Altitude Venus Operation Concept (HAVOC) lighter-than-air missions is presented. Mission motivation, concept of operations, and notional entry vehicle designs are presented. The initial trajectory design space is analyzed and discussed before investigating specific trajectories that are deemed representative of a feasible Venus mission. Under the project assumptions, while the high-mass crewed mission will require further research into aerodynamic decelerator technology, it was determined that the unmanned robotic mission is feasible using current technology.

  18. Data-Driven Model Reduction and Transfer Operator Approximation

    Science.gov (United States)

    Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank

    2018-06-01

    In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
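The review compares methods such as time-lagged independent component analysis and dynamic mode decomposition (DMD). As a minimal illustration of the shared idea, a sketch of exact DMD on a toy linear system (all data synthetic, not from the paper) might look like:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: estimate eigenvalues/modes of the linear map A with Y ≈ A X,
    from snapshot matrices X, Y whose columns are successive states."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]            # rank-r truncation
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s    # operator projected onto POD basis
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T / s @ W               # exact DMD modes
    return eigvals, modes

# Toy linear system x_{k+1} = A x_k with known eigenvalues 0.9 and 0.5
A = np.diag([0.9, 0.5])
snapshots = np.column_stack([np.linalg.matrix_power(A, k) @ np.ones(2)
                             for k in range(10)])
eigvals, modes = dmd(snapshots[:, :-1], snapshots[:, 1:], r=2)
print(np.sort(eigvals.real))  # recovers the true eigenvalues [0.5, 0.9]
```

The eigenvalues of the projected operator approximate those of the underlying transfer (Koopman) operator restricted to the observed subspace, which is what links DMD to the transfer-operator view taken in the review.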

  19. Weak Memory Models with Matching Axiomatic and Operational Definitions

    OpenAIRE

    Zhang, Sizhuo; Vijayaraghavan, Muralidaran; Lustig, Dan; Arvind

    2017-01-01

    Memory consistency models are notorious for being difficult to define precisely, to reason about, and to verify. More than a decade of effort has gone into nailing down the definitions of the ARM and IBM Power memory models, and yet there still remain aspects of those models which (perhaps surprisingly) remain unresolved to this day. In response to these complexities, there has been somewhat of a recent trend in the (general-purpose) architecture community to limit new memory models to being ...

  20. Representing Operational Knowledge of PWR Plant by Using Multilevel Flow Modelling

    DEFF Research Database (Denmark)

    Zhang, Xinxin; Lind, Morten; Jørgensen, Sten Bay

    2014-01-01

    The aim of this paper is to explore the capability of representing operational knowledge by using the Multilevel Flow Modelling (MFM) methodology. The paper demonstrates how operational knowledge can be inserted into MFM models and be used to evaluate the plant state, identify the current situation and support operational decisions. The paper provides a general MFM model of the primary side in a standard Westinghouse Pressurized Water Reactor (PWR) system, including the sub-systems of the Reactor Coolant System, Rod Control System, Chemical and Volume Control System, and emergency heat removal systems. The sub-systems' functions are decomposed into sub-models according to different operational situations. An operational model is developed based on the operating procedure by using MFM symbols, and this model can be used to implement coordination rules for organizing the utilizati...

  1. Operational Semantics of a Weak Memory Model inspired by Go

    OpenAIRE

    Fava, Daniel Schnetzer; Stolz, Volker; Valle, Stian

    2017-01-01

    A memory model dictates which values may be returned when reading from memory. In a parallel computing setting, the memory model affects how processes communicate through shared memory. The design of a proper memory model is a balancing act. On one hand, memory models must be lax enough to allow common hardware and compiler optimizations. On the other, the more lax the model, the harder it is for developers to reason about their programs. In order to alleviate the burden on programmers, a wea...

  2. TECHNOLOGICAL PROCESS MODELING AIMING TO IMPROVE ITS OPERATIONS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Ivan Mihajlović

    2011-11-01

    Full Text Available This paper presents the modeling procedure of one real technological system. In this study, the copper extraction from the copper flotation waste generated at the Bor Copper Mine (Serbia) was the object of modeling. A sufficient database for statistical modeling was constructed using an orthogonal factorial design of the experiments. A mathematical model of the investigated system was developed using a combination of linear and multiple linear statistical analysis approaches. The purpose of such a model is obtaining optimal states of the system that enable efficient operations management. Besides technological and economical, ecological parameters of the process were considered as crucial input variables.
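The combination of factorial design and multiple linear regression described above can be sketched on synthetic data (the factors, coefficients, and noise level are invented for illustration, not the Bor Mine data):

```python
import numpy as np

# Synthetic illustration: responses generated from a linear model over coded
# factor levels of a two-factor design, then fitted by ordinary least squares.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(16, 2))   # coded levels of two process factors
y = 3.0 + 1.5 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.01, size=16)

A = np.column_stack([np.ones(16), X])  # design matrix with intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef.round(2))                   # ≈ [3.0, 1.5, -0.5]
```

The fitted coefficients can then be used to search for operating points that optimize the response, which is the "optimal states of the system" step the abstract refers to.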

  3. A Dynamic Pricing Model for Coordinated Sales and Operations

    NARCIS (Netherlands)

    M. Fleischmann (Moritz); J.M. Hall (Joseph); D.F. Pyke (David)

    2005-01-01

    textabstractRecent years have seen advances in research and management practice in the area of pricing, and particularly in dynamic pricing and revenue management. At the same time, researchers and managers have made dramatic improvements in operations and supply chain management. The interactions

  4. Dynamic Bayesian modeling for risk prediction in credit operations

    DEFF Research Database (Denmark)

    Borchani, Hanen; Martinez, Ana Maria; Masegosa, Andres

    2015-01-01

    Our goal is to do risk prediction in credit operations, and as data is collected continuously and reported on a monthly basis, this gives rise to a streaming data classification problem. Our analysis reveals some practical problems that have not previously been thoroughly analyzed in the context...

  5. Modelling ship operational reliability over a mission under regular inspections

    NARCIS (Netherlands)

    Christer, A.H.; Lee, S.K.

    1997-01-01

    A ship is required to operate for a fixed mission period. Should a critical item of equipment fail at sea, the ship is subject to a costly event with potentially high risk to ship and crew. Given warning of a pending defect, the ship can try to return to port under its own power and thus attempt to

  6. Steam boilers : Process models for improved operation and design

    NARCIS (Netherlands)

    Ahnert, F.

    2007-01-01

    Biomass combustion can be an economic way to contribute to the reduction of CO2 emissions, which are a main suspect of the so-called greenhouse effect. In order to promote a widespread utilization of biomass combustion, operational problems like fuel treatment, slagging, fouling and corrosion have

  7. Computer-Aided Transformation of PDE Models: Languages, Representations, and a Calculus of Operations

    Science.gov (United States)

    2016-01-05

    A domain-specific embedded language called ibvp was developed to model initial...

  8. Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis

    OpenAIRE

    Arato, Hiroki

    2009-01-01

    This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and the optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than that in the standard exogenous growth New Keynesian model with physical capital accumulation. The optimal operational monetary policy rule makes the nominal interest rate respond s...

  9. Integrated model of port oil piping transportation system safety including operating environment threats

    Directory of Open Access Journals (Sweden)

    Kołowrocki Krzysztof

    2017-06-01

    Full Text Available The paper presents an integrated general model of a complex technical system, linking its multistate safety model and the model of its operation process, including operating environment threats and considering its safety structures and its components' safety parameters, which vary at different operation states. Under the assumption that the system has an exponential safety function, the safety characteristics of the port oil piping transportation system are determined.
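The exponential safety function assumption can be sketched numerically. The rates, operation states, and state probabilities below are hypothetical stand-ins, not the port pipeline's actual parameters:

```python
import numpy as np

# Exponential component safety S_i(t) = exp(-λ_i t); a series system's safety
# is the product of component safeties, and operation states b are mixed with
# their probabilities p_b (a simplified version of the multistate construction).
def series_safety(t, rates):
    return np.exp(-np.sum(rates) * t)

rates_state = {"calm": [0.01, 0.02],   # component failure rates per state
               "storm": [0.05, 0.08]}
p_state = {"calm": 0.7, "storm": 0.3}

def system_safety(t):
    return sum(p * series_safety(t, rates_state[b]) for b, p in p_state.items())

print(round(system_safety(10.0), 4))  # 0.7*e^(-0.3) + 0.3*e^(-1.3) ≈ 0.6003
```

Harsher operation states enter through larger rates, so increasing the probability of the "storm" state lowers the mixed safety function, mirroring the role of operating environment threats in the paper's model.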

  10. A Novel Stress-Diathesis Model to Predict Risk of Post-operative Delirium: Implications for Intra-operative Management

    Directory of Open Access Journals (Sweden)

    Renée El-Gabalawy

    2017-08-01

    Full Text Available Introduction: Risk assessment for post-operative delirium (POD) is poorly developed. Improved metrics could greatly facilitate peri-operative care, as costs associated with POD are staggering. In this preliminary study, we develop a novel stress-diathesis model based on comprehensive pre-operative psychiatric and neuropsychological testing, a blood oxygenation level-dependent (BOLD) magnetic resonance imaging (MRI) carbon dioxide (CO2) stress test, and high-fidelity measures of intra-operative parameters that may interact to facilitate POD. Methods: The study was approved by the ethics board at the University of Manitoba and registered at clinicaltrials.gov as NCT02126215. Twelve patients were studied. Pre-operative psychiatric symptom measures and neuropsychological testing preceded MRI featuring a BOLD CO2 stress test, whereby BOLD scans were conducted while exposing participants to a rigorously controlled CO2 stimulus. During surgery, hemodynamics and end-tidal gases were downloaded at 0.5 Hz. Post-operatively, the presence of POD and POD severity were comprehensively assessed using the Confusion Assessment Measure-Severity (CAM-S) scoring instrument on days 0 (surgery) through post-operative day 5, and patients were followed up at least 1 month post-operatively. Results: Six of 12 patients had no evidence of POD (non-POD). Three patients had POD and 3 had clinically significant confusional states (referred to as subthreshold POD, ST-POD; score ≥ 5/19 on the CAM-S). Average severity for delirium was 1.3 in the non-POD group, 3.2 in ST-POD, and 6.1 in POD (F-statistic = 15.4, p < 0.001). Depressive symptoms and cognitive measures of semantic fluency and executive functioning/processing speed were significantly associated with POD. Second-level analysis revealed an increased inverse BOLD responsiveness to CO2 pre-operatively in ST-POD and a marked increase in the POD group when compared to the non-POD group. An association was also noted for

  11. What should a 'good' model of the NPP operator contain

    International Nuclear Information System (INIS)

    Bainbridge, L.

    1986-01-01

    Much of human factors design is done without reference to models. A 'scientific' cognitive model contains multi-level goal-oriented top-down processing, in which behaviour choice depends on working memory, mental and environmental constraints, and expected results. Simpler models are more practical for supporting behaviour, or predicting performance limits. Many types of reason make numerical predictions of cognitive behaviour non-trivial.

  12. MATHEMATICAL MODEL OF WEAR CHARACTER FAILURE IN AIRCRAFT OPERATION

    OpenAIRE

    Радько, Олег Віталійович; Молдован, Володимир Дмитрович

    2016-01-01

    In this paper a mathematical model of failures associated with wear during aircraft operation is developed. The distribution function, distribution density and failure rate of the gamma distribution are calculated at low coefficients of variation and a relatively low value of average wear rate for the current time, which varies quite widely. The results coincide well with the physical concepts and can be used to build different models of aircraft. Gamma distribution is a pretty good model for...

  13. A Computer Model for Determining Operational Centers of Gravity

    Science.gov (United States)

    2002-05-31

    current state of AI. For the beginner, the student text Artificial Intelligence: An Executive Overview (USMA 1994) is still useful for surveying the... flowchart that guides the determination. In preparing the model, the authors consulted numerous sources of opinion on the topic, to include doctrine... the logic of the general model in this manner resulted in a compact, unambiguous flowchart-style representation. The depiction of the general model

  14. Computerized operating cost model for industrial steam generation

    Energy Technology Data Exchange (ETDEWEB)

    Powers, T.D.

    1983-02-01

    Pending EPA regulations establishing revised emission levels for industrial boilers are perceived to have an effect on the relative costs of steam production technologies. To aid in the comparison of competitive boiler technologies, the Steam Cost Code was developed, which provides levelized steam costs reflecting the effects of a number of key steam cost parameters. The Steam Cost Code is a user-interactive FORTRAN program designed to operate on a VAX computer system. The program requires the user to input a number of variables describing the design characteristics, capital costs, and operating conditions for a specific boiler system. Part of the input to the Steam Cost Code is the capital cost of the steam production system. The capital cost is obtained from a program called INDCEPT, developed by Oak Ridge National Laboratory under Department of Energy, Morgantown Energy Technology Center sponsorship.
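The general shape of a levelized steam cost calculation of the kind the Steam Cost Code reports can be sketched as follows. All figures and the formula choice are illustrative assumptions, not the code's actual method:

```python
# Illustrative levelized steam cost: annualize capital with a capital recovery
# factor (CRF), then add fuel and O&M cost per unit of steam delivered.
def capital_recovery_factor(rate, years):
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

capital = 5_000_000.0      # $, installed boiler system cost (hypothetical)
annual_steam = 200_000.0   # klb of steam per year (hypothetical)
fuel_om = 6.50             # $/klb, fuel plus operating & maintenance (hypothetical)

crf = capital_recovery_factor(0.10, 20)             # 10% discount rate, 20 years
levelized = capital * crf / annual_steam + fuel_om  # $/klb of steam
print(round(levelized, 2))
```

Varying the discount rate, plant life, or fuel cost in such a formula is how competing boiler technologies end up with different levelized costs under different regulatory scenarios.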

  15. Towards a model of surgeons' leadership in the operating room.

    Science.gov (United States)

    Henrickson Parker, Sarah; Yule, Steven; Flin, Rhona; McKinley, Aileen

    2011-07-01

    There is widespread recognition that leadership skills are essential for effective performance in the workplace, but the evidence detailing effective leadership behaviours for surgeons during operations is unclear. Boolean searches of four on-line databases and detailed hand search of relevant references were conducted. A four stage screening process was adopted stipulating that articles presented empirical data on surgeons' intraoperative leadership behaviours. Ten relevant articles were identified and organised by method of investigation into (i) observation, (ii) questionnaire and (iii) interview studies. This review summarises the limited literature on surgeons' intraoperative leadership, and proposes a preliminary theoretically based structure for intraoperative leadership behaviours. This structure comprises seven categories with corresponding leadership components and covers two overarching themes related to task- and team-focus. Selected leadership theories which may be applicable to the operating room environment are also discussed. Further research is required to determine effective intraoperative leadership behaviours for safe surgical practice.

  16. Intelligent decision-making models for production and retail operations

    CERN Document Server

    Guo, Zhaoxia

    2016-01-01

    This book provides an overview of intelligent decision-making techniques and discusses their application in production and retail operations. Manufacturing and retail enterprises have stringent standards for using advanced and reliable techniques to improve decision-making processes, since these processes have significant effects on the performance of relevant operations and the entire supply chain. In recent years, researchers have been increasingly focusing attention on using intelligent techniques to solve various decision-making problems. The opening chapters provide an introduction to several commonly used intelligent techniques, such as genetic algorithm, harmony search, neural network and extreme learning machine. The book then explores the use of these techniques for handling various production and retail decision-making problems, such as production planning and scheduling, assembly line balancing, and sales forecasting.

  17. Cognitive modeling and dynamic probabilistic simulation of operating crew response to complex system accidents. Part 4: IDAC causal model of operator problem-solving response

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.H.J. [Center for Risk and Reliability, University of Maryland, College Park, MD 20742 (United States) and Paul Scherrer Institute, 5232 Villigen PSI (Switzerland)]. E-mail: yhc@umd.edu; Mosleh, A. [Center for Risk and Reliability, University of Maryland, College Park, MD 20742 (United States)

    2007-08-15

    This is the fourth in a series of five papers describing the Information, Decision, and Action in Crew context (IDAC) operator response model for human reliability analysis. An example application of this modeling technique is also discussed in this series. The model has been developed to probabilistically predict the responses of a nuclear power plant control room operating crew in accident conditions. The operator response spectrum includes cognitive, emotional, and physical activities during the course of an accident. This paper assesses the effects of the performance-influencing factors (PIFs) affecting the operators' problem-solving responses including information pre-processing (I), diagnosis and decision making (D), and action execution (A). Literature support and justifications are provided for the assessment on the influences of PIFs.

  18. Cognitive modeling and dynamic probabilistic simulation of operating crew response to complex system accidents. Part 4: IDAC causal model of operator problem-solving response

    International Nuclear Information System (INIS)

    Chang, Y.H.J.; Mosleh, A.

    2007-01-01

    This is the fourth in a series of five papers describing the Information, Decision, and Action in Crew context (IDAC) operator response model for human reliability analysis. An example application of this modeling technique is also discussed in this series. The model has been developed to probabilistically predict the responses of a nuclear power plant control room operating crew in accident conditions. The operator response spectrum includes cognitive, emotional, and physical activities during the course of an accident. This paper assesses the effects of the performance-influencing factors (PIFs) affecting the operators' problem-solving responses including information pre-processing (I), diagnosis and decision making (D), and action execution (A). Literature support and justifications are provided for the assessment on the influences of PIFs

  19. Information operation/information warfare modeling and simulation

    OpenAIRE

    Buettner, Raymond

    2000-01-01

    Information Operations have always been a part of warfare. However, this aspect of warfare is having ever-greater importance as forces rely more and more on information as an enabler. Modern information systems make possible very rapid creation, distribution, and utilization of information. These same systems have vulnerabilities that can be exploited by enemy forces. Information force-on-force is important and complex. New tools and procedures are needed for this warfare arena. As these t...

  20. Operational results from a physical power prediction model

    Energy Technology Data Exchange (ETDEWEB)

    Landberg, L [Risoe National Lab., Meteorology and Wind Energy Dept., Roskilde (Denmark)

    1999-03-01

    This paper will describe a prediction system which predicts the expected power output of a number of wind farms. The system is automatic and operates on-line. The paper will quantify the accuracy of the predictions and will also give examples of the performance for specific storm events. An actual implementation of the system will be described and the robustness demonstrated. (au) 11 refs.

  1. Modeling the Effects of Cyber Operations on Kinetic Battles

    Science.gov (United States)

    2014-06-01

    Nakashima, 2013). Equally dangerous are attacks targeting the national economy. In 2012, distributed denial of service (DDoS) attacks were carried out...enable our freedom of action in cyberspace. (USCYBERCOM Concept of Operations, v 1.0, 21 Sep 2010) Global Information Grid (GIG): The globally...managing information on demand to warfighters, policy makers, and support personnel. The GIG includes owned and leased communications and computing

  2. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast
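The abstract describes Lasso-type estimators for penalized vector autoregressions. An equation-by-equation sketch with synthetic data and scikit-learn's `Lasso` (the dimensions, penalty, and data-generating process are illustrative assumptions, not the paper's setup) might look like:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, d = 200, 5                        # time points, dimension of the vectorized series
A = np.zeros((d, d)); A[0, 0] = 0.8  # sparse "true" VAR(1) coefficient matrix
y = np.zeros((T, d))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.normal(scale=0.1, size=d)

# Equation-by-equation Lasso: regress each series on all lagged series
X_lag, Y = y[:-1], y[1:]
coefs = np.array([Lasso(alpha=0.001).fit(X_lag, Y[:, i]).coef_ for i in range(d)])
forecast = y[-1] @ coefs.T           # one-step-ahead forecast of all series
print(coefs[0, 0])                   # near the true 0.8, slightly shrunk by the L1 penalty
```

The L1 penalty drives the many irrelevant lag coefficients toward exactly zero, which is what makes the approach feasible when the vectorized realized covariance matrix is high-dimensional relative to the sample length.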

  3. Modeling contractor and company employee behavior in high hazard operation

    NARCIS (Netherlands)

    Lin, P.H.; Hanea, D.; Ale, B.J.M.

    2013-01-01

    The recent blow-out and subsequent environmental disaster in the Gulf of Mexico have highlighted a number of serious problems in scientific thinking about safety. Risk models have generally concentrated on technical failures, which are easier to model and for which there are more concrete data.

  4. A maintenance and operations cost model for DSN

    Science.gov (United States)

    Burt, R. W.; Kirkbride, H. L.

    1977-01-01

    A cost model for the DSN is developed which is useful in analyzing the 10-year Life Cycle Cost of the Bent Pipe Project. The philosophy behind the development and the use made of a computer data base are detailed; the applicability of this model to other projects is discussed.

  5. Vacuum expectation values for four-fermion operators. Model estimates

    International Nuclear Information System (INIS)

    Zhitnitskij, A.R.

    1985-01-01

    Some simple models (a system with a heavy quark, the rarefied instanton gas) are used to investigate the problem of factorizability. Characteristics of the vacuum fluctuations responsible for saturation of the four-fermion vacuum expectation values which are known phenomenologically are discussed. A qualitative agreement between the model and phenomenological estimates has been noted

  6. Vacuum expectation values of four-fermion operators. Model estimates

    International Nuclear Information System (INIS)

    Zhitnitskii, A.R.

    1985-01-01

    Simple models (a system with a heavy quark, a rarefied instanton gas) are used to study problems of factorizability. A discussion is given of the characteristics of the vacuum fluctuations responsible for saturation of the phenomenologically known four-fermion vacuum expectation values. Qualitative agreement between the model and phenomenological estimates is observed

  7. A model technology transfer program for independent operators: Kansas Technology Transfer Model (KTTM)

    Energy Technology Data Exchange (ETDEWEB)

    Schoeling, L.G.

    1993-09-01

    This report describes the development and testing of the Kansas Technology Transfer Model (KTTM), which is to be utilized as a regional model for the development of other technology transfer programs for independent operators throughout oil-producing regions in the US. It describes the linkage of the regional model with a proposed national technology transfer plan, an evaluation technique for improving and assessing the model, and the methodology which makes it adaptable on a regional basis. The report also describes management concepts helpful in managing a technology transfer program. The original Tertiary Oil Recovery Project (TORP) activities, upon which the KTTM is based, were developed and tested for Kansas and have proved to be effective in assisting independent operators in utilizing technology. Through joint activities of TORP and the Kansas Geological Survey (KGS), the KTTM was developed and documented for application in other oil-producing regions. During the course of developing this model, twelve documents describing the implementation of the KTTM were developed as deliverables to DOE. These include: (1) a problem identification (PI) manual describing the format and results of six PI workshops conducted in different areas of Kansas, (2) three technology workshop participant manuals on advanced waterflooding, reservoir description, and personal computer applications, (3) three technology workshop instructor manuals which provide instructor material for all three workshops, (4) three technologies documented as demonstration projects, which included reservoir management, permeability modification, and utilization of a liquid-level acoustic measuring device, (5) a bibliography of all literature utilized in the documents, and (6) a document which describes the KTTM.

  8. Linking Geomechanical Models with Observations of Microseismicity during CCS Operations

    Science.gov (United States)

    Verdon, J.; Kendall, J.; White, D.

    2012-12-01

    During CO2 injection for the purposes of carbon capture and storage (CCS), injection-induced fracturing of the overburden represents a key risk to storage integrity. Fractures in a caprock provide a pathway along which buoyant CO2 can rise and escape the storage zone. Therefore the ability to link field-scale geomechanical models with field geophysical observations is of paramount importance to guarantee secure CO2 storage. Accurate location of microseismic events identifies where brittle failure has occurred on fracture planes. This is a manifestation of the deformation induced by CO2 injection. As the pore pressure is increased during injection, effective stress is decreased, leading to inflation of the reservoir and deformation of surrounding rocks, which creates microseismicity. The deformation induced by injection can be simulated using finite-element mechanical models. Such a model can be used to predict when and where microseismicity is expected to occur. However, typical elements in a field-scale mechanical model have decameter scales, while the rupture sizes for microseismic events are typically of the order of 1 square meter. This means that mapping modeled stress changes to predictions of microseismic activity can be challenging. Where larger-scale faults have been identified, they can be included explicitly in the geomechanical model. Where movement is simulated along these discrete features, it can be assumed that microseismicity will occur. However, microseismic events typically occur on fracture networks that are too small to be simulated explicitly in a field-scale model. Therefore, the likelihood of microseismicity occurring must be estimated within a finite element that does not contain explicitly modeled discontinuities. This can be done in a number of ways, including the utilization of measures such as closeness of the stress state to predetermined failure criteria, either for planes with a defined orientation (the Mohr-Coulomb criteria) for
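The "closeness of the stress state to a failure criterion" idea can be sketched for the Mohr-Coulomb case. All numerical values below are illustrative, not from the study:

```python
import math

# Margin of a stress state from the Mohr-Coulomb envelope tau = c + sigma_n*tan(phi)
# (cohesion c, friction angle phi): smaller margins mark likelier sites of
# injection-induced microseismicity within a finite element.
def mc_margin(sigma_n, tau, cohesion, phi_deg):
    """Positive = stable; zero or negative = at or past shear failure."""
    return cohesion + sigma_n * math.tan(math.radians(phi_deg)) - tau

print(round(mc_margin(sigma_n=10.0, tau=5.0, cohesion=2.0, phi_deg=30.0), 3))
```

As injection raises pore pressure, the effective normal stress `sigma_n` drops, shrinking the margin; ranking elements by this margin is one way to map modeled stress changes to expected microseismic activity.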

  9. Mathematical basis for the process of model simulation of drilling operations

    Energy Technology Data Exchange (ETDEWEB)

    Lipovetskiy, G M; Lebedinskiy, G L

    1979-01-01

    The authors describe the application of a method for the model simulation of drilling operations and for the solution of problems concerned with the planning and management of such operations. A description is offered for an approach to the simulator process when the drilling operations are part of a large system. An algorithm is provided for calculating complex events.

  10. A_N-type Dunkl operators and new spin Calogero-Sutherland models

    International Nuclear Information System (INIS)

    Finkel, F.; Gomez-Ullate, D.; Gonzalez-Lopez, A.; Rodriguez, M.A.; Zhdanov, R.

    2001-01-01

    A new family of A_N-type Dunkl operators preserving a polynomial subspace of finite dimension is constructed. Using a general quadratic combination of these operators and the usual Dunkl operators, several new families of exactly and quasi-exactly solvable quantum spin Calogero-Sutherland models are obtained. These include, in particular, three families of quasi-exactly solvable elliptic spin Hamiltonians. (orig.)
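For orientation, the standard (undeformed) A_N-type Dunkl operators, in one common convention, act on functions of $x_1,\dots,x_N$ as

```latex
T_i \;=\; \frac{\partial}{\partial x_i} \;+\; a \sum_{j \neq i} \frac{1}{x_i - x_j}\,\bigl(1 - K_{ij}\bigr),
\qquad i = 1, \dots, N,
```

where $K_{ij}$ is the operator exchanging the variables $x_i$ and $x_j$ and $a$ is the coupling constant; the operators mutually commute, $[T_i, T_j] = 0$. The new family constructed in the abstract modifies these so as to preserve a finite-dimensional polynomial subspace, which is what yields the quasi-exactly solvable models.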

  11. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate a layout scheme. Joint angles are used to describe the operating posture of the upper limb and are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. The Chinese virtual human body model is built with CATIA software, which is used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result for the human-machine interface layout of a driller control room shows that the operating comfort prediction model based on GEP is fast and efficient, has good prediction performance, and can improve design efficiency.
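The dimension-reduction step described (16 joint angles down to 4 comfort factors) can be sketched with scikit-learn's `FactorAnalysis`. The data here are synthetic stand-ins, not the study's posture measurements, and the GEP fitting stage is omitted:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
# Synthetic stand-in: 22 postures x 16 joint angles driven by 4 latent factors
latent = rng.normal(size=(22, 4))
loadings = rng.normal(size=(4, 16))
angles = latent @ loadings + rng.normal(scale=0.1, size=(22, 16))

fa = FactorAnalysis(n_components=4, random_state=0)
factors = fa.fit_transform(angles)   # 22 x 4 factor scores: inputs to the comfort model
print(factors.shape)                 # (22, 4)
```

The 4-column factor-score matrix then plays the role of the reduced independent variables on which a symbolic regression method such as GEP would fit the comfort score.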

  12. Overall feature of EAST operation space by using simple Core-SOL-Divertor model

    International Nuclear Information System (INIS)

    Hiwatari, R.; Hatayama, A.; Zhu, S.; Takizuka, T.; Tomita, Y.

    2005-01-01

    We have developed a simple Core-SOL-Divertor (C-S-D) model to investigate qualitatively the overall features of the operational space for the integrated core and edge plasma. To construct the simple C-S-D model, a simple core plasma model of ITER physics guidelines and a two-point SOL-divertor model are used. The simple C-S-D model is applied to the study of the EAST operational space with lower hybrid current drive experiments under various kinds of trade-off for the basic plasma parameters. Effective methods for extending the operation space are also presented. As shown by this study for the EAST operation space, it is evident that the C-S-D model is a useful tool to understand qualitatively the overall features of the plasma operation space. (author)
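As an illustration of the SOL-divertor ingredient of such a model, here is a minimal sketch of the conductive relation in a standard two-point SOL model (generic textbook form; the conductivity coefficient and the parameter values below are illustrative assumptions, not taken from the paper):

```python
def upstream_temperature(t_target_ev, q_par, l_con, kappa0=2000.0):
    """Two-point SOL model conductive relation:
    T_u^(7/2) = T_t^(7/2) + 7*q_par*l_con / (2*kappa0),
    with a Spitzer-like conductivity coefficient kappa0 (W m^-1 eV^-7/2,
    illustrative value). q_par is the parallel heat flux (W/m^2) and
    l_con the connection length (m); temperatures are in eV."""
    return (t_target_ev ** 3.5 + 7.0 * q_par * l_con / (2.0 * kappa0)) ** (2.0 / 7.0)
```

For example, a 10 eV target temperature with a parallel heat flux of 1e8 W/m^2 over a 50 m connection length gives an upstream temperature on the order of 100 eV, showing how strongly the upstream value is set by the conducted heat flux rather than by the target temperature.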

  13. Virtual age model for equipment aging plant based on operation environment and service state

    International Nuclear Information System (INIS)

    Zhang Liming; Cai Qi; Zhao Xinwen; Chen Ling

    2010-01-01

    An accelerated life model based on the operation environment and service state was established, taking the virtual age as the equipment ageing index. The effects of different operation environments and service states on the reliability and virtual age under continuous and cyclic operation conditions were analyzed, and the sensitivity of the virtual age to operational environments and service states was studied. The results of the example application show that the effect on NPP equipment lifetime and the key parameters related to reliability can be quantified by this model, and the result is in accordance with reality. (authors)
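The idea of a virtual age driven by the operating environment can be sketched with an Arrhenius-type acceleration factor: each hour spent under stress counts as several hours of "virtual" ageing. The activation energy, reference temperature, and functional form below are illustrative assumptions, not the paper's fitted covariate model:

```python
import math

def acceleration_factor(temp_k, ref_temp_k=313.0, ea_ev=0.5):
    """Arrhenius acceleration factor relative to a reference operating
    temperature (ea_ev is an assumed activation energy in eV)."""
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    return math.exp((ea_ev / k_b) * (1.0 / ref_temp_k - 1.0 / temp_k))

def virtual_age(segments):
    """Accumulate virtual age over (duration_h, temp_k) operating segments:
    a segment of d hours at stress level T contributes d * AF(T) virtual hours."""
    return sum(d * acceleration_factor(t) for d, t in segments)
```

With this sketch, 100 h at the reference temperature plus 50 h at a hotter condition yields a virtual age well above the 150 calendar hours, which is the qualitative behaviour the record describes.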

  14. Modeling of Complex Adaptive Systems in Air Operations

    National Research Council Canada - National Science Library

    Busch, Timothy E; Trevisani, Dawn A

    2006-01-01

    .... Model predictive control theory provides the basis for this investigation. Given some set of objectives the military commander must devise a sequence of actions that transform the current state to the desired one...

  15. EDM - A model for optimising the short-term power operation of a complex hydroelectric network

    International Nuclear Information System (INIS)

    Tremblay, M.; Guillaud, C.

    1996-01-01

    In order to optimize the short-term power operation of a complex hydroelectric network, a new model called EDM was added to PROSPER, a water management analysis system developed by SNC-Lavalin. PROSPER is now divided into three parts: an optimization model (DDDP), a simulation model (ESOLIN), and an economic dispatch model (EDM) for the short-term operation. The operation of the KSEB hydroelectric system (located in southern India) with PROSPER was described. The long-term analysis with monthly time steps is assisted by the DDDP, and the daily analysis with hourly or half-hourly time steps is performed with the EDM model. 3 figs

  16. Environmental Management Model for Road Maintenance Operation Involving Community Participation

    Science.gov (United States)

    Triyono, A. R. H.; Setyawan, A.; Sobriyah; Setiono, P.

    2017-07-01

    Public expectations in Central Java regarding infrastructure provision, especially roads, are very high, as reflected in the number of complaints and expectations submitted via Twitter, Short Message Service (SMS), e-mail, and public reports in various media. The Highways Department of Central Java province therefore requires a model of environmental management for routine road maintenance that involves the community, so that roads remain representative and serve road users safely and comfortably. This study used a survey method with SEM and SWOT analysis, with latent independent variables (X): community participation in road regulation, development, construction and supervision (PSM); public behavior in road utilization (PMJ); provincial road service (PJP); safety on provincial roads (KJP); and integrated management system (SMT); and the latent dependent variable (Y): routine maintenance of provincial roads integrated with an environmental management system and involving community participation (MML). The results showed that routine road maintenance in Central Java province has yet to implement environmental management involving the community. An environmental management model was therefore developed, with the results H1: community participation (PSM) has a positive influence on the environmental management model (MML); H2: public behavior in road utilization (PMJ) has a positive effect on MML; H3: provincial road service (PJP) has a positive effect on MML; H4: safety on provincial roads (KJP) has a positive effect on MML; and H5: the integrated management system (SMT) has a positive influence on MML.
From the analysis, a model was formulated describing the influence of the independent variables PSM, PMJ, PJP, KJP, and SMT on the dependent variable

  17. "A model co-operative country": Irish-Finnish co-operative contacts at the turn of the twentieth century

    DEFF Research Database (Denmark)

    Hilson, Mary

    2017-01-01

    Agricultural co-operative societies were widely discussed across late nineteenth-century Europe as a potential solution to the problems of agricultural depression, land reform and rural poverty. In Finland, the agronomist Hannes Gebhard drew inspiration from examples across Europe in founding the...... that even before the First World War it was Finland, not Ireland, that had begun to be regarded as ‘a model co-operative country’....... between Irish and Finnish co-operators around the turn of the century, and examines the ways in which the parallels between the two countries were constructed and presented by those involved in these exchanges. I will also consider the reasons for the divergence in the development of cooperation, so...

  18. Accurate wind farm development and operation. Advanced wake modelling

    Energy Technology Data Exchange (ETDEWEB)

    Brand, A.; Bot, E.; Ozdemir, H. [ECN Unit Wind Energy, P.O. Box 1, NL 1755 ZG Petten (Netherlands); Steinfeld, G.; Drueke, S.; Schmidt, M. [ForWind, Center for Wind Energy Research, Carl von Ossietzky Universitaet Oldenburg, D-26129 Oldenburg (Germany); Mittelmeier, N. [REpower Systems SE, D-22297 Hamburg (Germany)]

    2013-11-15

    The ability to calculate wind farm wakes on the basis of ambient conditions calculated with an atmospheric model is demonstrated. Specifically, comparisons are described between predicted and observed ambient conditions, and between power predictions from three wind farm wake models and power measurements, for a single and a double wake situation. The comparisons are based on performance indicators and test criteria, with the objective of determining the percentage of predictions that fall within a given range about the observed value. The Alpha Ventus site is considered, which consists of a wind farm with the same name and the met mast FINO1. Data from the 6 REpower wind turbines and the FINO1 met mast were employed. The atmospheric model WRF predicted the ambient conditions at the location and the measurement heights of the FINO1 mast. While the predictability of the wind speed and the wind direction may be reasonable if sufficiently sized tolerances are employed, it is practically impossible to predict the ambient turbulence intensity and vertical shear. Three wind farm wake models predicted the individual turbine powers: FLaP-Jensen and FLaP-Ainslie from ForWind Oldenburg, and FarmFlow from ECN. The reliabilities of the FLaP-Ainslie and the FarmFlow wind farm wake models are of equal order, and higher than that of FLaP-Jensen. Any difference between the predictions from these models is most pronounced in the double wake situation, where FarmFlow slightly outperforms FLaP-Ainslie.

  19. Theory model and experiment research about the cognition reliability of nuclear power plant operators

    International Nuclear Information System (INIS)

    Fang Xiang; Zhao Bingquan

    2000-01-01

    In order to improve the reliability of NPP operation, simulation research on the reliability of nuclear power plant operators is needed. Using a nuclear power plant simulator as the research platform, and taking the current international reliability research model, human cognitive reliability (HCR), as a reference, part of the model was modified according to the actual status of Chinese nuclear power plant operators, and a research model for Chinese nuclear power plant operators was obtained based on the two-parameter Weibull distribution. Experiments on the reliability of nuclear power plant operators were carried out using this two-parameter Weibull distribution research model, and the results are consistent with those reported internationally. The research would be beneficial to the operational safety of nuclear power plants
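A minimal sketch of the two-parameter-Weibull form of such a cognitive reliability model (HCR-style) is given below; the parameter values are illustrative, not the paper's fitted values:

```python
import math

def nonresponse_prob(t, t_half, beta, eta):
    """HCR-style two-parameter Weibull non-response probability:
    P(t) = exp(-((t / t_half) / eta) ** beta),
    where t_half is the median crew response time and beta (shape) and
    eta (scale) are fitted from simulator experiments (values used in the
    examples below are illustrative)."""
    x = (t / t_half) / eta
    return math.exp(-(x ** beta))
```

The function has the expected qualitative behaviour: the non-response probability starts near 1 for very short times and decays monotonically as the available time grows relative to the median response time.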

  20. Activating Global Operating Models: The bridge from organization design to performance

    Directory of Open Access Journals (Sweden)

    Amy Kates

    2015-07-01

    Full Text Available This article introduces the concept of activation and discusses its use in the implementation of global operating models by large multinational companies. We argue that five particular activators help set in motion the complex strategies and organizations required by global operating models.

  1. Modeling and validating the grabbing forces of hydraulic log grapples used in forest operations

    Science.gov (United States)

    Jingxin Wang; Chris B. LeDoux; Lihai Wang

    2003-01-01

    The grabbing forces of log grapples were modeled and analyzed mathematically under operating conditions when grabbing logs from compact log piles and from bunch-like log piles. The grabbing forces are closely related to the structural parameters of the grapple, the weight of the grapple, and the weight of the log grabbed. An operational model grapple was designed and...

  2. Upper Rio Grande water operations model: A tool for enhanced system management

    Science.gov (United States)

    Gail Stockton; D. Michael Roark

    1999-01-01

    The Upper Rio Grande Water Operations Model (URGWOM) under development through a multi-agency effort has demonstrated capability to represent the physical river/reservoir system, to track and account for Rio Grande flows and imported San Juan flows, and to forecast flows at various points in the system. Testing of the Rio Chama portion of the water operations model was...

  3. Advanced autonomous model-based operation of industrial process systems (Autoprofit) : technological developments and future perspectives

    NARCIS (Netherlands)

    Ozkan, L.; Bombois, X.J.A.; Ludlage, J.H.A.; Rojas, C.R.; Hjalmarsson, H.; Moden, P.E.; Lundh, M.; Backx, A.C.P.M.; Van den Hof, P.M.J.

    2016-01-01

    Model-based operation support technology such as Model Predictive Control (MPC) is a proven and accepted technology for multivariable and constrained large scale control problems in process industry. Despite the growing number of successful implementations, the low level of operational efficiency of

  4. Inclusive zero-angle neutron spectra at the ISR and OPER-model

    International Nuclear Information System (INIS)

    Grigoryan, A.A.

    1977-01-01

    The inclusive zero-angle neutron spectra in pp-collisions measured at the ISR are compared with the OPER-model predictions. The OPER model describes the experimental data rather well. Some features of the spectra behaviour at fixed transverse momentum and large x are considered

  5. The Role of a Mental Model in Learning to Operate a Device.

    Science.gov (United States)

    Kieras, David E.; Bovair, Susan

    1984-01-01

    Describes three studies concerned with learning to operate a control panel device and how this learning is affected by understanding a device model that describes its internal mechanism. Results indicate benefits of a device model depend on whether it supports direct inference of exact steps required to operate the device. (Author/MBR)

  6. Econometric modelling of economic security in business operations management

    OpenAIRE

    Chagovets, L. О.; Nevezhin, V. P.; Zakharova, О. V.

    2014-01-01

    The article deals with econometric modelling of economic security. A model for evaluating the effect of transaction costs on the level of enterprise economic security is provided. The econometric models for evaluating economic security used in the research are based on panel data. According to the results, the reserves for increasing the general level of economic security through transaction cost reduction are revealed. The issues of econometric modelling of economic security are considered. ...

  7. Modelling and Operation of Diesel Engine Exhaust Gas Cleaning Systems

    DEFF Research Database (Denmark)

    Åberg, Andreas

    . Challenges with this technology include dosing the appropriate amount of urea to reach sufficient NOx conversion, while at the same time keeping NH3- slip from the exhaust system below the legislation. This requires efficient control algorithms. The focus of this thesis is modelling and control of the SCR...... parameters were estimated using bench-scale monolith isothermal data. Validation was done by simulating the out-put from a full-scale SCR monolith that was treating real engine gases from the European Transient Cycle (ETC). Results showed that the models were successfully calibrated, and that some......, and simulating the system....

  8. Running scenarios using the Waste Tank Safety and Operations Hanford Site model

    International Nuclear Information System (INIS)

    Stahlman, E.J.

    1995-11-01

    Management of the Waste Tank Safety and Operations (WTS&O) at Hanford is a large and complex task encompassing 177 tanks and having a budget of over $500 million per year. To assist managers in this task, a model based on system dynamics was developed by the Massachusetts Institute of Technology. The model simulates the WTS&O at the Hanford Tank Farms by modeling the planning, control, and flow of work conducted by managers, engineers, and crafts. The model is described in Policy Analysis of Hanford Tank Farm Operations with System Dynamics Approach (Kwak 1995b) and Management Simulator for Hanford Tank Farm Operations (Kwak 1995a). This document provides guidance for users of the model in developing, running, and analyzing the results of management scenarios. The reader is assumed to have an understanding of the model and its operation. Important parameters and variables in the model are described, and two scenarios are formulated as examples

  9. A model-based approach to operational event groups ranking

    Energy Technology Data Exchange (ETDEWEB)

    Simic, Zdenko [European Commission Joint Research Centre, Petten (Netherlands). Inst. for Energy and Transport; Maqua, Michael [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Koeln (Germany); Wattrelos, Didier [Institut de Radioprotection et de Surete Nucleaire (IRSN), Fontenay-aux-Roses (France)

    2014-04-15

    The operational experience (OE) feedback provides improvements in all industrial activities. Identification of the most important and valuable groups of events within accumulated experience is important in order to focus on a detailed investigation of events. The paper describes the new ranking method and compares it with three others. Methods have been described and applied to OE events utilised by nuclear power plants in France and Germany for twenty years. The results show that different ranking methods only roughly agree on which of the event groups are the most important ones. In the new ranking method the analytical hierarchy process is applied in order to assure consistent and comprehensive weighting determination for ranking indexes. The proposed method allows a transparent and flexible event groups ranking and identification of the most important OE for further more detailed investigation in order to complete the feedback. (orig.)
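To illustrate the weighting step, here is a sketch of how the analytic hierarchy process derives consistent weights for ranking indexes from a pairwise comparison matrix via the principal eigenvector (the matrix entries below are hypothetical, not taken from the paper):

```python
import numpy as np

def ahp_weights(pairwise):
    """Principal-eigenvector weights from a pairwise comparison matrix (Saaty's AHP)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)              # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    return w / w.sum()                    # normalise weights to sum to 1

def consistency_ratio(pairwise):
    """Consistency ratio; CR < 0.1 is conventionally taken as acceptable."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lam = np.max(np.linalg.eigvals(A).real)
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random indices (subset)
    return ci / ri

# Hypothetical 3x3 comparison of three ranking indexes (e.g. safety relevance,
# frequency, severity); entries are illustrative only.
A = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
w = ahp_weights(A)
```

The consistency check is what makes the weighting "consistent and comprehensive" in the sense described above: judgement matrices whose CR exceeds the conventional 0.1 threshold would be revised before the weights are used for ranking.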

  10. Non-Hermitian Operator Modelling of Basic Cancer Cell Dynamics

    Science.gov (United States)

    Bagarello, Fabio; Gargano, Francesco

    2018-04-01

    We propose a dynamical system of tumor cell proliferation based on operatorial methods. The approach we propose is quantum-like: we use ladder and number operators to describe the birth and death of healthy and tumor cells, and the evolution is ruled by a non-Hermitian Hamiltonian which includes, in a non-reversible way, the basic biological mechanisms we consider for the system. We show that this approach is rather efficient in describing some processes of the cells. We further add some medical treatment, described by adding a suitable term to the Hamiltonian, which controls and limits the growth of tumor cells, and we propose an optimal approach to stop, and reverse, this growth.
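A toy sketch of the quantum-like machinery (truncated ladder operators, a non-Hermitian Hamiltonian, non-unitary evolution) is given below. The single-mode Hamiltonian and its parameters are assumptions for illustration only, not the paper's two-population healthy/tumor model:

```python
import numpy as np

def ladder(n):
    """Annihilation operator truncated to an n-level number basis."""
    a = np.zeros((n, n))
    for k in range(1, n):
        a[k - 1, k] = np.sqrt(k)
    return a

n, omega, gamma = 12, 1.0, 0.15
a = ladder(n)
num = a.T @ a                          # number operator a†a (cell count observable)
# Illustrative non-Hermitian Hamiltonian: the 1j*gamma*(a† + a) term is
# anti-Hermitian, so the evolution is non-unitary (net 'births' are injected).
H = omega * num + 1j * gamma * (a.T + a)

# psi(t) = exp(-i H t) psi(0), via eigendecomposition of the general matrix -iH.
vals, vecs = np.linalg.eig(-1j * H)

def evolve(t, psi0):
    c = np.linalg.solve(vecs, psi0.astype(complex))
    return vecs @ (np.exp(vals * t) * c)

def mean_number(psi):
    """Normalised expectation value of the number operator."""
    return float((psi.conj() @ num @ psi).real / (psi.conj() @ psi).real)

psi0 = np.zeros(n)
psi0[0] = 1.0                          # start in the 'empty' number state
```

Starting from the empty state, the normalised mean cell number grows away from zero and the state norm drifts away from 1, which is the signature of the non-reversible, non-Hermitian dynamics the abstract describes.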

  11. New Model of a Solar Wind Airplane for Geomatic Operations

    Science.gov (United States)

    Achachi, A.; Benatia, D.

    2015-08-01

    The ability of an aircraft to fly for a much extended period of time has become a key issue and a target of research, both in the domain of civilian aviation and in that of unmanned aerial vehicles. This paper describes the design and evaluation of a new solar wind aircraft, with the objective of assessing the impact of the new system design on overall flight crew performance. The required endurance is in the range of some hours in the case of law enforcement, border surveillance, forest fire fighting or power line inspection. However, other applications at high altitudes, such as geomatic operations for delivering geographic information, weather research and forecast, and environmental monitoring, would require remaining airborne for days, weeks or even months. The design of GNSS non-precision approach procedures for different airports is based on geomatic data.

  12. NEW MODEL OF A SOLAR WIND AIRPLANE FOR GEOMATIC OPERATIONS

    Directory of Open Access Journals (Sweden)

    A. Achachi

    2015-08-01

    Full Text Available The ability of an aircraft to fly for a much extended period of time has become a key issue and a target of research, both in the domain of civilian aviation and in that of unmanned aerial vehicles. This paper describes the design and evaluation of a new solar wind aircraft, with the objective of assessing the impact of the new system design on overall flight crew performance. The required endurance is in the range of some hours in the case of law enforcement, border surveillance, forest fire fighting or power line inspection. However, other applications at high altitudes, such as geomatic operations for delivering geographic information, weather research and forecast, and environmental monitoring, would require remaining airborne for days, weeks or even months. The design of GNSS non-precision approach procedures for different airports is based on geomatic data.

  13. Equivalence of the super Lax and local Dunkl operators for Calogero-like models

    International Nuclear Information System (INIS)

    Neelov, A I

    2004-01-01

    Following Shastry and Sutherland I construct the super Lax operators for the Calogero model in the oscillator potential. These operators can be used for the derivation of the eigenfunctions and integrals of motion of the Calogero model and its supersymmetric version. They allow us to infer several relations involving the Lax matrices for this model in a fast way. It is shown that the super Lax operators for the Calogero and Sutherland models can be expressed in terms of the supercharges and so-called local Dunkl operators constructed in our recent paper with M Ioffe. Several important relations involving Lax matrices and Hamiltonians of the Calogero and Sutherland models are easily derived from the properties of Dunkl operators

  14. Interagency Modeling Atmospheric Assessment Center Local Jurisdiction: IMAAC Operations Framework

    Science.gov (United States)

    2010-03-01

    proposed model (Daft & Lengel, 1986). All six Ohio LINC Cities were interviewed face-to-face providing the basis for the research evaluating...Cincinnati, DHS should work in partnership with Cincinnati Urban Area Leadership to convene a randomly selected, but statistically significant, UASI...response system. Internal document. Daft, R. L. & Lengel, R. H. (1986). Organizational Information Requirements, Media Richness and Structural

  15. Making Risk Models Operational for Situational Awareness and Decision Support

    International Nuclear Information System (INIS)

    Paulson, P.R.; Coles, G.; Shoemaker, S.

    2012-01-01

    We present CARIM, a decision support tool to aid in the evaluation of plans for converting control systems to digital instruments. The model provides the capability to optimize planning and resource allocation to reduce risk from multiple safety and economic perspectives. (author)

  16. Mathematical modeling of thermal runaway in semiconductor laser operation

    NARCIS (Netherlands)

    Smith, W.R.

    2000-01-01

    A mathematical model describing the coupling of electrical, optical and thermal effects in semiconductor lasers is introduced. Through a systematic asymptotic expansion, the governing system of differential equations is reduced to a single second-order boundary value problem. This highly nonlinear

  17. Practice What You Preach: Microfinance Business Models and Operational Efficiency

    NARCIS (Netherlands)

    Bos, J.W.B.; Millone, M.M.

    The microfinance sector has room for pure for-profit microfinance institutions (MFIs), non-profit organizations, and “social” for-profit firms that aim to pursue a double bottom line. Depending on their business model, these institutions target different types of borrowers, change the size of their

  18. Practice what you preach: Microfinance business models and operational efficiency

    NARCIS (Netherlands)

    Bos, J.W.B.; Millone, M.M.

    2013-01-01

    The microfinance sector is an example of a sector in which firms with different business models coexist. Next to pure for-profit microfinance institutions (MFIs), the sector has room for non-profit organizations, and includes 'social' for-profit firms that aim to maximize a double bottom line and

  19. Vehicular pollution modeling using the operational street pollution model (OSPM) for Chembur, Mumbai (India).

    Science.gov (United States)

    Kumar, Awkash; Ketzel, Matthias; Patil, Rashmi S; Dikshit, Anil Kumar; Hertel, Ole

    2016-06-01

    Megacities in India such as Mumbai and Delhi are among the most polluted places in the world. In the present study, the widely used operational street pollution model (OSPM) is applied to assess pollutant loads in the street canyons of Chembur, a suburban area just outside Mumbai city. Chembur is both industrialized and highly congested with vehicles. There are six major street canyons in this area, for which modeling has been carried out for NOx and particulate matter (PM). The vehicle emission factors for Indian cities have been developed by the Automotive Research Association of India (ARAI) for PM as a whole, not specifically for PM10 or PM2.5. The model has been applied for 4 days of the winter season and for the whole year to examine the effect of meteorology. The urban background concentrations have been obtained from an air quality monitoring station, and results have been compared with measured concentrations from the routine monitoring performed in Mumbai. NOx emissions originate mainly from vehicles, which are ground-level sources emitting close to where people live; these emissions are therefore highly relevant. The modeled NOx concentrations compared satisfactorily with observed data. However, this was not the case for PM, most likely because the emission inventory did not include resuspended particulate matter.

  20. Hydraulic modelling of drinking water treatment plant operations

    Directory of Open Access Journals (Sweden)

    L. C. Rietveld

    2009-06-01

    Full Text Available The flow through a unit of a drinking water treatment plant is one of the most important parameters in terms of the unit's effectiveness. In the present paper, a new EPAnet library is presented with the typical hydraulic elements for the drinking water treatment processes of well abstraction, rapid sand filtration, and cascade and tower aeration. Using this treatment step library, a hydraulic model was set up, calibrated and validated for the drinking water treatment plant Harderbroek. With the actual valve positions and pump speeds, the flows through the several treatment steps were calculated. A case study shows the use of the model to calculate new setpoints for the current frequency converters of the effluent pumps during a filter backwash.

  1. Operational cooling tower model (CTTOOL V1.0)

    Energy Technology Data Exchange (ETDEWEB)

    Aleman, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); LocalDomainServers, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Garrett, A. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-01-01

    Mechanical draft cooling towers (MDCTs) are widely used to remove waste heat from industrial processes, including suspected proliferators of weapons of mass destruction (WMD). The temperature of the air being exhausted from the MDCT is proportional to the amount of thermal energy being removed from the process cooling water, although ambient weather conditions and cooling water flow rate must be known or estimated to calculate the rate of thermal energy dissipation (Q). It is theoretically possible to derive MDCT air exhaust temperatures from thermal images taken from a remote sensor. A numerical model of a MDCT is required to translate the air exhaust temperature to a Q. This report describes the MDCT model developed by the Problem Centered Integrated Analysis (PCIA) program that was designed to perform those computational tasks. The PCIA program is a collaborative effort between the Savannah River National Laboratory (SRNL), the Northrop-Grumman Corporation (NG) and the Aerospace Corporation (AERO).

  2. Modeling Operator Performance in Low Task Load Supervisory Domains

    Science.gov (United States)

    2011-06-01

    important to model the best and worst performers separately. It is easy to see that the best performers were better multitaskers and more directed...the population this research is expected to influence contains men and women between the ages of 18 and 50 with an interest in using

  3. Operational Ocean Modelling with the Harvard Ocean Prediction System

    Science.gov (United States)

    2008-11-01

    TNO report number TNO-DV2008 A417, November 2008. Authors: dr. F.P.A. Lam, dr. ir. M.W. Schouten, dr. L.A. te Raa...in the area of theory and implementation of numerical schemes and parameterizations, ocean models have grown from experimental tools to full-blown ocean...sound propagation through mesoscale features using 3-D coupled mode theory, Thesis, Naval Postgraduate School, Monterey, USA. 1992. [9] Robinson

  4. Mathematical Modeling in Support of Military Operational Medicine

    Science.gov (United States)

    2006-07-01

    can also accept measured under armor data, in which case the vest and plate are not modeled by the code. The equations describing the motion of...normalized work done on the lung as well as probabilities of four levels of lung injury. In addition, the time histories of under armor pressure and of...chest wall motion can be saved in easily read disc files. INJURY-K code predictions for under armor pressure have shown excellent agreement with

  5. An expert system for modelling operators' behaviour in control of a steam generator

    International Nuclear Information System (INIS)

    Cacciabue, P.C.; Guida, G.; Pace, A.

    1987-01-01

    Modelling the mental processes of an operator in charge of controlling a complex industrial plant is a challenging issue, currently tackled by several research projects in both artificial intelligence and cognitive psychology. Progress in this field could greatly contribute not only to a deeper understanding of operator behaviour, but also to the design of intelligent operator support systems. In this paper the authors report the preliminary results of an experimental research effort devoted to modelling the behaviour of a plant operator by means of knowledge-based techniques. The main standpoint of their work is that the cognitive processes underlying the operator's behaviour can be of three main different types, according to the actual situation in which the operator works. In normal situations, or during training sessions, the operator is free to develop deep reasoning, using knowledge about plant structure and function and relying on the first physical principles that govern its behaviour

  6. A model of Franco-German co-operation

    International Nuclear Information System (INIS)

    Fischer, U.; Leverenz, R.

    1999-01-01

    In the early 1990s, the power station industries in Germany and France decided to further the development of nuclear power station technology. As an initial stage, design measures were taken to reduce the probability of serious faults even further, below its existing, already very low level. Furthermore, steps were taken to restrict even these hypothetical faults, so that their effects are confined to the installation itself and do not have serious consequences for the people who live in the vicinity of the installation. There is no need for measures such as evacuation and resettlement of local residents. In particular, the fact that the output has been increased to 1750 MW, with the resultant low specific construction costs, produces low power generation costs without compromising safety. Comprehensive standardisation of the layout reduces the amount of time engineers spend on design and installation and exploits the advantages of series production. Operating costs are minimised by a high degree of availability, of 92%, and an increased degree of efficiency, of 36%, which have been achieved by means of an optimised maintenance concept. Increased burn-up means that fuel costs also contribute to low power generation costs. The EPR therefore represents more than a single option for power generation in the future in Germany, France, Europe and across the world. (orig.) [de

  7. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    Directory of Open Access Journals (Sweden)

    Li Deng

    2015-01-01

    Full Text Available In view of the evaluation and decision-making problem of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb, and these angles are taken as independent variables to establish a comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA software and used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort, so that operating comfort can be predicted quantitatively. The operating comfort prediction results for the human-machine interface layout of a driller control room show that the GEP-based model is fast and efficient, has good prediction accuracy, and can improve design efficiency.

  8. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    Summary We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
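The central statistic in such a permutation procedure can be sketched compactly. The toy example below is an illustrative reading of the idea, not the authors' implementation: for a permuted response, compute the smallest penalty that zeroes every LASSO coefficient, lambda_max = max_j |x_j'y|/n, and take an upper quantile of that statistic over many permutations. The data, quantile level, and function names are assumptions made for illustration.

```python
import random

def lambda_max(X, y):
    """Smallest LASSO penalty that zeroes every coefficient: max_j |x_j' y| / n."""
    n, p = len(y), len(X[0])
    return max(abs(sum(X[i][j] * y[i] for i in range(n))) / n for j in range(p))

def permutation_lambda(X, y, n_perm=200, quantile=0.95, seed=0):
    """Penalty chosen as an upper quantile of lambda_max over permuted responses."""
    rng = random.Random(seed)
    yy = list(y)
    vals = []
    for _ in range(n_perm):
        rng.shuffle(yy)               # break the X-y association
        vals.append(lambda_max(X, yy))
    vals.sort()
    return vals[int(quantile * (len(vals) - 1))]
```

With standardized columns, any penalty at least this large keeps the permuted (null) fit empty, which is the intuition behind using it as a selection threshold.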

  9. Theoretical Models and Operational Frameworks in Public Health Ethics

    Science.gov (United States)

    Petrini, Carlo

    2010-01-01

    The article is divided into three sections: (i) an overview of the main ethical models in public health (theoretical foundations); (ii) a summary of several published frameworks for public health ethics (practical frameworks); and (iii) a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided. PMID:20195441

  10. Theoretical Models and Operational Frameworks in Public Health Ethics

    Directory of Open Access Journals (Sweden)

    Carlo Petrini

    2010-01-01

    Full Text Available The article is divided into three sections: (i) an overview of the main ethical models in public health (theoretical foundations); (ii) a summary of several published frameworks for public health ethics (practical frameworks); and (iii) a few general remarks. Rather than maintaining the superiority of one position over the others, the main aim of the article is to summarize the basic approaches proposed thus far concerning the development of public health ethics by describing and comparing the various ideas in the literature. With this in mind, an extensive list of references is provided.

  11. A flexible model for economic operational management of grid battery energy storage

    International Nuclear Information System (INIS)

    Fares, Robert L.; Webber, Michael E.

    2014-01-01

    To connect energy storage operational planning with real-time battery control, this paper integrates a dynamic battery model with an optimization program. First, we transform a behavioral circuit model designed to describe a variety of battery chemistries into a set of coupled nonlinear differential equations. Then, we discretize the differential equations to integrate the battery model with a GAMS (General Algebraic Modeling System) optimization program, which decides when the battery should charge and discharge to maximize its operating revenue. We demonstrate the capabilities of our model by applying it to lithium-ion (Li-ion) energy storage operating in Texas' restructured electricity market. By simulating 11 years of operation, we find that our model can robustly compute an optimal charge-discharge schedule that maximizes daily operating revenue without violating a battery's operating constraints. Furthermore, our results show there is significant variation in potential operating revenue from one day to the next. The revenue potential of Li-ion storage varies from approximately $0–1800/MWh of energy discharged, depending on the volatility of wholesale electricity prices during an operating day. Thus, it is important to consider the material degradation-related “cost” of performing a charge-discharge cycle in battery operational management, so that the battery only operates when revenue exceeds cost. - Highlights: • A flexible, dynamic battery model is integrated with an optimization program. • Electricity price data is used to simulate 11 years of Li-ion operation on the grid. • The optimization program robustly computes an optimal charge-discharge schedule. • Variation in daily Li-ion battery revenue potential from 2002 to 2012 is shown. • We find it is important to consider the cost of a grid duty cycle
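The optimization step that the paper implements in GAMS can be illustrated with a far simpler discrete sketch. The following is a hedged toy example, not the paper's model: a backward dynamic program over a discretized state of charge that maximizes arbitrage revenue against a known hourly price series. The capacity, power limit, and round-trip efficiency values are invented for illustration.

```python
def best_revenue(prices, capacity=4, power=1, eff=0.9):
    """Maximum arbitrage revenue via backward DP over a discretized state of charge.

    Charging one unit of storage draws power/eff from the grid; discharging one
    unit delivers power*eff to it. Stored energy left at the horizon is worth zero.
    """
    value = [0.0] * (capacity + 1)  # value[s]: revenue-to-go with s units stored
    for price in reversed(prices):
        nxt = list(value)
        for s in range(capacity + 1):
            best = value[s]                            # idle this hour
            if s + power <= capacity:                  # charge
                best = max(best, value[s + power] - price * power / eff)
            if s - power >= 0:                         # discharge
                best = max(best, value[s - power] + price * power * eff)
            nxt[s] = best
        value = nxt
    return value[0]  # battery starts empty
```

The sketch also shows why revenue depends on price volatility, as the abstract reports: with a flat price series no charge-discharge cycle is profitable and the optimum is to stay idle.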

  12. Renormalization Group Evolution of the Standard Model Dimension Six Operators I: Formalism and lambda Dependence

    CERN Document Server

    Jenkins, Elizabeth E; Trott, Michael

    2013-01-01

    We calculate the order λ, λ² and λy² terms of the 59 × 59 one-loop anomalous dimension matrix of dimension-six operators, where λ and y are the Standard Model Higgs self-coupling and a generic Yukawa coupling, respectively. The dimension-six operators modify the running of the Standard Model parameters themselves, and we compute the complete one-loop result for this. We discuss how there is mixing between operators for which no direct one-particle-irreducible diagram exists, due to operator replacements by the equations of motion.

  13. Control software architecture and operating modes of the Model M-2 maintenance system

    Energy Technology Data Exchange (ETDEWEB)

    Satterlee, P.E. Jr.; Martin, H.L.; Herndon, J.N.

    1984-04-01

    The Model M-2 maintenance system is the first completely digitally controlled servomanipulator. The M-2 system allows dexterous operations to be performed remotely using bilateral force-reflecting master/slave techniques, and its integrated operator interface takes advantage of touch-screen-driven menus to allow selection of all possible operating modes. The control system hardware for this system has been described previously. This paper describes the architecture of the overall control system. The system's various modes of operation are identified, the software implementation of each is described, system diagnostic routines are described, and highlights of the computer-augmented operator interface are discussed. 3 references, 5 figures.

  14. Control software architecture and operating modes of the Model M-2 maintenance system

    International Nuclear Information System (INIS)

    Satterlee, P.E. Jr.; Martin, H.L.; Herndon, J.N.

    1984-04-01

    The Model M-2 maintenance system is the first completely digitally controlled servomanipulator. The M-2 system allows dexterous operations to be performed remotely using bilateral force-reflecting master/slave techniques, and its integrated operator interface takes advantage of touch-screen-driven menus to allow selection of all possible operating modes. The control system hardware for this system has been described previously. This paper describes the architecture of the overall control system. The system's various modes of operation are identified, the software implementation of each is described, system diagnostic routines are described, and highlights of the computer-augmented operator interface are discussed. 3 references, 5 figures

  15. Design basis for the operational modelling of the atmospheric dispersion

    International Nuclear Information System (INIS)

    Doury, A.

    1987-10-01

    Based on the latest practices at the Institut de Protection et de Surete Nucleaire of the Commissariat a l'Energie Atomique (CEA), we shall first present the basis elements used for a simple and adequate modelling method for assessing hypothetical atmospheric pollution from transient or continuous discharge with any given kinetics under various weather conditions which are not necessarily stationary or uniform, which are likely to occur even with little or no wind. Discharges shall be considered as sequences of instantaneous successive puffs. The parameters deduced experimentally or from observations are functions of the transfer time and cover all time and space scales. The restrictions of use are indicated, especially concerning heavy gases. Finally, simple formulas are proposed for concentrations and depositions so as to be able to make a rapid estimation of the orders of magnitude with almost no computation

  16. Modeling of state recognition process of plant operator

    International Nuclear Information System (INIS)

    Hatakeyama, Naoki; Furuta, Kazuo

    2000-01-01

    Machine systems have become larger and more complicated in recent years, making automation necessary. Generally speaking, humans hardly grasp the overall state of automated systems, and accidents caused by this problem have in fact been reported. To avoid such accidents, many studies have sought to give humans the authority for final decision making. In general, whether the authority for decision making is given to humans or to machine systems depends on the circumstances. It is therefore expected that humans and machine systems exchange information with each other and share their tasks efficiently, which requires that machine systems infer human intention. The state recognition process, which is important for inferring human intention, has not been sufficiently considered. In this paper we first reconstruct human knowledge into a hierarchy and incorporate this knowledge into a Bayesian network. We then model the state recognition process using the Bayesian network. (author)
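The inference step of such a Bayesian-network model can be illustrated, at its very simplest, with a single observation node updating a belief over plant states. The snippet below is an illustrative sketch, not the authors' network; the states, prior, and likelihood numbers are invented.

```python
def posterior(prior, likelihood, observation):
    """P(state | observation) by Bayes' rule over a discrete set of plant states."""
    unnorm = {s: prior[s] * likelihood[s][observation] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

# Hypothetical numbers: prior belief about the plant state and the
# probability of observing an alarm under each state.
prior = {"normal": 0.95, "fault": 0.05}
likelihood = {
    "normal": {"alarm": 0.02, "quiet": 0.98},
    "fault":  {"alarm": 0.90, "quiet": 0.10},
}
post = posterior(prior, likelihood, "alarm")  # belief after one alarm
```

A full Bayesian network chains many such updates across hierarchically organized knowledge nodes; this sketch shows only the single-node arithmetic.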

  17. Design basis for the operational modelling of the atmospheric dispersion

    International Nuclear Information System (INIS)

    Doury, A.

    1987-11-01

    Based on the latest practices at the Institut de Protection et de Surete Nucleaire of the Commissariat a l'Energie Atomique (CEA), we shall first present the basis elements used for a simple and adequate modelling method for assessing hypothetical atmospheric pollution from transient or continuous discharge with any given kinetics under various weather conditions which are not necessarily stationary or uniform, which are likely to occur even with little or no wind. Discharges shall be considered as sequences of instantaneous successive puffs. The parameters deduced experimentally or from observations are functions of the transfer time and cover all time and space scales. The restrictions of use are indicated, especially concerning heavy gases. Finally, simple formulas are proposed for concentrations and depositions so as to be able to make a rapid estimation of the orders of magnitude with almost no computation

  18. Inference in High-dimensional Dynamic Panel Data Models

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Tang, Haihan

    We establish oracle inequalities for a version of the Lasso in high-dimensional fixed effects dynamic panel data models. The inequalities are valid for the coefficients of the dynamic and exogenous regressors. Separate oracle inequalities are derived for the fixed effects. Next, we show how one can...

  19. A framework for modelling the behaviour of a process control operator under stress

    International Nuclear Information System (INIS)

    Kan, C-C.F.; Roberts, P.D.; Smith, I.C.

    1990-01-01

    This paper proposes the basis for a framework for modelling the effects of stress on the behaviour of a process control plant operator. The qualitative effects of stress on the cognitive processing ability of the operator are discussed. Stress is thought mainly to decrease the reasoning ability of the operator. The operator will experience increased rigidity in problem solving and a narrowing of his attention and perceptual field. At the same time, the operator will be increasingly reluctant to admit that wrong decisions have been made. Furthermore, he will revert to skill-based behaviours. The direct consequence of stress on the decision-making mechanism of the operator is the selection of an inappropriate course of action. A formal representation of decision errors is proposed and various techniques are suggested for representing the mechanisms of decision error making. The degree of experience possessed by the operator is also an important factor in the operator's tolerance of stress. The framework allows the experience of the operator to be integrated into the model. Such an operator model can be linked to a plant simulator and the complete behaviour of the plant then simulated

  20. Expert System Models for Forecasting Forklifts Engagement in a Warehouse Loading Operation: A Case Study

    Directory of Open Access Journals (Sweden)

    Dejan Mirčetić

    2016-08-01

    Full Text Available The paper focuses on the problem of forklift engagement in warehouse loading operations. Two expert system (ES) models are created using several machine learning (ML) models. The models try to mimic expert decisions when determining the engagement of forklifts in the loading operation. Different ML models are evaluated, and the adaptive neuro-fuzzy inference system (ANFIS) and classification and regression trees (CART) are chosen as the ones which have shown the best results for the research purpose. As a case study, the central warehouse of a beverage company was used. In a beverage distribution chain, the proper engagement of forklifts in a loading operation is crucial for maintaining the defined customer service level. The created ES models represent a new approach to the rationalization of forklift usage, particularly for solving the problem of forklift engagement in cargo loading. They are a simple, easy to understand, reliable, and practically applicable tool for deciding on the engagement of forklifts in a loading operation.

  1. The chemical energy unit partial oxidation reactor operation simulation modeling

    Science.gov (United States)

    Mrakin, A. N.; Selivanov, A. A.; Batrakov, P. A.; Sotnikov, D. G.

    2018-01-01

    The chemical energy unit scheme for synthesis gas, electric and heat energy production, which can be used both for on-site chemical industry facilities and under field conditions, is presented in the paper. The mathematical model of the gasification process in the partial oxidation reactor is described, and a flow diagram of the algorithm for determining the composition and temperature of the reaction products is shown. Verification of the developed software product showed good agreement between experimental values and calculations according to other programs: the relative discrepancy in the determined temperature amounted to 4-5%, while the absolute discrepancy in composition ranged from 1 to 3%. The synthesis gas composition was found to be practically independent of the enthalpy of the water vapour supplied to the partial oxidation reactor (POR) and of the compressor air pressure ratio. Moreover, increasing the air consumption coefficient α from 0.7 to 0.9 was found to decrease the specific yield of the target synthesis gas components (carbon and hydrogen oxides) by nearly a factor of two, and the required ratio of the target components was obtained in the region of specific water vapour consumption from 5 to 6 kg/kg of fuel.

  2. Study on Developing Degradation Model for Nuclear Power Plants With Ageing Elements Affected on Operation Parameter

    International Nuclear Information System (INIS)

    Choi, Yong Won; Lim, Sung Won; Lee, Un Chul; Kim, Man Woong; Kim, Kab; Ryu, Yong Ho

    2009-01-01

    As part of developing an evaluation system for the safety margin effects of degradation in CANDU reactors, the degradation model is required to represent the distribution of each ageing factor's value over the operating years. Unfortunately, it is not easy to establish an explicit relation between the RELAP-CANDU parameters and the ageing mechanisms because of insufficient data and the lack of applicable models. Therefore, operating parameters related to ageing are used to determine the range of each ageing factor. The relation between the operating parameters and the ageing elements is then analyzed, and the ageing constant values for the degradation model are determined. An additional ageing factor is also derived for more accurate ageing analysis

  3. Model of the naval base logistic interoperability within the multinational operations

    Directory of Open Access Journals (Sweden)

    Bohdan Pac

    2011-12-01

    Full Text Available The paper concerns the model of naval base logistic interoperability within multinational operations conducted at sea by NATO or EU nations. The model includes the set of logistic requirements that NATO and the EU expect from the contributing nations in the area of logistic support provided to forces operating out of their home bases. The model may reflect the scheme configuration, the set of requirements, and their mathematical description for a naval base supporting multinational forces within maritime operations.

  4. Knowledge-enhanced network simulation modeling of the nuclear power plant operator

    International Nuclear Information System (INIS)

    Schryver, J.C.; Palko, L.E.

    1988-01-01

    Simulation models of the human operator of advanced control systems must provide an adequate account of the cognitive processes required to control these systems. The Integrated Reactor Operator/System (INTEROPS) prototype model was developed at Oak Ridge National Laboratory (ORNL) to demonstrate the feasibility of dynamically integrating a cognitive operator model and a continuous plant process model (ARIES-P) to provide predictions of the total response of a nuclear power plant during upset/emergency conditions. The model consists of a SAINT network of cognitive tasks enhanced with expertise provided by a knowledge-based fault diagnosis model. The INTEROPS prototype has been implemented in both closed and open loop modes. The prototype model is shown to be cognitively relevant by accounting for cognitive tunneling, confirmation bias, evidence chunking, intentional error, and forgetting

  5. An information theory-based approach to modeling the information processing of NPP operators

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task. The focus is on i) developing a model for the information processing of NPP operators and ii) quantifying the model. To resolve the problems of previous approaches based on information theory, i.e. the problems of single-channel approaches, we first develop an information processing model having multiple stages, which contains information flows. Then the uncertainty of the information is quantified using Conant's model, which is based on information theory
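The kind of quantification described, measuring the information carried at each processing stage in bits, rests on Shannon entropy. The toy example below illustrates only that general information-theoretic idea, not Conant's specific model; the stage names and distributions are invented.

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical three-stage operator pipeline: perception -> diagnosis -> action.
stages = {
    "perception": [0.25, 0.25, 0.25, 0.25],  # 4 equally likely indications
    "diagnosis":  [0.5, 0.5],                # 2 candidate plant states
    "action":     [1.0],                     # a single prescribed response
}
bits = {name: entropy(p) for name, p in stages.items()}
```

In a multi-stage model of this kind, the per-stage uncertainties bound how much information must flow through each stage during the control task.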

  6. Operator modeling of a loss-of-pumping accident using MicroSAINT

    International Nuclear Information System (INIS)

    Olsen, L.M.

    1992-01-01

    The Savannah River Laboratory (SRL) human factors group has been developing methods for analyzing nuclear reactor operator actions during hypothetical design-basis accident scenarios. The SRL reactors operate at a lower temperature and pressure than power reactors resulting in accident sequences that differ from those of power reactors. Current methodology development is focused on modeling control room operator response times dictated by system event times specified in the Savannah River Site Reactor Safety Analysis Report (SAR). The modeling methods must be flexible enough to incorporate changes to hardware, procedures, or postulated system event times and permit timely evaluation. The initial model developed was for the loss-of-pumping accident (LOPA) because a significant number of operator actions are required to respond to this postulated event. Human factors engineers had been researching and testing a network modeling simulation language called MicroSAINT to simulate operators' personal and interpersonal actions relative to operating system events. The LOPA operator modeling project demonstrated the versatility and flexibility of MicroSAINT for modeling control room crew interactions

  7. Modeling of behaviour of main type personnel in Kozloduy NPP during different operational conditions

    International Nuclear Information System (INIS)

    Hristova, R.; Kalchev, B.

    2000-01-01

    The subject of this article is the modeling of personnel behavior and initiating events, based on the operational experience in the NPP 'Kozloduy' initiating events reports. Developing models from qualitative information is much more difficult than quantitative modeling. The modeling process is based on artificial intelligence theory and methods, including a knowledge base and an inference machine, in the framework of logical models and semantic networks. (author)
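A knowledge base plus inference machine of the kind mentioned can be sketched, in its simplest forward-chaining form, as rules that fire until no new facts emerge. The rules and facts below are invented purely for illustration and do not come from the Kozloduy reports.

```python
def forward_chain(facts, rules):
    """Fire rules (premises -> conclusion) until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented illustration: a tiny knowledge base about an initiating event.
rules = [
    (frozenset({"pump_trip"}), "low_flow"),
    (frozenset({"low_flow", "no_standby_pump"}), "operator_alert"),
]
derived = forward_chain({"pump_trip", "no_standby_pump"}, rules)
```

Semantic-network approaches add structure (typed nodes and relations) on top of this basic derive-until-fixpoint loop.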

  8. On the Use of Variability Operations in the V-Modell XT Software Process Line

    DEFF Research Database (Denmark)

    Kuhrmann, Marco; Méndez Fernández, Daniel; Ternité, Thomas

    2016-01-01

    Software process lines provide a systematic approach to develop and manage software processes. A software process line defines a reference process containing general process assets, whereas a well-defined customization approach allows process engineers to create new process variants, e.g., by extending or modifying process elements. In this article, we present a study on the feasibility of variability operations to support the development of software process lines in the context of the V-Modell XT. We analyze which variability operations are defined and practically used, and we provide an initial catalog of variability operations as an improvement proposal for other process models. Our findings show that 69 variability operation types are defined across several metamodel versions, of which, however, 25 remain unused. The found variability operations allow for systematically modifying the content of process model elements and the process structure.

  9. Knowledge model of trainee for training support system of plant operation

    Energy Technology Data Exchange (ETDEWEB)

    Furuhama, Yutaka; Furuta, Kazuo; Kondo, Shunsuke [Tokyo Univ. (Japan). Faculty of Engineering]

    1996-10-01

    We have already proposed a knowledge model of a trainee, which consists of two layers: hierarchical function and qualitative structure. We developed a method to generate normative operator knowledge based on this knowledge model structure, and to identify the trainee's intention by means of truth maintenance. The methods were tested in a cognitive experiment using a prototype of the training support system. (author)

  10. Safe design and operation of fluidized-bed reactors: Choice between reactor models

    NARCIS (Netherlands)

    Westerink, E.J.; Westerterp, K.R.

    1990-01-01

    For three different catalytic fluidized bed reactor models, two models presented by Werther and a model presented by van Deemter, the region of safe and unique operation for a chosen reaction system was investigated. Three reaction systems were used: the oxidation of benzene to maleic anhydride, the

  11. On the usability of quantitative modelling in operations strategy decission making

    NARCIS (Netherlands)

    Akkermans, H.A.; Bertrand, J.W.M.

    1997-01-01

    Quantitative modelling seems admirably suited to help managers in their strategic decision making on operations management issues, but in practice models are rarely used for this purpose. Investigates the reasons why, based on a detailed cross-case analysis of six cases of modelling-supported

  12. A new harvest operation cost model to evaluate forest harvest layout alternatives

    Science.gov (United States)

    Mark M. Clark; Russell D. Meller; Timothy P. McDonald; Chao Chi Ting

    1997-01-01

    The authors develop a new model for harvest operation costs that can be used to evaluate stands for potential harvest. The model is based on felling, extraction, and access costs, and is unique in its consideration of the interaction between harvest area shapes and access roads. The scientists illustrate the model and evaluate the impact of stand size, volume, and road...

  13. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  14. Classification of effective operators for interactions between the Standard Model and dark matter

    International Nuclear Information System (INIS)

    Duch, M.; Grzadkowski, B.; Wudka, J.

    2015-01-01

    We construct a basis for effective operators responsible for interactions between the Standard Model and a dark sector composed of particles with spin ≤1. Redundant operators are eliminated using dim-4 equations of motion. We consider simple scenarios where the dark matter components are stabilized against decay by ℤ_2 symmetries. We determine operators which are loop-generated within an underlying theory and those that are potentially tree-level generated.

  15. River and Reservoir Operations Model, Truckee River basin, California and Nevada, 1998

    Science.gov (United States)

    Berris, Steven N.; Hess, Glen W.; Bohman, Larry R.

    2001-01-01

    The demand for all uses of water in the Truckee River Basin, California and Nevada, commonly is greater than can be supplied. Storage reservoirs in the system have a maximum effective total capacity equivalent to less than two years of average river flows, so longer-term droughts can result in substantial water-supply shortages for irrigation and municipal users and may stress fish and wildlife ecosystems. Title II of Public Law (P.L.) 101-618, the Truckee–Carson–Pyramid Lake Water Rights Settlement Act of 1990, provides a foundation for negotiating and developing operating criteria, known as the Truckee River Operating Agreement (TROA), to balance interstate and interbasin allocation of water rights among the many interests competing for water from the Truckee River. In addition to TROA, the Truckee River Water Quality Settlement Agreement (WQSA), signed in 1996, provides for acquisition of water rights to resolve water-quality problems during low flows along the Truckee River in Nevada. Efficient execution of many of the planning, management, or environmental assessment requirements of TROA and WQSA will require detailed water-resources data coupled with sound analytical tools. Analytical modeling tools constructed and evaluated with such data could help assess effects of alternative operational scenarios related to reservoir and river operations, water-rights transfers, and changes in irrigation practices. The Truckee–Carson Program of the U.S. Geological Survey, to support U.S. Department of the Interior implementation of P.L. 101-618, is developing a modeling system to support efficient water-resources planning, management, and allocation. The daily operations model documented herein is a part of the modeling system that includes a database management program, a graphical user interface program, and a program with modules that simulate river/reservoir operations and a variety of hydrologic processes. The operations module is capable of simulating lake

  16. MODELLING OF DECISION MAKING OF UNMANNED AERIAL VEHICLE'S OPERATOR IN EMERGENCY SITUATIONS

    Directory of Open Access Journals (Sweden)

    Volodymyr Kharchenko

    2017-03-01

    Full Text Available Purpose: the lack of a recommended action algorithm for the UAV operator in emergency situations; decomposition of the process of decision making (DM) by the UAV operator in emergency situations; development of the structure of a distributed decision support system (DDSS) for remotely piloted aircraft; development of a database of local decision support systems (DSS) for operators of Remotely Piloted Aircraft Systems (RPAS); development of models of DM by the UAV operator. Methods: algorithms for the actions of the UAV operator by the Wald criterion, the Laplace criterion, and the Hurwicz criterion. Results: the program "UAV_AS", which gives the UAV operator recommendations on how to act in case of emergency. Discussion: the article deals with the problem of Unmanned Aerial Vehicle (UAV) flights for the solution of different tasks in emergency situations. Based on statistical data, the types of emergencies for unmanned aircraft were analyzed. The sequence of actions of the UAV operator in case of emergencies is defined.
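The three decision criteria named in the Methods, Wald (maximin), Laplace, and Hurwicz (spelled "Hurwitz" in the record), have compact textbook forms. The sketch below is a generic implementation over an invented payoff matrix, not the UAV_AS program itself; the action names and payoff values are assumptions for illustration.

```python
def wald(payoffs):
    """Maximin: pick the action with the best worst-case payoff."""
    return max(payoffs, key=lambda a: min(payoffs[a]))

def laplace(payoffs):
    """Equal-likelihood: pick the action with the best average payoff."""
    return max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))

def hurwicz(payoffs, alpha=0.5):
    """Blend best and worst case with optimism coefficient alpha."""
    return max(payoffs, key=lambda a: alpha * max(payoffs[a]) +
                                      (1 - alpha) * min(payoffs[a]))

# Invented payoffs for two emergency responses under two scenarios.
payoffs = {
    "force_land": [30, -10],
    "loiter":     [5, 5],
}
```

Note how the criteria can disagree: the pessimistic Wald rule prefers the safe action, while Laplace and a moderately optimistic Hurwicz prefer the higher-average one.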

  17. Pre-operative simulation of pediatric mastoid surgery with 3D-printed temporal bone models.

    Science.gov (United States)

    Rose, Austin S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Rawal, Rounak B; Iseli, Claire E

    2015-05-01

    As the process of additive manufacturing, or three-dimensional (3D) printing, has become more practical and affordable, a number of applications for the technology in the field of pediatric otolaryngology have been considered. One area of promise is temporal bone surgical simulation. Having previously developed a model for temporal bone surgical training using 3D printing, we sought to produce a patient-specific model for pre-operative simulation in pediatric otologic surgery. Our hypothesis was that the creation and pre-operative dissection of such a model was possible, and would demonstrate potential benefits in cases of abnormal temporal bone anatomy. In the case presented, an 11-year-old boy underwent a planned canal-wall-down (CWD) tympano-mastoidectomy for recurrent cholesteatoma preceded by a pre-operative surgical simulation using 3D-printed models of the temporal bone. The models were based on the child's pre-operative clinical CT scan and printed using multiple materials to simulate both bone and soft tissue structures. To help confirm the models as accurate representations of the child's anatomy, distances between various anatomic landmarks were measured and compared to the temporal bone CT scan and the 3D model. The simulation allowed the surgical team to appreciate the child's unusual temporal bone anatomy as well as any challenges that might arise in the safety of the temporal bone laboratory, prior to actual surgery in the operating room (OR). There was minimal variability, in terms of absolute distance (mm) and relative distance (%), in measurements between anatomic landmarks obtained from the patient intra-operatively, the pre-operative CT scan and the 3D-printed models. Accurate 3D temporal bone models can be rapidly produced based on clinical CT scans for pre-operative simulation of specific challenging otologic cases in children, potentially reducing medical errors and improving patient safety. Copyright © 2015 Elsevier Ireland Ltd. 
All rights reserved.

  18. Experience of Hungarian model project: 'Strengthening training for operational safety at Paks NPP'

    International Nuclear Information System (INIS)

    Kiss, I.

    1998-01-01

    Training for operational safety at Paks NPP is described, covering all features of the project, namely: a description of Paks NPP, its properties and performance; the reasons for establishing the Hungarian Model Project and its main goals; the Hungarian and IAEA experts involved in the Project; and its organization, operation, budget and current status, together with its short-term and long-term impact

  19. Testing an integrated model of operations capabilities: An empirical study of Australian airlines

    NARCIS (Netherlands)

    Nand, Alka Ashwini; Singh, Prakash J.; Power, Damien

    2013-01-01

    Purpose - The purpose of this paper is to test the integrated model of operations strategy as proposed by Schmenner and Swink to explain whether firms trade off or accumulate capabilities, taking into account their positions relative to their asset and operating frontiers.

  20. OPERATING OF MOBILE MACHINE UNITS SYSTEM USING THE MODEL OF MULTICOMPONENT COMPLEX MOVEMENT

    Directory of Open Access Journals (Sweden)

    A. Lebedev

    2015-07-01

    Full Text Available To solve the problems of operating mobile machine unit systems, the use of complex multi-component (composite) movement physical models is proposed. Implementation of the proposed method is possible by creating automatic operating systems that control fuel supply to the engines using linear accelerometers. Some examples illustrating the proposed method are offered.

  1. Operating of mobile machine units system using the model of multicomponent complex movement

    OpenAIRE

    A. Lebedev; R. Kaidalov; N. Artiomov; M. Shulyak; M. Podrigalo; D. Abramov; D. Klets

    2015-01-01

    To solve the problems of operating mobile machine unit systems, the use of complex multi-component (composite) movement physical models is proposed. Implementation of the proposed method is possible by creating automatic operating systems that control fuel supply to the engines using linear accelerometers. Some examples illustrating the proposed method are offered.

  2. An operator basis for the Standard Model with an added scalar singlet

    Energy Technology Data Exchange (ETDEWEB)

    Gripaios, Ben [Cavendish Laboratory, J.J. Thomson Avenue, Cambridge (United Kingdom); Sutherland, Dave [Cavendish Laboratory, J.J. Thomson Avenue, Cambridge (United Kingdom); Kavli Institute for Theoretical Physics, UCSB Kohn Hall, Santa Barbara CA (United States)

    2016-08-17

    Motivated by the possible di-gamma resonance at 750 GeV, we present a basis of effective operators for the Standard Model plus a scalar singlet at dimensions 5, 6, and 7. We point out that an earlier list at dimensions 5 and 6 contains two redundant operators at dimension 5.

  3. The effect of dietary fatty acids on post-operative inflammatory response in a porcine model

    DEFF Research Database (Denmark)

    Langerhuus, Sine Nygaard; Jensen, Karin Hjelholt; Tønnesen, Else Kirstine

    2012-01-01

    … sunflower oil (SO, n 28), or animal fat (AF, n 28) was evaluated with respect to post-operative responses in inflammatory markers in a porcine model of aortic vascular prosthetic graft infection. In the early post-operative period … necrosis factor…

  4. Objective ARX Model Order Selection for Multi-Channel Human Operator Identification

    NARCIS (Netherlands)

    Roggenkämper, N; Pool, D.M.; Drop, F.M.; van Paassen, M.M.; Mulder, M.

    2016-01-01

    In manual control, the human operator primarily responds to visual inputs but may elect to make use of other available feedback paths such as physical motion, adopting a multi-channel control strategy. Human operator identification procedures generally require a priori selection of the model …

  5. Quantitative, steady-state properties of Catania's computational model of the operant reserve.

    Science.gov (United States)

    Berg, John P; McDowell, J J

    2011-05-01

    Catania (2005) found that a computational model of the operant reserve (Skinner, 1938) produced realistic behavior in initial, exploratory analyses. Although Catania's operant reserve computational model demonstrated potential to simulate varied behavioral phenomena, the model was not systematically tested. The current project replicated and extended the Catania model, clarified its capabilities through systematic testing, and determined the extent to which it produces behavior corresponding to matching theory. Significant departures from both classic and modern matching theory were found in behavior generated by the model across all conditions. The results suggest that a simple, dynamic operant model of the reflex reserve does not simulate realistic steady state behavior. Copyright © 2011 Elsevier B.V. All rights reserved.
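
    The matching-theory benchmark against which the model's behavior was compared can be sketched numerically with the generalized matching law. This is an illustrative calculation only; the bias and sensitivity values are invented, and it is not Catania's reserve model itself.

```python
def generalized_matching(r1, r2, b=1.0, s=0.8):
    """Generalized matching law: B1/B2 = b * (r1/r2)**s.
    Returns the predicted proportion of behavior allocated to option 1.
    b (bias) and s (sensitivity) are illustrative, not fitted values."""
    ratio = b * (r1 / r2) ** s
    return ratio / (1.0 + ratio)

# Equal reinforcement rates predict indifference for an unbiased organism
print(generalized_matching(30, 30))  # 0.5
# The richer schedule draws proportionally more behavior (undermatching, s < 1)
print(round(generalized_matching(60, 30), 3))
```

    Systematic departures of a model's simulated response ratios from this relation are the kind of mismatch the study reports.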

  6. Computer-Aided Model Based Analysis for Design and Operation of a Copolymerization Process

    DEFF Research Database (Denmark)

    Lopez-Arenas, Maria Teresa; Sales-Cruz, Alfonso Mauricio; Gani, Rafiqul

    2006-01-01

    The advances in computer science and computational algorithms for process modelling, process simulation, numerical methods and design/synthesis algorithms make it advantageous and helpful to employ computer-aided modelling systems and tools for integrated process analysis. This is illustrated in this work, where, through the computer-aided modelling system ICAS-MoT, two first-principles models have been investigated with respect to design and operational issues for solution copolymerization reactors in general, and for the methyl methacrylate/vinyl acetate system in particular. This will allow analysis of the process behaviour, contribute to a better understanding of the polymerization process, help to avoid unsafe conditions of operation, and support the development of operational and optimizing control strategies. Model 1 is taken from the literature and is commonly used for the low-conversion region, while Model 2 has …

  7. Influence of magnetic field on swap operation in Heisenberg XXZ model

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jia [Department of Physics, Beijing University of Aeronautics and Astronautics, Beijing 100083 (China); Zhang Guofeng, E-mail: gf1978zhang@buaa.edu.c [Department of Physics, Beijing University of Aeronautics and Astronautics, Beijing 100083 (China); Chen Ziyu [Department of Physics, Beijing University of Aeronautics and Astronautics, Beijing 100083 (China)

    2009-05-01

    The swap operation based on a two-qubit Heisenberg XXZ model under a uniform magnetic field of arbitrary direction and magnitude is investigated. It is shown that a swap gate can be implemented under certain conditions, and its feasibility is established.

  8. Influence of magnetic field on swap operation in Heisenberg XXZ model

    International Nuclear Information System (INIS)

    Liu Jia; Zhang Guofeng; Chen Ziyu

    2009-01-01

    The swap operation based on a two-qubit Heisenberg XXZ model under a uniform magnetic field of arbitrary direction and magnitude is investigated. It is shown that a swap gate can be implemented under certain conditions, and its feasibility is established.

  9. Advanced Modeling of Ramp Operations including Departure Status at Secondary Airports, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This project addresses three modeling elements relevant to NASA's IADS research and ATD-2 project, two related to ramp operations at primary airports and one related...

  10. River Stream-Flow and Zayanderoud Reservoir Operation Modeling Using the Fuzzy Inference System

    Directory of Open Access Journals (Sweden)

    Saeed Jamali

    2007-12-01

    Full Text Available The Zayanderoud basin is located in the central plateau of Iran. As a result of population increase and agricultural and industrial development, water demand in this basin has increased extensively. Given the importance of reservoir operation in water resource and management studies, the performance of a fuzzy inference system (FIS) for Zayanderoud reservoir operation is investigated in this paper. The operation model consists of two parts. In the first part, the seasonal river stream-flow is forecasted using the fuzzy rule-based system; the Southern Oscillation Index, rain, snow, and discharge are the inputs of the model, and the seasonal river stream-flow is its output. In the second part, the operation model is constructed. The amount of release is first optimized by a nonlinear optimization model, and the rule curves are then extracted using the fuzzy inference system. This model operates on an "if-then" principle, where the "if" is a vector of fuzzy premises and the "then" is the fuzzy result. The reservoir storage capacity, inflow, demand, and year-condition factor are used as premises, and monthly release is taken as the consequence. The Zayanderoud basin is investigated as a case study, and different performance indices such as reliability, resiliency, and vulnerability are calculated. According to the results, the FIS works more effectively than traditional reservoir operation methods such as standard operation policy (SOP) or linear regression.
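
    The "if-then" fuzzy rule evaluation described above can be sketched with a toy Mamdani-style inference step. The triangular membership functions, the two-rule base, and all numbers below are invented for illustration and are unrelated to the actual Zayanderoud rule curves.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(storage, inflow):
    """Two hypothetical rules on a 0-100 scale:
    Rule 1: IF storage is low  AND inflow is low  THEN release is small (centroid 20)
    Rule 2: IF storage is high AND inflow is high THEN release is large (centroid 80)"""
    w1 = min(tri(storage, 0, 25, 60), tri(inflow, 0, 25, 60))
    w2 = min(tri(storage, 40, 75, 100), tri(inflow, 40, 75, 100))
    if w1 + w2 == 0:
        return None  # no rule fires
    # Weighted-average defuzzification over the rule consequents
    return (w1 * 20 + w2 * 80) / (w1 + w2)

print(infer(20, 20))  # low storage and inflow -> small release
print(infer(80, 80))  # high storage and inflow -> large release
```

    A real rule base would cover all premise combinations (storage, inflow, demand, year condition) and defuzzify over full output membership sets rather than single centroids.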

  11. A survey on the technologies and cases for the cognitive models of nuclear power plant operators

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Chun, Se Woo; Seo, Sang Moon; Lee, Hyun Chul

    1993-04-01

    To enhance the safety and availability of nuclear power plants, it is necessary to develop methodologies which can systematically analyze the interrelationships between plant operators and the main process systems. Operator cognitive models provide an explicit method to analyze how an operator's cognitive behavior reacts to changes in system behavior. However, because no adequate model has been developed up to now, it is difficult to take an effective approach to the review, assessment and improvement of human factors. In this study, we have surveyed the techniques and cases of operator model development, aiming to develop an operator model as one of the human engineering application methodologies. We have analyzed the cognitive characteristics of decision-making, which is one of the principal factors for modeling, and reviewed the methodologies and implementation techniques used in the cases of model development. We investigated the tendencies of model development by reviewing ten cases, especially the CES, INTEROPS and COSIMO models which have been developed or are under development in the nuclear field. We also summarized the cognitive characteristics to be considered when modeling operator decision-making. For modeling methodologies, we found a trend toward software simulations based on artificial intelligence technologies, especially focused on knowledge representation methods. Based on the results of our survey, we proposed a development approach and several urgent research subjects. We suggested developing simulation tools applicable to the review, assessment and improvement of human factors, implemented as software using expert system development tools. The results of this study have been applied to our long-term project named 'The Development of Human Engineering Technologies.' (Author)

  12. The Regional Special Operations Headquarters: Franchising the NATO Model as a Hedge in Lean Times

    Science.gov (United States)

    2012-04-01

    Air Force Fellows, Air University. The Regional Special Operations Headquarters: Franchising the NATO Model as a Hedge in Lean Times. This work is not copyrighted, but is the property of the United States government.

  13. PWR plant operator training used full scope simulator incorporated MAAP model

    International Nuclear Information System (INIS)

    Matsumoto, Y.; Tabuchi, T.; Yamashita, T.; Komatsu, Y.; Tsubouchi, K.; Banka, T.; Mochizuki, T.; Nishimura, K.; Iizuka, H.

    2015-01-01

    As part of its advanced training, NTC works to deepen operators' understanding of plant behavior during core damage accidents. Following the Fukushima Daiichi Nuclear Power Station accident, we introduced the MAAP model into the PWR operator training full-scope simulator and also developed the Severe Accident Visual Display unit. From 2014, we will introduce a new training program for core damage accidents using the PWR operator training full-scope simulator incorporating the MAAP model and the Severe Accident Visual Display unit. (author)

  14. Modeling and operation optimization of a proton exchange membrane fuel cell system for maximum efficiency

    International Nuclear Information System (INIS)

    Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock

    2016-01-01

    Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operation conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful for a substantial increase in the efficiency of the fuel cell system.
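
    The constrained operation-optimization idea can be illustrated with a minimal sketch: a brute-force search for the operating point that maximizes an assumed empirical efficiency surrogate under bound constraints. The surrogate function, the choice of operating variables (air stoichiometry and cathode pressure), and all coefficients are hypothetical, standing in for the paper's neural-network and semi-empirical component models.

```python
# Hypothetical efficiency surrogate eta(s, p) with a single interior maximum;
# the quadratic form and its coefficients are illustrative only.
def efficiency(s, p):
    return 0.50 - 0.08 * (s - 1.8) ** 2 - 0.05 * (p - 1.5) ** 2

def optimize(s_range=(1.2, 2.6), p_range=(1.0, 2.2), steps=200):
    """Brute-force search over the bounded operating window for the
    (stoichiometry, pressure) pair giving maximum system efficiency."""
    best = (-1.0, None, None)
    for i in range(steps + 1):
        s = s_range[0] + (s_range[1] - s_range[0]) * i / steps
        for j in range(steps + 1):
            p = p_range[0] + (p_range[1] - p_range[0]) * j / steps
            eta = efficiency(s, p)
            if eta > best[0]:
                best = (eta, s, p)
    return best

eta, s, p = optimize()
print(f"max efficiency {eta:.3f} at stoichiometry {s:.2f}, pressure {p:.2f} bar")
```

    In the study itself the search is a formal constrained optimization over validated component models rather than a grid scan, but the structure (objective, operating variables, bounds) is the same.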

  15. A Dynamic Operation Permission Technique Based on an MFM Model and Numerical Simulation

    International Nuclear Information System (INIS)

    Akio, Gofuku; Masahiro, Yonemura

    2011-01-01

    It is important to support operator activities in an abnormal plant situation where many counter actions are taken in a relatively short time. The authors proposed a technique called dynamic operation permission to decrease human errors without suppressing the creative ideas of operators coping with an abnormal plant situation, by checking whether a counter action taken is consistent with the emergency operation procedure. If the counter action is inconsistent, a dynamic operation permission system warns the operators. It also explains how and why the counter action is inconsistent and what influence it will have on future plant behavior, using a qualitative influence inference technique based on an MFM (Multilevel Flow Modelling) model. However, the previous dynamic operation permission is not able to explain quantitative effects on future plant behavior. Moreover, many possible influence paths are derived, because qualitative reasoning does not give a solution when positive and negative influences are propagated to the same node. This study extends dynamic operation permission by combining qualitative reasoning with a numerical simulation technique. The qualitative reasoning based on an MFM model of the plant derives all possible influence propagation paths. A numerical simulation then predicts the future plant behavior in the case of taking a counter action, and influence propagations that do not coincide with the simulation results are excluded from the possible influence paths. The extended technique is implemented in a dynamic operation permission system for an oil refinery plant, for which an MFM model and a static numerical simulator were developed. The results of dynamic operation permission for some abnormal plant situations show the improvement in the accuracy of dynamic operation permission and in the quality of the explanation of the effects of the counter action taken

  16. Operational freight carrier planning basic concepts, optimization models and advanced memetic algorithms

    CERN Document Server

    Schönberger, Jörn

    2005-01-01

    The modern freight carrier business requires sophisticated automatic decision support in order to ensure the efficiency and reliability, and therefore the survival, of transport service providers. This book addresses these challenges and provides generic decision models for short-term operations planning as well as advanced metaheuristics to obtain efficient operation plans. After a thorough analysis of operations planning in the freight carrier business, decision models are derived. Their suitability is proven in a large number of numerical experiments, in which a new class of hybrid genetic search approaches demonstrates its appropriateness.

  17. Modelling and operation strategies of DLR's large scale thermocline test facility (TESIS)

    Science.gov (United States)

    Odenthal, Christian; Breidenbach, Nils; Bauer, Thomas

    2017-06-01

    In this work an overview of the TESIS:store thermocline test facility and its current construction status is given. Based on this, the TESIS:store facility using sensible solid filler material is modelled with a fully transient model implemented in MATLAB®. Results in terms of the impact of filler size and operation strategies are presented. While low porosity and small particle diameters for the filler material are beneficial, the operation strategy is one key element with potential for optimization. It is shown that plant operators have to weigh utilization against exergetic efficiency. Different durations of the charging and discharging periods offer further potential for optimization.

  18. Modeling the design and operations of the federal radioactive waste management system

    International Nuclear Information System (INIS)

    Joy, D.S.; Nehls, J.W. Jr.; Harrison, I.G.; Miller, C.; Vogel, L.W.; Martin, J.D.; Capone, R.L.; Dougherty, L.

    1989-04-01

    Many configuration, transportation and operating alternatives are available to the Office of Civilian Radioactive Waste Management (OCRWM) in the design and operation of the Federal Radioactive Waste Management System (FWMS). Each alternative has different potential impacts on system throughput, efficiency and the thermal and radiological characteristics of the waste to be shipped, stored and emplaced. A need therefore exists for a quantitative means of assessing the ramifications of alternative system designs and operating strategies. We developed the Systems Integration Operations/Logistics Model (SOLMOD), which is used to replicate a user-specified system configuration and simulate the operation of that system -- from waste pickup at reactors to emplacement in a repository -- under a variety of operating strategies. The model can thus be used to assess system performance with or without Monitored Retrievable Storage (MRS), with or without consolidation at the repository, with varying shipping cask availability and so forth. This simulation capability is also intended to provide a tool for examining the impact of facility and equipment capacity and redundancy on overall waste processing capacity and system performance. SOLMOD can measure the impacts on system performance of certain operating contingencies. It can be used to test the effects on transportation and waste pickup schedules of a shut-down of one or more hot cells in the waste handling building at the repository or MRS. Simulation can also be used to study operating procedures and rules such as fuel pickup schedules and general freight vs. dedicated freight. 3 refs., 2 figs., 2 tabs

  19. Operator models for delivering municipal solid waste management services in developing countries: Part B: Decision support.

    Science.gov (United States)

    Soós, Reka; Whiteman, Andrew D; Wilson, David C; Briciu, Cosmin; Nürnberger, Sofia; Oelz, Barbara; Gunsilius, Ellen; Schwehn, Ekkehard

    2017-08-01

    This is the second of two papers reporting the results of a major study considering 'operator models' for municipal solid waste management (MSWM) in emerging and developing countries. Part A documents the evidence base, while Part B presents a four-step decision support system for selecting an appropriate operator model in a particular local situation. Step 1 focuses on understanding local problems and framework conditions; Step 2 on formulating and prioritising local objectives; and Step 3 on assessing capacities and conditions, and thus identifying strengths and weaknesses, which underpin selection of the operator model. Step 4A addresses three generic questions, including public versus private operation, inter-municipal co-operation and integration of services. For steps 1-4A, checklists have been developed as decision support tools. Step 4B helps choose locally appropriate models from an evidence-based set of 42 common operator models (coms); decision support tools here are a detailed catalogue of the coms, setting out advantages and disadvantages of each, and a decision-making flowchart. The decision-making process is iterative, repeating steps 2-4 as required. The advantages of a more formal process include avoiding pre-selection of a particular com known to and favoured by one decision maker, and also its assistance in identifying the possible weaknesses and aspects to consider in the selection and design of operator models. To make the best of whichever operator models are selected, key issues which need to be addressed include the capacity of the public authority as 'client', management in general and financial management in particular.

  20. The operator model as a framework of research on errors and temporal, qualitative and analogical reasoning

    International Nuclear Information System (INIS)

    Decortis, F.; Drozdowicz, B.; Masson, M.

    1990-01-01

    In this paper the needs and requirements for developing a cognitive model of a human operator are discussed and the computer architecture, currently being developed, is described. Given the approach taken, namely the division of the problem into specialised tasks within an area and using the architecture chosen, it is possible to build independently several cognitive and psychological models such as errors and stress models, as well as models of temporal, qualitative and an analogical reasoning. (author)

  1. A method for aggregating external operating conditions in multi-generation system optimization models

    DEFF Research Database (Denmark)

    Lythcke-Jørgensen, Christoffer Ernst; Münster, Marie; Ensinas, Adriano Viana

    2016-01-01

    This paper presents a novel, simple method for reducing external operating condition datasets to be used in multi-generation system optimization models. The method, called the Characteristic Operating Pattern (CHOP) method, is a visually-based aggregation method that clusters reference data based on parameter values rather than time of occurrence, thereby preserving important information on short-term relations between the relevant operating parameters. This is opposed to commonly used methods where data are averaged over chronological periods (months or years), and extreme conditions are hidden in the averaged values. The CHOP method is tested in a case study where the operation of a fictive Danish combined heat and power plant is optimized over a historical 5-year period. The optimization model is solved using the full external operating condition dataset, a reduced dataset obtained using the CHOP …
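
    The core aggregation idea, clustering operating points by parameter value rather than by time of occurrence, can be sketched as follows. The equal-width binning, the toy price/heat-demand series, and the bin edges are invented simplifications; the actual CHOP method is a visually-based aggregation, not this mechanical scheme.

```python
from collections import defaultdict

def chop_aggregate(series, edges):
    """Group (price, demand) operating points into value-based bins and
    return one representative (mean price, mean demand, weight) per bin,
    instead of averaging over chronological periods."""
    bins = defaultdict(list)
    for price, demand in series:
        # Bin index = number of edges the price value meets or exceeds
        idx = sum(price >= e for e in edges)
        bins[idx].append((price, demand))
    out = {}
    for idx, pts in bins.items():
        n = len(pts)
        out[idx] = (sum(p for p, _ in pts) / n, sum(d for _, d in pts) / n, n)
    return out

# Toy hourly data: (electricity price, heat demand)
data = [(10, 80), (12, 75), (45, 60), (50, 55), (90, 30)]
for idx, (price, demand, weight) in sorted(chop_aggregate(data, [30, 70]).items()):
    print(f"bin {idx}: mean price {price:.1f}, mean demand {demand:.1f}, weight {weight}")
```

    The weight of each representative point carries its frequency into the optimization model, so rare extreme conditions survive as their own (lightly weighted) operating points rather than being averaged away.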

  2. Trajectory-based morphological operators: a model for efficient image processing.

    Science.gov (United States)

    Jimeno-Morenilla, Antonio; Pujol, Francisco A; Molina-Carmona, Rafael; Sánchez-Romero, José L; Pujol, Mar

    2014-01-01

    Mathematical morphology has been an area of intensive research over the last few years. Although many remarkable advances have been achieved throughout these years, there is still a great interest in accelerating morphological operations so that they can be implemented in real-time systems. In this work, we present a new model for computing mathematical morphology operations, the so-called morphological trajectory model (MTM), in which a morphological filter is divided into a sequence of basic operations. A trajectory-based morphological operation (such as dilation or erosion) is then defined as the set of points resulting from the ordered application of the instant basic operations. The MTM approach allows working with different structuring elements, such as disks. From the experiments, it can be concluded that our method is independent of the structuring element size and can be easily applied to industrial systems and high-resolution images.
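
    For context, a plain binary dilation with a square structuring element, the kind of basic operation an MTM trajectory decomposes a filter into, can be written directly. This baseline sketch is not the trajectory model itself, and a 3x3 square is used rather than the disks discussed above.

```python
def dilate(img, se_radius=1):
    """Binary dilation: an output pixel is 1 if any input pixel under the
    structuring element (a (2r+1) x (2r+1) square) is 1."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-se_radius, se_radius + 1):
                for dx in range(-se_radius, se_radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny][nx]:
                        out[y][x] = 1
    return out

# A single foreground pixel grows into the 3x3 neighborhood around it
img = [[0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
for row in dilate(img):
    print(row)
```

    The naive cost here scales with the structuring-element area per pixel, which is exactly the dependence the MTM aims to remove by replacing the element with an ordered trajectory of basic steps.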

  3. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    Full Text Available In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed with statistical methods applied to their historical operating data. The characteristic indexes and the filtering principle of the NEPG historical output scenarios are then introduced with the confidence level, and a calculation model for NEPG's credible capacity is proposed. Based on this, taking the minimum production cost or the best energy-saving and emission-reduction effect as the optimization objective, a power system operation model with large-scale integration of NEPG is established considering the power balance, the electricity balance and the peak balance. Besides, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reserve, the emergency reserve, the water abandonment and the transmission capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out on the actual Northwest power grid of China, resolving new energy power accommodation under different system operating conditions. The simulation results verify the validity of the proposed power system operation model in the accommodation analysis for a power system penetrated with large-scale NEPG.

  4. The operable modeling of simultaneous saccharification and fermentation of ethanol production from cellulose.

    Science.gov (United States)

    Shen, Jiacheng; Agblevor, Foster A

    2010-03-01

    An operable batch model of simultaneous saccharification and fermentation (SSF) for ethanol production from cellulose has been developed. The model includes four ordinary differential equations that describe the changes of cellobiose, glucose, yeast, and ethanol concentrations with respect to time. These equations were used to simulate the experimental data for the four main components in the SSF process of ethanol production from microcrystalline cellulose (Avicel PH101). The model parameters at 95% confidence intervals were determined by a MATLAB program based on the batch experimental data of the SSF. Both experimental data and model simulations showed that cell growth was the rate-controlling step in the initial period of the series of reactions from cellulose to ethanol, and that later the conversion of cellulose to cellobiose controlled the process. The batch model was extended to continuous and fed-batch operating models. For continuous operation of the SSF, the ethanol productivity increased with increasing dilution rate until a maximum value was attained, and rapidly decreased as the dilution rate approached the washout point. The model also predicted a higher ethanol mass for fed-batch operation than for batch operation.
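
    A four-state batch model of this general shape can be sketched with a simple explicit-Euler integration. The kinetic forms (first-order hydrolysis, Monod growth, growth-coupled ethanol yield) and every rate constant below are invented placeholders, not the paper's fitted parameters, and for brevity the sketch lumps the cellulose-to-cellobiose-to-glucose steps into one hydrolysis term.

```python
def ssf_batch(t_end=48.0, dt=0.01):
    """Toy SSF batch simulation; states in g/L: cellulose C, glucose G,
    cells X, ethanol P. All kinetics and constants are illustrative."""
    C, G, X, P = 50.0, 0.0, 0.1, 0.0
    k_h = 0.05              # 1/h, first-order hydrolysis of cellulose to glucose
    mu_max, K_s = 0.3, 0.5  # Monod growth of yeast on glucose
    Y_xs, Y_ps = 0.1, 0.45  # yields: cells and ethanol per glucose consumed
    t = 0.0
    while t < t_end:
        hydrolysis = k_h * C
        mu = mu_max * G / (K_s + G)
        growth = mu * X
        uptake = growth / Y_xs
        C += dt * (-hydrolysis)
        G += dt * (hydrolysis * 1.11 - uptake)  # 1.11: hydration mass gain
        G = max(G, 0.0)
        X += dt * growth
        P += dt * (Y_ps * uptake)
        t += dt
    return C, G, X, P

C, G, X, P = ssf_batch()
print(f"after 48 h: cellulose {C:.1f}, glucose {G:.2f}, cells {X:.2f}, ethanol {P:.1f} g/L")
```

    Even this crude sketch reproduces the qualitative regimes described above: cell growth limits the rate early on (glucose transiently accumulates while X is small), after which hydrolysis of the solid substrate becomes the controlling step.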

  5. Development of a subway operation incident delay model using accelerated failure time approaches.

    Science.gov (United States)

    Weng, Jinxian; Zheng, Yang; Yan, Xuedong; Meng, Qiang

    2014-12-01

    This study aims to develop a subway operational incident delay model using the parametric accelerated failure time (AFT) approach. Six parametric AFT models, including log-logistic, lognormal and Weibull models with fixed and random parameters, are built based on Hong Kong subway operation incident data from 2005 to 2012. In addition, a Weibull model with gamma heterogeneity is considered to compare model performance. The goodness-of-fit test results show that the log-logistic AFT model with random parameters is most suitable for estimating subway incident delay. The results show that a longer subway operation incident delay is highly correlated with the following factors: power cable failure, signal cable failure, turnout communication disruption and crashes involving a casualty, whereas vehicle failure has the least impact on the increment of subway operation incident delay. According to these results, several possible measures, such as the use of short-distance and wireless communication technology (e.g., WiFi and ZigBee), are suggested to shorten the delay caused by subway operation incidents. Finally, the temporal transferability test results show that the developed log-logistic AFT model with random parameters is stable over time. Copyright © 2014 Elsevier Ltd. All rights reserved.
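
    The log-logistic survival curve underlying the selected AFT model, and the way an AFT covariate rescales the time axis, can be sketched as follows. The scale, shape, and coefficient values are invented for illustration, not the fitted Hong Kong estimates.

```python
import math

def loglogistic_survival(t, scale, shape):
    """Log-logistic survival function S(t) = 1 / (1 + (t/scale)**shape):
    the probability that an incident delay exceeds t."""
    return 1.0 / (1.0 + (t / scale) ** shape)

# Baseline incident-delay distribution (minutes); illustrative parameters
scale, shape = 10.0, 2.0
print(loglogistic_survival(10.0, scale, shape))  # 0.5: the median sits at t = scale

# AFT interpretation: a covariate with coefficient beta multiplies all
# quantiles of the delay distribution by the acceleration factor exp(beta).
beta = 0.4  # hypothetical coefficient for, say, a signal cable failure
print(round(math.exp(beta) * scale, 2))  # covariate-adjusted median delay (min)
```

    In the AFT framework every quantile scales by the same factor, which is what makes coefficient estimates directly interpretable as percentage changes in delay duration.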

  6. Using model based systems engineering for the development of the Large Synoptic Survey Telescope's operational plan

    Science.gov (United States)

    Selvy, Brian M.; Claver, Charles; Willman, Beth; Petravick, Don; Johnson, Margaret; Reil, Kevin; Marshall, Stuart; Thomas, Sandrine; Lotz, Paul; Schumacher, German; Lim, Kian-Tat; Jenness, Tim; Jacoby, Suzanne; Emmons, Ben; Axelrod, Tim

    2016-08-01

    We provide an overview of the Model Based Systems Engineering (MBSE) language, tool, and methodology being used in our development of the Operational Plan for Large Synoptic Survey Telescope (LSST) operations. LSST's Systems Engineering (SE) team is using a model-based approach to operational plan development to: 1) capture the top-down stakeholders' needs and functional allocations defining the scope, required tasks, and personnel needed for operations, and 2) capture the bottom-up operations and maintenance activities required to conduct the LSST survey across its distributed operations sites for the full ten-year survey duration. To accomplish these complementary goals and ensure that they produce self-consistent results, we have developed a holistic approach using the Sparx Enterprise Architect modeling tool and the Systems Modeling Language (SysML). This approach utilizes SysML Use Cases, Actors, associated relationships, and Activity Diagrams to document and refine all of the major operations and maintenance activities that will be required to successfully operate the observatory and meet stakeholder expectations. We have developed several customized extensions of the SysML language, including a custom stereotyped Use Case element with unique tagged values, as well as unique association connectors and Actor stereotypes. We demonstrate that this customized MBSE methodology enables us to define: 1) the roles each human Actor must take on to successfully carry out the activities associated with the Use Cases; 2) the skills each Actor must possess; 3) the functional allocation of all required stakeholder activities and Use Cases to the organizational entities tasked with carrying them out; and 4) the organizational structure required to successfully execute the operational survey. Our approach allows for continual refinement utilizing the systems engineering spiral method to expose finer levels of detail as necessary. For example, the bottom-up, Use Case

  7. Wake meandering of a model wind turbine operating in two different regimes

    Science.gov (United States)

    Foti, Daniel; Yang, Xiaolei; Campagnolo, Filippo; Maniaci, David; Sotiropoulos, Fotis

    2018-05-01

    The flow behind a model wind turbine under two different turbine operating regimes (region 2 for turbine operating at optimal condition with the maximum power coefficient and 1.4-deg pitch angle and region 3 for turbine operating at suboptimal condition with a lower power coefficient and 7-deg pitch angle) is investigated using wind tunnel experiments and numerical experiments using large-eddy simulation (LES) with actuator surface models for turbine blades and nacelle. Measurements from the model wind turbine experiment reveal that the power coefficient and turbine wake are affected by the operating regime. Simulations with and without a nacelle model are carried out for each operating condition to study the influence of the operating regime and nacelle on the formation of the hub vortex and wake meandering. Statistics and energy spectra of the simulated wakes are in good agreement with the measurements. For simulations with a nacelle model, the mean flow field is composed of an outer wake, caused by energy extraction by turbine blades, and an inner wake directly behind the nacelle, while for the simulations without a nacelle model, the central region of the wake is occupied by a jet. The simulations with the nacelle model reveal an unstable helical hub vortex expanding outward toward the outer wake, while the simulations without a nacelle model show a stable and columnar hub vortex. Because of the different interactions of the inner region of the wake with the outer region of the wake, a region with higher turbulence intensity is observed in the tip shear layer for the simulation with a nacelle model. The hub vortex for the turbine operating in region 3 remains in a tight helical spiral and intercepts the outer wake a few diameters further downstream than for the turbine operating in region 2. Wake meandering, a low-frequency large-scale motion of the wake, commences in the region of high turbulence intensity for all simulations with and without a nacelle model.

  8. Singular Spectrum Near a Singular Point of Friedrichs Model Operators of Absolute Type

    International Nuclear Information System (INIS)

    Iakovlev, Serguei I.

    2006-01-01

    In L²(ℝ) we consider a family of self-adjoint operators of the Friedrichs model: A_m = |t|^m + V. Here |t|^m is the operator of multiplication by the corresponding function of the independent variable t ∈ ℝ, and the perturbation V is a trace-class integral operator with a continuous Hermitian kernel ν(t,x) satisfying a certain smoothness condition. These absolute-type operators have one singular point of order m > 0. Conditions on the kernel ν(t,x) are found that guarantee the absence of point spectrum and singular continuous spectrum of such operators near the origin. These conditions are actually necessary and sufficient. They depend on the finiteness of the rank of the perturbation operator and on the order of the singularity. The sharpness of these conditions is confirmed by counterexamples.
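    In standard notation the operator family reads as below; the integral form of V is the usual one for a trace-class integral operator with kernel ν and is a reconstruction, since the abstract does not write it out:

```latex
A_m = |t|^m + V \ \ \text{on } L^2(\mathbb{R}),
\qquad (Vf)(t) = \int_{\mathbb{R}} \nu(t,x)\, f(x)\, dx .
```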

  9. A life cycle cost economics model for projects with uniformly varying operating costs. [management planning

    Science.gov (United States)

    Remer, D. S.

    1977-01-01

    A mathematical model is developed for calculating the life cycle cost of a project whose operating costs increase or decrease linearly with time. The life cycle cost is shown to be a function of the investment cost, initial operating cost, operating cost gradient, project lifetime, interest rate for capital, and salvage value. The results show that the life cycle cost of a project can be grossly underestimated (or overestimated) if the operating costs increase (or decrease) uniformly over time rather than remaining constant, as is often assumed in project economic evaluations. The following ranges of variables are examined: (1) project life from 2 to 30 years; (2) interest rate from 0 to 15 percent per year; and (3) operating cost gradient from 5 to 90 percent of the initial operating cost. A numerical example, together with tables and graphs, is given to help calculate project life cycle costs over a wide range of variables.
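    The cost structure described here maps onto the standard engineering-economics present-worth factors. A minimal sketch (the function names and series form are illustrative, not taken from the paper) that discounts a first-year operating cost, a uniform arithmetic cost gradient, and a salvage value:

```python
def pw_uniform(a, i, n):
    """Present worth of a uniform annual amount a over n years at interest rate i."""
    return a * ((1 + i) ** n - 1) / (i * (1 + i) ** n)

def pw_gradient(g, i, n):
    """Present worth of an arithmetic gradient g (cost grows by g each year,
    starting from year 2, per the standard (P/G, i, n) factor)."""
    return g * ((1 + i) ** n - i * n - 1) / (i ** 2 * (1 + i) ** n)

def life_cycle_cost(investment, first_year_opex, gradient, i, n, salvage=0.0):
    """LCC = investment + PW(uniform opex) + PW(gradient) - PW(salvage)."""
    return (investment
            + pw_uniform(first_year_opex, i, n)
            + pw_gradient(gradient, i, n)
            - salvage / (1 + i) ** n)
```

With a positive gradient the later-year costs grow, so ignoring the gradient underestimates the life cycle cost, which is the effect the paper quantifies.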

  10. A hypothesis generation model of initiating events for nuclear power plant operators

    International Nuclear Information System (INIS)

    Sawhney, R.S.; Dodds, H.L.; Schryver, J.C.; Knee, H.E.

    1989-01-01

    The goal of existing alarm-filtering models is to provide the operator with the most accurate assessment of patterns of annunciated alarms. Some models are based on event-tree analysis, such as DuPont's Diagnosis of Multiple Alarms. Other models focus on improving hypothesis generation by deemphasizing alarms not relevant to the current plant scenario. Many such models utilize an alarm filtering system (AFS) as the basis of dynamic prioritization. The Lisp-based alarm analysis model presented in this paper was developed for the Advanced Controls Program at Oak Ridge National Laboratory to dynamically prioritize hypotheses via an AFS by incorporating unannunciated alarm analysis with other plant-based concepts. The objective of this effort is to develop an alarm analysis model that allows greater flexibility and more accurate hypothesis generation than the prototype fault diagnosis model utilized in the Integrated Reactor Operator/System (INTEROPS) model. INTEROPS is a time-based predictive model of the nuclear power plant operator, which utilizes alarm information in a manner similar to the human operator. This is achieved by recoding the knowledge base from a personal-computer-based expert system shell into a Common Lisp structure, providing the ability to easily modify both the manner in which the knowledge is structured and the logic by which the program performs fault diagnosis.

  11. Modeling of human operator dynamics in simple manual control utilizing time series analysis. [tracking (position)

    Science.gov (United States)

    Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool encompassing the processes of identification, estimation, and diagnostic checking. A parameter-constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second-order dynamic system in both pursuit and compensatory tracking modes. In comparing data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.
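    As a rough illustration of the identification step (a sketch, not the authors' code; a discrete ARX structure fitted by ordinary least squares stands in for the paper's time-series procedure), a second-order model can be fitted to sampled tracking data:

```python
import numpy as np

def fit_second_order_arx(u, y):
    """Fit y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2]
    by ordinary least squares over a record of input u and output y.
    Returns the coefficient vector [a1, a2, b1, b2]."""
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)
    k = np.arange(2, len(y))
    # regressor matrix built from lagged outputs and inputs
    X = np.column_stack([y[k - 1], y[k - 2], u[k - 1], u[k - 2]])
    theta, *_ = np.linalg.lstsq(X, y[k], rcond=None)
    return theta
```

On noise-free synthetic data generated from known coefficients the estimates recover those coefficients; on real tracking records the residual spectrum would then be checked for whiteness, as the abstract describes.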

  12. A simplified thermal model for a clothed human operator with thermoregulation

    Directory of Open Access Journals (Sweden)

    Zahid Akhtar khan

    2010-09-01

    This paper presents a simplified yet comprehensive mathematical model to predict the steady-state temperature distribution in various regions of a clothed male human operator who is healthy, passive or active, and lean or obese, under different environmental conditions, using a thermoregulatory control concept. The present model predicts a core temperature close to 37 °C for a healthy, passive/active and lean/obese operator at normal ambient temperatures. It is observed that, with an increase in body fat (BF), the skin temperature of the operator decreases by a small amount; however, the effect of the operator's age on skin temperature is found to be insignificant. The present model has been validated against experimental data available in the literature.

  13. Multilevel flow models studio: human-centralized development for operation support system

    International Nuclear Information System (INIS)

    Zhou Yangping; Hidekazu Yoshikawa; Liu Jingquan; Yang Ming; Ouyang Jun

    2005-01-01

    Computerized Operation Support Systems (COSS), integrating artificial intelligence, multimedia, and network technology, are now being proposed to reduce operators' cognitive load during process operation. This study proposes a Human-Centralized Development (HCD) approach whereby a COSS can be developed and maintained independently, conveniently, and flexibly by the operators and domain experts of an industrial system who have little expertise in software development. A graphical interface system for HCD, Multilevel Flow Models Studio (MFMS), is proposed to assist the development of COSS. An Extensible Markup Language (XML) based file structure is designed to represent the Multilevel Flow Models (MFM) model of the target system. With a friendly graphical interface, MFMS mainly consists of two components: 1) an editor to intelligently assist the user in establishing and maintaining the MFM model; and 2) an executor to implement the application for monitoring, diagnosis, and operational instruction in terms of the established MFM model. A prototype MFMS system has been developed and applied to construct a trial operation support system for a nuclear power plant simulated by RELAP5/MOD2. (authors)

  14. A review of operational, regional-scale, chemical weather forecasting models in Europe

    Directory of Open Access Journals (Sweden)

    J. Kukkonen

    2012-01-01

    Numerical models that combine weather forecasting and atmospheric chemistry are here referred to as chemical weather forecasting models. Eighteen operational chemical weather forecasting models on regional and continental scales in Europe are described and compared in this article. Topics discussed in this article include how weather forecasting and atmospheric chemistry models are integrated into chemical weather forecasting systems, how physical processes are incorporated into the models through parameterization schemes, how the model architecture affects the predicted variables, and how air chemistry and aerosol processes are formulated. In addition, we discuss sensitivity analysis and evaluation of the models, user operational requirements, such as model availability and documentation, and output availability and dissemination. In this manner, this article allows for the evaluation of the relative strengths and weaknesses of the various modelling systems and modelling approaches. Finally, this article highlights the most prominent gaps of knowledge for chemical weather forecasting models and suggests potential priorities for future research directions, for the following selected focus areas: emission inventories, the integration of numerical weather prediction and atmospheric chemical transport models, boundary conditions and nesting of models, data assimilation of the various chemical species, improved understanding and parameterization of physical processes, better evaluation of models against data and the construction of model ensembles.

  15. Phased mission modelling of systems with maintenance-free operating periods using simulated Petri nets

    Energy Technology Data Exchange (ETDEWEB)

    Chew, S.P.; Dunnett, S.J. [Department of Aeronautical and Automotive Engineering, Loughborough University, Loughborough, Leics (United Kingdom); Andrews, J.D. [Department of Aeronautical and Automotive Engineering, Loughborough University, Loughborough, Leics (United Kingdom)], E-mail: j.d.andrews@lboro.ac.uk

    2008-07-15

    A common scenario in engineering is that of a system which operates throughout several sequential and distinct periods of time, during which the modes and consequences of failure differ from one another. This type of operation is known as a phased mission, and for the mission to be a success the system must successfully operate throughout all of the phases. Examples include a rocket launch and an aeroplane flight. Component or sub-system failures may occur at any time during the mission, yet not affect the system performance until the phase in which their condition is critical. This may mean that the transition from one phase to the next is a critical event that leads to phase and mission failure, with the root cause being a component failure in a previous phase. A series of phased missions with no maintenance may be considered as a maintenance-free operating period (MFOP). This paper describes the use of a Petri net (PN) to model the reliability of the MFOP and phased missions scenario. The model uses Monte-Carlo simulation to obtain its results, and due to the modelling power of PNs, can consider complexities such as component failure rate interdependencies and mission abandonment. The model operates three different types of PN which interact to provide the overall system reliability modelling. The model is demonstrated and validated by considering two simple examples that can be solved analytically.
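    The phase logic described above can be illustrated with a minimal Monte Carlo sketch. Exponential failure times and a simple per-phase criticality set stand in for the paper's full Petri-net machinery; all names and the series-structure assumption are illustrative:

```python
import random

def simulate_mission(phases, rates, critical, trials=100_000, seed=1):
    """Monte Carlo estimate of phased-mission reliability.

    phases   -- list of phase durations
    rates    -- exponential failure rate of each component
    critical -- critical[p] is the set of components whose failure
                causes loss of the mission in phase p
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        # sample one failure time per component for the whole mission
        fail_time = [rng.expovariate(r) if r > 0 else float("inf") for r in rates]
        t, ok = 0.0, True
        for p, dur in enumerate(phases):
            # a failure occurring now OR in an earlier phase is only
            # felt in a phase where that component is critical
            if any(fail_time[c] <= t + dur for c in critical[p]):
                ok = False
                break
            t += dur
        successes += ok
    return successes / trials
```

A failure in an early phase is only "felt" in the first later phase where that component is critical, which reproduces the root-cause-in-a-previous-phase behaviour the abstract highlights.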

  16. Phased mission modelling of systems with maintenance-free operating periods using simulated Petri nets

    International Nuclear Information System (INIS)

    Chew, S.P.; Dunnett, S.J.; Andrews, J.D.

    2008-01-01

    A common scenario in engineering is that of a system which operates throughout several sequential and distinct periods of time, during which the modes and consequences of failure differ from one another. This type of operation is known as a phased mission, and for the mission to be a success the system must successfully operate throughout all of the phases. Examples include a rocket launch and an aeroplane flight. Component or sub-system failures may occur at any time during the mission, yet not affect the system performance until the phase in which their condition is critical. This may mean that the transition from one phase to the next is a critical event that leads to phase and mission failure, with the root cause being a component failure in a previous phase. A series of phased missions with no maintenance may be considered as a maintenance-free operating period (MFOP). This paper describes the use of a Petri net (PN) to model the reliability of the MFOP and phased missions scenario. The model uses Monte-Carlo simulation to obtain its results, and due to the modelling power of PNs, can consider complexities such as component failure rate interdependencies and mission abandonment. The model operates three different types of PN which interact to provide the overall system reliability modelling. The model is demonstrated and validated by considering two simple examples that can be solved analytically

  17. Practical applications of age-dependent reliability models and analysis of operational data

    Energy Technology Data Exchange (ETDEWEB)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L

    2005-07-01

    The purpose of the workshop was to present experience with the practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment); -) modeling; -) operating experience; and -) accelerated aging tests. In order to introduce the aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example), calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating-experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating-experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for the continued operation of instrumentation and control systems.
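    The proposal to replace an age-dependent model with constant per-year unavailability values can be sketched as follows; the Weibull life distribution and the conditional yearly failure probability are illustrative choices, not the workshop's:

```python
import math

def weibull_survival(t, beta, eta):
    """Weibull reliability R(t) = exp(-(t/eta)**beta)."""
    return math.exp(-((t / eta) ** beta))

def yearly_failure_prob(year, beta, eta):
    """Probability of failure during [year, year+1], conditional on
    survival to the start of the year -- a constant value that can
    stand in for component unavailability in a PSA model for that
    one-year period, as the abstract proposes."""
    return 1.0 - weibull_survival(year + 1, beta, eta) / weibull_survival(year, beta, eta)
```

For a shape parameter β > 1 the yearly value grows with age, so refreshing it each year lets a standard PSA model track aging without changing its structure.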

  18. Practical applications of age-dependent reliability models and analysis of operational data

    International Nuclear Information System (INIS)

    Lannoy, A.; Nitoi, M.; Backstrom, O.; Burgazzi, L.; Couallier, V.; Nikulin, M.; Derode, A.; Rodionov, A.; Atwood, C.; Fradet, F.; Antonov, A.; Berezhnoy, A.; Choi, S.Y.; Starr, F.; Dawson, J.; Palmen, H.; Clerjaud, L.

    2005-01-01

    The purpose of the workshop was to present experience with the practical application of time-dependent reliability models. The program of the workshop comprised the following sessions: -) aging management and aging PSA (Probabilistic Safety Assessment); -) modeling; -) operating experience; and -) accelerated aging tests. In order to introduce the aging effect of a particular component into the PSA model, it has been proposed to use constant unavailability values over a short period of time (one year, for example), calculated on the basis of age-dependent reliability models. As for modeling, it appears that the problem with overly detailed statistical models is the lack of data for the required parameters. As for operating experience, several methods of operating-experience analysis were presented (algorithms for reliability data elaboration and statistical identification of aging trends). As for accelerated aging tests, it is demonstrated that a combination of operating-experience analysis with the results of accelerated aging tests of naturally aged equipment could provide a good basis for the continued operation of instrumentation and control systems.

  19. On the Development of an Operational SWAN Model for the Black Sea (poster)

    NARCIS (Netherlands)

    Akpinar, A.; Van Vledder, G.P.

    2013-01-01

    This poster describes the results of some studies performed on the development of an efficient and operational SWAN model for the Black Sea. This model will be used to study the wind-wave climate and wave energy potential in the region and will provide boundary conditions for coastal engineering and

  20. Teaching Model Innovation of Production Operation Management Engaging in ERP Sandbox Simulation

    Directory of Open Access Journals (Sweden)

    Tinggui Chen

    2014-05-01

    In light of the current status of the production operation management course, this article proposes innovation and reform of the teaching model from three aspects: reform of the curriculum syllabus, simulation of a typical teaching organization model, and application of the enterprise resource planning (ERP) sandbox in course practice. An exhaustive implementation procedure is given, along with a further discussion of the outcomes of its promotion. The results indicate that the innovation of the teaching model and case-study practice in production operation management based on ERP sandbox simulation is feasible.