WorldWideScience

Sample records for model selection results

  1. An Associative Index Model for the Results List Based on Vannevar Bush's Selection Concept

    Science.gov (United States)

    Cole, Charles; Julien, Charles-Antoine; Leide, John E.

    2010-01-01

    Introduction: We define the results list problem in information search and suggest the "associative index model", an ad-hoc, user-derived indexing solution based on Vannevar Bush's description of an associative indexing approach for his memex machine. We further define what selection means in indexing terms with reference to Charles…

  2. Impact of the choice of the precipitation reference data set on climate model selection and the resulting climate change signal

    Science.gov (United States)

    Gampe, D.; Ludwig, R.

    2017-12-01

    Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool to project future climate and serve as input to many impact models that assess the related changes and impacts under such climate conditions. Such RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. A bias correction of these deviations is a crucial step in the impact modelling chain to allow the reproduction of historic conditions of, e.g., river discharge. However, the detection and quantification of model biases are highly dependent on the selected regional reference data set. Additionally, due to computational constraints it is usually not feasible in practice to use entire ensembles of climate simulations with all members as input for impact models which provide information to support decision-making. Although more and more studies focus on model selection based on the preservation of the climate model spread, selection based on validity, i.e. the representation of historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are selected to detect the model bias for the reference period 1989-2008 over the alpine catchment of the Adige River in Northern Italy. The reference data sets originate from various sources, such as station data or reanalysis. These data sets are remapped to the common RCM grid at 0.11° resolution, and several indicators, such as dry and wet spells, extreme precipitation and general climatology, are calculated to evaluate the capability of the RCMs to reproduce the historical conditions. The resulting RCM spread is compared against the spread of the reference data sets to determine the related uncertainties and…
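
    As a rough illustration of the kind of indicators described above (not the paper's exact definitions), a minimal sketch in Python, assuming a 1 mm/day wet-day threshold and synthetic daily precipitation in mm:

    ```python
    # Sketch: from a daily precipitation series, compute wet-day frequency,
    # the longest dry spell and an extreme quantile, the sort of indicators
    # used to check an RCM against each reference data set.
    import numpy as np

    rng = np.random.default_rng(7)
    precip = rng.gamma(shape=0.4, scale=6.0, size=7305)   # ~20 yr daily, mm
    precip[rng.random(precip.size) < 0.5] = 0.0           # dry days

    wet = precip >= 1.0                 # assumed 1 mm wet-day threshold
    wet_freq = wet.mean()

    # longest run of consecutive dry days (run-length trick on the dry mask)
    runs = np.diff(np.flatnonzero(np.diff(np.r_[0, ~wet, 0])))[::2]
    max_dry_spell = runs.max() if runs.size else 0

    p95_wet = np.percentile(precip[wet], 95)              # extreme precipitation
    print(wet_freq, max_dry_spell, p95_wet)
    ```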

  3. Selected recent results from AMANDA

    CERN Document Server

    Andrés, E; Bai, X; Barouch, G; Barwick, S W; Bay, R C; Becker, K H; Bergström, L; Bertrand, D; Bierenbaum, D; Biron, A; Booth, J; Botner, O; Bouchta, A; Boyce, M M; Carius, S; Chen, A; Chirkin, D; Conrad, J; Cooley, J; Costa, C G S; Cowen, D F; Dailing, J; Dalberg, E; De Young, T R; Desiati, P; Dewulf, J P; Doksus, P; Edsjö, J; Ekstrom, P; Erlandsson, B; Feser, T; Gaug, M; Goldschmidt, A; Goobar, A; Gray, L; Haase, H; Hallgren, A; Halzen, F; Hanson, K; Hardtke, R; He, Y D; Hellwig, M; Heukenkamp, H; Hill, G C; Hulth, P O; Hundertmark, S; Jacobsen, J; Kandhadai, V; Karle, A; Kim, J; Koci, B; Köpke, L; Kowalski, M; Leich, H; Leuthold, M; Lindahl, P; Liubarsky, I; Loaiza, P; Lowder, D M; Ludvig, J; Madsen, J; Marciniewski, P; Matis, H S; Mihályi, A; Mikolajski, T; Miller, T C; Minaeva, Y; Miocinovic, P; Mock, P C; Morse, R; Neunhoffer, T; Newcomer, F M; Niessen, P; Nygren, D R; Ogelman, H; Heros, C P D L; Porrata, R; Price, P B; Rawlins, K; Reed, C; Rhode, W; Richards, A; Richter, S; Martino, J R; Romenesko, P; Ross, D; Rubinstein, H; Sander, H G; Scheider, T; Schmidt, T; Schneider, D; Schneider, E; Schwarzl, R; Silvestri, A; Solarz, M; Spiczak, G M; Spiering, C; Starinsky, N; Steele, D; Steffen, P; Stokstad, R G; Streicher, O; Sun, A; Taboada, I; Thollander, L; Thon, T; Tilav, S; Usechak, N; Donckt, M V; Walck, C; Weinheimer, C; Wiebusch, C; Wischnewski, R; Wissing, H; Woschnagg, K; Wu, W; Yodh, G; Young, S

    2001-01-01

    We present a selection of results based on data taken in 1997 with the 302-PMT Antarctic Muon and Neutrino Detector Array ("AMANDA-B10"). Atmospheric neutrinos created in the northern hemisphere are observed indirectly through their charged-current interactions, which produce relativistic, Cherenkov-light-emitting upgoing muons in the South Pole ice cap. The reconstructed angular distribution of these events is in good agreement with expectation and demonstrates the viability of this ice-based device as a neutrino telescope. Studies of nearly vertical upgoing muons limit the available parameter space for WIMP dark matter under the assumption that WIMPs are trapped in the earth's gravitational potential well and annihilate with one another near the earth's center.

  4. Influence of FGR complexity modelling on the practical results in gas pressure calculation of selected fuel elements from Dukovany NPP

    International Nuclear Information System (INIS)

    Lahodova, M.

    2001-01-01

    A modernized fuel system and advanced fuel for operation up to high burnup are currently in use at the Dukovany NPP. Core reloads are evaluated using computer codes for the thermomechanical behavior of the most heavily loaded fuel rods. The paper presents results of parametric calculations performed with the NRI Rez integral code PIN, version 2000 (PIN2k), to assess the influence of fission gas release (FGR) modelling complexity on the results. Representative Dukovany NPP fuel rod irradiation history data are used, and two cases of fuel parameter variables (soft and hard) are chosen for the comparison. The FGR models involved were the GASREL diffusion model developed at NRI Rez plc and the standard Weisman model recommended in the previous version of the PIN integral code. FGR calculations by PIN2k with the GASREL model give more realistic results than the standard Weisman model. Results for linear power, fuel centre temperature, FGR and gas pressure versus burnup are given for two fuel rods.

  5. A modelling breakthrough for market design analysis to test massive intermittent generation integration in markets results of selected OPTIMATE studies

    DEFF Research Database (Denmark)

    Beaude, Francois; Atayi, A.; Bourmaud, J.-Y.

    2013-01-01

    The OPTIMATE platform focuses on electricity system and market design modelling in order to assess current and innovative designs in Europe. The current paper describes the results of the first validation studies conducted with the tool. These studies deal with day-ahead market rules, load flexibility, cross-border management and intermittent renewable support schemes with a view to better integrating large amounts of renewable energy in Europe. Market and system designs were assessed based on economic efficiency, security of supply and environmental impact indicators. These results give…

  6. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
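
    For intuition about the selection mechanism described above, a minimal sketch of the classical two-step Heckman correction under normal errors; the paper's contribution replaces the normality assumption with a Student's t error distribution estimated by full maximum likelihood. All data and coefficients below are synthetic.

    ```python
    # Two-step Heckman ("Heckit") sketch: probit selection equation, then
    # OLS on the selected sample augmented with the inverse Mills ratio.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 5000
    z = rng.normal(size=(n, 2))                    # selection covariates
    x = z[:, :1]                                   # outcome covariate
    u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
    select = (0.5 + z @ np.array([1.0, -0.5]) + u[:, 0]) > 0
    y = 1.0 + 2.0 * x[:, 0] + u[:, 1]              # latent outcome
    y_obs = np.where(select, y, np.nan)            # observed only if selected

    # Step 1: probit for P(select | z) by maximum likelihood
    Z = np.column_stack([np.ones(n), z])
    def nll(b):
        p = norm.cdf(Z @ b).clip(1e-10, 1 - 1e-10)
        return -(select * np.log(p) + (~select) * np.log(1 - p)).sum()
    b = minimize(nll, np.zeros(3), method="BFGS").x
    xb = Z @ b
    mills = norm.pdf(xb) / norm.cdf(xb)            # inverse Mills ratio

    # Step 2: OLS of y on x and the Mills ratio, selected sample only
    s = select
    X = np.column_stack([np.ones(s.sum()), x[s, 0], mills[s]])
    beta, *_ = np.linalg.lstsq(X, y_obs[s], rcond=None)
    print(beta)   # slope close to 2.0; Mills coefficient reflects rho*sigma
    ```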

  7. Atmospheric Deposition Modeling Results

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...

  8. Selected results of the slovak coal research

    Directory of Open Access Journals (Sweden)

    Hredzák Slavomír

    1997-09-01

    The contribution gives a review of Slovak brown coal research in the last 10 years. The state and development trends of coal research in Slovakia are described from the point of view of the application of clean coal technologies. Some selected results obtained at the Institute of Geotechnics of the Slovak Academy of Sciences are also introduced.

  9. Results from Coupled Optical and Electrical Sentaurus TCAD Models of a Gallium Phosphide on Silicon Electron Carrier Selective Contact Solar Cell

    Energy Technology Data Exchange (ETDEWEB)

    Limpert, Steven; Ghosh, Kunal; Wagner, Hannes; Bowden, Stuart; Honsberg, Christiana; Goodnick, Stephen; Bremner, Stephen; Green, Martin

    2014-06-09

    We report results from coupled optical and electrical Sentaurus TCAD models of a gallium phosphide (GaP) on silicon electron carrier selective contact (CSC) solar cell. Detailed analyses of current and voltage performance are presented for devices having substrate thicknesses of 10 μm, 50 μm, 100 μm and 150 μm, and with GaP/Si interfacial quality ranging from very poor to excellent. Ultimate potential performance was investigated using optical absorption profiles consistent with light trapping schemes of random pyramids with attached and detached rear reflector, and planar with an attached rear reflector. Results indicate Auger-limited open-circuit voltages up to 787 mV and efficiencies up to 26.7% may be possible for front-contacted devices.

  10. Recruiter Selection Model

    National Research Council Canada - National Science Library

    Halstead, John B

    2006-01-01

    .... The research uses a combination of statistical learning, feature selection methods, and multivariate statistics to determine the better prediction function approximation with features obtained...

  11. Model selection in periodic autoregressions

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); R. Paap (Richard)

    1994-01-01

    This paper focuses on the selection of periodic autoregressive (PAR) time series models in practice. One aspect of model selection is the choice of the appropriate PAR order. This can be of interest for the evaluation of economic models. Further, the appropriate PAR order is important

  12. Modeling Natural Selection

    Science.gov (United States)

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  13. Models selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

    Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination.

  14. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them: IEEE 802.11 (as an example of wireless local area networks), IEEE 802.16 (as an example of wireless metropolitan area networks) and IEEE 802.15 (as an example of body area networks). Each section on these three systems also concludes with a set of model implementations that are available today.

  15. Multi-scale Mexican spotted owl (Strix occidentalis lucida) nest/roost habitat selection in Arizona and a comparison with single-scale modeling results

    Science.gov (United States)

    Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey

    2016-01-01

    Efficacy of future habitat selection studies will benefit by taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...

  16. Launch vehicle selection model

    Science.gov (United States)

    Montoya, Alex J.

    1990-01-01

    Over the next 50 years, humans will be heading for the Moon and Mars to build scientific bases to gain further knowledge about the universe and to develop rewarding space activities. These large scale projects will last many years and will require large amounts of mass to be delivered to Low Earth Orbit (LEO). It will take a great deal of planning to complete these missions in an efficient manner. The planning of a future Heavy Lift Launch Vehicle (HLLV) will significantly impact the overall multi-year launching cost for the vehicle fleet, depending upon when the HLLV will be ready for use. It is desirable to develop a model in which many trade studies can be performed. In one sample multi-year space program analysis, the total launch vehicle cost of implementing the program was reduced from 50 percent to 25 percent. This indicates how critical it is to reduce space logistics costs. A linear programming model has been developed to answer such questions. The model is now in its second phase of development, and this paper will address the capabilities of the model and its intended uses. The main emphasis over the past year was to make the model user-friendly and to incorporate additional realistic constraints that are difficult to represent mathematically. We have developed a methodology in which the user has to be knowledgeable about the mission model and the requirements of the payloads. We have found a representation that will cut down the solution space of the problem by inserting some preliminary tests to eliminate some infeasible vehicle solutions. The paper will address the handling of these additional constraints and the methodology for incorporating new costing information utilizing learning curve theory. The paper will review several test cases that will explore the preferred vehicle characteristics and the preferred period of construction, i.e., within the next decade, or in the first decade of the next century. Finally, the paper will explore the interaction
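
    A toy sketch of the kind of linear program the abstract describes, with invented capacities and costs (the actual model's constraints and cost data are not given here): choose a mix of launches to deliver a required mass to LEO at minimum cost.

    ```python
    # Minimal LP sketch: two vehicle types, one mass-delivery constraint.
    from scipy.optimize import linprog

    cost = [90, 300]            # $M per launch: existing vehicle, HLLV (invented)
    capacity = [20, 120]        # tonnes to LEO per launch (invented)
    required = 600              # tonnes for the multi-year program (invented)

    # minimize cost @ x  subject to  capacity @ x >= required, x >= 0
    res = linprog(c=cost,
                  A_ub=[[-capacity[0], -capacity[1]]],
                  b_ub=[-required],
                  bounds=[(0, None), (0, None)])
    print(res.x, res.fun)       # launches per vehicle and total cost
    ```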

  17. Distributional and efficiency results for subset selection

    NARCIS (Netherlands)

    Laan, van der P.

    1996-01-01

    Assume k (k ≥ 2) populations are given. The associated independent random variables have continuous distribution functions with an unknown location parameter. The statistical selection goal is to select a non-empty subset which contains the best population, that is, the population with

  18. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.; Genton, Marc G.

    2012-01-01

    for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical

  19. Selected new results from CUSB-II

    International Nuclear Information System (INIS)

    Lee-Franzini, J.

    1991-01-01

    Using the CUSB-II detector the inclusive photon spectrum was studied from 2.9x10^4 Υ(5S) decays. A strong signal has been observed due to B* → Bγ decays. The following results were obtained: (i) the average B*-B mass difference, (46.7±0.4) MeV; (ii) the photon yield per Υ(5S) decay, 1.09±0.06; and (iii) the average velocity of the B*'s, 0.156±0.010, for a mix of non-strange (B) and strange (B_s) B*-mesons from Υ(5S) decays. The shape of the electron spectrum at the Υ(5S) indicates production of B mesons which are heavier than non-strange B's, presumably strange B's. Photon signals were observed in Υ(4S) decays, indicative of large decay rates involving annihilation of the bb-bar pair rather than decays to BB-bar meson pairs. A model-independent upper limit of 3.3 to 5% has been obtained for the branching ratio of Υ(4S) → ggX, for 0 < M(X) < 1.2 GeV, at 90% confidence level. (R.P.) 19 refs., 1 tab

  20. Modelling of migration and fate of selected persistent organic pollutants in the Gulf of Gdansk and the Vistula catchment (Poland): selected results from the EU ELOISE EuroCat project

    Science.gov (United States)

    Zukowska, Barbara; Pacyna, Jozef; Namiesnik, Jacek

    2005-02-01

    The ELOISE EU EuroCat project integrated natural and social sciences to link the impacts affecting the coastal sea to the human activities developed along the catchments. In the EuroCat project, changes in river catchments and their impact on the inflow area were analysed, and the information was linked with environmental models. The part of the EU ELOISE EuroCat project focusing on the Vistula River catchment and the Baltic Sea coastal zone was named VisCat. Within the framework of the EU ELOISE EuroCat-VisCat project, CoZMo-POP (Coastal Zone Model for Persistent Organic Pollutants), a non-steady-state multicompartmental mass balance model of long-term chemical fate in the coastal environment or the drainage basin of a large lake, was used. The model is parameterised and tested herein to simulate the long-term fate and distribution of selected HCHs (hexachlorocyclohexanes) and PCBs (polychlorinated biphenyls) in the Gulf of Gdansk and the Vistula River drainage basin environment. The model can also be used to predict future concentrations under various emission scenarios and to support the management of economic development and the regulation of substance emissions to this environment. However, this would require more extensive efforts on model parameterisation and validation in order to increase the confidence in current model outputs.

  1. Selected results from the ANTARES neutrino telescope

    International Nuclear Information System (INIS)

    Bouhou, B.

    2014-01-01

    ANTARES uses sea water as a detection medium to observe cosmic neutrinos. The ANTARES neutrino telescope has been taking data in its complete configuration since 2008. Its main goal is the detection of cosmic neutrinos from the Southern-hemisphere sky, coming from Galactic and extragalactic sources. Recently, the ANTARES collaboration has published many results from data collected from 2007 to 2010 using detector configurations containing between 5 and 12 detection strings. Among these are searches for point sources and for a diffuse flux of high-energy cosmic neutrinos, both of which resulted in stringent and competitive upper limits on the flux of cosmic neutrinos. In addition, ANTARES is involved in multi-messenger projects looking for correlations between neutrinos and gamma rays or gravitational waves emitted by sources such as gamma-ray bursts. In this paper we report on some recent results published by the ANTARES collaboration.

  2. Use of thermodynamic sorption models to derive radionuclide Kd values for performance assessment: Selected results and recommendations of the NEA sorption project

    Science.gov (United States)

    Ochs, M.; Davis, J.A.; Olin, M.; Payne, T.E.; Tweed, C.J.; Askarieh, M.M.; Altmann, S.

    2006-01-01

    For the safe final disposal and/or long-term storage of radioactive wastes, deep or near-surface underground repositories are being considered world-wide. A central safety feature is the prevention, or sufficient retardation, of radionuclide (RN) migration to the biosphere. To this end, radionuclide sorption is one of the most important processes. Decreasing the uncertainty in radionuclide sorption may contribute significantly to reducing the overall uncertainty of a performance assessment (PA). For PA, sorption is typically characterised by distribution coefficients (Kd values). The conditional nature of Kd requires different estimates of this parameter for each set of geochemical conditions of potential relevance in a RN's migration pathway. As it is not feasible to measure sorption for every set of conditions, the derivation of Kd for PA must rely on data derived from representative model systems. As a result, uncertainty in Kd is largely caused by the need to derive values for conditions not explicitly addressed in experiments. The recently concluded NEA Sorption Project [1] showed that thermodynamic sorption models (TSMs) are uniquely suited to derive Kd as a function of conditions, because they allow a direct coupling of sorption with variable solution chemistry and mineralogy in a thermodynamic framework. The results of the project enable assessment of the suitability of various TSM approaches for PA-relevant applications as well as of the potential and limitations of TSMs to model RN sorption in complex systems. © by Oldenbourg Wissenschaftsverlag.
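
    For reference, the standard definition of the distribution coefficient used in PA (the general definition, not specific to the NEA project's formulation):

    ```latex
    % Equilibrium distribution coefficient (typical units: L/kg), where
    %   S = radionuclide sorbed per unit mass of solid (e.g., mol/kg)
    %   C = radionuclide concentration in solution (e.g., mol/L)
    K_d = \frac{S}{C}
    ```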

  3. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method…

  4. An evolutionary algorithm for model selection

    Energy Technology Data Exchange (ETDEWEB)

    Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany)

    2013-07-01

    When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial-wave fit. Traditionally, models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes into account the model complexity. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves which is appropriate for the number of events in the data. Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models where excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition, the method makes it possible to assess the dependence of the fit result on the fit model, which is an important contribution to the systematic uncertainty.
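
    A toy sketch of such a selection loop (not the authors' implementation): candidate wave sets are bit masks, and fitness is a complexity-penalized criterion, here BIC on a linear regression standing in for the partial-wave likelihood.

    ```python
    # Evolutionary model selection over subsets of 20 candidate "waves";
    # the true signal uses only the first four columns.
    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 400, 20
    X = rng.normal(size=(n, p))
    y = X[:, :4] @ np.array([3.0, -2.0, 1.5, 1.0]) + rng.normal(size=n)

    def bic(mask):
        k = mask.sum()
        if k == 0:
            return np.inf
        beta, res, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
        rss = res[0] if res.size else ((y - X[:, mask] @ beta) ** 2).sum()
        return n * np.log(rss / n) + k * np.log(n)   # penalizes extra waves

    pop = rng.random((30, p)) < 0.5                  # random initial pool
    for gen in range(60):
        fit = np.array([bic(m) for m in pop])
        parents = pop[np.argsort(fit)[:10]]          # keep the best models
        children = parents[rng.integers(0, 10, 20)].copy()
        children ^= rng.random(children.shape) < 0.05   # mutation: flip waves
        pop = np.vstack([parents, children])

    best = pop[np.argmin([bic(m) for m in pop])]
    print(np.where(best)[0])   # typically recovers columns 0-3
    ```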

  5. Selected Tether Applications Cost Model

    Science.gov (United States)

    Keeley, Michael G.

    1988-01-01

    Diverse cost-estimating techniques and data combined into single program. Selected Tether Applications Cost Model (STACOM 1.0) is interactive accounting software tool providing means for combining several independent cost-estimating programs into fully-integrated mathematical model capable of assessing costs, analyzing benefits, providing file-handling utilities, and putting out information in text and graphical forms to screen, printer, or plotter. Program based on Lotus 1-2-3, version 2.0. Developed to provide clear, concise traceability and visibility into methodology and rationale for estimating costs and benefits of operations of Space Station tether deployer system.

  6. THE TIME DOMAIN SPECTROSCOPIC SURVEY: VARIABLE SELECTION AND ANTICIPATED RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Morganson, Eric; Green, Paul J. [Harvard Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Anderson, Scott F.; Ruan, John J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Eracleous, Michael; Brandt, William Nielsen [Department of Astronomy and Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802 (United States); Kelly, Brandon [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Badenes, Carlos [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara St, Pittsburgh, PA 15260 (United States); Bañados, Eduardo [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 (United States); Borissova, Jura [Instituto de Física y Astronomía, Universidad de Valparaíso, Av. Gran Bretaña 1111, Playa Ancha, Casilla 5030, and Millennium Institute of Astrophysics (MAS), Santiago (Chile); Burgett, William S. [GMTO Corp, Suite 300, 251 S. Lake Ave, Pasadena, CA 91101 (United States); Chambers, Kenneth, E-mail: emorganson@cfa.harvard.edu [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); and others

    2015-06-20

    We present the selection algorithm and anticipated results for the Time Domain Spectroscopic Survey (TDSS). TDSS is a Sloan Digital Sky Survey (SDSS)-IV Extended Baryon Oscillation Spectroscopic Survey (eBOSS) subproject that will provide initial identification spectra of approximately 220,000 luminosity-variable objects (variable stars and active galactic nuclei) across 7500 deg^2, selected from a combination of SDSS and multi-epoch Pan-STARRS1 photometry. TDSS will be the largest spectroscopic survey to explicitly target variable objects, avoiding pre-selection on the basis of colors or detailed modeling of specific variability characteristics. Kernel Density Estimate analysis of our target population performed on SDSS Stripe 82 data suggests our target sample will be 95% pure (meaning 95% of objects we select have genuine luminosity variability of a few magnitudes or more). Our final spectroscopic sample will contain roughly 135,000 quasars and 85,000 stellar variables, approximately 4000 of which will be RR Lyrae stars which may be used as outer Milky Way probes. The variability-selected quasar population has a smoother redshift distribution than a color-selected sample, and variability measurements similar to those we develop here may be used to make more uniform quasar samples in large surveys. The stellar variable targets are distributed fairly uniformly across color space, indicating that TDSS will obtain spectra for a wide variety of stellar variables including pulsating variables, stars with significant chromospheric activity, cataclysmic variables, and eclipsing binaries. TDSS will serve as a pathfinder mission to identify and characterize the multitude of variable objects that will be detected photometrically in even larger variability surveys such as the Large Synoptic Survey Telescope.

  7. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    International Nuclear Information System (INIS)

    Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.

    2012-01-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has sometimes led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
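
    A schematic of what a Bayesian evidence ratio does, using a toy constant-versus-gradient signal model and brute-force grid integration over uniform priors; this is an illustration of the general machinery, not the paper's spectropolarimetric computation.

    ```python
    # Evidence ratio for two nested models of a noisy signal:
    # Model 0: y = a (no gradient).  Model 1: y = a + b*t (with gradient).
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.linspace(0, 1, 50)
    sigma = 0.05
    y = 1.0 + sigma * rng.normal(size=t.size)   # truth: no gradient

    def loglike(model):
        return -0.5 * np.sum((y - model) ** 2) / sigma**2

    a = np.linspace(0, 2, 200)                  # uniform prior range for offset
    b = np.linspace(-1, 1, 200)                 # uniform prior range for gradient

    # With uniform priors, the evidence is the mean likelihood over the prior.
    ev0 = np.mean([np.exp(loglike(ai)) for ai in a])
    ev1 = np.mean([[np.exp(loglike(ai + bi * t)) for bi in b] for ai in a])
    print(ev0 / ev1)   # > 1: the simpler, gradient-free model is favored
    ```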

  8. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System (GAMS) software: one without taxes on tobacco consumption and another with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  9. A qualitative multi-attribute model for the selection of the private hydropower plant investments in Turkey: By foundation of the search results clustering engine (Carrot2), hydropower plant clustering, DEXi and DEXiTree

    Energy Technology Data Exchange (ETDEWEB)

    Saracoglu, B.O.

    2016-07-01

    The electricity demand in Turkey has been increasing for a while. Hydropower is one of the major electricity generation types to compensate for this electricity demand in Turkey. Private investors (domestic and foreign) in the hydropower electricity generation sector have been looking for the most appropriate and satisfactory new private hydropower investment (PHPI) options and opportunities in Turkey. This study aims to present a qualitative multi-attribute decision making (MADM) model that is easy, straightforward, and fast for the selection of the most satisfactory reasonable PHPI options during the very early investment stages (when data and information on projects are scarce). The data and information on the PHPI options were gathered from the official records on the official websites. A wide and deep literature review was conducted on MADM models and on the hydropower industry. The attributes of the model were identified, selected, clustered and evaluated by expert decision maker (EDM) opinion and with the help of an open-source search results clustering engine (Carrot2), which was also helpful for comprehension. The PHPI options were clustered according to their installed capacity, the main property, to analyze the options in the most appropriate, decidable, informative, understandable and meaningful way. A simple clustering algorithm for the PHPI options was executed in the current study. A template model for the selection of the most satisfactory PHPI options was built in the DEXi (Decision EXpert for Education) and DEXiTree software. The basic attributes for the selection of the PHPI options were presented, and the aggregate attributes were then defined by bottom-up structuring for the early investment stages. The attributes were also analyzed with the help of Carrot2. The most satisfactory PHPI options in Turkey in the big options data set were selected for each PHPI options cluster by the EDM evaluations in DEXi.

  10. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
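
    To make the 0-1 weights point concrete, a minimal sketch contrasting hard AIC selection with smooth Akaike-weight averaging of the mean estimate at x = 0 for two nested linear models; the synthetic data and two-model setup are illustrative, not from the paper.

    ```python
    # Hard selection picks one model (weight 0 or 1); model averaging blends
    # the two estimates with Akaike weights.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100
    x = rng.normal(size=n)
    y = 1.0 + 0.3 * x + rng.normal(size=n)

    def fit(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = ((y - X @ beta) ** 2).sum()
        aic = n * np.log(rss / n) + 2 * X.shape[1]
        return beta, aic

    X1 = np.ones((n, 1))                       # intercept-only model
    X2 = np.column_stack([np.ones(n), x])      # model with slope
    (b1, a1), (b2, a2) = fit(X1), fit(X2)

    w2 = 1.0 / (1.0 + np.exp(-(a1 - a2) / 2))  # Akaike weight for model 2
    mu_select = b1[0] if a1 < a2 else b2[0]    # 0-1 weights (intercept = fit at x=0)
    mu_average = (1 - w2) * b1[0] + w2 * b2[0]         # smooth weights
    print(w2, mu_select, mu_average)
    ```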

  11. Estimation of a multivariate mean under model selection uncertainty

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2014-05-01

    Model selection uncertainty would occur if we selected a model based on one data set and subsequently applied it for statistical inference, because the "correct" model would not be selected with certainty. When the selection and inference are based on the same dataset, some additional problems arise due to the correlation of the two stages (selection and inference). In this paper model selection uncertainty is considered and model averaging is proposed. The proposal is related to the James-Stein theory of estimating more than three parameters from independent normal observations. We suggest that a model averaging scheme taking into account the selection procedure could be more appropriate than model selection alone. Some properties of this model averaging estimator are investigated; in particular, we show using Stein's results that it is a minimax estimator and can outperform Stein-type estimators.
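
    A quick numerical sketch of the James-Stein result invoked above: for p ≥ 3 normal means, shrinkage toward zero dominates the raw observations in total squared-error risk (plain James-Stein without the positive-part modification; synthetic data).

    ```python
    # Compare total squared-error risk of the raw observations (MLE) and
    # the James-Stein shrinkage estimator over many simulated trials.
    import numpy as np

    rng = np.random.default_rng(3)
    p, trials = 10, 20000
    theta = rng.normal(size=p)                 # true means, fixed
    x = theta + rng.normal(size=(trials, p))   # one observation per mean

    norm2 = (x ** 2).sum(axis=1, keepdims=True)
    js = (1 - (p - 2) / norm2) * x             # James-Stein estimate

    risk_mle = ((x - theta) ** 2).sum(axis=1).mean()   # approx. p
    risk_js = ((js - theta) ** 2).sum(axis=1).mean()   # strictly smaller
    print(risk_mle, risk_js)
    ```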

  12. A qualitative multi-attribute model for the selection of the private hydropower plant investments in Turkey: By foundation of the search results clustering engine (Carrot2), hydropower plant clustering, DEXi and DEXiTree

    Directory of Open Access Journals (Sweden)

    Burak Omer Saracoglu

    2016-03-01

    Purpose: The electricity demand in Turkey has been increasing for a while. Hydropower is one of the major electricity generation types to compensate for this electricity demand in Turkey. Private investors (domestic and foreign) in the hydropower electricity generation sector have been looking for the most appropriate and satisfactory new private hydropower investment (PHPI) options and opportunities in Turkey. This study aims to present a qualitative multi-attribute decision making (MADM) model that is easy, straightforward, and fast for the selection of the most satisfactory reasonable PHPI options during the very early investment stages (when data and information on projects are scarce). Design/methodology/approach: The data and information on the PHPI options were gathered from the official records on the official websites. A wide and deep literature review was conducted on MADM models and on the hydropower industry. The attributes of the model were identified, selected, clustered and evaluated by expert decision maker (EDM) opinion and with the help of an open-source search results clustering engine (Carrot2), which was also helpful for comprehension. The PHPI options were clustered according to their installed capacity, the main property, to analyze the options in the most appropriate, decidable, informative, understandable and meaningful way. A simple clustering algorithm for the PHPI options was executed in the current study. A template model for the selection of the most satisfactory PHPI options was built in the DEXi (Decision EXpert for Education) and DEXiTree software. Findings: The basic attributes for the selection of the PHPI options were presented, and the aggregate attributes were then defined by bottom-up structuring for the early investment stages. The attributes were also analyzed with the help of Carrot2. The most satisfactory PHPI options in Turkey in the big options data set were selected for each PHPI options cluster by the EDM evaluations in

  13. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    …cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate…
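
    A small sketch of the factoring idea on a toy categorical table: the attributes form a chain a-b-c, and the selectivity of a conjunctive predicate is estimated from two-way distributions instead of the independence assumption (invented data; the paper's PostgreSQL integration is not reproduced here).

    ```python
    # Chain factorization: P(a,b,c) = P(a,b) * P(b,c) / P(b).
    import numpy as np

    rng = np.random.default_rng(6)
    n = 100_000
    a = rng.integers(0, 4, n)
    b = (a + rng.integers(0, 2, n)) % 4        # b depends on a
    c = (b + rng.integers(0, 2, n)) % 4        # c depends on b

    def joint2(x, y):
        h = np.zeros((4, 4))
        np.add.at(h, (x, y), 1)                # two-way histogram
        return h / n

    p_ab, p_bc = joint2(a, b), joint2(b, c)
    p_b = p_ab.sum(axis=0)                     # marginal of b

    def sel(av, bv, cv):
        return p_ab[av, bv] * p_bc[bv, cv] / p_b[bv]

    true = np.mean((a == 1) & (b == 1) & (c == 2))
    indep = np.mean(a == 1) * np.mean(b == 1) * np.mean(c == 2)
    print(true, sel(1, 1, 2), indep)           # chain estimate ~ true value
    ```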

  14. Selected sports talent development models

    Directory of Open Access Journals (Sweden)

    Michal Vičar

    2017-06-01

    Background: Sports talent in the Czech Republic is generally viewed as a static, stable phenomenon. This stands in contrast with the widespread praxis of Anglo-Saxon countries, which emphasises its fluctuant nature, as reflected in the current models describing its development. Objectives: The aim is to introduce current models of talent development in sport. Methods: Comparison and analysis of the following models: Balyi's Long-term athlete development model, Côté's Developmental model of sport participation, Csikszentmihalyi's flow model of optimal expertise, and Bailey and Morley's Model of talent development. Conclusion: Current models of sport talent development approach talent as a dynamic phenomenon, varying in time. They are based in particular on the work of Simonton and his emergenic and epigenetic model and of Gagné and his Differentiated model of giftedness and talent. Balyi's model is characterised by its applicability and implications for practice. Côté's model highlights the role of family and deliberate play. Both models describe the periodization of talent development. Csikszentmihalyi's flow model explains how the athlete acquires experience and develops during puberty based on the structure of attention and the flow experience. Bailey and Morley's model accents the situational approach to talent and the development of skills facilitating its growth.

  15. Selected sports talent development models

    OpenAIRE

    Michal Vičar

    2017-01-01

    Background: Sports talent in the Czech Republic is generally viewed as a static, stable phenomena. It stands in contrast with widespread praxis carried out in Anglo-Saxon countries that emphasise its fluctuant nature. This is reflected in the current models describing its development. Objectives: The aim is to introduce current models of talent development in sport. Methods: Comparison and analysing of the following models: Balyi - Long term athlete development model, Côté - Developmen...

  16. Selection of robust methods. Numerical examples and results

    Czech Academy of Sciences Publication Activity Database

    Víšek, Jan Ámos

    2005-01-01

    Roč. 21, č. 11 (2005), s. 1-58 ISSN 1212-074X R&D Projects: GA ČR(CZ) GA402/03/0084 Institutional research plan: CEZ:AV0Z10750506 Keywords : robust regression * model selection * uniform consistency of M-estimators Subject RIV: BA - General Mathematics

  17. A Computational Model of Selection by Consequences

    Science.gov (United States)

    McDowell, J. J.

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of…

  18. The EURAD model: Design and first results

    International Nuclear Information System (INIS)

    1989-01-01

    The contributions are abridged versions of lectures delivered on the occasion of the presentation meeting of the EURAD project on the 20th and 21st of February 1989 in Cologne. EURAD stands for European Acid Deposition Model. The project takes one of the possible and necessary ways to search for scientific answers to the questions raised by anthropogenic modifications of the atmosphere. One of the objectives is to develop a realistic numerical model of the long-distance transport of harmful substances in the troposphere over Europe and to use this model for the investigation of pollutant distribution, but also to support their experimental study. The EURAD model consists of two parts: a meteorological mesoscale model and a chemical transport model. In the first part of the presentation, these parts are introduced and questions concerning the implementation of the entire model on the CRAY X-MP/22 computer system are discussed. Results of test calculations for the cases 'Chernobyl' and 'Alpex' are then reported. Thereafter, selected problems concerning the treatment of meteorological and air-chemistry processes as well as the parametrization of subscale processes within the model are discussed. The presentation concludes with two lectures on emission estimates and emission scenarios. (orig./KW)

  19. A computational model of selection by consequences.

    OpenAIRE

    McDowell, J J

    2004-01-01

    Darwinian selection by consequences was instantiated in a computational model that consisted of a repertoire of behaviors undergoing selection, reproduction, and mutation over many generations. The model in effect created a digital organism that emitted behavior continuously. The behavior of this digital organism was studied in three series of computational experiments that arranged reinforcement according to random-interval (RI) schedules. The quantitative features of the model were varied o...

  20. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a

  1. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and reflects a trade-off between the bias of a model and its complexity. In practice, however, the runtime of models is another relevant weighting factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
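
    A minimal sketch of the insufficiency idea under stated assumptions (invented likelihood values and runtimes, not the authors' exact scheme): under a fixed time budget, the slower model affords fewer Monte Carlo samples for its evidence estimate, so its bootstrap standard error is larger, and that misbalance can be folded into the model weights.

    ```python
    # Brute-force Monte Carlo BME with a bootstrap error estimate, under a
    # shared time budget. The likelihood values are synthetic stand-ins; a
    # real study would obtain them by running each model on prior samples.
    import numpy as np

    rng = np.random.default_rng(4)
    budget_s = 60.0
    runtimes = {"fast_ar": 0.001, "slow_pde": 0.5}   # seconds per model run

    for name, dt in runtimes.items():
        n = int(budget_s / dt)                # affordable prior samples
        like = np.exp(-0.5 * rng.normal(loc=1.0, scale=1.0, size=n) ** 2)
        bme = like.mean()                     # Monte Carlo evidence estimate
        boots = np.array([rng.choice(like, size=n).mean() for _ in range(200)])
        print(f"{name}: BME={bme:.4f} +/- {boots.std():.4f}")
    # The slow model's +/- is larger: its evidence is statistically less
    # significant, which is what gets penalized in the adjusted weights.
    ```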

  2. A Dynamic Model for Limb Selection

    NARCIS (Netherlands)

    Cox, R.F.A; Smitsman, A.W.

    2008-01-01

    Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups. In

  3. A Gambler's Model of Natural Selection.

    Science.gov (United States)

    Nolan, Michael J.; Ostrovsky, David S.

    1996-01-01

    Presents an activity that highlights the mechanism and power of natural selection. Allows students to think in terms of modeling a biological process and instills an appreciation for a mathematical approach to biological problems. (JRH)

  4. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  5. Review and selection of unsaturated flow models

    International Nuclear Information System (INIS)

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-01-01

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  6. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  7. Engineering model cryocooler test results

    International Nuclear Information System (INIS)

    Skimko, M.A.; Stacy, W.D.; McCormick, J.A.

    1992-01-01

    This paper reports that recent testing of diaphragm-defined, Stirling-cycle machines and components has demonstrated cooling performance potential, validated the design code, and confirmed several critical operating characteristics. A breadboard cryocooler was rebuilt and tested from cryogenic to near-ambient cold-end temperatures. There was a significant increase in capacity at cryogenic temperatures, and the performance results compared well with code predictions at all temperatures. Further testing on a breadboard diaphragm compressor validated the calculated requirement for a minimum axial clearance between diaphragms and mating heads.

  8. Selecting a model of supersymmetry breaking mediation

    International Nuclear Information System (INIS)

    AbdusSalam, S. S.; Allanach, B. C.; Dolan, M. J.; Feroz, F.; Hobson, M. P.

    2009-01-01

    We study the problem of selecting between different mechanisms of supersymmetry breaking in the minimal supersymmetric standard model using current data. We evaluate the Bayesian evidence of four supersymmetry breaking scenarios: mSUGRA, mGMSB, mAMSB, and moduli mediation. The results show a strong dependence on the dark matter assumption. Using the inferred cosmological relic density as an upper bound, minimal anomaly mediation is at least moderately favored over the CMSSM. Our fits also indicate that evidence for a positive sign of the μ parameter is moderate at best. We present constraints on the anomaly and gauge mediated parameter spaces and some previously unexplored aspects of the dark matter phenomenology of the moduli mediation scenario. We use sparticle searches, indirect observables and dark matter observables in the global fit and quantify robustness with respect to prior choice. We quantify how much information is contained within each constraint.

  9. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
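
    The core of the approach, scoring each candidate track under an n-gram model of melodic patterns and picking the highest-scoring one, can be sketched as follows. This is a minimal illustration assuming a bigram model over pitch intervals with add-one smoothing; the paper's actual features, background model, and posterior-probability criteria are not reproduced here, and the toy melodies are invented.

```python
from collections import Counter
from math import log

def intervals(track):
    """Melodic intervals (pitch deltas) of a note sequence."""
    return [b - a for a, b in zip(track, track[1:])]

class BigramMelodyModel:
    """Toy bigram model over pitch intervals (illustrative, not the paper's setup)."""
    def __init__(self, melodies, vocab_size=49):
        self.bi, self.uni = Counter(), Counter()
        self.V = vocab_size  # assumed interval vocabulary size, for add-one smoothing
        for m in melodies:
            iv = intervals(m)
            self.uni.update(iv)
            self.bi.update(zip(iv, iv[1:]))

    def logprob(self, track):
        """Length-normalised log-probability: the 'melodic degree' of a track."""
        iv = intervals(track)
        lp = sum(log((self.bi[(a, b)] + 1) / (self.uni[a] + self.V))
                 for a, b in zip(iv, iv[1:]))
        return lp / max(len(iv) - 1, 1)

# Invented training melodies and candidate tracks (MIDI pitch numbers).
melodies = [[60, 62, 64, 65, 67, 65, 64, 62, 60], [67, 65, 64, 62, 60, 62, 64]]
model = BigramMelodyModel(melodies)
tracks = {"melody": [60, 62, 64, 62, 60, 59, 60], "bass": [36, 36, 43, 43, 36, 36]}
print(max(tracks, key=lambda k: model.logprob(tracks[k])))  # selected track
```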

  10. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics for choosing the model order and kernel scale in terms of signal-to-noise ratio (SNR).
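
    A schematic reconstruction of the parallel-analysis idea for kernel PCA: compare the eigenvalue spectrum of the centered kernel matrix against spectra obtained after permuting each feature independently, and keep only components whose eigenvalues exceed the permutation null. The RBF kernel, the 95th-percentile threshold and the two-cluster test data are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def rbf_kernel(X, scale):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

def centered_eigvals(K):
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n   # double-centering matrix
    return np.sort(np.linalg.eigvalsh(J @ K @ J))[::-1]

def kpa_order(X, scale, n_perm=20, seed=0):
    """Parallel-analysis style order: count eigenvalues of the centered kernel
    matrix exceeding the 95th percentile of feature-permuted (null) spectra."""
    rng = np.random.default_rng(seed)
    ev = centered_eigvals(rbf_kernel(X, scale))
    null = []
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
        null.append(centered_eigvals(rbf_kernel(Xp, scale)))
    thresh = np.percentile(null, 95, axis=0)
    return int((ev > thresh).sum())

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 1, (50, 5)) for mu in (0, 4)])  # two simulated clusters
print(kpa_order(X, scale=2.0))
```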

  11. COPS model estimates of LLEA availability near selected reactor sites

    International Nuclear Information System (INIS)

    Berkbigler, K.P.

    1979-11-01

    The COPS computer model has been used to estimate local law enforcement agency (LLEA) officer availability in the neighborhood of selected nuclear reactor sites. The results of these analyses are presented both in graphic and tabular form in this report

  12. Halo models of HI selected galaxies

    Science.gov (United States)

    Paul, Niladri; Choudhury, Tirthankar Roy; Paranjape, Aseem

    2018-06-01

    Modelling the distribution of neutral hydrogen (HI) in dark matter halos is important for studying galaxy evolution in the cosmological context. We use a novel approach to infer the HI-dark matter connection at the massive end (M_HI > 10^9.8 M_⊙) from radio HI emission surveys, using optical properties of low-redshift galaxies as an intermediary. In particular, we use a previously calibrated optical HOD describing the luminosity- and colour-dependent clustering of SDSS galaxies and describe the HI content using a statistical scaling relation between the optical properties and HI mass. This allows us to compute the abundance and clustering properties of HI-selected galaxies and compare with data from the ALFALFA survey. We apply an MCMC-based statistical analysis to constrain the free parameters related to the scaling relation. The resulting best-fit scaling relation identifies massive HI galaxies primarily with optically faint blue centrals, consistent with expectations from galaxy formation models. We compare the HI-stellar mass relation predicted by our model with independent observations from matched HI-optical galaxy samples, finding reasonable agreement. As a further application, we make some preliminary forecasts for future observations of HI and optical galaxies in the expected overlap volume of SKA and Euclid/LSST.

  13. Expert System Model for Educational Personnel Selection

    Directory of Open Access Journals (Sweden)

    Héctor A. Tabares-Ospina

    2013-06-01

    Staff selection is a difficult task due to the subjectivity that evaluation entails. This process can be complemented using a decision support system. This paper presents the implementation of an expert system to systematize the selection process for professors. The management of the software development is divided into four parts: requirements, design, implementation and commissioning. The proposed system models specific knowledge through relationships between evidence variables and objectives.

  14. Automated sample plan selection for OPC modeling

    Science.gov (United States)

    Casati, Nathalie; Gabrani, Maria; Viswanathan, Ramya; Bayraktar, Zikri; Jaiswal, Om; DeMaris, David; Abdo, Amr Y.; Oberschmidt, James; Krause, Andreas

    2014-03-01

    It is desired to reduce the time required to produce metrology data for calibration of Optical Proximity Correction (OPC) models while maintaining or improving how well the collected data represent the types of patterns that occur in real circuit designs. Previous work based on clustering in geometry and/or image parameter space has shown some benefit over strictly manual or intuitive selection, but leads to arbitrary pattern exclusion or selection which may not be the best representation of the product. Formulating the pattern selection as an optimization problem, which co-optimizes a number of objective functions reflecting modelers' insight and expertise, has been shown to produce models of equivalent quality to the traditional plan of record (POR) set, but in less time.

  15. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.

  16. Linkage of PRA models. Phase 1, Results

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the 'Linkage of PRA Models' project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas related to linking analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty of trying to provide a generic classification scheme to group plants based upon a particular plant attribute.

  17. Linkage of PRA models. Phase 1, Results

    International Nuclear Information System (INIS)

    Smith, C.L.; Knudsen, J.K.; Kelly, D.L.

    1995-12-01

    The goal of the Phase I work of the ''Linkage of PRA Models'' project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas related to linking analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty of trying to provide a generic classification scheme to group plants based upon a particular plant attribute

  18. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platform Kaggle. In this paper, we state and try to verify a set of qualitative hypotheses about predictive modelling, both in general and in the scope of data analysis competitions. To verify our hypotheses we look at previous competitions and their outcomes, use qualitative interviews with top performers from Kaggle, and use previous personal experiences from competing in Kaggle competitions. The stated hypotheses about feature engineering, ensembling, overfitting, model complexity and evaluation metrics give indications and guidelines on how to select a proper model for performing well.

  19. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  20. Selective bowel decontamination results in gram-positive translocation.

    Science.gov (United States)

    Jackson, R J; Smith, S D; Rowe, M I

    1990-05-01

    Colonization by enteric gram-negative bacteria with subsequent translocation is believed to be a major mechanism of infection in the critically ill patient. Selective bowel decontamination (SBD) has been used to control gram-negative infections by eliminating these potentially pathogenic bacteria while preserving anaerobic and other less pathogenic organisms. Infection with gram-positive organisms and anaerobes in two multivisceral transplant patients during SBD led us to investigate the effect of SBD on gut colonization and translocation. Twenty-four rats received enteral polymyxin E, tobramycin, amphotericin B, and parenteral cefotaxime for 7 days (PTA + CEF); 23 received parenteral cefotaxime alone (CEF); 19 received the enteral antibiotics alone (PTA); 21 controls received no antibiotics. Cecal homogenates, mesenteric lymph node (MLN), liver, and spleen were cultured. Only 8% of the PTA + CEF group had gram-negative bacteria in cecal culture vs 52% CEF, 84% PTA, and 100% in controls. Log enterococcal colony counts were higher in the PTA + CEF group (8.0 ± 0.9) vs controls (5.4 ± 0.4), P < 0.01. Translocation of Enterococcus to the MLN was significantly increased in the PTA + CEF group (67%) vs controls (0%), P < 0.01. SBD effectively eliminates gram-negative organisms from the gut in the rat model. Overgrowth and translocation of Enterococcus suggest that infection with gram-positive organisms may be a limitation of SBD.

  1. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  2. Tc-99 Adsorption on Selected Activated Carbons - Batch Testing Results

    Energy Technology Data Exchange (ETDEWEB)

    Mattigod, Shas V.; Wellman, Dawn M.; Golovich, Elizabeth C.; Cordova, Elsa A.; Smith, Ronald M.

    2010-12-01

    CH2M HILL Plateau Remediation Company (CHPRC) is currently developing a 200-West Area groundwater pump-and-treat system as the remedial action selected under the Comprehensive Environmental Response, Compensation, and Liability Act Record of Decision for Operable Unit (OU) 200-ZP-1. This report documents the results of treatability tests Pacific Northwest National Laboratory researchers conducted to quantify the ability of selected activated carbon products (or carbons) to adsorb technetium-99 (Tc-99) from 200-West Area groundwater. The Tc-99 adsorption performance of seven activated carbons (J177601 Calgon Fitrasorb 400, J177606 Siemens AC1230AWC, J177609 Carbon Resources CR-1240-AW, J177611 General Carbon GC20X50, J177612 Norit GAC830, J177613 Norit GAC830, and J177617 Nucon LW1230) was evaluated using water from well 299-W19-36. Four of the best-performing carbons (J177606 Siemens AC1230AWC, J177609 Carbon Resources CR-1240-AW, J177611 General Carbon GC20X50, and J177613 Norit GAC830) were selected for batch isotherm testing. The batch isotherm tests on the four selected carbons indicated that under lower nitrate concentration conditions (382 mg/L), Kd values ranged from 6,000 to 20,000 mL/g. In comparison, under higher nitrate (750 mg/L) conditions there was a measurable decrease in Tc-99 adsorption, with Kd values ranging from 3,000 to 7,000 mL/g. The adsorption data fit both the Langmuir and the Freundlich equations. Supplemental tests were conducted using the two carbons that demonstrated the highest adsorption capacity to resolve the question of the best-fit isotherm. These tests indicated that Langmuir isotherms provided the best fit for Tc-99 adsorption under low nitrate concentration conditions. At the design-basis Tc-99 concentration of 0.865 µg/L (14,700 pCi/L), the predicted Kd values obtained using the Langmuir isotherm constants were 5,980 mL/g and 6,870 mL/g for the two carbons. These Kd values did not meet the target Kd value of 9,000 mL/g.
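
    The isotherm-fitting step described above amounts to estimating Langmuir parameters from batch data and reading off Kd at the design-basis concentration. A sketch with scipy is below; the data points are hypothetical stand-ins (not the report's measurements), and only the design-basis concentration of 0.865 µg/L is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, b):
    """Langmuir isotherm: sorbed amount q = qmax*b*C / (1 + b*C)."""
    return qmax * b * C / (1.0 + b * C)

# Hypothetical batch data: equilibrium concentration C (ug/L), sorbed q (ug/g).
C = np.array([0.1, 0.5, 1.0, 5.0, 10.0, 50.0])
q = np.array([0.68, 3.0, 5.2, 12.7, 15.6, 18.9])

(qmax, b), _ = curve_fit(langmuir, C, q, p0=(10.0, 0.1))
Cd = 0.865                                 # design-basis Tc-99 concentration, ug/L
Kd = 1000.0 * langmuir(Cd, qmax, b) / Cd   # (ug/g)/(ug/mL) -> mL/g
print(f"qmax = {qmax:.1f} ug/g, b = {b:.2f} L/ug, Kd({Cd} ug/L) = {Kd:.0f} mL/g")
```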

  3. Subset selection for an epsilon-best population : efficiency results

    NARCIS (Netherlands)

    Laan, van der P.

    1991-01-01

    An almost-best or ε-best population is defined as a population with location parameter at a distance not larger than ε (≥ 0) from the best population (the one with the largest value of the location parameter). For the subset selection approach, tables with the relative efficiency of selecting an…

  4. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the M&O CRWMS has the following responsibilities: To provide overall management and integration of modeling activities. To provide a framework for focusing modeling and model development. To identify areas that require increased or decreased emphasis. To ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process. It consists of a thorough review of existing models, testing of models which best fit the established requirements, and making recommendations for future development that should be conducted. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically in the future.

  5. Expatriates Selection: An Essay of Model Analysis

    Directory of Open Access Journals (Sweden)

    Rui Bártolo-Ribeiro

    2015-03-01

    Business expansion into geographical areas with cultures different from those in which organizations were created and developed leads to the expatriation of employees to those destinations. Recruitment and selection procedures for expatriates do not always achieve the intended success, leading to an early return of these professionals with consequent organizational disorder. In this study, several articles published in the last five years were analyzed in order to identify the dimensions most frequently mentioned in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of expatriates' adaptation to the new cultural contexts of the same organization were studied according to the KSAOs model. Few references to the Knowledge, Skills and Abilities dimensions were found in the analyzed papers. There was a strong predominance of the evaluation of Other Characteristics, and more importance was given to dispositional factors than to situational factors in promoting the integration of expatriates.

  6. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models, and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  7. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon; Maadooliat, Mehdi; Arellano-Valle, Reinaldo B.; Genton, Marc G.

    2015-01-01

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models, and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  8. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  9. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    As a pivotal tool for building interpretable models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like the lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting the lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in the lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
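
    A sketch of the subsampling loop is below, assuming the lifelines package for an L1-penalized Cox fit and its bundled Rossi recidivism data. The paper's procedure for choosing the regularization region Λ and λmin is simplified here to a single fixed penalty, so this only illustrates the selection-frequency idea.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # recidivism data: 'week' = survival time, 'arrest' = event
covs = [c for c in df.columns if c not in ("week", "arrest")]

def stability_selection(df, lam, n_sub=50, frac=0.5, seed=0):
    """Selection frequency of each covariate over subsampled lasso-Cox fits."""
    rng = np.random.default_rng(seed)
    freq = pd.Series(0.0, index=covs)
    for _ in range(n_sub):
        sub = df.sample(frac=frac, random_state=int(rng.integers(1_000_000)))
        cph = CoxPHFitter(penalizer=lam, l1_ratio=1.0)  # pure L1 (lasso) penalty
        cph.fit(sub, duration_col="week", event_col="arrest")
        freq += (cph.params_.abs() > 1e-8).astype(float)
    return freq / n_sub

freq = stability_selection(df, lam=0.1)
print(freq[freq >= 0.6].sort_values(ascending=False))  # the 'stable' variable set
```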

  10. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.

  11. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    Sample selection and taste correlation are two important aspects of discrete choice transport modelling. One question investigated is whether mode choice operates as a self-selection mechanism in the estimation of the value of travel time. The results show that self-selection can at least partly explain counterintuitive results in value of travel time estimation. However, the results also point at the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture such heterogeneity, and several contributions on the role of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport; given a limited dataset, the approach taken is to use theory on the value of travel time as guidance. A further contribution examines the question for a broader class of models, and it is shown that the original result may be somewhat generalised.

  12. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  13. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
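
    For reference, the classical two-step estimator that the authors robustify can be sketched in a few lines with statsmodels: a first-stage probit for the selection equation yields the inverse Mills ratio, which is then added as a regressor in the second-stage OLS. The simulated data and coefficients are arbitrary; the robust procedure proposed in the paper is not implemented here.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
z = rng.normal(size=n)  # instrument entering only the selection equation
u, e = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
selected = (0.5 + x + z + u) > 0   # selection equation
y = 1.0 + 2.0 * x + e              # outcome, observed only if selected

# Step 1: probit for selection; the linear predictor gives the inverse Mills ratio.
Xs = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(selected.astype(float), Xs).fit(disp=0)
xb = Xs @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected subsample, augmented with the inverse Mills ratio.
Xo = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(y[selected], Xo).fit()
print(ols.params)  # approx [1, 2, rho*sigma]; plain OLS on the subsample is biased
```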

  14. Results of the naive quark model

    International Nuclear Information System (INIS)

    Gignoux, C.

    1987-10-01

    The hypotheses and limits of the naive quark model are recalled, and results on nucleon-nucleon scattering and possible multiquark states are presented. The results show that Roper resonances do not emerge from this model. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not found in the model. The lack of mesons shows up in the absence of a far-reaching force. However, the model does have strengths: it is free from center-of-mass spuriousness, it allows a democratic handling of flavor, it has few parameters, and its predictions are very good.

  15. Item selection via Bayesian IRT models.

    Science.gov (United States)

    Arima, Serena

    2015-02-10

    With reference to a questionnaire that aimed to assess the quality of life for dysarthric speakers, we investigate the usefulness of a model-based procedure for reducing the number of items. We propose a mixed cumulative logit model, which is known in the psychometrics literature as the graded response model: responses to different items are modelled as a function of individual latent traits and as a function of item characteristics, such as their difficulty and their discrimination power. We jointly model the discrimination and the difficulty parameters by using a k-component mixture of normal distributions. Mixture components correspond to disjoint groups of items. Items that belong to the same groups can be considered equivalent in terms of both difficulty and discrimination power. According to decision criteria, we select a subset of items such that the reduced questionnaire is able to provide the same information that the complete questionnaire provides. The model is estimated by using a Bayesian approach, and the choice of the number of mixture components is justified according to information criteria. We illustrate the proposed approach on the basis of data that are collected for 104 dysarthric patients by local health authorities in Lecce and in Milan. Copyright © 2014 John Wiley & Sons, Ltd.

  16. Science and Information Conference 2015 : Extended and Selected Results

    CERN Document Server

    Kapoor, Supriya; Bhatia, Rahul

    2016-01-01

    This book is a collection of extended chapters based on selected papers that were published in the proceedings of the Science and Information (SAI) Conference 2015. It contains twenty-one chapters in the field of Computational Intelligence which received highly recommended feedback during the SAI Conference 2015 review process. During the three-day event, 260 scientists, technology developers, young researchers (including PhD students), and industrial practitioners from 56 countries engaged intensively in presentations, demonstrations, open panel sessions and informal discussions.

  17. Factors influencing creep model equation selection

    International Nuclear Information System (INIS)

    Holdsworth, S.R.; Askins, M.; Baker, A.; Gariboldi, E.; Holmstroem, S.; Klenk, A.; Ringel, M.; Merckling, G.; Sandstrom, R.; Schwienheer, M.; Spigarelli, S.

    2008-01-01

    During the course of the EU-funded Advanced-Creep Thematic Network, ECCC-WG1 reviewed the applicability and effectiveness of a range of model equations to represent the accumulation of creep strain in various engineering alloys. In addition to considering the experience of network members, the ability of several models to describe the deformation characteristics of large single- and multi-cast collations of ε(t,T,σ) creep curves has been evaluated in an intensive assessment inter-comparison activity involving three steels: 2¼CrMo (P22), 9CrMoVNb (Steel-91) and 18Cr13NiMo (Type-316). The choice of the most appropriate creep model equation for a given application depends not only on the high-temperature deformation characteristics of the material under consideration, but also on the characteristics of the dataset, the number of casts for which creep curves are available, and the strain regime for which an analytical representation is required. The paper focuses on the factors which can influence creep model selection and the model-fitting approach for multi-source, multi-cast datasets.

  18. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
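
    As an illustration of the graphical-interpretation advice, the snippet below fits a multinomial logit with statsmodels on simulated entry-mode data (the three outcome labels and the single covariate are invented) and reports predicted probabilities across the covariate range rather than raw coefficients, which are only log-odds relative to the base category.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
size = rng.normal(size=n)  # hypothetical firm-size covariate
util = np.column_stack([np.zeros(n), 0.8 * size - 0.2, -0.6 * size + 0.1])
p = np.exp(util) / np.exp(util).sum(1, keepdims=True)
mode = np.array([rng.choice(3, p=pi) for pi in p])  # 0=export, 1=JV, 2=wholly owned

X = sm.add_constant(size)
fit = sm.MNLogit(mode, X).fit(disp=0)

# Report predicted probabilities over the covariate range, not raw coefficients.
grid = sm.add_constant(np.linspace(-2, 2, 5))
print(pd.DataFrame(fit.predict(grid), columns=["export", "JV", "wholly owned"]))
```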

  19. Selected results from the Mark II at SPEAR

    International Nuclear Information System (INIS)

    Scharre, D.L.

    1980-06-01

    Recent results on radiative transitions from the ψ(3095), charmed meson decay, and the Cabibbo-suppressed decay τ → K*ν_τ are reviewed. The results come primarily from the Mark II experiment at SPEAR, but preliminary results from the Crystal Ball experiment on ψ radiative transitions are also discussed

  20. Model selection for contingency tables with algebraic statistics

    NARCIS (Netherlands)

    Krampe, A.; Kuhnt, S.; Gibilisco, P.; Riccimagno, E.; Rogantin, M.P.; Wynn, H.P.

    2009-01-01

    Goodness-of-fit tests based on chi-square approximations are commonly used in the analysis of contingency tables. Results from algebraic statistics combined with MCMC methods provide alternatives to the chi-square approximation. However, within a model selection procedure usually a large number of

  1. Short-Run Asset Selection using a Logistic Model

    Directory of Open Access Journals (Sweden)

    Walter Gonçalves Junior

    2011-06-01

    Investors constantly look for significant predictors and accurate models to forecast future results, whose occasional efficacy ends up being neutralized by market efficiency. Regardless, such predictors are widely used in the search for better (and more unique) perceptions. This paper aims to investigate to what extent some of the most notorious indicators have discriminatory power to select stocks, and whether it is feasible with such variables to build models that could anticipate stocks with good performance. To that end, logistic regressions were conducted on stocks traded at Bovespa using the selected indicators as explanatory variables. The Bovespa Index output, liquidity, the Sharpe ratio, ROE, MB, size and age proved to be significant predictors. Half-year logistic models were also examined and adjusted in order to check their potentially acceptable discriminatory power for asset selection.
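
    A compact sketch of the underlying procedure: fit a logistic regression on indicator variables and select stocks whose predicted probability of outperforming exceeds a cutoff. The six synthetic indicators merely stand in for the predictors named above (liquidity, Sharpe ratio, ROE, MB, size, age); none of the paper's Bovespa data are used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 800
X = rng.normal(size=(n, 6))  # hypothetical half-year indicator panel
logit = X @ np.array([0.6, 0.5, 0.4, -0.3, 0.2, 0.1]) - 0.1
good = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = outperformed

Xtr, Xte, ytr, yte = train_test_split(X, good, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(Xtr, ytr)
picks = clf.predict_proba(Xte)[:, 1] > 0.6  # select stocks with high estimated odds
print(f"hit rate among picks: {yte[picks].mean():.2f} vs base rate {yte.mean():.2f}")
```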

  2. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machines (SVMs) have been proved to be a powerful tool for face recognition. The generalization capacity of an SVM depends on a model with optimal hyperparameters. The computational cost of SVM model selection makes its application to face recognition difficult. In order to overcome this shortcoming, we utilize the advantages of uniform design (space-filling designs and uniform scattering theory) to seek optimal SVM hyperparameters. We then propose a face recognition scheme based on an SVM with the optimal model, obtained by replacing the grid and gradient-based method with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
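
    The replacement of grid search by a space-filling design can be imitated with a low-discrepancy sequence: evaluate cross-validated accuracy at quasi-uniform points of the (C, gamma) plane and keep the best. The Sobol sequence from scipy is used below as a stand-in for the uniform design tables of the paper, and scikit-learn's digits data replaces the face databases.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]

# Space-filling design over log10(C) in [-2, 3] and log10(gamma) in [-4, 0].
design = qmc.Sobol(d=2, seed=0).random(16)
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 0.0])
points = 10 ** (lo + design * (hi - lo))

# Evaluate each design point by 3-fold cross-validation and keep the best.
best = max(points, key=lambda p: cross_val_score(SVC(C=p[0], gamma=p[1]), X, y, cv=3).mean())
print("selected (C, gamma):", best)
```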

  3. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    Science.gov (United States)

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets. Model performance was high for both approaches (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable…

  4. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, each of which has a varying activation (or signal support) resulting largely from initial stimulus recognition. We present a thorough case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
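
    A generic accumulator race of the kind the authors argue for can be simulated in a few lines: each lexical candidate accrues noisy evidence at its own drift rate, and the first accumulator to reach threshold determines the selected word and the response time. The drift values, noise level and threshold here are arbitrary illustration choices, not the specific model fitted in the paper.

```python
import numpy as np

def accumulator_race(drifts, threshold=1.0, noise=0.3, dt=0.01, rng=None):
    """Race of independent accumulators: each lexical candidate gathers noisy
    evidence at its own drift rate; the first to reach threshold is selected."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(len(drifts))
    t = 0.0
    while x.max() < threshold:
        x += np.array(drifts) * dt + noise * np.sqrt(dt) * rng.normal(size=len(drifts))
        t += dt
    return int(x.argmax()), t  # winning word and response time

# Target word vs two activated competitors, e.g. "cat" vs "dog", "cap".
choices, rts = zip(*(accumulator_race([1.2, 0.6, 0.8], rng=np.random.default_rng(s))
                     for s in range(500)))
print("P(target):", np.mean(np.array(choices) == 0), " mean RT:", np.mean(rts))
```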

  5. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
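
    A minimal p >> n example of the kind discussed: the LASSO with cross-validated regularization recovers a sparse support from many irrelevant variables. The dimensions and coefficients below are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p, k = 100, 1000, 5  # p >> n, with k truly active variables
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = [3, -2, 2, -1.5, 1]
y = X @ beta + rng.normal(scale=0.5, size=n)

lasso = LassoCV(cv=5).fit(X, y)        # regularization chosen by cross-validation
support = np.flatnonzero(lasso.coef_)  # ideally recovers indices 0..4
print("selected variables:", support)
```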

  6. Selective Oxidation of Lignin Model Compounds.

    Science.gov (United States)

    Gao, Ruili; Li, Yanding; Kim, Hoon; Mobley, Justin K; Ralph, John

    2018-05-02

    Lignin, the planet's most abundant renewable source of aromatic compounds, is difficult to degrade efficiently to welldefined aromatics. We developed a microwave-assisted catalytic Swern oxidation system using an easily prepared catalyst, MoO 2 Cl 2 (DMSO) 2 , and DMSO as the solvent and oxidant. It demonstrated high efficiency in transforming lignin model compounds containing the units and functional groups found in native lignins. The aromatic ring substituents strongly influenced the selectivity of β-ether phenolic dimer cleavage to generate sinapaldehyde and coniferaldehyde, monomers not usually produced by oxidative methods. Time-course studies on two key intermediates provided insight into the reaction pathway. Owing to the broad scope of this oxidation system and the insight gleaned with regard to its mechanism, this strategy could be adapted and applied in a general sense to the production of useful aromatic chemicals from phenolics and lignin. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Selected spectroscopic results on element 115 decay chains

    International Nuclear Information System (INIS)

    Rudolph, D.; Forsberg, U.; Golubev, P.; Sarmiento, L.G.; Yakushev, A.; Andersson, L.L.; Di Nitto, A.; Duellmann, Ch.E.; Gates, J.M.; Gregorich, K.E.

    2015-01-01

    Thirty correlated α-decay chains were observed in an experiment studying the fusion-evaporation reaction 48 Ca + 243 Am at the GSI Helmholtzzentrum fuer Schwerionenforschung. The decay characteristics of the majority of these 30 chains are consistent with previous observations and interpretations of such chains to originate from isotopes of element Z = 115. High-resolution α-photon coincidence spectroscopy in conjunction with comprehensive Monte-Carlo simulations allow to propose excitation schemes of atomic nuclei of the heaviest elements, thereby probing nuclear structure models near the 'Island of Stability' with unprecedented experimental precision. (author)

  8. Model Selection in Historical Research Using Approximate Bayesian Computation

    Science.gov (United States)

    Rubio-Campillo, Xavier

    2016-01-01

    Computational models are increasingly being used to study historical dynamics. This new trend, which could be named Model-Based History, makes use of recently published datasets and innovative quantitative methods to improve our understanding of past societies based on their written sources. The extensive use of formal models allows historians to re-evaluate hypotheses formulated decades ago and still subject to debate due to the lack of an adequate quantitative framework. The initiative has the potential to transform the discipline if it solves the challenges posed by the study of historical dynamics. These difficulties stem from the complexities of modelling social interaction and from the methodological issues raised by the evaluation of formal models against data with low sample size, high variance and strong fragmentation. As a case study, this work examines an alternative approach to this evaluation based on a Bayesian-inspired model selection method. The validity of the classical Lanchester's laws of combat is examined against a dataset comprising over a thousand battles spanning 300 years. Four variations of the basic equations are discussed, including the three most common formulations (linear, squared, and logarithmic) and a new variant introducing fatigue. Approximate Bayesian Computation is then used to infer both parameter values and model selection via Bayes factors. Results indicate decisive evidence favouring the new fatigue model. The interpretation of both parameter estimation and model selection provides new insights into the factors guiding the evolution of warfare. At a methodological level, the case study shows how model selection methods can be used to guide historical research through the comparison between existing hypotheses and empirical evidence. PMID:26730953
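
    The inference scheme can be caricatured with rejection ABC: draw a model and parameters from the prior, simulate a battle summary, accept draws close to the observed value, and approximate the Bayes factor by the ratio of acceptance counts. The dynamics below are schematic two-model Lanchester variants, the observed summary is invented, and the paper's fatigue model and dataset are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(model, red, blue, kappa, steps=50, dt=0.1):
    """Casualty dynamics under two schematic Lanchester variants."""
    r, b = float(red), float(blue)
    for _ in range(steps):
        if model == "linear":  # attrition proportional to the product of strengths
            r, b = r - kappa * r * b * dt, b - kappa * r * b * dt
        else:                  # "squared": attrition proportional to opposing strength
            r, b = r - kappa * b * dt, b - kappa * r * dt
        r, b = max(r, 0.0), max(b, 0.0)
    return r - b               # summary statistic: final strength difference

observed = 22.0                # hypothetical observed battle summary
counts = {"linear": 0, "squared": 0}
for _ in range(20000):         # ABC rejection with a uniform prior over models
    m = rng.choice(["linear", "squared"])
    kappa = rng.uniform(0.001, 0.05)
    if abs(simulate(m, 100, 80, kappa) - observed) < 2.0:
        counts[m] += 1
print(counts, " Bayes factor ~", counts["squared"] / max(counts["linear"], 1))
```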

  9. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than for a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
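
    The Monte Carlo recovery experiment has a simple generic template: repeatedly simulate from a known model, fit competing candidates, and count how often each criterion picks the truth. The sketch below uses plain nested OLS models rather than the paper's asymmetric price transmission specifications, so it only illustrates the protocol and the typical AIC/BIC contrast.

```python
import numpy as np
import statsmodels.api as sm

def ic_recovery(n=200, reps=300, seed=0):
    """How often AIC vs BIC picks the true (smaller) model when the extra
    regressor is irrelevant. Generic illustration of the recovery protocol."""
    rng = np.random.default_rng(seed)
    wins = {"aic": 0, "bic": 0}
    for _ in range(reps):
        x = rng.normal(size=(n, 2))
        y = 1 + 2 * x[:, 0] + rng.normal(size=n)  # true model uses x0 only
        small = sm.OLS(y, sm.add_constant(x[:, :1])).fit()
        big = sm.OLS(y, sm.add_constant(x)).fit()
        wins["aic"] += small.aic < big.aic
        wins["bic"] += small.bic < big.bic
    return {k: v / reps for k, v in wins.items()}

# BIC's heavier penalty typically recovers the true sparse model more often.
print(ic_recovery())
```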

  10. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  11. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed

  12. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. This model therefore finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use the HMM for stock selection. We first use the HMM to make monthly regime predictions for four macroeconomic variables: inflation (consumer price index, CPI), the industrial production index (INDPRO), a stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods during which the four variables had regimes similar to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics were well rewarded, and we assign scores and corresponding weights to each of the stock characteristics. A composite score for each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, the S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
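
    The regime-prediction step can be sketched as follows, assuming the hmmlearn package: fit a two-state Gaussian HMM to one macro series, decode the historical regimes, and read the one-step-ahead regime distribution from the transition matrix row of the current state. The synthetic series is a stand-in for the macro variables named above.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Hypothetical monthly series standing in for one macro variable (VIX-like).
calm, stressed = rng.normal(15, 2, 120), rng.normal(35, 6, 40)
series = np.concatenate([calm[:80], stressed, calm[80:]]).reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
hmm.fit(series)
regimes = hmm.predict(series)            # decoded historical regimes
next_probs = hmm.transmat_[regimes[-1]]  # one-step-ahead regime distribution
print("current regime:", regimes[-1], " P(next regime):", np.round(next_probs, 2))
```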

  13. Psyche Mission: Scientific Models and Instrument Selection

    Science.gov (United States)

    Polanskey, C. A.; Elkins-Tanton, L. T.; Bell, J. F., III; Lawrence, D. J.; Marchi, S.; Park, R. S.; Russell, C. T.; Weiss, B. P.

    2017-12-01

    NASA has chosen to explore (16) Psyche with their 14th Discovery-class mission. Psyche is a 226-km diameter metallic asteroid hypothesized to be the exposed core of a planetesimal that was stripped of its rocky mantle by multiple hit and run collisions in the early solar system. The spacecraft launch is planned for 2022 with arrival at the asteroid in 2026 for 21 months of operations. The Psyche investigation has five primary scientific objectives: A. Determine whether Psyche is a core, or if it is unmelted material. B. Determine the relative ages of regions of Psyche's surface. C. Determine whether small metal bodies incorporate the same light elements as are expected in the Earth's high-pressure core. D. Determine whether Psyche was formed under conditions more oxidizing or more reducing than Earth's core. E. Characterize Psyche's topography. The mission's task was to select the appropriate instruments to meet these objectives. However, exploring a metal world, rather than one made of ice, rock, or gas, requires development of new scientific models for Psyche to support the selection of the appropriate instruments for the payload. If Psyche is indeed a planetary core, we expect that it should have a detectable magnetic field. However, the strength of the magnetic field can vary by orders of magnitude depending on the formational history of Psyche. The implications of both the extreme low-end and the high-end predictions impact the magnetometer and mission design. For the imaging experiment, what can the team expect for the morphology of a heavily impacted metal body? Efforts are underway to further investigate the differences in crater morphology between high velocity impacts into metal and rock to be prepared to interpret the images of Psyche when they are returned. Finally, elemental composition measurements at Psyche using nuclear spectroscopy encompass a new and unexplored phase space of gamma-ray and neutron measurements. We will present some end

  14. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    This paper presents an integrated supplier selection and inventory management approach using a grey relationship model (GRM) as well as a multi-objective decision-making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
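
    The grey relational ranking step can be sketched as follows: normalise each criterion (benefit or cost), measure each supplier's distance to the ideal sequence, and average the grey relational coefficients into a grade per supplier. The supplier scores and criteria below are hypothetical, and the standard distinguishing coefficient rho = 0.5 is assumed; this is not the paper's exact formulation.

```python
import numpy as np

def grey_relational_grades(matrix, benefit, rho=0.5):
    """Grey relational analysis: rank alternatives (rows) against an ideal
    reference built from the best value of each criterion (column)."""
    m = np.asarray(matrix, float)
    # Normalise: larger-is-better for benefit criteria, smaller-is-better otherwise.
    norm = np.where(benefit,
                    (m - m.min(0)) / (m.max(0) - m.min(0)),
                    (m.max(0) - m) / (m.max(0) - m.min(0)))
    delta = np.abs(1.0 - norm)  # distance to the ideal sequence (all ones)
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef.mean(axis=1)    # grey relational grade per supplier

# Hypothetical suppliers x criteria (price, quality, delivery days); price is a cost.
scores = [[100, 0.92, 5], [120, 0.97, 3], [90, 0.85, 7]]
grades = grey_relational_grades(scores, benefit=np.array([False, True, True]))
print("ranking (best first):", np.argsort(-grades))
```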

  15. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  16. Selected Test Results from the Encell Technology Nickel Iron Battery

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Summer Kamal Rhodes [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Advanced Power Sources R& D; Baca, Wes Edmund [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Advanced Power Sources R& D; Avedikian, Kristan [Encell Technology, Alachua, FL (United States)

    2014-09-01

    The performance of the Encell Nickel Iron (NiFe) battery was measured. Tests included capacity, capacity as a function of rate, capacity as a function of temperature, charge retention (28-day), efficiency, accelerated life projection, and water refill evaluation. The goal of this work was to evaluate the general performance of the Encell NiFe battery technology for stationary applications and demonstrate the chemistry's capabilities in extreme conditions. Test results indicate that the Encell NiFe battery technology can provide power levels up to the 6C discharge rate and ampere-hour efficiency above 70%. In summary, the Encell batteries have met the performance metrics established by the manufacturer. Long-term cycle tests are not included in this report; a cycle test at elevated temperature was run, funded by the manufacturer, which Encell uses to predict long-term cycling performance, and which passed their prescribed metrics.

  17. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the economy of Brazil. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can build partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  18. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to declines in agricultural productivity and contributes to declines in water quality. The models currently in wide use for determining soil erosion for management purposes in agriculture focus on long-term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans or as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration, both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA, with a view to modelling rainfall erosion in areas subject to climate change.
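
    For orientation, the USLE family mentioned above rests on a multiplicative factor equation, so a climate-driven change in rainfall erosivity scales the predicted loss directly. The factor values below are hypothetical and chosen only to show the mechanics.

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: average annual soil loss A = R*K*LS*C*P
    (t/ha/yr when the factors carry the usual metric units)."""
    return R * K * LS * C * P

# Hypothetical factors for a bare fallow plot (C = P = 1 by convention);
# the scenario assumes a 30% increase in rainfall erosivity R.
baseline = usle_soil_loss(R=2000, K=0.035, LS=1.2, C=1.0, P=1.0)
scenario = usle_soil_loss(R=2600, K=0.035, LS=1.2, C=1.0, P=1.0)
print(f"annual soil loss: {baseline:.0f} -> {scenario:.0f} t/ha/yr")
```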

  19. INTRAVAL test case 1b - modelling results

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1991-07-01

    This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with a ²³³U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual porosity media models taking into account advection and dispersion in water conducting zones (either tubelike veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out with a Marquardt-Levenberg procedure yielding error estimates of the parameters, correlation coefficients and also, as a measure for the goodness of the fits, the minimum values of the χ² merit function. The effects of different upstream boundary conditions are demonstrated and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual porosity media models describe the experimental data significantly better than the single porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm which is responsible for the tailing part of the break-through curves. The extracted parameter values for some models, for both the linear and non-linear (Freundlich) sorption isotherms, are consistent with the results of independent static batch sorption experiments. From the fits, it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
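
    The fitting step described above can be illustrated schematically: a Levenberg-Marquardt least-squares fit that returns parameter error estimates and a χ² goodness-of-fit value. The model below is a generic two-parameter breakthrough curve fitted to synthetic data, not the actual dual-porosity transport model.

```python
# Schematic Marquardt-Levenberg fit with parameter error estimates and a
# chi-square merit value. The model is a toy sigmoidal breakthrough curve,
# not the dual-porosity transport model used in the report.
import numpy as np
from scipy.optimize import curve_fit  # default method is Levenberg-Marquardt

def breakthrough(t, t50, spread):
    """Toy sigmoidal breakthrough curve."""
    return 1.0 / (1.0 + np.exp(-(t - t50) / spread))

t = np.linspace(0, 100, 50)
sigma = 0.02                          # assumed measurement noise
rng = np.random.default_rng(0)
c_obs = breakthrough(t, 40.0, 8.0) + rng.normal(0, sigma, t.size)

popt, pcov = curve_fit(breakthrough, t, c_obs, p0=(30.0, 5.0))
perr = np.sqrt(np.diag(pcov))         # parameter error estimates
resid = c_obs - breakthrough(t, *popt)
chi2 = np.sum((resid / sigma) ** 2)   # chi-square merit function
print(popt, perr, chi2)
```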

  20. Modeling and Solving the Liner Shipping Service Selection Problem

    DEFF Research Database (Denmark)

    Karsten, Christian Vad; Balakrishnan, Anant

    We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes...... to accurately model transshipment costs and incorporate routing policies such as maximum transit time, maritime cabotage rules, and operational alliances. Our hop-indexed arc flow model is smaller and easier to solve than path flow models. We outline a preprocessing procedure that exploits both the routing...... requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB....

  1. A new Russell model for selecting suppliers

    NARCIS (Netherlands)

    Azadi, Majid; Shabani, Amir; Farzipoor Saen, Reza

    2014-01-01

    Recently, supply chain management (SCM) has been considered by many researchers. Supplier evaluation and selection plays a significant role in establishing an effective SCM. One of the techniques that can be used for selecting suppliers is data envelopment analysis (DEA). In some situations, to
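
    Although the abstract concerns a Russell-measure DEA variant, the basic mechanics of scoring suppliers with DEA can be sketched with the standard input-oriented CCR model solved as a linear program. All supplier data below are invented for illustration.

```python
# Hedged sketch of input-oriented CCR DEA efficiency scores via a linear
# program. The Russell model in the paper differs; this only illustrates
# the basic DEA mechanics. Data are made up.
import numpy as np
from scipy.optimize import linprog

X = np.array([[4.0, 2.0], [6.0, 3.0], [8.0, 5.0]])   # inputs, one row per supplier
Y = np.array([[60.0], [70.0], [80.0]])               # outputs

def ccr_efficiency(o):
    n = X.shape[0]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.c_[-X[o], X.T]
    b_in = np.zeros(X.shape[1])
    # Outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for o in range(3):
    print(f"supplier {o}: efficiency = {ccr_efficiency(o):.3f}")
```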

  2. Modeling shape selection of buckled dielectric elastomers

    Science.gov (United States)

    Langham, Jacob; Bense, Hadrien; Barkley, Dwight

    2018-02-01

    A dielectric elastomer whose edges are held fixed will buckle, given a sufficiently large applied voltage, resulting in a nontrivial out-of-plane deformation. We study this situation numerically using a nonlinear elastic model which decouples two of the principal electrostatic stresses acting on an elastomer: normal pressure due to the mutual attraction of oppositely charged electrodes and tangential shear ("fringing") due to repulsion of like charges at the electrode edges. These enter via physically simplified boundary conditions that are applied in a fixed reference domain using a nondimensional approach. The method is valid for small to moderate strains and is straightforward to implement in a generic nonlinear elasticity code. We validate the model by directly comparing the simulated equilibrium shapes with experiment. For circular electrodes, which buckle axisymmetrically, the shape of the deflection profile is captured. Annular electrodes of different widths produce azimuthal ripples with wavelengths that match our simulations. In this case, it is essential to compute multiple equilibria because the first model solution obtained by the nonlinear solver (Newton's method) is often not the energetically favored state. We address this using a numerical technique known as "deflation." Finally, we observe the large number of different solutions that may be obtained for the case of a long rectangular strip.
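
    The deflation technique mentioned above can be illustrated on a scalar toy problem: after each converged Newton solve, the residual is divided by the distance to every known solution, so subsequent solves are steered to new equilibria. This is only a schematic of the idea, not the elastomer PDE solver.

```python
# Toy sketch of Newton's method with deflation for finding multiple
# equilibria, applied to a scalar problem rather than the elastomer PDE.

def deflated(f, known_roots):
    """Wrap f so its residual blows up near already-found roots."""
    def g(z):
        val = f(z)
        for r in known_roots:
            val /= max(abs(z - r), 1e-12)   # deflation factors
        return val
    return g

def newton(g, x, tol=1e-9, max_iter=200):
    for _ in range(max_iter):
        h = 1e-7
        dg = (g(x + h) - g(x - h)) / (2 * h)  # numerical derivative
        step = g(x) / dg
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

f = lambda x: x**3 - x       # equilibria at -1, 0, +1
roots = []
for _ in range(3):           # same initial guess every time
    roots.append(newton(deflated(f, roots), 0.6))
print([round(r, 6) for r in roots])   # deflation finds a new root each solve
```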

  3. An integrated model for supplier selection process

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In today's highly competitive manufacturing environment, the supplier selection process has become one of the crucial activities in supply chain management. In order to select the best supplier(s) it is necessary not only to continuously track and benchmark the performance of suppliers but also to make a trade-off between tangible and intangible factors, some of which may conflict. In this paper an integration of case-based reasoning (CBR), analytic network process (ANP) and linear programming (LP) is proposed to solve the supplier selection problem.

  4. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason being the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
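
    A minimal sketch of the Monte Carlo uncertainty-propagation step is given below. The "process model" is a made-up power-law surrogate for melt-pool depth standing in for the calibrated 3D finite-volume model, and all parameter distributions and spec limits are hypothetical.

```python
# Hedged sketch of Monte Carlo uncertainty analysis for a process output.
# The surrogate below is invented for illustration; the study used a
# calibrated 3D finite-volume thermal model.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical processing-parameter uncertainties (mean, std).
power = rng.normal(200.0, 10.0, n)     # laser power, W
speed = rng.normal(1200.0, 60.0, n)    # scan speed, mm/s

def melt_depth_surrogate(p, v):
    """Toy surrogate: depth grows with power, shrinks with speed."""
    return 40.0 * (p / 200.0) ** 0.8 * (1200.0 / v) ** 0.6   # micrometres

depth = melt_depth_surrogate(power, speed)
lo, hi = np.percentile(depth, [2.5, 97.5])      # predicted output range
spec_ok = (depth > 30.0) & (depth < 50.0)       # hypothetical spec window
print(f"95% range: [{lo:.1f}, {hi:.1f}] um, reliability = {spec_ok.mean():.3f}")
```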

  5. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity and assumes productivity to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can easily be applied to both manufacturing and service industries.
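
    A toy version of this selection problem, shrunk to four techniques with invented gains and costs, can be written as a small mixed integer program (here with the PuLP modelling library):

```python
# Toy technique-selection MIP with hypothetical gains and costs; the
# paper's model covers 54 techniques and a four-stage productivity cycle.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

gains = {"new_equipment": 0.12, "training": 0.05, "layout": 0.07, "5S": 0.03}
costs = {"new_equipment": 80.0, "training": 20.0, "layout": 35.0, "5S": 5.0}
budget = 100.0

prob = LpProblem("technique_selection", LpMaximize)
x = {t: LpVariable(f"x_{t}", cat=LpBinary) for t in gains}

# Productivity is modelled as linear in the selected techniques.
prob += lpSum(gains[t] * x[t] for t in gains)
prob += lpSum(costs[t] * x[t] for t in costs) <= budget

prob.solve()
print([t for t in gains if value(x[t]) == 1])
```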

  6. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational transitions to be correlated across transitions. We use simulated and real data to illustrate how the BPSM improves on the traditional Mare model in terms of correcting for selection bias and providing credible estimates of the effect of family background on educational success. We conclude that models which account for selection on unobserved variables and high-quality data are both required in order to estimate credible educational transition models.

  7. Discussion of gas trade model (GTM) results

    International Nuclear Information System (INIS)

    Manne, A.

    1989-01-01

    This is in response to your invitation to comment on the structure of GTM and also on the differences between its results and those of other models participating in EMF9. First, a word on the structure. GTM was originally designed to provide both regional and sectoral detail within the North American market for natural gas at a single point in time, e.g. the year 2000. It is a spatial equilibrium model in which a solution is obtained by maximizing a nonlinear function, the sum of consumers' and producers' surplus. Since transport costs are included in producers' costs, this formulation automatically ensures that geographical price differentials will not differ by more than transport costs. For the purposes of EMF9, GTM was modified to allow for resource development and depletion over time.

  8. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level......, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular the choice of functional form and cost-damping. Specifically, we suggest...... a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by examining the distance distribution and elasticities. In the paper we present results where the spline function is compared with more traditional...

  9. Superconducting solenoid model magnet test results

    Energy Technology Data Exchange (ETDEWEB)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I. (Fermilab)

    2006-08-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests.

  10. Superconducting solenoid model magnet test results

    International Nuclear Information System (INIS)

    Carcagno, R.; Dimarco, J.; Feher, S.; Ginsburg, C.M.; Hess, C.; Kashikhin, V.V.; Orris, D.F.; Pischalnikov, Y.; Sylvester, C.; Tartaglia, M.A.; Terechkine, I.; Tompkins, J.C.; Wokas, T. (Fermilab)

    2006-01-01

    Superconducting solenoid magnets suitable for the room temperature front end of the Fermilab High Intensity Neutrino Source (formerly known as Proton Driver), an 8 GeV superconducting H- linac, have been designed and fabricated at Fermilab, and tested in the Fermilab Magnet Test Facility. We report here results of studies on the first model magnets in this program, including the mechanical properties during fabrication and testing in liquid helium at 4.2 K, quench performance, and magnetic field measurements. We also describe new test facility systems and instrumentation that have been developed to accomplish these tests

  11. A decision model for energy resource selection in China

    International Nuclear Information System (INIS)

    Wang Bing; Kocaoglu, Dundar F.; Daim, Tugrul U.; Yang Jiting

    2010-01-01

    This paper evaluates coal, petroleum, natural gas, nuclear energy and renewable energy resources as energy alternatives for China through use of a hierarchical decision model in which expert judgments are quantified. The criteria used for the evaluations are availability, current energy infrastructure, price, safety, environmental impacts and social impacts. The results indicate that although coal is still the major preferred energy alternative, it is followed closely by renewable energy. The sensitivity analysis indicates that the most critical criterion for energy selection is the current energy infrastructure.
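
    One common way to quantify expert judgments in a hierarchical decision model is to derive criterion weights from a reciprocal pairwise-comparison matrix via its principal eigenvector (the AHP convention; the paper's hierarchical decision model is similar in spirit but not necessarily identical). The comparison matrix below is invented for illustration.

```python
# Criterion weights from a pairwise-comparison matrix via the principal
# eigenvector (AHP-style). The matrix entries are invented for the example.
import numpy as np

criteria = ["availability", "infrastructure", "price",
            "safety", "environment", "social"]
# A[i, j] = how much more important criterion i is than j (reciprocal matrix).
A = np.array([
    [1,   1/2, 2,   1,   2,   3],
    [2,   1,   3,   2,   3,   4],
    [1/2, 1/3, 1,   1/2, 1,   2],
    [1,   1/2, 2,   1,   2,   2],
    [1/2, 1/3, 1,   1/2, 1,   2],
    [1/3, 1/4, 1/2, 1/2, 1/2, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                      # normalized priority weights
for name, weight in sorted(zip(criteria, w), key=lambda p: -p[1]):
    print(f"{name:15s} {weight:.3f}")
```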

  12. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A comparison of two methods for prediction of response and rates of inbreeding in selected populations with the results obtained in two selection experiments

    Directory of Open Access Journals (Sweden)

    Verrier Etienne

    2005-05-01

    Full Text Available Selection programmes are mainly concerned with increasing genetic gain. However, short-term progress should not be obtained at the expense of the within-population genetic variability. Different prediction models for the evolution within a small population of the genetic mean of a selected trait, its genetic variance and its inbreeding have been developed but have mainly been validated through Monte Carlo simulation studies. The purpose of this study was to compare theoretical predictions to experimental results. Two deterministic methods were considered, both grounded on a polygenic additive model. Differences between theoretical predictions and experimental results arise from differences between the true and the assumed genetic model, and from mathematical simplifications applied in the prediction methods. Two sets of experimental lines of chickens were used in this study: the Dutch lines, which underwent true truncation mass selection, and the French lines, which underwent mass selection with a restriction on the representation of the different families. This study confirmed, on an experimental basis, that modelling is an efficient approach to make useful predictions of the evolution of selected populations although the basic assumptions considered in the models (polygenic additive model, normality of the distribution, base population at the equilibrium, etc.) are not met in reality. The two deterministic methods compared yielded results that were close to those observed in real data, especially when the selection scheme followed the rules of strict mass selection: for instance, both predictions overestimated the genetic gain in the French experiment, whereas both predictions were close to the observed values in the Dutch experiment.
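
    The deterministic machinery being validated is, at its core, a recursion of the breeder's equation together with an inbreeding recurrence. A stripped-down sketch (ignoring the reduction of genetic variance under selection, which the paper's methods do model) with illustrative parameter values:

```python
# Back-of-the-envelope deterministic prediction: breeder's equation
# R = h2 * S plus the inbreeding recurrence. All numbers are illustrative,
# not taken from the Dutch or French chicken lines.

def predict(generations, h2, S, Ne):
    """Cumulative response and inbreeding; ignores the Bulmer effect
    (variance reduction under selection), unlike the paper's methods."""
    gain, F = 0.0, 0.0
    for _ in range(generations):
        gain += h2 * S                   # breeder's equation R = h2 * S
        F = F + (1.0 - F) / (2.0 * Ne)   # inbreeding recurrence
    return gain, F

gain, F = predict(generations=10, h2=0.3, S=1.2, Ne=40)
print(f"predicted gain = {gain:.2f} (trait units), inbreeding F = {F:.3f}")
```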

  14. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models. Each alternative conceptual model is assigned a weight which represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model prior weight and the marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from low-likelihood to high-likelihood regions, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
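
    The basic NSE recursion is compact enough to sketch in full. The toy problem below uses a 1-D Gaussian likelihood with a Uniform(-5, 5) prior, and naive rejection sampling from the prior as the local sampler; the paper's point is precisely that this local step should be replaced by a stronger sampler such as DREAMzs.

```python
# Minimal nested sampling estimate of the marginal likelihood (evidence)
# for a toy 1-D model. The local sampler is naive prior rejection.
import numpy as np

rng = np.random.default_rng(1)

def loglike(theta):
    """Gaussian likelihood, sigma = 0.5, centred at 0."""
    return -0.5 * (theta / 0.5) ** 2 - 0.5 * np.log(2 * np.pi * 0.25)

n_live, n_iter = 100, 800
live = rng.uniform(-5.0, 5.0, n_live)          # draws from the prior
live_logL = loglike(live)

logZ = -np.inf
for i in range(1, n_iter + 1):
    worst = int(np.argmin(live_logL))
    logL_min = live_logL[worst]
    # Shell of prior volume: X_{i-1} - X_i with X_i = exp(-i / n_live).
    log_w = np.log(np.exp(-(i - 1) / n_live) - np.exp(-i / n_live))
    logZ = np.logaddexp(logZ, logL_min + log_w)
    # Local sampling step: draw from the prior until L > L_min (naive).
    while True:
        cand = rng.uniform(-5.0, 5.0)
        if loglike(cand) > logL_min:
            live[worst], live_logL[worst] = cand, loglike(cand)
            break

# Remaining live-point contribution is neglected for brevity.
# Analytic evidence here is ~0.1 (prior width 10, likelihood mass ~1).
print(f"estimated log Z = {logZ:.2f}, analytic ~ {np.log(0.1):.2f}")
```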

  15. Uncertainty associated with selected environmental transport models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-11-01

    A description is given of the capabilities of several models to predict accurately either pollutant concentrations in environmental media or radiological dose to human organs. The models are discussed in three sections: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations. This procedure is infeasible for food chain models and, therefore, the uncertainty embodied in the models' input parameters, rather than the model output, is estimated. Aquatic transport models are divided into one-dimensional, longitudinal-vertical, and longitudinal-horizontal models. Several conclusions were made about the ability of the Gaussian plume atmospheric dispersion model to predict accurately downwind air concentrations from releases under several sets of conditions. It is concluded that no validation study has been conducted to test the predictions of either aquatic or terrestrial food chain models. Using the aquatic pathway from water to fish to an adult for ¹³⁷Cs as an example, a 95% one-tailed confidence limit interval for the predicted exposure is calculated by examining the distributions of the input parameters. Such an interval is found to be 16 times the value of the median exposure. A similar one-tailed limit for the air-grass-cow-milk-thyroid pathway for ¹³¹I and infants was 5.6 times the median dose. Of the three model types discussed in this report, the aquatic transport models appear to do the best job of predicting observed concentrations. However, this conclusion is based on many fewer aquatic validation data than were available for atmospheric model validation

  16. On the selection of ordinary differential equation models with application to predator-prey dynamical models.

    Science.gov (United States)

    Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J

    2015-03-01

    We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of correctly selected true ODE models, the efficiency of the parameter estimation compared to simply using the full and true models, and the coverage probabilities of the estimated confidence intervals for ODE parameters, all of which show satisfactory performance. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among some well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.
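
    The adaptive-Lasso step at the heart of the LSA method can be sketched on a linear toy problem: fit the full model, then apply the Lasso with penalty weights 1/|β̂_j| (implemented by rescaling the design columns). The paper applies the same logic to ODE models through a least squares approximation of the full fit.

```python
# Rough sketch of the adaptive Lasso on a linear toy problem; zero
# coefficients (absent terms) are selected out. The paper applies this to
# ODE models via a least squares approximation of the full-model fit.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0, 0.8])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

beta_full = LinearRegression(fit_intercept=False).fit(X, y).coef_
w = np.abs(beta_full)                 # adaptive weights (gamma = 1)
lasso = LassoCV(fit_intercept=False, cv=5).fit(X * w, y)
beta_alasso = lasso.coef_ * w         # undo the column rescaling

print(np.round(beta_alasso, 2))       # zeros recovered for absent terms
```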

  17. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....
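
    A minimal order-selection loop that scores candidates by AIC and BIC makes the parsimony point concrete; on data generated from a mixed ARMA(1,1) process, the mixed model typically beats longer pure-AR fits on both criteria.

```python
# Order-selection loop scoring candidate ARMA models by AIC and BIC.
# Data are simulated from an ARMA(1,1) process for illustration.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_generate_sample

np.random.seed(0)
y = arma_generate_sample(ar=[1, -0.7], ma=[1, 0.4], nsample=500)

for p, q in [(1, 0), (3, 0), (5, 0), (1, 1), (2, 1)]:
    res = ARIMA(y, order=(p, 0, q)).fit()
    print(f"ARMA({p},{q}): AIC={res.aic:8.1f}  BIC={res.bic:8.1f}")
```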

  18. Establishment of selected acute pulmonary thromboembolism model in experimental sheep

    International Nuclear Information System (INIS)

    Fan Jihai; Gu Xiulian; Chao Shengwu; Zhang Peng; Fan Ruilin; Wang Li'na; Wang Lulu; Wang Ling; Li Bo; Chen Taotao

    2010-01-01

    Objective: To establish a selected acute pulmonary thromboembolism model in experimental sheep suitable for animal experiments. Methods: Using Seldinger's technique, a catheter sheath was placed in both the femoral vein and femoral artery in ten sheep. Under C-arm DSA guidance the catheter was inserted through the sheath into the pulmonary artery. Via the catheter an appropriate amount of autologous blood clots was injected into the selected pulmonary arteries, thus establishing the selected acute pulmonary thromboembolism model. Pulmonary angiography was performed to check the results. The pulmonary arterial pressure, femoral artery pressure, heart rate and arterial partial pressure of oxygen (PaO₂) were determined both before and after the procedure, the two sets of measurements were compared, and the quality of the sheep model was evaluated. Results: The baseline pulmonary arterial pressure was (27.30 ± 9.58) mmHg, femoral artery pressure was (126.4 ± 13.72) mmHg, heart rate was (103 ± 15) bpm and PaO₂ was (87.7 ± 12.04) mmHg. Sixty minutes after the injection of (30 ± 5) ml of thrombotic agglomerates, the pulmonary arterial pressure rose to (52 ± 49) mmHg and the femoral artery pressure dropped to (100 ± 21) mmHg. The heart rate went up to (150 ± 26) bpm and the PaO₂ fell to (25.3 ± 11.2) mmHg. After the procedure the above parameters were significantly different from those measured before the procedure in all ten animals (P < 0.01). Pulmonary arteriography clearly demonstrated that the selected pulmonary arteries were successfully embolized. Conclusion: The anatomy of the sheep's femoral veins, vena cava system, pulmonary artery and right heart system is suitable for establishing the catheter passage; for this reason, a selected acute pulmonary thromboembolism model can easily be created in experimental sheep. The technique is feasible and the model

  19. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.

  20. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  1. Modelling Extortion Racket Systems: Preliminary Results

    Science.gov (United States)

    Nardin, Luis G.; Andrighetto, Giulia; Székely, Áron; Conte, Rosaria

    Mafias are highly powerful and deeply entrenched organised criminal groups that cause both economic and social damage. Overcoming, or at least limiting, their harmful effects is a societally beneficial objective, which makes understanding their dynamics an objective of both scientific and political interest. We propose an agent-based simulation model aimed at understanding how independent and combined effects of legal and social norm-based processes help to counter mafias. Our results show that legal processes are effective in directly countering mafias by reducing their activities and changing the behaviour of the rest of the population, yet they are unable to change people's mind-set, which renders the change fragile. When combined with social norm-based processes, however, people's mind-set shifts towards a culture of legality, rendering the observed behaviour resilient to change.

  2. New results in the Dual Parton Model

    International Nuclear Information System (INIS)

    Van, J.T.T.; Capella, A.

    1984-01-01

    In this paper, the similarity between the x distributions for particle production observed in e+e- collisions and the fragmentation functions measured in deep inelastic scattering is presented. Based on this observation, the authors develop a complete approach to multiparticle production which incorporates the most important features and concepts learned about high energy collisions: 1. Topological expansion: the dominant diagram at high energy corresponds to the simplest topology. 2. Unitarity: diagrams of various topologies contribute to the cross sections in a way that preserves unitarity. 3. Regge behaviour and duality. 4. Partonic structure of hadrons. These general theoretical ideas result from many joint experimental and theoretical efforts on the study of soft hadron physics. The dual parton model is able to explain all the experimental features from FNAL to SPS collider energies. It has all the properties of an S-matrix theory and provides a unified description of hadron-hadron, hadron-nucleus and nucleus-nucleus collisions.

  3. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that optimize by compromising more than one predefined objectives simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.

  4. Multilevel selection in a resource-based model

    Science.gov (United States)

    Ferreira, Fernando Fagundes; Campos, Paulo R. A.

    2013-07-01

    In the present work we investigate the emergence of cooperation in a multilevel selection model that assumes limiting resources. Following the work by R. J. Requejo and J. Camacho [Phys. Rev. Lett. 108, 038701 (2012)], the interaction among individuals is initially ruled by a prisoner's dilemma (PD) game. The payoff matrix may change, influenced by the resource availability, and hence may also evolve to a non-PD game. Furthermore, one assumes that the population is divided into groups, whose local dynamics is driven by the payoff matrix, whereas an intergroup competition results from the nonuniformity of the growth rate of groups. We study the probability that a single cooperator can invade and establish in a population initially dominated by defectors. Cooperation is strongly favored when group sizes are small. We observe the existence of a critical group size beyond which cooperation becomes counterselected. Although the critical size depends on the parameters of the model, it is seen that a saturation value for the critical group size is achieved. The results conform to the thought that the evolutionary history of life repeatedly involved transitions from smaller selective units to larger selective units.

  5. Selection of Hydrological Model for Waterborne Release

    International Nuclear Information System (INIS)

    Blanchard, A.

    1999-01-01

    This evaluation will aid in determining the potential impacts of liquid releases to downstream populations on the Savannah River. The purpose of this report is to evaluate the two available models and determine the appropriate model for use in following waterborne release analyses. Additionally, this report will document the Design Basis and Beyond Design Basis accidents to be used in the future study

  6. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.
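
    Details of the ALEGRA/Dakota workflow are not reproduced here, but a common cheap stand-in for Bayesian model selection can be sketched: approximate each calibrated model's marginal likelihood with its BIC and convert to posterior model probabilities. The fit summaries below are hypothetical.

```python
# Schematic Bayesian model comparison via the BIC approximation to the
# marginal likelihood; the fit summaries are hypothetical, and tools such
# as Dakota support fuller marginal-likelihood-based selection.
import numpy as np

def bic(log_likelihood, n_params, n_data):
    return -2.0 * log_likelihood + n_params * np.log(n_data)

# Hypothetical fit summaries for two competing yield models.
models = {"yield_model_A": bic(-1250.3, 4, 300),
          "yield_model_B": bic(-1243.9, 7, 300)}

# exp(-BIC/2) approximates the marginal likelihood up to a constant.
vals = np.array(list(models.values()))
probs = np.exp(-0.5 * (vals - vals.min()))
probs /= probs.sum()
for (name, _), pr in zip(models.items(), probs):
    print(f"{name}: posterior probability {pr:.3f}")
```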

  7. Finiteness results for Abelian tree models

    NARCIS (Netherlands)

    Draisma, J.; Eggermont, R.H.

    2015-01-01

    Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the

  8. Finiteness results for Abelian tree models

    NARCIS (Netherlands)

    Draisma, J.; Eggermont, R.H.

    2012-01-01

    Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the

  9. Finiteness results for Abelian tree models

    NARCIS (Netherlands)

    Draisma, J.; Eggermont, R.H.

    2015-01-01

    Equivariant tree models are statistical models used in the reconstruction of phylogenetic trees from genetic data. Here equivariant refers to a symmetry group imposed on the root distribution and on the transition matrices in the model. We prove that if that symmetry group is Abelian, then the

  10. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  11. Generation IV and transmutation materials (GETMAT) project: First assessment of selected results

    International Nuclear Information System (INIS)

    Fazio, Concetta; Serrano, Marta; Gessi, Alessandro; Henry, Jean; Malerba, Lorenzo

    2015-01-01

    The Generation IV and Transmutation Materials (GETMAT) project has been initiated within the 7th EURATOM Framework Programme with the objective of supporting the development of innovative reactor designs. Emphasis has been put on the investigation, in both the theoretical and experimental domains, of selected material properties that are cross-cutting among the various Generation IV and transmutation systems. The selection of the properties to be investigated was performed by identifying the relevant conditions of key components such as cores and primary systems. Moreover, taking into account the envisaged conditions of these components, it turned out that innovative materials might be a better choice than conventional nuclear-grade steels. Therefore, ODS alloys and 9-12Cr ferritic/martensitic (F/M) steels have been selected as reference materials for the GETMAT project. The R and D activities have focused on basic characterisation of ODS alloys produced ad hoc for the project and on an extensive PIE programme for F/M steels irradiated in previous programmes. Finally, first-principles modelling studies to explain irradiation hardening and embrittlement of F/M alloys were an additional important task. The objective of this manuscript is to make a first assessment of the results obtained within GETMAT. (authors)

  12. A model for selecting leadership styles.

    Science.gov (United States)

    Perkins, V J

    1992-01-01

    Occupational therapists lead a variety of groups during their professional activities. Such groups include therapy groups, treatment teams and management meetings. Therefore it is important for each therapist to understand theories of leadership and be able to select the most effective style for him or herself in specific situations. This paper presents a review of leadership theory and research as well as therapeutic groups. It then integrates these areas to assist students and new therapists in identifying a style that is effective for a particular group.

  13. Immersive visualization of dynamic CFD model results

    International Nuclear Information System (INIS)

    Comparato, J.R.; Ringel, K.L.; Heath, D.J.

    2004-01-01

    With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)

  14. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  15. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model can be used not only to select the best project, but also to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion, based on the influential network relation map.

  16. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so that an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  17. Overview of the New England wind integration. Study and selected results

    Energy Technology Data Exchange (ETDEWEB)

    Norden, John R.; Henson, William L.W. [ISO New England, Holyoke, MA (United States)

    2010-07-01

    ISO New England commissioned a comprehensive wind integration study to be completed in the early fall of 2010: the New England Wind Integration Study (NEWIS). The NEWIS assesses the effects of scenarios that encompass a range of wind-power penetrations in New England using statistical and simulation analysis, including the development of a mesoscale wind-to-power model for the New England and Maritime wind resource areas. It also determines the impacts of integrating increasing amounts of wind generation resources in New England, as well as the measures that may be available to the ISO for responding to any challenges while enabling the integration of wind power. This paper provides an overview of the study and then focuses on selected near-final results, particularly with regard to the varying capacity factor, capacity value and siting that were determined as part of the study. The full results of the NEWIS will be released in the fall of 2010. (orig.)

  18. Continuum model for chiral induced spin selectivity in helical molecules

    Energy Technology Data Exchange (ETDEWEB)

    Medina, Ernesto [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France); Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); González-Arraga, Luis A. [IMDEA Nanoscience, Cantoblanco, 28049 Madrid (Spain); Finkelstein-Shapiro, Daniel; Mujica, Vladimiro [Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); Berche, Bertrand [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well oriented p_z type orbitals on base molecules forming a staircase of definite chirality. In a tight binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z-π_z coupling via interbase p_{x,y}-p_z hopping, introducing spin coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  19. Engineering Glass Passivation Layers - Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.

    2011-08-08

    The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well-designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate, sulfate, and phosphate rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solutions vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan

  20. CIEMAT model results for Esthwaite Water

    International Nuclear Information System (INIS)

    Aguero, A.; Garcia-Olivares, A.

    2000-01-01

    This study used the transfer model PRYMA-LO, developed by CIEMAT-IMA, Madrid, Spain, to simulate the transfer of Cs-137 in watershed scenarios. The main processes considered by the model include: transfer of the fallout to the ground, incorporation of the fallout radioisotopes into the water flow, and their removal from the system. The model was tested against observation data obtained in water and sediments of Esthwaite Water, Lake District, UK. This comparison made it possible to calibrate the parameters of the model to the specific scenario

  1. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  2. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    log K_ow. These findings were validated with experimental results and by a comparison to the properties of antimalarial drugs in clinical use. For ten active compounds, nine were predicted to accumulate to a greater extent in lysosomes than in other organelles, six of these were in the optimum range...... predicted by the model and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for a selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation...

  3. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  4. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
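
    The tuning-parameter selection discussed in these two abstracts can be illustrated with an off-the-shelf kernel ridge implementation, choosing the penalty and Gaussian kernel width from a small grid by cross-validation:

```python
# Kernel ridge regression with a Gaussian (RBF) kernel; the penalty and
# kernel width are picked from a small grid by cross-validation, as the
# abstracts recommend. Data are synthetic.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X).ravel() + rng.normal(scale=0.1, size=200)

grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],   # ridge penalty
        "gamma": [0.1, 0.5, 1.0, 2.0]}      # RBF kernel width
search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```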

  5. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of: (1) the optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, and (2) the optimal pressure load model to be

  6. Selection of Hydrological Model for Waterborne Release

    International Nuclear Information System (INIS)

    Blanchard, A.

    1999-01-01

    Following a request from the States of South Carolina and Georgia, downstream radiological consequences from postulated accidental aqueous releases at the three Savannah River Site nonreactor nuclear facilities will be examined. This evaluation will aid in determining the potential impacts of liquid releases to downstream populations on the Savannah River. The purpose of this report is to evaluate the two available models and determine the appropriate model for use in following waterborne release analyses. Additionally, this report will document the accidents to be used in the future study

  7. Adapting AIC to conditional model selection

    NARCIS (Netherlands)

    T. van Ommen (Thijs)

    2012-01-01

    In statistical settings such as regression and time series, we can condition on observed information when predicting the data of interest. For example, a regression model explains the dependent variables $y_1, \ldots, y_n$ in terms of the independent variables $x_1, \ldots, x_n$.

  8. A Permutation Approach for Selecting the Penalty Parameter in Penalized Model Selection

    Science.gov (United States)

    Sabourin, Jeremy A; Valdar, William; Nobel, Andrew B

    2015-01-01

    We describe a simple, computationally efficient, permutation-based procedure for selecting the penalty parameter in LASSO penalized regression. The procedure, permutation selection, is intended for applications where variable selection is the primary focus, and can be applied in a variety of structural settings, including that of generalized linear models. We briefly discuss connections between permutation selection and existing theory for the LASSO. In addition, we present a simulation study and an analysis of real biomedical data sets in which permutation selection is compared with selection based on the following: cross-validation (CV), the Bayesian information criterion (BIC), Scaled Sparse Linear Regression, and a selection method based on recently developed testing procedures for the LASSO. PMID:26243050
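
    A simplified variant of the permutation idea can be sketched as follows: permute the response, record the smallest penalty that zeroes out every LASSO coefficient under each permutation, and take a high quantile of those values. The Python sketch below is a loose illustration on synthetic data, not the authors' exact procedure.

```python
# Simplified sketch of permutation-based penalty selection for the LASSO
# (not the authors' exact procedure): permute the response many times and
# take a high quantile of the smallest penalty that zeroes out all
# coefficients under each permutation.
import numpy as np
from sklearn.linear_model import Lasso

def permutation_penalty(X, y, n_perm=100, quantile=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    Xc = X - X.mean(axis=0)              # work with centered data
    alpha_max = []
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        y_perm = y_perm - y_perm.mean()
        # smallest alpha at which the LASSO solution is exactly zero
        alpha_max.append(np.max(np.abs(Xc.T @ y_perm)) / n)
    return np.quantile(alpha_max, quantile)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(100)
alpha = permutation_penalty(X, y)
model = Lasso(alpha=alpha).fit(X, y)
print(alpha, np.flatnonzero(model.coef_))    # selected variables
```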

  9. Strong morphological and crystallographic texture and resulting yield strength anisotropy in selective laser melted tantalum

    International Nuclear Information System (INIS)

    Thijs, Lore; Montero Sistiaga, Maria Luz; Wauthle, Ruben; Xie, Qingge; Kruth, Jean-Pierre; Van Humbeeck, Jan

    2013-01-01

    Selective laser melting (SLM) makes use of a high energy density laser beam to melt successive layers of metallic powders in order to create functional parts. The energy density of the laser is high enough to melt refractory metals like Ta and produce mechanically sound parts. Furthermore, the localized heat input causes a strong directional cooling and solidification. Epitaxial growth due to partial remelting of the previous layer, competitive growth mechanism and a specific global direction of heat flow during SLM of Ta result in the formation of long columnar grains with a 〈1 1 1〉 preferential crystal orientation along the building direction. The microstructure was visualized using both optical and scanning electron microscopy equipped with electron backscattered diffraction and the global crystallographic texture was measured using X-ray diffraction. The thermal profile around the melt pool was modeled using a pragmatic model for SLM. Furthermore, rotation of the scanning direction between different layers was seen to promote the competitive growth. As a result, the texture strength increased to as large as 4.7 for rotating the scanning direction 90° every layer. By comparison of the yield strength measured by compression tests in different orientations and the averaged Taylor factor calculated using the viscoplastic self-consistent model, it was found that both the morphological and crystallographic texture observed in SLM Ta contribute to yield strength anisotropy

  10. Flying Training Capacity Model: Initial Results

    National Research Council Canada - National Science Library

    Lynch, Susan

    2005-01-01

    OBJECTIVE: (1) Determine the flying training capacity for six bases: Sheppard AFB, Randolph AFB, Moody AFB, Columbus AFB, Laughlin AFB, and Vance AFB. (2) Develop a versatile flying training capacity simulation model for AETC...

  11. Acute leukemia classification by ensemble particle swarm model selection.

    Science.gov (United States)

    Escalante, Hugo Jair; Montes-y-Gómez, Manuel; González, Jesús A; Gómez-Gil, Pilar; Altamirano, Leopoldo; Reyes, Carlos A; Reta, Carolina; Rosales, Alejandro

    2012-07-01

    Acute leukemia is a malignant disease that affects a large proportion of the world population. Different types and subtypes of acute leukemia require different treatments. In order to assign the correct treatment, a physician must identify the leukemia type or subtype. Advanced and precise methods are available for identifying leukemia types, but they are very expensive and not available in most hospitals in developing countries. Thus, alternative methods have been proposed. An option explored in this paper is based on the morphological properties of bone marrow images, where features are extracted from medical images and standard machine learning techniques are used to build leukemia type classifiers. This paper studies the use of ensemble particle swarm model selection (EPSMS), which is an automated tool for the selection of classification models, in the context of acute leukemia classification. EPSMS is the application of particle swarm optimization to the exploration of the search space of ensembles that can be formed by heterogeneous classification models in a machine learning toolbox. EPSMS does not require prior domain knowledge and it is able to select highly accurate classification models without user intervention. Furthermore, specific models can be used for different classification tasks. We report experimental results for acute leukemia classification with real data and show that EPSMS outperformed the best results obtained using manually designed classifiers with the same data. The highest performance using EPSMS was 97.68% for two-type classification problems and 94.21% for problems with more than two types. To the best of our knowledge, these are the best results reported for this data set. Compared with previous studies, these improvements were consistent among different type/subtype classification tasks, different features extracted from images, and different feature extraction regions. The performance improvements were statistically significant.

  12. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    Science.gov (United States)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
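
    For reference, the agreement statistics quoted above are straightforward to compute. The sketch below evaluates NSE, RMSE, and R² for a pair of hypothetical automated and manually derived ET series; the numbers are invented for illustration.

```python
# Sketch of the agreement statistics quoted above (NSE, RMSE, R-squared),
# computed for hypothetical automated vs. manually derived ET series.
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    return np.sqrt(np.mean((obs - sim) ** 2))

manual = np.array([3.1, 4.0, 2.5, 5.2, 4.4])   # ET, mm/day (hypothetical)
auto = np.array([3.0, 4.2, 2.6, 5.0, 4.5])
r2 = np.corrcoef(manual, auto)[0, 1] ** 2
print(nse(manual, auto), rmse(manual, auto), r2)
```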

  13. Development of Solar Drying Model for Selected Cambodian Fish Species

    Science.gov (United States)

    Hubackova, Anna; Kucerova, Iva; Chrun, Rithy; Chaloupkova, Petra; Banout, Jan

    2014-01-01

    Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. An average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R²), chi-square (χ²) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of fresh water fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing. PMID:25250381
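
    As an illustration of the model-fitting step, the Logarithmic thin-layer drying model MR(t) = a·exp(−kt) + c can be fitted by nonlinear least squares and scored with the same statistics the study uses (R², χ², RMSE). The moisture-ratio data below are hypothetical.

```python
# Illustrative fit of the Logarithmic thin-layer drying model
# MR(t) = a*exp(-k*t) + c to a hypothetical moisture-ratio series,
# scored with the statistics used in the paper (R-squared, chi-square, RMSE).
import numpy as np
from scipy.optimize import curve_fit

def logarithmic(t, a, k, c):
    return a * np.exp(-k * t) + c

t = np.arange(0, 10, 1.0)                      # drying time, h (hypothetical)
mr = np.array([1.0, 0.78, 0.61, 0.49, 0.40,    # moisture ratio (hypothetical)
               0.33, 0.28, 0.25, 0.22, 0.21])

params, _ = curve_fit(logarithmic, t, mr, p0=(1.0, 0.3, 0.1))
pred = logarithmic(t, *params)
resid = mr - pred
rmse = np.sqrt(np.mean(resid ** 2))
chi2 = np.sum(resid ** 2) / (len(t) - len(params))   # reduced chi-square
r2 = 1 - np.sum(resid ** 2) / np.sum((mr - mr.mean()) ** 2)
print(params, r2, chi2, rmse)
```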

  14. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Full Text Available Solar drying was investigated as one of the prospective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. An average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R²), chi-square (χ²) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of fresh water fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.

  15. Modeling HIV-1 drug resistance as episodic directional selection.

    Science.gov (United States)

    Murrell, Ben; de Oliveira, Tulio; Seebregts, Chris; Kosakovsky Pond, Sergei L; Scheffler, Konrad

    2012-01-01

    The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection are able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  16. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither one test for episodic diversifying selection nor another for constant directional selection are able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  17. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  18. Two-step variable selection in quantile regression models

    Directory of Open Access Journals (Sweden)

    FAN Yali

    2015-06-01

    Full Text Available We propose a two-step variable selection procedure for high dimensional quantile regressions, in which the dimension of the covariates, pn, is much larger than the sample size n. In the first step, we apply an ℓ1 penalty, and we demonstrate that the first-step penalized estimator with the LASSO penalty can reduce the model from an ultra-high dimensional one to a model whose size has the same order as that of the true model, and the selected model can cover the true model. The second step excludes the remaining irrelevant covariates by applying the adaptive LASSO penalty to the reduced model obtained from the first step. Under some regularity conditions, we show that our procedure enjoys model selection consistency. We conduct a simulation study and a real data analysis to evaluate the finite sample performance of the proposed approach.
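
    A rough sketch of the two-step idea (not the authors' exact estimator) is shown below: the first step screens covariates with an L1-penalized median regression, and the second step refits the reduced model with adaptive weights. It uses scikit-learn's QuantileRegressor on synthetic data; the penalty levels are placeholders.

```python
# Rough sketch of a two-step penalized quantile regression: step 1 screens
# covariates with an L1 penalty; step 2 refits the reduced model with
# adaptive weights (implemented by rescaling the kept columns).
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
n, p = 200, 100
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(n)

# Step 1: LASSO-penalized median regression reduces the model size.
step1 = QuantileRegressor(quantile=0.5, alpha=0.05).fit(X, y)
keep = np.flatnonzero(np.abs(step1.coef_) > 1e-8)

# Step 2: adaptive LASSO on the reduced model; scaling each kept column by
# |initial coefficient| makes the uniform penalty act like adaptive weights.
w = np.abs(step1.coef_[keep])
step2 = QuantileRegressor(quantile=0.5, alpha=0.05).fit(X[:, keep] * w, y)
coef = step2.coef_ * w
print(keep[np.abs(coef) > 1e-8], coef[np.abs(coef) > 1e-8])
```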

  19. Graphical interpretation of numerical model results

    International Nuclear Information System (INIS)

    Drewes, D.R.

    1979-01-01

    Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements

  20. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    As the correct selection of partners in the supply chains of agricultural enterprises becomes more and more important, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chains. Secondly, a heuristic met...

  1. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  2. Ignalina NPP Safety Analysis: Models and Results

    International Nuclear Information System (INIS)

    Uspuras, E.

    1999-01-01

    Research directions, linked to safety assessment of the Ignalina NPP, of the scientific safety analysis group are presented: Thermal-hydraulic analysis of accidents and operational transients; Thermal-hydraulic assessment of Ignalina NPP Accident Localization System and other compartments; Structural analysis of plant components, piping and other parts of Main Circulation Circuit; Assessment of RBMK-1500 reactor core and other. Models and main works carried out last year are described. (author)

  3. Impact of selected troposphere models on Precise Point Positioning convergence

    Science.gov (United States)

    Kalita, Jakub; Rzepecka, Zofia

    2016-04-01

    The Precise Point Positioning (PPP) absolute method is currently intensively investigated in order to reach fast convergence time. Among various sources that influence the convergence of the PPP, the tropospheric delay is one of the most important. Numerous models of tropospheric delay are developed and applied to PPP processing. However, with rare exceptions, the quality of those models does not allow fixing the zenith path delay tropospheric parameter, leaving the difference between the nominal and final values to the estimation process. Here we present a comparison of several PPP result sets, each of which is based on a different troposphere model. The respective nominal values are adopted from the models VMF1, GPT2w, MOPS and ZERO-WET. The PPP solution admitted as reference is based on the final troposphere product from the International GNSS Service (IGS). The VMF1 mapping function was used for all processing variants in order to provide the capability to compare the impact of the applied nominal values. The worst case initiates the zenith wet delay with zero value (ZERO-WET). The impact from all possible models for tropospheric nominal values should fit inside the IGS and ZERO-WET border variants. The analysis is based on data from seven IGS stations located in the mid-latitude European region, from the year 2014. For the purpose of this study, several days with the most active troposphere were selected for each of the stations. All the PPP solutions were determined using the gLAB open-source software, with the Kalman filter implemented independently by the authors of this work. The processing was performed on 1-hour slices of observation data. In addition to the analysis of the output processing files, the presented study contains a detailed analysis of the tropospheric conditions for the selected data. The overall results show that for the height component the VMF1 model outperforms GPT2w and MOPS by 35-40% and the ZERO-WET variant by 150%. In most of the cases all solutions converge to the same values during first

  4. Validation of elk resource selection models with spatially independent data

    Science.gov (United States)

    Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson

    2011-01-01

    Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...

  5. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  6. Augmented Self-Modeling as an Intervention for Selective Mutism

    Science.gov (United States)

    Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.

    2012-01-01

    Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…

  7. Response to selection in finite locus models with nonadditive effects

    NARCIS (Netherlands)

    Esfandyari, Hadi; Henryon, Mark; Berg, Peer; Thomasen, Jørn Rind; Bijma, Piter; Sørensen, Anders Christian

    2017-01-01

    Under the finite-locus model in the absence of mutation, the additive genetic variation is expected to decrease when directional selection is acting on a population, according to quantitative-genetic theory. However, some theoretical studies of selection suggest that the level of additive

  8. Target Selection Models with Preference Variation Between Offenders

    NARCIS (Netherlands)

    Townsley, Michael; Birks, Daniel; Ruiter, Stijn; Bernasco, Wim; White, Gentry

    2016-01-01

    Objectives: This study explores preference variation in location choice strategies of residential burglars. Applying a model of offender target selection that is grounded in assertions of the routine activity approach, rational choice perspective, crime pattern and social disorganization theories,

  9. Molecular modelling of a chemodosimeter for the selective detection ...

    Indian Academy of Sciences (India)

    Wintec

    Molecular modelling of a chemodosimeter for the selective detection of As(III) ion in water. ... High levels of arsenic cause severe skin diseases including skin cancer ... Special Attention to Groundwater in SE Asia (eds) D. Chakraborti, A ...

  10. Model catalysis by size-selected cluster deposition

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Scott [Univ. of Utah, Salt Lake City, UT (United States)]

    2015-11-20

    This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.

  11. Ensemble Prediction Model with Expert Selection for Electricity Price Forecasting

    Directory of Open Access Journals (Sweden)

    Bijay Neupane

    2017-01-01

    Full Text Available Forecasting of electricity prices is important in deregulated electricity markets for all of the stakeholders: energy wholesalers, traders, retailers and consumers. Electricity price forecasting is an inherently difficult problem due to its special characteristics of dynamicity and non-stationarity. In this paper, we present a robust price forecasting mechanism that shows resilience towards the aggregate demand response effect and provides highly accurate forecasted electricity prices to the stakeholders in a dynamic environment. We employ an ensemble prediction model in which a group of different algorithms participates in forecasting the price 1 h ahead for each hour of a day. We propose two different strategies, namely, the Fixed Weight Method (FWM) and the Varying Weight Method (VWM), for selecting each hour's expert algorithm from the set of participating algorithms. In addition, we utilize a carefully engineered set of features selected from a pool of features extracted from the past electricity price data, weather data and calendar data. The proposed ensemble model offers better results than the Autoregressive Integrated Moving Average (ARIMA) method, the Pattern Sequence-based Forecasting (PSF) method and our previous work using Artificial Neural Networks (ANN) alone on the datasets for the New York, Australian and Spanish electricity markets.
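
    The per-hour expert selection can be illustrated with a minimal sketch in the spirit of the fixed-weight strategy: for every hour of the day, pick the forecaster with the lowest historical error for that hour and use it for future forecasts. The data and error levels below are hypothetical, not the paper's FWM/VWM implementation.

```python
# Minimal sketch of per-hour "expert selection": for each hour of the day,
# choose the forecaster with the lowest historical error for that hour.
import numpy as np

rng = np.random.default_rng(0)
days, hours = 60, 24
actual = rng.uniform(20, 80, size=(days, hours))        # prices (hypothetical)

# forecasts from three hypothetical algorithms with different noise levels
noise_sd = np.array([2.0, 4.0, 3.0])[:, None, None]
forecasts = actual + rng.normal(0.0, 1.0, size=(3, days, hours)) * noise_sd

# choose one expert per hour by mean absolute error on a training window
train, test = slice(0, 45), slice(45, days)
mae = np.mean(np.abs(forecasts[:, train, :] - actual[train]), axis=1)
expert = np.argmin(mae, axis=0)                         # best algorithm per hour

# the ensemble forecast for the held-out days uses each hour's expert
ensemble = np.stack([forecasts[expert[h], test, h] for h in range(hours)], axis=1)
print(expert)
print("ensemble MAE:", np.mean(np.abs(ensemble - actual[test])))
```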

  12. A Network Analysis Model for Selecting Sustainable Technology

    Directory of Open Access Journals (Sweden)

    Sangsung Park

    2015-09-01

    Full Text Available Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan their R&D (research and development) efficiently. In this study, we propose a network model that can be used to select sustainable technology from patent documents, based on centrality and degree measures from social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.

  13. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape. Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with one of the generalized Akaike information criteria was the most efficient model selection procedure (i.e., required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.

  14. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  15. Microplasticity of MMC. Experimental results and modelling

    International Nuclear Information System (INIS)

    Maire, E.; Lormand, G.; Gobin, P.F.; Fougeres, R.

    1993-01-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: inclusion, unaffected matrix, and matrix surrounding the inclusion having a gradient in the density of the thermally induced dislocations. (orig.)

  16. Microplasticity of MMC. Experimental results and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Maire, E. (Groupe d'Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d'Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d'Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d'Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))

    1993-11-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: inclusion, unaffected matrix, and matrix surrounding the inclusion having a gradient in the density of the thermally induced dislocations. (orig.).

  17. Radial Domany-Kinzel models with mutation and selection

    Science.gov (United States)

    Lavrentovich, Maxim O.; Korolev, Kirill S.; Nelson, David R.

    2013-01-01

    We study the effect of spatial structure, genetic drift, mutation, and selective pressure on the evolutionary dynamics in a simplified model of asexual organisms colonizing a new territory. Under an appropriate coarse-graining, the evolutionary dynamics is related to the directed percolation processes that arise in voter models, the Domany-Kinzel (DK) model, contact process, and so on. We explore the differences between linear (flat front) expansions and the much less familiar radial (curved front) range expansions. For the radial expansion, we develop a generalized, off-lattice DK model that minimizes otherwise persistent lattice artifacts. With both simulations and analytical techniques, we study the survival probability of advantageous mutants, the spatial correlations between domains of neutral strains, and the dynamics of populations with deleterious mutations. “Inflation” at the frontier leads to striking differences between radial and linear expansions. For a colony with initial radius R0 expanding at velocity v, significant genetic demixing, caused by local genetic drift, occurs only up to a finite time t*=R0/v, after which portions of the colony become causally disconnected due to the inflating perimeter of the expanding front. As a result, the effect of a selective advantage is amplified relative to genetic drift, increasing the survival probability of advantageous mutants. Inflation also modifies the underlying directed percolation transition, introducing novel scaling functions and modifications similar to a finite-size effect. Finally, we consider radial range expansions with deflating perimeters, as might arise from colonization initiated along the shores of an island.

  18. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts in a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder sets a burden on the calculations carried out by the genetic algorithm.

  19. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available The sustainable supplier selection would be the vital part in the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the additional calculations and pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
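
    The final TOPSIS ranking step of such a hybrid model reduces to a short computation. The sketch below assumes benefit-type criteria and uses placeholder weights standing in for the ANP-derived ones.

```python
# Bare-bones TOPSIS ranking step (weights are placeholders standing in for
# ANP-derived weights; all criteria are assumed benefit-type).
import numpy as np

scores = np.array([[7.0, 9.0, 9.0],    # supplier A vs. 3 criteria (hypothetical)
                   [8.0, 7.0, 8.0],    # supplier B
                   [9.0, 6.0, 8.0]])   # supplier C
weights = np.array([0.5, 0.3, 0.2])    # stand-in for ANP weights

norm = scores / np.linalg.norm(scores, axis=0)     # vector normalization
v = norm * weights                                 # weighted normalized matrix
ideal, anti = v.max(axis=0), v.min(axis=0)         # ideal / anti-ideal points
d_pos = np.linalg.norm(v - ideal, axis=1)
d_neg = np.linalg.norm(v - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)                # rank by relative closeness
print(np.argsort(-closeness))                      # best supplier first
```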

  20. A model selection support system for numerical simulations of nuclear thermal-hydraulics

    International Nuclear Information System (INIS)

    Gofuku, Akio; Shimizu, Kenji; Sugano, Keiji; Yoshikawa, Hidekazu; Wakabayashi, Jiro

    1990-01-01

    In order to efficiently execute a dynamic simulation of a large-scale engineering system such as a nuclear power plant, it is necessary to develop an intelligent simulation support system for all phases of the simulation. This study is concerned with intelligent support for the program development phase and develops an adequate model selection support method, applying AI (Artificial Intelligence) techniques, so that a simulation can be executed consistently with its purpose and conditions. A prototype expert system to support model selection for numerical simulations of nuclear thermal-hydraulics in the case of a cold leg small break loss-of-coolant accident of a PWR plant is now under development on a personal computer. The steps to support the selection of both the fluid model and the constitutive equations for the drift flux model have been developed. Several cases of model selection were carried out and reasonable model selection results were obtained. (author)

  1. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in nowadays clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
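
    The tutorial's code is in R; the following is an independent Python sketch of the same idea, using the AIC of a logistic model as the fitness of a binary inclusion mask, with one-point crossover and bit-flip mutation. Population sizes and rates are arbitrary choices for illustration.

```python
# Toy genetic-algorithm variable selection for logistic regression:
# chromosomes are binary inclusion masks; fitness is AIC (lower is better).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 300, 15
X = rng.standard_normal((n, p))
logit = 1.5 * X[:, 0] - 2.0 * X[:, 3] + X[:, 7]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

def aic(mask):
    if not mask.any():
        return np.inf
    m = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, mask], y)
    prob = np.clip(m.predict_proba(X[:, mask])[:, 1], 1e-12, 1 - 1e-12)
    loglik = np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))
    return 2 * (mask.sum() + 1) - 2 * loglik

pop = rng.random((30, p)) < 0.3                  # initial random masks
for gen in range(40):
    fit = np.array([aic(ind) for ind in pop])
    parents = pop[np.argsort(fit)[:10]]          # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, p)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        child ^= rng.random(p) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmin([aic(ind) for ind in pop])]
print(np.flatnonzero(best))                      # selected variable indices
```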

  2. Addressing selected problems of the modelling of digital control systems

    International Nuclear Information System (INIS)

    Sedlak, J.

    2004-12-01

    The introduction of digital systems to practical activities at nuclear power plants brings about new requirements for their modelling for the purposes of reliability analyses required for plant licensing as well as for inclusion into PSA studies and subsequent use in applications for the assessment of events, limits and conditions, and risk monitoring. It is very important to assess, both qualitatively and quantitatively, the effect of this change on operational safety. The report describes selected specific features of the reliability analysis of digital systems and recommends methodological procedures. The chapters of the report are as follows: (1) Flexibility and multifunctionality of the system. (2) General framework of reliability analyses (Understanding the system; Qualitative analysis; Quantitative analysis; Assessment of results, comparison against criteria; Documenting system reliability analyses; Asking for comments and their evaluation); and (3) Suitable reliability models (Reliability models of basic events; Monitored components with repair immediately following defect or failure; Periodically tested components; Constant unavailability (probability of failure to demand); Application of reliability models for electronic components; Example of failure rate decomposition; Example modified for diagnosis successfulness; Transfer of reliability analyses to PSA; Common cause failures - CCF; Software backup and CCF type failures, software versus hardware). (P.A.)

  3. A CONCEPTUAL MODEL FOR IMPROVED PROJECT SELECTION AND PRIORITISATION

    Directory of Open Access Journals (Sweden)

    P. J. Viljoen

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: Project portfolio management processes are often designed and operated as a series of stages (or project phases) and gates. However, the flow of such a process is often slow, characterised by queues waiting for a gate decision and by repeated work from previous stages waiting for additional information or for re-processing. In this paper the authors propose a conceptual model that applies supply chain and constraint management principles to the project portfolio management process. An advantage of the proposed model is that it provides the ability to select and prioritise projects without undue changes to project schedules. This should result in faster flow through the system.

    AFRIKAANSE OPSOMMING: Processes for managing portfolios of projects are normally designed and operated as a series of phases and gates. The flow through such a process is often slow and is characterised by queues waiting for decisions at the gates, and also by rework from earlier phases waiting for further information or for reprocessing. In this article a conceptual model is proposed. The model rests on the principles of supply chains as well as of constraint management, and offers the advantage that projects can be selected and prioritised without unnecessary changes to project schedules. This should lead to accelerated flow through the system.

  4. Applying Four Different Risk Models in Local Ore Selection

    International Nuclear Information System (INIS)

    Richmond, Andrew

    2002-01-01

    Given the uncertainty in grade at a mine location, a financially risk-averse decision-maker may prefer to incorporate this uncertainty into the ore selection process. A FORTRAN program risksel is presented to calculate local risk-adjusted optimal ore selections using a negative exponential utility function and three dominance models: mean-variance, mean-downside risk, and stochastic dominance. All four methods are demonstrated in a grade control environment. In the case study, optimal selections range with the magnitude of financial risk that a decision-maker is prepared to accept. Except for the stochastic dominance method, the risk models reassign material from higher cost to lower cost processing options as the aversion to financial risk increases. The stochastic dominance model usually was unable to determine the optimal local selection
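
    The mean-variance style of risk adjustment can be illustrated with a certainty-equivalent computation: under a negative exponential utility and an approximately normal payoff, CE = μ − rσ²/2, so the preferred processing destination can flip as risk aversion r grows. The profit functions and simulated grades below are hypothetical, not taken from risksel.

```python
# Sketch of risk-adjusted ore selection with a negative exponential utility:
# under a normal payoff assumption the certainty equivalent is mu - r*var/2,
# so the preferred destination can change as risk aversion r grows.
import numpy as np

rng = np.random.default_rng(0)
grades = rng.lognormal(mean=0.0, sigma=0.5, size=500)   # simulated block grades

def payoff(grade, option):
    # hypothetical profit functions for two processing destinations
    return 10 * grade - 8 if option == "mill" else 4 * grade - 2

for r in [0.0, 0.1, 0.5, 1.0]:                          # risk-aversion levels
    ce = {}
    for option in ("mill", "leach"):
        x = payoff(grades, option)
        ce[option] = x.mean() - r * x.var() / 2         # certainty equivalent
    best = max(ce, key=ce.get)
    print(f"r={r}: choose {best} "
          f"(CE mill={ce['mill']:.2f}, leach={ce['leach']:.2f})")
```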

  5. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases, and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem), while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.

  6. Review of Current Standard Model Results in ATLAS

    CERN Document Server

    Brandt, Gerhard; The ATLAS collaboration

    2018-01-01

    This talk highlights results selected from the Standard Model research programme of the ATLAS Collaboration at the Large Hadron Collider. Results using data from $p$-$p$ collisions at $\sqrt{s}=7,8$ TeV in LHC Run-1, as well as results using data at $\sqrt{s}=13$ TeV in LHC Run-2, are covered. The status of cross section measurements from soft QCD processes, jet production, and photon production is presented. The presentation extends to vector boson production with associated jets. Precision measurements of the production of $W$ and $Z$ bosons, including a first measurement of the mass of the $W$ boson, $m_W$, are discussed. The programme to measure electroweak processes with di-boson and tri-boson final states is outlined. All presented measurements are compatible with Standard Model descriptions and allow it to be further constrained. In addition, they allow probing of new physics which would manifest through extra gauge couplings, or Standard Model gauge couplings deviating from their predicted values.

  7. A Shift in the Thermoregulatory Curve as a Result of Selection for High Activity-Related Aerobic Metabolism

    Directory of Open Access Journals (Sweden)

    Clare Stawski

    2017-12-01

    Full Text Available According to the “aerobic capacity model,” endothermy in birds and mammals evolved as a result of natural selection favoring increased persistent locomotor activity, fuelled by aerobic metabolism. However, this also increased energy expenditure even during rest, with the lowest metabolic rates occurring in the thermoneutral zone (TNZ) and increasing at ambient temperatures (Ta) below and above this range, depicted by the thermoregulatory curve. In our experimental evolution system, four lines of bank voles (Myodes glareolus) have been selected for high swim-induced aerobic metabolism and four unselected lines have been maintained as a control. In addition to a 50% higher rate of oxygen consumption during swimming, the selected lines have also evolved a 7.3% higher mass-adjusted basal metabolic rate. Therefore, we asked whether voles from selected lines would also display a shift in the thermoregulatory curve and an increased body temperature (Tb) during exposure to high Ta. To test these hypotheses we measured the RMR and Tb of selected and control voles at Ta from 10 to 34°C. As expected, RMR within and around the TNZ was higher in selected lines. Further, the Tb of selected lines within the TNZ was greater than the Tb of control lines, particularly at the maximum measured Ta of 34°C, suggesting that selected voles are more prone to hyperthermia. Interestingly, our results revealed that while the slope of the thermoregulatory curve below the lower critical temperature (LCT) is significantly lower in the selected lines, the LCT (26.1°C) does not differ. Importantly, selected voles also evolved a higher maximum thermogenesis, but thermal conductance did not increase. As a consequence, the minimum tolerated temperature, calculated from an extrapolation of the thermoregulatory curve, is 8.4°C lower in selected (−28.6°C) than in control lines (−20.2°C). Thus, selection for high aerobic exercise performance, even though operating under

  8. A Shift in the Thermoregulatory Curve as a Result of Selection for High Activity-Related Aerobic Metabolism.

    Science.gov (United States)

    Stawski, Clare; Koteja, Paweł; Sadowska, Edyta T

    2017-01-01

    According to the "aerobic capacity model," endothermy in birds and mammals evolved as a result of natural selection favoring increased persistent locomotor activity, fuelled by aerobic metabolism. However, this also increased energy expenditure even during rest, with the lowest metabolic rates occurring in the thermoneutral zone (TNZ) and increasing at ambient temperatures (T a ) below and above this range, depicted by the thermoregulatory curve. In our experimental evolution system, four lines of bank voles ( Myodes glareolus ) have been selected for high swim-induced aerobic metabolism and four unselected lines have been maintained as a control. In addition to a 50% higher rate of oxygen consumption during swimming, the selected lines have also evolved a 7.3% higher mass-adjusted basal metabolic rate. Therefore, we asked whether voles from selected lines would also display a shift in the thermoregulatory curve and an increased body temperature (T b ) during exposure to high T a . To test these hypotheses we measured the RMR and T b of selected and control voles at T a from 10 to 34°C. As expected, RMR within and around the TNZ was higher in selected lines. Further, the T b of selected lines within the TNZ was greater than the T b of control lines, particularly at the maximum measured T a of 34°C, suggesting that selected voles are more prone to hyperthermia. Interestingly, our results revealed that while the slope of the thermoregulatory curve below the lower critical temperature (LCT) is significantly lower in the selected lines, the LCT (26.1°C) does not differ. Importantly, selected voles also evolved a higher maximum thermogenesis, but thermal conductance did not increase. As a consequence, the minimum tolerated temperature, calculated from an extrapolation of the thermoregulatory curve, is 8.4°C lower in selected (-28.6°C) than in control lines (-20.2°C). Thus, selection for high aerobic exercise performance, even though operating under thermally

  9. Selection, calibration, and validation of models of tumor growth.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth. First, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous, macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes, and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be referred to as "model agnostic" in that no single model is advocated; rather, a general approach that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors is presented, which leads to many potentially predictive models. Then representative classes are identified which provide a starting point for the implementation of OPAL, the Occam Plausibility Algorithm, which enables the modeler to select the most plausible models (for given data) and to determine if the model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-fields models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  10. Models of microbiome evolution incorporating host and microbial selection.

    Science.gov (United States)

    Zeng, Qinglong; Wu, Steven; Sukumaran, Jeet; Rodrigo, Allen

    2017-09-25

    Numerous empirical studies suggest that hosts and microbes exert reciprocal selective effects on their ecological partners. Nonetheless, we still lack an explicit framework to model the dynamics of both hosts and microbes under selection. In a previous study, we developed an agent-based forward-time computational framework to simulate the neutral evolution of host-associated microbial communities in a constant-sized, unstructured population of hosts. These neutral models allowed offspring to sample microbes randomly from parents and/or from the environment. Additionally, the environmental pool of available microbes was constituted by fixed and persistent microbial OTUs and by contributions from host individuals in the preceding generation. In this paper, we extend our neutral models to allow selection to operate on both hosts and microbes. We do this by constructing a phenome for each microbial OTU consisting of a sample of traits that influence host and microbial fitnesses independently. Microbial traits can influence the fitness of hosts ("host selection") and the fitness of microbes ("trait-mediated microbial selection"). Additionally, the fitness effects of traits on microbes can be modified by their hosts ("host-mediated microbial selection"). We simulate the effects of these three types of selection, individually or in combination, on microbiome diversities and the fitnesses of hosts and microbes over several thousand generations of hosts. We show that microbiome diversity is strongly influenced by selection acting on microbes. Selection acting on hosts only influences microbiome diversity when there is near-complete direct or indirect parental contribution to the microbiomes of offspring. Unsurprisingly, microbial fitness increases under microbial selection. Interestingly, when host selection operates, host fitness only increases under two conditions: (1) when there is a strong parental contribution to microbial communities or (2) in the absence of a strong

  11. On model selections for repeated measurement data in clinical studies.

    Science.gov (United States)

    Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J

    2015-05-10

    Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
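
    The contrast between the two strategies is easy to reproduce on simulated longitudinal data: a t-test at the final visit discards most of the information, whereas a mixed-effects model uses all visits. The sketch below uses statsmodels; the data-generating values and column names are invented for illustration.

```python
# Contrast of the two analysis strategies on simulated longitudinal data:
# a t-test at the final visit vs. a mixed-effects model using all visits.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n, visits = 40, 4
rows = []
for subj in range(n):
    treat = subj % 2
    u = rng.normal(0, 1.0)                     # random subject intercept
    for t in range(visits):
        y = 10 + 0.5 * t + 0.4 * treat * t + u + rng.normal(0, 0.8)
        rows.append((subj, treat, t, y))
df = pd.DataFrame(rows, columns=["subject", "treat", "time", "y"])

# t-test comparing treatments at the last time point only
last = df[df["time"] == visits - 1]
print(ttest_ind(last[last["treat"] == 1]["y"], last[last["treat"] == 0]["y"]))

# mixed-effects model using all repeated measurements
fit = smf.mixedlm("y ~ time * treat", df, groups=df["subject"]).fit()
print(fit.summary())
```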

  12. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling... Hardware can be repaired by spare modules, which is not the case for software... Preventive maintenance is very important

  13. Analytical Modelling Of Milling For Tool Design And Selection

    International Nuclear Information System (INIS)

    Fontaine, M.; Devillez, A.; Dudzinski, D.

    2007-01-01

    This paper presents an efficient analytical model which makes it possible to simulate a large panel of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach of oblique cutting is applied to predict the forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.

  14. Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2012-01-01

    Full Text Available This paper employs fuzzy set theory to address the unintuitive aspects of the Markowitz mean-variance (MV) portfolio model and extends it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus can find the optimal resolution for each interval. In the empirical part, we test this model on Chinese stock investments and find that it can fulfill different kinds of investors' objectives. Finally, investment risk can be decreased when we add an investment limit to each stock in the portfolio, which indicates our model is useful in practice.
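
    A minimal sketch of the crisp Markowitz core that the fuzzy interval model generalizes, with a per-stock investment limit; the expected returns, covariances and limit below are hypothetical, not the paper's data:

```python
# Maximize mean-variance utility subject to a budget constraint and a
# per-stock investment limit. All inputs are assumed for illustration.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])            # assumed expected returns
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.12, 0.03],
                [0.01, 0.03, 0.09]])         # assumed covariance matrix
risk_aversion = 3.0

def neg_utility(w):
    return -(mu @ w - 0.5 * risk_aversion * w @ cov @ w)

res = minimize(neg_utility, x0=np.full(3, 1 / 3),
               bounds=[(0.0, 0.5)] * 3,      # investment limit per stock
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print(res.x.round(3))                        # optimal weights
```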

  15. Diversified models for portfolio selection based on uncertain semivariance

    Science.gov (United States)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since financial markets are complex, the future security returns sometimes have to be given mainly by experts' estimations due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given by experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.

  16. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  17. Selection of an appropriately simple storm runoff model

    Directory of Open Access Journals (Sweden)

    A. I. J. M. van Dijk

    2010-03-01

    Full Text Available An appropriately simple event runoff model for catchment hydrological studies was derived. The model was selected from several variants as having the optimum balance between simplicity and the ability to explain daily observations of streamflow from 260 Australian catchments (23–1902 km²). Event rainfall and runoff were estimated from the observations through a combination of baseflow separation and storm flow recession analysis, producing a storm flow recession coefficient (kQF). Various model structures with up to six free parameters were investigated, covering most of the equations applied in existing lumped catchment models. The performance of the alternative structures and free parameters was expressed in Akaike's Final Prediction Error Criterion (FPEC) and corresponding Nash-Sutcliffe model efficiencies (NSME) for event runoff totals. For each model variant, the number of free parameters was reduced in steps based on calculated parameter sensitivity. The resulting optimal model structure had two or three free parameters; the first describing the non-linear relationship between event rainfall and runoff (Smax), the second relating runoff to antecedent groundwater storage (CSg), and a third describing initial rainfall losses (Li), which could be set at 8 mm without affecting model performance too much. The best three-parameter model produced a median NSME of 0.64 and outperformed, for example, the Soil Conservation Service Curve Number technique (median NSME 0.30–0.41). Parameter estimation in ungauged catchments is likely to be challenging: 64% of the variance in kQF among stations could be explained by catchment climate indicators and spatial correlation, but the corresponding numbers were a modest 45% for CSg, 21% for Smax and none for Li. In gauged catchments, better
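
    The Nash-Sutcliffe model efficiency used to score the model variants has a one-line definition. A minimal sketch with hypothetical observed and simulated event-runoff totals:

```python
# Nash-Sutcliffe model efficiency (NSME); obs and sim are hypothetical.
import numpy as np

def nsme(obs, sim):
    """1 is a perfect fit; 0 or less means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(round(nsme([1.0, 2.0, 4.0, 8.0], [1.1, 1.9, 4.5, 7.2]), 3))
```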

  18. Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection

    Science.gov (United States)

    Harwati

    2017-06-01

    Supplier selection is a decision involving many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes supplier selection models difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in the company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria, easy and simple to use, suffice to select suppliers: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real-case simulation shows that the simple model yields the same decision as a more complex model.
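
    The AHP weighting step can be reproduced from a pairwise-comparison matrix: the criterion weights are the normalized principal eigenvector. A minimal sketch with a hypothetical comparison matrix, built here to be consistent with the weights reported above:

```python
# AHP criterion weighting: normalized principal eigenvector of a
# pairwise-comparison matrix. The matrix below is hypothetical.
import numpy as np

A = np.array([[1.0, 4/3, 2.0, 4.0],
              [3/4, 1.0, 3/2, 3.0],
              [1/2, 2/3, 1.0, 2.0],
              [1/4, 1/3, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                              # normalize to sum to one
print(dict(zip(["price", "shipment", "quality", "service"], w.round(2))))
# -> {'price': 0.4, 'shipment': 0.3, 'quality': 0.2, 'service': 0.1}
```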

  19. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
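
    The library's basic entry point is `fmin`. A minimal usage sketch with a toy one-dimensional objective (the search spaces in the paper are far larger):

```python
# Minimize a toy objective over a one-dimensional search space with TPE.
from hyperopt import fmin, tpe, hp, STATUS_OK

def objective(params):
    x = params["x"]
    return {"loss": (x - 3.0) ** 2, "status": STATUS_OK}

best = fmin(fn=objective,
            space={"x": hp.uniform("x", -10.0, 10.0)},
            algo=tpe.suggest,        # sequential model-based optimization
            max_evals=100)
print(best)                          # e.g. {'x': 2.98...}
```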

  20. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study...... illustrates how the projected climate change signal of important variables as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make...... the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...

  1. Criteria for selecting implementation science theories and frameworks: results from an international survey

    Directory of Open Access Journals (Sweden)

    Sarah A. Birken

    2017-10-01

    Full Text Available Abstract Background Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories. Methods We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results. Results Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, the criteria used by the most respondents to select theory included analytic level (58%, logical consistency/plausibility (56%, empirical support (53%, and description of a change process (54%. The criteria used by the fewest respondents included fecundity (10%, uniqueness (12%, and falsifiability (15%. Conclusions Implementation scientists use a large number of criteria to select theories, but there is little

  2. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, as the Internet plays an increasingly important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches to the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering a single factor such as cost.

  3. Adverse Selection Models with Three States of Nature

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-02-01

    Full Text Available In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.

  4. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with the takeover of employees in outsourcing. Its main purpose is to compare the staffing models of outsourcing in selected companies. To compare the selected companies I chose multi-criteria analysis. The thesis is divided into six chapters. The first chapter is devoted to the theoretical part and describes basic concepts such as outsourcing, personnel aspects, phases of outsourcing projects, communication and culture. The rest of the thesis is devote...

  5. Modeling Directional Selectivity Using Self-Organizing Delay-Adaptation Maps

    OpenAIRE

    Tversky, Tal; Miikkulainen, Risto

    2002-01-01

    Using a delay-adaptation learning rule, we model the activity-dependent development of directionally selective cells in the primary visual cortex. Based on input stimuli, the learning rule shifts delays to create synchronous arrival of spikes at cortical cells. As a result, delays become tuned, creating a smooth cortical map of direction selectivity. This result demonstrates how delay adaptation can serve as a powerful abstraction for modeling temporal learning in the brain.

  6. Loss of spent fuel pool cooling PRA: Model and results

    International Nuclear Information System (INIS)

    Siu, N.; Khericha, S.; Conroy, S.; Beck, S.; Blackman, H.

    1996-09-01

    This letter report documents models for quantifying the likelihood of loss of spent fuel pool cooling; models for identifying post-boiling scenarios that lead to core damage; qualitative and quantitative results generated for a selected plant that account for plant design and operational practices; a comparison of these results and those generated from earlier studies; and a review of available data on spent fuel pool accidents. The results of this study show that for a representative two-unit boiling water reactor, the annual probability of spent fuel pool boiling is 5 × 10^-5 and the annual probability of flooding associated with loss of spent fuel pool cooling scenarios is 1 × 10^-3. Qualitative arguments are provided to show that the likelihood of core damage due to spent fuel pool boiling accidents is low for most US commercial nuclear power plants. It is also shown that, depending on the design characteristics of a given plant, the likelihood of either (a) core damage due to spent fuel pool-associated flooding or (b) spent fuel damage due to pool dryout may not be negligible

  7. Parameter Selection and Performance Analysis of Mobile Terminal Models Based on Unity3D

    Institute of Scientific and Technical Information of China (English)

    KONG Li-feng; ZHAO Hai-ying; XU Guang-mei

    2014-01-01

    Mobile platforms are now widely seen as a promising multimedia service with a favorable user group and market prospects. To study the influence of mobile terminal models on the quality of scene roaming, a parameter setting platform for mobile terminal models is established in this paper to examine parameter selection and performance indices on different mobile platforms. The test platform is built on the model optimality principle: it analyzes the performance curves of mobile terminals in different scene models and then deduces the external parameters for model establishment. Simulation results prove that the established test platform is able to produce the parameter and performance matching list of a mobile terminal model.

  8. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is only partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. This special MaxEnt process regression model, i.e., the GPR model, is then generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of sample-selection bias correction, robustness to non-normality, and prediction is demonstrated through simulation results that attest to its good finite-sample performance.

  9. ERP Software Selection Model using Analytic Network Process

    OpenAIRE

    Lesmana , Andre Surya; Astanti, Ririn Diar; Ai, The Jin

    2014-01-01

    During the implementation of Enterprise Resource Planning (ERP) in any company, one of the most important issues is the selection of ERP software that can satisfy the needs and objectives of the company. This issue is crucial since it may affect the duration of ERP implementation and the costs incurred for it. This research tries to construct a model of the selection of ERP software that is beneficial to the company in order to carry out the selection of the right ERP sof...

  10. Criteria for selecting implementation science theories and frameworks: results from an international survey.

    Science.gov (United States)

    Birken, Sarah A; Powell, Byron J; Shea, Christopher M; Haines, Emily R; Alexis Kirk, M; Leeman, Jennifer; Rohweder, Catherine; Damschroder, Laura; Presseau, Justin

    2017-10-30

    Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories. We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results. Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, the criteria used by the most respondents to select theory included analytic level (58%), logical consistency/plausibility (56%), empirical support (53%), and description of a change process (54%). The criteria used by the fewest respondents included fecundity (10%), uniqueness (12%), and falsifiability (15%). Implementation scientists use a large number of criteria to select theories, but there is little consensus on which are most important. Our results suggest that the

  11. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  12. IT vendor selection model by using structural equation model & analytical hierarchy process

    Science.gov (United States)

    Maitra, Sarit; Dominic, P. D. D.

    2012-11-01

    Selecting and evaluating the right vendors is imperative for an organization's global marketplace competitiveness. Improper selection and evaluation of potential vendors can hamper an organization's supply chain performance. Numerous studies have demonstrated that firms consider multiple criteria when selecting key vendors. This research intends to develop a new hybrid model for the vendor selection process with better decision making. The proposed model provides a suitable tool for assisting decision makers and managers to make the right decisions and select the most suitable vendor. This paper proposes a hybrid model based on the Structural Equation Model (SEM) and the Analytical Hierarchy Process (AHP) for long-term strategic vendor selection problems. The five-step framework of the model has been designed after a thorough literature study. The proposed hybrid model will be applied using a real-life case study to assess its effectiveness. In addition, the what-if analysis technique will be used for model validation purposes.

  13. Uncertain programming models for portfolio selection with uncertain returns

    Science.gov (United States)

    Zhang, Bo; Peng, Jin; Li, Shengguo

    2015-10-01

    In an indeterminate economic environment, experts' knowledge about the returns of securities involves much uncertainty rather than randomness. This paper discusses the portfolio selection problem in an uncertain environment, in which security returns cannot be well reflected by historical data but can be evaluated by experts. In the paper, returns of securities are assumed to be given by uncertain variables. According to various decision criteria, the portfolio selection problem in an uncertain environment is formulated as an expected-variance-chance model and a chance-expected-variance model by using uncertain programming. Within the framework of uncertainty theory, for the convenience of solving the models, some crisp equivalents are discussed under different conditions. In addition, a hybrid intelligent algorithm is designed in the paper to provide a general method for solving the new models in general cases. Finally, two numerical examples are provided to show the performance and applications of the models and algorithm.

  14. Experiment selection for the discrimination of semi-quantitative models of dynamical systems

    NARCIS (Netherlands)

    Vatcheva, [No Value; de Jong, H; Bernard, O; Mars, NJI

    Modeling an experimental system often results in a number of alternative models that are all justified by the available experimental data. To discriminate among these models, additional experiments are needed. Existing methods for the selection of discriminatory experiments in statistics and in

  15. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    Science.gov (United States)

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need to increase participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
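
    The mechanics of a Heckman-type correction are easiest to see in the classic two-step form for a continuous outcome; the paper's likelihood-based models for binary HIV status are more involved but share the same idea. A minimal sketch on simulated data:

```python
# Classic two-step Heckman correction on simulated data: probit for
# participation, inverse Mills ratio, then an augmented outcome regression.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                        # drives selection only
x = rng.normal(size=n)                        # drives the outcome
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
selected = (0.5 + z + u[:, 0]) > 0            # participation equation
y = 1.0 + 2.0 * x + u[:, 1]                   # outcome, seen only if selected

# Step 1: probit for participation, then the inverse Mills ratio.
probit = sm.Probit(selected.astype(int), sm.add_constant(z)).fit(disp=0)
imr = norm.pdf(probit.fittedvalues) / norm.cdf(probit.fittedvalues)

# Step 2: outcome regression on participants, augmented with the ratio.
Xo = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
print(sm.OLS(y[selected], Xo).fit().params)   # approx [1.0, 2.0, 0.6]
```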

  16. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model that specifies the correct set of m relevant exogenous variables, x{t}, is embedded within the larger set of m+k candidate variables, (x{t},w{t}), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant....

  17. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  18. The use of vector bootstrapping to improve variable selection precision in Lasso models

    NARCIS (Netherlands)

    Laurin, C.; Boomsma, D.I.; Lubke, G.H.

    2016-01-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections.
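
    The workflow the paper builds on can be sketched directly: fit the Lasso by K-fold cross-validation, then bootstrap the data and record how often each variable is selected. A minimal illustration on simulated data (the paper's vector bootstrap refines this further):

```python
# K-fold cross-validated Lasso plus ordinary bootstrap resampling to
# measure selection stability. Simulated data, not the genetic data.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(2)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 1.0                               # 5 truly relevant predictors
y = X @ beta + rng.normal(size=n)

reps = 50
selected = np.zeros(p)
for _ in range(reps):
    idx = rng.integers(0, n, size=n)         # bootstrap resample
    fit = LassoCV(cv=5).fit(X[idx], y[idx])  # K-fold CV picks the penalty
    selected += (fit.coef_ != 0)

# Selection frequencies: near 1 for the first five, low for the noise terms.
print((selected / reps).round(2))
```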

  19. Portfolio Effects of Renewable Energies - Basics, Models, Exemplary Results

    Energy Technology Data Exchange (ETDEWEB)

    Wiese, Andreas; Herrmann, Matthias

    2007-07-01

    The combination of sites and technologies into so-called renewable energy portfolios, which are developed and implemented under the same financing umbrella, is currently the subject of intense discussion in the finance world. The resulting portfolio effect may allow the prediction of a higher return at the same risk, or the same return at a lower risk, always in comparison with the investment in a single project. Models are currently being developed to analyse this subject and derive the portfolio effect. In particular, the effect of the spatial distribution, as well as the effects of using different technologies, suppliers and cost assumptions with different levels of uncertainty, are of importance. Wind parks, photovoltaics, biomass, biogas and hydropower are considered. The status of the model development and first results are presented in the current paper. In a first example, the portfolio effect has been calculated and analysed using selected parameters for a wind energy portfolio of 39 sites distributed over Europe. It has been shown that the predicted yield, at predetermined probabilities between 75 and 90%, is 3-8% higher than the sum of the yields of the individual wind parks at the same probabilities. (auth)

  20. Engineering DNA Backbone Interactions Results in TALE Scaffolds with Enhanced 5-Methylcytosine Selectivity.

    Science.gov (United States)

    Rathi, Preeti; Witte, Anna; Summerer, Daniel

    2017-11-08

    Transcription activator-like effectors (TALEs) are DNA major-groove binding proteins widely used for genome targeting. TALEs contain an N-terminal region (NTR) and a central repeat domain (CRD). The repeats of the CRD each selectively recognize one DNA nucleobase, offering programmability. Moreover, repeats with selectivity for 5-methylcytosine (5mC) and its oxidized derivatives can be designed for analytical applications. However, both TALE domains also interact nonspecifically with DNA phosphates via basic amino acids. To enhance the 5mC selectivity of TALEs, we aimed to decrease their nonselective binding energy. We substituted basic amino acids with alanine in the NTR and identified TALE mutants with increased selectivity. We then analysed conserved, DNA phosphate-binding KQ diresidues in CRD repeats and identified further improved mutants. Combining mutations in the NTR and CRD was highly synergetic and resulted in TALE scaffolds with up to 4.3-fold increased selectivity in genomic 5mC analysis via affinity enrichment. Moreover, transcriptional activation in HEK293T cells by a TALE-VP64 construct based on this scaffold design exhibited a 3.5-fold increased 5mC selectivity. This provides perspectives for improved 5mC analysis and for the 5mC-conditional control of TALE-based editing constructs in vivo.

  1. CHAIN-WISE GENERALIZATION OF ROAD NETWORKS USING MODEL SELECTION

    Directory of Open Access Journals (Sweden)

    D. Bulatov

    2017-05-01

    Full Text Available Streets are essential entities of urban terrain, and their automated extraction from airborne sensor data is cumbersome because of a complex interplay of geometric, topological and semantic aspects. Given a binary image representing the road class, centerlines of road segments are extracted by means of skeletonization. The focus of this paper lies in a well-reasoned representation of these segments by means of geometric primitives, such as straight line segments as well as circle and ellipse arcs. We propose the fusion of raw segments based on similarity criteria; the outputs of this process are the so-called chains, which better match the intuitive perception of what a street is. Further, we propose a two-step approach for chain-wise generalization. First, the chain is pre-segmented using circlePeucker; finally, model selection is used to decide whether two neighboring segments should be fused into a new geometric entity. Thereby, we consider both variance-covariance analysis of residuals and model complexity. The results on a complex data set with many traffic roundabouts indicate the benefits of the proposed procedure.
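
    For readers unfamiliar with the Peucker family of algorithms, the classic straight-line Ramer-Douglas-Peucker simplification, a relative of the circlePeucker pre-segmentation mentioned above (which additionally handles circle arcs), fits in a few lines. A minimal sketch, not the authors' implementation:

```python
# Ramer-Douglas-Peucker polyline simplification.
import numpy as np

def rdp(points, eps):
    """Keep only points farther than eps from the chord between endpoints."""
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    chord = b - a
    rel = points[1:-1] - a
    # Perpendicular distance of interior points to the chord (2-D cross product).
    d = np.abs(chord[0] * rel[:, 1] - chord[1] * rel[:, 0]) / np.linalg.norm(chord)
    i = int(np.argmax(d)) + 1
    if d[i - 1] <= eps:
        return np.array([a, b])
    return np.vstack([rdp(points[:i + 1], eps)[:-1], rdp(points[i:], eps)])

line = np.array([[0, 0], [1, 0.1], [2, -0.1], [3, 5], [4, 6], [5, 7]], float)
print(rdp(line, eps=0.5))
```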

  2. Fixation probability in a two-locus intersexual selection model.

    Science.gov (United States)

    Durand, Guillermo; Lessard, Sabin

    2016-06-01

    We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.
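
    The flavor of such fixation-probability computations can be conveyed with a far simpler one-locus haploid Moran model under selection; the two-locus model with mate preference and recombination analyzed in the paper is substantially richer. A minimal simulation sketch:

```python
# Fixation probability of a single beneficial mutant in a one-locus haploid
# Moran model, estimated by simulation.
import numpy as np

def fixation_probability(N=100, s=0.02, reps=5000, seed=3):
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(reps):
        k = 1                                 # one mutant copy introduced
        while 0 < k < N:
            # Birth: fitness-proportional choice; death: uniform choice.
            birth_mut = rng.random() < (1 + s) * k / ((1 + s) * k + (N - k))
            death_mut = rng.random() < k / N
            k += int(birth_mut) - int(death_mut)
        fixed += (k == N)
    return fixed / reps

# Classical Moran result for comparison: (1 - 1/r) / (1 - r**-N), r = 1 + s.
print(fixation_probability())                 # approx 0.023 for N=100, s=0.02
```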

  3. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set containing many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  4. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Z; Folkert, M; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States)]

    2016-06-15

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set containing many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm was used to optimize model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  5. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions and new forms of duality (for state-dependent mutation and multitype selection) which are used to prove ergodic theorems in this context and are applicable for many other questions and renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection and make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  6. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though...... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian....

  7. Model selection and inference a practical information-theoretic approach

    CERN Document Server

    Burnham, Kenneth P

    1998-01-01

    This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...

  8. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best one. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
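
    The single-bootstrap building block that double BOOT extends can be sketched as follows: resample the data, let AIC pick the confounder model on each resample, and average the resulting PM coefficients. Everything below is simulated and hypothetical, not the US time series used in the paper:

```python
# Bootstrap model averaging: AIC selects among candidate time-trend models
# on each resample; the PM coefficients are then averaged.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 365
t = np.arange(n) / n                          # normalized time trend
pm = rng.gamma(2.0, 10.0, size=n)             # hypothetical PM series
deaths = 30 + 0.05 * pm + rng.normal(scale=3.0, size=n)

estimates = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)
    best = None
    for degree in (1, 2, 3):                  # candidate time-trend models
        X = sm.add_constant(np.column_stack(
            [pm[idx]] + [t[idx] ** d for d in range(1, degree + 1)]))
        fit = sm.OLS(deaths[idx], X).fit()
        if best is None or fit.aic < best.aic:
            best = fit                        # AIC-selected model per resample
    estimates.append(best.params[1])          # coefficient on PM
print(np.mean(estimates), np.std(estimates))  # averaged effect and its spread
```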

  10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
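
    The backward-elimination variant examined in the paper follows a simple loop: fit the forest, drop the least important predictors, and re-assess accuracy with cross-validation external to the fit. A minimal sketch on simulated data, not the stream survey:

```python
# Backward elimination for a random forest: repeatedly drop the least
# important ~20% of predictors and track cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=50, n_informative=8,
                           random_state=0)
keep = np.arange(X.shape[1])
while len(keep) >= 5:
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(rf, X[:, keep], y, cv=5).mean()
    print(f"{len(keep):2d} predictors: CV accuracy {acc:.3f}")
    rf.fit(X[:, keep], y)
    order = np.argsort(rf.feature_importances_)   # ascending importance
    keep = keep[order[max(1, len(keep) // 5):]]   # drop the least important
```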

  11. Tumor-Selective Cytotoxicity of Nitidine Results from Its Rapid Accumulation into Mitochondria

    Directory of Open Access Journals (Sweden)

    Hironori Iwasaki

    2017-01-01

    Full Text Available We identified the organelle in which nitidine (NTD) accumulates and evaluated the net cytotoxicity of the accumulated NTD. To evaluate the tumor cell selectivity of the drug, we assessed its selective cytotoxicity against 39 human cancer cell lines (the JFCR39 panel), and the profile was compared with those of known anticancer drugs. Organelle specificity of NTD was visualized using organelle-targeted fluorescent proteins. Real-time analysis of cell growth, proliferation, and cytotoxicity was performed using the xCELLigence system. Selectivity of NTD in the JFCR39 panel was evaluated. Mitochondria-specific accumulation of NTD was observed. Real-time cytotoxicity analysis suggested that the mechanism of NTD-induced cell death is independent of the cell cycle. Short-term treatment indicated that this cytotoxicity resulted solely from the accumulation of NTD in the mitochondria. The results from the JFCR39 panel indicated that NTD-mediated cytotoxicity arises from mechanisms distinct from those of other known anticancer drugs. These results suggest that the cytotoxicity of NTD is induced only by its accumulation in mitochondria. The drug triggered mitochondrial dysfunction in less than 2 h. Similarity analysis of the selectivity of NTD in 39 tumor cell lines strongly supported the unique tumor cell specificity of NTD. Thus, these features indicate that NTD may be a promising antitumor drug for new combination chemotherapies.

  12. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Attention-based Memory Selection Recurrent Network for Language Modeling

    OpenAIRE

    Liu, Da-Rong; Chuang, Shun-Po; Lee, Hung-yi

    2016-01-01

    Recurrent neural networks (RNNs) have achieved great success in language modeling. However, since RNNs have a fixed-size memory, the memory cannot store all the information about the words seen earlier in the sentence, and thus useful long-term information may be ignored when predicting the next word. In this paper, we propose the Attention-based Memory Selection Recurrent Network (AMSRN), in which the model can review the information stored in the memory at each previous time ...

  14. The Selection of ARIMA Models with or without Regressors

    DEFF Research Database (Denmark)

    Johansen, Søren; Riani, Marco; Atkinson, Anthony C.

    We develop a $C_{p}$ statistic for the selection of regression models with stationary and nonstationary ARIMA error terms. We derive the asymptotic theory of the maximum likelihood estimators and show that they are consistent and asymptotically Gaussian. We also prove that the distribution of the sum...

  15. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method for producing high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is

  16. Multivariate time series modeling of selected childhood diseases in ...

    African Journals Online (AJOL)

    This paper is focused on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), pneumonia, anaemia and tetanus was extracted from five randomly selected hospitals in ...

  17. A model of directional selection applied to the evolution of drug resistance in HIV-1.

    Science.gov (United States)

    Seoighe, Cathal; Ketwaroo, Farahnaz; Pillay, Visva; Scheffler, Konrad; Wood, Natasha; Duffet, Rodger; Zvelebil, Marketa; Martinson, Neil; McIntyre, James; Morris, Lynn; Hide, Winston

    2007-04-01

    Understanding how pathogens acquire resistance to drugs is important for the design of treatment strategies, particularly for rapidly evolving viruses such as HIV-1. Drug treatment can exert strong selective pressures and sites within targeted genes that confer resistance frequently evolve far more rapidly than the neutral rate. Rapid evolution at sites that confer resistance to drugs can be used to help elucidate the mechanisms of evolution of drug resistance and to discover or corroborate novel resistance mutations. We have implemented standard maximum likelihood methods that are used to detect diversifying selection and adapted them for use with serially sampled reverse transcriptase (RT) coding sequences isolated from a group of 300 HIV-1 subtype C-infected women before and after single-dose nevirapine (sdNVP) to prevent mother-to-child transmission. We have also extended the standard models of codon evolution for application to the detection of directional selection. Through simulation, we show that the directional selection model can provide a substantial improvement in sensitivity over models of diversifying selection. Five of the sites within the RT gene that are known to harbor mutations that confer resistance to nevirapine (NVP) strongly supported the directional selection model. There was no evidence that other mutations that are known to confer NVP resistance were selected in this cohort. The directional selection model, applied to serially sampled sequences, also had more power than the diversifying selection model to detect selection resulting from factors other than drug resistance. Because inference of selection from serial samples is unlikely to be adversely affected by recombination, the methods we describe may have general applicability to the analysis of positive selection affecting recombining coding sequences when serially sampled data are available.

  18. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and datasets such as are currently produced in multiple ions experiments. We test the performance of AIC and BIC on randomly chosen low rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one ion states. We then apply the methods to real data from a four ions experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ² test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
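
    A minimal sketch of the AIC/BIC comparison described above. The parameter count r(2d − r) − 1 for a rank-r density matrix on a d-dimensional space is standard, but the log-likelihood values below are invented for illustration.

```python
import numpy as np

def aic(loglik, k):
    return 2 * k - 2 * loglik            # Akaike information criterion

def bic(loglik, k, n):
    return k * np.log(n) - 2 * loglik    # Bayesian information criterion

dim = 2 ** 4      # Hilbert-space dimension for four ions (qubits)
n = 10_000        # number of measurement repetitions (hypothetical)

# Hypothetical log-likelihoods of maximum-likelihood fits at fixed rank r.
for r, ll in [(2, -14210.0), (4, -14120.5), (8, -14112.2)]:
    k = r * (2 * dim - r) - 1            # free real parameters of a rank-r state
    print(f"rank {r}: AIC = {aic(ll, k):.1f}, BIC = {bic(ll, k, n):.1f}")
```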

  19. Natural Gamma Emitters after a Selective Chemical Separation of a TENORM residue: Preliminary Results

    International Nuclear Information System (INIS)

    Alves de Freitas, Antonio; Abrao, Alcidio; Godoy dos Santos, Adir Janete; Pecequilo, Brigitte Roxana Soreanu

    2008-01-01

    An analytical procedure was established in order to obtain selective fractions containing radium isotopes (²²⁸Ra), thorium (²³²Th), and rare earths from RETOTER (REsiduo de TOrio e TErras Raras), a solid residue rich in rare earth elements, thorium isotopes and a small amount of natural uranium, generated from the operation of a thorium pilot plant for the purification and production of pure thorium nitrate at IPEN-CNEN/SP. The paper presents preliminary results of ²²⁸Ra, ²²⁶Ra, ²³⁸U, ²¹⁰Pb, and ⁴⁰K concentrations in the selective fractions and total residue, determined by high-resolution gamma spectroscopy, considering radioactive equilibrium of the samples.

  20. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  1. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  2. Measures and limits of models of fixation selection.

    Directory of Open Access Journals (Sweden)

    Niklas Wilming

    Full Text Available Models of fixation selection are a central tool in the quest to understand how the human mind selects relevant information. Using this tool in the evaluation of competing claims often requires comparing different models' relative performance in predicting eye movements. However, studies use a wide variety of performance measures with markedly different properties, which makes a comparison difficult. We make three main contributions to this line of research: First, we argue for a set of desirable properties, review commonly used measures, and conclude that no single measure unites all desirable properties. However, the area under the ROC curve (a classification measure) and the KL-divergence (a distance measure of probability distributions) combine many desirable properties and allow a meaningful comparison of critical model performance. We give an analytical proof of the linearity of the ROC measure with respect to averaging over subjects and demonstrate an appropriate correction of entropy-based measures like KL-divergence for small sample sizes in the context of eye-tracking data. Second, we provide a lower bound and an upper bound of these measures, based on image-independent properties of fixation data and between-subject consistency, respectively. Based on these bounds it is possible to give a reference frame to judge the predictive power of a model of fixation selection. We provide open-source Python code to compute the reference frame. Third, we show that the upper, between-subject consistency bound holds only for models that predict averages of subject populations. Departing from this, we show that incorporating subject-specific viewing behavior can generate predictions which surpass that upper bound. Taken together, these findings lay out the required information that allows a well-founded judgment of the quality of any model of fixation selection and should therefore be reported when a new model is introduced.
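
    As a small illustration of the ROC measure the record argues for, the sketch below scores a saliency map by how well it separates fixated pixels from all remaining pixels. This pixel-level convention is an assumption here; the authors' own open-source code defines the exact variant they use.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fixation_auc(saliency_map, fixations):
    """AUC of a saliency map: fixated pixels (positives) vs. all other pixels."""
    labels = np.zeros(saliency_map.shape, dtype=int)
    rows, cols = zip(*fixations)
    labels[rows, cols] = 1
    return roc_auc_score(labels.ravel(), saliency_map.ravel())

# Toy usage with a random map and three hypothetical fixation locations.
rng = np.random.default_rng(0)
smap = rng.random((48, 64))
print(fixation_auc(smap, [(10, 12), (30, 40), (5, 60)]))
```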

  3. Generalized Selectivity Description for Polymeric Ion-Selective Electrodes Based on the Phase Boundary Potential Model.

    Science.gov (United States)

    Bakker, Eric

    2010-02-15

    A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper includes and extends on previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant in modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface may be assumed in these cases.
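
    The record compares its phase boundary potential model against the Nicolsky-Eisenman equation; for reference, a sketch of the latter follows. The ions, activities and selectivity coefficient in the usage example are assumed values.

```python
import numpy as np

R, T, F = 8.314, 298.15, 96485.0   # J/(mol K), K, C/mol

def nicolsky_eisenman(a_i, z_i, interferents):
    """Membrane potential term (V) from the Nicolsky-Eisenman equation.

    a_i          : activity of the primary ion i
    z_i          : charge of the primary ion
    interferents : iterable of (K_ij, a_j, z_j), where K_ij is the
                   potentiometric selectivity coefficient
    """
    s = a_i + sum(K_ij * a_j ** (z_i / z_j) for K_ij, a_j, z_j in interferents)
    return (R * T) / (z_i * F) * np.log(s)

# K+ electrode, a_K = 1e-3; Na+ interference at a_Na = 0.1 with log K = -4.
delta = nicolsky_eisenman(1e-3, 1, [(1e-4, 0.1, 1)]) - nicolsky_eisenman(1e-3, 1, [])
print(f"potential shift caused by the interferent: {delta * 1e3:.3f} mV")
```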

  4. Soft shoulders ahead: spurious signatures of soft and partial selective sweeps result from linked hard sweeps.

    Science.gov (United States)

    Schrider, Daniel R; Mendes, Fábio K; Hahn, Matthew W; Kern, Andrew D

    2015-05-01

    Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. This

  5. Selective reporting of antibiotic susceptibility test results in European countries: an ESCMID cross-sectional survey.

    Science.gov (United States)

    Pulcini, Céline; Tebano, Gianpiero; Mutters, Nico T; Tacconelli, Evelina; Cambau, Emmanuelle; Kahlmeter, Gunnar; Jarlier, Vincent

    2017-02-01

    Selective reporting of antibiotic susceptibility test (AST) results is one possible laboratory-based antibiotic stewardship intervention. The primary aim of this study was to identify where and how selective reporting of AST results is implemented in Europe both in inpatient and in outpatient settings. An ESCMID cross-sectional, self-administered, internet-based survey was conducted among all EUCIC (European Committee on Infection Control) or EUCAST (European Committee on Antimicrobial Susceptibility Testing) national representatives in Europe and Israel. Of 38 countries, 36 chose to participate in the survey. Selective reporting of AST results was implemented in 11/36 countries (31%), was partially implemented in 4/36 (11%) and was limited to local initiatives or was not adopted in 21/36 (58%). It was endorsed as standard of care by health authorities in only three countries. The organisation of selective reporting was everywhere discretionally managed by each laboratory, with a pronounced intra- and inter-country variability. The most frequent application was in uncomplicated community-acquired infections, particularly urinary tract and skin and soft-tissue infections. The list of reported antibiotics ranged from a few first-line options, to longer reports where only last-resort antibiotics were hidden. Several barriers to implementation were reported, mainly lack of guidelines, poor system support, insufficient resources, and lack of professionals' capability. In conclusion, selective reporting of AST results is poorly implemented in Europe and is applied with a huge heterogeneity of practices. Development of an international framework, based on existing initiatives and identified barriers, could favour its dissemination as one important element of antibiotic stewardship programmes. Copyright © 2017 Elsevier B.V. and International Society of Chemotherapy. All rights reserved.

  6. Fisher-Wright model with deterministic seed bank and selection.

    Science.gov (United States)

    Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel

    2017-04-01

    Seed banks are common characteristics to many plant species, which allow storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection coupled with a deterministic seed bank assuming the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path of approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, it is demonstrated that seed banks enhance the effect of selection onto the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached. Copyright © 2016 Elsevier Inc. All rights reserved.
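
    A toy forward simulation in the spirit of this model is sketched below: Wright-Fisher binomial reproduction with selection, where each generation draws from a uniform mixture of the allele frequencies of the last b generations (a crude stand-in for the deterministic seed bank). The paper itself works with a diffusion limit rather than simulation, so this is orientation only.

```python
import numpy as np

def wright_fisher_seed_bank(N=1000, s=0.01, b=5, p0=0.1, gens=2000, rng=None):
    """Haploid Wright-Fisher simulation with selection and a b-generation bank.

    Each generation, parents are drawn from the mean allele frequency of
    the last b generations, selection with coefficient s is applied, and
    binomial sampling of N individuals supplies the genetic drift.
    """
    rng = rng or np.random.default_rng()
    freqs = [p0] * b
    for _ in range(gens):
        p = np.mean(freqs[-b:])                        # draw from the seed bank
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection step
        freqs.append(rng.binomial(N, p_sel) / N)       # drift step
    return freqs

traj = wright_fisher_seed_bank(rng=np.random.default_rng(3))
print(f"final allele frequency after 2000 generations: {traj[-1]:.3f}")
```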

  7. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially…

  8. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  9. A model for the sustainable selection of building envelope assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Huedo, Patricia, E-mail: huedo@uji.es [Universitat Jaume I (Spain); Mulet, Elena, E-mail: emulet@uji.es [Universitat Jaume I (Spain); López-Mesa, Belinda, E-mail: belinda@unizar.es [Universidad de Zaragoza (Spain)

    2016-02-15

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  10. A model for the sustainable selection of building envelope assemblies

    International Nuclear Information System (INIS)

    Huedo, Patricia; Mulet, Elena; López-Mesa, Belinda

    2016-01-01

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  11. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is just developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  12. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. The article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment for the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
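
    The record demonstrates the workflow in R; a minimal Python analogue of its central step, the likelihood ratio test for deleting a covariate, is sketched below on simulated data. Purposeful selection would additionally check whether the remaining coefficients shift appreciably after deletion, which would flag the deleted variable as an important adjustment.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=n), rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.8 * x1 + 0.1 * x2)))   # weak effect of x2
y = rng.binomial(1, p)

full = sm.Logit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0)
reduced = sm.Logit(y, sm.add_constant(x1)).fit(disp=0)

lr = 2 * (full.llf - reduced.llf)                    # likelihood ratio statistic
print(f"LR = {lr:.2f}, p = {chi2.sf(lr, df=1):.3f}")  # df = number of dropped terms
```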

  13. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  14. Variable Selection for Regression Models of Percentile Flows

    Science.gov (United States)

    Fouad, G.

    2017-12-01

    Percentile flows describe the flow magnitude equaled or exceeded for a given percent of time, and are widely used in water resource management. However, these statistics are normally unavailable since most basins are ungauged. Percentile flows of ungauged basins are often predicted using regression models based on readily observable basin characteristics, such as mean elevation. The number of these independent variables is too large to evaluate all possible models. A subset of models is typically evaluated using automatic procedures, like stepwise regression. This ignores a large variety of methods from the field of feature (variable) selection and physical understanding of percentile flows. A study of 918 basins in the United States was conducted to compare an automatic regression procedure to the following variable selection methods: (1) principal component analysis, (2) correlation analysis, (3) random forests, (4) genetic programming, (5) Bayesian networks, and (6) physical understanding. The automatic regression procedure only performed better than principal component analysis. Poor performance of the regression procedure was due to a commonly used filter for multicollinearity, which rejected the strongest models because they had cross-correlated independent variables. Multicollinearity did not decrease model performance in validation because of a representative set of calibration basins. Variable selection methods based strictly on predictive power (numbers 2-5 from above) performed similarly, likely indicating a limit to the predictive power of the variables. Similar performance was also reached using variables selected based on physical understanding, a finding that substantiates recent calls to emphasize physical understanding in modeling for predictions in ungauged basins. The strongest variables highlighted the importance of geology and land cover, whereas widely used topographic variables were the weakest predictors. Variables suffered from a high
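
    Among the variable selection methods compared in the record, random forests are the simplest to illustrate: rank the basin characteristics by impurity-based importance and retain the strongest. Everything below, attribute names included, is a synthetic stand-in for the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
names = ["mean_elev", "slope", "forest_pct", "clay_pct", "mean_precip"]
X = rng.normal(size=(918, 5))                    # 918 basins, 5 attributes
y = 2.0 * X[:, 4] + 1.0 * X[:, 3] + rng.normal(scale=0.5, size=918)  # toy flow

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
for name, imp in sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")   # keep the top-ranked variables
```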

  15. X-Ray Observations of Optically Selected, Radio-quiet Quasars. I. The ASCA Results

    Science.gov (United States)

    George, I. M.; Turner, T. J.; Yaqoob, T.; Netzer, H.; Laor, A.; Mushotzky, R. F.; Nandra, K.; Takahashi, T.

    2000-03-01

    We present the results of 27 ASCA observations of 26 radio-quiet quasars (RQQs) from the Palomar-Green (PG) survey. The sample is not statistically complete, but it is reasonably representative of RQQs in the PG survey. For many of the sources, the ASCA data are presented here for the first time. All the RQQs were detected except for two objects, both of which contain broad absorption lines in the optical band. We find the variability characteristics of the sources to be consistent with Seyfert 1 galaxies. A power law offers an acceptable description of the time-averaged spectra in the 2-10 keV (quasar frame) band for all but one data set. The best-fitting values of the photon index vary from object to object over a range starting near 1.5, with a mean Γ2-10 ≈ 2 and dispersion σ(Γ2-10) ≈ 0.25. The distribution of Γ2-10 is therefore similar to that observed in other radio-quiet active galactic nuclei (AGNs) and seems to be unrelated to X-ray luminosity. No single model adequately describes the full 0.6-10 keV (observed frame) continuum of all the RQQs. Approximately 50% of the sources can be adequately described by a single power law or by a power law with only very subtle deviations. All but one of the remaining data sets were found to have convex spectra (flattening as one moves to higher energies). The exception is PG 1411+442, in which a substantial column density (NH,z ≈ 2×10²³ cm⁻²) obscures ~98% of the continuum. We find only five (maybe six) of 14 objects with z ≲ 0.25 to have "soft excesses" at energies ≲ 1 keV, but we find no universal shape for these spectral components. The spectrum of PG 1244+026 contains a rather narrow emission feature centered at an energy ~1 keV (quasar frame). The detection rate of absorption due to ionized material in these RQQs is lower than that seen in Seyfert 1 galaxies. In part, this may be due to selection effects. However, when detected, the absorbers in the RQQs exhibit a similar range of column density and ionization parameter as Seyfert 1 galaxies. We find …

  16. A concurrent optimization model for supplier selection with fuzzy quality loss

    International Nuclear Information System (INIS)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-01-01

    The purpose of this research is to develop a concurrent supplier selection model to minimize the purchasing cost and fuzzy quality loss, considering process capability and assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss in the model to concurrently solve the decision making in the detailed design stage and the manufacturing stage. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerance of the components. The model balances the purchasing cost and fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from suppliers. Fuzzy quality loss is integrated in the supplier selection model to allow for the vagueness in the final assembly by grouping the assemblies into several grades according to the resulting assembly tolerance.

  17. A concurrent optimization model for supplier selection with fuzzy quality loss

    Energy Technology Data Exchange (ETDEWEB)

    Rosyidi, C.; Murtisari, R.; Jauhari, W.

    2017-07-01

    The purpose of this research is to develop a concurrent supplier selection model to minimize the purchasing cost and fuzzy quality loss, considering process capability and assembled product specification. Design/methodology/approach: This research integrates fuzzy quality loss in the model to concurrently solve the decision making in the detailed design stage and the manufacturing stage. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerance of the components. The model balances the purchasing cost and fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from suppliers. Fuzzy quality loss is integrated in the supplier selection model to allow for the vagueness in the final assembly by grouping the assemblies into several grades according to the resulting assembly tolerance.

  18. Statistical approach for selection of regression model during validation of bioanalytical method

    Directory of Open Access Journals (Sweden)

    Natalija Nakov

    2014-06-01

    Full Text Available The selection of an adequate regression model is the basis for obtaining accurate and reproducible results during bioanalytical method validation. Given the wide concentration range frequently present in bioanalytical assays, heteroscedasticity of the data may be expected. Several weighted linear and quadratic regression models were evaluated during the selection of the adequate curve fit using nonparametric statistical tests: the one-sample rank test and the Wilcoxon signed rank test for two independent groups of samples. The results obtained with the one-sample rank test could not give statistical justification for the selection of linear vs. quadratic regression models because only slight differences between the errors (presented through the relative residuals) were obtained. Estimation of the significance of the differences in the relative residuals was achieved using the Wilcoxon signed rank test, where the linear and quadratic regression models were treated as two independent groups. The application of this simple nonparametric statistical test provides statistical confirmation of the choice of an adequate regression model.
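
    A sketch of the workflow described above, on assumed calibration data: fit weighted linear and quadratic models, form relative residuals, and compare the two sets with the Wilcoxon signed rank test.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
x = np.array([0.1, 0.5, 1, 2, 5, 10, 50, 100, 200, 500], dtype=float)
y = 2.0 * x + 0.5 + rng.normal(0, 0.05 * x)      # heteroscedastic response

def weighted_fit(degree, w):
    # np.polyfit applies its weights to the residuals, so pass the square
    # root of the desired weighting to minimize sum(w_i * resid_i**2).
    coef = np.polyfit(x, y, degree, w=np.sqrt(w))
    return np.polyval(coef, x)

w = 1.0 / x**2                                   # a common bioanalytical choice
rr_lin = np.abs(y - weighted_fit(1, w)) / y      # relative residuals, linear
rr_quad = np.abs(y - weighted_fit(2, w)) / y     # relative residuals, quadratic
print(wilcoxon(rr_lin, rr_quad))
```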

  19. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Full Text Available Sustainable development of science and technology is also conditioned by the continuous development of the means of production, which play a key role in the structure of every production system. The mechanical nature of the means of production is complemented by control and electronic devices in the context of intelligent industry. The selection of production machines for a technological process or project has so far been resolved in practice often only intuitively. As intelligence increases, the number of variable parameters that have to be considered when choosing a production device also grows. It is therefore necessary to use computing techniques and decision-making methods, ranging from heuristic methods to more precise methodological procedures, during the selection. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  20. Results from the nuclear microprobe PIXE analysis of selected rare earth fluor compounds

    International Nuclear Information System (INIS)

    Hollerman, William A.; Gates, Earl; Boudreaux, Philip; Glass, Gary A.

    2002-01-01

    Most previous research measures fluorescence properties over the macroscopic regime. Properties of individual microscopic grains could be significantly different from those measured over the macroscopic scale. Until recently, it was difficult to measure properties of individual fluor grains. Existing characterization techniques like scanning electron microscopy are not practical, since the resulting fluorescence masks the electron surface profile. Starting in September 2000, a research program was initiated at the Acadiana Research Laboratory to determine microscopic fluorescence properties for selected inorganic rare earth compounds. The initial phase of this program utilized microscopic proton induced X-ray emission (μPIXE) to characterize the elemental composition of individual fluor grains. Results show that both individual grains and small clusters of grains could be seen using μPIXE. Maps of this type can be used to estimate grain dimensions for the selected rare earth fluor. This technique is a new and innovative method to characterize a fluor material.

  1. Effect of material selection and background impurity on interface property and resulted CIP-GMR performance

    International Nuclear Information System (INIS)

    Peng Xilin; Morrone, Augusto; Nikolaev, Konstantin; Kief, Mark; Ostrowski, Mark

    2009-01-01

    In this paper, we investigated the effect of background base pressure, wafer-transferring time between process modules, and stack layer material selection on the current-in-plane giant magneto-resistive (CIP-GMR) interface properties and the resulting CIP-GMR performance. Experimental results showed that the seed layer/AFM interface, the AFM/pinned layer (PL) interface, the pinned layer/Ru interface, and the reference layer (RL)/Cu spacer interface are among the most critical ones for a CIP-GMR device. By reducing the background impurity level (water moisture and oxygen), optimizing the wafer process flow sequence, and careful stack-layer material selection, such critical interfaces in a CIP-GMR device can be preserved. Consequently, much more robust control of GMR performance can be achieved.

  2. Decision support model for selecting and evaluating suppliers in the construction industry

    Directory of Open Access Journals (Sweden)

    Fernando Schramm

    2012-12-01

    Full Text Available A structured evaluation of the construction industry's suppliers, considering aspects which make their quality and credibility evident, can be a strategic tool to manage this specific supply chain. This study proposes a multi-criteria decision model for supplier selection in the construction industry, as well as an efficient evaluation procedure for the selected suppliers. The model is based on the SMARTER (Simple Multi-Attribute Rating Technique Exploiting Ranks) method, and its main contribution is a new approach to structure the process of supplier selection, establishing explicit strategic policies on which the company's management system relies to select suppliers. This model was applied to a civil construction company in Brazil, and the main results demonstrate the efficiency of the proposed model. This study allowed the development of an approach for the construction industry which was able to provide a better relationship among its managers, suppliers and partners.

  3. Selection of Models for Ingestion Pathway and Relocation Radii Determination

    International Nuclear Information System (INIS)

    Blanchard, A.

    1998-01-01

    The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities

  4. Selection of Models for Ingestion Pathway and Relocation

    International Nuclear Information System (INIS)

    Blanchard, A.; Thompson, J.M.

    1998-01-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways

  5. Selection of Models for Ingestion Pathway and Relocation

    International Nuclear Information System (INIS)

    Blanchard, A.; Thompson, J.M.

    1999-01-01

    The area in which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models are considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities. The most recent Food and Drug Administration Derived Intervention Levels (August 1998) are adopted as evaluation guidelines for ingestion pathways

  6. Predicting artificially drained areas by means of selective model ensemble

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Beucher, Amélie; Iversen, Bo Vangsø

    Drainage has been carried out since the mid-19th century, and it has been estimated that half of the cultivated area is artificially drained (Olesen, 2009). A number of machine learning approaches can be used to predict artificially drained areas in geographic space. However, instead of choosing the most accurate model, the study aims firstly to train a large number of models to predict the extent of artificially drained areas using various machine learning approaches; secondly, the study will develop a method for selecting the models which give a good prediction of artificially drained areas when used in conjunction. The approaches employed include decision trees, discriminant analysis, regression models, neural networks and support vector machines, amongst others. Several models are trained with each method, using variously the original soil covariates and principal components of the covariates, yielding a large ensemble…

  7. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
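
    In its simplest form, the coupling step described above amounts to combining per-model evidence for a covariate. The sketch below applies Fisher's method to per-model score-test p-values; note that the paper combines score statistics directly, so this is an approximation of the idea, and the p-values are hypothetical.

```python
from scipy.stats import combine_pvalues

# Hypothetical score-test p-values for one covariate, one per Cox model
# (e.g., one per competing-risk cause or per landmark time).
pvals = [0.04, 0.20, 0.11]

stat, p = combine_pvalues(pvals, method="fisher")
print(f"Fisher chi-square = {stat:.2f}, combined p = {p:.4f}")
```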

  8. Detecting consistent patterns of directional adaptation using differential selection codon models.

    Science.gov (United States)

    Parto, Sahar; Lartillot, Nicolas

    2017-06-23

    Phylogenetic codon models are often used to characterize the selective regimes acting on protein-coding sequences. Recent methodological developments have led to models explicitly accounting for the interplay between mutation and selection, by modeling the amino acid fitness landscape along the sequence. However, thus far, most of these models have assumed that the fitness landscape is constant over time. Fluctuations of the fitness landscape may often be random or depend on complex and unknown factors. However, some organisms may be subject to systematic changes in selective pressure, resulting in reproducible molecular adaptations across independent lineages subject to similar conditions. Here, we introduce a codon-based differential selection model, which aims to detect and quantify the fine-grained consistent patterns of adaptation at the protein-coding level, as a function of external conditions experienced by the organism under investigation. The model parameterizes the global mutational pressure, as well as the site- and condition-specific amino acid selective preferences. This phylogenetic model is implemented in a Bayesian MCMC framework. After validation with simulations, we applied our method to a dataset of HIV sequences from patients with known HLA genetic background. Our differential selection model detects and characterizes differentially selected coding positions specifically associated with two different HLA alleles. Our differential selection model is able to identify consistent molecular adaptations as a function of repeated changes in the environment of the organism. These models can be applied to many other problems, ranging from viral adaptation to evolution of life-history strategies in plants or animals.

  9. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    Science.gov (United States)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-06-01

    A robust supplier selection problem is proposed in a scenario-based approach, where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions of robust optimization theory. We discuss the decision variables, respectively, with a two-stage stochastic planning model, a robust stochastic optimization planning model which integrates the worst-case scenario in the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should account for them in advance in our planning. In our case study, different suppliers were selected due to these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider such uncertainties in the planning approach.

  10. Selected Results on the CKM Angle $\gamma $ Measurement at the LHCb

    CERN Document Server

    Krupa, Wojciech

    2017-01-01

    LHCb is a single-arm forward spectrometer designed to study heavy-flavour physics at the LHC. Its very precise tracking and excellent particle identification currently play a major role in providing the world's best measurements of the Unitarity Triangle parameters. In this paper, selected results of the Cabibbo–Kobayashi–Maskawa (CKM) angle $\gamma$ measurements obtained at the LHCb, with special attention to the $B \rightarrow DK$ family of decays, are presented.

  11. Stochastic isotropic hyperelastic materials: constitutive calibration and model selection

    Science.gov (United States)

    Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain

    2018-03-01

    Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.

  12. Modeling selective pressures on phytoplankton in the global ocean.

    Directory of Open Access Journals (Sweden)

    Jason G Bragg

    Full Text Available Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying

  13. Modeling selective pressures on phytoplankton in the global ocean.

    Science.gov (United States)

    Bragg, Jason G; Dutkiewicz, Stephanie; Jahn, Oliver; Follows, Michael J; Chisholm, Sallie W

    2010-03-10

    Our view of marine microbes is transforming, as culture-independent methods facilitate rapid characterization of microbial diversity. It is difficult to assimilate this information into our understanding of marine microbe ecology and evolution, because their distributions, traits, and genomes are shaped by forces that are complex and dynamic. Here we incorporate diverse forces--physical, biogeochemical, ecological, and mutational--into a global ocean model to study selective pressures on a simple trait in a widely distributed lineage of picophytoplankton: the nitrogen use abilities of Synechococcus and Prochlorococcus cyanobacteria. Some Prochlorococcus ecotypes have lost the ability to use nitrate, whereas their close relatives, marine Synechococcus, typically retain it. We impose mutations for the loss of nitrogen use abilities in modeled picophytoplankton, and ask: in which parts of the ocean are mutants most disadvantaged by losing the ability to use nitrate, and in which parts are they least disadvantaged? Our model predicts that this selective disadvantage is smallest for picophytoplankton that live in tropical regions where Prochlorococcus are abundant in the real ocean. Conversely, the selective disadvantage of losing the ability to use nitrate is larger for modeled picophytoplankton that live at higher latitudes, where Synechococcus are abundant. In regions where we expect Prochlorococcus and Synechococcus populations to cycle seasonally in the real ocean, we find that model ecotypes with seasonal population dynamics similar to Prochlorococcus are less disadvantaged by losing the ability to use nitrate than model ecotypes with seasonal population dynamics similar to Synechococcus. The model predictions for the selective advantage associated with nitrate use are broadly consistent with the distribution of this ability among marine picocyanobacteria, and at finer scales, can provide insights into interactions between temporally varying ocean processes and

  14. A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems

    Energy Technology Data Exchange (ETDEWEB)

    Farrell, Kathryn, E-mail: kfarrell@ices.utexas.edu; Oden, J. Tinsley, E-mail: oden@ices.utexas.edu; Faghihi, Danial, E-mail: danial@ices.utexas.edu

    2015-08-15

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.

  15. A Bayesian framework for adaptive selection, calibration, and validation of coarse-grained models of atomistic systems

    Science.gov (United States)

    Farrell, Kathryn; Oden, J. Tinsley; Faghihi, Danial

    2015-08-01

    A general adaptive modeling algorithm for selection and validation of coarse-grained models of atomistic systems is presented. A Bayesian framework is developed to address uncertainties in parameters, data, and model selection. Algorithms for computing output sensitivities to parameter variances, model evidence and posterior model plausibilities for given data, and for computing what are referred to as Occam Categories in reference to a rough measure of model simplicity, make up components of the overall approach. Computational results are provided for representative applications.

  16. Modeling selective attention using a neuromorphic analog VLSI device.

    Science.gov (United States)

    Indiveri, G

    2000-12-01

    Attentional mechanisms are required to overcome the problem of flooding a limited processing capacity system with information. They are present in biological sensory systems and can be a useful engineering tool for artificial visual systems. In this article we present a hardware model of a selective attention mechanism implemented on a very large-scale integration (VLSI) chip, using analog neuromorphic circuits. The chip exploits a spike-based representation to receive, process, and transmit signals. It can be used as a transceiver module for building multichip neuromorphic vision systems. We describe the circuits that carry out the main processing stages of the selective attention mechanism and provide experimental data for each circuit. We demonstrate the expected behavior of the model at the system level by stimulating the chip with both artificially generated control signals and signals obtained from a saliency map, computed from an image containing several salient features.

  17. A Model of Social Selection and Successful Altruism

    Science.gov (United States)

    1989-10-07

    …D., The evolution of social behavior. Annual Review of Ecology and Systematics, 5:325–383 (1974). 2. Dawkins, R., The Selfish Gene. Oxford: Oxford… …alive and well. It will be important to re-examine this striking historical experience, not in terms of oversimplified models of the "selfish gene", but… Darwinian Analysis: The acceptance by many modern geneticists of the axiom that the basic unit of selection is the "selfish gene" quickly led to the…

  18. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    Science.gov (United States)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multiple stepwise regressions is proposed for the General Expression of the Nonlinear Autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to study the improvements that both the newly introduced and the originally existing variables bring to the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of the data-fitting effect or significance testing. Simulation results and experiments on classic time-series data show that the proposed method is simple, reliable and applicable to practical engineering.

  19. Selected topics in photochemistry of nucleic acids. Recent results and perspectives

    International Nuclear Information System (INIS)

    Loeber, G.; Kittler, L.

    1977-01-01

Recent results on the following photoreactions of nucleic acids are reported: photochemistry of aza-bases and minor bases, formation of photoproducts of the non-cyclobutane type, formation of furocoumarin-pyrimidine photoadducts, fluorescence of dye-nucleic acid complexes and their role in chromosomal fluorescence staining, and mechanisms of the photochemical reaction. Results are discussed with respect to: (i) photobiological relevance of light-induced defects in nucleic acids; (ii) possibilities of achieving higher selectivity of light-induced defects in nucleic acids; (iii) the use of nucleic acid photochemistry to analyze genetic material. An extensive bibliography is included. (author)

  20. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for global-warming research and is needed for the sustainable use of land resources. Digital terrain attributes are often used...... was selected; in total, 2,514,820 data mining models were constructed from 71 different grid sizes, from 12 m to 2304 m, and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute...

  1. Modeling the effect of selection history on pop-out visual search.

    Directory of Open Access Journals (Sweden)

    Yuan-Chi Tseng

While attentional effects in visual selection tasks have traditionally been assigned "top-down" or "bottom-up" origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task.
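
    For readers unfamiliar with the modelling framework, a toy simulation of a two-boundary diffusion process is sketched below; the starting-point bias stands in for the selection-history effect the study identifies, and all parameter values are illustrative rather than fitted.

        import numpy as np

        def simulate_ddm(drift, bias=0.5, bound=1.0, noise=1.0, dt=1e-3,
                         t_max=3.0, rng=None):
            """Euler simulation of a two-boundary diffusion (Ratcliff-style).
            `bias` is the relative starting point: 0.5 is unbiased; values
            above 0.5 favour the upper ('repeat previous target') boundary."""
            rng = rng or np.random.default_rng()
            x = bound * (2.0 * bias - 1.0)   # map bias in [0,1] to [-bound, bound]
            t = 0.0
            while abs(x) < bound and t < t_max:
                x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return ("upper" if x >= bound else "lower"), t

        rng = np.random.default_rng(1)
        choices = [simulate_ddm(drift=0.8, bias=0.6, rng=rng)[0] for _ in range(500)]
        print("P(upper) with history bias:", choices.count("upper") / 500)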

  2. Bayesian model selection validates a biokinetic model for zirconium processing in humans

    Science.gov (United States)

    2012-01-01

Background: In radiation protection, biokinetic models for zirconium processing are of crucial importance in dose estimation and further risk analysis for humans exposed to this radioactive substance. They provide limiting values of detrimental effects and build the basis for applications in internal dosimetry, prediction of radioactive zirconium retention in various organs, and retrospective dosimetry. Multi-compartmental models are the tool of choice for simulating the processing of zirconium. Although easily interpretable, determining the exact compartment structure and interaction mechanisms is generally daunting. In the context of observing the dynamics of multiple compartments, Bayesian methods provide efficient tools for model inference and selection. Results: We are the first to apply a Markov chain Monte Carlo approach to compute Bayes factors for the evaluation of two competing models for zirconium processing in the human body after ingestion. Based on in vivo measurements of human plasma and urine levels, we were able to show that a recently published model is superior to the standard model of the International Commission on Radiological Protection. The Bayes factors were estimated by means of numerically stable thermodynamic integration in combination with a recently developed copula-based Metropolis-Hastings sampler. Conclusions: In contrast to the standard model, the novel model predicts lower accretion of zirconium in bones. This results in lower levels of noxious doses for exposed individuals. Moreover, the Bayesian approach allows for retrospective dose assessment, including credible intervals for the initially ingested zirconium, in a significantly more reliable fashion than previously possible. All methods presented here are readily applicable to many modeling tasks in systems biology. PMID:22863152
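
    A toy sketch of thermodynamic integration for estimating a single model's log-evidence is given below; a conjugate normal model stands in for the biokinetic compartment models, and the copula-based sampler of the paper is replaced by a plain Metropolis step. Repeating the calculation for each competing model and differencing the log-evidences gives the log Bayes factor.

        import numpy as np

        rng = np.random.default_rng(0)
        y = rng.normal(0.7, 1.0, size=50)   # toy data standing in for plasma/urine levels

        def log_like(theta):
            return -0.5 * np.sum((y - theta) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

        def log_prior(theta):
            return -0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi)

        def sample_power_posterior(beta, n_iter=4000, step=0.3):
            """Metropolis draws from p_beta ~ L^beta * prior; returns log-likelihoods."""
            theta, draws = 0.0, []
            lp = beta * log_like(theta) + log_prior(theta)
            for _ in range(n_iter):
                prop = theta + step * rng.standard_normal()
                lp_prop = beta * log_like(prop) + log_prior(prop)
                if np.log(rng.random()) < lp_prop - lp:
                    theta, lp = prop, lp_prop
                draws.append(log_like(theta))
            return np.array(draws[n_iter // 2:])   # discard burn-in

        betas = np.linspace(0.0, 1.0, 11) ** 3      # ladder crowded near beta = 0
        means = [sample_power_posterior(b).mean() for b in betas]
        log_evidence = np.trapz(means, betas)       # thermodynamic integration
        print("estimated log evidence:", log_evidence)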

  3. When Field Experiments Yield Unexpected Results: Lessons Learned from Measuring Selection in White Sands Lizards

    Science.gov (United States)

    Hardwick, Kayla M.; Harmon, Luke J.; Hardwick, Scott D.; Rosenblum, Erica Bree

    2015-01-01

Determining the adaptive significance of phenotypic traits is key for understanding evolution and diversification in natural populations. However, evolutionary biologists have an incomplete understanding of how specific traits affect fitness in most populations. The White Sands system provides an opportunity to study the adaptive significance of traits in an experimental context. Blanched color evolved recently in three species of lizards inhabiting the gypsum dunes of White Sands and is likely an adaptation to avoid predation. To determine whether there is a relationship between color and susceptibility to predation in White Sands lizards, we conducted enclosure experiments, quantifying survivorship of Holbrookia maculata exhibiting substrate-matched and substrate-mismatched phenotypes. Lizards in our study experienced strong predation. Color did not have a significant effect on survival, but we found several unexpected relationships including variation in predation over small spatial and temporal scales. In addition, we detected a marginally significant interaction between sex and color, suggesting selection for substrate matching may be stronger for males than females. We use our results as a case study to examine six major challenges frequently encountered in field-based studies of natural selection, and suggest that insight into the complexities of selection often results when experiments turn out differently than expected. PMID:25714838

  4. The Living Dead: Transformative Experiences in Modelling Natural Selection

    Science.gov (United States)

    Petersen, Morten Rask

    2017-01-01

    This study considers how students change their coherent conceptual understanding of natural selection through a hands-on simulation. The results show that most students change their understanding. In addition, some students also underwent a transformative experience and used their new knowledge in a leisure time activity. These transformative…

  5. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    the attributes in the database into small, usually two-dimensional distributions. We describe several optimizations that can make selectivity estimation highly efficient, and we present a complete implementation inside PostgreSQL’s query optimizer. Experimental results indicate an order of magnitude better...

  6. results

    Directory of Open Access Journals (Sweden)

    Salabura Piotr

    2017-01-01

The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose the properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.

  7. Procedure for the Selection and Validation of a Calibration Model I-Description and Application.

    Science.gov (United States)

    Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D

    2017-05-01

Calibration model selection is required for all quantitative methods in toxicology and more broadly in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. Mis-selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than that of a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x^2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone.
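
    The decision sequence can be sketched in a few lines. The snippet below uses simulated replicate data and covers only the F-test for weighting (assuming 1/x^2 when weighting is needed), the partial F-test for model order, and a Kolmogorov-Smirnov check of the standardized residuals, so it is an illustration rather than the authors' R script.

        import numpy as np
        from scipy import stats

        # Hypothetical replicate calibration data: concentrations x, responses y.
        x = np.repeat([1, 5, 10, 50, 100, 500], 5).astype(float)
        rng = np.random.default_rng(2)
        y = 2.0 * x + 0.5 + rng.normal(0, 0.02 * x + 0.05)  # heteroscedastic responses

        # 1) F-test for weighting: compare replicate variances at LLOQ and ULOQ.
        v_low = y[x == x.min()].var(ddof=1)
        v_high = y[x == x.max()].var(ddof=1)
        needs_weighting = stats.f.sf(v_high / v_low, 4, 4) < 0.05  # 5 reps -> 4 df

        # 2) Partial F-test for model order: quadratic vs linear weighted fits.
        w = 1.0 / x ** 2 if needs_weighting else np.ones_like(x)
        def wrss(deg):
            coeffs = np.polyfit(x, y, deg, w=np.sqrt(w))
            return np.sum(w * (y - np.polyval(coeffs, x)) ** 2), coeffs
        rss_lin, c_lin = wrss(1)
        rss_quad, _ = wrss(2)
        F_order = (rss_lin - rss_quad) / (rss_quad / (len(x) - 3))
        quadratic = stats.f.sf(F_order, 1, len(x) - 3) < 0.05

        # 3) Validate: normality of the standardized weighted residuals.
        resid = np.sqrt(w) * (y - np.polyval(c_lin, x))
        resid = (resid - resid.mean()) / resid.std(ddof=1)
        print(needs_weighting, quadratic, stats.kstest(resid, "norm"))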

  8. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages to the similar methodology for the proportional hazards model. One complication compared...... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number...... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...

  9. Optimal foraging in marine ecosystem models: selectivity, profitability and switching

    DEFF Research Database (Denmark)

    Visser, Andre W.; Fiksen, Ø.

    2013-01-01

    ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting...... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1...... by letting predators maximize energy intake or more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions...

  10. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
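
    As a compact illustration of two of the five methods (the information criteria and the likelihood ratio test) on a nested pair of models; the data and candidate models are invented for the example.

        import numpy as np
        from scipy import stats

        def gaussian_loglik(y, y_hat):
            """Maximized log-likelihood of a Gaussian model with fitted mean y_hat."""
            n = len(y)
            s2 = np.mean((y - y_hat) ** 2)      # ML estimate of the error variance
            return -0.5 * n * (np.log(2 * np.pi * s2) + 1.0)

        rng = np.random.default_rng(3)
        x = np.linspace(0, 1, 80)
        y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 80)  # generated by the linear model

        fits = {}
        for name, deg in [("linear", 1), ("cubic", 3)]:
            y_hat = np.polyval(np.polyfit(x, y, deg), x)
            k = deg + 2                          # polynomial coefficients + variance
            ll = gaussian_loglik(y, y_hat)
            fits[name] = (ll, k)
            print(f"{name}: AIC={2*k - 2*ll:.1f}  BIC={k*np.log(len(y)) - 2*ll:.1f}")

        # Likelihood-ratio test for the nested pair (linear within cubic).
        lr = 2 * (fits["cubic"][0] - fits["linear"][0])
        print("LRT p-value:", stats.chi2.sf(lr, df=fits["cubic"][1] - fits["linear"][1]))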

  11. The uncertainty analysis of model results a practical guide

    CERN Document Server

    Hofer, Eduard

    2018-01-01

    This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.
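
    A minimal sketch of the kind of Monte Carlo uncertainty propagation and sensitivity screening such an analysis involves, using an invented one-line model with two uncertain inputs; the distributions and parameter values are illustrative only.

        import numpy as np
        from scipy.stats import spearmanr

        # The "model" is an invented one-liner: exponential decay with an
        # uncertain rate constant k and an uncertain initial value c0.
        rng = np.random.default_rng(8)
        n = 10_000
        k = rng.lognormal(mean=np.log(0.5), sigma=0.2, size=n)
        c0 = rng.normal(100.0, 5.0, size=n)
        result = c0 * np.exp(-k * 2.0)           # model output at t = 2

        lo, hi = np.percentile(result, [5, 95])
        print(f"mean = {result.mean():.1f}, 90% interval = [{lo:.1f}, {hi:.1f}]")
        # Rank correlations give a first indication of which input drives
        # the output uncertainty most.
        print("Spearman(k, result) =", round(spearmanr(k, result)[0], 2))
        print("Spearman(c0, result) =", round(spearmanr(c0, result)[0], 2))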

  12. Dual-task results and the lateralization of spatial orientation: artifact of test selection?

    Science.gov (United States)

    Bowers, C A; Milham, L M; Price, C

    1998-01-01

    An investigation was conducted to identify the degree to which results regarding the lateralization of spatial orientation among men and women are artifacts of test selection. A dual-task design was used to study possible lateralization differences, providing baseline and dual-task measures of spatial-orientation performance, right- and left-hand tapping, and vocalization of "cat, dog, horse." The Guilford-Zimmerman Test (Guilford & Zimmerman, 1953), the Eliot-Price Test (Eliot & Price, 1976), and the Stumpf-Fay Cube Perspectives Test (Stumpf & Fay, 1983) were the three spatial-orientation tests used to investigate possible artifacts of test selection. Twenty-eight right-handed male and 39 right-handed female undergraduates completed random baseline and dual-task sessions. Analyses indicated no significant sex-related differences in spatial-orientation ability for all three tests. Furthermore, there was no evidence of differential lateralization of spatial orientation between the sexes.

  13. Selective Media for Actinide Collection and Pre-Concentration: Results of FY 2006 Studies

    Energy Technology Data Exchange (ETDEWEB)

    Lumetta, Gregg J.; Addleman, Raymond S.; Hay, Benjamin P.; Hubler, Timothy L.; Levitskaia, Tatiana G.; Sinkov, Sergey I.; Snow, Lanee A.; Warner, Marvin G.; Latesky, Stanley L.

    2006-11-17

3] > 0.3 M. Preliminary results suggest that the Kläui resins can separate Pu(IV) from sample solutions containing high concentrations of competing ions. Conceptual protocols for recovery of the Pu from the resin for subsequent analysis have been proposed, but further work is needed to perfect these techniques. Work on this subject will be continued in FY 2007. Automated laboratory equipment (in conjunction with Task 3 of the NA-22 Automation Project) will be used in FY 2007 to improve the efficiency of these experiments. The sorption of actinide ions on self-assembled monolayers on mesoporous supports materials containing diphosphonate groups was also investigated. These materials showed a very high affinity for tetravalent actinides, and they also sorbed U(VI) fairly strongly. Computational Ligand Design: An extended MM3 molecular mechanics model was developed for calculating the structures of Kläui ligand complexes. This laid the groundwork necessary to perform the computer-aided design of bis-Kläui architectures tailored for Pu(IV) complexation. Calculated structures of the Kläui ligand complexes [Pu(Kläui)2(OH2)2]2+ and [Fe(Kläui)2]+ indicate a "bent" sandwich arrangement of the Kläui ligands in the Pu(IV) complex, whereas the Fe(III) complex prefers a "linear" octahedral arrangement of the two Kläui ligands. This offers the possibility that two Kläui ligands can be tethered together to form a material with very high binding affinity for Pu(IV) over Fe(III). The next step in the design process is to use de novo molecule-building software (HostDesigner) to identify potential candidate architectures.

  14. Selective Media for Actinide Collection and Pre-Concentration: Results of FY 2006 Studies

    International Nuclear Information System (INIS)

    Lumetta, Gregg J.; Addleman, Raymond S.; Hay, Benjamin P.; Hubler, Timothy L.; Levitskaia, Tatiana G.; Sinkov, Sergey I.; Snow, Lanee A.; Warner, Marvin G.; Latesky, Stanley L.

    2006-01-01

] > 0.3 M. Preliminary results suggest that the Kläui resins can separate Pu(IV) from sample solutions containing high concentrations of competing ions. Conceptual protocols for recovery of the Pu from the resin for subsequent analysis have been proposed, but further work is needed to perfect these techniques. Work on this subject will be continued in FY 2007. Automated laboratory equipment (in conjunction with Task 3 of the NA-22 Automation Project) will be used in FY 2007 to improve the efficiency of these experiments. The sorption of actinide ions on self-assembled monolayers on mesoporous supports materials containing diphosphonate groups was also investigated. These materials showed a very high affinity for tetravalent actinides, and they also sorbed U(VI) fairly strongly. Computational Ligand Design: An extended MM3 molecular mechanics model was developed for calculating the structures of Kläui ligand complexes. This laid the groundwork necessary to perform the computer-aided design of bis-Kläui architectures tailored for Pu(IV) complexation. Calculated structures of the Kläui ligand complexes [Pu(Kläui)2(OH2)2]2+ and [Fe(Kläui)2]+ indicate a "bent" sandwich arrangement of the Kläui ligands in the Pu(IV) complex, whereas the Fe(III) complex prefers a "linear" octahedral arrangement of the two Kläui ligands. This offers the possibility that two Kläui ligands can be tethered together to form a material with very high binding affinity for Pu(IV) over Fe(III). The next step in the design process is to use de novo molecule-building software (HostDesigner) to identify potential candidate architectures.

  15. A comparison of two methods for prediction of response and rates of inbreeding in selected populations with the results obtained in two selection experiments

    NARCIS (Netherlands)

    Loywyck, V.; Bijma, P.; Pinard-van der Laan, M.H.; Arendonk, van J.A.M.; Verrier, E.

    2005-01-01

    Selection programmes are mainly concerned with increasing genetic gain. However, short-term progress should not be obtained at the expense of the within-population genetic variability. Different prediction models for the evolution within a small population of the genetic mean of a selected trait,

  16. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates land use choices and species responses to bioclimatic variables simultaneously. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications to forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with those of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate-change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs, with contrasting patterns according to species and spatial resolution. The magnitude and direction of these differences depended on the correlations between the errors of the two equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents

  17. V and V Efforts of Auroral Precipitation Models: Preliminary Results

    Science.gov (United States)

    Zheng, Yihua; Kuznetsova, Masha; Rastaetter, Lutz; Hesse, Michael

    2011-01-01

Auroral precipitation models have been valuable both for space weather applications and for space science research, yet very limited testing has been performed regarding model performance. A variety of auroral models are available, including empirical models that are parameterized by geomagnetic indices or upstream solar wind conditions, nowcasting models that are based on satellite observations, and those derived from physics-based, coupled global models. In this presentation, we will show our preliminary results regarding V&V efforts for some of these models.

  18. STUDY CONCERNING THE ELABORATION OF CERTAIN ORIENTATION MODELS AND THE INITIAL SELECTION FOR SPEED SKATING

    Directory of Open Access Journals (Sweden)

    Vaida Marius

    2009-12-01

In carrying out this study I started from the premise that elaborating orientation and initial-selection models for speed skating, and applying them, would yield the superior results demanded by the current evolution of high-performance sport in general and of speed skating in particular. The aim of the study was to identify an orientation model and a complete initial-selection model based on the aptitudes favourable to speed skating. Orientation and initial-selection models were then elaborated and their viability tested experimentally; the study started from data on 120 subjects, and the full experiment was carried out on 32 subjects divided into two groups, one selected using the proposed model and the other composed of randomly selected subjects. These models can serve as common working instruments for both the orientation process and initial selection, and can be integrated readily into practice: they are easily used both by the coaches in charge of athlete selection and by physical education teachers or school teachers in contact with children of an early age.

  19. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
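
    A toy 2D correlated random walk with heterogeneous speeds is sketched below, in the spirit of (but much simpler than) the models evaluated in the paper; the persistence parameterization and the gamma speed distribution are illustrative choices.

        import numpy as np

        def correlated_random_walk(n_steps, mean_speed, persistence, rng):
            """2D CRW sketch: turn angles are normal around zero, so the cell
            tends to keep its previous heading; persistence in [0, 1) controls
            directionality (0 recovers an uncorrelated walk)."""
            pos = np.zeros((n_steps + 1, 2))
            heading = rng.uniform(0, 2 * np.pi)
            turn_sd = (1.0 - persistence) * np.pi
            for t in range(n_steps):
                heading += rng.normal(0.0, turn_sd)
                speed = rng.gamma(shape=2.0, scale=mean_speed / 2.0)  # heterogeneous
                pos[t + 1] = pos[t] + speed * np.array([np.cos(heading),
                                                        np.sin(heading)])
            return pos

        rng = np.random.default_rng(4)
        track = correlated_random_walk(200, mean_speed=5.0, persistence=0.7, rng=rng)
        # Meandering index: net displacement over total path length.
        net = np.linalg.norm(track[-1] - track[0])
        total = np.sum(np.linalg.norm(np.diff(track, axis=0), axis=1))
        print("meandering index:", net / total)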

  20. Sequential Markov chain Monte Carlo filter with simultaneous model selection for electrocardiogram signal modeling.

    Science.gov (United States)

    Edla, Shwetha; Kovvali, Narayan; Papandreou-Suppappola, Antonia

    2012-01-01

    Constructing statistical models of electrocardiogram (ECG) signals, whose parameters can be used for automated disease classification, is of great importance in precluding manual annotation and providing prompt diagnosis of cardiac diseases. ECG signals consist of several segments with different morphologies (namely the P wave, QRS complex and the T wave) in a single heart beat, which can vary across individuals and diseases. Also, existing statistical ECG models exhibit a reliance upon obtaining a priori information from the ECG data by using preprocessing algorithms to initialize the filter parameters, or to define the user-specified model parameters. In this paper, we propose an ECG modeling technique using the sequential Markov chain Monte Carlo (SMCMC) filter that can perform simultaneous model selection, by adaptively choosing from different representations depending upon the nature of the data. Our results demonstrate the ability of the algorithm to track various types of ECG morphologies, including intermittently occurring ECG beats. In addition, we use the estimated model parameters as the feature set to classify between ECG signals with normal sinus rhythm and four different types of arrhythmia.

  1. Mental health courts and their selection processes: modeling variation for consistency.

    Science.gov (United States)

    Wolff, Nancy; Fabrikant, Nicole; Belenko, Steven

    2011-10-01

    Admission into mental health courts is based on a complicated and often variable decision-making process that involves multiple parties representing different expertise and interests. To the extent that eligibility criteria of mental health courts are more suggestive than deterministic, selection bias can be expected. Very little research has focused on the selection processes underpinning problem-solving courts even though such processes may dominate the performance of these interventions. This article describes a qualitative study designed to deconstruct the selection and admission processes of mental health courts. In this article, we describe a multi-stage, complex process for screening and admitting clients into mental health courts. The selection filtering model that is described has three eligibility screening stages: initial, assessment, and evaluation. The results of this study suggest that clients selected by mental health courts are shaped by the formal and informal selection criteria, as well as by the local treatment system.

  2. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    Science.gov (United States)

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  3. Wind scatterometry with improved ambiguity selection and rain modeling

    Science.gov (United States)

    Draper, David Willis

    Although generally accurate, the quality of SeaWinds on QuikSCAT scatterometer ocean vector winds is compromised by certain natural phenomena and retrieval algorithm limitations. This dissertation addresses three main contributors to scatterometer estimate error: poor ambiguity selection, estimate uncertainty at low wind speeds, and rain corruption. A quality assurance (QA) analysis performed on SeaWinds data suggests that about 5% of SeaWinds data contain ambiguity selection errors and that scatterometer estimation error is correlated with low wind speeds and rain events. Ambiguity selection errors are partly due to the "nudging" step (initialization from outside data). A sophisticated new non-nudging ambiguity selection approach produces generally more consistent wind than the nudging method in moderate wind conditions. The non-nudging method selects 93% of the same ambiguities as the nudged data, validating both techniques, and indicating that ambiguity selection can be accomplished without nudging. Variability at low wind speeds is analyzed using tower-mounted scatterometer data. According to theory, below a threshold wind speed, the wind fails to generate the surface roughness necessary for wind measurement. A simple analysis suggests the existence of the threshold in much of the tower-mounted scatterometer data. However, the backscatter does not "go to zero" beneath the threshold in an uncontrolled environment as theory suggests, but rather has a mean drop and higher variability below the threshold. Rain is the largest weather-related contributor to scatterometer error, affecting approximately 4% to 10% of SeaWinds data. A simple model formed via comparison of co-located TRMM PR and SeaWinds measurements characterizes the average effect of rain on SeaWinds backscatter. The model is generally accurate to within 3 dB over the tropics. The rain/wind backscatter model is used to simultaneously retrieve wind and rain from SeaWinds measurements. The simultaneous

  4. Selection of autochthonous Oenococcus oeni strains according to their oenological properties and vinification results.

    Science.gov (United States)

    Ruiz, Patricia; Izquierdo, Pedro Miguel; Seseña, Susana; Palop, María Llanos

    2010-02-28

The goal of this study was to characterize 84 Oenococcus oeni strains isolated from Tempranillo wine samples taken at cellars in Castilla-La Mancha, in order to select those showing the highest potential as oenological starter cultures. Various oenological properties were analyzed, and the ability of some of these strains to grow and carry out malolactic fermentation (MLF) in simulated laboratory microvinifications was tested. Twenty-two strains were selected on the basis of the fermentation assays, and the eight that produced the best results in the chemical analysis of the wines were chosen for further assays. None of the eight strains produced biogenic amines or displayed tannase or anthocyanase activities, and all presented activity against p-NP-beta-glucopyranoside, p-NP-alpha-glucopyranoside and p-NP-beta-xylopyranoside. Randomly Amplified Polymorphic DNA (RAPD)-PCR was used to determine the colonizing ability of the inoculated strains; strains C22L9 and D13L13 showed the highest implantation values. On the basis of this characterization, two strains were selected as suitable starter cultures for MLF of Tempranillo wine. Use of these strains will ensure that MLF proceeds successfully while retaining the organoleptic characteristics of wines made in Castilla-La Mancha.

  5. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration-time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  6. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Science.gov (United States)

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
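
    A condensed sketch of repeated nested cross-validation with scikit-learn, in the spirit of the algorithms described; ridge regression and its parameter grid are stand-ins, and the paper's QSAR setting and cloud-scale repetition counts are not reproduced.

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

        X, y = make_regression(n_samples=200, n_features=50, noise=10.0,
                               random_state=0)

        scores = []
        for repeat in range(5):                 # repeat the whole nested procedure
            inner = KFold(5, shuffle=True, random_state=repeat)
            outer = KFold(5, shuffle=True, random_state=100 + repeat)
            # Inner loop: grid-search parameter tuning; outer loop: assessment.
            search = GridSearchCV(Ridge(), {"alpha": np.logspace(-3, 3, 13)},
                                  cv=inner)
            scores.append(cross_val_score(search, X, y, cv=outer))

        scores = np.concatenate(scores)
        print(f"nested-CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")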

  7. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

Sensor data-based test selection optimization is the basis for designing the testing work; it ensures that the system is tested under the constraints of conventional indexes such as the fault detection rate (FDR) and the fault isolation rate (FIR). From the perspective of equipment maintenance support, ambiguity in fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that considers the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and the test group. The objective function of the proposed model minimizes the test cost under FDR and FIR constraints. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the model. The new test selection optimization model is more consistent with real, complicated engineering systems, and the experimental results verify the effectiveness of the proposed method.
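
    For illustration, a greedy baseline for dependency-matrix test selection is sketched below; it replaces the paper's chaotic discrete PSO with cost-weighted greedy coverage and handles only the fault-detection constraint, ignoring isolation and ambiguity groups. The matrix and costs are invented.

        import numpy as np

        # Hypothetical fault-test dependency matrix D: D[i, j] = 1 if test j
        # responds to fault i (rows: faults, columns: candidate tests).
        D = np.array([[1, 0, 1, 0, 0],
                      [1, 1, 0, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 1, 1],
                      [0, 0, 0, 0, 1]])
        cost = np.array([3.0, 2.0, 4.0, 1.0, 2.5])  # invented test costs

        def detected(subset):
            """A fault counts as detected if any selected test responds to it."""
            if not subset:
                return np.zeros(D.shape[0], dtype=bool)
            return D[:, sorted(subset)].any(axis=1)

        selected, target_fdr = set(), 1.0
        while detected(selected).mean() < target_fdr:
            base = detected(selected).sum()
            # Pick the test with the most newly detected faults per unit cost.
            gains = {j: (detected(selected | {j}).sum() - base) / cost[j]
                     for j in range(D.shape[1]) if j not in selected}
            best = max(gains, key=gains.get)
            if gains[best] == 0:        # no test adds coverage; stop
                break
            selected.add(best)

        print("selected tests:", sorted(selected),
              "FDR:", detected(selected).mean())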

  8. Results of radiation tests at cryogenic temperature on some selected organic materials for the LHC

    International Nuclear Information System (INIS)

    Tavlet, M.; Schoenbacher, H.

    1999-01-01

    In the near future, particle accelerators and detectors as well as fusion reactors will operate at cryogenic temperatures. At temperatures as low as 2 K, the organic materials used for the insulation of the superconducting magnets and cables will be exposed to high radiation levels. In this work, a representative selection of organic materials comprising insulating films, cable insulations and epoxy-type-impregnated resins were exposed to neutron and gamma radiation of nuclear reactors, both at ambient and cryogenic temperatures, and were subsequently mechanically tested. The results show that the radiation degradation is never worse in a cryogenic fluid than it is in usual ambient conditions. (author)

  9. [Location selection for Shenyang urban parks based on GIS and multi-objective location allocation model].

    Science.gov (United States)

    Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi

    2011-12-01

Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density level, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for the urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks in order to evaluate the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, air pollution was the most important factor, and, compared with any single objective factor, the weighted analysis of the multi-objective factors provided a better-optimized spatial location selection for new urban green spaces. The combination of GIS technology with an LA model offers a new approach to the spatial optimization of urban green spaces.
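
    A schematic weighted-overlay version of such a multi-objective site selection is sketched below with random rasters; the weights (air pollution dominant, as in the study's ranking) and the spacing rule are invented, and a real LA model would optimize demand allocation rather than simply rank cells.

        import numpy as np

        # Hypothetical 100x100 raster layers, each normalized to [0, 1].
        rng = np.random.default_rng(5)
        layers = {name: rng.random((100, 100)) for name in
                  ["population", "pollution", "heat_island", "land_use"]}
        weights = {"population": 0.2, "pollution": 0.4,
                   "heat_island": 0.25, "land_use": 0.15}

        score = sum(w * layers[k] for k, w in weights.items())

        def pick_sites(score, n_sites=5, min_dist=15):
            """Greedily pick top-scoring cells, enforcing a minimum spacing
            so that candidate park sites do not cluster in one neighbourhood."""
            sites = []
            order = np.dstack(np.unravel_index(
                np.argsort(score, axis=None)[::-1], score.shape))[0]
            for cell in order:
                if all(np.hypot(*(cell - s)) >= min_dist for s in sites):
                    sites.append(cell)
                if len(sites) == n_sites:
                    break
            return np.array(sites)

        print(pick_sites(score))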

  10. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

The article highlights the important task of project management in the reprofiling of buildings. In construction project management it is expedient to select effective engineering solutions that reduce both duration and cost. This article presents a methodology for selecting efficient organizational and technical solutions for building reprofiling projects. The method is based on compiling project variants in Microsoft Project and on experimental statistical analysis using the program COMPEX. Introducing this technique into the reprofiling of buildings allows efficient project models to be chosen subject to the given constraints. The technique can also be used for various other construction projects.

  11. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters; hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speeds. The reliability calculation is based on failure probability analysis. Many different types of wind turbines are commercially available, and to obtain optimum reliability in power generation it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from the reliability point of view.
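
    A sketch of the site-matching idea: estimate each turbine's expected output under a Weibull wind-speed model and compare candidates. The power-curve shape, the parameter values and the omission of the failure-probability component are simplifications, not the paper's method.

        import numpy as np

        def turbine_power(v, v_ci, v_r, v_co, p_r):
            """Simplified power curve: cubic ramp between cut-in and rated speed,
            rated power up to cut-out, zero elsewhere."""
            v = np.asarray(v, dtype=float)
            p = np.where((v >= v_ci) & (v < v_r),
                         p_r * (v ** 3 - v_ci ** 3) / (v_r ** 3 - v_ci ** 3), 0.0)
            return np.where((v >= v_r) & (v < v_co), p_r, p)

        def expected_power(k, c, v_ci, v_r, v_co, p_r, n=100_000, seed=0):
            """Monte Carlo mean output under Weibull(k, c) wind speeds."""
            v = c * np.random.default_rng(seed).weibull(k, n)
            return turbine_power(v, v_ci, v_r, v_co, p_r).mean()

        # Compare two hypothetical turbines at a site with k = 2.0, c = 7.5 m/s.
        for name, (v_ci, v_r, v_co, p_r) in {"A": (3.5, 12.0, 25.0, 2000.0),
                                             "B": (4.0, 14.0, 25.0, 2500.0)}.items():
            cf = expected_power(2.0, 7.5, v_ci, v_r, v_co, p_r) / p_r
            print(f"turbine {name}: capacity factor = {cf:.2f}")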

  12. Verification of aseismic design model by using experimental results

    International Nuclear Information System (INIS)

    Mizuno, N.; Sugiyama, N.; Suzuki, T.; Shibata, Y.; Miura, K.; Miyagawa, N.

    1985-01-01

A lattice model is applied as the analysis model for the aseismic design of the Hamaoka nuclear reactor building. To verify the validity of this design model, two reinforced concrete blocks were constructed on the ground and forced vibration tests were carried out. The test results are well reproduced by simulation analysis using the lattice model. The damping value of the ground obtained from the test is more conservative than the design value. (orig.)

  13. Assessment of public acceptability in site selection process. The methodology and the results

    International Nuclear Information System (INIS)

    Zeleznik, N.; Kralj, M.; Polic, M.; Kos, D.; Pek Drapal, D.

    2005-01-01

The site selection process for the low- and intermediate-level radioactive waste (LILW) repository in Slovenia follows the mixed-mode approach according to the model proposed by the IAEA. After finishing the conceptual and planning stage in 1999, and after identification of the potentially suitable areas in the area survey stage in 2001, ARAO (the agency for radwaste management) invited all municipalities to volunteer in the procedure of siting the LILW repository. A positive response was received from eight municipalities, although three later withdrew. A selection among twelve locations in the remaining five municipalities had to be made, because the Slovenian procedure provides for only three locations to be further evaluated in the stage of identification of potentially suitable sites. A pre-feasibility study of public acceptability, together with the technical aspects (safety, technical functionality, and economic, environmental and spatial aspects), was performed. The public acceptability aspect included objective and subjective evaluation criteria. The former drew on demographic studies, data on the local economy, infrastructure and possible environmental problems, media analysis, and earlier public opinion polls. The latter drew on topical workshops, a free phone line, telephone interviews with the general public, personal interviews with decision makers and public opinion leaders, and a public opinion poll in all included communities. The evaluated municipalities were ranked regarding their social suitability for the radioactive waste site. (author)

  14. Model selection for integrated pest management with stochasticity.

    Science.gov (United States)

    Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel

    2018-04-07

In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model.

  15. Bayesian selection of misspecified models is overconfident and may cause spurious posterior probabilities for phylogenetic trees.

    Science.gov (United States)

    Yang, Ziheng; Zhu, Tianqi

    2018-02-20

The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analyses of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that, when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor behind the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results for the application of Bayesian model selection to the evaluation of opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.

  16. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

In the present paper we consider the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and discuss two different models. In the first model the reliabilities of the subsystems are considered as the different objectives; in the second model the cost and the time spent on repairing the components are considered as two different objectives. The two models are formulated as multi-objective nonlinear programming problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership function of each objective function, transform the membership functions into equivalent linear membership functions by first-order Taylor series and, finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  17. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  18. Selection Strategies for Social Influence in the Threshold Model

    Science.gov (United States)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
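
    A minimal threshold-model cascade with degree-ranked initiator selection, one of the structure-based strategies mentioned above; the graph, the uniform thresholds and the initiator fraction are all illustrative.

        import random

        def threshold_cascade(adj, thresholds, initiators):
            """Synchronous threshold-model dynamics: a node adopts when the
            fraction of its neighbours that have adopted exceeds its threshold."""
            active = set(initiators)
            changed = True
            while changed:
                changed = False
                for node, nbrs in adj.items():
                    if node in active or not nbrs:
                        continue
                    frac = sum(n in active for n in nbrs) / len(nbrs)
                    if frac > thresholds[node]:
                        active.add(node)
                        changed = True
            return active

        # Random graph and degree-ranked initiator selection.
        random.seed(6)
        n, p = 200, 0.03
        adj = {i: [] for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p:
                    adj[i].append(j)
                    adj[j].append(i)
        thresholds = {i: 0.4 for i in range(n)}
        by_degree = sorted(adj, key=lambda i: len(adj[i]), reverse=True)
        spread = threshold_cascade(adj, thresholds, by_degree[:20])
        print("cascade size with top-degree initiators:", len(spread))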

  19. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  20. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Science.gov (United States)

    Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi

    2016-01-01

    Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of genomic
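
    A toy island-model selection loop is sketched below to convey the mechanics (subpopulations evolved separately, with occasional migration of the best individual); the binary genotypes, marker effects and truncation selection are invented stand-ins for the rice marker data and GS models of the study.

        import numpy as np

        rng = np.random.default_rng(7)
        L, n_islands, pop, gens = 100, 4, 40, 60
        # Stand-in marker effects, as if estimated from a training population.
        effects = rng.normal(0, 1, L)

        def gebv(genos):
            """Genomic estimated breeding values under the assumed effects."""
            return genos @ effects

        islands = [rng.integers(0, 2, (pop, L)) for _ in range(n_islands)]
        for g in range(gens):
            for k, genos in enumerate(islands):
                fit = gebv(genos)
                parents = genos[np.argsort(fit)[-10:]]   # truncation selection
                children = []
                for _ in range(pop):
                    a, b = parents[rng.integers(10, size=2)]
                    cross = rng.random(L) < 0.5          # uniform "recombination"
                    children.append(np.where(cross, a, b))
                islands[k] = np.array(children)
            if g % 10 == 9:                              # occasional migration
                migrants = [isl[np.argmax(gebv(isl))].copy() for isl in islands]
                for k in range(n_islands):
                    islands[k][0] = migrants[(k + 1) % n_islands]

        print("best GEBV per island:",
              [round(float(gebv(isl).max()), 2) for isl in islands])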

  1. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Directory of Open Access Journals (Sweden)

    Shiori Yabe

Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the

  2. Global economic consequences of selected surgical diseases: a modelling study.

    Science.gov (United States)

    Alkire, Blake C; Shrime, Mark G; Dare, Anna J; Vincent, Jeffrey R; Meara, John G

    2015-04-27

    The surgical burden of disease is substantial, but little is known about the associated economic consequences. We estimate the global macroeconomic impact of the surgical burden of disease due to injury, neoplasm, digestive diseases, and maternal and neonatal disorders from two distinct economic perspectives. We obtained mortality rate estimates for each disease for the years 2000 and 2010 from the Institute for Health Metrics and Evaluation Global Burden of Disease 2010 study, and estimates of the proportion of the burden of the selected diseases that is surgical from a paper by Shrime and colleagues. We first used the value of lost output (VLO) approach, based on the WHO's Projecting the Economic Cost of Ill-Health (EPIC) model, to project annual market economy losses due to these surgical diseases during 2015-30. EPIC attempts to model how disease affects a country's projected labour force and capital stock, which in turn are related to losses in economic output, or gross domestic product (GDP). We then used the value of lost welfare (VLW) approach, which is conceptually based on the value of a statistical life and is inclusive of non-market losses, to estimate the present value of long-run welfare losses resulting from mortality and short-run welfare losses resulting from morbidity incurred during 2010. Sensitivity analyses were performed for both approaches. During 2015-30, the VLO approach projected that surgical conditions would result in losses of 1·25% of potential GDP, or $20·7 trillion (2010 US$, purchasing power parity) in the 128 countries with data available. When expressed as a proportion of potential GDP, annual GDP losses were greatest in low-income and middle-income countries, with up to a 2·5% loss in output by 2030. When total welfare losses are assessed (VLW), the present value of economic losses is estimated to be equivalent to 17% of 2010 GDP, or $14·5 trillion in the 175 countries assessed with this approach. Neoplasm and injury account

  3. MicroCHP: Overview of selected technologies, products and field test results

    Energy Technology Data Exchange (ETDEWEB)

    Kuhn, Vollrad [Berliner Energieagentur GmbH, Franzoesische Strasse 23, 10117 Berlin (Germany); Klemes, Jiri; Bulatov, Igor [Centre for Process Integration, CEAS, The University of Manchester, P.O. Box 88, M60 1QD Manchester (United Kingdom)

    2008-11-15

    This paper gives an overview of selected microCHP technologies and products, with a focus on Stirling and steam machines. Field tests in Germany, the UK and some other EC countries are presented, assessed and evaluated. Test results show overall positive performance, with differences between sectors (domestic vs. small business). Some negative experiences have been reported, especially from tests with Stirling engines and the free-piston steam machine. There are still obstacles to market implementation. Further projects and tests of microCHP are starting in various countries. If positive results prevail and deficiencies are eliminated, the way could be opened to large-scale production and market implementation. (author)

  4. Improved social force model based on exit selection for microscopic pedestrian simulation in subway station

    Institute of Scientific and Technical Information of China (English)

    郑勋; 李海鹰; 孟令云; 许心越; 陈旭

    2015-01-01

    An improved social force model based on exit selection is proposed to simulate pedestrians' microscopic behaviors in a subway station. The modification lies in considering three factors: spatial distance, occupant density and exit width. In addition, the problem of pedestrians switching exits too frequently is addressed in three ways: pedestrians do not change to other exits within the affected area of one exit; a probability of remaining with the previously chosen exit is used; and the exit-selection function is invoked only after several simulation steps. Pedestrians in a subway station have some special characteristics, such as explicit destinations and different levels of familiarity with the station. Finally, Beijing Zoo Subway Station is taken as an example, and the feasibility of the model results is verified through a comparison of actual data and simulation data. The simulation results show that the improved model can depict the microscopic behaviors of pedestrians in a subway station.

  5. Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.

    Science.gov (United States)

    Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf

    2018-01-01

    Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for each of the three data sets, a CoMFA model with all CoMFA descriptors was created; then, by applying each variable selection method, a new CoMFA model was developed, so that nine CoMFA models were built per data set. The results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying five of the variable selection approaches (FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife) increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, whereas SRD-FFD and SRD-UVE-PLS runs take only a few seconds. Applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS also preserves CoMFA contour map information for both fields.

  6. Identifiability Results for Several Classes of Linear Compartment Models.

    Science.gov (United States)

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
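
    As an illustration of the class of models studied (an assumed textbook-style example, not one taken from the paper), consider a two-compartment model with input, output and a leak in compartment 1, written in LaTeX:

        \dot{x}_1 = -(a_{01} + a_{21})\,x_1 + a_{12}\,x_2 + u(t), \qquad
        \dot{x}_2 = a_{21}\,x_1 - a_{12}\,x_2, \qquad y = x_1 .

    The transfer function is $(s + a_{12})/(s^{2} + (a_{01} + a_{21} + a_{12})\,s + a_{01}a_{12})$, so input-output data determine the coefficient combinations $a_{12}$, $a_{01}+a_{21}+a_{12}$ and $a_{01}a_{12}$, from which all three rate constants can be recovered; this particular model is therefore identifiable. In the unidentifiable models studied in the paper, by contrast, only combinations such as monomial cycle products (e.g., $a_{12}a_{21}$) are determined.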

  7. A Neuronal Network Model for Pitch Selectivity and Representation.

    Science.gov (United States)

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch, despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli such as iterated-ripple-noise, and accounts for their multiple pitch perceptions.

  8. Natural and sexual selection giveth and taketh away reproductive barriers: models of population divergence in guppies.

    Science.gov (United States)

    Labonne, Jacques; Hendry, Andrew P

    2010-07-01

    The standard predictions of ecological speciation might be nuanced by the interaction between natural and sexual selection. We investigated this hypothesis with an individual-based model tailored to the biology of guppies (Poecilia reticulata). We specifically modeled the situation where a high-predation population below a waterfall colonizes a low-predation population above a waterfall. Focusing on the evolution of male color, we confirm that divergent selection causes the appreciable evolution of male color within 20 generations. The rate and magnitude of this divergence were reduced when dispersal rates were high and when female choice did not differ between environments. Adaptive divergence was always coupled to the evolution of two reproductive barriers: viability selection against immigrants and hybrids. Different types of sexual selection, however, led to contrasting results for another potential reproductive barrier: mating success of immigrants. In some cases, the effects of natural and sexual selection offset each other, leading to no overall reproductive isolation despite strong adaptive divergence. Sexual selection acting through female choice can thus strongly modify the effects of divergent natural selection and thereby alter the standard predictions of ecological speciation. We also found that under no circumstances did divergent selection cause appreciable divergence in neutral genetic markers.

  9. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

    Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including for fixed grid models, which generally perform poorly at coarse resolution. Fixed grid models, with nested grid representations of the grounding line, are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.

  10. The Baltic Sea experiment BALTEX: a brief overview and some selected results

    Energy Technology Data Exchange (ETDEWEB)

    Raschke, E.; Karstens, U.; Nolte-Holube, R.; Brandt, R.; Isemer, H.J.; Lohmann, D.; Lobmeyr, M.; Rockel, B.; Stuhlmann, R. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik

    1997-12-31

    The mechanisms responsible for the transfer of energy and water within the climate system are under worldwide investigation within the framework of the Global Energy and Water Cycle Experiment (GEWEX) to improve the predictability of natural and man-made climate changes at short and long ranges and their impact on water resources. Five continental-scale experiments have been established within GEWEX to enable a more complete coupling between atmospheric and hydrological models. One of them is the Baltic Sea Experiment (BALTEX). In this paper, the goals and structure of BALTEX are outlined. A short overview of measuring and modelling strategies is given. Atmospheric and hydrological model results of the authors are presented. This includes validation of precipitation using station measurements as well as validation of modelled cloud cover with cloud estimates from satellite data. Furthermore, results of a large-scale grid-based hydrological model to be coupled to atmospheric models are presented. (orig.)

  11. gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework

    Directory of Open Access Journals (Sweden)

    Benjamin Hofner

    2016-10-01

    Full Text Available Generalized additive models for location, scale and shape (GAMLSS) are a flexible class of regression models that allow multiple parameters of a distribution function, such as the mean and the standard deviation, to be modelled simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we use a data set on stunted growth in India. In addition to the specification and application of the model itself, we present a variety of convenience functions, including methods for tuning parameter selection, prediction and visualization of results. The package gamboostLSS is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=gamboostLSS.

  12. Evaluation of selected predictive models and parameters for the environmental transport and dosimetry of radionuclides

    International Nuclear Information System (INIS)

    Miller, C.W.; Dunning, D.E. Jr.; Etnier, E.L.; Hoffman, F.O.; Little, C.A.; Meyer, H.R.; Shaeffer, D.L.; Till, J.E.

    1979-07-01

    Evaluations of selected predictive models and parameters used in the assessment of the environmental transport and dosimetry of radionuclides are summarized. Major sections of this report include a validation of the Gaussian plume dispersion model, a comparison of the output of a model for the transport of 131I from vegetation to milk with field data, validation of a model for the fraction of aerosols intercepted by vegetation, an evaluation of dose conversion factors for 232Th, an evaluation of the effect of considering age dependency on population dose estimates, and a summary of validation results for hydrologic transport models

  13. POST ASYMPTOTIC GIANT BRANCH BIPOLAR REFLECTION NEBULAE: RESULT OF DYNAMICAL EJECTION OR SELECTIVE ILLUMINATION?

    International Nuclear Information System (INIS)

    Koning, N.; Kwok, Sun; Steffen, W.

    2013-01-01

    A model for post asymptotic giant branch bipolar reflection nebulae has been constructed based on a pair of evacuated cavities in a spherical dust envelope. Many of the observed features of bipolar nebulae, including filled bipolar lobes, an equatorial torus, searchlight beams, and a bright central light source, can be reproduced. The effects of orientation and dust density are studied, and comparisons with some observed examples are offered. We suggest that many observed properties of bipolar nebulae are the result of optical effects, and any physical modeling of these nebulae has to take these factors into consideration.

  14. Fetal Intervention in Right Outflow Tract Obstructive Disease: Selection of Candidates and Results

    Science.gov (United States)

    Gómez Montes, E.; Herraiz, I.; Mendoza, A.; Galindo, A.

    2012-01-01

    Objectives. To describe the process of selection of candidates for fetal cardiac intervention (FCI) in fetuses diagnosed with pulmonary atresia-critical stenosis with intact ventricular septum (PA/CS-IVS) and report our own experience with FCI for such disease. Methods. We searched our database for cases of PA/CS-IVS prenatally diagnosed in 2003–2012. Data of 38 fetuses were retrieved and analyzed. FCI was offered to 6 patients (2 refused). In the remaining cases it was not offered, due to the presence of either favourable prognostic echocardiographic markers (n = 20) or poor prognostic indicators (n = 12). Results. The outcome of fetuses with PA/CS-IVS was accurately predicted with multiparametric scoring systems. Pulmonary valvuloplasty was technically successful in all 4 fetuses. The growth of the fetal right heart and hemodynamic parameters showed a Gaussian-like behaviour, with an improvement in the first weeks and slow worsening as pregnancy advanced, probably indicating a restenosis. Conclusions. The most likely type of circulation after birth may be predicted in the second trimester of pregnancy by means of combining cardiac dimensions and functional parameters. Fetal pulmonary valvuloplasty in midgestation is technically feasible and in well-selected cases may improve right heart growth, fetal hemodynamics, and postnatal outcome. PMID:22928144

  15. Selectivity lists of pesticides to beneficial arthropods for IPM programs in carrot--first results.

    Science.gov (United States)

    Hautier, L; Jansen, J-P; Mabon, N; Schiffers, B

    2005-01-01

    In order to improve IPM programs in carrot, 7 fungicides, 12 herbicides and 9 insecticides commonly used in Belgium were tested for their toxicity towards five beneficial arthropods representative of the most important natural enemies encountered in carrot: a parasitic wasp, Aphidius rhopalosiphi (De Stefani-Perez) (Hym., Aphidiidae); a ladybird, Adalia bipunctata (L.) (Col., Coccinellidae); a hoverfly, Episyrphus balteatus (Dipt., Syrphidae); a rove beetle, Aleochara bilineata (Col., Staphylinidae); and a carabid beetle, Bembidion lampros (Col., Carabidae). Initially, all plant protection products were tested on an inert substrate, glass plates or sand according to the insect. Products with a corrected mortality (CM) or a parasitism reduction (PR) lower than 30% were kept for the constitution of a positive list (green list). The other compounds were further tested on plants for A. rhopalosiphi, A. bipunctata and E. balteatus, and on soil for B. lampros and A. bilineata. With these extended laboratory test results, products were assigned to toxicity classes (green category: CM or PR lower than 30%). Fungicides were harmless to beneficials except Tebuconazole, which was slightly harmful for A. bipunctata. Herbicides were also harmless to soil beneficials, except Chlorpropham. This product was very toxic on sand towards A. bilineata and must be tested on soil. All soil insecticides tested were very toxic for ground beneficials and considered non-selective. Their use in IPM is open to question in view of their negative impacts on beneficials. Among foliar insecticides, Dimethoate and Deltamethrin are not recommended for IPM because of their high toxicity for all beneficials. The other foliar insecticides were more selective, although none of them was harmless for all species tested.

  16. Results of radiation tests at cryogenic temperature on some selected organic materials for the LHC

    International Nuclear Information System (INIS)

    Schoenbacher, H.; Szeless, B.; Tavlet, M.; Humer, K.; Weber, H.W.

    1996-01-01

    Future multi-TeV particle accelerators like the CERN Large Hadron Collider (LHC) will use superconducting magnets in which organic materials will be exposed to high radiation levels at temperatures as low as 2 K. A representative selection of organic materials, comprising insulating films, cable insulations, and epoxy-type impregnated resins, was exposed to neutron and gamma radiation in a nuclear reactor. Depending on the type of material, the integrated radiation doses varied between 180 kGy and 155 MGy. During irradiation, the samples were kept close to the boiling temperature of liquid nitrogen, i.e. ∼80 K, and thereafter stored in liquid nitrogen and transferred at the same temperature into the testing device for measurement of tensile and flexural strength. Tests were carried out on the same materials at similar dose rates at room temperature, and the results were compared with those obtained at cryogenic temperature. They show that, within the selected dose range, a number of organic materials are suitable for use in the radiation field of the LHC at cryogenic temperature. (orig.)

  17. Differential Nanos 2 protein stability results in selective germ cell accumulation in the sea urchin.

    Science.gov (United States)

    Oulhen, Nathalie; Wessel, Gary M

    2016-10-01

    Nanos is a translational regulator required for the survival and maintenance of primordial germ cells. In the sea urchin, Strongylocentrotus purpuratus (Sp), Nanos 2 mRNA is broadly transcribed but accumulates specifically in the small micromere (sMic) lineage, in part because the 3'UTR element GNARLE leads to turnover in somatic cells but retention in the sMics. Here we found that the Nanos 2 protein is also selectively stabilized; it is initially translated throughout the embryo but turned over in the future somatic cells and retained only in the sMics, the future germ line in this animal. This differential stability of Nanos protein is dependent on the open reading frame (ORF), and is independent of the sumoylation and ubiquitylation pathways. Manipulation of the ORF indicates that 68 amino acids in the N terminus of the Nanos protein are essential for its stability in the sMics, whereas a 45 amino acid element adjacent to the zinc fingers targets its degradation. Further, this regulation of Nanos protein is cell autonomous, following formation of the germ line. These results are paradigmatic for the unique presence of Nanos in the germ line by a combination of selective RNA retention, distinctive translational control mechanisms (Oulhen et al., 2013), and now also by defined Nanos protein stability.

  18. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Lyu Kehong

    2014-06-01

    Full Text Available In helicopter transmission systems, it is important to monitor and track tooth damage evolution using a variety of sensors and detection methods. This paper develops a novel approach to sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal set of CIs is selected by comparing their test statistics according to the Mann–Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified with simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and that it is effective in reducing test cost and improving system reliability.
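
    A hedged sketch of the CI-ranking step: the Mann–Kendall trend statistic computed for hypothetical condition indicators measured over a growing damage history (the data and indicator names are invented for illustration):

        import numpy as np

        def mann_kendall_z(x):
            # Mann-Kendall trend statistic; large |Z| indicates a strong
            # monotonic trend (ties are ignored for brevity)
            x = np.asarray(x, dtype=float)
            n = len(x)
            s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
            var_s = n * (n - 1) * (2 * n + 5) / 18.0
            if s > 0:
                return (s - 1) / np.sqrt(var_s)
            if s < 0:
                return (s + 1) / np.sqrt(var_s)
            return 0.0

        # hypothetical CIs tracked over a growing tooth-damage history
        rng = np.random.default_rng(1)
        t = np.arange(60.0)
        cis = {"rms": 0.05 * t + rng.normal(0, 0.5, 60),   # trends with damage
               "kurtosis": rng.normal(0, 1.0, 60)}         # no trend
        ranked = sorted(cis, key=lambda k: abs(mann_kendall_z(cis[k])), reverse=True)
        print("CIs ranked by trend strength:", ranked)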

  19. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    Full Text Available There is much evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy explicit and implicit customer requirements. Internet shopping is a concept developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivation and information. With the development of new technologies, credit and financial exchange facilities were built into websites to facilitate e-commerce. The study sent a total of 200 questionnaires to the target group (teachers, students, professionals and managers of commercial web sites) and collected 130 questionnaires for final evaluation. Cronbach's alpha is used for measuring reliability; to evaluate the validity of the measurement instruments (questionnaires) and assure construct validity, confirmatory factor analysis is employed. In addition, the research questions are analyzed using path analysis to determine market selection models. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online marketing in Iran. These findings provide a consistent, targeted and holistic framework for the development of the Internet market in the country.
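
    The reliability step mentioned above can be made concrete with a small sketch; the formula is the standard Cronbach's alpha, while the data (130 respondents, five items) are simulated for illustration:

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, n_items) array of questionnaire scores;
            # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(130, 1))                # 130 respondents, as in the study
        items = latent + 0.8 * rng.normal(size=(130, 5))  # five correlated items
        print(f"alpha = {cronbach_alpha(items):.2f}")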

  20. Cliff-edge model of obstetric selection in humans.

    Science.gov (United States)

    Mitteroecker, Philipp; Huttegger, Simon M; Fischer, Barbara; Pavlicev, Mihaela

    2016-12-20

    The strikingly high incidence of obstructed labor due to the disproportion of fetal size and the mother's pelvic dimensions has puzzled evolutionary scientists for decades. Here we propose that these high rates are a direct consequence of the distinct characteristics of human obstetric selection. Neonatal size relative to the birth-relevant maternal dimensions is highly variable and positively associated with reproductive success until it reaches a critical value, beyond which natural delivery becomes impossible. As a consequence, the symmetric phenotype distribution cannot match the highly asymmetric, cliff-edged fitness distribution well: The optimal phenotype distribution that maximizes population mean fitness entails a fraction of individuals falling beyond the "fitness edge" (i.e., those with fetopelvic disproportion). Using a simple mathematical model, we show that weak directional selection for a large neonate, a narrow pelvic canal, or both is sufficient to account for the considerable incidence of fetopelvic disproportion. Based on this model, we predict that the regular use of Caesarean sections throughout the last decades has led to an evolutionary increase of fetopelvic disproportion rates by 10 to 20%.
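
    A hedged numeric sketch of the cliff-edge argument: with a symmetric (Gaussian) phenotype distribution and a fitness function that rises until a threshold and then drops to zero, the mean phenotype that maximizes population mean fitness leaves a nonzero fraction of births beyond the edge. All parameter values below are assumptions for illustration, not figures from the paper:

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize_scalar

        sigma, edge, b = 1.0, 0.0, 0.2   # phenotype SD, fitness cliff, selection strength

        def mean_fitness(mu):
            # E[w(D)] for D ~ N(mu, sigma), with w(D) = 1 + b*D below the edge
            # and w(D) = 0 beyond it (fetopelvic disproportion)
            d = np.linspace(mu - 6 * sigma, edge, 2000)
            return np.trapz((1 + b * d) * norm.pdf(d, mu, sigma), d)

        res = minimize_scalar(lambda m: -mean_fitness(m), bounds=(-5, 2), method="bounded")
        mu_star = res.x
        print(f"fitness-maximizing mean phenotype: {mu_star:.2f}")
        print(f"fraction beyond the edge: {1 - norm.cdf(edge, mu_star, sigma):.3f}")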

  1. Financial performance as a decision criterion of credit scoring models selection [doi: 10.21529/RECADM.2017004]

    Directory of Open Access Journals (Sweden)

    Rodrigo Alves Silva

    2017-09-01

    Full Text Available This paper aims to show the importance of financial metrics in the selection of credit scoring models. To this end, we consider an automatic-approval-system approach and carry out a performance analysis of financial metrics on the theoretical portfolios generated by seven credit scoring models based on the main statistical learning techniques. The models were estimated on the German Credit dataset, and the results were analyzed using four metrics: total accuracy, error cost, risk-adjusted return on capital and the Sharpe index. The results show that total accuracy, widely used as a criterion for selecting credit scoring models, is unable to select the most profitable model for the company, indicating the need to incorporate financial metrics into the credit scoring model selection process. Keywords: credit risk; model selection; statistical learning.
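
    A toy illustration of why accuracy and a financial metric can disagree, on simulated labels; the 5:1 cost ratio and the two models' error profiles are assumptions, not figures from the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        y = rng.integers(0, 2, 1000)                           # 1 = borrower defaults

        # model A: 20% of labels flipped uniformly (~80% accurate)
        model_a = np.where(rng.random(1000) < 0.20, 1 - y, y)
        # model B: misses few defaulters but rejects many good clients (~75% accurate)
        model_b = y.copy()
        model_b[(y == 1) & (rng.random(1000) < 0.05)] = 0
        model_b[(y == 0) & (rng.random(1000) < 0.45)] = 1

        def error_cost(y, yhat, c_fn=5.0, c_fp=1.0):
            # approving a defaulter (FN) is assumed 5x worse than rejecting
            # a good client (FP)
            fn = np.sum((y == 1) & (yhat == 0))
            fp = np.sum((y == 0) & (yhat == 1))
            return (c_fn * fn + c_fp * fp) / len(y)

        for name, yhat in [("A", model_a), ("B", model_b)]:
            print(f"model {name}: accuracy={np.mean(y == yhat):.3f}  "
                  f"cost per applicant={error_cost(y, yhat):.3f}")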

  2. Observing with a space-borne gamma-ray telescope: selected results from INTEGRAL

    International Nuclear Information System (INIS)

    Schanne, Stephane

    2006-01-01

    The International Gamma-Ray Astrophysics Laboratory, i.e. the INTEGRAL satellite of ESA, in orbit for about 3 years, performs gamma-ray observations of the sky in the 15 keV to 8 MeV energy range. Thanks to its imager IBIS, and in particular the ISGRI detection plane based on 16384 CdTe pixels, it achieves an excellent angular resolution (12 arcmin) for point source studies with good continuum spectrum sensitivity. Thanks to its spectrometer SPI, based on 19 germanium detectors maintained at 85 K by a cryogenic system and located inside an active BGO veto shield, it achieves an excellent spectral resolution of about 2 keV for 1 MeV photons, which permits astrophysical gamma-ray line studies with good narrow-line sensitivity. In this paper we review some goals of gamma-ray astronomy from space and present the INTEGRAL satellite, in particular its instruments ISGRI and SPI. Ground and in-flight calibration results from SPI are presented, before turning to selected astrophysical results from INTEGRAL. In particular, results on point source searches are presented, followed by results on nuclear astrophysics, exemplified by the study of the 1809 keV gamma-ray line from radioactive 26Al nuclei produced by ongoing stellar nucleosynthesis in the Galaxy. Finally, a review of the study of positron-electron annihilation in the Galactic center region, producing 511 keV gamma-rays, is presented

  3. Selective CT mesentericography in the diagnostics of obscure overt intestinal bleeding: preliminary results

    International Nuclear Information System (INIS)

    Schuermann, K.; Buecker, A.; Tacke, J.; Schmitz-Rode, T.; Guenther, R.W.; Jansen, M.

    2002-01-01

    Aim: To evaluate intra-arterial CT mesentericography (CTM) in the diagnostics of severe obscure overt intestinal bleeding in comparison with conventional mesentericography (MG) and surgery. Methods: In 8 patients (23 - 82 years, mean 59 years), CTM was performed via the catheter left in the superior mesenteric artery after MG to detect the source of bleeding. Early and late-phase spiral CT scans were acquired after administration of contrast medium. Active bleeding was considered to be present if extravasation of contrast medium into the bowel was found. The results of MG and CTM were compared with the results of surgery. Results: With MG active bleeding was found in one patient, with CTM in five patients. In three patients, both MG and CTM were negative. Six patients underwent surgery. Five cases of bleeding detected with CTM were confirmed by surgery. In one case, bleeding found with CTM was not confirmed by surgery. One patient underwent surgery although all imaging procedures were negative. The source of bleeding remained unknown. Surgically, the site of bleeding was located in the jejunum in 3 patients (jejunitis, jejunal ulcers, carcinoid), one patient had a diverticulum in the ascending colon. The colonic bleeding site was correctly localized with CTM, whereas the small bowel bleeding could only roughly be assigned to the proximal or distal jejunum or jejunoileal transition area. Conclusion: Preliminary results indicate that selective CTM is superior to MG in the evaluation of severe obscure overt intestinal bleeding. (orig.)

  4. Employment and other selected personnel attributes in metallurgical and industrial enterprises of different size - research results

    Directory of Open Access Journals (Sweden)

    A. Pawliczek

    2015-10-01

    Full Text Available The presented paper deals with employment and other selected personnel attributes, such as employees' affiliations, employee benefits, monitoring of employee satisfaction, monitoring of work productivity, investment in employee education, and obstacles to hiring qualified human resources. The characteristics are benchmarked against enterprise size, based on employee counts in the year 2013. The relevant data were collected in Czech industrial enterprises, including metallurgical companies, through university questionnaire research intended to induce a synergy effect arising from mutual communication among academia, students and industry. The most important results are presented later in the paper, complemented by a discussion based on relevant professional literature. The findings suggest that bigger companies check productivity and satisfaction and dismiss employees more frequently, unlike medium-sized companies, which do not reduce their workforce and respond to the impact of crisis with decreased affiliations, reduced benefits and similar savings.

  5. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    Directory of Open Access Journals (Sweden)

    Ryan P Franckowiak

    Full Text Available In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
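
    For reference, the three criteria under test in their Gaussian-likelihood forms, applied to an OLS fit on hypothetical vectorized distance matrices; note that n is the number of site pairs rather than the number of sites, which is the inflated sample size the authors identify:

        import numpy as np

        def information_criteria(y, X):
            # AIC = n*ln(RSS/n) + 2k; BIC uses k*ln(n); AICc adds the
            # small-sample correction
            n, k = X.shape
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = float(np.sum((y - X @ beta) ** 2))
            aic = n * np.log(rss / n) + 2 * k
            aicc = aic + 2 * k * (k + 1) / (n - k - 1)
            bic = n * np.log(rss / n) + k * np.log(n)
            return aic, aicc, bic

        # 30 sites give 435 pairwise distances: n reflects pairs, not sites
        rng = np.random.default_rng(0)
        n_sites = 30
        iu = np.triu_indices(n_sites, k=1)

        def random_distance_vector():
            m = rng.random((n_sites, n_sites))
            return ((m + m.T) / 2)[iu]

        gen_dist = random_distance_vector()     # "genetic" distances
        geo_dist = random_distance_vector()     # spurious predictor
        X = np.column_stack([np.ones_like(geo_dist), geo_dist])
        print(information_criteria(gen_dist, X))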

  6. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexity. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, whereby simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
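
    A minimal sketch of the nested sampling loop on a toy one-dimensional problem (uniform prior on [0, 1], Gaussian likelihood); new points are drawn by naive rejection sampling and the final live-point contribution to the evidence is neglected for brevity. None of this reflects the authors' implementation:

        import numpy as np

        rng = np.random.default_rng(0)

        def loglike(theta):
            # toy likelihood: Gaussian centred at 0.5 with width 0.05
            return -0.5 * ((theta - 0.5) / 0.05) ** 2

        n_live, n_iter = 100, 600
        live = rng.random(n_live)                         # draws from the uniform prior
        logl = np.array([loglike(t) for t in live])
        log_z = -np.inf
        log_shell0 = np.log(1.0 - np.exp(-1.0 / n_live))  # width of the first prior shell

        for i in range(n_iter):
            worst = int(np.argmin(logl))
            # accumulate evidence: Z += L_worst * (shrinking prior shell width)
            log_z = np.logaddexp(log_z, log_shell0 - i / n_live + logl[worst])
            # replace the worst live point by a prior draw above the likelihood
            # bound (real implementations use smarter constrained proposals)
            while True:
                t = rng.random()
                if loglike(t) > logl[worst]:
                    live[worst], logl[worst] = t, loglike(t)
                    break

        print(f"log-evidence estimate: {log_z:.2f} "
              f"(analytic: {np.log(0.05 * np.sqrt(2 * np.pi)):.2f})")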

  7. Impact Assessment of Abiotic Resources in LCA: Quantitative Comparison of Selected Characterization Models

    DEFF Research Database (Denmark)

    Rørbech, Jakob Thaysen; Vadenbo, Carl; Hellweg, Stefanie

    2014-01-01

    Resources have received significant attention in recent years, resulting in development of a wide range of resource depletion indicators within life cycle assessment (LCA). Understanding the differences in assessment principles used to derive these indicators and the effects on the impact assessment results is critical for indicator selection and interpretation of the results. Eleven resource depletion methods were evaluated quantitatively with respect to resource coverage, characterization factors (CF), impact contributions from individual resources, and total impact scores. We included 2247 … groups, according to method focus and modeling approach, to aid method selection within LCA.

  8. Genomic Selection Accuracy using Multifamily Prediction Models in a Wheat Breeding Program

    Directory of Open Access Journals (Sweden)

    Elliot L. Heffner

    2011-03-01

    Full Text Available Genomic selection (GS) uses genome-wide molecular marker data to predict the genetic value of selection candidates in breeding programs. In plant breeding, the ability to produce large numbers of progeny per cross allows GS to be conducted within each family. However, this approach requires phenotypes of lines from each cross before conducting GS. This will prolong the selection cycle and may result in lower gains per year than approaches that estimate marker effects with multiple families from previous selection cycles. In this study, phenotypic selection (PS), conventional marker-assisted selection (MAS), and GS prediction accuracy were compared for 13 agronomic traits in a population of 374 winter wheat (Triticum aestivum L.) advanced-cycle breeding lines. A cross-validation approach that trained and validated prediction accuracy across years was used to evaluate effects of model selection, training population size, and marker density in the presence of genotype × environment interactions (G×E). The average prediction accuracies using GS were 28% greater than with MAS and were 95% as accurate as PS. For net merit, the average accuracy across six selection indices for GS was 14% greater than for PS. These results provide empirical evidence that multifamily GS could increase genetic gain per unit time and cost in plant breeding.

  9. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1995-09-01

    It is shown that most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author). 16 refs, 2 figs

  10. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and {rho} together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  11. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1996-01-01

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author)

  12. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
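
    A hedged, simplified stand-in for the mutual-information step (plain mutual information rather than the partial MI used in the paper), ranking synthetic candidate inputs against a target with scikit-learn:

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(0)
        n = 500
        X = rng.normal(size=(n, 6))          # candidate sensor-derived inputs
        # hypothetical target: depends on inputs 0 and 2 only
        y = 2.0 * X[:, 0] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=n)

        mi = mutual_info_regression(X, y, random_state=0)
        ranking = np.argsort(mi)[::-1]
        print("inputs ranked by MI with target:", ranking)
        print("MI estimates:", np.round(mi, 3))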

  13. Examining speed versus selection in connectivity models using elk migration as an example

    Science.gov (United States)

    Brennan, Angela; Hanks, Ephraim M.; Merkle, Jerod A.; Cole, Eric K.; Dewey, Sarah R.; Courtemanch, Alyson B.; Cross, Paul C.

    2018-01-01

    Context: Landscape resistance is vital to connectivity modeling and frequently derived from resource selection functions (RSFs). RSFs estimate relative probability of use and tend to focus on understanding habitat preferences during slow, routine animal movements (e.g., foraging). Dispersal and migration, however, can produce rarer, faster movements, in which case models of movement speed rather than resource selection may be more realistic for identifying habitats that facilitate connectivity. Objective: To compare two connectivity modeling approaches applied to resistance estimated from models of movement rate and resource selection. Methods: Using movement data from migrating elk, we evaluated continuous time Markov chain (CTMC) and movement-based RSF models (i.e., step selection functions [SSFs]). We applied circuit theory and shortest random path (SRP) algorithms to CTMC, SSF and null (i.e., flat) resistance surfaces to predict corridors between elk seasonal ranges. We evaluated prediction accuracy by comparing model predictions to empirical elk movements. Results: All connectivity models predicted elk movements well, but models applied to CTMC resistance were more accurate than models applied to SSF and null resistance. Circuit theory models were more accurate on average than SRP models. Conclusions: CTMC can be more realistic than SSFs for estimating resistance for fast movements, though SSFs may demonstrate some predictive ability when animals also move slowly through corridors (e.g., stopover use during migration). High null model accuracy suggests seasonal range data may also be critical for predicting direct migration routes. For animals that migrate or disperse across large landscapes, we recommend incorporating CTMC into the connectivity modeling toolkit.

  14. The selection pressures induced non-smooth infectious disease model and bifurcation analysis

    International Nuclear Information System (INIS)

    Qin, Wenjie; Tang, Sanyi

    2014-01-01

    Highlights: • A non-smooth infectious disease model describing selection pressure is developed. • The effect of selection pressure on infectious disease transmission is addressed. • The key factors related to the threshold value are determined. • The stabilities and bifurcations of the model are revealed in detail. • Strategies for the prevention of emerging infectious disease are proposed. - Abstract: Mathematical models can assist in the design of strategies to control emerging infectious disease. This paper deduces a non-smooth infectious disease model induced by selection pressures. Analysis of this model reveals rich dynamics, including local and global stability of equilibria and local sliding bifurcations. Model solutions ultimately stabilize at either one real equilibrium or the pseudo-equilibrium on the switching surface of the model, depending on a threshold value determined by related parameters. Our main results show that reducing the threshold value to an appropriate level could contribute to the efficacy of prevention and treatment of emerging infectious disease, which indicates that selection pressures can be exploited to help prevent emerging infectious disease under medical resource limitation

  15. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method for producing high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution for a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometres across. In turn, the semi-analytical thermal model allows a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
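
    The classical conduction result that such superposition methods build on (Carslaw and Jaeger) gives the temperature rise at radial distance r and time t after an instantaneous line source releases energy Q' per unit length into an infinite medium with conductivity k and diffusivity alpha; the exact kernel used by the authors may differ:

        \Delta T(r,t) = \frac{Q'}{4\pi k t}\,\exp\!\left(-\frac{r^{2}}{4\alpha t}\right),
        \qquad \alpha = \frac{k}{\rho c_p} .

    For an adiabatic top surface of a semi-infinite medium, the method of images doubles this contribution, and the complementary field described in the abstract then enforces the remaining boundary conditions numerically.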

  16. A multicriteria decision making model for assessment and selection of an ERP in a logistics context

    Science.gov (United States)

    Pereira, Teresa; Ferreira, Fernanda A.

    2017-07-01

    The aim of this work is to apply a decision support methodology based on a multicriteria decision analysis (MCDA) model that allows the assessment and selection of an Enterprise Resource Planning (ERP) system in a Portuguese logistics company by a group of decision makers (GDM). A decision support system (DSS) implementing the Multicriteria Methodology for the Assessment and Selection of Information Systems/Information Technologies (MMASSI/IT) is used, chosen for its features and the ease with which the model can be changed and adapted to a given scope. Using this DSS, the information system best suited to the decision context was obtained; this result was then evaluated through a sensitivity and robustness analysis.

  17. SNP calling using genotype model selection on high-throughput sequencing data

    KAUST Repository

    You, Na; Murillo, Gabriel; Su, Xiaoquan; Zeng, Xiaowei; Xu, Jian; Ning, Kang; Zhang, ShouDong; Zhu, Jian-Kang; Cui, Xinping

    2012-01-01

    calling SNPs. Thus, errors not involved in base-calling or alignment, such as those in genomic sample preparation, are not accounted for. Results: A novel method of consensus and SNP calling, Genotype Model Selection (GeMS), is given which accounts

  18. Climate Change and Agricultural Productivity in Sub-Saharan Africa: A Spatial Sample Selection Model

    NARCIS (Netherlands)

    Ward, P.S.; Florax, R.J.G.M.; Flores-Lagunes, A.

    2014-01-01

    Using spatially explicit data, we estimate a cereal yield response function using a recently developed estimator for spatial error models when endogenous sample selection is of concern. Our results suggest that yields across Sub-Saharan Africa will decline with projected climatic changes, and that

  19. Dynamical Analysis of Density-dependent Selection in a Discrete one-island Migration Model

    Science.gov (United States)

    James H. Roberds; James F. Selgrade

    2000-01-01

    A system of non-linear difference equations is used to model the effects of density-dependent selection and migration in a population characterized by two alleles at a single gene locus. Results for the existence and stability of polymorphic equilibria are established. Properties for a genetically important class of equilibria associated with complete dominance in...

  20. Guiding center model to interpret neutral particle analyzer results

    Science.gov (United States)

    Englert, G. W.; Reinmann, J. J.; Lauver, M. R.

    1974-01-01

    The theoretical model is discussed, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge exchange neutrals in phase space which are directed into the apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about the mean azimuthal drift yields energy distributions which compare well with experiment. Use of data obtained with a bending magnet on the NPA showed that the separation between energy distribution curves of various mass species correlates well with the theory's ratio of drift to mean cyclotron energy. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimating ion density and electric field variation with plasma radius.

  1. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.

  2. A Site Selection Model for a Straw-Based Power Generation Plant with CO2 Emissions

    Directory of Open Access Journals (Sweden)

    Hao Lv

    2014-10-01

    Full Text Available The decision on the location of a straw-based power generation plant has a great influence on the plant's operation and performance. This study explores traditional theories of site selection. Using integer programming, it optimizes the economic and carbon emission outcomes of straw-based power generation as two objectives, with the supply and demand of straw as constraints, providing a multi-objective mixed-integer programming model for the site selection problem of a straw-based power generation plant. A case study then demonstrates the application of the model to the site selection decision for a straw-based power generation plant within a Chinese region. Finally, the paper discusses the model's results in the context of the wider aspects of straw-based power generation.
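
    A toy, brute-force version of the weighted two-objective selection problem described above; a real mixed-integer model would use a solver, and all site data, weights and the demand figure here are invented:

        from itertools import combinations

        # site: (annual cost, CO2 emissions, reachable straw supply), hypothetical units
        sites = {
            "A": (10.0, 5.0, 40.0),
            "B": (12.0, 4.0, 55.0),
            "C": (8.0,  7.0, 35.0),
            "D": (11.0, 4.5, 50.0),
        }
        demand, w_cost, w_co2 = 90.0, 0.6, 0.4

        best = None
        for r in range(1, len(sites) + 1):
            for combo in combinations(sites, r):
                cost = sum(sites[s][0] for s in combo)
                co2 = sum(sites[s][1] for s in combo)
                supply = sum(sites[s][2] for s in combo)
                if supply >= demand:                     # straw-supply constraint
                    score = w_cost * cost + w_co2 * co2  # weighted two objectives
                    if best is None or score < best[0]:
                        best = (score, combo)

        print("selected sites:", best[1], "weighted objective:", round(best[0], 2))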

  3. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Full Text Available Several authors have studied group decision making over the years, which indicates how relevant the topic is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for cases in which there is great divergence among the decision makers. The model comprises two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using this criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of supplier selection in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project, on behalf of an enterprise, in a way that serves the multiple objectives of the decision makers.

  4. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps reduce the number of factors required and improves knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms employed indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. CFS, by contrast, evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
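
    As a rough illustration of the filter idea, information gain for a binary permafrost label can be computed as the reduction in label entropy after splitting on a discretized predictor. This is a sketch only; the variable names and data are synthetic, not the authors' implementation.

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a categorical label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y, bins=5):
    """IG of a quantile-discretized predictor x with respect to label y."""
    cuts = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    xb = np.digitize(x, cuts)
    h = entropy(y)
    for b in np.unique(xb):
        mask = xb == b
        h -= mask.mean() * entropy(y[mask])   # subtract conditional entropy
    return h

rng = np.random.default_rng(1)
altitude = rng.uniform(1500, 3500, 500)       # informative predictor
aspect = rng.uniform(0, 360, 500)             # uninformative predictor
permafrost = (altitude + rng.normal(0, 300, 500)) > 2600

print(information_gain(altitude, permafrost))  # relatively high
print(information_gain(aspect, permafrost))    # near zero
```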

  5. Results from the IAEA benchmark of spallation models

    International Nuclear Information System (INIS)

    Leray, S.; David, J.C.; Khandaker, M.; Mank, G.; Mengoni, A.; Otsuka, N.; Filges, D.; Gallmeier, F.; Konobeyev, A.; Michel, R.

    2011-01-01

    Spallation reactions play an important role in a wide domain of applications. In the simulation codes used in this field, the nuclear interaction cross-sections and characteristics are computed by spallation models. The International Atomic Energy Agency (IAEA) has recently organised a benchmark of the spallation models used, or that could be used in the future, in high-energy transport codes. The objectives were, first, to assess the prediction capabilities of the different spallation models for the different mass and energy regions and the different exit channels and, second, to understand the reasons for the success or deficiency of the models. Results of the benchmark concerning both the analysis of the prediction capabilities of the models and the first conclusions on the physics of spallation models are presented. (authors)

  6. A Model for Selection of Eyespots on Butterfly Wings.

    Science.gov (United States)

    Sekimura, Toshio; Venkataraman, Chandrasekhar; Madzvamuse, Anotida

    2015-01-01

    The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why certain wing cells develop an eyespot while others do not. We illustrate that the key to understanding focus point selection may lie in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model, posed in the interior of each wing cell, that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and to moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. To complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating focus point distributions observed in
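
    The paper's central point, that the boundary value alone can decide whether a focus forms in otherwise identical cells, can be illustrated with a much simpler system: a generic one-dimensional bistable reaction-diffusion equation under assumed parameters (not the authors' two-morphogen model).

```python
import numpy as np

def run(boundary_value, n=200, steps=20000, dt=1e-3, dx=0.05, D=1.0, a=0.3):
    """Explicit FD solve of the bistable RD equation
    u_t = D u_xx + u(1-u)(u-a), with a fixed (Dirichlet) value at the
    'proximal' boundary and zero flux at the 'distal' end."""
    u = np.zeros(n)
    for _ in range(steps):
        u[0] = boundary_value            # proximal Dirichlet condition
        u[-1] = u[-2]                    # distal zero-flux condition
        lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
        u = u + dt * (D * lap + u * (1 - u) * (u - a))
    u[0], u[-1] = boundary_value, u[-2]  # re-impose boundaries after last step
    return u

low = run(0.1)    # sub-threshold boundary: the cell stays inactive
high = run(0.9)   # supra-threshold boundary: activation spreads into the cell
print(low.max(), high.max())   # ~0.1 versus ~1.0 in otherwise identical cells
```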

  7. A Model for Selection of Eyespots on Butterfly Wings.

    Directory of Open Access Journals (Sweden)

    Toshio Sekimura

    Full Text Available The development of eyespots on the wing surface of butterflies of the family Nymphalidae is one of the most studied examples of biological pattern formation. However, little is known about the mechanism that determines the number and precise locations of eyespots on the wing. Eyespots develop around signaling centers, called foci, that are located equidistant from wing veins along the midline of a wing cell (an area bounded by veins). A fundamental question that remains unsolved is why certain wing cells develop an eyespot while others do not. We illustrate that the key to understanding focus point selection may lie in the venation system of the wing disc. Our main hypothesis is that changes in morphogen concentration along the proximal boundary veins of wing cells govern focus point selection. Based on previous studies, we focus on a spatially two-dimensional reaction-diffusion system model, posed in the interior of each wing cell, that describes the formation of focus points. Using finite element based numerical simulations, we demonstrate that variation in the proximal boundary condition is sufficient to robustly select whether an eyespot focus point forms in otherwise identical wing cells. We also illustrate that this behavior is robust to small perturbations in the parameters and geometry and to moderate levels of noise. Hence, we suggest that an anterior-posterior pattern of morphogen concentration along the proximal vein may be the main determinant of the distribution of focus points on the wing surface. To complete our model, we propose a two-stage reaction-diffusion system model, in which a one-dimensional surface reaction-diffusion system, posed on the proximal vein, generates the morphogen concentrations that act as non-homogeneous Dirichlet (i.e., fixed) boundary conditions for the two-dimensional reaction-diffusion model posed in the wing cells. The two-stage model appears capable of generating focus point distributions

  8. A finite volume alternate direction implicit approach to modeling selective laser melting

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri; Mohanty, Sankhya

    2013-01-01

    Over the last decade, several studies have attempted to develop thermal models for analyzing the selective laser melting process, with a vision to predict the thermal stresses, microstructures and resulting mechanical properties of manufactured products. While a holistic model addressing all involved … to accurately simulate the process, are constrained by either the size or the scale of the model domain. A second challenging aspect involves the inclusion of non-linear material behavior into 3D implicit FE models. An alternating direction implicit (ADI) method based on a finite volume (FV) formulation is proposed for modeling single-layer and few-layer selective laser melting processes. The ADI technique is implemented and applied to two cases, involving constant material properties and non-linear material behavior. The ADI FV method consumes less time while having comparable accuracy with respect to 3D implicit FE models.
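
    As a much-simplified illustration of the implicit directional sweeps at the heart of an ADI scheme, one direction of the heat equation can be advanced with a single tridiagonal solve. The parameters below are invented and this is not the authors' SLM model.

```python
import numpy as np
from scipy.linalg import solve_banded

def implicit_heat_step(T, alpha, dt, dx):
    """One backward-Euler step of T_t = alpha T_xx on a 1D line
    (one directional sweep of an ADI scheme), with fixed end temperatures."""
    n = len(T)
    r = alpha * dt / dx**2
    ab = np.zeros((3, n))                 # banded matrix for solve_banded
    ab[0, 1:] = -r                        # super-diagonal
    ab[1, :] = 1 + 2 * r                  # main diagonal
    ab[2, :-1] = -r                       # sub-diagonal
    # Pin the boundary rows so the end temperatures stay fixed.
    ab[1, 0] = ab[1, -1] = 1.0
    ab[0, 1] = ab[2, -2] = 0.0
    return solve_banded((1, 1), ab, T)

T = np.full(50, 300.0)                    # powder layer at ambient temperature
T[25] = 1800.0                            # laser-heated spot (illustrative)
for _ in range(100):
    T = implicit_heat_step(T, alpha=1e-5, dt=1e-3, dx=1e-4)
print(T.max())                            # the hot spot diffuses outward
```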

  9. Multiphysics modeling of selective laser sintering/melting

    Science.gov (United States)

    Ganeriwala, Rishi Kumar

    A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States, according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net-shape parts with complicated geometries. In SLS/SLM, parts are built up layer by layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three-dimensional, reduced-order, coupled discrete element-finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon

  10. Selective Cerebro-Myocardial Perfusion in Complex Neonatal Aortic Arch Pathology: Midterm Results.

    Science.gov (United States)

    Hoxha, Stiljan; Abbasciano, Riccardo Giuseppe; Sandrini, Camilla; Rossetti, Lucia; Menon, Tiziano; Barozzi, Luca; Linardi, Daniele; Rungatscher, Alessio; Faggian, Giuseppe; Luciani, Giovanni Battista

    2018-04-01

    Aortic arch repair in newborns and infants has traditionally been accomplished using a period of deep hypothermic circulatory arrest. To reduce the neurologic and cardiac dysfunction related to circulatory arrest and myocardial ischemia during complex aortic arch surgery, an alternative and novel strategy for cerebro-myocardial protection was recently developed, in which regional low-flow perfusion is combined with controlled and independent coronary perfusion. The aim of the present retrospective study was to assess the short-term and mid-term results of selective and independent cerebro-myocardial perfusion in neonatal aortic arch surgery. From April 2008 to August 2015, 28 consecutive neonates underwent aortic arch surgery under cerebro-myocardial perfusion. There were 17 males and 11 females, with a median age of 15 days (3-30 days) and a median body weight of 3 kg (1.6-4.2 kg), 9 (32%) of whom had low body weight. The duration of cerebro-myocardial perfusion was 30 ± 11 min (15-69 min). Renal dysfunction requiring a period of peritoneal dialysis was observed in 10 (36%) patients, while liver dysfunction was noted in only 3 (11%). There were three (11%) early and two late deaths during a median follow-up of 2.9 years (range 6 months-7.7 years), with an actuarial survival of 82% at 7 years. At the latest follow-up, no patient showed signs of cardiac or neurologic dysfunction. The present experience shows that a strategy of selective and independent cerebro-myocardial perfusion is safe, versatile, and feasible in high-risk neonates with complex congenital arch pathology. Encouraging outcomes were noted in terms of cardiac and neurological function, with limited end-organ morbidity.

  11. Mutation-selection models of codon substitution and their use to estimate selective strengths on codon usage

    DEFF Research Database (Denmark)

    Yang, Ziheng; Nielsen, Rasmus

    2008-01-01

    Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we implement a few population genetics models of codon substitution that explicitly consider mutation bias and natural selection at the DNA level. Selection on codon usage is modeled by introducing codon-fitness parameters which, together with mutation-bias parameters, predict optimal codon frequencies … codon usage in mammals. Estimates of selection coefficients nevertheless suggest that selection on codon usage is weak and most mutations are nearly neutral. The sensitivity of the analysis to the assumed mutation model is discussed.

  12. Heterosis may result in selection favouring the products of long-distance pollen dispersal in Eucalyptus.

    Directory of Open Access Journals (Sweden)

    João Costa E Silva

    Full Text Available Using native trees from near the northern and southern extremities of the relatively continuous eastern distribution of Eucalyptus globulus in Tasmania, we compared the progenies derived from natural open-pollination (OP) with those generated from within-region and long-distance outcrossing. Controlled outcrossing amongst eight parents - with four parents from each of the northern and southern regions - was undertaken using a diallel mating scheme. The progeny were planted in two field trials located within the species' native range in southern Tasmania, and their survival and diameter growth were monitored over a 13-year period. The survival and growth performances of all controlled cross types exceeded those of the OP progenies, consistent with inbreeding depression due to a combination of selfing and bi-parental inbreeding. The poorer survival of the northern regional (♀N♂N) outcrosses compared with the local southern regional (♀S♂S) outcrosses indicated differential selection against the former. Despite this maladaptation of the non-local ♀N♂N crosses at both southern sites, the survival of the inter-regional hybrids (♀N♂S and ♀S♂N) was never significantly different from that of the local ♀S♂S crosses. Significant site-dependent heterosis was detected for the growth of the surviving long-distance hybrids. This was expressed as mid-parent heterosis, particularly at the more northern planting site. Heterosis increased with age, while the difference between the regional ♀N♂N and ♀S♂S crosses remained insignificant at any age at either site. Nevertheless, the results for growth suggest that the fitness of individuals derived from long-distance crossing may be better at the more northern of the planting sites. Our results demonstrate the potential for early-age assessments of pollen dispersal to underestimate realised gene flow, with local inbreeding under natural open-pollination resulting in selection favouring the

  13. Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects

    Directory of Open Access Journals (Sweden)

    Guangjie Li

    2015-07-01

    Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of the common parameters when some true exogenous regressors are excluded. We propose a data-dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. The model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulation.
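
    The BIC-weighting idea behind Bayesian Model Averaging can be sketched in a few lines; this is a toy time-series version on simulated data, not the authors' fixed-effects panel estimator.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
y = np.zeros(T)
for t in range(1, T):                          # simulate an AR(1) series
    y[t] = 0.6 * y[t - 1] + rng.normal()

def fit_ar(y, p):
    """OLS fit of an AR(p) model; returns its BIC."""
    Y = y[p:]
    X = np.column_stack([y[p - k:len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    n = len(Y)
    return n * np.log(resid @ resid / n) + p * np.log(n)

bics = np.array([fit_ar(y, p) for p in (1, 2, 3)])
# Turn BIC differences into approximate posterior model probabilities,
# which can then weight each model's forecast (Bayesian Model Averaging).
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()
print(dict(zip((1, 2, 3), np.round(w, 3))))    # weight concentrates on AR(1)
```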

  14. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasts is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research focuses on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected and random forests (RF) are employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of the input variable selection based on RF. The results of the case study show that, by removing uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
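
    The selection step can be compacted as follows using scikit-learn; the synthetic data and feature names are invented, and the paper's candidate variables and kernel extreme learning machine forecaster are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n = 1000
# Synthetic candidate inputs: lagged wind speeds, pressure, and pure noise.
X = rng.normal(size=(n, 4))
names = ["speed_lag1", "speed_lag2", "pressure", "noise"]
y = 0.7 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + 0.1 * rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1])
print(ranked)                        # lagged speeds dominate; noise ranks last

# Keep only features above an importance threshold for the final model.
keep = [i for i, imp in enumerate(rf.feature_importances_) if imp > 0.05]
X_selected = X[:, keep]
```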

  15. Mathematical Model of the Emissions of a selected vehicle

    Directory of Open Access Journals (Sweden)

    Matušů Radim

    2014-10-01

    Full Text Available The article addresses the quantification of exhaust emissions from gasoline engines during transient operation. The main targeted emissions are carbon monoxide and carbon dioxide. The result is a mathematical model describing the production of individual emissions components in all modes (static and dynamic. It also describes the procedure for the determination of emissions from the engine’s operating parameters. The result is compared with other possible methods of measuring emissions. The methodology is validated using the data from an on-road measurement. The mathematical model was created on the first route and validated on the second route.

  16. A reaction-diffusion model to capture disparity selectivity in primary visual cortex.

    Directory of Open Access Journals (Sweden)

    Mohammed Sultan Mohiuddin Siddiqui

    Full Text Available Decades of experimental studies are available on disparity selective cells in the visual cortex of macaque and cat. Recently, a local disparity map for iso-orientation sites with near-vertical edge preference was reported in area 18 of cat visual cortex. No experiment has yet reported a complete disparity map in V1. A disparity map for layer IV in V1 can provide insight into how the disparity selective complex cell receptive field is organized from simple cell subunits. Though substantial amounts of experimental data on disparity selective cells are available, no model of the receptive field development of such cells, or of disparity map development, exists in the literature. We model disparity selectivity in layer IV of cat V1 using a reaction-diffusion two-eye paradigm. In this model, the wiring between the LGN and cortical layer IV is determined by the resource an LGN cell has for supporting connections to cortical cells and by competition for target space in layer IV. While competing for target space, LGN cells of the same type, irrespective of whether they belong to the left-eye-specific or right-eye-specific LGN layer, cooperate with each other while trying to push off the other type. Our model captures realistic 2D disparity selective simple cell receptive fields, their response properties and the disparity map, along with the orientation and ocular dominance maps. There is a lack of correlation between ocular dominance and disparity selectivity at the cell population level. At the map level, the disparity selectivity topography is not random but weakly clustered for similar preferred disparities. This is similar to the experimental result reported for macaque. The details of the weakly clustered disparity selectivity map in V1 indicate two types of complex cell receptive field organization.

  17. Seismarmara experiment: results from reprocessing of selected multi-channel seismic reflection profiles

    Science.gov (United States)

    Cetin, S.; Voogd, B.; Carton, H.; Laigle, M.; Becel, A.; Saatcilar, R.; Singh, S.; Hirn, A.

    2003-04-01

    geometries makes it possible to obtain an enhanced image of different targets such as faults, basins with several seconds of sedimentary infill, the lower crust and the Moho structure. Along selected lines, reflected events from the top of the lower crust and the Moho boundary (7 to 12 seconds two-way time) have been identified; together with wide-angle data recorded at fixed receivers (OBS and land stations), they will contribute to a joint structural and velocity model at crustal scale. The improved images shed light on the tectonic evolution of the Marmara Sea at crustal scale and contribute to its discussion in terms of structure and earthquake activity.

  18. Quantitative genetic models of sexual selection by male choice.

    Science.gov (United States)

    Nakahashi, Wataru

    2008-09-01

    There are many examples of male mate choice for female traits that tend to be associated with high fertility. I develop quantitative genetic models of a female trait and a male preference to show when such a male preference can evolve. I find that a disagreement between the fertility maximum and the viability maximum of the female trait is necessary for directional male preference (preference for extreme female trait values) to evolve. Moreover, when there is a shortage of available male partners or variance in male nongenetic quality, strong male preference can evolve. Furthermore, I also show that males evolve to exhibit a stronger preference for females that are more feminine (less resemblance to males) than the average female when there is a sexual dimorphism caused by fertility selection which acts only on females.

  19. The effect of bathymetric filtering on nearshore process model results

    Science.gov (United States)

    Plant, N.G.; Edwards, K.L.; Kaihatu, J.M.; Veeramony, J.; Hsu, L.; Holland, K.T.

    2009-01-01

    Nearshore wave and flow model results are shown to exhibit a strong sensitivity to the resolution of the input bathymetry. In this analysis, bathymetric resolution was varied by applying smoothing filters to high-resolution survey data to produce a number of bathymetric grid surfaces. We demonstrate that the sensitivity of model-predicted wave height and flow to variations in bathymetric resolution had different characteristics. Wave height predictions were most sensitive to resolution of cross-shore variability associated with the structure of nearshore sandbars. Flow predictions were most sensitive to the resolution of intermediate-scale alongshore variability associated with the prominent sandbar rhythmicity. Flow sensitivity increased in cases where a sandbar was closer to shore and shallower. Perhaps the most surprising implication of these results is that the interpolation and smoothing of bathymetric data could be optimized differently for the wave and flow models. We show that errors between observed and modeled flow and wave heights are well predicted by comparing model simulation results using progressively filtered bathymetry to results from the highest-resolution simulation. The damage done by over-smoothing or inadequate sampling can therefore be estimated using model simulations. We conclude that the ability to quantify prediction errors will be useful for supporting future data assimilation efforts that require this information.
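
    The kind of filtering experiment described can be mimicked in one dimension: apply progressively stronger smoothing to a synthetic bathymetry profile and measure the departure from the full-resolution case. This toy analogue does not involve the wave or flow models themselves.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1000, 1001)                    # cross-shore distance (m)
# Synthetic bathymetry: planar slope plus a sandbar and small-scale noise.
depth = 0.01 * x + 2.0 * np.exp(-((x - 300) ** 2) / 2500) \
        + 0.1 * rng.normal(size=x.size)

def smooth(h, width):
    """Moving-average filter with the given window width (in samples)."""
    kernel = np.ones(width) / width
    return np.convolve(h, kernel, mode="same")

for width in (5, 25, 125):
    rms = np.sqrt(np.mean((smooth(depth, width) - depth) ** 2))
    print(f"window {width:3d}: RMS departure {rms:.3f} m")
# Wider filters flatten the sandbar, so the departure grows with the window.
```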

  20. Model selection for convolutive ICA with an application to spatiotemporal analysis of EEG

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2007-01-01

    We present a new algorithm for maximum likelihood convolutive independent component analysis (ICA) in which components are unmixed using stable autoregressive filters determined implicitly by estimating a convolutive model of the mixing process. By introducing a convolutive mixing model...... for the components, we show how the order of the filters in the model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving a subspace of independent components in electroencephalography (EEG). Initial results suggest that in some cases, convolutive mixing may...

  1. A Heckman selection model for the safety analysis of signalized intersections.

    Directory of Open Access Journals (Sweden)

    Xuecai Xu

    Full Text Available The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. This study explores a Heckman selection model of crash rate and severity at different levels, and a two-step procedure is used to investigate them. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury and killed or seriously injured (KSI) crashes, respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on the traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during two years. The results of the proposed two-step Heckman selection model illustrate the necessity of estimating different crash rates for different crash severity levels. A comparison with existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance of signalized intersections.
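
    A generic two-step Heckman estimator follows the classic recipe: a probit selection equation, then an outcome regression augmented with the inverse Mills ratio. The sketch below uses simulated data and statsmodels; the paper's intersection-level variables are not reproduced.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 2000
z = rng.normal(size=(n, 2))                          # selection covariates
x = z[:, :1]                                         # outcome covariate
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n).T

select = (0.5 + z @ np.array([1.0, -0.7]) + u) > 0   # observed or not
y = 1.0 + 2.0 * x[:, 0] + e                          # latent outcome
y_obs = np.where(select, y, np.nan)

# Step 1: probit model for the selection mechanism.
Zc = sm.add_constant(z)
probit = sm.Probit(select.astype(float), Zc).fit(disp=0)
xb = Zc @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)                  # inverse Mills ratio

# Step 2: OLS on the selected sample, with the Mills ratio as a regressor.
Xc = sm.add_constant(np.column_stack([x[select, 0], mills[select]]))
ols = sm.OLS(y_obs[select], Xc).fit()
print(ols.params)   # slope on x recovered near 2.0; Mills term soaks up bias
```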

  2. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model, based on estimating missing values followed by variable selection, to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to handle the missing values. Second, key variables are identified via factor analysis and unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared against the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model combined with variable selection has better forecasting performance than the listed comparison models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.

  3. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model, based on estimating missing values followed by variable selection, to forecast a reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, based on the ordering of the data, as the research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to handle the missing values. Second, key variables are identified via factor analysis and unimportant variables are then deleted sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level, which is compared against the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model combined with variable selection has better forecasting performance than the listed comparison models. In addition, the experiments show that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.

  4. The animal model determines the results of Aeromonas virulence factors

    Directory of Open Access Journals (Sweden)

    Alejandro Romero

    2016-10-01

    Full Text Available The selection of an experimental animal model is of great importance in the study of bacterial virulence factors. Here, bath infection of zebrafish larvae is proposed as an alternative model for studying the virulence factors of A. hydrophila. Intraperitoneal infections in mice and trout were compared with bath infections in zebrafish larvae using specific mutants. The great advantage of this model is that bath immersion mimics the natural route of infection, and injury to the tail also provides a natural portal of entry for the bacteria. The role of the T3SS in the virulence of A. hydrophila was analysed using the AH-1::aopB mutant. This mutant was less virulent than the wild-type strain when inoculated into zebrafish larvae, as described in other vertebrates. However, the zebrafish model exhibited slight differences in mortality kinetics previously observed only in invertebrate models. Infections using the mutant AH-1∆vapA, lacking the gene coding for the surface S-layer, suggested that this protein was not strictly necessary once the bacteria were inside the host, but that it contributed to the inflammatory response. Only when healthy zebrafish larvae were infected did the mutant produce less mortality than the wild type. Variations between models were evidenced using AH-1∆rmlB, which lacks the O-antigen lipopolysaccharide (LPS), and AH-1∆wahD, which lacks the O-antigen LPS and part of the LPS outer core. Both mutants showed decreased mortality in all of the animal models, but the differences between them were only observed in injured zebrafish larvae, suggesting that residues from the LPS outer core must be important for virulence. The greatest differences were observed using AH-1ΔFlaB-J (lacking polar flagella and unable to swim) and AH-1::motX (non-motile but producing flagella). They were as pathogenic as the wild-type strain when injected into mice and trout, but no mortalities were registered in zebrafish larvae. This study

  5. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates the management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating the models, the software can automatically fit all of them to the data and, if data are available, provides a ranking for model selection. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine; thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software.
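
    The master-model idea can be mimicked generically: enumerate candidate models by leaving out terms of a master model, fit each, and rank them. The sketch below uses least-squares submodels and AIC as an analogy, not ModelMage's actual SBML/COPASI workflow.

```python
import itertools
import numpy as np

rng = np.random.default_rng(6)
n = 100
x1, x2, x3 = rng.normal(size=(3, n))
y = 2.0 * x1 - 1.0 * x2 + 0.3 * rng.normal(size=n)   # x3 is irrelevant

master = {"x1": x1, "x2": x2, "x3": x3}              # the "master model" terms

def aic(y, X, k):
    """AIC of a least-squares fit with k parameters."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * k

# Generate all candidate models by leaving out terms of the master model,
# fit each, and rank by AIC (lower is better).
ranking = []
for r in range(1, len(master) + 1):
    for terms in itertools.combinations(master, r):
        X = np.column_stack([np.ones(n)] + [master[t] for t in terms])
        ranking.append((aic(y, X, X.shape[1]), terms))
for score, terms in sorted(ranking)[:3]:
    print(f"{score:8.1f}  {terms}")          # ('x1', 'x2') should top the list
```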

  6. On theoretical models of gene expression evolution with random genetic drift and natural selection.

    Directory of Open Access Journals (Sweden)

    Osamu Ogasawara

    2009-11-01

    Full Text Available The relative contributions of natural selection and random genetic drift are a major source of debate in the study of gene expression evolution, which is hypothesized to serve as a bridge from molecular to phenotypic evolution. It has been suggested that the conflict between views is caused by the lack of a definite model of the neutral hypothesis that can describe the long-run behavior of evolutionary change in mRNA abundance. Therefore, previous studies have used inadequate analogies with the neutral prediction of other phenomena, such as amino acid or nucleotide sequence evolution, as the null hypothesis of their statistical inference. In this study, we introduced two novel theoretical models, one based on neutral drift and the other assuming natural selection, by focusing on a common property of the distribution of mRNA abundance among a variety of eukaryotic cells, which reflects the result of long-term evolution. Our results demonstrated that (1) our models can reproduce two independently found phenomena simultaneously: the time development of gene expression divergence and Zipf's law of the transcriptome; (2) cytological constraints can be explicitly formulated to describe long-term evolution; (3) the model assuming that natural selection optimized relative mRNA abundance was more consistent with previously published observations than the model of optimized absolute mRNA abundances. The models introduced in this study give a formulation of evolutionary change in the mRNA abundance of each gene as a stochastic process, on the basis of previously published observations. This provides a foundation for interpreting observed data in studies of gene expression evolution, including identifying an adequate time scale for discriminating the effect of natural selection from that of random genetic drift of selectively neutral variations.

  7. Selection of Children for the KEEP Demonstration School: Criteria, Procedures, and Results. Technical Report #13.

    Science.gov (United States)

    Mays, Violet; And Others

    This brief report describes the selection of the pupil population of the Kamehameha Early Education Program (KEEP) Demonstration School. The pupil population must be representative of the Kalihi community (an urban area of Honolulu) from which it is drawn. An attempt was made to include 75% Hawaiian and 25% non-Hawaiian children, and to select equal…

  8. Within-host selection of drug resistance in a mouse model reveals dose-dependent selection of atovaquone resistance mutations

    NARCIS (Netherlands)

    Nuralitha, Suci; Murdiyarso, Lydia S.; Siregar, Josephine E.; Syafruddin, Din; Roelands, Jessica; Verhoef, Jan; Hoepelman, Andy I.M.; Marzuki, Sangkot

    2017-01-01

    The evolutionary selection of malaria parasites within an individual host plays a critical role in the emergence of drug resistance. We have compared the selection of atovaquone resistance mutants in mouse models reflecting two different causes of failure of malaria treatment, an inadequate

  9. Circulation in the Gulf of Trieste: measurements and model results

    International Nuclear Information System (INIS)

    Bogunovici, B.; Malacic, V.

    2008-01-01

    The study presents the seasonal variability of currents in the southern part of the Gulf of Trieste. A time-series analysis of currents and wind stress for the period 2003-2006, measured by the coastal oceanographic buoy, was conducted. A comparison between these data and results obtained from a numerical model of circulation in the Gulf was performed to validate the model results. Three different approaches were applied to the wind data to determine the wind stress. Similarities were found between the Kondo and Smith approaches, while the method of Vera shows differences which were particularly noticeable for lower (≤ 1 m/s) and higher wind speeds (≥ 15 m/s). Mean currents in the surface layer are generally outflow currents from the Gulf due to wind forcing (bora). However, in all other depth layers inflow currents are dominant. With principal component analysis (PCA), major and minor axes were determined for all seasons. The major axis of maximum variance in the years 2003-2006 prevails in the NE-SW direction, which is parallel to the coastline. Comparison of observations and model results shows that currents are similar (in direction) for the surface and bottom layers but are significantly different for the middle layer (5-13 m). At depths between 14-21 m, velocities are comparable in direction as well as in magnitude, even though model values are higher. Higher values of modelled currents at the surface and near the bottom are explained by the higher values of wind stress that were used in the model as driving input with respect to the stress calculated from the measured winds. Larger values of modelled currents near the bottom are related to the larger inflow that needs to compensate for the larger modelled outflow at the surface. However, inspection of the vertical structure of temperature, salinity and density shows that the model reproduces a weaker density gradient, which enables the penetration of the outflow surface currents to larger depths.

  10. Objective Model Selection for Identifying the Human Feedforward Response in Manual Control.

    Science.gov (United States)

    Drop, Frank M; Pool, Daan M; van Paassen, Marinus Rene M; Mulder, Max; Bulthoff, Heinrich H

    2018-01-01

    Realistic manual control tasks typically involve predictable target signals and random disturbances. The human controller (HC) is hypothesized to use a feedforward control strategy for target-following, in addition to feedback control for disturbance-rejection. Little is known about human feedforward control, partly because common system identification methods have difficulty in identifying whether, and (if so) how, the HC applies a feedforward strategy. In this paper, an identification procedure is presented that aims at objective model selection for identifying the human feedforward response, using linear time-invariant autoregressive with exogenous input (ARX) models. A new model selection criterion is proposed to decide on the model order (number of parameters) and the presence of feedforward in addition to feedback. For a range of typical control tasks, it is shown by means of Monte Carlo computer simulations that the classical Bayesian information criterion (BIC) leads to selecting models that contain a feedforward path from data generated by a pure feedback model: "false-positive" feedforward detection. To eliminate these false-positives, the modified BIC includes an additional penalty on model complexity. The appropriate weighting is found through computer simulations with a hypothesized HC model prior to performing a tracking experiment. Experimental human-in-the-loop data will be considered in future work. With appropriate weighting, the method correctly identifies the HC dynamics in a wide range of control tasks, without false-positive results.
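
    A stripped-down version of the selection mechanism fits ARX models of increasing order and picks the one minimizing a BIC whose penalty is inflated by a weight; the weight value and data below are assumed for illustration, not the authors' exact criterion.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
u = rng.normal(size=n)                     # exogenous (target) signal
y = np.zeros(n)
for t in range(2, n):                      # true system: second-order ARX
    y[t] = 1.2 * y[t-1] - 0.5 * y[t-2] + 0.8 * u[t-1] + 0.1 * rng.normal()

def arx_bic(y, u, p, w=2.0):
    """Least-squares ARX(p, p) fit; BIC with the usual penalty
    inflated by a weight w > 1 (the extra complexity penalty)."""
    Y = y[p:]
    X = np.column_stack(
        [y[p - k:n - k] for k in range(1, p + 1)] +   # past outputs
        [u[p - k:n - k] for k in range(1, p + 1)])    # past inputs
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    m, k = len(Y), X.shape[1]
    return m * np.log(resid @ resid / m) + w * k * np.log(m)

scores = {p: arx_bic(y, u, p) for p in (1, 2, 3, 4)}
print(min(scores, key=scores.get))         # the modified BIC picks p = 2
```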

  11. Melt coolability modeling and comparison to MACE test results

    International Nuclear Information System (INIS)

    Farmer, M.T.; Sienicki, J.J.; Spencer, B.W.

    1992-01-01

    An important question in the assessment of severe accidents in light water nuclear reactors is the ability of water to quench a molten corium-concrete interaction and thereby terminate the accident progression. As part of the Melt Attack and Coolability Experiment (MACE) Program, phenomenological models of the corium quenching process are under development. The modeling approach considers both bulk cooldown and crust-limited heat transfer regimes, as well as criteria for the pool thermal hydraulic conditions which separate the two regimes. The model is then compared with results of the MACE experiments

  12. Gamma radiation measurement in select sand samples from Camburi beach - Vitoria, Espirito Santo, Brazil: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Livia F.; Pecequilo, Brigitte R.S.; Aquino, Reginaldo R., E-mail: lfbarros@ipen.b, E-mail: brigitte@ipen.b, E-mail: raquino@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The variation of natural radioactivity along the surface of the beach sands of Camburi, located in Vitoria, capital of Espirito Santo, southeastern Brazil, was determined from the contents of ²²⁶Ra, ²³²Th and ⁴⁰K. Eleven collecting points were selected along the entire 6 km extension of Camburi beach. Sand samples collected from all established points in January 2011 were dried, sealed in standard 100 mL polyethylene flasks and measured by high-resolution gamma spectrometry after a 4-week ingrowth period, in order to allow secular equilibrium in the ²³⁸U and ²³²Th series. The ²²⁶Ra concentration was determined from the weighted average concentrations of ²¹⁴Pb and ²¹⁴Bi. The ²³²Th concentration was determined from the weighted average concentrations of ²²⁸Ac, ²¹²Pb and ²¹²Bi, and the ⁴⁰K from its single gamma transition. Preliminary results show activity concentrations varying from 5 Bq.kg⁻¹ to 222 Bq.kg⁻¹ for ²²⁶Ra and from 14 Bq.kg⁻¹ to 1074 Bq.kg⁻¹ for ²³²Th, both with the highest values for Camburi South and Central. For ⁴⁰K, the activity concentrations ranged from 14 Bq.kg⁻¹ to 179 Bq.kg⁻¹, with the highest values obtained for Camburi South. (author)

  13. Gamma radiation measurement in select sand samples from Camburi beach - Vitoria, Espirito Santo, Brazil: preliminary results

    International Nuclear Information System (INIS)

    Barros, Livia F.; Pecequilo, Brigitte R.S.; Aquino, Reginaldo R.

    2011-01-01

    The variation of natural radioactivity along the surface of the beach sands of Camburi, located in Vitoria, capital of Espirito Santo, southeastern Brazil, was determined from the contents of ²²⁶Ra, ²³²Th and ⁴⁰K. Eleven collecting points were selected along the entire 6 km extension of Camburi beach. Sand samples collected from all established points in January 2011 were dried, sealed in standard 100 mL polyethylene flasks and measured by high-resolution gamma spectrometry after a 4-week ingrowth period, in order to allow secular equilibrium in the ²³⁸U and ²³²Th series. The ²²⁶Ra concentration was determined from the weighted average concentrations of ²¹⁴Pb and ²¹⁴Bi. The ²³²Th concentration was determined from the weighted average concentrations of ²²⁸Ac, ²¹²Pb and ²¹²Bi, and the ⁴⁰K from its single gamma transition. Preliminary results show activity concentrations varying from 5 Bq.kg⁻¹ to 222 Bq.kg⁻¹ for ²²⁶Ra and from 14 Bq.kg⁻¹ to 1074 Bq.kg⁻¹ for ²³²Th, both with the highest values for Camburi South and Central. For ⁴⁰K, the activity concentrations ranged from 14 Bq.kg⁻¹ to 179 Bq.kg⁻¹, with the highest values obtained for Camburi South. (author)

  14. Radiation impact on the characteristics of optical glasses test results on a selected set of materials

    Science.gov (United States)

    Fruit, Michel; Gussarov, Andrei; Berghmans, Francis; Doyle, Dominic; Ulbrich, Gerd

    2017-11-01

    It is well known within the space optics community that radiation can significantly affect the transmittance of glasses. To overcome this drawback, glass manufacturers have developed cerium-doped counterparts of classical glasses. These doped glasses display much less transmittance sensitivity to radiation. Still, the impact of radiation on refractive index is less well known and may affect classical and cerium-doped glasses alike. ESTEC has initiated an R&D program with the aim of establishing a comprehensive database gathering radiation sensitivity data, called dose coefficients, for all the glass optical parameters (transmittance, refractive index, compaction, etc.). The first part of this study, to define the methodology for such a database, is run by ASTRIUM SAS in co-operation with SCK CEN. This covers theoretical studies associated with testing of a selected set of classical and "radiation hardened" glasses. We first present the theoretical background of this study and then give the results obtained so far.

  15. The results of selective cytogenetic monitoring of Chernobyl accident victims in the Ukraine

    International Nuclear Information System (INIS)

    Pilinskaya, M.A.

    1996-01-01

    Selective cytogenetic monitoring of the highest-priority groups of Chernobyl disaster victims has been carried out since 1987. In 1992-1993, 125 liquidators (irradiated mainly in 1986) and 42 persons recovering from acute radiation sickness of the second and third degrees of severity were examined. Cytogenetic effects (an elevated level of unstable as well as stable markers of radiation exposure) were found in all groups, which showed a positive correlation with the initial degree of irradiation severity even 6-7 y after the accident. Comparative scoring of conventional staining vs. G-banding in 10 liquidators showed identical rates of unstable aberrations. At the same time, the yield of stable aberrations for G-banded slides exceeded the frequency for conventional staining. In order to study the possible mutagenic activity of chronic low levels of irradiation, cytogenetic monitoring of some critical groups of the population (especially children, and occupational groups such as tractor drivers and foresters) living in areas of the Ukraine contaminated by radionuclides was carried out. In all the examined groups, a significant increase in the frequency of aberrant metaphases, chromosome aberrations (both unstable and stable), and chromatid aberrations was observed. Data gathered from groups of children reflect the intensity of mutagenic impact on the studied populations and demonstrate a positive correlation with the duration of exposure. Results of the cytogenetic examination of adults confirmed the importance of considering the contribution of occupational radiation exposure to the genetic effects of Chernobyl accident factors on the population of contaminated areas. 17 refs., 3 tabs

  16. The healthy building intervention study: Objectives, methods and results of selected environmental measurements

    Energy Technology Data Exchange (ETDEWEB)

    Fisk, W.J.; Faulkner, D.; Sullivan, D. [and others]

    1998-02-17

    To test proposed methods for reducing SBS symptoms and to learn about the causes of these symptoms, a double-blind controlled intervention study was designed and implemented. This study utilized two different interventions designed to reduce occupants' exposures to airborne particles: (1) high-efficiency filters in the building's HVAC systems; and (2) thorough cleaning of carpeted floors and fabric-covered chairs with an unusually powerful vacuum cleaner. The study population was the workers on the second and fourth floors of a large office building with mechanical ventilation, air conditioning, and sealed windows. Interventions were implemented on one floor while the occupants on the other floor served as a control group. For the enhanced-filtration intervention, a multiple crossover design was used (a crossover is a repeat of the experiment with the former experimental group as the control group and vice versa). Demographic and health symptom data were collected via an initial questionnaire during the first study week, and health symptom data were obtained each week, for eight additional weeks, via weekly questionnaires. A large number of indoor environmental parameters were measured during the study, including air temperatures and humidities, carbon dioxide concentrations, particle concentrations, concentrations of several airborne bioaerosols, and concentrations of several microbiologic compounds within the dust sampled from floors and chairs. This report describes the study methods and summarizes the results of selected environmental measurements.

  17. Selective fallopian tube catheterisation in female infertility: clinical results and absorbed radiation dose

    International Nuclear Information System (INIS)

    Nakamura, K.; Ishiguchi, T.; Maekoshi, H.; Ando, Y.; Tsuzaka, M.; Tamiya, T.; Suganuma, N.; Ishigaki, T.

    1996-01-01

    Clinical results of fluoroscopic fallopian tube catheterisation and absorbed radiation doses during the procedure were evaluated in 30 infertility patients with unilateral or bilateral tubal obstruction documented on hysterosalpingography. The staged technique consisted of contrast injection through an intrauterine catheter with a vacuum cup device, ostial salpingography with the wedged catheter, and selective salpingography with a coaxial microcatheter. Of 45 fallopian tubes examined, 35 (78%) were demonstrated by the procedure, and at least one tube was newly demonstrated in 26 patients (87%). Six of these patients conceived spontaneously in the follow-up period of 1-11 months. Four pregnancies were intrauterine and 2 were ectopic. This technique provided accurate and detailed information in the diagnosis and treatment of tubal obstruction in infertility patients. The absorbed radiation dose to the ovary in the average standardised procedure was estimated to be 0.9 cGy. Further improvement in the X-ray equipment and technique is required to reduce the radiation dose. (orig.). With 3 figs., 3 tabs

  18. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  19. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  20. A computational neural model of goal-directed utterance selection.

    Science.gov (United States)

    Klein, Michael; Kamp, Hans; Palm, Guenther; Doya, Kenji

    2010-06-01

    It is generally agreed that much of human communication is motivated by extra-linguistic goals: we often make utterances in order to get others to do something, or to make them support our cause, or adopt our point of view, etc. However, thus far a computational foundation for this view on language use has been lacking. In this paper we propose such a foundation using Markov Decision Processes. We borrow computational components from the field of action selection and motor control, where a neurobiological basis of these components has been established. In particular, we make use of internal models (i.e., next-state transition functions defined on current state action pairs). The internal model is coupled with reinforcement learning of a value function that is used to assess the desirability of any state that utterances (as well as certain non-verbal actions) can bring about. This cognitive architecture is tested in a number of multi-agent game simulations. In these computational experiments an agent learns to predict the context-dependent effects of utterances by interacting with other agents that are already competent speakers. We show that the cognitive architecture can account for acquiring the capability of deciding when to speak in order to achieve a certain goal (instead of performing a non-verbal action or simply doing nothing), whom to address and what to say.
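
    The underlying machinery can be illustrated with generic value iteration over an invented two-state "speak or stay silent" MDP; this is a toy, not the paper's architecture.

```python
import numpy as np

# States: 0 = listener unaware, 1 = listener persuaded (toy example).
# Actions: 0 = stay silent, 1 = make an utterance.
P = np.zeros((2, 2, 2))                 # P[s, a, s'] transition model
P[0, 0] = [1.0, 0.0]                    # silence: nothing changes
P[0, 1] = [0.4, 0.6]                    # speaking may persuade the listener
P[1, :, 1] = 1.0                        # once persuaded, stays persuaded
R = np.array([[0.0, -0.1],              # small effort cost for speaking
              [1.0,  0.9]])             # reward for a persuaded listener

gamma, V = 0.9, np.zeros(2)
for _ in range(200):                    # value iteration to a fixed point
    Q = R + gamma * P @ V               # Q[s, a] action values
    V = Q.max(axis=1)
policy = Q.argmax(axis=1)
print(V, policy)    # in state 0 the agent chooses to speak; in state 1, not
```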

  1. Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model

    Directory of Open Access Journals (Sweden)

    Ge-Jin Chu

    2014-01-01

    Full Text Available Variable selection is an important issue in regression, and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox’s proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the real diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.

  2. PRESEMO - a predictive model of codend selectivity - a tool for fishery managers

    DEFF Research Database (Denmark)

    O'Neill, F.G.; Herrmann, Bent

    2007-01-01

    parameters are expressed in terms of the gear design parameters and in terms of both catch size and gear design parameters. The potential use of these results in a management context and for the development of more selective gears is highlighted by plotting iso-l(50) and iso-sr curves used to identify gear...... design parameters that give equal estimates of the 50% retention length and the selection range, respectively. It is emphasized that this approach can be extended to consider the influence of other design parameters and, if sufficient relevant quantitative information exists, biological and behavioural...... parameters. As such, the model presented here will provide a better understanding of the selection process, permit a more targeted approach to codend selectivity experiments, and assist fishery managers to assess the impact of proposed technical measures that are introduced to reduce the catch of undersized...

  3. Modelling of tracer-kinetic results using xylene isomerization as an example

    International Nuclear Information System (INIS)

    Bauer, F.J.; Dermietzel, J.; Roesseler, M.; Koch, H.

    1976-01-01

    The analysis of results from differential or/and integral reactor experiments often admits the interpretation of a chemical reaction in several ways. In addition, the use of mathematical methods for model selection and the planning of experiments is rendered more difficult by the large confidence intervals of the ascertained model parameters. The application of radioactively labelled molecules results in improved knowledge of the reaction mechanisms as well as a better assessment of the parameters obtained. This is shown on the basis of modelling the isomerization of xylene. (author)

  4. Fuzzy Multicriteria Model for Selection of Vibration Technology

    Directory of Open Access Journals (Sweden)

    María Carmen Carnero

    2016-01-01

    Full Text Available The benefits of applying the vibration analysis program are well known and have been so for decades. A large number of contributions have been produced discussing new diagnostic, signal treatment, technical parameter analysis, and prognosis techniques. However, to obtain the expected benefits from a vibration analysis program, it is necessary to choose the instrumentation which guarantees the best results. Despite its importance, in the literature, there are no models to assist in taking this decision. This research describes an objective model using Fuzzy Analytic Hierarchy Process (FAHP) to make a choice of the most suitable technology among portable vibration analysers. The aim is to create an easy-to-use model for processing, manufacturing, services, and research organizations, to guarantee adequate decision-making in the choice of vibration analysis technology. The model described recognises that judgements are often based on ambiguous, imprecise, or inadequate information that cannot provide precise values. The model incorporates judgements from several decision-makers who are experts in the field of vibration analysis, maintenance, and electronic devices. The model has been applied to a Health Care Organization.

  5. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    variables. This paper, first, explains theoretically how selection on unobserved variables leads to waning coefficients and, second, illustrates empirically how selection leads to biased estimates of the effect of family background on educational transitions. Our empirical analysis using data from...

  6. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification

  7. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Full Text Available Objective – This research sought to develop a cognitive model that expresses how marketing professionals understand the relationship between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions – The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.

  8. Privacy-Preserving Evaluation of Generalization Error and Its Application to Model and Attribute Selection

    Science.gov (United States)

    Sakuma, Jun; Wright, Rebecca N.

    Privacy-preserving classification is the task of learning or training a classifier on the union of privately distributed datasets without sharing the datasets. The emphasis of existing studies in privacy-preserving classification has primarily been put on the design of privacy-preserving versions of particular data mining algorithms. However, in classification problems, preprocessing and postprocessing—such as model selection or attribute selection—play a prominent role in achieving higher classification accuracy. In this paper, we show that the generalization error of classifiers in privacy-preserving classification can be securely evaluated without sharing prediction results. Our main technical contribution is a new generalized Hamming distance protocol that is universally applicable to the preprocessing and postprocessing of various privacy-preserving classification problems, such as model selection in support vector machines and attribute selection in naive Bayes classification.
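
    Stripped of the cryptography, the quantity the protocol evaluates is the Hamming distance between predicted and true label vectors, i.e. the misclassification count used for model selection. The sketch below computes it in the clear on invented label vectors; the secure two-party version would obtain the same number without either party revealing its vector.

        import numpy as np

        def hamming_error(y_true, y_pred):
            # fraction of positions where the two label vectors disagree
            return np.sum(y_true != y_pred) / len(y_true)

        y_true = np.array([0, 1, 1, 0, 1, 0])
        preds = {"svm_rbf":     np.array([0, 1, 0, 0, 1, 0]),   # hypothetical models
                 "naive_bayes": np.array([1, 1, 1, 0, 0, 0])}

        best = min(preds, key=lambda m: hamming_error(y_true, preds[m]))
        print(best)  # model selection by lowest (securely evaluable) error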

  9. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
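
    One concrete building block consistent with this approach is the rank-based estimate of the latent Gaussian correlation in an elliptical-copula model, sigma = sin(pi/2 * tau), where tau is the Kendall rank correlation. The sketch below applies it to synthetic return series; it illustrates the estimator family, not the authors' full stability-inference pipeline.

        import numpy as np
        from scipy.stats import kendalltau

        rng = np.random.default_rng(0)
        x = rng.standard_normal(500)                  # synthetic "returns"
        y = 0.6 * x + 0.8 * rng.standard_normal(500)  # true correlation is 0.6

        tau, _ = kendalltau(x, y)
        sigma_hat = np.sin(0.5 * np.pi * tau)         # rank-based latent correlation
        print(round(sigma_hat, 3))                    # close to 0.6, robust to heavy tails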

  10. Functional results-oriented healthcare leadership: a novel leadership model.

    Science.gov (United States)

    Al-Touby, Salem Said

    2012-03-01

    This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that it is important that the ideal healthcare leadership emphasizes the outcomes of the patient care more than processes and structures used to deliver such care; and secondly, that the leadership must strive to attain effectiveness of their care provision and not merely targeting the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has namely, the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Result-Oriented healthcare leadership model. The recommended Functional-Results Oriented leadership model embosses the results element on top of the other three elements so that every effort on healthcare leadership is directed towards attaining excellent patient outcomes.

  11. A Bayesian Approach to Model Selection in Hierarchical Mixtures-of-Experts Architectures.

    Science.gov (United States)

    Tanner, Martin A.; Peng, Fengchun; Jacobs, Robert A.

    1997-03-01

    There does not exist a statistical model that shows good performance on all tasks. Consequently, the model selection problem is unavoidable; investigators must decide which model is best at summarizing the data for each task of interest. This article presents an approach to the model selection problem in hierarchical mixtures-of-experts architectures. These architectures combine aspects of generalized linear models with those of finite mixture models in order to perform tasks via a recursive "divide-and-conquer" strategy. Markov chain Monte Carlo methodology is used to estimate the distribution of the architectures' parameters. One part of our approach to model selection attempts to estimate the worth of each component of an architecture so that relatively unused components can be pruned from the architecture's structure. A second part of this approach uses a Bayesian hypothesis testing procedure in order to differentiate inputs that carry useful information from nuisance inputs. Simulation results suggest that the approach presented here adheres to the dictum of Occam's razor; simple architectures that are adequate for summarizing the data are favored over more complex structures. Copyright 1997 Elsevier Science Ltd. All Rights Reserved.

  12. The effect of mis-specification on mean and selection between the Weibull and lognormal models

    Science.gov (United States)

    Jia, Xiang; Nadarajah, Saralees; Guo, Bo

    2018-02-01

    The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence could be ignored if some special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present a published dataset to illustrate the study in this paper.
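
    The mis-specification effect is easy to reproduce numerically: fit both families to the same sample and compare the implied means. The sketch below does this with scipy on synthetic lognormal data; the sample size and parameters are invented for illustration.

        import numpy as np
        from scipy.stats import lognorm, weibull_min
        from scipy.special import gamma

        rng = np.random.default_rng(1)
        data = rng.lognormal(mean=1.0, sigma=0.5, size=1000)  # true model: lognormal

        s_ln, _, scale_ln = lognorm.fit(data, floc=0)         # s = sigma, scale = exp(mu)
        mean_ln = scale_ln * np.exp(s_ln ** 2 / 2)            # lognormal MLE of the mean

        c_w, _, scale_w = weibull_min.fit(data, floc=0)       # wrong family: QMLE
        mean_w = scale_w * gamma(1 + 1 / c_w)                 # Weibull mean

        print(mean_ln, mean_w, data.mean())                   # compare implied means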

  13. Management Model for Evaluation and Selection of Engineering Equipment Suppliers for Construction Projects in Iraq

    Directory of Open Access Journals (Sweden)

    Kadhim Raheem Erzaij

    2016-06-01

    Full Text Available Engineering equipment is an essential part of a construction project and is usually manufactured with long lead times, large costs and special engineering requirements. The construction manager wants that equipment to be delivered on the site need date, with the right quantity, appropriate cost and required quality, and this entails an efficient supplier who can satisfy these targets. Selection of an engineering equipment supplier is a crucial managerial process. It requires evaluation of multiple suppliers according to multiple criteria. This process is usually performed manually and based on just a limited set of evaluation criteria, so better alternatives may be neglected. A three-stage survey comprising a number of public and private companies in the Iraqi construction sector was employed to identify the main criteria and sub-criteria for supplier selection and their priorities. The main criteria identified were quality of product, commercial aspect, delivery, reputation and position, and system quality. An effective technique in multiple criteria decision making (MCDM), the analytical hierarchy process (AHP), has been used to obtain importance weights of the criteria based on expert judgment. Thereafter, a management software system for Evaluation and Selection of Engineering Equipment Suppliers (ESEES) has been developed based on the results obtained from AHP. This model was validated in a case study at the municipality of Baghdad involving actual cases of selecting pump suppliers for infrastructure projects. According to experts, this model can improve the current process followed in supplier selection and aid decision makers in adopting better choices in the domain of selecting engineering equipment suppliers.
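
    The AHP weighting step referenced above reduces to the principal eigenvector of a pairwise comparison matrix. A minimal sketch follows, using an invented comparison matrix over the five main criteria rather than the study's actual judgments.

        import numpy as np

        # Pairwise comparison matrix on the Saaty 1-9 scale (illustrative values).
        A = np.array([[1,   3,   2,   5,   4],
                      [1/3, 1,   1/2, 2,   2],
                      [1/2, 2,   1,   3,   2],
                      [1/5, 1/2, 1/3, 1,   1],
                      [1/4, 1/2, 1/2, 1,   1]], dtype=float)

        vals, vecs = np.linalg.eig(A)
        w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
        w /= w.sum()                          # priority weights of the criteria

        lam = np.max(np.real(vals))
        ci = (lam - len(A)) / (len(A) - 1)    # consistency index
        cr = ci / 1.12                        # random index for n=5 is ~1.12
        print(np.round(w, 3), round(cr, 3))   # CR < 0.1 means acceptably consistent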

  14. Value of the distant future: Model-independent results

    Science.gov (United States)

    Katz, Yuri A.

    2017-01-01

    This paper shows that the model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long-run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
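
    The declining-discount mechanism can be checked with a small Monte Carlo: when short-rate fluctuations are persistent, Jensen's inequality makes the certainty-equivalent rate -ln E[exp(-sum r_t)]/t fall with the horizon. The sketch below uses an AR(1) rate process with invented parameters; it illustrates the effect, not the paper's analytical formula.

        import numpy as np

        rng = np.random.default_rng(2)
        r_bar, phi, sig, T, n = 0.03, 0.95, 0.005, 200, 20000  # assumed values

        r = np.full(n, r_bar)      # n simulated rate paths
        cum = np.zeros(n)          # cumulative discounting exponent per path
        eff = []
        for t in range(1, T + 1):
            r = r_bar + phi * (r - r_bar) + sig * rng.standard_normal(n)
            cum += r
            eff.append(-np.log(np.mean(np.exp(-cum))) / t)  # certainty-equivalent rate

        print(eff[0], eff[-1])     # the effective rate declines toward long horizons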

  15. Storm-time ring current: model-dependent results

    Directory of Open Access Journals (Sweden)

    N. Yu. Ganushkina

    2012-01-01

    Full Text Available The main point of the paper is to investigate how much the modeled ring current depends on the representations of magnetic and electric fields and boundary conditions used in simulations. Two storm events, one moderate (SymH minimum of −120 nT) on 6–7 November 1997 and one intense (SymH minimum of −230 nT) on 21–22 October 1999, are modeled. A rather simple ring current model is employed, namely, the Inner Magnetosphere Particle Transport and Acceleration model (IMPTAM), in order to make the results most evident. Four different magnetic field and two electric field representations and four boundary conditions are used. We find that different combinations of the magnetic and electric field configurations and boundary conditions result in very different modeled ring current, and, therefore, the physical conclusions based on simulation results can differ significantly. A time-dependent boundary outside of 6.6 RE gives a possibility to take into account the particles in the transition region (between dipole and stretched field lines) forming partial ring current and near-Earth tail current in that region. Calculating the model SymH* by Biot-Savart's law instead of the widely used Dessler-Parker-Sckopke (DPS) relation gives larger and more realistic values, since the currents are calculated in the regions with nondipolar magnetic field. Therefore, the boundary location and the method of SymH* calculation are of key importance for ring current data-model comparisons to be correctly interpreted.

  16. Results and Interpretation of the WFRD ELS Distillation Down-Select Test Data

    Science.gov (United States)

    Delzeit, Lance Dean; Flynn, Michael; Carter, Layne; Long, David A.

    2010-01-01

    Testing of the Wiped-film Rotating-disk (WFRD) evaporator was conducted in support of the Exploration Life Support Distillation Down-Select Test. The WFRD was constructed at NASA Ames Research Center (ARC) and tested at NASA Marshall Space Flight Center (MSFC). The WFRD was delivered to MSFC in September 2009, with testing of solution #1 and solution #2 immediately following. Solution #1 was composed of humidity condensate and urine, including flush water and pretreatment chemicals. Solution #2 was composed of hygiene water, humidity condensate, and urine, including flush water and pretreatment chemicals. During the testing, the operational parameters of the WFRD were recorded and samples of the feed, brine, and product were collected and analyzed. The steady-state results of processing 414 L of feed solution #1 and 1283 L of feed solution #2 demonstrated that running the WFRD at a brine temperature of 50 C gave an average production rate of 16.7 L/hr. The specific energy consumption was 80.5 W-hr/L. Data analysis shows that the water recovery rates were 94% and 91%, respectively. The total mass of the WFRD as delivered to MSFC was 300 kg. The volume of the test stand rack was 1 m width x 0.7 m depth x 1.9 m height, or 1.5 cu m, of which about half of the total volume was occupied by equipment. Chemical analysis of the distillate showed an average TOC of 20 ppm, a pH of 3.5, and a conductivity of 98 µmho/cm. Compared to the feed, the conductivity of the distillate decreased by 98.9%, the total ion concentration decreased by 99.6%, the total organics decreased by 98.6%, and the metals were at or below detection limits.

  17. Evaluation of intradural stimulation efficiency and selectivity in a computational model of spinal cord stimulation.

    Directory of Open Access Journals (Sweden)

    Bryan Howell

    Full Text Available Spinal cord stimulation (SCS) is an alternative or adjunct therapy to treat chronic pain, a prevalent and clinically challenging condition. Although SCS has substantial clinical success, the therapy is still prone to failures, including lead breakage, lead migration, and poor pain relief. The goal of this study was to develop a computational model of SCS and use the model to compare activation of neural elements during intradural and extradural electrode placement. We constructed five patient-specific models of SCS. Stimulation thresholds predicted by the model were compared to stimulation thresholds measured intraoperatively, and we used these models to quantify the efficiency and selectivity of intradural and extradural SCS. Intradural placement dramatically increased stimulation efficiency and reduced the power required to stimulate the dorsal columns by more than 90%. Intradural placement also increased selectivity, allowing activation of a greater proportion of dorsal column fibers before spread of activation to dorsal root fibers, as well as more selective activation of individual dermatomes at different lateral deviations from the midline. Further, the results suggest that current electrode designs used for extradural SCS are not optimal for intradural SCS, and a novel azimuthal tripolar design increased stimulation selectivity, even beyond that achieved with an intradural paddle array. Increased stimulation efficiency is expected to increase the battery life of implantable pulse generators, increase the recharge interval of rechargeable implantable pulse generators, and potentially reduce stimulator volume. The greater selectivity of intradural stimulation may improve the success rate of SCS by mitigating the sensitivity of pain relief to malpositioning of the electrode. The outcome of this effort is a better quantitative understanding of how intradural electrode placement can potentially increase the selectivity and efficiency of SCS.

  18. Fuzzy Multicriteria Model for Selection of Vibration Technology

    OpenAIRE

    María Carmen Carnero

    2016-01-01

    The benefits of applying the vibration analysis program are well known and have been so for decades. A large number of contributions have been produced discussing new diagnostic, signal treatment, technical parameter analysis, and prognosis techniques. However, to obtain the expected benefits from a vibration analysis program, it is necessary to choose the instrumentation which guarantees the best results. Despite its importance, in the literature, there are no models to assist in taking this...

  19. Evaluation of selected martensitic stainless steels for use in downhole tubular expansion - Results of a laboratory study

    Energy Technology Data Exchange (ETDEWEB)

    Mack, Robert [Shell International E and P, b.v. Kessler Park 1, Postbus 60, 2280 AB Rijswijk (Netherlands)

    2004-07-01

    A laboratory program was performed to evaluate the potential of selected martensitic stainless steels for downhole cladding applications. The evaluation of the effects of tubular expansion on mechanical properties, defects, and resistance to environmentally assisted cracking demonstrated that some steels were acceptable for the intended application. The results were used to qualify and select the stainless steel for the intended sweet cladding applications. (authors)

  20. Guiding center model to interpret neutral particle analyzer results

    International Nuclear Information System (INIS)

    Englert, G.W.; Reinmann, J.J.; Lauver, M.R.

    1974-01-01

    The theoretical model is discussed, which accounts for drift and cyclotron components of ion motion in a partially ionized plasma. Density and velocity distributions are systematically prescribed. The flux into the neutral particle analyzer (NPA) from this plasma is determined by summing over all charge exchange neutrals in phase space which are directed into the apertures. Especially detailed data, obtained by sweeping the line of sight of the apertures across the plasma of the NASA Lewis HIP-1 burnout device, are presented. Selection of randomized cyclotron velocity distributions about the mean azimuthal drift yielded energy distributions which compared well with experiment. Use of data obtained with a bending magnet on the NPA showed that the separation between energy distribution curves of various mass species correlates well with the theory's drift-to-mean-cyclotron-energy parameter. Use of the guiding center model in conjunction with NPA scans across the plasma aids in estimating the ion density and E field variation with plasma radius. (U.S.)

  1. Model unspecific search in CMS. Results at 8 TeV

    Energy Technology Data Exchange (ETDEWEB)

    Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    In the year 2012, CMS collected a total data set of approximately 20 fb⁻¹ in proton-proton collisions at √(s)=8 TeV. Dedicated searches for physics beyond the standard model are commonly designed with the signatures of a given theoretical model in mind. While this approach allows for an optimised sensitivity to the sought-after signal, it may cause unexpected phenomena to be overlooked. In a complementary approach, the Model Unspecific Search in CMS (MUSiC) analyses CMS data in a general way. Depending on the reconstructed final state objects (e.g. electrons), collision events are sorted into classes. In each of the classes, the distributions of selected kinematic variables are compared to standard model simulation. An automated statistical analysis is performed to quantify the agreement between data and prediction. In this talk, the analysis concept is introduced and selected results of the analysis of the 2012 CMS data set are presented.
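
    A toy version of the scan logic is sketched below: events grouped into final-state classes, with the observed count in each class compared to the standard model expectation via a Poisson tail probability. Class names and numbers are invented; the real analysis also scans kinematic distributions and accounts for systematic uncertainties and the look-elsewhere effect.

        from scipy.stats import poisson

        # Invented event classes: name -> (observed count, SM expectation)
        classes = {"1e 2jet":  (1045, 1010.0),
                   "1e 1mu":   (12,   6.2),
                   "2mu 1jet": (33,   30.5)}

        for name, (obs, exp) in classes.items():
            # tail probability of a fluctuation at least as extreme as observed
            p = poisson.sf(obs - 1, exp) if obs > exp else poisson.cdf(obs, exp)
            print(name, round(p, 4))  # small p flags a class for closer inspection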

  2. Test results of the SMES model coil. Pulse performance

    International Nuclear Information System (INIS)

    Hamajima, Takataro; Shimada, Mamoru; Ono, Michitaka

    1998-01-01

    A model coil for superconducting magnetic energy storage (SMES model coil) has been developed to establish the component technologies needed for a small-scale 100 kWh SMES device. The SMES model coil was fabricated, and then performance tests were carried out in 1996. The coil was successfully charged up to around 30 kA and down to zero at the same ramp rate of magnetic field experienced in a 100 kWh SMES device. AC loss in the coil was measured by an enthalpy method as parameters of ramp rate and flat top current. The results were evaluated by an analysis and compared with short-sample test results. The measured hysteresis loss is in good agreement with that estimated from the short-sample results. It was found that the coupling loss of the coil consists of two major coupling time constants. One is a short time constant of about 200 ms, which is in agreement with the test results of a short real conductor. The other is a long time constant of about 30 s, which could not be expected from the short sample test results. (author)

  3. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore, the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  4. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    Science.gov (United States)

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 − π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP
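
    The SVD building block can be sketched compactly: for a ridge-type (SNP-BLUP) prior, marker effects follow in closed form from the SVD of the genotype matrix with no iteration; the BayesC approximation described above then rescales marker-specific variances around such a solution. The genotypes, QTL effects and shrinkage value below are simulated/assumed.

        import numpy as np

        rng = np.random.default_rng(3)
        n, p = 200, 1000
        X = rng.choice([0.0, 1.0, 2.0], size=(n, p))       # genotype matrix
        X -= X.mean(axis=0)                                # centre allele counts
        b_true = np.zeros(p); b_true[::100] = 0.5          # sparse QTL effects
        y = X @ b_true + rng.standard_normal(n)

        # Ridge/SNP-BLUP solution via SVD: b = V diag(d/(d^2+lam)) U' y
        U, d, Vt = np.linalg.svd(X, full_matrices=False)
        lam = 50.0                                          # assumed shrinkage parameter
        b_hat = Vt.T @ (d / (d ** 2 + lam) * (U.T @ y))

        print(np.corrcoef(X @ b_hat, X @ b_true)[0, 1])    # accuracy of genomic prediction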

  5. Factors to Consider in Selecting an Organisational Improvement Initiative: Survey Results

    Directory of Open Access Journals (Sweden)

    Mohammad Musli

    2017-01-01

    Full Text Available Organisations should select the appropriate improvement initiative that will fit the context of the organisation and provide value to it. This paper presents 18 factors to be considered when selecting an organisational improvement initiative. Organisational improvement initiatives are approaches, management systems, tools and/or techniques that can be used for managing and improving organisations, such as Lean, ISO9001, Six Sigma and Improvement Team. A survey was conducted to identify the level of importance of these 18 factors as criteria for selecting an improvement initiative. Purposive sampling was used for this survey, involving practitioners, managers, engineers, executives, consultants and/or academicians who have been involved in the selection and/or implementation of organisational improvement initiatives in Malaysia. Two factors were rated as being of ‘very high importance’: (1) the ability to gain top management commitment and support to introduce and implement the initiative successfully, and (2) the initiative being aligned to the vision, mission and/or purpose of the organisation. All these factors can be adopted by organisations as decision criteria to assist in the selection of the most appropriate improvement initiative based on rational decision making.

  6. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    Science.gov (United States)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product was confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal quenching (hardening) mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
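
    As a toy counterpart of such a model, the sketch below integrates a two-step scheme A -> B -> C with Arrhenius-type rates and an exponential cooling law, showing how the quench rate controls the final yield of the intermediate target product. The kinetics and constants are invented, not the paper's chemistry.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, cool):
            A, B, C, T = y
            k1 = 50.0 * np.exp(-3000.0 / T)     # A -> B (forms target product)
            k2 = 50.0 * np.exp(-6000.0 / T)     # B -> C (loss of target product)
            return [-k1 * A, k1 * A - k2 * B, k2 * B, -cool * (T - 300.0)]

        for cool in (0.5, 5.0):                 # slow vs fast quench
            sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 3000.0],
                            args=(cool,), rtol=1e-8)
            print(cool, sol.y[1, -1])           # yield of B rises with the quench rate here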

  7. Modeling vertical loads in pools resulting from fluid injection

    International Nuclear Information System (INIS)

    Lai, W.; McCauley, E.W.

    1978-01-01

    Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system. The results guided the subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF with the download typically double-spiked, followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.

  8. Heat transfer modelling and stability analysis of selective laser melting

    International Nuclear Information System (INIS)

    Gusarov, A.V.; Yadroitsev, I.; Bertrand, Ph.; Smurov, I.

    2007-01-01

    The process of direct manufacturing by selective laser melting basically consists of laser beam scanning over a thin powder layer deposited on a dense substrate. Complete remelting of the powder in the scanned zone and its good adhesion to the substrate ensure obtaining functional parts with improved mechanical properties. Experiments with single-line scanning indicate that an interval of scanning velocities exists where the remelted tracks are uniform. The tracks become broken if the scanning velocity is outside this interval. This is extremely undesirable and referred to as the 'balling' effect. A numerical model of coupled radiation and heat transfer is proposed to analyse the observed instability. The 'balling' effect at high scanning velocities (above ∼20 cm/s for the present conditions) can be explained by the Plateau-Rayleigh capillary instability of the melt pool. Two factors stabilize the process as the scanning velocity decreases: a reduced length-to-width ratio of the melt pool and an increased width of its contact with the substrate.

  9. Results of the eruptive column model inter-comparison study

    Science.gov (United States)

    Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza

    2016-01-01

    This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.

  10. Noninvasive testing in coronary artery disease. Selection of procedures and interpretation of results

    International Nuclear Information System (INIS)

    Sox, H.C. Jr.

    1983-01-01

    In patients with acute chest pain, selection of diagnostic tests and admission to and discharge from the coronary care unit are critical decisions for which useful empirical guidelines are now available. In hospitalized patients, the serum level of the MB fraction of creatine kinase is particularly useful when the history strongly suggests infarction but the ECG is nondiagnostic. In patients with chronic chest pain, the gender of the patient and the character of the pain are the most important guides to selecting and interpreting exercise tests. In women and in men with nonanginal chest pain, the myocardial scintiscan is preferred to the exercise ECG because of its greater diagnostic accuracy. In men with atypical angina, the two tests are nearly equivalent, and the added cost of the scintiscan is a factor in test selection. Since nearly all men with typical angina have coronary artery disease, diagnostic tests are usually not needed.

  11. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    Science.gov (United States)

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
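
    A minimal version of the FIM-based ranking can be sketched as follows: finite-difference sensitivities of the model output form the Jacobian J, the Fisher information matrix is J'J/sigma^2, and the parameters with the smallest Cramer-Rao bounds are the ones worth estimating. The toy model and numbers below are invented stand-ins for the paper's head-tracking model.

        import numpy as np

        def model(theta, t):
            # Hypothetical first-order tracking response with a drift term.
            k, b, tau = theta
            return k * (1 - np.exp(-t / tau)) + b * t

        t = np.linspace(0.0, 5.0, 50)
        theta0 = np.array([1.0, 0.2, 0.8])   # preliminary parameter estimates
        sigma = 0.05                          # assumed measurement noise level

        J = np.empty((t.size, theta0.size))   # output sensitivities, column per parameter
        for j in range(theta0.size):
            dth = np.zeros_like(theta0); dth[j] = 1e-6
            J[:, j] = (model(theta0 + dth, t) - model(theta0 - dth, t)) / 2e-6

        F = J.T @ J / sigma ** 2                      # Fisher information matrix
        crb = np.sqrt(np.diag(np.linalg.inv(F)))      # Cramer-Rao lower bounds
        print(np.argsort(crb), crb)                   # best-determined parameters first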

  12. Non-ignorable missingness item response theory models for choice effects in examinee-selected items.

    Science.gov (United States)

    Liu, Chen-Wei; Wang, Wen-Chung

    2017-11-01

    Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable. © 2017 The British Psychological Society.

  13. DISCRETE DEFORMATION WAVE DYNAMICS IN SHEAR ZONES: PHYSICAL MODELLING RESULTS

    Directory of Open Access Journals (Sweden)

    S. A. Bornyakov

    2016-01-01

    Full Text Available Observations of earthquake migration along active fault zones [Richter, 1958; Mogi, 1968] and related theoretical concepts [Elsasser, 1969] have laid the foundation for studying the problem of slow deformation waves in the lithosphere. Despite the fact that this problem has been under study for several decades and discussed in numerous publications, convincing evidence for the existence of deformation waves is still lacking. One of the causes is that comprehensive field studies to register such waves by special tools and equipment, which require sufficient organizational and technical resources, have not been conducted yet. The authors attempted at finding a solution to this problem by physical simulation of a major shear zone in an elastic-viscous-plastic model of the lithosphere. The experiment setup is shown in Figure 1 (A). The model material and boundary conditions were specified in accordance with the similarity criteria (described in detail in [Sherman, 1984; Sherman et al., 1991; Bornyakov et al., 2014]). The montmorillonite clay-and-water paste was placed evenly on two stamps of the installation and subjected to deformation as the active stamp (1) moved relative to the passive stamp (2) at a constant speed. The upper model surface was covered with fine sand in order to get high-contrast photos. Photos of the emerging shear zone were taken every second by a Basler acA2000-50gm digital camera. Figure 1 (B) shows an optical image of a fragment of the shear zone. The photos were processed by the digital image correlation method described in [Sutton et al., 2009]. This method estimates the distribution of components of displacement vectors and strain tensors on the model surface and their evolution over time [Panteleev et al., 2014, 2015]. Strain fields and displacements recorded in the optical images of the model surface were estimated in a rectangular box (220.00×72.17 mm) shown by a dot-and-dash line in Fig. 1, A. To ensure a sufficient level of

  14. Application Of Decision Tree Approach To Student Selection Model- A Case Study

    Science.gov (United States)

    Harwati; Sudiya, Amby

    2016-01-01

    The main purpose of the institution is to provide quality education to the students and to improve the quality of managerial decisions. One of the ways to improve the quality of students is to make the selection of new students more selective. This research takes as its case the selection of new students at the Islamic University of Indonesia, Yogyakarta, Indonesia. One of the university's selection routes is an administrative filtering based on the records of prospective students at high school, without a paper test. Currently, that kind of selection does not yet have a standard model and criteria. Selection is done only by comparing candidates' application files, so subjective assessment is likely because of the lack of standard criteria that can differentiate the quality of students from one another. By applying data mining classification techniques, a selection model for new students can be built which includes criteria to certain standards, such as the area of origin, the status of the school, the average grade and so on. These criteria are determined by using rules that emerge from the classification of the academic achievement (GPA) of students in previous years who entered the university through the same route. The decision tree method with the C4.5 algorithm is used here. The results show that students given priority for admission are those who meet the following criteria: came from the island of Java, attended a public school, majored in science, had an average grade above 75, and earned at least one achievement during their study in high school.
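
    The classification step can be sketched with scikit-learn, whose entropy criterion approximates the information-gain splitting used by C4.5 (it is not an exact C4.5 implementation). The records below are invented; the label stands for whether past students achieved a good GPA.

        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier, export_text

        # Invented records of past students (binary-encoded admission criteria).
        df = pd.DataFrame({
            "from_java":     [1, 1, 0, 1, 0, 0, 1, 0],
            "public_school": [1, 0, 1, 1, 0, 1, 0, 0],
            "science_major": [1, 1, 0, 1, 1, 0, 0, 0],
            "avg_above_75":  [1, 1, 1, 0, 0, 1, 0, 0],
            "gpa_good":      [1, 1, 1, 1, 0, 0, 0, 0],  # outcome label
        })
        X, y = df.drop(columns="gpa_good"), df["gpa_good"]

        tree = DecisionTreeClassifier(criterion="entropy", max_depth=3).fit(X, y)
        print(export_text(tree, feature_names=list(X.columns)))  # readable rules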

  15. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent is to test and assess the model’s behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One of the purposes of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  16. Analysis of multicriteria models application for selection of an optimal artificial lift method in oil production

    Directory of Open Access Journals (Sweden)

    Crnogorac Miroslav P.

    2016-01-01

    Full Text Available In the world today, different types of deep pumps (piston, centrifugal, screw, hydraulic, water jet) and gas lift (continuous, intermittent and plunger) are applied for the exploitation of oil reservoirs by artificial lift methods. The maximum values of oil production achieved by these exploitation methods differ significantly. In order to select the optimal exploitation method for an oil well, multicriteria analysis models are used. This paper presents an analysis of the application of the multicriteria models known as VIKOR, TOPSIS, ELECTRE, AHP and PROMETHEE to the selection of the optimal exploitation method for a typical oil well in the Serbian exploration area. The ranking results for the applicability of deep piston pumps, hydraulic pumps, screw pumps, the gas lift method and electric submersible centrifugal pumps indicated that in all the above multicriteria models except PROMETHEE, the optimal exploitation methods are deep piston pumps and gas lift.
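
    Of the methods listed, TOPSIS is the most compact to sketch: alternatives are ranked by their relative closeness to an ideal solution. The decision matrix, weights and method rows below are invented for illustration, not the study's data.

        import numpy as np

        # Rows: candidate lift methods; columns: criteria (all illustrative scores).
        M = np.array([[0.8, 0.6, 0.9],    # deep piston pump
                      [0.7, 0.8, 0.6],    # electric submersible pump
                      [0.6, 0.5, 0.8]])   # gas lift
        w = np.array([0.5, 0.2, 0.3])     # assumed criterion weights
        benefit = np.array([True, True, True])  # all criteria: higher is better

        V = w * M / np.linalg.norm(M, axis=0)          # weighted normalised matrix
        ideal = np.where(benefit, V.max(0), V.min(0))  # positive ideal solution
        anti = np.where(benefit, V.min(0), V.max(0))   # negative ideal solution
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        print(np.argsort(-closeness))                  # best alternative first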

  17. Improved time series prediction with a new method for selection of model parameters

    International Nuclear Information System (INIS)

    Jade, A M; Jayaraman, V K; Kulkarni, B D

    2006-01-01

    A new method for model selection in prediction of time series is proposed. Apart from the conventional criterion of minimizing RMS error, the method also minimizes the error on the distribution of singularities, evaluated through the local Hoelder estimates and its probability density spectrum. Predictions of two simulated and one real time series have been done using kernel principal component regression (KPCR) and model parameters of KPCR have been selected employing the proposed as well as the conventional method. Results obtained demonstrate that the proposed method takes into account the sharp changes in a time series and improves the generalization capability of the KPCR model for better prediction of the unseen test data. (letter to the editor)
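
    A plain KPCR backbone, without the Hoelder-spectrum criterion the letter adds for parameter selection, can be sketched with scikit-learn on a lag-embedded series; all parameter values below are assumed.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(4)
        x = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.standard_normal(600)

        lag = 5                                         # delay-embedding dimension
        X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
        y = x[lag:]                                     # one-step-ahead target

        model = make_pipeline(KernelPCA(n_components=10, kernel="rbf", gamma=0.5),
                              LinearRegression())
        model.fit(X[:400], y[:400])
        print(np.sqrt(np.mean((model.predict(X[400:]) - y[400:]) ** 2)))  # test RMSE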

  18. Selective advantage of tolerant cultural traits in the Axelrod-Schelling model

    Science.gov (United States)

    Gracia-Lázaro, C.; Floría, L. M.; Moreno, Y.

    2011-05-01

    The Axelrod-Schelling model incorporates into the original Axelrod’s model of cultural dissemination the possibility that cultural agents placed in culturally dissimilar environments move to other places, the strength of this mobility being controlled by an intolerance parameter. By allowing heterogeneity in the intolerance of cultural agents, and considering it as a cultural feature, i.e., susceptible of cultural transmission (thus breaking the original symmetry of Axelrod-Schelling dynamics), we address here the question of whether tolerant or intolerant traits are more likely to become dominant in the long-term cultural dynamics. Our results show that tolerant traits possess a clear selective advantage in the framework of the Axelrod-Schelling model. We show that the reason for this selective advantage is the development, as time evolves, of a positive correlation between the number of neighbors that an agent has in its environment and its tolerant character.

  19. Seed fall and regeneration from a group selection cut . . . first year results

    Science.gov (United States)

    Philip M. McDonald

    1966-01-01

    To approximate a group selection cut at the Challenge Experimental Forest, 48 small openings of three sizes—30, 60, and 90 feet in diameter—were logged in 1963. One aim was to create conditions of light and soil moisture that would favor establishment and growth of Douglas-fir, sugar pine, and white fir over ponderosa pine. Seed fall and first-year...

  20. First experiments results about the engineering model of Rapsodie

    International Nuclear Information System (INIS)

    Chalot, A.; Ginier, R.; Sauvage, M.

    1964-01-01

    This report deals with the first series of experiments carried out on the engineering model of Rapsodie and on an associated sodium facility set in a laboratory hall of Cadarache. It conveys more precisely: 1/ - The difficulties encountered during the erection and assembly of the engineering model and a compilation of the results of the first series of experiments and tests carried out on this installation (loading of the subassemblies preheating, thermal chocks...). 2/ - The experiments and tests carried out on the two prototypes control rod drive mechanisms which brought to the choice for the design of the definitive drive mechanism. As a whole, the results proved the validity of the general design principles adopted for Rapsodie. (authors) [fr

  1. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    Energy Technology Data Exchange (ETDEWEB)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H. [Danish Meteorological Institute, Copenhagen (Denmark)] [and others]

    2013-08-15

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
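
    Once an ensemble of dispersion runs exists, one common uncertainty summary is the per-cell probability that deposition exceeds a threshold. The sketch below computes it from synthetic ensemble members standing in for real model output.

        import numpy as np

        rng = np.random.default_rng(5)
        # 50 ensemble members of a 100x100 deposition field (synthetic values).
        members = rng.lognormal(mean=0.0, sigma=1.0, size=(50, 100, 100))

        threshold = 2.0                                  # assumed action level
        p_exceed = (members > threshold).mean(axis=0)    # per-cell exceedance probability
        print(p_exceed.max(), p_exceed.mean())           # map-ready uncertainty summary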

  2. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    International Nuclear Information System (INIS)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H.

    2013-08-01

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)

  3. Impacts of land cover data selection and trait parameterisation on dynamic modelling of species' range expansion.

    Directory of Open Access Journals (Sweden)

    Risto K Heikkinen

    Full Text Available Dynamic models for range expansion provide a promising tool for assessing species' capacity to respond to climate change by shifting their ranges to new areas. However, these models include a number of uncertainties which may affect how successfully they can be applied to climate change oriented conservation planning. We used RangeShifter, a novel dynamic and individual-based modelling platform, to study two potential sources of such uncertainties: the selection of land cover data and the parameterization of key life-history traits. As an example, we modelled the range expansion dynamics of two butterfly species, one habitat specialist (Maniola jurtina) and one generalist (Issoria lathonia). Our results show that projections of total population size, number of occupied grid cells and the mean maximal latitudinal range shift were all clearly dependent on the choice made between using CORINE land cover data vs. using more detailed grassland data from three alternative national databases. Range expansion was also sensitive to the parameterization of the four considered life-history traits (magnitude and probability of long-distance dispersal events, population growth rate and carrying capacity), with carrying capacity and magnitude of long-distance dispersal showing the strongest effect. Our results highlight the sensitivity of dynamic species population models to the selection of existing land cover data and to uncertainty in the model parameters and indicate that these need to be carefully evaluated before the models are applied to conservation planning.

  4. Acoustic results of the Boeing model 360 whirl tower test

    Science.gov (United States)

    Watts, Michael E.; Jordan, David

    1990-09-01

    An evaluation is presented for whirl tower test results of the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of whirl-tower tests with theory indicates that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.

  5. Exact results for the one dimensional asymmetric exclusion model

    International Nuclear Information System (INIS)

    Derrida, B.; Evans, M.R.; Pasquier, V.

    1993-01-01

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author)
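
    For readers who want to see the matrix-product solution in action, here is a small numerical check (our own illustration, not the authors' code). For entry and exit rates alpha = beta = 1 the algebra DE = D + E, D|V> = |V>, <W|E = <W| admits a simple bidiagonal representation, and the resulting steady-state weights can be compared against a direct solution of the master equation for a short lattice.

```python
import itertools
import numpy as np

# Open-boundary TASEP: entry rate alpha, exit rate beta, bulk hop rate 1.
# For alpha = beta = 1 the DEHP algebra (DE = D + E, D|V> = |V>, <W|E = <W|)
# has a simple bidiagonal representation.
N = 4                      # number of sites (kept small for an exact check)
alpha = beta = 1.0
dim = N + 1                # truncation is exact for products of length N

D = np.eye(dim) + np.diag(np.ones(dim - 1), k=1)   # diagonal + superdiagonal
E = D.T                                            # diagonal + subdiagonal
W = V = np.eye(dim)[0]                             # <W| = |V> = first basis vector

def mp_weight(config):
    """Matrix-product weight <W| prod_i (tau_i D + (1 - tau_i) E) |V>."""
    M = np.eye(dim)
    for tau in config:
        M = M @ (D if tau else E)
    return W @ M @ V

configs = list(itertools.product([0, 1], repeat=N))
w = np.array([mp_weight(c) for c in configs])
w /= w.sum()

# Independent check: steady state as the kernel of the master-equation generator.
index = {c: i for i, c in enumerate(configs)}
Q = np.zeros((len(configs), len(configs)))
def add(src, dst, rate):
    Q[index[dst], index[src]] += rate
    Q[index[src], index[src]] -= rate
for c in configs:
    if c[0] == 0:
        add(c, (1,) + c[1:], alpha)                # particle enters site 1
    if c[-1] == 1:
        add(c, c[:-1] + (0,), beta)                # particle leaves site N
    for i in range(N - 1):
        if c[i] == 1 and c[i + 1] == 0:            # bulk hop i -> i + 1
            d = list(c); d[i], d[i + 1] = 0, 1
            add(c, tuple(d), 1.0)

_, _, vh = np.linalg.svd(Q)                        # null vector = steady state
p = np.abs(vh[-1]); p /= p.sum()

print("max |matrix-product - master-equation| =", np.abs(w - p).max())
```

    The truncation to dimension N + 1 is exact here because a product of N bidiagonal matrices sandwiched between the first basis vectors never reaches higher matrix indices.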

  6. Equilibrium and nonequilibrium attractors for a discrete, selection-migration model

    Science.gov (United States)

    James F. Selgrade; James H. Roberds

    2003-01-01

    This study presents a discrete-time model for the effects of selection and immigration on the demographic and genetic compositions of a population. Under biologically reasonable conditions, it is shown that the model always has an equilibrium. Although equilibria for similar models without migration must have real eigenvalues, for this selection-migration model we...

  7. Illustrating sensitivity in environmental fate models using partitioning maps - application to selected contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, T.; Wania, F. [Univ. of Toronto at Scarborough - DPES, Toronto (Canada)

    2004-09-15

    Generic environmental multimedia fate models are important tools in the assessment of the impact of organic pollutants. Because of limited possibilities to evaluate generic models by comparison with measured data and the increasing regulatory use of such models, uncertainties of model input and output are of considerable concern. This has led to a demand for sensitivity and uncertainty analyses for the outputs of environmental fate models. Usually, variations of model predictions of the environmental fate of organic contaminants are analyzed for only one or at most a few selected chemicals, even though parameter sensitivity and contribution to uncertainty differ widely between chemicals. We recently presented a graphical method that allows for the comprehensive investigation of model sensitivity and uncertainty for all neutral organic chemicals simultaneously. This is achieved by defining a two-dimensional hypothetical ''chemical space'' as a function of the equilibrium partition coefficients between air, water, and octanol (K{sub OW}, K{sub AW}, K{sub OA}), and plotting the sensitivity and/or uncertainty of a specific model result to each input parameter as a function of this chemical space. Here we show how such sensitivity maps can be used to quickly identify the variables with the highest influence on the environmental fate of selected chlorobenzenes, polychlorinated biphenyls (PCBs), polycyclic aromatic hydrocarbons (PAHs), hexachlorocyclohexanes (HCHs) and brominated flame retardants (BFRs).

  8. Selecting Products Considering the Regret Behavior of Consumer: A Decision Support Model Based on Online Ratings

    Directory of Open Access Journals (Sweden)

    Xia Liang

    2018-05-01

    Full Text Available With the remarkable promotion of e-commerce platforms, consumers increasingly prefer to purchase products online. Online ratings help consumers choose among products. Thus, to help consumers effectively select products, it is necessary to provide decision support methods for consumers to trade online. Considering that decision makers are boundedly rational, this paper proposes a novel decision support model for product selection based on online ratings, in which the regret aversion behavior of consumers is formulated. Massive online ratings provided by experienced consumers for alternative products, associated with several evaluation attributes, are obtained by software finders. Then, evaluations of the alternative products in the form of stochastic variables are constructed. To select a desirable alternative product, a novel method is introduced to calculate the gain and loss degrees of each alternative over the others. Considering the regret behavior of consumers in the product selection process, the regret and rejoice values of the alternative products are computed to obtain their perceived utility values. According to the prior order of the evaluation attributes provided by the consumer, the prior weights of the attributes are determined based on the perceived utility values of the alternative products. Furthermore, the overall perceived utility values of the alternative products are obtained to generate a ranking result. Finally, a practical example from Zol.com.cn for tablet computer selection is used to demonstrate the feasibility and practicality of the proposed model.
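
    A generic regret-theory-style scoring step can be sketched as follows. This is not the paper's exact formulation: the utilities, the regret-aversion coefficient and the exponential regret-rejoice function are all illustrative assumptions.

```python
import numpy as np

# Illustrative ratings-derived utilities of 4 alternative products on a
# 0-1 scale; in the paper these come from distributions of online ratings,
# here they are simply made-up numbers.
v = np.array([0.62, 0.75, 0.58, 0.70])
delta = 0.3   # regret-aversion coefficient (assumed)

def regret_rejoice(diff, delta):
    # A common regret-theory form: rejoice for gains, regret for losses.
    return 1.0 - np.exp(-delta * diff)

# Pairwise utility differences: diff[i, j] = v_i - v_j.
diff = v[:, None] - v[None, :]
rr = regret_rejoice(diff, delta)

# Perceived utility: own utility plus the average rejoice (positive
# entries) and regret (negative entries) relative to the alternatives.
perceived = v + rr.mean(axis=1)
ranking = np.argsort(-perceived)
print("ranking (best first):", ranking, perceived.round(3))
```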

  9. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

    Full Text Available The performance of the supplier is a crucial factor for the success or failure of any company. Rational and effective decision making in terms of the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no single best way of evaluating and selecting suppliers; the process varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied for the supplier selection process.
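
    The core AHP computation behind such an approach, deriving priority weights from a pairwise comparison matrix and checking its consistency, can be sketched in a few lines; the comparison values below are invented for illustration.

```python
import numpy as np

# Example pairwise comparison matrix for 4 criteria
# (cost, quality, delivery, service) on Saaty's 1-9 scale; values invented.
A = np.array([
    [1,   3,   5,   2],
    [1/3, 1,   3,   1],
    [1/5, 1/3, 1,   1/2],
    [1/2, 1,   2,   1],
], dtype=float)

# Priority weights = principal right eigenvector, normalised.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty consistency check: CR = CI / RI, commonly accepted below ~0.10.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # random index table (excerpt)
print("weights:", w.round(3), "CR:", round(ci / ri, 3))
```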

  10. A regional climate model for northern Europe: model description and results from the downscaling of two GCM control simulations

    Science.gov (United States)

    Rummukainen, M.; Räisänen, J.; Bringfelt, B.; Ullerstig, A.; Omstedt, A.; Willén, U.; Hansson, U.; Jones, C.

    This work presents a regional climate model, the Rossby Centre regional Atmospheric model (RCA1), recently developed from the High Resolution Limited Area Model (HIRLAM). The changes in the HIRLAM parametrizations, necessary for climate-length integrations, are described. A regional Baltic Sea ocean model and a modeling system for the Nordic inland lake systems have been coupled with RCA1. The coupled system has been used to downscale 10-year time slices from two different general circulation model (GCM) simulations to provide high-resolution regional interpretation of large-scale modeling. A selection of the results from the control runs, i.e. the present-day climate simulations, are presented: large-scale free atmospheric fields, the surface temperature and precipitation results and results for the on-line simulated regional ocean and lake surface climates. The regional model modifies the surface climate description compared to the GCM simulations, but it is also substantially affected by the biases in the GCM simulations. The regional model also improves the representation of the regional ocean and the inland lakes, compared to the GCM results.

  11. A regional climate model for northern Europe: model description and results from the downscaling of two GCM control simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rummukainen, M.; Raeisaenen, J.; Bringfelt, B.; Ullerstig, A.; Omstedt, A.; Willen, U.; Hansson, U.; Jones, C. [Rossby Centre, Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden)

    2001-03-01

    This work presents a regional climate model, the Rossby Centre regional Atmospheric model (RCA1), recently developed from the High Resolution Limited Area Model (HIRLAM). The changes in the HIRLAM parametrizations, necessary for climate-length integrations, are described. A regional Baltic Sea ocean model and a modeling system for the Nordic inland lake systems have been coupled with RCA1. The coupled system has been used to downscale 10-year time slices from two different general circulation model (GCM) simulations to provide high-resolution regional interpretation of large-scale modeling. A selection of the results from the control runs, i.e. the present-day climate simulations, are presented: large-scale free atmospheric fields, the surface temperature and precipitation results and results for the on-line simulated regional ocean and lake surface climates. The regional model modifies the surface climate description compared to the GCM simulations, but it is also substantially affected by the biases in the GCM simulations. The regional model also improves the representation of the regional ocean and the inland lakes, compared to the GCM results. (orig.)

  12. Comparison of transient PCRV model test results with analysis

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Belytschko, T.B.

    1979-01-01

    Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The load history was obtained from an ICECO analysis, using the equations of state for the source and the water. A detailed check of this solution was made to assure that the derived loading did provide the correct input. The DYNAPCON code was then used for the analysis of the prestressed concrete containment model. This analysis required the simulation of prestressing and the response of the model to the applied transient load. The calculations correctly predict the magnitudes of displacements of the PCRV model. In addition, the displacement time histories obtained from the calculations reproduce the general features of the experimental records: the period elongation and amplitude increase as compared to an elastic solution, and also the absence of permanent displacement. However, the calculated period still underestimates that of the experiment, while the amplitude is generally somewhat too large.

  13. Thermal-Chemical Model Of Subduction: Results And Tests

    Science.gov (United States)

    Gorczyk, W.; Gerya, T. V.; Connolly, J. A.; Yuen, D. A.; Rudolph, M.

    2005-12-01

    Seismic structures with strong positive and negative velocity anomalies in the mantle wedge above subduction zones have been interpreted as thermally and/or chemically induced phenomena. We have developed a thermal-chemical model of subduction, which constrains the dynamics of seismic velocity structure beneath volcanic arcs. Our simulations have been calculated over a finite-difference grid with (201×101) to (201×401) regularly spaced Eulerian points, using 0.5 million to 10 billion markers. The model couples a numerical thermo-mechanical solution with Gibbs energy minimization to investigate the dynamic behavior of partially molten upwellings from slabs (cold plumes) and structures associated with their development. The model demonstrates two chemically distinct types of plumes (mixed and unmixed), and various rigid body rotation phenomena in the wedge (subduction wheel, fore-arc spin, wedge pin-ball). These thermal-chemical features strongly perturb seismic structure. Their occurrence is dependent on the age of the subducting slab and the rate of subduction. The model has been validated through a series of test cases and its results are consistent with a variety of geological and geophysical data. In contrast to models that attribute a purely thermal origin for mantle wedge seismic anomalies, the thermal-chemical model is able to simulate the strong variations of seismic velocity existing beneath volcanic arcs which are associated with the development of cold plumes. In particular, molten regions that form beneath volcanic arcs as a consequence of vigorous cold wet plumes are manifest by > 20% variations in the local Poisson ratio, as compared to variations of ~ 2% expected as a consequence of temperature variation within the mantle wedge.

  14. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    Science.gov (United States)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as AIC.
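
    The information-theoretic alternative can be made concrete with a toy version of the levelling scenario: compute the AIC of a "no deformation" model and of single-point-offset alternatives, and select the minimum. The data, error model and parameter counts below are our own simplifications, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic two-epoch heights of 10 points; point 7 subsides by 5 mm.
n = 10
epoch1 = rng.normal(0.0, 1.0, n)                 # mm, measurement noise only
true_shift = np.zeros(n); true_shift[7] = -5.0
epoch2 = epoch1 + true_shift + rng.normal(0.0, 1.0, n)
d = epoch2 - epoch1                              # observed displacements

def aic_gaussian(residuals, k):
    # AIC for a Gaussian error model with unknown variance,
    # up to an additive constant common to all candidate models.
    m = len(residuals)
    return m * np.log((residuals**2).sum() / m) + 2 * k

# Null model: no deformation anywhere (only the variance is estimated).
models = {"no deformation": aic_gaussian(d, k=1)}

# Alternatives: exactly one point p has moved (offset + variance).
for p in range(n):
    r = d.copy(); r[p] -= d[p]                   # least-squares offset = d[p]
    models[f"point {p} moved"] = aic_gaussian(r, k=2)

best = min(models, key=models.get)
print("selected model:", best)
```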

  15. A habitat selection model for Javan deer (Rusa timorensis) in Wanagama I Forest, Yogyakarta

    Directory of Open Access Journals (Sweden)

    DANANG WAHYU PURNOMO

    2010-07-01

    Full Text Available Purnomo DW. 2010. A habitat selection model for Javan deer (Rusa timorensis) in Wanagama I Forest, Yogyakarta. Nusantara Bioscience 2: 84-89. Wanagama I Forest is the natural breeding habitat of the Javan deer (Rusa timorensis de Blainville, 1822). Habitat changes had affected the deer's resource selection and caused the deer to move from undisturbed areas to developed areas with agriculture and human settlements. We suspected that this shift was caused by the degradation of the natural habitat. The research aimed to identify factors that might influence future habitat selection. Habitat selection was analyzed by comparing proportions of sites actually used to sites that we considered available for use. The results of a logistic regression of site categories showed that three habitat variables influence resource selection: the number of tree species (exp β = 1.305), slope (exp β = 1.061), and distance to a water source (exp β = 1.002). These three variables influence the presence of deer at a given site in Wanagama Forest and make up the resource selection probability function (RSPF).
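
    A minimal sketch of fitting a resource selection (probability) function of this kind, using used vs. available sites and the three habitat variables named above, might look as follows; the data and coefficients are simulated, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic used (1) vs available (0) sites with three habitat variables;
# the coefficients used to generate the data are invented.
n = 400
X = np.column_stack([
    rng.poisson(8, n),          # number of tree species at the site
    rng.uniform(0, 30, n),      # slope (degrees)
    rng.uniform(0, 1500, n),    # distance to water (m)
])
logit = -1.0 + 0.25 * X[:, 0] + 0.05 * X[:, 1] - 0.002 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = used

model = LogisticRegression(max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])            # analogous to exp(beta)
print("odds ratios (trees, slope, water):", odds_ratios.round(3))

# RSPF-style probability of use for a new candidate site.
site = np.array([[10, 5.0, 200.0]])
print("P(use):", model.predict_proba(site)[0, 1].round(3))
```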

  16. A Model for Service Life Control of Selected Device Systems

    Directory of Open Access Journals (Sweden)

    Zieja Mariusz

    2014-04-01

    Full Text Available This paper presents a way of determining the distribution of the time at which a diagnostic parameter exceeds its limit state, where the parameter determines the accuracy of maintaining the zero state. For the calculations it was assumed that the diagnostic parameter is the deviation from a nominal value (zero state). The deviation changes as a result of destructive processes which occur during service. For estimating the rate of deviation growth in a probabilistic sense, a difference equation was used from which, after transformation, the Fokker-Planck differential equation was obtained [4, 11]. A particular solution of this equation is the density function of the deviation growth, which was used for determining the exceedance probability of the limit state. The so-determined probability was then used to determine the density function of the limit state exceedance time due to the increasing deviation. With the density function of the limit state exceedance time at hand, the service life of the system with respect to maladjustment was determined. In the end, a numerical example based on operational data of selected aircraft [weapon] sights is presented. The elaborated method can also be applied to determining the residual life of shipboard devices whose technical state is determined on the basis of analysis of the values of diagnostic parameters.
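
    For the special case in which the deviation grows like Brownian motion with constant drift, the Fokker-Planck first-passage problem has a closed-form solution: the limit state exceedance time is inverse-Gaussian distributed. The sketch below illustrates this special case with invented parameters; it is not the paper's general method.

```python
import numpy as np

# Deviation model (illustration only): dX = mu dt + sigma dW, X(0) = 0.
# First-passage time to the limit L is inverse-Gaussian distributed.
mu, sigma, L = 0.02, 0.05, 1.0    # drift, diffusion, limit state (assumed)

def first_passage_density(t, mu, sigma, L):
    return (L / np.sqrt(2 * np.pi * sigma**2 * t**3)
            * np.exp(-(L - mu * t)**2 / (2 * sigma**2 * t)))

t = np.linspace(1, 200, 2000)
f = first_passage_density(t, mu, sigma, L)
dt = t[1] - t[0]

mean_life = L / mu                        # mean of the inverse Gaussian
print("mean service life:", mean_life)
print("numerical mean from density:", round(float((t * f).sum() * dt), 1))
```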

  17. Influence of delayed neutron parameter calculation accuracy on results of modeled WWER scram experiments

    International Nuclear Information System (INIS)

    Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.

    2007-01-01

    Using modeled WWER scram rod drop experiments, performed at the Rostov NPP, as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95&WWER program package. Parameters of delayed neutrons were acquired from the ENDF/B-VI and BNAB-78 validated data files. It was demonstrated that using delayed-fraction data from different databases in reactivity meters led to significantly different reactivity results. Based on the results of the numerically modeled experiments, delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations (Authors)

  18. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    Full Text Available In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because, in comparison to many other physical traits (e.g., hair colour), it is hard to modify, hide or disguise, and it is highly polymorphic.

  19. Result

    Indian Academy of Sciences (India)

    With reference to the detailed evaluation of the bids submitted, the following agencies have been selected for the award of the contract on an L1 (lowest bidder) basis. 1. M/s. CITO INFOTECH, Bengaluru (for procurement of desktop computers). 2. M/s. MCCANNINFO SOLUTION, Mumbai (for procurement of laptop computers)

  20. Measurement model choice influenced randomized controlled trial results.

    Science.gov (United States)

    Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos

    2016-11-01

    In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and of group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented: a real-life RCT and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in the estimated trend of around one standard deviation was found when sum scores were used, whereas IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
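
    The core point, that different response patterns with the same sum score carry different information, is easy to demonstrate with a toy two-parameter logistic (2PL) model; the item parameters below are invented.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Five items of a 2PL model: discriminations a and difficulties b (assumed).
a = np.array([0.5, 0.8, 1.0, 1.5, 2.0])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def neg_log_lik(theta, resp):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return -(resp * np.log(p) + (1 - resp) * np.log(1 - p)).sum()

def ml_theta(resp):
    # Maximum-likelihood ability estimate on a bounded interval.
    return minimize_scalar(neg_log_lik, args=(np.array(resp),),
                           bounds=(-4, 4), method="bounded").x

# Two response patterns with the same sum score (3 correct answers).
pattern_easy = [1, 1, 1, 0, 0]   # solved the three easiest items
pattern_hard = [0, 0, 1, 1, 1]   # solved the three hardest items
print("sum scores:", sum(pattern_easy), sum(pattern_hard))
print("IRT theta:", round(ml_theta(pattern_easy), 2),
      round(ml_theta(pattern_hard), 2))
```

    The sum score treats both respondents as identical, while the IRT estimate separates them, which is exactly the extra differentiation the study exploits.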

  1. Location-based Mobile Relay Selection and Impact of Inaccurate Path Loss Model Parameters

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2010-01-01

    In this paper we propose a relay selection scheme which uses collected location information together with a path loss model for relay selection, and analyze the performance impact of mobility and different error causes on this scheme. Performance is evaluated in terms of bit error rate...... by simulations. The SNR measurement based relay selection scheme proposed previously is unsuitable for use with fast moving users in e.g. vehicular scenarios due to a large signaling overhead. The proposed location based scheme is shown to work well with fast moving users due to a lower signaling overhead...... in these situations. As the location-based scheme relies on a path loss model to estimate link qualities and select relays, the sensitivity with respect to inaccurate estimates of the unknown path loss model parameters is investigated. The parameter ranges that result in useful performance were found...

  2. Partner Selection in a Virtual Enterprise: A Group Multiattribute Decision Model with Weighted Possibilistic Mean Values

    Directory of Open Access Journals (Sweden)

    Fei Ye

    2013-01-01

    Full Text Available This paper proposes an extended technique for order preference by similarity to ideal solution (TOPSIS) for partner selection in a virtual enterprise (VE). The imprecise and fuzzy information of the partner candidate and the risk preferences of decision makers are both considered in the group multiattribute decision-making model. The weighted possibilistic mean values are used to handle triangular fuzzy numbers in the fuzzy environment. A ranking procedure for partner candidates is developed to help decision makers with varying risk preferences select the most suitable partners. Numerical examples are presented to reflect the feasibility and efficiency of the proposed TOPSIS. Results show that the varying risk preferences of decision makers play a significant role in the partner selection process in VE under a fuzzy environment.
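
    One way to see the role of the possibilistic mean is the sketch below: triangular fuzzy ratings are defuzzified with the Carlsson-Fullér possibilistic mean (which, for a triangular number (a, b, c) with the usual weighting f(γ) = 2γ, reduces to (a + 4b + c)/6) and then fed into a crisp TOPSIS ranking. The ratings and weights are invented, and the paper's extended method differs in detail.

```python
import numpy as np

# Triangular fuzzy ratings (a, b, c) of 3 partner candidates on 2 benefit
# attributes; the numbers are invented for illustration.
tfn = np.array([
    [[0.5, 0.6, 0.8], [0.3, 0.5, 0.6]],
    [[0.6, 0.7, 0.9], [0.4, 0.6, 0.7]],
    [[0.2, 0.4, 0.5], [0.6, 0.8, 0.9]],
])
# Possibilistic mean of a triangular fuzzy number: (a + 4b + c) / 6.
crisp = (tfn[..., 0] + 4 * tfn[..., 1] + tfn[..., 2]) / 6

weights = np.array([0.6, 0.4])                 # attribute weights (assumed)
V = crisp / np.linalg.norm(crisp, axis=0) * weights

ideal, anti = V.max(axis=0), V.min(axis=0)     # both attributes are benefits
d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)
print("closeness coefficients:", closeness.round(3),
      "-> best partner:", int(np.argmax(closeness)))
```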

  3. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    Science.gov (United States)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for determining structural target mode selection and mode selection based on a specific criterion is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). The presented Root-Sum-Square (RSS) displacement method computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes which most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valve and engine points, for use in flight control stability analysis and flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
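
    The RSS computation itself is compact; a sketch with a random stand-in for the mode shape matrix (not X-33 data) is shown below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mode shape matrix: 20 modes x 300 DOF (stand-in for a vehicle FEM).
phi = rng.normal(size=(20, 300))

# DOF indices of interest, e.g. an actuator attachment (illustrative).
selected_dof = [12, 13, 14]

# Root-sum-square of the modal displacements at the selected DOF gives
# one scalar per mode; large values flag the influential target modes.
rss = np.sqrt((phi[:, selected_dof] ** 2).sum(axis=1))
target_modes = np.argsort(-rss)[:5]
print("top 5 target modes:", target_modes, rss[target_modes].round(2))
```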

  4. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change impact study.

  5. Selecting representative climate models for climate change impact studies: an advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change impact study.

  6. Modelling the factors influencing the selection of the construction equipment for Indian construction organizations

    Directory of Open Access Journals (Sweden)

    S.V.S. Raja Prasad

    2016-09-01

    Full Text Available The contribution of the Indian construction sector to the GDP is approximately 10%. Under the new government policy, it is anticipated that a $1000 billion share exclusively for the infrastructure segment would be completed within the next few years. The construction sector in a developing country like India still depends on labor, and the practice of mechanization, that is, the adoption of versatile construction equipment, is not yet in force. Implementing new technologies and automation is essential to improve quality, safety and efficiency. To meet the challenges ahead, construction organizations should focus on the utilization of machinery/equipment to achieve desirable results. Modern construction is characterized by an increase in the utilization of equipment to accomplish numerous construction activities. The selection of construction equipment often affects the required amount of time and effort. It is therefore important for the managements of construction organizations and for planners to be familiar with the features of the various types of equipment commonly used in construction activities. The selection of appropriate equipment is a crucial decision-making process, as it involves huge capital investment. The purpose of the present study is to develop a model of the factors influencing the selection of construction equipment by using interpretive structural modelling; the results indicate that productivity and safety are the most important factors in the selection of equipment in Indian construction organizations.
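
    The two mechanical steps of interpretive structural modelling, building the reachability matrix and partitioning the factors into levels, can be sketched as follows; the influence relations among the five example factors are invented.

```python
import numpy as np

# Structural self-interaction relations among 5 equipment-selection
# factors (invented): adjacency[i, j] = 1 means factor i influences j.
A = np.array([
    [0, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0],
])
n = len(A)

# Reachability matrix: transitive closure of (A + I), Warshall's algorithm.
R = (A + np.eye(n, dtype=int)) > 0
for k in range(n):
    R = R | (R[:, [k]] & R[[k], :])

# ISM level partitioning: a factor belongs to the current level when its
# reachability set is contained in its antecedent set (among the factors
# not yet assigned to a level).
levels, remaining = [], set(range(n))
while remaining:
    level = {i for i in remaining
             if {j for j in remaining if R[i, j]}
             <= {j for j in remaining if R[j, i]}}
    levels.append(sorted(level))
    remaining -= level
print("levels (top first):", levels)
```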

  7. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    Science.gov (United States)

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as in the construction and development of the smart city. Meanwhile, numerous cloud services appear on the cloud-based platform. Therefore, how to select trustworthy cloud services remains a significant problem on such platforms, and the question has been extensively investigated owing to the ever-growing needs of users. However, the trust relationship in social networks has not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability measured based on Jaccard's coefficient and the numerical distance computed by the Pearson correlation coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach is able to obtain optimal results via adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments.
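
    The prediction step can be sketched as a small trust-weighted collaborative filter; the ratings, trust degrees and the way the similarities are combined below are illustrative simplifications of the paper's formulas.

```python
import numpy as np

# QoS ratings of 4 services (columns) by 4 users (rows); 0 = unrated.
R = np.array([
    [5, 3, 0, 4],
    [4, 3, 4, 3],
    [5, 4, 4, 5],
    [2, 1, 3, 2],
], dtype=float)
trust = np.array([0.0, 0.9, 0.6, 0.2])   # trust of user 0 in the others

def similarity(u, v):
    rated_u, rated_v = R[u] > 0, R[v] > 0
    common = rated_u & rated_v
    jaccard = common.sum() / (rated_u | rated_v).sum()    # experience overlap
    pearson = np.corrcoef(R[u, common], R[v, common])[0, 1]
    return jaccard * pearson                              # basic similarity

# Trust-enhanced similarity: weight the rating-based similarity by trust,
# then predict user 0's missing QoS for service 2 from trusted neighbours.
target_user, target_item = 0, 2
neighbours = [u for u in range(len(R))
              if u != target_user and R[u, target_item] > 0]
s = np.array([similarity(target_user, u) * trust[u] for u in neighbours])
ratings = np.array([R[u, target_item] for u in neighbours])
prediction = (s * ratings).sum() / np.abs(s).sum()
print("predicted QoS:", round(prediction, 2))
```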

  8. SR-Site groundwater flow modelling methodology, setup and results

    International Nuclear Information System (INIS)

    Selroos, Jan-Olof; Follin, Sven

    2010-12-01

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  9. Geochemical controls on shale groundwaters: Results of reaction path modeling

    International Nuclear Information System (INIS)

    Von Damm, K.L.; VandenBrook, A.J.

    1989-03-01

    The EQ3NR/EQ6 geochemical modeling code was used to simulate the reaction of several shale mineralogies with different groundwater compositions in order to elucidate changes that may occur in both the groundwater compositions, and rock mineralogies and compositions under conditions which may be encountered in a high-level radioactive waste repository. Shales with primarily illitic or smectitic compositions were the focus of this study. The reactions were run at the ambient temperatures of the groundwaters and to temperatures as high as 250°C, the approximate temperature maximum expected in a repository. All modeling assumed that equilibrium was achieved and treated the rock and water assemblage as a closed system. Graphite was used as a proxy mineral for organic matter in the shales. The results show that the presence of even a very small amount of reducing mineral has a large influence on the redox state of the groundwaters, and that either pyrite or graphite provides essentially the same results, with slight differences in dissolved C, Fe and S concentrations. The thermodynamic data base is inadequate at the present time to fully evaluate the speciation of dissolved carbon, due to the paucity of thermodynamic data for organic compounds. In the illitic cases the groundwaters resulting from interaction at elevated temperatures are acid, while the smectitic cases remain alkaline, although the final equilibrium mineral assemblages are quite similar. 10 refs., 8 figs., 15 tabs

  10. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  11. Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.

    Science.gov (United States)

    Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David

    2018-07-01

    To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05), and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling.
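
    A stripped-down version of one iteration of such a pipeline, bootstrap resampling, VIF-based multicollinearity reduction, and a search for the BIC-minimising logistic model, is sketched below. It substitutes an exhaustive subset search and binary logistic regression for the paper's genetic algorithm and ordinal regression, and all data are synthetic.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Synthetic cohort: 300 patients, 6 candidate predictors (stand-ins for
# clinical factors and dose-volume principal components).
n, p = 300, 6
X = rng.normal(size=(n, p))
X[:, 5] = X[:, 0] + 0.05 * rng.normal(size=n)      # nearly collinear pair
y = (rng.random(n) <
     1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.6 * X[:, 2])))).astype(int)

def vif_filter(X, limit=5.0):
    """Drop columns until all variance inflation factors are below limit."""
    cols = list(range(X.shape[1]))
    while True:
        vifs = []
        for c in cols:
            others = [d for d in cols if d != c]
            beta, *_ = np.linalg.lstsq(X[:, others], X[:, c], rcond=None)
            resid = X[:, c] - X[:, others] @ beta
            r2 = 1 - resid.var() / X[:, c].var()
            vifs.append(1 / (1 - r2))
        worst = int(np.argmax(vifs))
        if vifs[worst] < limit:
            return cols
        cols.pop(worst)

def bic(X, y, cols):
    # Large C approximates an unpenalised maximum-likelihood fit; the
    # intercept is common to all candidates and omitted from the count.
    m = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, cols], y)
    s = X[:, cols] @ m.coef_[0] + m.intercept_[0]
    ll = -np.sum(np.log(1 + np.exp(-(2 * y - 1) * s)))
    return len(cols) * np.log(len(y)) - 2 * ll

# One bootstrap iteration (the paper repeats this 100 times).
idx = rng.integers(0, n, n)
Xb, yb = X[idx], y[idx]
keep = vif_filter(Xb)
best = min((c for r in range(1, len(keep) + 1)
            for c in combinations(keep, r)),
           key=lambda c: bic(Xb, yb, list(c)))
print("kept after VIF:", keep, "| BIC-optimal subset:", best)
```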

  12. Deriving user-informed climate information from climate model ensemble results

    Science.gov (United States)

    Huebener, Heike; Hoffmann, Peter; Keuler, Klaus; Pfeifer, Susanne; Ramthun, Hans; Spekat, Arne; Steger, Christian; Warrach-Sagi, Kirsten

    2017-07-01

    Communication between providers and users of climate model simulation results still needs to be improved. In the German regional climate modeling project ReKliEs-De, a midterm user workshop was conducted to allow the intended users of the project results to assess the preliminary results and to streamline the final project results to their needs. The user feedback highlighted, in particular, the still considerable gap between climate research output and user-tailored input for climate impact research. Two major requests from the user community addressed the selection of sub-ensembles and condensed, easy-to-understand information on the strengths and weaknesses of the climate models involved in the project.

  13. An Integrated DEMATEL-QFD Model for Medical Supplier Selection

    OpenAIRE

    Mehtap Dursun; Zeynep Şener

    2014-01-01

    Supplier selection is considered as one of the most critical issues encountered by operations and purchasing managers to sharpen the company’s competitive advantage. In this paper, a novel fuzzy multi-criteria group decision making approach integrating quality function deployment (QFD) and decision making trial and evaluation laboratory (DEMATEL) method is proposed for supplier selection. The proposed methodology enables to consider the impacts of inner dependence among supplier assessment cr...

  14. Results of the benchmark for blade structural models, part A

    DEFF Research Database (Denmark)

    Lekou, D.J.; Chortis, D.; Belen Fariñas, A.

    2013-01-01

    A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 “Lightweight Rotor” Task 2.2 “Lightweight structural design”. The present document describes the results of the comparison simulation runs that were performed by the partners involved in Task 2.2 of the InnWind.Eu project. The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade.

  15. Preliminary results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Matsumoto, T.; Komine, K.; Arai, S.

    1997-01-01

    A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented

  16. Results of the ITER toroidal field model coil project

    International Nuclear Information System (INIS)

    Salpietro, E.; Maix, R.

    2001-01-01

    Within the scope of the ITER EDA, one of the seven largest projects was devoted to the development, manufacture and testing of a Toroidal Field Model Coil (TFMC). The industry consortium AGAN manufactured the TFMC based on a conceptual design developed by the ITER EDA EU Home Team. The TFMC was completed and assembled in the test facility TOSKA of the Forschungszentrum Karlsruhe in the first half of 2001. The first testing phase started in June 2001 and lasted till October 2001. The first results have shown that the main goals of the project have been achieved.

  17. Evaluation of uncertainties in selected environmental dispersion models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-01-01

    Compliance with standards of radiation dose to the general public has necessitated the use of dispersion models to predict radionuclide concentrations in the environment due to releases from nuclear facilities. Because these models are only approximations of reality and because of inherent variations in the input parameters used in these models, their predictions are subject to uncertainty. Quantification of this uncertainty is necessary to assess the adequacy of these models for use in determining compliance with protection standards. This paper characterizes the capabilities of several dispersion models to predict accurately pollutant concentrations in environmental media. Three types of models are discussed: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations

  18. Models for MOX fuel behaviour. A selective review

    International Nuclear Information System (INIS)

    Massih, Ali R.

    2006-01-01

    This report reviews the basic physical properties of light water reactor mixed-oxide (MOX) fuel, comprising nuclear characteristics and thermal properties such as melting temperature, thermal conductivity, thermal expansion, and heat capacity, and compares these with the properties of conventional UO2 fuel. These properties are generally well understood for MOX fuel and are well described by appropriate models developed for engineering analysis. Moreover, certain modelling approaches for MOX fuel in-reactor behaviour, regarding densification, swelling, fission product gas release, helium release, fuel creep and grain growth, are evaluated and compared with the models for UO2. In MOX fuel the presence of plutonium-rich agglomerates adds to the complexity of fuel behaviour on the micro scale. In addition, we survey the recent fuel performance experience and post-irradiation examinations on several MOX fuel types. We discuss the data from these examinations, regarding densification, swelling, fission product gas release and the evolution of the microstructure during irradiation. The results of our review indicate that in general MOX fuel has a higher fission gas release and helium release than UO2 fuel. Part of this increase is due to the higher operating temperatures of MOX fuel relative to UO2 fuel, owing to the lower thermal conductivity of the MOX material. But this effect by itself seems to be insufficient to account for the difference in the observed fission gas release of UO2 vs. MOX fuel. Furthermore, the irradiation-induced creep rate of MOX fuel is higher than that of UO2. This effect can reduce the pellet-clad interaction intensity in fuel rods. Finally, we suggest that certain physically based approaches discussed in the report be implemented in the fuel performance code to account for the behaviour of MOX fuel during irradiation

  19. Models for MOX fuel behaviour. A selective review

    Energy Technology Data Exchange (ETDEWEB)

    Massih, Ali R. [Quantum Technologies AB, Uppsala Science Park (Sweden)

    2006-12-15

    This report reviews the basic physical properties of light water reactor mixed-oxide (MOX) fuel, comprising nuclear characteristics and thermal properties such as melting temperature, thermal conductivity, thermal expansion, and heat capacity, and compares these with the properties of conventional UO{sub 2} fuel. These properties are generally well understood for MOX fuel and are well described by appropriate models developed for engineering analysis. Moreover, certain modelling approaches for MOX fuel in-reactor behaviour, regarding densification, swelling, fission product gas release, helium release, fuel creep and grain growth, are evaluated and compared with the models for UO{sub 2}. In MOX fuel the presence of plutonium-rich agglomerates adds to the complexity of fuel behaviour on the micro scale. In addition, we survey the recent fuel performance experience and post-irradiation examinations on several MOX fuel types. We discuss the data from these examinations, regarding densification, swelling, fission product gas release and the evolution of the microstructure during irradiation. The results of our review indicate that in general MOX fuel has a higher fission gas release and helium release than UO{sub 2} fuel. Part of this increase is due to the higher operating temperatures of MOX fuel relative to UO{sub 2} fuel, owing to the lower thermal conductivity of the MOX material. But this effect by itself seems to be insufficient to account for the difference in the observed fission gas release of UO{sub 2} vs. MOX fuel. Furthermore, the irradiation-induced creep rate of MOX fuel is higher than that of UO{sub 2}. This effect can reduce the pellet-clad interaction intensity in fuel rods. Finally, we suggest that certain physically based approaches discussed in the report be implemented in the fuel performance code to account for the behaviour of MOX fuel during irradiation.

  20. Ureaplasma parvum undergoes selection in utero resulting in genetically diverse isolates colonizing the chorioamnion of fetal sheep.

    Science.gov (United States)

    Dando, Samantha J; Nitsos, Ilias; Polglase, Graeme R; Newnham, John P; Jobe, Alan H; Knox, Christine L

    2014-02-01

    Ureaplasmas are the microorganisms most frequently isolated from the amniotic fluid of pregnant women and can cause chronic intrauterine infections. These tiny bacteria are thought to undergo rapid evolution and exhibit a hypermutatable phenotype; however, little is known about how ureaplasmas respond to selective pressures in utero. Using an ovine model of chronic intraamniotic infection, we investigated if exposure of ureaplasmas to subinhibitory concentrations of erythromycin could induce phenotypic or genetic indicators of macrolide resistance. At 55 days gestation, 12 pregnant ewes received an intraamniotic injection of a nonclonal, clinical Ureaplasma parvum strain followed by (i) erythromycin treatment (intramuscularly, 30 mg/kg/day, n = 6) or (ii) saline (intramuscularly, n = 6) at 100 days gestation. Fetuses were then delivered surgically at 125 days gestation. Despite injecting the same inoculum into all the ewes, significant differences between amniotic fluid and chorioamnion ureaplasmas were detected following chronic intraamniotic infection. Numerous polymorphisms were observed in domain V of the 23S rRNA gene of ureaplasmas isolated from the chorioamnion (but not the amniotic fluid), resulting in a mosaiclike sequence. Chorioamnion isolates also harbored the macrolide resistance genes erm(B) and msr(D) and were associated with variable roxithromycin minimum inhibitory concentrations. Remarkably, this variability occurred independently of exposure of ureaplasmas to erythromycin, suggesting that low-level erythromycin exposure does not induce ureaplasmal macrolide resistance in utero. Rather, the significant differences observed between amniotic fluid and chorioamnion ureaplasmas suggest that different anatomical sites may select for ureaplasma subtypes within nonclonal, clinical strains. This may have implications for the treatment of intrauterine ureaplasma infections.

  1. Selection of terrestrial transfer factors for radioecological assessment models and regulatory guides

    International Nuclear Information System (INIS)

    Ng, Y.C.; Hoffman, F.O.

    1983-01-01

    A parameter value for a radioecological assessment model is not a single value but a distribution of values about a central value. The sources that contribute to the variability of transfer factors to predict foodchain transport of radionuclides are enumerated. Knowledge of these sources, judgement in interpreting the available data, consideration of collateral information, and established criteria that specify the desired level of conservatism in the resulting predictions are essential elements when selecting appropriate parameter values for radioecological assessment models and regulatory guides. 39 references, 4 figures, 5 tables

  2. Selection of terrestrial transfer factors for radioecological assessment models and regulatory guides

    Energy Technology Data Exchange (ETDEWEB)

    Ng, Y.C.; Hoffman, F.O.

    1983-01-01

    A parameter value for a radioecological assessment model is not a single value but a distribution of values about a central value. The sources that contribute to the variability of transfer factors to predict foodchain transport of radionuclides are enumerated. Knowledge of these sources, judgement in interpreting the available data, consideration of collateral information, and established criteria that specify the desired level of conservatism in the resulting predictions are essential elements when selecting appropriate parameter values for radioecological assessment models and regulatory guides. 39 references, 4 figures, 5 tables.

  3. Comparison of transient PCRV model test results with analysis

    International Nuclear Information System (INIS)

    Marchertas, A.H.; Belytschko, T.B.

    1979-01-01

    Comparisons are made of transient data derived from simple models of a reactor containment vessel with analytical solutions. This effort is a part of the ongoing process of development and testing of the DYNAPCON computer code. The test results used in these comparisons were obtained from scaled models of the British sodium cooled fast breeder program. The test structure is a scaled model of a cylindrically shaped reactor containment vessel made of concrete. This concrete vessel is prestressed axially by holddown bolts spanning the top and bottom slabs along the cylindrical walls, and is also prestressed circumferentially by a number of cables wrapped around the vessel. For test purposes this containment vessel is partially filled with water, which comes in direct contact with the vessel walls. The explosive charge is immersed in the pool of water and is centrally suspended from the top of the vessel. The tests are very similar to the series of tests made for the COVA experimental program, but the vessel here is the prestressed concrete container. (orig.)

  4. Interval-valued intuitionistic fuzzy multi-criteria model for design concept selection

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-09-01

    Full Text Available This paper presents a new approach for design concept selection by using an integrated Fuzzy Analytical Hierarchy Process (FAHP) and an Interval-valued intuitionistic fuzzy modified TOPSIS (IVIF-modified TOPSIS) model. The integrated model, which uses the improved score function and a weighted normalized Euclidean distance method for the calculation of the separation measures of alternatives from the positive and negative intuitionistic ideal solutions, provides a new approach for the computation of intuitionistic fuzzy ideal solutions. The results of the two approaches are integrated using a reflection defuzzification integration formula. To ensure the feasibility and the rationality of the integrated model, the method is successfully applied for evaluating and selecting some design related problems, including a real-life case study for the selection of the best concept design for a new printed-circuit-board (PCB) and a hypothetical example. The model, which provides a novel alternative, has been compared with similar computational methods in the literature.

  5. N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models

    Science.gov (United States)

    Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.

    2018-01-01

    Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids that simultaneously estimates detection probability. Our formulation has the added benefits of accounting for demographic variation and can generate probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.
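
    The N-mixture likelihood itself (Royle-type, marginalising the latent abundance at each site) is straightforward to code; the sketch below uses simulated counts and an invented depth covariate, not the Trinity River data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, poisson

rng = np.random.default_rng(11)

# Simulated fish counts: 30 habitat units, 3 repeat passes each.
# True abundance depends on depth; detection probability is constant.
n_sites, n_visits, K = 30, 3, 100          # K = upper bound for the N-sum
depth = rng.uniform(0.2, 2.0, n_sites)
lam_true = np.exp(0.5 + 0.8 * depth)       # invented covariate effect
N = rng.poisson(lam_true)
y = rng.binomial(N[:, None], 0.4, (n_sites, n_visits))   # p_detect = 0.4

def neg_log_lik(params):
    b0, b1, logit_p = params
    lam = np.exp(b0 + b1 * depth)
    p = 1 / (1 + np.exp(-logit_p))
    ll = 0.0
    for i in range(n_sites):
        # Marginalise the latent abundance N_i >= max observed count.
        n_vals = np.arange(y[i].max(), K)
        site_lik = (poisson.pmf(n_vals, lam[i]) *
                    np.prod(binom.pmf(y[i][:, None], n_vals, p), axis=0)).sum()
        ll += np.log(site_lik)
    return -ll

fit = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, logit_p = fit.x
print("depth effect:", round(b1, 2),
      "detection:", round(1 / (1 + np.exp(-logit_p)), 2))
```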

  6. Research on Methods for Discovering and Selecting Cloud Infrastructure Services Based on Feature Modeling

    Directory of Open Access Journals (Sweden)

    Huamin Zhu

    2016-01-01

    Full Text Available Nowadays more and more cloud infrastructure service providers are providing large numbers of service instances which are a combination of diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and the inadequate research on systematic discovery and selection methods have exposed difficulties in discovering and choosing services for users. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on the description of the cloud infrastructure service, a systematic discovery and selection method for cloud infrastructure services is proposed. The automatic analysis techniques of the feature model are introduced to verify the model’s validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where the subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.

  7. INTRAVAL Finnsjoen Test - modelling results for some tracer experiments

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1994-09-01

    This report presents the results within Phase II of the INTRAVAL study. Migration experiments performed at the Finnsjoen test site were investigated. The study was done to gain an improved understanding not only of the mechanisms of tracer transport, but also of the accuracy and limitations of the model used. The model is based on the concept of a dual porosity medium, taking into account one-dimensional advection, longitudinal dispersion, sorption onto the fracture surfaces, diffusion into connected pores of the matrix rock, and sorption onto matrix surfaces. The number of independent water-carrying zones, represented either as planar fractures or tube-like veins, may be greater than one, and the sorption processes are described by either linear or non-linear Freundlich isotherms assuming instantaneous sorption equilibrium. The diffusion of the tracer out of the water-carrying zones into the connected pore space of the adjacent rock is calculated perpendicular to the direction of the advective/dispersive flow. In the analysis, the fluid flow parameters are calibrated against the measured breakthrough curves for the conservative tracer (iodide). Subsequent fits to the experimental data for the two sorbing tracers, strontium and cesium, then involve element-dependent parameters providing information on the sorption processes and on their representation in the model. The methodology of fixing all parameters except those for sorption with breakthrough curves for non-sorbing tracers generally worked well. The investigation clearly demonstrates the necessity of taking into account pump flow rate variations at both boundaries. If this is not done, reliable conclusions on transport mechanisms or geometrical factors cannot be reached. A two-flow-path model reproduces the measured data much better than a single-flow-path concept. (author) figs., tabs., 26 refs
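
    For orientation, the sketch below evaluates the classical Ogata-Banks breakthrough curve for one-dimensional advection-dispersion with a continuous step input; it omits the matrix diffusion, sorption, and multiple flow paths that the report's dual porosity model includes. All parameter values are hypothetical.

        import numpy as np
        from scipy.special import erfc

        def breakthrough(x, t, v, D):
            """Ogata-Banks solution: relative concentration C/C0 at distance x
            and time t for 1D advection-dispersion with a step input."""
            a = (x - v * t) / (2.0 * np.sqrt(D * t))
            b = (x + v * t) / (2.0 * np.sqrt(D * t))
            return 0.5 * (erfc(a) + np.exp(v * x / D) * erfc(b))

        # Hypothetical flow-path parameters: 30 m path, 1e-4 m/s velocity,
        # 5e-4 m^2/s longitudinal dispersion coefficient.
        t = np.linspace(1.0, 1e6, 500)           # seconds
        c = breakthrough(x=30.0, t=t, v=1e-4, D=5e-4)
        print(c[-1])  # approaches 1 as the tracer front passes

    Matrix diffusion and sorption would delay and flatten this curve; fitting the conservative tracer first, as the report does, pins down v and D before the sorption parameters are estimated.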

  8. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However, when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters that will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly near the Drake Passage.
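
    A minimal sketch of the weighted least squares projection described above, with a diagonal regularization term standing in for the added process-noise constraint. The synthetic basin patterns, observation weights, and alpha value are assumptions, not the study's actual configuration.

        import numpy as np

        def forward_model(A, b, w, alpha):
            """Weighted least squares projection of gridded observations b onto
            predefined basin patterns (columns of A); alpha regularizes the
            normal equations in place of an explicit process-noise constraint."""
            W = np.diag(w)                          # observation weights
            lhs = A.T @ W @ A + alpha * np.eye(A.shape[1])
            rhs = A.T @ W @ b
            return np.linalg.solve(lhs, rhs)

        # Hypothetical setup: 1000 grid cells, 12 local basins.
        rng = np.random.default_rng(0)
        A = rng.random((1000, 12))                  # basin sensitivity patterns
        x_true = rng.normal(size=12)                # "truth" basin mass changes
        b = A @ x_true + rng.normal(scale=0.1, size=1000)
        x_hat = forward_model(A, b, w=np.ones(1000), alpha=1e-3)
        print(np.max(np.abs(x_hat - x_true)))       # small recovery error

    The "truth" simulation strategy in the abstract amounts to running exactly this kind of recovery on a known signal and tuning the basin layout and alpha until the systematic error is acceptable.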

  9. Some exact results for the three-layer Zamolodchikov model

    International Nuclear Information System (INIS)

    Boos, H.E.; Mangazeev, V.V.

    2001-01-01

    In this paper we continue the study of the three-layer Zamolodchikov model started in our previous works (H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 3041-3054 and H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298). We analyse numerically the solutions to the Bethe ansatz equations obtained in H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298. We consider two regimes, I and II, which differ by the signs of the spherical sides: (a_1, a_2, a_3) → (-a_1, -a_2, -a_3). We accept the two-line hypothesis for regime I and the one-line hypothesis for regime II. In the thermodynamic limit we derive integral equations for the distribution densities and solve them exactly. We calculate the partition function for the three-layer Zamolodchikov model and check the compatibility of this result with the functional relations obtained in H.E. Boos, V.V. Mangazeev, J. Phys. A 32 (1999) 5285-5298. We also perform some numerical checks of our results.

  10. Preliminary time-phased TWRS process model results

    International Nuclear Information System (INIS)

    Orme, R.M.

    1995-01-01

    This report documents the first phase of efforts to model the retrieval and processing of Hanford tank waste within the constraints of an assumed tank farm configuration. This time-phased approach simulates a first try at a retrieval sequence, the batching of waste through retrieval facilities, the batching of retrieved waste through enhanced sludge washing, the batching of liquids through pretreatment and low-level waste (LLW) vitrification, and the batching of pretreated solids through high-level waste (HLW) vitrification. The results reflect the outcome of an assumed retrieval sequence that has not been tailored with respect to accepted measures of performance. The batch data, composition variability, and final waste volume projections in this report should be regarded as tentative. Nevertheless, the results provide interesting insights into time-phased processing of the tank waste. Inspection of the composition variability, for example, suggests modifications to the retrieval sequence that will further improve the uniformity of feed to the vitrification facilities. This model will be a valuable tool for evaluating suggested retrieval sequences and establishing a time-phased processing baseline. An official recommendation on the tank retrieval sequence will be made in September 1995.

  11. An application of locally linear model tree algorithm with combination of feature selection in credit scoring

    Science.gov (United States)

    Siami, Mohammad; Gholamian, Mohammad Reza; Basiri, Javad

    2014-10-01

    Nowadays, credit scoring is one of the most important topics in the banking sector. Credit scoring models have been widely used to facilitate the process of credit assessment. In this paper, the locally linear model tree algorithm (LOLIMOT) was applied to evaluate its performance in predicting customers' credit status. The algorithm was adapted to the credit scoring domain by means of data fusion and feature selection techniques. Two real-world credit data sets - Australian and German - from the UCI machine learning database were selected to demonstrate the performance of the new classifier. The analytical results indicate that the improved LOLIMOT significantly increases the prediction accuracy.
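
    To make the local-model idea concrete, the sketch below fits one affine model per Gaussian validity region and blends predictions by normalized validities. It is a fixed-partition simplification under stated assumptions: the real LOLIMOT algorithm grows the partition incrementally by axis-orthogonal splits, and the data here are hypothetical.

        import numpy as np

        def fit_local_models(X, y, centers, sigma):
            """Fit one affine model per Gaussian validity region (a fixed-partition
            sketch of the locally linear model tree idea)."""
            # Normalized Gaussian validity of every sample in every region.
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            phi = np.exp(-0.5 * d2 / sigma**2)
            phi /= phi.sum(axis=1, keepdims=True)

            Xb = np.hstack([X, np.ones((len(X), 1))])   # affine local models
            models = []
            for j in range(len(centers)):
                w = phi[:, j]
                # Weighted least squares for local model j.
                lhs = Xb.T @ (w[:, None] * Xb)
                rhs = Xb.T @ (w * y)
                models.append(np.linalg.solve(lhs, rhs))
            return np.array(models)

        def predict(X, centers, sigma, models):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            phi = np.exp(-0.5 * d2 / sigma**2)
            phi /= phi.sum(axis=1, keepdims=True)
            Xb = np.hstack([X, np.ones((len(X), 1))])
            # Validity-weighted blend of the local linear predictions.
            return np.einsum('ij,ik,jk->i', phi, Xb, models)

        # Hypothetical 1-D demo: piecewise behaviour captured by two local models.
        rng = np.random.default_rng(1)
        X = rng.uniform(0, 10, (200, 1))
        y = np.where(X[:, 0] < 5, 2 * X[:, 0], 10 - X[:, 0]) + rng.normal(0, 0.1, 200)
        centers = np.array([[2.5], [7.5]])
        models = fit_local_models(X, y, centers, sigma=2.0)
        print(predict(np.array([[1.0], [9.0]]), centers, 2.0, models))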

  12. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    Science.gov (United States)

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz's Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial model provided a better description of the probability distribution for seven of the 11 insect datasets than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
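
    A minimal sketch of the model-comparison step: fit Poisson and negative binomial distributions to a zero-heavy count sample by maximum likelihood and compare AICs. The counts and starting values are hypothetical, and the paper's full comparison also covers log-normal and zero-inflated models.

        import numpy as np
        from scipy import stats, optimize

        def aic(ll, k):
            """Akaike's information criterion for log-likelihood ll and k parameters."""
            return 2 * k - 2 * ll

        # Hypothetical zero-heavy insect counts from 20 quadrats.
        y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 3, 5, 8, 12, 15])

        # Poisson: the MLE of the mean is the sample mean (one parameter).
        lam = y.mean()
        ll_pois = stats.poisson.logpmf(y, lam).sum()

        # Negative binomial: maximize over size r > 0 and success probability p.
        def nb_nll(params):
            r = np.exp(params[0])
            p = 1.0 / (1.0 + np.exp(-params[1]))
            return -stats.nbinom.logpmf(y, r, p).sum()

        fit = optimize.minimize(nb_nll, x0=[0.0, 0.0])
        ll_nb = -fit.fun

        print('Poisson AIC :', aic(ll_pois, 1))
        print('NegBin  AIC :', aic(ll_nb, 2))  # lower wins; overdispersion favours NB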

  13. A two-temperature model for selective photothermolysis laser treatment of port wine stains

    International Nuclear Information System (INIS)

    Li, D.; Wang, G.X.; He, Y.L.; Kelly, K.M.; Wu, W.J.; Wang, Y.X.; Ying, Z.X.

    2013-01-01

    Selective photothermolysis is the basic principle for laser treatment of vascular malformations such as port wine stain birthmarks (PWS). During cutaneous laser surgery, blood inside blood vessels is heated due to selective absorption of laser energy, while the surrounding normal tissue is spared. As a result, the blood and the surrounding tissue experience a local thermodynamic non-equilibrium condition. Traditionally, the PWS laser treatment process was simulated by a discrete-blood-vessel model that simplifies blood vessels into parallel cylinders buried in a multi-layer skin model. In this paper, PWS skin is treated as a porous medium made of tissue matrix and blood in the dermis. A two-temperature model is constructed following the local thermal non-equilibrium theory of porous media. Both transient and steady heat conduction problems are solved in a unit cell for the interfacial heat transfer between blood vessels and the surrounding tissue to close the present two-temperature model. The present two-temperature model is validated by good agreement with results from the discrete-blood-vessel model. The characteristics of the present two-temperature model are further illustrated through a comparison with the previously used homogeneous model, in which a local thermodynamic equilibrium assumption between the blood and the surrounding tissue is employed. -- Highlights: • Local thermal non-equilibrium theory is adapted to the field of laser dermatology. • A transient interfacial heat transfer coefficient between the two phases is presented. • Less PWS blood vessel micro-structure information is required in the present model. • Good agreement is shown between the present model and the classical discrete-blood-vessel model.
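
    The following lumped sketch illustrates the local thermal non-equilibrium idea with two coupled energy balances: laser energy is deposited in the blood phase, and the tissue phase is heated only through interfacial exchange. All property values, the exchange coefficient, and the pulse parameters are order-of-magnitude assumptions, and spatial conduction is ignored, so this is not the paper's porous-medium model.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Hypothetical lumped two-temperature sketch (temperatures in deg C).
        rho_c_b = 3.6e6      # volumetric heat capacity of blood, J/(m^3 K)
        rho_c_t = 3.8e6      # volumetric heat capacity of tissue, J/(m^3 K)
        h_v = 1.0e9          # interfacial (volumetric) exchange coefficient, W/(m^3 K)
        S = 1.0e11           # laser heat source in blood during the pulse, W/m^3
        t_pulse = 1.5e-3     # pulse duration, s

        def rhs(t, T):
            Tb, Tt = T
            src = S if t < t_pulse else 0.0
            # Blood gains laser energy and loses heat to tissue; tissue only gains.
            dTb = (src - h_v * (Tb - Tt)) / rho_c_b
            dTt = (h_v * (Tb - Tt)) / rho_c_t
            return [dTb, dTt]

        sol = solve_ivp(rhs, [0.0, 0.01], [37.0, 37.0], max_step=5e-5)
        print(sol.y[0].max(), sol.y[1].max())  # blood peaks well above tissue

    The transient gap between the two maxima is exactly the non-equilibrium effect that a single-temperature (homogeneous) model cannot represent.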

  14. 78 FR 20148 - Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in...

    Science.gov (United States)

    2013-04-03

    ... procedure acceptable to the NRC staff for providing summary details of mathematical modeling methods used in... NUCLEAR REGULATORY COMMISSION [NRC-2013-0062] Reporting Procedure for Mathematical Models Selected... Regulatory Guide (RG) 4.4, "Reporting Procedure for Mathematical Models Selected to Predict Heated Effluent...

  15. Assessment of the quality of test results from selected civil engineering material testing laboratories in Tanzania

    CSIR Research Space (South Africa)

    Mbawala, SJ

    2017-12-01

    Full Text Available Civil and geotechnical engineering material testing laboratories are expected to produce accurate and reliable test results. However, the ability of laboratories to produce accurate and reliable test results depends on many factors, among others...

  16. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    Directory of Open Access Journals (Sweden)

    Junbao Zheng

    2012-03-01

    Full Text Available Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimension of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We conclude that 6~8 channels of the model, with a principal component feature vector covering at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
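
    A minimal sketch of the dimension-reduction step using scikit-learn, retaining just enough principal components to reach 90% cumulative explained variance. The synthetic sensor-array data are an assumption for illustration, not the wine or tea data sets.

        import numpy as np
        from sklearn.decomposition import PCA

        # Hypothetical sensor-array responses: 60 samples x 16 gas sensors,
        # mixed to introduce correlations between channels.
        rng = np.random.default_rng(42)
        X = rng.normal(size=(60, 16)) @ rng.normal(size=(16, 16))

        # Keep the smallest number of principal components whose cumulative
        # explained variance reaches 90%; these become the feature vector.
        pca = PCA(n_components=0.90)
        features = pca.fit_transform(X)

        print(features.shape[1], 'components retained')
        print(pca.explained_variance_ratio_.cumsum()[-1])  # >= 0.90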

  17. A General Model of Negative Frequency Dependent Selection Explains Global Patterns of Human ABO Polymorphism.

    Directory of Open Access Journals (Sweden)

    Fernando A Villanea

    Full Text Available The ABO locus in humans is characterized by elevated heterozygosity and very similar allele frequencies among populations scattered across the globe. Using knowledge of ABO protein function, we generated a simple model of asymmetric negative frequency dependent selection and genetic drift to explain the maintenance of ABO polymorphism and its loss in human populations. In our models, regardless of the strength of selection, large effective population sizes result in ABO allele frequencies that closely match those observed in most continental populations. Populations must be moderately small to fall out of equilibrium and lose either the A or B allele (N_e ≤ 50), and much smaller (N_e ≤ 25) for the complete loss of diversity, which nearly always involved the fixation of the O allele. A pattern of low heterozygosity at the ABO locus where loss of polymorphism occurs in our model is consistent with small populations, such as Native American populations. This study provides a general evolutionary model to explain the observed global patterns of polymorphism at the ABO locus and the pattern of allele loss in small populations. Moreover, these results inform the range of population sizes associated with the recent human colonization of the Americas.
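
    A Wright-Fisher-style sketch of the central mechanism, assuming a simple symmetric form of negative frequency dependent selection (the paper's model is asymmetric): each allele's fitness declines with its own frequency, and multinomial sampling of gametes supplies the drift. Initial frequencies, the selection coefficient, and generation counts are hypothetical.

        import numpy as np

        def simulate_abo(ne, generations=2000, s=0.01, seed=0):
            """Wright-Fisher sketch of negative frequency dependent selection
            on the A, B, and O alleles in a population of effective size ne."""
            rng = np.random.default_rng(seed)
            freq = np.array([0.25, 0.15, 0.60])      # hypothetical A, B, O start
            for _ in range(generations):
                w = 1.0 - s * freq                   # rarer alleles are fitter
                p = freq * w
                p /= p.sum()
                counts = rng.multinomial(2 * ne, p)  # drift: sample 2*Ne gametes
                freq = counts / (2 * ne)
            return freq

        print(simulate_abo(ne=10000))  # large Ne: polymorphism is maintained
        print(simulate_abo(ne=25))     # small Ne: drift overwhelms selection,
                                       # and an allele is often lost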

  18. Different resource allocation strategies result from selection for litter size at weaning in rabbit does

    DEFF Research Database (Denmark)

    Savietto, D; Cervera, C; Rodenas, L

    2014-01-01

    diet. The litter size was lower for female rabbits housed in both NF and HC environments, but the extent and timing of this reduction differed between generations. In challenging conditions (NF and HC), the average reduction in the reproductive performance of female rabbits from ... a greater reduction at the 3rd parturition (−3.53 kits born alive; P ... differences between generations in digestible energy intake, milk yield and accretion, and use of body reserves throughout lactation in NC, HC and NF, which together indicate that there were ... different resource allocation strategies in the animals from the different generations. Selection to increase litter size at weaning led to increased reproductive robustness at the onset of an environmental constraint, but failure to sustain the reproductive liability when the challenge was maintained ...

  19. Lyme Neuroborreliosis: Preliminary Results from an Urban Referral Center Employing Strict CDC Criteria for Case Selection

    Directory of Open Access Journals (Sweden)

    David S. Younger

    2010-01-01

    Full Text Available Lyme neuroborreliosis, or "neurological Lyme disease", was evidenced in 2 of 23 patients subjected to the strict case-selection criteria of the Centers for Disease Control and Prevention, employing a two-tier test to detect antibodies to Borrelia burgdorferi, at a single institution. One patient had symptomatic polyradiculoneuritis, dysautonomia, and serological evidence of early infection; another had symptomatic small fiber sensory neuropathy, distal polyneuropathy, dysautonomia, and serological evidence of late infection. In the remaining patients, symptoms initially ascribed to Lyme disease were probably unrelated to B. burgdorferi infection. Our findings suggest early susceptibility and protracted involvement of the nervous system, most likely due to the immunological effects of B. burgdorferi infection, although the exact mechanisms remain uncertain.

  20. Q selection for an electro-optical earth imaging system: theoretical and experimental results.

    Science.gov (United States)

    Cochrane, Andy; Schulz, Kevin; Kendrick, Rick; Bell, Ray

    2013-09-23

    This paper explores practical design considerations for selecting Q for an electro-optical earth imaging system, where Q is defined as (λ · FN) / pixel pitch, with FN the f-number. Analytical methods are used to show that, under imaging conditions with high SNR, increasing Q with a fixed aperture cannot degrade image quality, regardless of the angular smear rate of the system. The potential for image quality degradation under low SNR is bounded by an increase in detector noise scaling as Q. An imaging test bed is used to collect representative imagery for various Q configurations. The test bed includes real-world errors such as image smear and haze. The value of Q is varied by changing the focal length of the imaging system. Imagery is presented over a broad range of parameters.
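
    Computing Q itself is a one-line application of the definition above; the sketch below evaluates it for hypothetical system values, with the conventional reading that Q = 2 places the detector Nyquist frequency at the optical cutoff (critical sampling).

        # Q = lambda * FN / pixel pitch; values below are hypothetical.
        wavelength = 0.55e-6   # m, mid-visible
        f_number = 10.0        # FN of the optical system
        pixel_pitch = 3.5e-6   # m

        q = wavelength * f_number / pixel_pitch
        print(f"Q = {q:.2f}")  # ~1.57: undersampled relative to Q = 2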