VEMAP 1: Selected Model Results
National Aeronautics and Space Administration — The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) was a multi-institutional, international effort addressing the response of biogeography and...
Selection of LHCb Physics Results
Schmidt, Burkhard
2013-05-01
LHCb is a dedicated flavour physics experiment at the LHC searching for physics beyond the Standard Model through precision measurements of CP-violating observables and the study of very rare decays of beauty- and charm-flavoured hadrons. In this article a selection of recent LHCb results is presented. Unless otherwise stated, the results are based on an integrated luminosity of 1 fb⁻¹ accumulated during the year 2011 at √s = 7 TeV.
Gampe, D.; Ludwig, R.
2017-12-01
Regional Climate Models (RCMs) that downscale General Circulation Models (GCMs) are the primary tool for projecting future climate and serve as input to many impact models that assess the related changes and impacts under such climate conditions. Such RCMs are made available through the Coordinated Regional climate Downscaling Experiment (CORDEX). The ensemble of models provides a range of possible future climate changes around the ensemble mean climate change signal. The model outputs, however, are prone to biases compared to regional observations. Bias correction of these deviations is a crucial step in the impact modelling chain, as it allows the reproduction of historic conditions of, e.g., river discharge. However, the detection and quantification of model biases depend strongly on the selected regional reference data set. Additionally, due to computational constraints it is usually not feasible in practice to use entire ensembles of climate simulations, with all members, as input for impact models that provide information to support decision-making. Although more and more studies base model selection on preserving the climate model spread, selection based on validity, i.e. the representation of historic conditions, is still a widely applied approach. In this study, several available reference data sets for precipitation are selected to detect the model bias for the reference period 1989 - 2008 over the alpine catchment of the Adige River in Northern Italy. The reference data sets originate from various sources, such as station data or reanalysis. These data sets are remapped to the common RCM grid at 0.11° resolution, and several indicators, such as dry and wet spells, extreme precipitation and general climatology, are calculated to evaluate the capability of the RCMs to reproduce the historical conditions. The resulting RCM spread is compared against the spread of the reference data sets to determine the related uncertainties and
International Nuclear Information System (INIS)
Lahodova, M.
2001-01-01
A modernized fuel system and advanced fuel for operation to high burnup are currently used in the Dukovany NPP. Core reloads are evaluated using computer codes for the thermomechanical behaviour of the most loaded fuel rods. The paper presents results of parametric calculations performed with the NRI Rez integral code PIN, version 2000 (PIN2k), to assess the influence of fission gas release (FGR) modelling complexity on the achieved results. Representative Dukovany NPP fuel rod irradiation history data are used, and two cases of fuel parameter variables (soft and hard) are chosen for the comparison. The FGR models involved were the GASREL diffusion model developed at NRI Rez plc and the standard Weisman model recommended in the previous version of the PIN integral code. FGR calculation by PIN2k with the GASREL model yields more realistic results than the standard Weisman model. Results for linear power, fuel centre temperature, FGR and gas pressure versus burnup are given for two fuel rods
DEFF Research Database (Denmark)
Beaude, Francois; Atayi, A.; Bourmaud, J.-Y.
2013-01-01
The OPTIMATE1 platform focuses on modelling electricity system and market designs in order to assess current and innovative designs in Europe. The current paper describes the results of the first validation studies conducted with the tool. These studies deal with day-ahead market rules, load fle...
Marchenko, Yulia V.
2012-03-01
Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. Then, this allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
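The selection mechanism described in this abstract can be illustrated with Heckman's classical two-step ("heckit") estimator under normal errors, the baseline that the selection-t model extends. The data-generating process, coefficients, error correlation, and the exclusion-restriction variable `w` below are all invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)            # regressor in both equations
w = rng.normal(size=n)             # exclusion restriction (selection only)
u = rng.normal(size=n)             # selection-equation error
e = 0.5 * u + rng.normal(scale=0.5, size=n)   # outcome error, correlated with u

select = (1.0 + x1 - w + u) > 0    # latent selection rule
y = 2.0 + 3.0 * x1 + e             # outcome, observed only when select is True

# Step 1: probit of the selection indicator on (1, x1, w).
Z = np.column_stack([np.ones(n), x1, w])
def negll(g):
    q = 2 * select - 1             # +1 selected, -1 not selected
    return -np.sum(norm.logcdf(q * (Z @ g)))
g_hat = minimize(negll, np.zeros(3), method="BFGS").x

# Step 2: OLS on the selected sample, adding the inverse Mills ratio
# as a regressor to absorb the selection-induced error mean.
zi = (Z @ g_hat)[select]
imr = norm.pdf(zi) / norm.cdf(zi)
X2 = np.column_stack([np.ones(zi.size), x1[select], imr])
beta, *_ = np.linalg.lstsq(X2, y[select], rcond=None)

# Naive OLS on the selected sample only, for comparison.
Xn = np.column_stack([np.ones(zi.size), x1[select]])
beta_naive, *_ = np.linalg.lstsq(Xn, y[select], rcond=None)
print(beta[:2], beta_naive)
```

With correlated errors, the naive fit is biased while the two-step fit recovers the true intercept and slope (2 and 3 here); replacing the normal errors with Student's t gives the SLt variant the abstract studies.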
Steigner, D.; Steinbrecher, R.; Rappenglück, B.; Gasche, R.; Hansel, A.; Graus, M.; Lindinger, Ch.
2003-04-01
Biogenic volatile organic compounds (BVOCs) play a crucial role in the formation of photo-oxidants and particles through the diverse BVOC degradation pathways. Yet, current estimates of temporal and spatial BVOC emissions, including the specific BVOC mix, are rather vague. This project addresses this issue by the determination of (a) BVOC net emission rates and (b) primary emissions of BVOCs from trees and soils. Measurement campaigns were carried out at the Waldstein site in the Fichtelgebirge in 2001 and 2002. Primary emissions of isoprenoids from the soil and from twigs of Norway spruce (Picea abies [L.] Karst.) and stand fluxes of isoprenoids were quantified by means of the REA technique with in situ GC-FID analysis and GC-MS analysis in the laboratory. Moreover, REA samples obtained by the system were analysed by a PTR-MS. A critical value when using the REA approach is the Businger-Oncley parameter β. For this canopy type a β value of 0.39 (threshold velocity w₀ = 0.6) was determined. The PTR-MS data show clear diurnal variations of ambient air mixing ratios of VOCs such as isoprene and monoterpenes, but also of oxygenated VOCs such as carbonyls and alcohols, and of methylvinylketone (MVK) and methacrolein (MAK), products of isoprene degradation. Four selected trees (Picea abies [L.] Karst.) were intensively screened for primary BVOC emission rates. The most abundant species are β-pinene/sabinene and camphene. They show typical diurnal patterns with high emissions during daytime. Soil emissions of NO reached 250 nmol N m⁻² s⁻¹ at soil temperatures (in 3 cm depth) of 13 °C and at a relative air humidity of 60%. Ambient air mixing ratios of NO near the soil surface reached values of up to 0.7 ppb. NO₂ and ozone mixing ratios varied between 0.1 to 1.5 ppb and 10 to 37 ppb, respectively. As expected, nitrogen oxide emission rates tend to increase with increasing surface temperature. Isoprenoid emission from the soil was low and in general near the detection limit
Zhang, Zhen; Sinha, Samiran; Maiti, Tapabrata; Shipp, Eva
2018-04-01
The accelerated failure time model is a popular model for analyzing censored time-to-event data. Analysis of this model without assuming any parametric distribution for the model error is challenging, and the model complexity is enhanced in the presence of a large number of covariates. We developed a nonparametric Bayesian method for regularized estimation of the regression parameters in a flexible accelerated failure time model. The novelties of our method lie in modeling the error distribution of the accelerated failure time model nonparametrically, modeling the variance as a function of the mean, and adopting a variable selection technique in modeling the mean. The proposed method allows for identifying a set of important regression parameters, estimating survival probabilities, and constructing credible intervals of the survival probabilities. We evaluated the operating characteristics of the proposed method via simulation studies. Finally, we applied our new comprehensive method to analyze the motivating breast cancer data from the Surveillance, Epidemiology, and End Results (SEER) Program, and estimated the five-year survival probabilities for women in the SEER database who were diagnosed with breast cancer between 1990 and 2000.
Atmospheric Deposition Modeling Results
U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...
Energy Technology Data Exchange (ETDEWEB)
Limpert, Steven; Ghosh, Kunal; Wagner, Hannes; Bowden, Stuart; Honsberg, Christiana; Goodnick, Stephen; Bremner, Stephen; Green, Martin
2014-06-09
We report results from coupled optical and electrical Sentaurus TCAD models of a gallium phosphide (GaP) on silicon electron carrier selective contact (CSC) solar cell. Detailed analyses of current and voltage performance are presented for devices having substrate thicknesses of 10 μm, 50 μm, 100 μm and 150 μm, and with GaP/Si interfacial quality ranging from very poor to excellent. Ultimate potential performance was investigated using optical absorption profiles consistent with light trapping schemes of random pyramids with attached and detached rear reflector, and planar with an attached rear reflector. Results indicate Auger-limited open-circuit voltages up to 787 mV and efficiencies up to 26.7% may be possible for front-contacted devices.
Selected results of the Slovak coal research
Directory of Open Access Journals (Sweden)
Hredzák Slavomír
1997-09-01
The contribution gives a review of Slovak brown coal research in the last 10 years. The state and development trends of coal research in Slovakia from the point of view of clean coal technology application are described. Some selected results obtained at the Institute of Geotechnics of the Slovak Academy of Sciences are also introduced.
International Nuclear Information System (INIS)
Martin Llorente, F.
1990-01-01
Models of atmospheric pollutant dispersion are based on mathematical algorithms that describe the transport, diffusion, elimination and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination
Bogiages, Christopher A.; Lotter, Christine
2011-01-01
In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…
Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey
2016-01-01
Future habitat selection studies will benefit from taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...
Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.
Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as an example of wireless local area networks), IEEE 802.16 (as an example of wireless metropolitan area networks) and IEEE 802.15 (as an example of body area networks). Each section on these three systems also discusses, at the end, a set of model implementations that are available today.
Zukowska, Barbara; Pacyna, Jozef; Namiesnik, Jacek
2005-02-01
The ELOISE EU EuroCat project integrated natural and social sciences to link the impacts affecting the coastal sea to the human activities along the catchments. The EuroCat project analysed changes in river catchments and their impact on the inflow area, and linked this information with environmental models. The part of the EU ELOISE EuroCat project focusing on the Vistula River catchment and the Baltic Sea coastal zone was named VisCat. Within the framework of the EU ELOISE EuroCat - VisCat project, CoZMo-POP (Coastal Zone Model for Persistent Organic Pollutants), a non-steady-state multicompartmental mass balance model of long-term chemical fate in the coastal environment or the drainage basin of a large lake, was used. The model is parameterised and tested herein to simulate the long-term fate and distribution of selected HCHs (hexachlorocyclohexanes) and PCBs (polychlorinated biphenyls) in the Gulf of Gdansk and the Vistula River drainage basin. The model can also be used in the future to predict concentrations under various emission scenarios and to support management of economic development and regulation of substance emissions to this environment. However, this would require more extensive future efforts on model parameterisation and validation in order to increase the confidence in current model outputs.
Ochs, M.; Davis, J.A.; Olin, M.; Payne, T.E.; Tweed, C.J.; Askarieh, M.M.; Altmann, S.
2006-01-01
For the safe final disposal and/or long-term storage of radioactive wastes, deep or near-surface underground repositories are being considered world-wide. A central safety feature is the prevention, or sufficient retardation, of radionuclide (RN) migration to the biosphere. To this end, radionuclide sorption is one of the most important processes. Decreasing the uncertainty in radionuclide sorption may contribute significantly to reducing the overall uncertainty of a performance assessment (PA). For PA, sorption is typically characterised by distribution coefficients (Kd values). The conditional nature of Kd requires different estimates of this parameter for each set of geochemical conditions of potential relevance in an RN's migration pathway. As it is not feasible to measure sorption for every set of conditions, the derivation of Kd for PA must rely on data derived from representative model systems. As a result, uncertainty in Kd is largely caused by the need to derive values for conditions not explicitly addressed in experiments. The recently concluded NEA Sorption Project [1] showed that thermodynamic sorption models (TSMs) are uniquely suited to derive Kd as a function of conditions, because they allow a direct coupling of sorption with variable solution chemistry and mineralogy in a thermodynamic framework. The results of the project enable assessment of the suitability of various TSM approaches for PA-relevant applications as well as of the potential and limitations of TSMs to model RN sorption in complex systems. © by Oldenbourg Wissenschaftsverlag.
Genetic search feature selection for affective modeling
DEFF Research Database (Denmark)
Martínez, Héctor P.; Yannakakis, Georgios N.
2010-01-01
Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...
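A genetic search over binary feature masks, of the general kind this abstract describes, can be sketched as follows. The synthetic regression data, the fitness function (penalized residual sum of squares), and all GA settings (population size, truncation selection, one-point crossover, 5% mutation) are assumptions of this sketch, not the authors' affect-modeling setup:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 300, 10
X = rng.normal(size=(n, d))
y = X[:, 0] + 2.0 * X[:, 3] + 0.5 * rng.normal(size=n)   # only features 0, 3 matter

def fitness(mask):
    """Reward low residual sum of squares, charge a fixed cost per feature."""
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -np.sum(y ** 2)
    beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    rss = np.sum((y - X[:, idx] @ beta) ** 2)
    return -rss - 20.0 * idx.size          # complexity penalty (assumed)

pop = rng.integers(0, 2, size=(30, d))     # random initial masks
best, best_score = pop[0].copy(), -np.inf
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    if scores.max() > best_score:          # keep the global best ever seen
        best_score = scores.max()
        best = pop[int(np.argmax(scores))].copy()
    parents = pop[np.argsort(scores)[-10:]]              # truncation selection
    children = parents[rng.integers(0, 10, size=30)].copy()
    mates = parents[rng.integers(0, 10, size=30)]
    cuts = rng.integers(1, d, size=30)                   # one-point crossover
    for i in range(30):
        children[i, cuts[i]:] = mates[i, cuts[i]:]
    children[rng.random(children.shape) < 0.05] ^= 1     # bit-flip mutation
    pop = children
print(np.flatnonzero(best))
```

On this toy problem the search settles on exactly the two informative features; the global-search character (population plus mutation) is what distinguishes it from sequential forward selection.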
Voter models with heterozygosity selection
Czech Academy of Sciences Publication Activity Database
Sturm, A.; Swart, Jan M.
2008-01-01
Vol. 18, No. 1 (2008), pp. 59-99. ISSN 1050-5164. R&D Projects: GA ČR GA201/06/1323; GA ČR GA201/07/0237. Institutional research plan: CEZ:AV0Z10750506. Keywords: heterozygosity selection; rebellious voter model; branching; annihilation; survival; coexistence. Subject RIV: BA - General Mathematics. Impact factor: 1.285, year: 2008
THE TIME DOMAIN SPECTROSCOPIC SURVEY: VARIABLE SELECTION AND ANTICIPATED RESULTS
Energy Technology Data Exchange (ETDEWEB)
Morganson, Eric; Green, Paul J. [Harvard Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02138 (United States); Anderson, Scott F.; Ruan, John J. [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States); Myers, Adam D. [Department of Physics and Astronomy, University of Wyoming, Laramie, WY 82071 (United States); Eracleous, Michael; Brandt, William Nielsen [Department of Astronomy and Astrophysics, 525 Davey Laboratory, The Pennsylvania State University, University Park, PA 16802 (United States); Kelly, Brandon [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Badenes, Carlos [Department of Physics and Astronomy and Pittsburgh Particle Physics, Astrophysics and Cosmology Center (PITT PACC), University of Pittsburgh, 3941 O’Hara St, Pittsburgh, PA 15260 (United States); Bañados, Eduardo [Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg (Germany); Blanton, Michael R. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States); Bershady, Matthew A. [Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 (United States); Borissova, Jura [Instituto de Física y Astronomía, Universidad de Valparaíso, Av. Gran Bretaña 1111, Playa Ancha, Casilla 5030, and Millennium Institute of Astrophysics (MAS), Santiago (Chile); Burgett, William S. [GMTO Corp, Suite 300, 251 S. Lake Ave, Pasadena, CA 91101 (United States); Chambers, Kenneth, E-mail: emorganson@cfa.harvard.edu [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); and others
2015-06-20
We present the selection algorithm and anticipated results for the Time Domain Spectroscopic Survey (TDSS). TDSS is a Sloan Digital Sky Survey (SDSS)-IV Extended Baryon Oscillation Spectroscopic Survey (eBOSS) subproject that will provide initial identification spectra of approximately 220,000 luminosity-variable objects (variable stars and active galactic nuclei) across 7500 deg² selected from a combination of SDSS and multi-epoch Pan-STARRS1 photometry. TDSS will be the largest spectroscopic survey to explicitly target variable objects, avoiding pre-selection on the basis of colors or detailed modeling of specific variability characteristics. Kernel Density Estimate analysis of our target population performed on SDSS Stripe 82 data suggests our target sample will be 95% pure (meaning 95% of objects we select have genuine luminosity variability of a few magnitudes or more). Our final spectroscopic sample will contain roughly 135,000 quasars and 85,000 stellar variables, approximately 4000 of which will be RR Lyrae stars which may be used as outer Milky Way probes. The variability-selected quasar population has a smoother redshift distribution than a color-selected sample, and variability measurements similar to those we develop here may be used to make more uniform quasar samples in large surveys. The stellar variable targets are distributed fairly uniformly across color space, indicating that TDSS will obtain spectra for a wide variety of stellar variables including pulsating variables, stars with significant chromospheric activity, cataclysmic variables, and eclipsing binaries. TDSS will serve as a pathfinder mission to identify and characterize the multitude of variable objects that will be detected photometrically in even larger variability surveys such as the Large Synoptic Survey Telescope.
MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS
International Nuclear Information System (INIS)
Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.
2012-01-01
Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
Efficiently adapting graphical models for selectivity estimation
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2013-01-01
in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...
Adverse selection model regarding tobacco consumption
Directory of Open Access Journals (Sweden)
Dumitru MARIN
2006-01-01
The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System software: one without taxes on tobacco consumption and another with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.
Post-model selection inference and model averaging
Directory of Open Access Journals (Sweden)
Georges Nguefack-Tsague
2011-07-01
Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
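The abstract's observation that a post-model-selection estimator is model averaging with 0-1 random weights can be made concrete with a toy linear regression: AIC picks one of two nested models (0-1 weights), while Akaike weights give a smooth average. The data, true coefficients, and the choice of AIC are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)   # true slope 0.5

def fit_aic(X):
    """OLS fit plus Gaussian AIC (up to an additive constant)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return beta, n * np.log(rss / n) + 2 * X.shape[1]

b0, aic0 = fit_aic(np.ones((n, 1)))                   # intercept-only model
b1, aic1 = fit_aic(np.column_stack([np.ones(n), x]))  # intercept + slope

# Post-model-selection estimator: 0-1 weights on the AIC winner.
pmse_slope = b1[1] if aic1 < aic0 else 0.0
# A smooth alternative: Akaike weights, a simple model-averaging scheme.
delta = np.array([aic0, aic1]) - min(aic0, aic1)
wts = np.exp(-0.5 * delta)
wts /= wts.sum()
avg_slope = wts[0] * 0.0 + wts[1] * b1[1]
print(pmse_slope, avg_slope)
```

The randomness of the 0-1 weights (they depend on the same data used for estimation) is exactly what makes the PMSE's sampling distribution hard to derive.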
Model selection for univariable fractional polynomials.
Royston, Patrick
2017-07-01
Since Royston and Altman's 1994 publication ( Journal of the Royal Statistical Society, Series C 43: 429-467), fractional polynomials have steadily gained popularity as a tool for flexible parametric modeling of regression relationships. In this article, I present fp_select, a postestimation tool for fp that allows the user to select a parsimonious fractional polynomial model according to a closed test procedure called the fractional polynomial selection procedure or function selection procedure. I also give a brief introduction to fractional polynomial models and provide examples of using fp and fp_select to select such models with real data.
Directory of Open Access Journals (Sweden)
Burak Omer Saracoglu
2016-03-01
Purpose: The electricity demand in Turkey has been increasing for some time. Hydropower is one of the major electricity generation types used to compensate for this electricity demand in Turkey. Private investors (domestic and foreign) in the hydropower electricity generation sector have been looking for the most appropriate and satisfactory new private hydropower investment (PHPI) options and opportunities in Turkey. This study aims to present a qualitative multi-attribute decision making (MADM) model that is easy, straightforward, and fast for the selection of the most satisfactory PHPI options during the very early investment stages, when data and information on projects are scarce. Design/methodology/approach: The data and information on the PHPI options were gathered from official records on official websites. A wide and deep literature review was conducted on MADM models and on the hydropower industry. The attributes of the model were identified, selected, clustered and evaluated based on expert decision maker (EDM) opinion and with the help of an open-source search results clustering engine (Carrot2), which also aided comprehension. The PHPI options were clustered according to their main property, installed capacity, to analyze the options in the most appropriate, decidable, informative, understandable and meaningful way. A simple clustering algorithm for the PHPI options was executed in the current study. A template model for the selection of the most satisfactory PHPI options was built in the DEXi (Decision EXpert for Education) and DEXiTree software. Findings: The basic attributes for the selection of the PHPI options were presented, and the aggregate attributes were then defined by bottom-up structuring for the early investment stages. The attributes were also analyzed with the help of Carrot2. The most satisfactory PHPI options in Turkey in the big options data set were selected for each PHPI options cluster by the EDM evaluations in
Sexual selection resulting from extrapair paternity in collared flycatchers.
Sheldon; Ellegren
1999-02-01
Extrapair paternity has been suggested to represent a potentially important source of sexual selection on male secondary sexual characters, particularly in birds with predominantly socially monogamous mating systems. However, relatively few studies have demonstrated sexual selection within single species by this mechanism, and there have been few attempts to assess the importance of extrapair paternity in relation to other mechanisms of sexual selection. We report estimates of sexual selection gradients on male secondary sexual plumage characters resulting from extrapair paternity in the collared flycatcher Ficedula albicollis, and compare the importance of this form of sexual selection with that resulting from variation in mate fecundity. Microsatellite genotyping revealed that 15% of nestlings, distributed nonrandomly among 33% of broods (N=79), were the result of extrapair copulations. Multivariate selection analyses revealed significant positive directional sexual selection on two uncorrelated secondary sexual characters in males (forehead and wing patch size) when fledgling number was used as the measure of fitness. When number of offspring recruiting to the breeding population was used as the measure of male fitness, selection on these traits appeared to be directional and stabilizing, respectively. Pairwise comparisons of cuckolded and cuckolding males revealed that males that sired young through extrapair copulations had wider forehead patches, and were paired to females that bred earlier, than the males that they cuckolded. Path analysis was used to partition selection on these traits into pathways via mate fecundity and sperm competition, and suggested that the sperm competition pathway accounted for between 64 and 90% of the total sexual selection via the two paths. The selection revealed in these analyses is relatively weak in comparison with many other measures of selection in natural populations. We offer some explanations for the relatively weak
Chemical identification using Bayesian model selection
Energy Technology Data Exchange (ETDEWEB)
Burr, Tom; Fry, H. A. (Herbert A.); McVey, B. D. (Brian D.); Sander, E. (Eric)
2002-01-01
Remote detection and identification of chemicals in a scene is a challenging problem. We introduce an approach that uses some of the image's pixels to establish the background characteristics while other pixels represent the target for which we seek to identify all chemical species present. This leads to a generalized least squares problem in which we focus on 'subset selection' to identify the chemicals thought to be present. Bayesian model selection allows us to approximate the posterior probability that each chemical in the library is present by adding the posterior probabilities of all the subsets which include the chemical. We present results using realistic simulated data for the case of 1 to 5 chemicals present in each target, and compare performance to a hybrid forward-backward stepwise selection procedure based on the F statistic.
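The subset-selection idea can be sketched with a BIC approximation to the subset posteriors: enumerate all subsets of a small library, convert each subset's BIC into an approximate posterior probability, and sum the probabilities of the subsets containing each chemical. The random "spectra", noise level, and the BIC approximation itself are assumptions of this sketch, not the authors' exact generalized least squares formulation:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
m, k = 200, 4                        # spectral channels, library size
library = rng.normal(size=(m, k))    # invented library spectra
# Target pixel: chemicals 0 and 2 present, plus noise.
target = 1.0 * library[:, 0] + 0.7 * library[:, 2] + rng.normal(scale=0.3, size=m)

def bic(subset):
    """BIC of the least-squares fit using only the chemicals in `subset`."""
    if subset:
        Xs = library[:, list(subset)]
        beta, *_ = np.linalg.lstsq(Xs, target, rcond=None)
        rss = np.sum((target - Xs @ beta) ** 2)
    else:
        rss = np.sum(target ** 2)
    return m * np.log(rss / m) + len(subset) * np.log(m)

subsets = [s for r in range(k + 1) for s in combinations(range(k), r)]
bics = np.array([bic(s) for s in subsets])
# exp(-BIC/2) approximates the marginal likelihood; equal priors over subsets.
post = np.exp(-0.5 * (bics - bics.min()))
post /= post.sum()
# P(chemical j present) = sum of posteriors of the subsets containing j.
present = np.array([sum(p for s, p in zip(subsets, post) if j in s)
                    for j in range(k)])
print(present.round(3))
```

Summing over subsets, rather than reporting only the single best subset, is what yields a per-chemical presence probability; full enumeration is feasible here only because the library is tiny.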
Multi-dimensional model order selection
Directory of Open Access Journals (Sweden)
Roemer Florian
2011-01-01
Multi-dimensional model order selection (MOS) techniques achieve improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes, and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.
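For contrast with the multi-dimensional schemes the abstract proposes, a standard matrix-based model order selection baseline is the eigenvalue-based MDL criterion of Wax and Kailath. A minimal sketch, with an invented mixing matrix and white sources standing in for real array data, is:

```python
import numpy as np

rng = np.random.default_rng(3)
p, n, d = 8, 1000, 3            # sensors, snapshots, true model order
A = rng.normal(size=(p, d))     # invented mixing matrix
S = rng.normal(size=(d, n))     # unit-variance source signals
X = A @ S + 0.1 * rng.normal(size=(p, n))   # noisy observations

# Eigenvalues of the sample covariance, in descending order.
lam = np.sort(np.linalg.eigvalsh(X @ X.T / n))[::-1]

def mdl(k):
    """Wax-Kailath MDL criterion for candidate model order k."""
    tail = lam[k:]              # candidate noise eigenvalues
    # log of (geometric mean / arithmetic mean) of the noise eigenvalues
    ratio = np.mean(np.log(tail)) - np.log(np.mean(tail))
    return -n * (p - k) * ratio + 0.5 * k * (2 * p - k) * np.log(n)

order = min(range(p), key=mdl)
print(order)
```

The criterion measures how equal the trailing eigenvalues are (they are equal in expectation under white noise) and penalizes model complexity; it uses only one unfolding of the data, which is exactly the limitation the tensor-based schemes address.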
Reserve selection using nonlinear species distribution models.
Moilanen, Atte
2005-06-01
Reserve design is concerned with the optimal selection of sites for new conservation areas. Spatial reserve design explicitly considers the spatial pattern of the proposed reserve network and the effects of that pattern on reserve cost and/or its ability to maintain species. The vast majority of reserve selection formulations have assumed a linear problem structure, which effectively means that the biological value of a potential reserve site does not depend on the pattern of selected cells. However, spatial population dynamics and autocorrelation cause the biological values of neighboring sites to be interdependent. Habitat degradation may have indirect negative effects on biodiversity in areas neighboring the degraded site as a result of, for example, negative edge effects or lower permeability for animal movement. In this study, I present a formulation and a spatial optimization algorithm for nonlinear reserve selection problems in grid-based landscapes that accounts for interdependent site values. The method is demonstrated using habitat maps and nonlinear habitat models for threatened birds in the Netherlands, and it is shown that near-optimal solutions are found for regions consisting of up to hundreds of thousands of grid cells, a landscape size much larger than those commonly attempted even with linear reserve selection formulations.
Selected sports talent development models
Michal Vičar
2017-01-01
Background: Sports talent in the Czech Republic is generally viewed as a static, stable phenomenon. This stands in contrast with the widespread practice in Anglo-Saxon countries, which emphasises its fluctuant nature and is reflected in the current models describing its development. Objectives: The aim is to introduce current models of talent development in sport. Methods: Comparison and analysis of the following models: Balyi - Long term athlete development model, Côté - Developmen...
Selection of robust methods. Numerical examples and results
Czech Academy of Sciences Publication Activity Database
Víšek, Jan Ámos
2005-01-01
Roč. 21, č. 11 (2005), s. 1-58 ISSN 1212-074X R&D Projects: GA ČR(CZ) GA402/03/0084 Institutional research plan: CEZ:AV0Z10750506 Keywords: robust regression * model selection * uniform consistency of M-estimators Subject RIV: BA - General Mathematics
The linear utility model for optimal selection
Mellenbergh, Gideon J.; van der Linden, Willem J.
A linear utility model is introduced for optimal selection when several subpopulations of applicants are to be distinguished. Using this model, procedures are described for obtaining optimal cutting scores in subpopulations in quota-free as well as quota-restricted selection situations. The cutting
Exploring Several Methods of Groundwater Model Selection
Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar
2017-04-01
Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13, and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using four approaches: (1) rank the models by the root mean square error (RMSE) obtained after UCODE-based model calibration; (2) calculate model probability using the GLUE method; (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC); and (4) evaluate model weights using the fuzzy Multi-Criteria Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and a fuzzy technique for order performance, which identifies the ideal solution by gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in the model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
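The criterion-based ranking in approach (3) is easy to illustrate. The sketch below uses invented observation counts, RMSE values, and parameter counts (not those of the study's six models); for least-squares calibration with Gaussian errors, n*log(RMSE^2) stands in for -2 log-likelihood up to an additive constant, and Akaike weights turn criterion differences into relative model probabilities.

```python
import numpy as np

n_obs = 120                                   # head observations (assumed)
k = np.array([6, 10, 13, 15])                 # parameters per model (assumed)
rmse = np.array([1.90, 1.10, 1.08, 1.05])     # calibration RMSE (assumed)

# For Gaussian errors, n*log(RMSE^2) equals -2*log-likelihood up to a constant
neg2ll = n_obs * np.log(rmse ** 2)
aic = neg2ll + 2 * k
aicc = aic + 2 * k * (k + 1) / (n_obs - k - 1)   # small-sample correction
bic = neg2ll + k * np.log(n_obs)

# Akaike weights: relative plausibility of each model under AICc
delta = aicc - aicc.min()
w = np.exp(-0.5 * delta)
w /= w.sum()
for i in range(len(k)):
    print(f"model {i + 1}: k={k[i]:2d}  AICc={aicc[i]:6.1f}  weight={w[i]:.3f}")
```

With these made-up numbers, plain AIC rewards the 15-parameter model for its marginally lower RMSE, while the small-sample correction in AICc shifts the preference to the 10-parameter model: the same over-parameterization effect the abstract describes.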
Selection of classification models from repository of model for water ...
African Journals Online (AJOL)
This paper proposes a new technique, Model Selection Technique (MST) for selection and ranking of models from the repository of models by combining three performance measures (Acc, TPR and TNR). This technique provides weightage to each performance measure to find the most suitable model from the repository of ...
Thomas, D.L.; Johnson, D.; Griffith, B.
2006-01-01
To model the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. The deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate the models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land-cover-type classification. Results from the first stage of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that, at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic: the highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land-cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a
A Dynamic Model for Limb Selection
Cox, R.F.A; Smitsman, A.W.
2008-01-01
Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups. In
Review and selection of unsaturated flow models
Energy Technology Data Exchange (ETDEWEB)
Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)
1994-04-04
Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.
Review and selection of unsaturated flow models
International Nuclear Information System (INIS)
Reeves, M.; Baker, N.A.; Duguid, J.O.
1994-01-01
Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.
Graphical tools for model selection in generalized linear models.
Murray, K; Heritier, S; Müller, S
2013-11-10
Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are scarce. This article describes graphical methods that assist in the selection of models and the comparison of many different selection criteria. Specifically, we describe, for logistic regression, how to visualise measures of description loss and of model complexity to help resolve the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable for the model building process. We show with two case studies how these proposed tools are useful for learning more about important variables in the data and how they can assist the understanding of the model building process. Copyright © 2013 John Wiley & Sons, Ltd.
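The raw data behind a variable inclusion plot is easy to prototype. The sketch below is a numpy-only simplification that substitutes least squares with best-subset BIC for the article's logistic models and a grid of penalties: refit on bootstrap resamples and tally how often each candidate variable enters the selected model. The design matrix and coefficients are invented.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=n)   # vars 0 and 3 matter

def best_subset(Xb, yb):
    """Return the subset of columns minimising BIC for a least-squares fit."""
    best, best_bic = (), np.inf
    for r in range(p + 1):
        for s in combinations(range(p), r):
            if s:
                b, *_ = np.linalg.lstsq(Xb[:, list(s)], yb, rcond=None)
                rss = float(np.sum((yb - Xb[:, list(s)] @ b) ** 2))
            else:
                rss = float(yb @ yb)
            bic = n * np.log(rss / n) + len(s) * np.log(n)
            if bic < best_bic:
                best, best_bic = s, bic
    return best

B = 100
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)          # bootstrap resample
    for j in best_subset(X[idx], y[idx]):
        counts[j] += 1

inclusion = counts / B       # frequencies to display in an inclusion plot
print(np.round(inclusion, 2))
```

Genuinely important variables are selected in nearly every resample, while noise variables enter only occasionally, which is exactly the stability signal the plots are designed to surface.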
Model and Variable Selection Procedures for Semiparametric Time Series Regression
Directory of Open Access Journals (Sweden)
Risa Kato
2009-01-01
Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise in time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.
Bayesian Model Selection in Geophysics: The evidence
Vrugt, J. A.
2016-12-01
Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data, D, is equal to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g., porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet it is of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general-purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses, and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute-force Monte Carlo method and the Laplace-Metropolis method.
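The brute-force Monte Carlo baseline mentioned at the end is simple to state: draw parameters from the prior and average the likelihood, since P(D) = E_prior[L]. A toy two-hypothesis comparison on invented 1-D data with known unit noise (nothing from the GPR case study):

```python
import numpy as np

rng = np.random.default_rng(2)
data = np.linspace(0.3, 1.3, 20)      # toy observations, sample mean 0.8

def log_lik(mu):
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

# H1: mu fixed at 0 (no free parameters) -> P(D|H1) is the likelihood itself
log_ev_h1 = log_lik(0.0)

# H2: mu free with prior N(0, 2^2) -> P(D|H2) = E_prior[L(mu)], by Monte Carlo
mu = rng.normal(0.0, 2.0, size=100_000)
ll = (-0.5 * ((data[None, :] - mu[:, None]) ** 2).sum(axis=1)
      - 0.5 * len(data) * np.log(2 * np.pi))
log_ev_h2 = np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()   # log-sum-exp

bayes_factor = float(np.exp(log_ev_h2 - log_ev_h1))
print(f"Bayes factor P(D|H2)/P(D|H1): {bayes_factor:.1f}")
```

The Occam penalty is visible here: H2 fits the data far better at its best mu, but its evidence is discounted by the prior mass spent on poor values of mu. Brute-force averaging works in one dimension and degrades rapidly as the parameter space grows, which is why the abstract's multi-dimensional integration scheme is needed.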
Selecting model complexity in learning problems
Energy Technology Data Exchange (ETDEWEB)
Buescher, K.L. [Los Alamos National Lab., NM (United States); Kumar, P.R. [Illinois Univ., Urbana, IL (United States). Coordinated Science Lab.
1993-10-01
To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of 'simple' candidate models using part of the data and then selects from among these by using the rest of the data.
Selecting a model of supersymmetry breaking mediation
International Nuclear Information System (INIS)
AbdusSalam, S. S.; Allanach, B. C.; Dolan, M. J.; Feroz, F.; Hobson, M. P.
2009-01-01
We study the problem of selecting between different mechanisms of supersymmetry breaking in the minimal supersymmetric standard model using current data. We evaluate the Bayesian evidence of four supersymmetry breaking scenarios: mSUGRA, mGMSB, mAMSB, and moduli mediation. The results show a strong dependence on the dark matter assumption. Using the inferred cosmological relic density as an upper bound, minimal anomaly mediation is at least moderately favored over the CMSSM. Our fits also indicate that evidence for a positive sign of the μ parameter is moderate at best. We present constraints on the anomaly and gauge mediated parameter spaces and some previously unexplored aspects of the dark matter phenomenology of the moduli mediation scenario. We use sparticle searches, indirect observables and dark matter observables in the global fit and quantify robustness with respect to prior choice. We quantify how much information is contained within each constraint.
Model selection for Gaussian kernel PCA denoising
DEFF Research Database (Denmark)
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR).
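The Parallel Analysis permutation test that kPA builds on can be sketched in plain numpy. For brevity this shows the linear-PCA version on invented data (the paper's contribution is extending the test to kernel PCA and to tuning the kernel scale): retain components whose covariance eigenvalues exceed the 95th percentile of eigenvalues obtained from column-permuted data.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, true_order = 300, 10, 2
latent = rng.normal(size=(n, true_order))
W = rng.normal(size=(true_order, p))
X = latent @ W + 0.3 * rng.normal(size=(n, p))   # 2 real components + noise

def sorted_eigvals(M):
    M = M - M.mean(axis=0)
    return np.sort(np.linalg.eigvalsh(M.T @ M / len(M)))[::-1]

obs = sorted_eigvals(X)
null = []
for _ in range(50):                  # eigenvalues of column-permuted data
    Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    null.append(sorted_eigvals(Xp))
thresh = np.percentile(np.array(null), 95, axis=0)

order = int(np.sum(obs > thresh))    # retained components = model order
print(order)
```

Permuting each column independently destroys between-variable correlation while preserving marginal variances, so the permuted eigenvalue spectrum is a data-driven null against which real structure stands out.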
Melody Track Selection Using Discriminative Language Model
Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong
In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using background model and posterior probability criteria to make modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
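The background-model idea above amounts to scoring each track by a log-likelihood ratio between a melody n-gram model and a background model. The sketch below uses tiny invented pitch corpora, bigrams, and add-one smoothing purely for illustration; the letter's models are trained on real MIDI data.

```python
import math
from collections import Counter

def train(corpus):
    """Count bigrams and left-context unigrams over a list of pitch sequences."""
    bigrams, unigrams, vocab = Counter(), Counter(), set()
    for seq in corpus:
        vocab.update(seq)
        unigrams.update(seq[:-1])
        bigrams.update(zip(seq, seq[1:]))
    return bigrams, unigrams, vocab

def logprob(seq, bigrams, unigrams, v, alpha=1.0):
    """Add-alpha smoothed bigram log-likelihood of one track."""
    return sum(math.log((bigrams[(a, b)] + alpha) /
                        (unigrams[a] + alpha * v))
               for a, b in zip(seq, seq[1:]))

# Invented training data: melodies move stepwise, accompaniments repeat
melody_corpus = [[60, 62, 64, 65, 67, 65, 64, 62], [67, 69, 71, 72, 71, 69]]
background_corpus = [[48, 48, 55, 55, 48, 48, 55, 55], [50, 57, 50, 57]]
m_big, m_uni, m_voc = train(melody_corpus)
b_big, b_uni, b_voc = train(background_corpus)
v = len(m_voc | b_voc)

tracks = {"lead": [60, 62, 64, 62, 60], "bass": [48, 48, 55, 48]}
scores = {name: logprob(t, m_big, m_uni, v) - logprob(t, b_big, b_uni, v)
          for name, t in tracks.items()}
melody_track = max(scores, key=scores.get)
print(melody_track)
```

Subtracting the background score is what makes the model discriminative: a track is chosen not for being probable in general, but for being more melody-like than accompaniment-like.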
Linkage of PRA models. Phase 1, Results
Energy Technology Data Exchange (ETDEWEB)
Smith, C.L.; Knudsen, J.K.; Kelly, D.L.
1995-12-01
The goal of the Phase I work of the 'Linkage of PRA Models' project was to postulate methods of providing guidance for US Nuclear Regulatory Commission (NRC) personnel on the selection and usage of probabilistic risk assessment (PRA) models that are best suited to the analysis they are performing. In particular, methods and associated features are provided for (a) the selection of an appropriate PRA model for a particular analysis, (b) complementary evaluation tools for the analysis, and (c) a PRA model cross-referencing method. As part of this work, three areas adjoining 'linking' analyses to PRA models were investigated: (a) the PRA models that are currently available, (b) the various types of analyses that are performed within the NRC, and (c) the difficulty in trying to provide a 'generic' classification scheme to group plants based upon a particular plant attribute.
Model structure selection in convolutive mixtures
DEFF Research Database (Denmark)
Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai
2006-01-01
The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.
On spatial mutation-selection models
Energy Technology Data Exchange (ETDEWEB)
Kondratiev, Yuri, E-mail: kondrat@math.uni-bielefeld.de [Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld (Germany); Kutoviy, Oleksandr, E-mail: kutoviy@math.uni-bielefeld.de, E-mail: kutovyi@mit.edu [Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld (Germany); Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Minlos, Robert, E-mail: minl@iitp.ru; Pirogov, Sergey, E-mail: pirogov@proc.ru [IITP, RAS, Bolshoi Karetnyi 19, Moscow (Russian Federation)
2013-11-15
We discuss the selection procedure in the framework of mutation models. We study the regulation of stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of the initial Markov process by the cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system, including the limiting behavior, is studied for two types of mutation-selection models.
Intraspecies prion transmission results in selection of sheep scrapie strains.
Directory of Open Access Journals (Sweden)
Takashi Yokoyama
BACKGROUND: Sheep scrapie is caused by multiple prion strains, which have been classified on the basis of their biological characteristics in inbred mice. The heterogeneity of natural scrapie prions in individual sheep and in sheep flocks has not been clearly defined. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we intravenously injected 2 sheep (Suffolk and Corriedale) with material from a natural case of sheep scrapie (Suffolk breed). These 3 sheep had identical prion protein (PrP) genotypes. The protease-resistant core of PrP (PrPres) in the experimental Suffolk sheep was similar to that in the original Suffolk sheep. In contrast, PrPres in the Corriedale sheep differed from the original PrPres but resembled the unusual scrapie isolate CH1641. This unusual PrPres was not detected in the original sheep. The PrPres distributions in the brain and peripheral tissues differed between the 2 breeds of challenged sheep. A transmission study in wild-type and TgBoPrP mice, which overexpress bovine PrP, led to the selection of different prion strains. The pathological features of prion diseases are thought to depend on the dominantly propagated strain. CONCLUSIONS/SIGNIFICANCE: Our results indicate that prion strain selection occurs after both inter- and intraspecies transmission. The unusual scrapie prion was a hidden or unexpressed component of typical sheep scrapie.
Sparse model selection via integral terms
Schaeffer, Hayden; McCalla, Scott G.
2017-08-01
Model selection and parameter estimation are important for the effective integration of experimental data, scientific theory, and precise simulations. In this work, we develop a learning approach for the selection and identification of a dynamical system directly from noisy data. The learning is performed by extracting a small subset of important features from an overdetermined set of possible features using a nonconvex sparse regression model. The sparse regression model is constructed to fit the noisy data to the trajectory of the dynamical system while using the smallest number of active terms. Computational experiments detail the model's stability, robustness to noise, and recovery accuracy. Examples include nonlinear equations, population dynamics, chaotic systems, and fast-slow systems.
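The "smallest number of active terms" idea can be sketched with sequentially thresholded least squares, a simple hard-thresholding scheme in the same spirit as the paper's nonconvex sparse regression (the details of their integral-term formulation are omitted here). The dynamical system, feature library, and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
n = len(t)

# Simulate the linear oscillator x' = -0.5x + 2y, y' = -2x - 0.5y (Euler)
A = np.array([[-0.5, 2.0], [-2.0, -0.5]])
X = np.zeros((n, 2))
X[0] = [2.0, 0.0]
for k in range(n - 1):
    X[k + 1] = X[k] + dt * (A @ X[k])

# Noisy derivative estimates and a library of candidate features
dX = np.gradient(X, dt, axis=0) + 0.01 * rng.normal(size=X.shape)
x, y = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones(n), x, y, x * x, x * y, y * y])

def stls(Theta, dX, lam=0.2, iters=10):
    """Sequentially thresholded least squares: fit, zero small terms, refit."""
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(iters):
        Xi[np.abs(Xi) < lam] = 0.0
        for j in range(dX.shape[1]):
            active = np.abs(Xi[:, j]) >= lam
            if active.any():
                coef, *_ = np.linalg.lstsq(Theta[:, active], dX[:, j],
                                           rcond=None)
                Xi[active, j] = coef
    return Xi

Xi = stls(Theta, dX)
print(np.round(Xi, 2))   # rows: 1, x, y, x^2, xy, y^2; columns: dx/dt, dy/dt
```

The recovered coefficient matrix is sparse: the quadratic columns of the library are pruned away and only the four true linear terms survive, which is the model selection step happening inside the regression itself.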
The genealogy of samples in models with selection.
Neuhauser, C; Krone, S M
1997-02-01
We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.
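As a forward-time companion to the two-allele example, the population process whose genealogies the ancestral selection graph describes can be simulated as a Wright-Fisher model with selection and symmetric mutation. All parameter values here are illustrative, not taken from the paper's simulations.

```python
import numpy as np

rng = np.random.default_rng(5)
N, s, mu, gens = 1000, 0.02, 0.001, 2000   # pop size, advantage of A, mutation

freq = 0.5                                 # initial frequency of allele A
traj = []
for _ in range(gens):
    # Selection: weight A individuals by (1 + s) before sampling
    w = freq * (1 + s) / (freq * (1 + s) + (1 - freq))
    # Symmetric mutation between the two alleles at rate mu
    w = w * (1 - mu) + (1 - w) * mu
    # Random reproduction: binomial sampling of the next generation
    freq = rng.binomial(N, w) / N
    traj.append(freq)

print(f"mean long-run frequency of A: {np.mean(traj[500:]):.3f}")
```

With Ns well above 1 the favoured allele hovers near fixation, with mutation continually reintroducing the disfavoured type; it is in this near-equilibrium regime that the abstract reports genealogies close to the neutral coalescent.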
Modeling and Selection of Software Service Variants
Wittern, John Erik
2015-01-01
Providers and consumers have to deal with variants, meaning alternative instances of a service's design, implementation, deployment, or operation, when developing or delivering software services. This work presents service feature modeling to deal with the associated challenges, comprising a language to represent software service variants and a set of methods for modeling and subsequent variant selection. This work's evaluation includes a proof-of-concept implementation and two real-life use cases.
Model Selection in Data Analysis Competitions
DEFF Research Database (Denmark)
Wind, David Kofoed; Winther, Ole
2014-01-01
The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platfor...
Selective bowel decontamination results in gram-positive translocation.
Jackson, R J; Smith, S D; Rowe, M I
1990-05-01
Colonization by enteric gram-negative bacteria with subsequent translocation is believed to be a major mechanism for infection in the critically ill patient. Selective bowel decontamination (SBD) has been used to control gram-negative infections by eliminating these potentially pathogenic bacteria while preserving anaerobic and other less pathogenic organisms. Infection with gram-positive organisms and anaerobes in two multivisceral transplant patients during SBD led us to investigate the effect of SBD on gut colonization and translocation. Twenty-four rats received enteral polymyxin E, tobramycin, amphotericin B, and parenteral cefotaxime for 7 days (PTA + CEF); 23 received parenteral cefotaxime alone (CEF); 19 received the enteral antibiotics alone (PTA); 21 controls received no antibiotics. Cecal homogenates, mesenteric lymph node (MLN), liver, and spleen were cultured. Only 8% of the PTA + CEF group had gram-negative bacteria in cecal culture vs 52% CEF, 84% PTA, and 100% in controls. Log enterococcal colony counts were higher in the PTA + CEF group (8.0 ± 0.9) vs controls (5.4 ± 0.4), P < 0.01. Translocation of Enterococcus to the MLN was significantly increased in the PTA + CEF group (67%) vs controls (0%), P < 0.01. SBD effectively eliminates gram-negative organisms from the gut in the rat model. Overgrowth and translocation of Enterococcus suggest that infection with gram-positive organisms may be a limitation of SBD.
Elementary Teachers' Selection and Use of Visual Models
Lee, Tammy D.; Gail Jones, M.
2018-02-01
As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.
Model selection criterion in survival analysis
Karabey, Uǧur; Tutkun, Nihal Ata
2017-07-01
Survival analysis deals with the time until occurrence of an event of interest, such as death, recurrence of an illness, failure of equipment, or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural, and social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
Tc-99 Adsorption on Selected Activated Carbons - Batch Testing Results
Energy Technology Data Exchange (ETDEWEB)
Mattigod, Shas V.; Wellman, Dawn M.; Golovich, Elizabeth C.; Cordova, Elsa A.; Smith, Ronald M.
2010-12-01
CH2M HILL Plateau Remediation Company (CHPRC) is currently developing a 200-West Area groundwater pump-and-treat system as the remedial action selected under the Comprehensive Environmental Response, Compensation, and Liability Act Record of Decision for Operable Unit (OU) 200-ZP-1. This report documents the results of treatability tests Pacific Northwest National Laboratory researchers conducted to quantify the ability of selected activated carbon products (or carbons) to adsorb technetium-99 (Tc-99) from 200-West Area groundwater. The Tc-99 adsorption performance of seven activated carbons (J177601 Calgon Fitrasorb 400, J177606 Siemens AC1230AWC, J177609 Carbon Resources CR-1240-AW, J177611 General Carbon GC20X50, J177612 Norit GAC830, J177613 Norit GAC830, and J177617 Nucon LW1230) was evaluated using water from well 299-W19-36. Four of the best performing carbons (J177606 Siemens AC1230AWC, J177609 Carbon Resources CR-1240-AW, J177611 General Carbon GC20X50, and J177613 Norit GAC830) were selected for batch isotherm testing. The batch isotherm tests on the four selected carbons indicated that under lower nitrate concentration conditions (382 mg/L), Kd values ranged from 6,000 to 20,000 mL/g. In comparison, under higher nitrate conditions (750 mg/L), there was a measurable decrease in Tc-99 adsorption, with Kd values ranging from 3,000 to 7,000 mL/g. The adsorption data fit both the Langmuir and the Freundlich equations. Supplemental tests were conducted using the two carbons that demonstrated the highest adsorption capacity to resolve the issue of the best-fit isotherm. These tests indicated that Langmuir isotherms provided the best fit for Tc-99 adsorption under low nitrate concentration conditions. At the design-basis Tc-99 concentration of 0.865 µg/L (14,700 pCi/L), the predicted Kd values obtained using the Langmuir isotherm constants were 5,980 mL/g and 6,870 mL/g for the two carbons. These Kd values did not meet the target Kd value of 9,000 mL/g. Tests
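For context, a Langmuir-constant fit of the kind used to predict Kd at the design-basis concentration can be reproduced in a few lines. The sorption data below are synthetic, generated from an assumed qmax and b, not the report's measurements.

```python
import numpy as np

# Synthetic equilibrium data: concentration C (ug/L) and sorbed mass q (ug/g)
qmax_true, b_true = 500.0, 0.05
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
q = qmax_true * b_true * C / (1.0 + b_true * C)

# Linearized Langmuir: C/q = 1/(qmax*b) + C/qmax, a straight line in C
slope, intercept = np.polyfit(C, C / q, 1)
qmax = 1.0 / slope
b = slope / intercept            # since intercept = 1/(qmax*b)

# Distribution coefficient Kd = q/C evaluated at a design-basis concentration
C_design = 0.865                 # ug/L, mirroring the report's design basis
kd = qmax * b / (1.0 + b * C_design)   # units follow those of q and C
print(f"qmax={qmax:.0f} ug/g, b={b:.3f} L/ug, Kd(C=0.865)={kd:.1f} L/g")
```

At trace concentrations (b*C much less than 1) the Langmuir Kd approaches the constant qmax*b, which is why a single predicted Kd at the design-basis concentration is a meaningful performance number to compare against a target value.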
On Using Selection Procedures with Binomial Models.
1983-10-01
eds.), Shinko Tsusho Co. Ltd., Tokyo, Japan, pp. 501-533. Gupta, S. S. and Sobel, M. (1960). Selecting a subset containing the best of several...
Aerosol model selection and uncertainty modelling by adaptive MCMC technique
Directory of Open Access Journals (Sweden)
M. Laine
2008-12-01
We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities, and model averaging in a Bayesian way.
The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo (AARJ) method. It uses the Markov chain Monte Carlo (MCMC) technique and its extension, Reversible Jump MCMC. Both techniques have been used extensively for statistical parameter estimation across a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.
We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval for the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation, all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
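The model-averaging step the abstract describes can be sketched numerically. Given posterior model probabilities (which an RJMCMC sampler such as AARJ would estimate) and per-model posterior moments, the averaged estimate and its variance follow from the law of total variance. All numbers below are illustrative, not from the paper:

```python
import numpy as np

# Illustrative posterior model probabilities p(M_k | y) for four aerosol models
post_prob = np.array([0.55, 0.30, 0.10, 0.05])
est = np.array([1.02, 0.98, 1.10, 0.90])   # posterior mean under each model
var = np.array([0.01, 0.02, 0.03, 0.05])   # posterior variance under each model

# Model-averaged mean: E[theta | y] = sum_k p(M_k | y) * E[theta | y, M_k]
avg_mean = np.sum(post_prob * est)

# Law of total variance: within-model variance plus between-model spread
avg_var = np.sum(post_prob * var) + np.sum(post_prob * (est - avg_mean) ** 2)

print(avg_mean, avg_var)
```

The second variance term is what a single-model analysis would miss: it accounts for the uncertainty in the model choice itself.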
Preliminary results on species selection by animals on sour grassveld
African Journals Online (AJOL)
A study of species selection by cattle under a system of controlled selective grazing using wheel-point surveys and fistulated animals has demonstrated that both techniques provide valuable information on the preference shown for such species as Andropogon amplectens, Eulalia villosa, Themeda triandra, Trachypogon ...
Review and selection of unsaturated flow models
Energy Technology Data Exchange (ETDEWEB)
NONE
1993-09-10
Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the CRWMS M&O has the following responsibilities: to provide overall management and integration of modeling activities; to provide a framework for focusing modeling and model development; to identify areas that require increased or decreased emphasis; and to ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process consisting of a thorough review of existing models, testing of the models that best fit the established requirements, and recommendations for future development. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically.
Model structure selection in convolutive mixtures
DEFF Research Database (Denmark)
Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai
2006-01-01
The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.
Skewed factor models using selection mechanisms
Kim, Hyoung-Moon
2015-12-21
Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. The ECME algorithms are adopted to estimate related parameters for statistical inference. Monte Carlo simulations validate our new models, and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.
Expatriates Selection: An Essay of Model Analysis
Directory of Open Access Journals (Sweden)
Rui Bártolo-Ribeiro
2015-03-01
The business expansion to other geographical areas with cultures different from those in which organizations were created and developed leads to the expatriation of employees to these destinations. Recruitment and selection procedures for expatriates do not always have the intended success, leading to an early return of these professionals with consequent organizational disorders. In this study, several articles published in the last five years were analyzed in order to identify the dimensions most frequently mentioned in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of expatriates' adaptation to the new cultural contexts of the same organization were studied according to the KSAOs model. Few references were found concerning the Knowledge, Skills and Abilities dimensions in the analyzed papers. There was a strong predominance of the evaluation of Other Characteristics, and more importance was given to dispositional factors than to situational factors in promoting the integration of the expatriates.
Ensembling Variable Selectors by Stability Selection for the Cox Model
Directory of Open Access Journals (Sweden)
Qing-Yan Yin
2017-01-01
As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like lasso, is an effective method to control false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
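The core of stability selection is simple to sketch: repeatedly subsample half the data, run the base learner (here a plain coordinate-descent lasso for a linear model, not the Cox case treated in the paper), and record how often each variable is selected. The penalty level `lam` and all data below are illustrative assumptions:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal coordinate-descent lasso (illustrative, no standardization)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def stability_selection(X, y, lam, B=50, seed=0):
    """Selection frequency of each variable over B half-subsamples."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(B):
        idx = rng.choice(n, size=n // 2, replace=False)
        beta = lasso_cd(X[idx], y[idx], lam)
        freq += (np.abs(beta) > 1e-8)
    return freq / B

# Toy data: only the first two of ten variables carry signal
rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.standard_normal(n)
freq = stability_selection(X, y, lam=20.0)
```

Variables whose frequency exceeds a threshold (e.g. 0.6) are kept; the subsampling is what controls the false discovery rate relative to a single lasso fit.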
Skudlarek, Jason W; DiMarco, Christina N; Babaoglu, Kerim; Roecker, Anthony J; Bruno, Joseph G; Pausch, Mark A; O'Brien, Julie A; Cabalu, Tamara D; Stevens, Joanne; Brunner, Joseph; Tannenbaum, Pamela L; Wuelfing, W Peter; Garson, Susan L; Fox, Steven V; Savitz, Alan T; Harrell, Charles M; Gotter, Anthony L; Winrow, Christopher J; Renger, John J; Kuduk, Scott D; Coleman, Paul J
2017-03-15
In an ongoing effort to explore the use of orexin receptor antagonists for the treatment of insomnia, dual orexin receptor antagonists (DORAs) were structurally modified, resulting in compounds selective for the OX 2 R subtype and culminating in the discovery of 23, a highly potent, OX 2 R-selective molecule that exhibited a promising in vivo profile. Further structural modification led to an unexpected restoration of OX 1 R antagonism. Herein, these changes are discussed and a rationale for selectivity based on computational modeling is proposed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Behavioral optimization models for multicriteria portfolio selection
Directory of Open Access Journals (Sweden)
Mehlawat Mukesh Kumar
2013-01-01
In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process (AHP) technique is used to model the suitability considerations, with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.
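The AHP step the abstract mentions reduces, in its simplest form, to deriving priority weights from a pairwise comparison matrix. A common approximation is the row geometric-mean (logarithmic least squares) method; the 3x3 matrix below is an illustrative example on Saaty's 1-9 scale, not data from the paper:

```python
import numpy as np

# Pairwise comparisons of three suitability criteria (illustrative values):
# A[i, j] = how strongly criterion i is preferred over criterion j
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
])

gm = np.prod(A, axis=1) ** (1.0 / A.shape[1])   # row geometric means
weights = gm / gm.sum()                          # normalized priority vector
print(weights)
```

The resulting weights would then multiply each asset's criterion scores to form its suitability performance score.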
Selected Experimental Results from Heavy-Ion Collisions at LHC
Directory of Open Access Journals (Sweden)
Ranbir Singh
2013-01-01
We review a subset of experimental results from the heavy-ion collisions at the Large Hadron Collider (LHC) facility at CERN. Excellent consistency is observed across all the experiments at the LHC (at center-of-mass energy √sNN = 2.76 TeV) for measurements such as charged particle multiplicity density, azimuthal anisotropy coefficients, and the nuclear modification factor of charged hadrons. Comparison to similar measurements from the Relativistic Heavy Ion Collider (RHIC) at lower energy (√sNN = 200 GeV) suggests that the system formed at the LHC has a higher energy density and larger system size and lives for a longer time. These measurements are compared to model calculations to obtain physical insights on the properties of matter created at the RHIC and LHC.
A simple parametric model selection test
Susanne M. Schennach; Daniel Wilhelm
2014-01-01
We propose a simple model selection test for choosing among two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one, or neither is allowed to be correctly specified or misspecified; they may be nested, non-nested, strictly non-nested or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case, the same test statistic to...
Robust inference in sample selection models
Zhelonkin, Mikhail
2015-11-20
The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
Novel metrics for growth model selection.
Grigsby, Matthew R; Di, Junrui; Leroux, Andrew; Zipunnikov, Vadim; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William
2018-01-01
Literature surrounding the statistical modeling of childhood growth data involves a diverse set of potential models from which investigators can choose. However, the lack of a comprehensive framework for comparing non-nested models leads to difficulty in assessing model performance. This paper proposes a framework for comparing non-nested growth models using novel metrics of predictive accuracy based on modifications of the mean squared error criteria. Three metrics were created: normalized, age-adjusted, and weighted mean squared error (MSE). Predictive performance metrics were used to compare linear mixed effects models and functional regression models. Prediction accuracy was assessed by partitioning the observed data into training and test datasets. This partitioning was constructed to assess prediction accuracy backward (i.e., early growth), forward (i.e., late growth), in-range, and on new individuals. Analyses were done with height measurements from 215 Peruvian children with data spanning from near birth to 2 years of age. Functional models outperformed linear mixed effects models in all scenarios tested. In particular, prediction errors for functional concurrent regression (FCR) and functional principal component analysis models were approximately 6% lower when compared to linear mixed effects models. When we weighted subject-specific MSEs according to subject-specific growth rates during infancy, we found that FCR was the best performer in all scenarios. With this novel approach, we can quantitatively compare non-nested models and weight subgroups of interest to select the best performing growth model for a particular application or problem at hand.
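The weighted-MSE idea described above can be illustrated in a few lines: compute each subject's prediction MSE, then combine them with weights proportional to a subject-level quantity such as growth rate. The two-child dataset and predictions below are illustrative, not the Peruvian data:

```python
import numpy as np

# Heights (cm) for 2 children at 3 ages, observed vs. model-predicted
obs = np.array([[50.0, 60.0, 72.0],
                [49.0, 58.0, 70.0]])
pred = np.array([[51.0, 59.0, 71.0],
                 [50.0, 57.0, 72.0]])

# Crude per-subject growth rate: total height gained over the window
growth_rate = obs[:, -1] - obs[:, 0]
w = growth_rate / growth_rate.sum()              # normalized subject weights

subj_mse = ((obs - pred) ** 2).mean(axis=1)      # per-subject MSE
weighted_mse = np.sum(w * subj_mse)              # growth-weighted MSE
```

Faster-growing children contribute more to `weighted_mse`, which is the mechanism that let the authors emphasize infancy growth when ranking models.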
Science and Information Conference 2015 : Extended and Selected Results
Kapoor, Supriya; Bhatia, Rahul
2016-01-01
This book is a collection of extended chapters from the selected papers that were published in the proceedings of the Science and Information (SAI) Conference 2015. It contains twenty-one chapters in the field of Computational Intelligence, which received highly recommended feedback during the SAI Conference 2015 review process. During the three-day event, 260 scientists, technology developers, young researchers including PhD students, and industrial practitioners from 56 countries engaged intensively in presentations, demonstrations, open panel sessions and informal discussions.
Model selection and comparison for independents sinusoids
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2014-01-01
In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state-of-the-art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.
Selected results from the Mark II at SPEAR
International Nuclear Information System (INIS)
Scharre, D.L.
1980-06-01
Recent results on radiative transitions from the psi(3095), charmed meson decay, and the Cabibbo-suppressed decay τ → K*ν_τ are reviewed. The results come primarily from the Mark II experiment at SPEAR, but preliminary results from the Crystal Ball experiment on psi radiative transitions are also discussed.
Results of steel containment vessel model test
International Nuclear Information System (INIS)
Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.
1998-05-01
A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed
Uniform design based SVM model selection for face recognition
Li, Weihong; Liu, Lijuan; Gong, Weiguo
2010-02-01
Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on a model with optimal hyperparameters. The computational cost of SVM model selection results in application difficulty in face recognition. In order to overcome this shortcoming, we exploit the advantages of uniform design (space-filling designs and uniform scattering theory) to search for optimal SVM hyperparameters. We then propose a face recognition scheme based on SVM with the optimal model, obtained by replacing the grid and gradient-based method with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
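A U-type uniform design table can be generated with the good-lattice-point method: each column is a permutation of the levels, and the runs scatter evenly over the design space, which is why far fewer trials are needed than with a full grid. The generator pair and the hyperparameter ranges below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def glp_design(n, gens):
    """Good-lattice-point U-design: rows = runs, columns = factors,
    entries are levels 1..n. Each generator must be coprime with n."""
    i = np.arange(1, n + 1)[:, None]
    return (i * np.array(gens)[None, :] - 1) % n + 1

U = glp_design(7, gens=(1, 3))   # 7 runs for 2 factors (C, gamma)

# Map the 7 levels onto log-scaled SVM hyperparameter grids (illustrative)
C_grid = 2.0 ** np.linspace(-5, 15, 7)
g_grid = 2.0 ** np.linspace(-15, 3, 7)
trials = [(C_grid[r - 1], g_grid[c - 1]) for r, c in U]
```

Each of the 7 `(C, gamma)` pairs would then be evaluated by cross-validation, versus 49 evaluations for the corresponding full grid.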
High-dimensional model estimation and model selection
CERN. Geneva
2015-01-01
I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
A physiological production model for cacao : results of model simulations
Zuidema, P.A.; Leffelaar, P.A.
2002-01-01
CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.
Evidence accumulation as a model for lexical selection.
Anders, R; Riès, S; van Maanen, L; Alario, F X
2015-11-01
We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process of selecting a lexical target from a number of alternatives, which each have varying activations (or signal supports) that largely result from initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related to or combined with conventional psycholinguistic theory and its simulatory instantiations (generally, neural network models). Then, with a demonstrative application to a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Henry de-Graft Acquah
2013-01-01
Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
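The AIC/BIC comparison at the heart of such studies is easy to reproduce on a toy problem: fit a simple and a complex model by least squares, then score each with the Gaussian-likelihood forms of the criteria. The polynomial setup below is an illustrative stand-in for the error correction models of the paper:

```python
import numpy as np

def aic_bic(rss, n, k):
    """AIC and BIC for a Gaussian model, up to an additive constant."""
    ll_term = n * np.log(rss / n)
    return ll_term + 2 * k, ll_term + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x = np.linspace(-1, 1, n)
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(n)   # true model is linear

scores = {}
for deg in (1, 5):                                  # standard vs. complex model
    coef = np.polyfit(x, y, deg)
    rss = np.sum((y - np.polyval(coef, x)) ** 2)
    scores[deg] = aic_bic(rss, n, deg + 1)          # (AIC, BIC)
```

Because BIC's `k log n` penalty grows with the sample, it punishes the needless degree-5 terms more heavily than AIC does, which mirrors the paper's finding that BIC-type criteria favor the standard (simpler) true model.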
Interpreting Results from the Multinomial Logit Model
DEFF Research Database (Denmark)
Wulff, Jesper
2015-01-01
This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
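The predicted probabilities and marginal effects that such guidelines center on can be sketched directly. For a three-outcome MLM, probabilities come from a softmax over outcome-specific linear indices, and a marginal effect is the derivative of a probability with respect to a covariate (here taken numerically). The coefficients are illustrative, not estimated values:

```python
import numpy as np

# One covariate, three outcomes; the first outcome is the baseline
# with coefficients fixed at zero (illustrative numbers).
beta = np.array([[0.0, 0.0],    # outcome 1: intercept, slope
                 [0.5, -1.0],   # outcome 2
                 [1.0, 0.8]])   # outcome 3

def probs(x):
    """Predicted MLM probabilities at covariate value x."""
    eta = beta[:, 0] + beta[:, 1] * x
    e = np.exp(eta - eta.max())          # numerically stable softmax
    return e / e.sum()

x0 = 1.0
p = probs(x0)

# Numerical marginal effect of x on each outcome probability
h = 1e-5
me = (probs(x0 + h) - probs(x0 - h)) / (2 * h)
```

Note that the marginal effects sum to zero across outcomes: raising one outcome's probability must lower the others', which is one of the interpretation pitfalls the article warns about.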
Random effect selection in generalised linear models
DEFF Research Database (Denmark)
Denwood, Matt; Houe, Hans; Forkman, Björn
We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...
Hidden Markov Model for Stock Selection
Directory of Open Access Journals (Sweden)
Nguyet Nguyen
2015-10-01
The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use HMM for stock selection. We first use HMM to make monthly regime predictions for four macroeconomic variables: inflation (consumer price index, CPI), industrial production index (INDPRO), stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top-ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
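Regime prediction with an HMM rests on the forward algorithm: filtered regime probabilities are propagated through the transition matrix and reweighted by each new observation's likelihood. The two-regime Gaussian HMM below uses illustrative parameters, not the paper's calibrated values:

```python
import numpy as np

A = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # regime transition matrix
mu = np.array([0.0, 2.0])                  # emission means per regime
sigma = np.array([1.0, 1.0])               # emission std devs per regime
pi = np.array([0.5, 0.5])                  # initial regime distribution

def gauss(y, m, s):
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def filter_probs(ys):
    """Normalized forward algorithm: P(regime_t | y_1..y_t) for each t."""
    alpha = pi * gauss(ys[0], mu, sigma)
    alpha /= alpha.sum()
    out = [alpha]
    for y in ys[1:]:
        alpha = (alpha @ A) * gauss(y, mu, sigma)
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

f = filter_probs(np.array([0.1, -0.2, 2.1, 2.4]))
```

The final row of `f` is the filtered regime distribution used for the next-month prediction step; here the last observations sit near the second regime's mean, so its probability dominates.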
Selecting an optimal mixed products using grey relationship model
Directory of Open Access Journals (Sweden)
Farshad Faezy Razi
2013-06-01
This paper presents an integrated supplier selection and inventory management approach using a grey relationship model (GRM) as well as a multi-objective decision-making process. The proposed model first ranks different suppliers based on the GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use some benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model is capable of handling different criteria for supplier selection.
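The grey relational ranking step can be sketched in a few lines: normalize the decision matrix, measure each supplier's deviation from an ideal reference sequence, convert deviations to grey relational coefficients, and average them into a grade per supplier. The 3-supplier, 3-criterion matrix and the distinguishing coefficient of 0.5 are illustrative assumptions:

```python
import numpy as np

# Rows = suppliers, columns = benefit criteria (illustrative data)
X = np.array([[0.8, 0.7, 0.9],
              [0.6, 0.9, 0.5],
              [0.9, 0.6, 0.7]])

# Normalize each criterion to [0, 1] (larger is better)
Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

ref = Xn.max(axis=0)                     # ideal reference sequence
delta = np.abs(ref - Xn)                 # deviation from the ideal
rho = 0.5                                # distinguishing coefficient

# Grey relational coefficients, then grade per supplier
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grade = xi.mean(axis=1)
ranking = np.argsort(-grade)             # best supplier first
```

Suppliers with higher grades sit closer to the ideal sequence; the ranking would then feed the inventory-optimization stage of the integrated model.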
Psyche Mission: Scientific Models and Instrument Selection
Polanskey, C. A.; Elkins-Tanton, L. T.; Bell, J. F., III; Lawrence, D. J.; Marchi, S.; Park, R. S.; Russell, C. T.; Weiss, B. P.
2017-12-01
NASA has chosen to explore (16) Psyche with their 14th Discovery-class mission. Psyche is a 226-km diameter metallic asteroid hypothesized to be the exposed core of a planetesimal that was stripped of its rocky mantle by multiple hit and run collisions in the early solar system. The spacecraft launch is planned for 2022 with arrival at the asteroid in 2026 for 21 months of operations. The Psyche investigation has five primary scientific objectives: A. Determine whether Psyche is a core, or if it is unmelted material. B. Determine the relative ages of regions of Psyche's surface. C. Determine whether small metal bodies incorporate the same light elements as are expected in the Earth's high-pressure core. D. Determine whether Psyche was formed under conditions more oxidizing or more reducing than Earth's core. E. Characterize Psyche's topography. The mission's task was to select the appropriate instruments to meet these objectives. However, exploring a metal world, rather than one made of ice, rock, or gas, requires development of new scientific models for Psyche to support the selection of the appropriate instruments for the payload. If Psyche is indeed a planetary core, we expect that it should have a detectable magnetic field. However, the strength of the magnetic field can vary by orders of magnitude depending on the formational history of Psyche. The implications of both the extreme low-end and the high-end predictions impact the magnetometer and mission design. For the imaging experiment, what can the team expect for the morphology of a heavily impacted metal body? Efforts are underway to further investigate the differences in crater morphology between high velocity impacts into metal and rock to be prepared to interpret the images of Psyche when they are returned. Finally, elemental composition measurements at Psyche using nuclear spectroscopy encompass a new and unexplored phase space of gamma-ray and neutron measurements. We will present some end
Selected Test Results from the Encell Technology Nickel Iron Battery
Energy Technology Data Exchange (ETDEWEB)
Ferreira, Summer Kamal Rhodes [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Advanced Power Sources R&D; Baca, Wes Edmund [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Advanced Power Sources R&D; Avedikian, Kristan [Encell Technology, Alachua, FL (United States)
2014-09-01
The performance of the Encell Nickel Iron (NiFe) battery was measured. Tests included capacity, capacity as a function of rate, capacity as a function of temperature, charge retention (28-day), efficiency, accelerated life projection, and water refill evaluation. The goal of this work was to evaluate the general performance of the Encell NiFe battery technology for stationary applications and demonstrate the chemistry's capabilities in extreme conditions. Test results indicated that the Encell NiFe battery technology can provide power levels up to the 6C discharge rate and ampere-hour efficiency above 70%. In summary, the Encell batteries have met performance metrics established by the manufacturer. Long-term cycle tests are not included in this report. A cycle test at elevated temperature was run, funded by the manufacturer, which Encell uses to predict long-term cycling performance, and which passed their prescribed metrics.
Integrated model for supplier selection and performance evaluation
Directory of Open Access Journals (Sweden)
Borges de Araújo, Maria Creuza
2015-08-01
This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the economy of Brazil. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can have partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.
PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION
Directory of Open Access Journals (Sweden)
Paulo Ávila
2015-03-01
The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was elaborated and companies were contacted in order to answer which factors have more relevance in their decisions to choose suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that represents decision-making support for the supplier/partner selection process.
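A linear weighting model of the kind described reduces to a weighted sum of criterion scores per supplier. The five criteria names come from the abstract; the weights and ratings below are illustrative placeholders, not the survey results:

```python
# Criteria identified in the study; weights and 0-10 scores are illustrative
criteria = ["Quality", "Financial", "Synergies", "Cost", "Production System"]
weights = [0.30, 0.20, 0.15, 0.20, 0.15]        # survey-derived, sum to 1

scores = {
    "Supplier A": [8, 6, 7, 5, 9],
    "Supplier B": [6, 8, 5, 9, 6],
}

# Linear weighting: overall score = sum of weight * criterion score
total = {s: sum(w * v for w, v in zip(weights, vals))
         for s, vals in scores.items()}
best = max(total, key=total.get)
```

In a SMART-style application the same weighted sum is used; AHP would instead derive the weights from pairwise comparisons before this aggregation step.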
Model selection for the extraction of movement primitives
Directory of Open Access Journals (Sweden)
Dominik M Endres
2013-12-01
Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives, and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground-truth data, showing that it performs at least as well as traditional model selection criteria (the Bayesian information criterion, BIC, and the Akaike information criterion, AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.
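The baseline criteria the abstract compares against (BIC and AIC) can be shown on a toy order-selection task. This is a hedged stand-in, not the paper's Laplace-approximation criterion: the data and settings below are invented, and polynomial degree plays the role of "model complexity".

```python
import numpy as np

# Toy problem: pick a polynomial degree by BIC/AIC. Data are simulated from a
# cubic with Gaussian noise; everything here is illustrative, not from the paper.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-3, 3, n)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 1.5 * x**3 + rng.normal(0, 1, n)

def fit_rss(deg):
    X = np.vander(x, deg + 1)               # polynomial design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ coef) ** 2)

def bic(deg):
    k = deg + 2                             # coefficients + noise variance
    return n * np.log(fit_rss(deg) / n) + k * np.log(n)

def aic(deg):
    k = deg + 2
    return n * np.log(fit_rss(deg) / n) + 2 * k

degrees = range(9)
best_bic = min(degrees, key=bic)
best_aic = min(degrees, key=aic)
```

Because BIC's penalty grows with log(n) while AIC's is constant, BIC never selects a larger model than AIC on the same nested family.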
Presenting results of software model checker via debugging interface
Kohan, Tomáš
2012-01-01
Title: Presenting results of software model checker via debugging interface Author: Tomáš Kohan Department: Department of Software Engineering Supervisor of the master thesis: RNDr. Ondřej Šerý, Ph.D., Department of Distributed and Dependable Systems Abstract: This thesis is devoted to the design and implementation of a new debugging interface for the Java PathFinder application. The Eclipse development environment was selected as a suitable interface container. The created interface should vis...
A new Russell model for selecting suppliers
Azadi, Majid; Shabani, Amir; Farzipoor Saen, Reza
2014-01-01
Recently, supply chain management (SCM) has been considered by many researchers. Supplier evaluation and selection plays a significant role in establishing an effective SCM. One of the techniques that can be used for selecting suppliers is data envelopment analysis (DEA). In some situations, to
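DEA rates each supplier (decision-making unit) by solving a small linear program per unit. The sketch below implements the classic input-oriented CCR multiplier model with SciPy; this is a generic textbook formulation, not the Russell model the paper develops, and the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: one input (cost) and one output (quality score) per supplier.
X = np.array([[2.0], [4.0], [3.0], [5.0]])   # inputs, shape (n_units, m)
Y = np.array([[4.0], [6.0], [3.0], [8.0]])   # outputs, shape (n_units, s)

def ccr_efficiency(o):
    """Input-oriented CCR multiplier model for unit o:
    maximise u.y_o  s.t.  v.x_o = 1  and  u.y_j - v.x_j <= 0 for all j."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])          # linprog minimises
    A_ub = np.hstack([Y, -X])                          # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None]   # v.x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-9, None)] * (s + m))
    return -res.fun

eff = [ccr_efficiency(o) for o in range(len(X))]       # 1.0 means efficient
```

An efficiency of 1.0 marks a supplier on the efficient frontier; values below 1.0 quantify how far a supplier falls short of its best peers.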
Modeling shape selection of buckled dielectric elastomers
Langham, Jacob; Bense, Hadrien; Barkley, Dwight
2018-02-01
A dielectric elastomer whose edges are held fixed will buckle, given a sufficiently large applied voltage, resulting in a nontrivial out-of-plane deformation. We study this situation numerically using a nonlinear elastic model which decouples two of the principal electrostatic stresses acting on an elastomer: normal pressure due to the mutual attraction of oppositely charged electrodes, and tangential shear ("fringing") due to repulsion of like charges at the electrode edges. These enter via physically simplified boundary conditions that are applied in a fixed reference domain using a nondimensional approach. The method is valid for small to moderate strains and is straightforward to implement in a generic nonlinear elasticity code. We validate the model by directly comparing the simulated equilibrium shapes with experiment. For circular electrodes, which buckle axisymmetrically, the shape of the deflection profile is captured. Annular electrodes of different widths produce azimuthal ripples with wavelengths that match our simulations. In this case, it is essential to compute multiple equilibria, because the first model solution obtained by the nonlinear solver (Newton's method) is often not the energetically favored state. We address this using a numerical technique known as "deflation." Finally, we observe the large number of different solutions that may be obtained for the case of a long rectangular strip.
Selective experimental review of the Standard Model
International Nuclear Information System (INIS)
Bloom, E.D.
1985-02-01
Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f × SU(2)_L × U(1) with 18 parameters. The parameters are α_s, α_qed, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses M_e, M_μ, M_τ; the quark masses M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles θ_1, θ_2, θ_3 and the CP-violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self-interaction of gluons. The search for the three-gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q q̄ states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures
The Danish national passenger model – Model specification and results
DEFF Research Database (Denmark)
Rich, Jeppe; Hansen, Christian Overgaard
2016-01-01
…, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular the choice of functional form and cost-damping. Specifically, we suggest … a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline function is compared with more traditional … function types, and it is indicated that the spline function provides a better description of the data. Results are also provided in the form of a back-casting exercise where the model is tested in a back-casting scenario to 2002.
Selection of productivity improvement techniques via mathematical modeling
Directory of Open Access Journals (Sweden)
Mahassan M. Khater
2011-07-01
Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and the model can easily be applied in both manufacturing and service industries.
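Selecting a combination of improvement techniques under a resource limit is a knapsack-style integer program. As a hedged miniature (invented costs, gains, and budget, and far fewer than the paper's fifty-four techniques), the same optimisation can be done by exhaustive search over subsets:

```python
from itertools import combinations

# Hypothetical data: productivity gain and implementation cost per technique.
gains = [3, 5, 4, 6, 2, 4]
costs = [2, 4, 3, 5, 1, 3]
budget = 8

# Brute-force search over all subsets of techniques; a real instance with
# fifty-four techniques would need a MIP solver instead.
best_gain, best_subset = 0, ()
for r in range(len(gains) + 1):
    for subset in combinations(range(len(gains)), r):
        cost = sum(costs[i] for i in subset)
        gain = sum(gains[i] for i in subset)
        if cost <= budget and gain > best_gain:
            best_gain, best_subset = gain, subset
```

Exhaustive search is exponential in the number of techniques, which is exactly why the paper formulates the full fifty-four-technique problem as a mixed integer program for a dedicated solver.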
Numerical Model based Reliability Estimation of Selective Laser Melting Process
DEFF Research Database (Denmark)
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason for which is the unreliability of the process. While … of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input … parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are used to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
A decision model for energy resource selection in China
International Nuclear Information System (INIS)
Wang Bing; Kocaoglu, Dundar F.; Daim, Tugrul U.; Yang Jiting
2010-01-01
This paper evaluates coal, petroleum, natural gas, nuclear energy and renewable energy resources as energy alternatives for China through use of a hierarchical decision model. The results indicate that although coal is still the major preferred energy alternative, it is followed closely by renewable energy. The sensitivity analysis indicates that the most critical criterion for energy selection is the current energy infrastructure. A hierarchical decision model is used, and expert judgments are quantified, to evaluate the alternatives. Criteria used for the evaluations are availability, current energy infrastructure, price, safety, environmental impacts and social impacts.
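A common way to quantify expert judgments in such hierarchical decision models is the AHP eigenvector method: criterion weights are recovered as the principal eigenvector of a pairwise comparison matrix. The sketch below is a generic illustration with an invented, perfectly consistent matrix, not the paper's actual judgments.

```python
import numpy as np

# Invented, perfectly consistent pairwise comparison matrix for three
# criteria with underlying weights (0.5, 0.3, 0.2): A[i, j] = w_i / w_j.
w_true = np.array([0.5, 0.3, 0.2])
A = w_true[:, None] / w_true[None, :]

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = eigvecs[:, k].real
weights = weights / weights.sum()          # principal eigenvector, normalised

# Consistency index: zero for a perfectly consistent matrix, larger when
# the expert's pairwise judgments contradict each other.
n = len(A)
ci = (eigvals.real[k] - n) / (n - 1)
```

With real expert judgments the matrix is rarely consistent; the consistency index (compared against a random-matrix baseline) is then used to decide whether the judgments need revisiting.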
Scale Model Thruster Acoustic Measurement Results
Vargas, Magda; Kenny, R. Jeremy
2013-01-01
The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will be comprised of Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine their combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.
CMS standard model Higgs boson results
Directory of Open Access Journals (Sweden)
Garcia-Abia Pablo
2013-11-01
Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.
Broken selection rule in the quantum Rabi model.
Forn-Díaz, P; Romero, G; Harmans, C J P M; Solano, E; Mooij, J E
2016-06-07
Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling strength. We report the spectroscopic observation of a resonant transition that breaks a selection rule in the quantum Rabi model, implemented using an LC resonator and an artificial atom, a superconducting qubit. The eigenstates of the system consist of a superposition of bare qubit-resonator states with a relative sign. When the qubit-resonator coupling strength is negligible compared to their own frequencies, the matrix element between excited eigenstates of different sign is very small in the presence of a resonator drive, establishing a sign-preserving selection rule. Here, our qubit-resonator system operates in the ultrastrong coupling regime, where the coupling strength is 10% of the resonator frequency, allowing sign-changing transitions to be activated and, therefore, detected. This work shows that sign-changing transitions are an unambiguous, distinctive signature of systems operating in the ultrastrong coupling regime of the quantum Rabi model. These results pave the way to further studies of sign-preserving selection rules in multiqubit and multiphoton models.
Models of cultural niche construction with selection and assortative mating.
Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W
2012-01-01
Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.
Austin, Peter C
2008-10-01
Researchers have proposed using bootstrap resampling in conjunction with automated variable selection methods to identify predictors of an outcome and to develop parsimonious regression models. In this method, multiple bootstrap samples are drawn from the original data set. Traditional backward variable elimination is used in each bootstrap sample, and the proportion of bootstrap samples in which each candidate variable is identified as an independent predictor of the outcome is determined. The performance of this method for identifying predictor variables has not been examined. Monte Carlo simulation methods were used to determine the ability of bootstrap model selection methods to correctly identify predictors of an outcome when those variables selected for inclusion in at least 50% of the bootstrap samples are included in the final regression model. We compared the performance of the bootstrap model selection method with that of conventional backward variable elimination. Bootstrap model selection tended to result in approximately the same proportion of selected models matching the true regression model as conventional backward variable elimination did. Bootstrap model selection performed comparably to backward variable elimination for identifying the true predictors of a binary outcome.
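The procedure being evaluated can be sketched directly. This is a hedged simplification: the paper uses significance-based backward elimination on a binary outcome, while the sketch below uses AIC-based elimination on a simulated linear outcome for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 300, 6
X = rng.normal(size=(n, p))
y = 1.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)   # true predictors: 0, 1

def aic(Xs, ys, cols):
    """AIC of an OLS fit using predictor columns `cols` plus an intercept."""
    D = np.column_stack([np.ones(len(ys))] + [Xs[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(D, ys, rcond=None)
    rss = np.sum((ys - D @ beta) ** 2)
    return len(ys) * np.log(rss / len(ys)) + 2 * (len(cols) + 2)

def backward_eliminate(Xs, ys):
    """Drop the predictor whose removal improves AIC most, until none does."""
    cols = list(range(Xs.shape[1]))
    while cols:
        best_aic, drop = min(
            (aic(Xs, ys, [c for c in cols if c != d]), d) for d in cols)
        if best_aic < aic(Xs, ys, cols):
            cols.remove(drop)
        else:
            break
    return set(cols)

# Bootstrap selection: keep variables retained in at least 50% of resamples.
B = 50
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, n)                 # bootstrap resample
    for c in backward_eliminate(X[idx], y[idx]):
        counts[c] += 1

final = {c for c in range(p) if counts[c] / B >= 0.5}
```

Strong true predictors are retained in essentially every bootstrap sample; the interesting cases, which the paper studies, are weak or noise predictors whose retention frequencies hover near the 50% cutoff.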
Quantitative modeling of selective lysosomal targeting for drug design
DEFF Research Database (Denmark)
Trapp, Stefan; Rosania, G.; Horobin, R.W.
2008-01-01
Lysosomes are acidic organelles and are involved in various diseases, the most prominent being malaria. Accumulation of molecules in the cell by diffusion from the external solution into the cytosol, lysosome, and mitochondrion was calculated with the Fick–Nernst–Planck equation. The cell model considers … the diffusion of neutral and ionic molecules across biomembranes, protonation to mono- or bivalent ions, adsorption to lipids, and electrical attraction or repulsion. Based on simulation results, high and selective accumulation in lysosomes was found for weak mono- and bivalent bases with intermediate to high … predicted by the model, and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation…
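The lysosomal accumulation of weak bases can be illustrated with the simple pH-partition (ion-trapping) limit of such cell models: if only the neutral species crosses the membrane and equilibrates, the compartment concentration ratio follows from the Henderson–Hasselbalch equation. The pKa and pH values below are typical illustrative numbers, not the paper's parameterisation, and this sketch omits the Fick–Nernst–Planck machinery (ion permeation, lipid sorption, membrane potential).

```python
def accumulation_ratio(pka, ph_lysosome=5.0, ph_cytosol=7.2):
    """Lysosome/cytosol total-concentration ratio for a monobasic weak base,
    assuming only the neutral species permeates and equilibrates."""
    total_over_neutral = lambda ph: 1.0 + 10.0 ** (pka - ph)
    return total_over_neutral(ph_lysosome) / total_over_neutral(ph_cytosol)

# A hypothetical base with pKa 8 is mostly protonated (trapped) at lysosomal
# pH 5, giving roughly a hundredfold accumulation over the cytosol.
ratio = accumulation_ratio(pka=8.0)
```

Very low pKa values give no trapping (the base stays neutral in both compartments), which is why the model predicts an intermediate-to-high pKa window for selective lysosomal accumulation.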
Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.
Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K
2017-11-01
Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding.
Uncertainty associated with selected environmental transport models
International Nuclear Information System (INIS)
Little, C.A.; Miller, C.W.
1979-11-01
A description is given of the capabilities of several models to predict accurately either pollutant concentrations in environmental media or radiological dose to human organs. The models are discussed in three sections: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations. This procedure is infeasible for food chain models and, therefore, the uncertainty embodied in the models' input parameters, rather than the model output, is estimated. Aquatic transport models are divided into one-dimensional, longitudinal-vertical, and longitudinal-horizontal models. Several conclusions were made about the ability of the Gaussian plume atmospheric dispersion model to predict accurately downwind air concentrations from releases under several sets of conditions. It is concluded that no validation study has been conducted to test the predictions of either aquatic or terrestrial food chain models. Using the aquatic pathway from water to fish to an adult for ¹³⁷Cs as an example, a 95% one-tailed confidence interval for the predicted exposure is calculated by examining the distributions of the input parameters. Such an interval is found to be 16 times the value of the median exposure. A similar one-tailed limit for the air-grass-cow-milk-thyroid pathway for ¹³¹I and infants was 5.6 times the median dose. Of the three model types discussed in this report, the aquatic transport models appear to do the best job of predicting observed concentrations. However, this conclusion is based on many fewer aquatic validation data than were available for atmospheric model validation.
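The confidence-limit calculation described above can be mimicked with a small Monte Carlo propagation. The pathway factors and geometric standard deviations (GSDs) below are invented for illustration, not taken from the report: multiplying independent lognormal factors gives a lognormal dose, and the 95th-percentile-to-median ratio summarises the parameter uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical multiplicative pathway, e.g. dose = bioaccumulation factor
# * intake rate * dose-conversion factor. Medians are set to 1 because only
# the GSDs affect the percentile-to-median ratio.
gsds = [2.0, 1.5, 3.0]
dose = np.ones(n)
for gsd in gsds:
    dose *= rng.lognormal(mean=0.0, sigma=np.log(gsd), size=n)

ratio = np.quantile(dose, 0.95) / np.median(dose)

# Analytic cross-check for a product of independent lognormals:
# ratio = exp(1.645 * sqrt(sum(ln(GSD)^2)))
sigma_tot = np.sqrt(sum(np.log(g) ** 2 for g in gsds))
```

The report's factors of 16 (¹³⁷Cs) and 5.6 (¹³¹I) are exactly this kind of one-tailed 95% limit expressed as a multiple of the median; the sketch shows how such multiples arise from the spread of the input parameters.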
Quality Quandaries- Time Series Model Selection and Parsimony
DEFF Research Database (Denmark)
Bisgaard, Søren; Kulahci, Murat
2009-01-01
Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important … consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model-building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy…
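Parsimony can be enforced with a penalized criterion when fitting time series models. As a hedged illustration (simulated data, not the Internet-server series from the article, and plain autoregressive fits rather than the mixed ARMA models the authors favor), here is BIC-based selection of an AR order by conditional least squares:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
x = np.zeros(n)
for t in range(2, n):                       # simulate an AR(2) process
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + rng.normal()

def bic_ar(p, max_p=8):
    """BIC of an AR(p) fit by conditional least squares on a common sample."""
    ys = x[max_p:]
    if p == 0:
        rss = np.sum((ys - ys.mean()) ** 2)
    else:
        D = np.column_stack([x[max_p - k:n - k] for k in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(D, ys, rcond=None)
        rss = np.sum((ys - D @ beta) ** 2)
    m = len(ys)
    return m * np.log(rss / m) + (p + 1) * np.log(m)

order = min(range(9), key=bic_ar)
```

The BIC penalty discourages the long, non-parsimonious AR expansions the authors warn about; a true ARMA process, whose pure-AR representation is infinite, makes the case for mixed models even stronger.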
Selecting global climate models for regional climate change studies.
Pierce, David W; Barnett, Tim P; Santer, Benjamin D; Gleckler, Peter J
2009-05-26
Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Niño/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any one individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.
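The offsetting-errors argument can be demonstrated in a toy setting: if each model's error were independent noise around the truth, the multimodel mean's RMSE would shrink roughly as 1/sqrt(M). This idealized sketch (synthetic fields, independent errors by construction) is only a caricature of the mechanism the paper diagnoses in real GCM ensembles:

```python
import numpy as np

rng = np.random.default_rng(7)
n_points, n_models = 1000, 20

truth = rng.normal(size=n_points)            # synthetic "observed" field
# Each model = truth + independent unit-variance error, by construction.
models = truth + rng.normal(scale=1.0, size=(n_models, n_points))

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

individual = np.mean([rmse(m, truth) for m in models])  # ~1 per model
ensemble = rmse(models.mean(axis=0), truth)             # ~1/sqrt(20)
```

Real model errors are partly shared (common structural biases), so the actual improvement from averaging is smaller than this independent-error bound, but the direction of the effect is the same.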
Auditory-model based robust feature selection for speech recognition.
Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan
2010-02-01
It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.
Proposition of a multicriteria model to select logistics services providers
Directory of Open Access Journals (Sweden)
Miriam Catarina Soares Aharonovitz
2014-06-01
Full Text Available This study aims to propose a multicriteria model to select logistics service providers through the development of a decision tree. The methodology consists of a survey, which resulted in a sample of 181 responses. The sample was analyzed using statistical methods, among them descriptive statistics, multivariate analysis, analysis of variance, and parametric tests to compare means. Based on these results, it was possible to obtain the decision tree and information to support the multicriteria analysis. The AHP (Analytic Hierarchy Process) was applied to determine the influence of the data and thus ensure better consistency in the analysis. The decision tree categorizes the criteria according to the decision levels (strategic, tactical and operational). Furthermore, it allows a generic evaluation of the importance of each criterion in the supplier selection process from the point of view of logistics services contractors.
McGurk, B. J.; Painter, T. H.
2014-12-01
Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of the mountain snowpack have been available to compare with the modeled snowpack, but the comparability of results from a snow sensor or course to model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for spring of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness in a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration, based on 30 years of inflow to Hetch Hetchy, produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements will be described, and comparisons of observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin will be shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snowmelt runoff.
Immersive visualization of dynamic CFD model results
International Nuclear Information System (INIS)
Comparato, J.R.; Ringel, K.L.; Heath, D.J.
2004-01-01
With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)
Pareto-Optimal Model Selection via SPRINT-Race.
Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2018-02-01
In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that optimize more than one predefined objective simultaneously through compromise. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio test with an indifference zone. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied to two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
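The dual-sequential probability ratio machinery builds on Wald's classic SPRT. As a hedged, simplified building block (a plain binary SPRT on Bernoulli observations, not SPRINT-Race's ternary-decision variant; all parameters invented):

```python
import math
import random

def sprt(samples, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on 0/1 samples.
    Stops as soon as the log-likelihood ratio crosses a threshold."""
    upper = math.log((1 - beta) / alpha)    # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))    # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

random.seed(3)
data = [1 if random.random() < 0.7 else 0 for _ in range(1000)]
decision, n_used = sprt(data)
```

The sequential stopping rule is what makes racing algorithms efficient: clearly dominated models are eliminated after few evaluations, and effort concentrates on pairs that are genuinely close, with the indifference zone handling near-ties.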
Engineering Glass Passivation Layers -Model Results
Energy Technology Data Exchange (ETDEWEB)
Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.
2011-08-08
The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management, not only in the United States but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations from ancient analogues, it has been shown that well-designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence for this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. Drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate-, sulfate-, and phosphate-rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solution vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion-concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan
Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.
Energy Technology Data Exchange (ETDEWEB)
Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton
2018-02-01
This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.
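As an illustration of the Bayes-factor machinery underlying such a selection (a minimal sketch with made-up data and priors, not the ALEGRA/Dakota yield-model workflow), the closed-form marginal likelihood of a linear-Gaussian model can be used to compare a linear against a quadratic fit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.1  # assumed known noise level (illustrative)
y = 1.0 + 2.0 * x + 3.0 * x**2 + rng.normal(0.0, sigma, x.size)

def log_evidence(design, y, sigma, prior_sd=10.0):
    """Exact log marginal likelihood of y = X b + noise with an
    isotropic zero-mean Gaussian prior N(0, prior_sd^2 I) on b."""
    n = design.shape[0]
    # Marginal covariance of y: sigma^2 I + prior_sd^2 X X^T
    S = sigma**2 * np.eye(n) + prior_sd**2 * design @ design.T
    _, logdet = np.linalg.slogdet(S)
    quad = y @ np.linalg.solve(S, y)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

X_lin = np.column_stack([np.ones_like(x), x])
X_quad = np.column_stack([np.ones_like(x), x, x**2])

le_lin = log_evidence(X_lin, y, sigma)
le_quad = log_evidence(X_quad, y, sigma)
log_bayes_factor = le_quad - le_lin  # > 0 favours the quadratic model
```

The model with the larger log evidence is selected; the Gaussian prior on the coefficients is what automatically penalizes the quadratic model's extra parameter.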
Modeling and Field Results from Seismic Stimulation
International Nuclear Information System (INIS)
Majer, E.; Pride, S.; Lo, W.; Daley, T.; Nakagawa, Seiji; Sposito, Garrison; Roberts, P.
2006-01-01
Modeling the effect of seismic stimulation employing Maxwell-Boltzmann theory shows that the important component of stimulation is mechanical rather than fluid pressure effects. Modeling using Biot theory (two phases) shows that the pressure effects diffuse too quickly to be of practical significance. Field data from actual stimulation will be shown and compared with theory.
Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos
DEFF Research Database (Denmark)
Ganz, Melanie; Nielsen, Mads; Brandt, Sami
2010-01-01
We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation...
A Hybrid Multiple Criteria Decision Making Model for Supplier Selection
Wu, Chung-Min; Hsieh, Ching-Lin; Chang, Kuei-Lun
2013-01-01
Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify criteria. Considering the interdependence among the selection criteria, analytic network process (ANP) is then used to obtain their weights. To avoid calculation and additional pairwise compa...
Astrophysical Model Selection in Gravitational Wave Astronomy
Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.
2012-01-01
Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
Modeling and Analysis of Supplier Selection Method Using ...
African Journals Online (AJOL)
However, in these parts of the world the application of tools and models for the supplier selection problem is yet to surface, and the banking and finance industry here in Ethiopia is no exception. Thus, the purpose of this research was to address the supplier selection problem through modeling and application of analytical hierarchy ...
Dealing with selection bias in educational transition models
DEFF Research Database (Denmark)
Holm, Anders; Jæger, Mads Meier
2011-01-01
This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational tr...
On Optimal Input Design and Model Selection for Communication Channels
Energy Technology Data Exchange (ETDEWEB)
Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL
2013-01-01
In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. Kolmogorov n-width is used to characterize the representation error introduced by model selection, while Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely used in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.
Python Program to Select HII Region Models
Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.
2016-01-01
HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly of oxygen, nitrogen, and neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin 1984 and those produced by Cloudy (Ferland et al. 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
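The grid search described above can be sketched as a chi-square match between observed line ratios and tabulated model predictions. The ratios and model values below are invented placeholders, not actual Rubin (1984) or Cloudy outputs:

```python
import numpy as np

# Hypothetical model grid: each entry maps a model name to its
# predicted values for two line ratios (placeholder numbers).
models = {
    "model_A": np.array([2.10, 0.80]),
    "model_B": np.array([1.20, 1.50]),
    "model_C": np.array([2.00, 0.86]),
}

observed = np.array([2.05, 0.85])  # observed line ratios (illustrative)
errors = np.array([0.10, 0.05])    # 1-sigma observational uncertainties

def chi2(pred, obs, err):
    """Chi-square distance between predicted and observed ratios."""
    return float(np.sum(((pred - obs) / err) ** 2))

# Pick the model minimizing chi-square.
best = min(models, key=lambda name: chi2(models[name], observed, errors))
# model_C: ((2.00-2.05)/0.10)^2 + ((0.86-0.85)/0.05)^2 = 0.25 + 0.04 = 0.29
```

In practice the grid would be read from the digitized tables, and ties or near-ties in chi-square would flag degenerate parameter combinations worth inspecting by hand.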
Ground-water transport model selection and evaluation guidelines
International Nuclear Information System (INIS)
Simmons, C.S.; Cole, C.R.
1983-01-01
Guidelines are being developed to assist potential users with selecting appropriate computer codes for ground-water contaminant transport modeling. The guidelines are meant to assist managers with selecting appropriate predictive models for evaluating either arid or humid low-level radioactive waste burial sites. Evaluation test cases in the form of analytical solutions to fundamental equations and experimental data sets have been identified and recommended to ensure adequate code selection, based on accurate simulation of relevant physical processes. The recommended evaluation procedures will consider certain technical issues related to the present limitations in transport modeling capabilities. A code-selection plan will depend on identifying problem objectives, determining the extent of collectible site-specific data, and developing a site-specific conceptual model for the involved hydrology. Code selection will be predicated on steps for developing an appropriate systems model. This paper will review the progress in developing those guidelines. 12 references
Selection of Representative Models for Decision Analysis Under Uncertainty
Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.
2016-03-01
The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that are supposed to be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, first a mathematical function was developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute-levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.
Applying a Hybrid MCDM Model for Six Sigma Project Selection
Directory of Open Access Journals (Sweden)
Fu-Kwun Wang
2014-01-01
Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects for reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model not only identifies the best project selection, but can also be used to analyze the gaps between existing performance values and aspiration levels in each dimension and criterion based on the influential network relation map.
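The VIKOR stage of such a hybrid model can be sketched as follows, assuming benefit criteria throughout; the decision matrix, ANP-style weights, and compromise parameter v = 0.5 are hypothetical, not the paper's case-study values:

```python
import numpy as np

# Hypothetical decision matrix: 3 projects (rows) x 3 benefit criteria (cols)
F = np.array([[7.0, 5.0, 8.0],
              [6.0, 9.0, 6.0],
              [8.0, 7.0, 5.0]])
w = np.array([0.5, 0.3, 0.2])  # criteria weights (e.g. obtained from ANP)
v = 0.5                        # weight of the "majority rule" strategy

f_best = F.max(axis=0)         # ideal value per benefit criterion
f_worst = F.min(axis=0)        # anti-ideal value

# Weighted, normalised distance of each alternative to the ideal.
norm = (f_best - F) / (f_best - f_worst)
S = (w * norm).sum(axis=1)     # group utility (the lower the better)
R = (w * norm).max(axis=1)     # individual regret (the lower the better)
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))

ranking = np.argsort(Q)        # lowest Q = best compromise solution
```

The alternative with the lowest Q balances overall group utility S against worst-case individual regret R; cost-type criteria would flip the roles of `f_best` and `f_worst`.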
Continuum model for chiral induced spin selectivity in helical molecules
Energy Technology Data Exchange (ETDEWEB)
Medina, Ernesto [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France); Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); González-Arraga, Luis A. [IMDEA Nanoscience, Cantoblanco, 28049 Madrid (Spain); Finkelstein-Shapiro, Daniel; Mujica, Vladimiro [Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); Berche, Bertrand [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France)
2015-05-21
A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well-oriented p_z type orbitals on base molecules forming a staircase of definite chirality. In a tight binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z−π_z coupling via interbase p_{x,y}−p_z hopping, introducing spin coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.
Estimation and variable selection for generalized additive partial linear models
Wang, Li
2011-08-01
We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.
Methods for model selection in applied science and engineering.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
Coupled Michigan MHD - Rice Convection Model Results
de Zeeuw, D.; Sazykin, S.; Wolf, D.; Gombosi, T.; Powell, K.
2002-12-01
A new high performance Rice Convection Model (RCM) has been coupled to the adaptive-grid Michigan MHD model (BATSRUS). This fully coupled code allows us to self-consistently simulate the physics in the inner and middle magnetosphere. A study will be presented of the basic characteristics of the inner and middle magnetosphere in the context of a single coupled-code run for idealized storm inputs. The analysis will include region-2 currents, shielding of the inner magnetosphere, partial ring currents, pressure distribution, magnetic field inflation, and distribution of pV^gamma.
Graphical interpretation of numerical model results
International Nuclear Information System (INIS)
Drewes, D.R.
1979-01-01
Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements
International Nuclear Information System (INIS)
Thijs, Lore; Montero Sistiaga, Maria Luz; Wauthle, Ruben; Xie, Qingge; Kruth, Jean-Pierre; Van Humbeeck, Jan
2013-01-01
Selective laser melting (SLM) makes use of a high energy density laser beam to melt successive layers of metallic powders in order to create functional parts. The energy density of the laser is high enough to melt refractory metals like Ta and produce mechanically sound parts. Furthermore, the localized heat input causes a strong directional cooling and solidification. Epitaxial growth due to partial remelting of the previous layer, competitive growth mechanism and a specific global direction of heat flow during SLM of Ta result in the formation of long columnar grains with a 〈1 1 1〉 preferential crystal orientation along the building direction. The microstructure was visualized using both optical and scanning electron microscopy equipped with electron backscattered diffraction and the global crystallographic texture was measured using X-ray diffraction. The thermal profile around the melt pool was modeled using a pragmatic model for SLM. Furthermore, rotation of the scanning direction between different layers was seen to promote the competitive growth. As a result, the texture strength increased to as large as 4.7 for rotating the scanning direction 90° every layer. By comparison of the yield strength measured by compression tests in different orientations and the averaged Taylor factor calculated using the viscoplastic self-consistent model, it was found that both the morphological and crystallographic texture observed in SLM Ta contribute to yield strength anisotropy
Ignalina NPP Safety Analysis: Models and Results
International Nuclear Information System (INIS)
Uspuras, E.
1999-01-01
Research directions, linked to safety assessment of the Ignalina NPP, of the scientific safety analysis group are presented: Thermal-hydraulic analysis of accidents and operational transients; Thermal-hydraulic assessment of Ignalina NPP Accident Localization System and other compartments; Structural analysis of plant components, piping and other parts of Main Circulation Circuit; Assessment of RBMK-1500 reactor core and other. Models and main works carried out last year are described. (author)
Modeling clicks beyond the first result page
Chuklin, A.; Serdyukov, P.; de Rijke, M.
2013-01-01
Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more
Development of Solar Drying Model for Selected Cambodian Fish Species
Directory of Open Access Journals (Sweden)
Anna Hubackova
2014-01-01
Solar drying was investigated as a promising technique for fish processing in Cambodia and was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model for climbing perch and Nile tilapia, the Diffusion approximation model for swamp eel and walking catfish, and the Two-term model for Channa fish. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferred fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of freshwater fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.
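Selection among drying models by goodness of fit, as done in this study, can be sketched by fitting two common thin-layer models to moisture-ratio data and ranking them by RMSE. The data below are synthetic and only the Newton and Page models are shown, not the paper's full candidate set:

```python
import numpy as np

# Synthetic moisture-ratio curve (illustrative, not the paper's measurements)
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # drying time, h
mr = np.exp(-0.4 * t**1.2)                          # "observed" moisture ratio

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Newton model: MR = exp(-k t); fitted through the origin in log space.
k_newton = -(t @ np.log(mr)) / (t @ t)
pred_newton = np.exp(-k_newton * t)

# Page model: MR = exp(-k t^n); linearised as ln(-ln MR) = ln k + n ln t.
n_page, log_k = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
pred_page = np.exp(-np.exp(log_k) * t**n_page)

candidates = [("Newton", rmse(pred_newton, mr)), ("Page", rmse(pred_page, mr))]
best = min(candidates, key=lambda m: m[1])  # model with the lowest RMSE wins
```

The study's actual ranking additionally uses R2 and the chi-square statistic; with several criteria, a model is usually preferred only when it wins on most of them.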
Modeling HIV-1 drug resistance as episodic directional selection.
Directory of Open Access Journals (Sweden)
Ben Murrell
The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.
Muir, William M; Bijma, P; Schinckel, A
2013-06-01
An experiment was conducted comparing multilevel selection in Japanese quail for 43-day weight and survival with birds housed in either kin (K) or random (R) groups. Multilevel selection significantly reduced mortality (6.6% K vs. 8.5% R) and increased weight (1.30 g/MG K vs. 0.13 g/MG R), resulting in a response an order of magnitude greater with kin than random groups. Thus, multilevel selection was effective in reducing detrimental social interactions, which contributed to improved weight gain. The observed rates of response did not differ significantly from expected, demonstrating that current theory is adequate to explain multilevel selection response. Based on estimated genetic parameters, group selection would always be superior to any other combination of multilevel selection. Further, near-optimal results could be attained using multilevel selection if 20% of the weight was on the group component regardless of group composition. Thus, in nature the conditions for multilevel selection to be effective in bringing about social change may be common. In terms of the sustainability of breeding programs, multilevel selection is easy to implement and is expected to give near-optimal responses with reduced rates of inbreeding compared to group selection; the only requirement is that animals be housed in kin groups. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
Surfleet, Christopher G.; Tullos, Desirèe; Chang, Heejun; Jung, Il-Won
2012-09-01
A wide variety of approaches to hydrologic (rainfall-runoff) modeling of river basins confounds our ability to select, develop, and interpret models, particularly in the evaluation of prediction uncertainty associated with climate change assessment. To inform the model selection process, we characterized and compared three structurally-distinct approaches and spatial scales of parameterization to modeling catchment hydrology: a large-scale approach (using the VIC model; 671,000 km2 area), a basin-scale approach (using the PRMS model; 29,700 km2 area), and a site-specific approach (the GSFLOW model; 4700 km2 area) forced by the same future climate estimates. For each approach, we present measures of fit to historic observations and predictions of future response, as well as estimates of model parameter uncertainty, when available. While the site-specific approach generally had the best fit to historic measurements, the performance of the model approaches varied. The site-specific approach generated the best fit at unregulated sites, the large scale approach performed best just downstream of flood control projects, and model performance varied at the farthest downstream sites where streamflow regulation is mitigated to some extent by unregulated tributaries and water diversions. These results illustrate how selection of a modeling approach and interpretation of climate change projections require (a) appropriate parameterization of the models for climate and hydrologic processes governing runoff generation in the area under study, (b) understanding and justifying the assumptions and limitations of the model, and (c) estimates of uncertainty associated with the modeling approach.
Ponciano, José Miguel; Taper, Mark L; Dennis, Brian; Lele, Subhash R
2009-02-01
Hierarchical statistical models are increasingly being used to describe complex ecological processes. The data cloning (DC) method is a new general technique that uses Markov chain Monte Carlo (MCMC) algorithms to compute maximum likelihood (ML) estimates along with their asymptotic variance estimates for hierarchical models. Despite its generality, the method has two inferential limitations. First, it only provides Wald-type confidence intervals, known to be inaccurate in small samples. Second, it only yields ML parameter estimates, but not the maximized likelihood values used for profile likelihood intervals, likelihood ratio hypothesis tests, and information-theoretic model selection. Here we describe how to overcome these inferential limitations with a computationally efficient method for calculating likelihood ratios via data cloning. The ability to calculate likelihood ratios allows one to do hypothesis tests, construct accurate confidence intervals and undertake information-based model selection with hierarchical models in a frequentist context. To demonstrate the use of these tools with complex ecological models, we reanalyze part of Gause's classic Paramecium data with state-space population models containing both environmental noise and sampling error. The analysis results include improved confidence intervals for parameters, a hypothesis test of laboratory replication, and a comparison of the Beverton-Holt and the Ricker growth forms based on a model selection index.
Variable selection for mixture and promotion time cure rate models.
Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng
2016-11-16
Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.
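A minimal sketch of LASSO-style variable selection, here for an ordinary linear model rather than a cure rate model and with a hand-rolled proximal-gradient (ISTA) solver, might look like this; the data, penalty level, and selection threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])  # only features 0 and 3 matter
y = X @ beta_true + rng.normal(0.0, 0.5, n)

def lasso_ista(X, y, lam, steps=5000):
    """Proximal-gradient (ISTA) solver for
    0.5/n * ||y - X b||^2 + lam * ||b||_1."""
    n = X.shape[0]
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        # Soft-thresholding: the L1 penalty zeroes out weak coefficients.
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

beta = lasso_ista(X, y, lam=0.15)
selected = np.flatnonzero(np.abs(beta) > 1e-3)  # variables kept by the penalty
```

In a cure model setting the squared-error loss would be replaced by the (negative) likelihood of the mixture or promotion time model, but the penalization and thresholding logic is the same.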
Microplasticity of MMC. Experimental results and modelling
Energy Technology Data Exchange (ETDEWEB)
Maire, E. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d' Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))
1993-11-01
The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: inclusion, unaffected matrix, and matrix surrounding the inclusion having a gradient in the density of thermally induced dislocations. (orig.)
Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain
Feipeng Guo; Qibei Lu
2013-01-01
As the correct selection of partners in the supply chain of agricultural enterprises becomes more and more important, a large number of partner evaluation techniques are widely used in the field of agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of the agricultural supply chain. Secondly, a heuristic met...
Effect of Model Selection on Computed Water Balance Components
Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.
2009-01-01
Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration
Animal Model Selection for Inhalational HCN Exposure
2016-08-01
Decreased utilization of oxygen leads to an increase in venous oxygen levels. Because the brain, heart, and other oxygen-sensitive tissue ... are impacted by CN, oxygen cannot be utilized in those tissues, resulting in cellular hypoxia. In addition, a decreased utilization of pyruvate by the ... oxygen species, enhances N-methyl-D-aspartate (NMDA) receptor function, interacts with cystine to produce 2-ICA and 2-ACA (associated with memory ...
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-12-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes infeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradictory results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
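The three routes the abstract contrasts can be sketched on a toy problem where the exact answer is known. The snippet below is purely illustrative, not the study's code: for a Gaussian mean model with a conjugate Gaussian prior, the log evidence has a closed form, against which a brute-force Monte Carlo estimate and a BIC-style approximation can be checked.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: y_i ~ N(theta, sigma2) with conjugate prior theta ~ N(m0, tau0).
y = rng.normal(1.0, 1.0, size=20)
sigma2, m0, tau0 = 1.0, 0.0, 4.0
n, ybar = len(y), y.mean()
ss = np.sum((y - ybar) ** 2)

# 1) Exact log model evidence (closed form, thanks to conjugacy).
log_bme_exact = (-0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * ss / sigma2
                 - 0.5 * np.log(1 + n * tau0 / sigma2)
                 - 0.5 * (ybar - m0) ** 2 / (sigma2 / n + tau0))

# 2) Brute-force Monte Carlo: average the likelihood over prior draws
#    (log-sum-exp trick for numerical stability).
theta = rng.normal(m0, np.sqrt(tau0), size=500_000)
loglik = (-0.5 * n * np.log(2 * np.pi * sigma2)
          - 0.5 * (ss + n * (ybar - theta) ** 2) / sigma2)
m = loglik.max()
log_bme_mc = m + np.log(np.mean(np.exp(loglik - m)))

# 3) BIC-style surrogate: log p(y) ~ max log-likelihood - 0.5 * k * log(n).
loglik_max = -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * ss / sigma2
log_bme_bic = loglik_max - 0.5 * 1 * np.log(n)

print(log_bme_exact, log_bme_mc, log_bme_bic)
```

With enough prior draws the Monte Carlo estimate converges to the exact value, while the BIC surrogate stays biased because it ignores the prior; this mirrors the paper's finding that IC-based BME values can be heavily biased.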
Validation of elk resource selection models with spatially independent data
Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson
2011-01-01
Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...
A Working Model of Natural Selection Illustrated by Table Tennis
Dinc, Muhittin; Kilic, Selda; Aladag, Caner
2013-01-01
Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…
Augmented Self-Modeling as an Intervention for Selective Mutism
Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.
2012-01-01
Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…
Review of Current Standard Model Results in ATLAS
Brandt, Gerhard; The ATLAS collaboration
2018-01-01
This talk highlights results selected from the Standard Model research programme of the ATLAS Collaboration at the Large Hadron Collider. Results using data from $p-p$ collisions at $\\sqrt{s}=7,8$~TeV in LHC Run-1 as well as results using data at $\\sqrt{s}=13$~TeV in LHC Run-2 are covered. The status of cross section measurements from soft QCD processes, jet production and photon production is presented. The presentation extends to vector boson production with associated jets. Precision measurements of the production of $W$ and $Z$ bosons, including a first measurement of the mass of the $W$ boson, $m_W$, are discussed. The programme to measure electroweak processes with di-boson and tri-boson final states is outlined. All presented measurements are compatible with Standard Model descriptions and allow the model to be further constrained. In addition, they allow probing for new physics that would manifest through extra gauge couplings, or Standard Model gauge couplings deviating from their predicted values.
On Martingales, Causality, Identifiability and Model Selection
DEFF Research Database (Denmark)
Sokol, Alexander
…Ornstein-Uhlenbeck SDEs, where explicit calculations may be made for the postintervention distributions. Chapter 9 concerns identifiability of the mixing matrix in ICA. It is a well-known result that identifiability of the mixing matrix depends crucially on whether the error distributions are Gaussian or not. We attempt to elucidate what happens in the case where the error distributions are close to but not exactly Gaussian. Finally, Chapter 10 discusses degrees of freedom in nonlinear regression. Our motivating problem is that of L1-constrained and L1-penalized estimation in nonlinear regression.
Robust Decision-making Applied to Model Selection
Energy Technology Data Exchange (ETDEWEB)
Hemez, Francois M. [Los Alamos National Laboratory
2012-08-06
The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model that meets the simulation requirements from among a family of models presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.
Target Selection Models with Preference Variation Between Offenders
Townsley, Michael; Birks, Daniel; Ruiter, Stijn; Bernasco, Wim; White, Gentry
2016-01-01
Objectives: This study explores preference variation in location choice strategies of residential burglars. Applying a model of offender target selection that is grounded in assertions of the routine activity approach, rational choice perspective, crime pattern and social disorganization theories,
Akaike information criterion to select well-fit resist models
Burbine, Andrew; Fryer, David; Sturtevant, John
2015-03-01
In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train the model. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting to the data by calibrating with k-number of folds in the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. Akaike information criterion (AIC) is an information-theoretic approach to model selection based on the maximized log-likelihood for a given model that only needs a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is a good correspondence of AIC to K-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated. In modelforms with more than 40 fitting parameters, the size of the calibration data set benefits from additional parameters, statistically validating the model complexity.
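The trade-off described here, one likelihood-based calibration for AIC versus k repeated calibrations for cross-validation, can be illustrated with a hypothetical polynomial-degree selection standing in for resist modelforms (the data and candidate models below are invented, not from the study):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical calibration data: noisy samples from a quadratic truth.
x = np.linspace(-2, 2, 60)
y = 1.5 * x**2 - x + rng.normal(0, 0.5, x.size)

def aic_gauss(y, yhat, k):
    # AIC for least squares with Gaussian errors: n*log(RSS/n) + 2k.
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def kfold_mse(x, y, degree, folds=5):
    # K-fold cross-validation: refit on each training split, score held-out points.
    idx = np.arange(x.size)
    scores = []
    for f in range(folds):
        test = idx % folds == f
        coef = np.polyfit(x[~test], y[~test], degree)
        scores.append(np.mean((y[test] - np.polyval(coef, x[test])) ** 2))
    return np.mean(scores)

degrees = range(1, 7)
aic = {d: aic_gauss(y, np.polyval(np.polyfit(x, y, d), x), d + 1) for d in degrees}
cv = {d: kfold_mse(x, y, d) for d in degrees}
best_aic = min(aic, key=aic.get)
best_cv = min(cv, key=cv.get)
print(best_aic, best_cv)
```

Both criteria reject the underfit linear model; AIC reaches its ranking with a single fit per candidate, whereas 5-fold CV refits each candidate five times — the time saving the paper exploits.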
Model catalysis by size-selected cluster deposition
Energy Technology Data Exchange (ETDEWEB)
Anderson, Scott [Univ. of Utah, Salt Lake City, UT (United States)
2015-11-20
This report summarizes the accomplishments during the last four years of the subject grant. Results are presented for experiments in which size-selected model catalysts were studied under surface science and aqueous electrochemical conditions. Strong effects of cluster size were found, and by correlating the size effects with size-dependent physical properties of the samples measured by surface science methods, it was possible to deduce mechanistic insights, such as the factors that control the rate-limiting step in the reactions. Results are presented for CO oxidation, CO binding energetics and geometries, and electronic effects under surface science conditions, and for the electrochemical oxygen reduction reaction, ethanol oxidation reaction, and for oxidation of carbon by water.
A risk assessment model for selecting cloud service providers
Cayirci, Erdal; Garaga, Alexandr; Santana de Oliveira, Anderson; Roudier, Yves
2016-01-01
The Cloud Adoption Risk Assessment Model is designed to help cloud customers in assessing the risks that they face by selecting a specific cloud service provider. It evaluates background information obtained from cloud customers and cloud service providers to analyze various risk scenarios. This facilitates decision making in selecting the cloud service provider with the most preferable risk profile, based on aggregated risks to security, privacy, and service delivery. Based on this model we ...
SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS
Directory of Open Access Journals (Sweden)
Constantin ANGHELACHE
2016-06-01
Full Text Available In this paper, the authors describe the selection methods for moments and the application of the generalized moments method for the heteroskedastic models. The utility of GMM estimators is found in the study of the financial market models. The selection criteria for moments are applied for the efficient estimation of GMM for univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.
Ensemble Prediction Model with Expert Selection for Electricity Price Forecasting
Directory of Open Access Journals (Sweden)
Bijay Neupane
2017-01-01
Full Text Available Forecasting of electricity prices is important in deregulated electricity markets for all of the stakeholders: energy wholesalers, traders, retailers and consumers. Electricity price forecasting is an inherently difficult problem due to its special characteristics of dynamicity and non-stationarity. In this paper, we present a robust price forecasting mechanism that shows resilience towards the aggregate demand response effect and provides highly accurate forecasted electricity prices to the stakeholders in a dynamic environment. We employ an ensemble prediction model in which a group of different algorithms participates in forecasting the price 1 h ahead for each hour of the day. We propose two different strategies, namely, the Fixed Weight Method (FWM) and the Varying Weight Method (VWM), for selecting each hour’s expert algorithm from the set of participating algorithms. In addition, we utilize a carefully engineered set of features selected from a pool of features extracted from the past electricity price data, weather data and calendar data. The proposed ensemble model offers better results than the Autoregressive Integrated Moving Average (ARIMA) method, the Pattern Sequence-based Forecasting (PSF) method and our previous work using Artificial Neural Networks (ANN) alone on the datasets for the New York, Australian and Spanish electricity markets.
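The per-hour expert assignment underlying the Fixed Weight Method can be sketched with invented toy numbers; the actual FWM and VWM definitions in the paper may differ in detail:

```python
import numpy as np

# Sketch of the FWM idea: for every hour of the day, pick the forecasting
# algorithm with the lowest historical validation error as that hour's expert.
# The error matrix below is invented toy data, not market data.
# Rows: 3 candidate algorithms; columns: 24 hours of the day.
val_error = np.ones((3, 24))
val_error[0, :12] = 0.2   # algorithm 0 is accurate in the morning hours
val_error[1, 12:] = 0.3   # algorithm 1 is accurate in the evening hours

experts = val_error.argmin(axis=0)          # one fixed expert per hour

# Combining a new day's forecasts: take each hour from its expert.
forecasts = np.vstack([np.full(24, 30.0),   # algorithm 0's price forecast
                       np.full(24, 35.0),   # algorithm 1's price forecast
                       np.full(24, 32.0)])  # algorithm 2's price forecast
ensemble = forecasts[experts, np.arange(24)]
print(ensemble)
```

The VWM variant would instead update the expert weights over time; only the fixed assignment is sketched here.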
A Network Analysis Model for Selecting Sustainable Technology
Directory of Open Access Journals (Sweden)
Sangsung Park
2015-09-01
Full Text Available Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan their R&D (research and development) efficiently. In this study, we propose a network model that can be used to select sustainable technology from patent documents, based on the centrality and degree measures of social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.
Model Selection in Continuous Test Norming With GAMLSS.
Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E
2017-06-01
To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape. Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with one of the generalized Akaike information criteria was the most efficient model selection procedure (i.e., required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.
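Although the study works with the Box-Cox Power Exponential family in R's gamlss, the core step — comparing candidate distributions for test scores by an information criterion — can be sketched in Python with scipy. This is a hypothetical analogue, not the authors' procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Invented skewed "test score" data; the task is to pick the distribution
# family that an IC prefers (normal vs. skew-normal stands in for the
# richer BCPE family used in the study).
scores = stats.skewnorm.rvs(a=6, loc=80, scale=15, size=500, random_state=rng)

def aic(dist, data):
    # Fit by maximum likelihood, then AIC = 2k - 2*log-likelihood.
    params = dist.fit(data)
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * loglik

aic_norm = aic(stats.norm, scores)
aic_skew = aic(stats.skewnorm, scores)
print(aic_norm, aic_skew)
```

On skewed data the extra shape parameter earns its AIC penalty, so the skew-normal fit is selected — the same fit-versus-complexity trade-off the norming procedure automates.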
Selection Criteria in Regime Switching Conditional Volatility Models
Directory of Open Access Journals (Sweden)
Thomas Chuffart
2015-05-01
Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.
A model selection support system for numerical simulations of nuclear thermal-hydraulics
International Nuclear Information System (INIS)
Gofuku, Akio; Shimizu, Kenji; Sugano, Keiji; Yoshikawa, Hidekazu; Wakabayashi, Jiro
1990-01-01
In order to efficiently execute a dynamic simulation of a large-scale engineering system such as a nuclear power plant, it is necessary to develop an intelligent simulation support system for all phases of the simulation. This study is concerned with intelligent support for the program development phase and develops an adequate model selection support method, applying AI (Artificial Intelligence) techniques so that a simulation is executed consistently with its purpose and conditions. A prototype expert system to support model selection for numerical simulations of nuclear thermal-hydraulics, in the case of a cold-leg small-break loss-of-coolant accident in a PWR plant, is under development on a personal computer. The steps to support the selection of both the fluid model and the constitutive equations for the drift flux model have been developed. Several cases of model selection were carried out and reasonable model selection results were obtained.
A guide to Bayesian model selection for ecologists
Hooten, Mevin B.; Hobbs, N.T.
2015-01-01
The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.
Määttä, Anu; Laine, Marko; Tamminen, Johanna
2015-04-01
This study aims to characterize the uncertainty related to aerosol microphysical model selection and the modelling error due to approximations in the forward modelling. Many satellite aerosol retrieval algorithms rely on pre-calculated look-up tables of model parameters representing various atmospheric conditions. In the retrieval we need to choose the most appropriate aerosol microphysical models from the pre-defined set of models by fitting them to the observations. The aerosol properties, e.g. AOD, are then determined from the best models. This choice of an appropriate aerosol model constitutes a notable part of the AOD retrieval uncertainty. The motivation of our study was to account for these two sources in the total uncertainty budget: the uncertainty in selecting the most appropriate model, and the uncertainty resulting from the approximations in the pre-calculated aerosol microphysical model. The systematic model error was analysed by studying the behaviour of the model residuals, i.e. the differences between modelled and observed reflectances, by statistical methods. We utilised Gaussian processes to characterize the uncertainty related to approximations in aerosol microphysics modelling due to the use of look-up tables and other non-modelled systematic features in the Level 1 data. The modelling error is described by a non-diagonal covariance matrix parameterised by a correlation length, which is estimated from the residuals using computational tools from spatial statistics. In addition, we utilised Bayesian model selection and model averaging methods to account for the uncertainty due to aerosol model selection. By acknowledging the modelling error as a source of uncertainty in the retrieval of AOD from observed spectral reflectance, we allow the observed values to deviate from the modelled values within limits determined by both the measurement and modelling errors. This results in a more realistic uncertainty level of the retrieved AOD. The method is illustrated by both
Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups
Directory of Open Access Journals (Sweden)
Tiago P. Peixoto
2015-03-01
Full Text Available The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.
Directory of Open Access Journals (Sweden)
Florian Kopp
2014-12-01
Full Text Available Acquiring therapy resistance is one of the major obstacles in the treatment of patients with cancer. The discovery of the cancer stem cell (CSC)-specific drug salinomycin raised hope for improved treatment options by targeting therapy-refractory CSCs and mesenchymal cancer cells. However, the occurrence of an acquired salinomycin resistance in tumor cells remains elusive. To study the formation of salinomycin resistance, mesenchymal breast cancer cells were sequentially treated with salinomycin in an in vitro cell culture assay, and the resulting differences in gene expression and salinomycin susceptibility were analyzed. We demonstrated that long-term salinomycin treatment of mesenchymal cancer cells resulted in salinomycin-resistant cells with elevated levels of epithelial markers, such as E-cadherin and miR-200c, a decreased migratory capability, and a higher susceptibility to the classic chemotherapeutic drug doxorubicin. The formation of salinomycin resistance through the acquisition of epithelial traits was further validated by inducing mesenchymal-epithelial transition through an overexpression of miR-200c. The transition from a mesenchymal to a more epithelial-like phenotype of salinomycin-treated tumor cells was moreover confirmed in vivo, using syngeneic and, for the first time, transgenic mouse tumor models. These results suggest that the acquisition of salinomycin resistance through the clonal selection of epithelial-like cancer cells could become exploited for improved cancer therapies by antagonizing the tumor-progressive effects of epithelial-mesenchymal transition.
The Use of Evolution in a Central Action Selection Model
Directory of Open Access Journals (Sweden)
F. Montes-Gonzalez
2007-01-01
Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts in a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder sets a burden on the calculations carried out by the genetic algorithm.
A Hybrid Multiple Criteria Decision Making Model for Supplier Selection
Directory of Open Access Journals (Sweden)
Chung-Min Wu
2013-01-01
Full Text Available The sustainable supplier selection would be the vital part in the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculation and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
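The final TOPSIS ranking stage of such a hybrid model can be sketched as follows. The Delphi and ANP stages are omitted, so the criteria weights and supplier data below are simply assumed for illustration rather than derived:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float) / np.sum(weights)
    V = w * X / np.linalg.norm(X, axis=0)      # weighted, vector-normalised
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal point
    d_neg = np.linalg.norm(V - anti, axis=1)   # distance to anti-ideal point
    return d_neg / (d_pos + d_neg)             # higher = closer to ideal

# Invented supplier data: cost (lower is better), quality and delivery
# performance (higher is better).
suppliers = [[200, 9, 8],
             [250, 7, 9],
             [300, 6, 6]]
weights = [0.4, 0.4, 0.2]      # in the paper these would come from ANP
benefit = [False, True, True]
scores = topsis(suppliers, weights, benefit)
print(scores.round(3))
```

Because the first supplier dominates the third on every criterion, TOPSIS necessarily scores it higher; in practice the ANP-derived weights decide the trade-offs between non-dominated suppliers.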
Variable selection in Logistic regression model with genetic algorithm.
Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi
2018-02-01
Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is highly used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in nowadays clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
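The paper provides R code; the sketch below is a hypothetical Python analogue of the same idea, with a tiny GA searching binary inclusion masks and the AIC of a logistic fit as the fitness:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: 6 candidate predictors, only x0 and x2 drive the outcome.
n, p = 300, 6
X = rng.normal(size=(n, p))
logit = 2.0 * X[:, 0] - 2.0 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

def aic_logistic(mask):
    """AIC of a logistic regression on the masked columns (plus intercept)."""
    Z = np.column_stack([np.ones(n), X[:, mask]])
    def nll(b):
        z = Z @ b
        return np.sum(np.logaddexp(0, z) - y * z)  # stable negative log-lik
    res = minimize(nll, np.zeros(Z.shape[1]), method="BFGS")
    return 2 * Z.shape[1] + 2 * res.fun

def ga_select(pop_size=20, gens=15, mut=0.1):
    pop = rng.random((pop_size, p)) < 0.5
    for _ in range(gens):
        fit = np.array([aic_logistic(m) for m in pop])
        order = np.argsort(fit)                  # lower AIC is fitter
        parents = pop[order[: pop_size // 2]]
        # Uniform crossover between random parent pairs, then bit-flip mutation.
        a = parents[rng.integers(len(parents), size=pop_size)]
        b = parents[rng.integers(len(parents), size=pop_size)]
        cross = rng.random((pop_size, p)) < 0.5
        pop = np.where(cross, a, b)
        pop ^= rng.random((pop_size, p)) < mut
        pop[0] = parents[0]                      # elitism: keep the best mask
    fit = np.array([aic_logistic(m) for m in pop])
    return pop[np.argmin(fit)]

best = ga_select()
print(np.flatnonzero(best))
```

With a strong signal the GA reliably keeps the two true predictors; as the tutorial notes, the AIC penalty discourages, but does not strictly forbid, carrying along a noise variable.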
A CONCEPTUAL MODEL FOR IMPROVED PROJECT SELECTION AND PRIORITISATION
Directory of Open Access Journals (Sweden)
P. J. Viljoen
2012-01-01
Full Text Available
ENGLISH ABSTRACT: Project portfolio management processes are often designed and operated as a series of stages (or project phases) and gates. However, the flow of such a process is often slow, characterised by queues waiting for a gate decision and by repeated work from previous stages waiting for additional information or for re-processing. In this paper the authors propose a conceptual model that applies supply chain and constraint management principles to the project portfolio management process. An advantage of the proposed model is that it provides the ability to select and prioritise projects without undue changes to project schedules. This should result in faster flow through the system.
AFRIKAANSE OPSOMMING: Processes for managing portfolios of projects are normally designed and operated as a series of phases and gates. The flow through such a process is often slow and is characterised by queues waiting for decisions at the gates and by rework from earlier phases waiting for further information or for reprocessing. In this article a conceptual model is proposed. The model rests on the principles of supply chains as well as of constraint management, and offers the advantage that projects can be selected and prioritised without unnecessary changes to project schedules. This should lead to accelerated flow through the system.
Statistical model selection with “Big Data”
Directory of Open Access Journals (Sweden)
Jurgen A. Doornik
2015-12-01
Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.
Multicriteria framework for selecting a process modelling language
Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel
2016-01-01
The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines on evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.
Hurtado-Chong, Anahí; Joeris, Alexander; Hess, Denise; Blauth, Michael
2017-07-12
A considerable number of clinical studies experience delays, which result in increased duration and costs. In multicentre studies, patient recruitment is among the leading causes of delays. Poor site selection can result in low recruitment and bad data quality. Site selection is therefore crucial for study quality and completion, but currently no specific guidelines are available. Selection of sites adequate to participate in a prospective multicentre cohort study was performed through an open call using a newly developed objective multistep approach. The method is based on use of a network, definition of objective criteria and a systematic screening process. Out of 266 interested sites, 24 were shortlisted and finally 12 sites were selected to participate in the study. The steps in the process included an open call through a network, use of selection questionnaires tailored to the study, evaluation of responses using objective criteria and scripted telephone interviews. At each step, the number of candidate sites was quickly reduced leaving only the most promising candidates. Recruitment and quality of data went according to expectations in spite of the contracting problems faced with some sites. The results of our first experience with a standardised and objective method of site selection are encouraging. The site selection method described here can serve as a guideline for other researchers performing multicentre studies. ClinicalTrials.gov: NCT02297581. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Directory of Open Access Journals (Sweden)
Clare Stawski
2017-12-01
According to the “aerobic capacity model,” endothermy in birds and mammals evolved as a result of natural selection favoring increased persistent locomotor activity, fuelled by aerobic metabolism. However, this also increased energy expenditure even during rest, with the lowest metabolic rates occurring in the thermoneutral zone (TNZ) and increasing at ambient temperatures (Ta) below and above this range, depicted by the thermoregulatory curve. In our experimental evolution system, four lines of bank voles (Myodes glareolus) have been selected for high swim-induced aerobic metabolism and four unselected lines have been maintained as a control. In addition to a 50% higher rate of oxygen consumption during swimming, the selected lines have also evolved a 7.3% higher mass-adjusted basal metabolic rate. Therefore, we asked whether voles from selected lines would also display a shift in the thermoregulatory curve and an increased body temperature (Tb) during exposure to high Ta. To test these hypotheses we measured the resting metabolic rate (RMR) and Tb of selected and control voles at Ta from 10 to 34°C. As expected, RMR within and around the TNZ was higher in selected lines. Further, the Tb of selected lines within the TNZ was greater than the Tb of control lines, particularly at the maximum measured Ta of 34°C, suggesting that selected voles are more prone to hyperthermia. Interestingly, our results revealed that while the slope of the thermoregulatory curve below the lower critical temperature (LCT) is significantly lower in the selected lines, the LCT (26.1°C) does not differ. Importantly, selected voles also evolved a higher maximum thermogenesis, but thermal conductance did not increase. As a consequence, the minimum tolerated temperature, calculated from an extrapolation of the thermoregulatory curve, is 8.4°C lower in selected (−28.6°C) than in control lines (−20.2°C). Thus, selection for high aerobic exercise performance, even though operating under
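The minimum tolerated temperature reported above comes from extrapolating the thermoregulatory curve below the LCT to the point where metabolic demand equals maximum thermogenesis. A minimal sketch of that extrapolation: the LCT of 26.1°C is taken from the abstract, while the RMR, slope, and maximum-thermogenesis values are hypothetical stand-ins (units arbitrary), not the study's data.

```python
def min_tolerated_ta(lct, rmr_at_lct, slope, max_thermogenesis):
    """Extrapolate the thermoregulatory curve below the LCT,
    RMR(Ta) = rmr_at_lct + slope * (lct - Ta) with slope > 0,
    and solve for the Ta at which demand equals maximum thermogenesis."""
    return lct - (max_thermogenesis - rmr_at_lct) / slope

# LCT from the abstract; the remaining inputs are illustrative only.
LCT = 26.1
ta_min = min_tolerated_ta(LCT, rmr_at_lct=1.0, slope=0.05,
                          max_thermogenesis=3.5)
print(round(ta_min, 1))  # -23.9 with these assumed inputs
```

A shallower slope (as in the selected lines) pushes the intersection point, and hence the minimum tolerated temperature, further below zero for the same maximum thermogenesis.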
Loss of spent fuel pool cooling PRA: Model and results
Energy Technology Data Exchange (ETDEWEB)
Siu, N.; Khericha, S.; Conroy, S.; Beck, S.; Blackman, H.
1996-09-01
This letter report documents models for quantifying the likelihood of loss of spent fuel pool cooling; models for identifying post-boiling scenarios that lead to core damage; qualitative and quantitative results generated for a selected plant that account for plant design and operational practices; a comparison of these results and those generated from earlier studies; and a review of available data on spent fuel pool accidents. The results of this study show that for a representative two-unit boiling water reactor, the annual probability of spent fuel pool boiling is 5 × 10⁻⁵ and the annual probability of flooding associated with loss of spent fuel pool cooling scenarios is 1 × 10⁻³. Qualitative arguments are provided to show that the likelihood of core damage due to spent fuel pool boiling accidents is low for most US commercial nuclear power plants. It is also shown that, depending on the design characteristics of a given plant, the likelihood of either: (a) core damage due to spent fuel pool-associated flooding, or (b) spent fuel damage due to pool dryout, may not be negligible.
On model selections for repeated measurement data in clinical studies.
Zou, Baiming; Jin, Bo; Koch, Gary G; Zhou, Haibo; Borst, Stephen E; Menon, Sandeep; Shuster, Jonathan J
2015-05-10
Repeated measurement designs have been widely used in various randomized controlled trials for evaluating long-term intervention efficacies. For some clinical trials, the primary research question is how to compare two treatments at a fixed time, using a t-test. Although simple, robust, and convenient, this type of analysis fails to utilize a large amount of collected information. Alternatively, the mixed-effects model is commonly used for repeated measurement data. It models all available data jointly and allows explicit assessment of the overall treatment effects across the entire time spectrum. In this paper, we propose an analytic strategy for longitudinal clinical trial data where the mixed-effects model is coupled with a model selection scheme. The proposed test statistics not only make full use of all available data but also utilize the information from the optimal model deemed for the data. The performance of the proposed method under various setups, including different data missing mechanisms, is evaluated via extensive Monte Carlo simulations. Our numerical results demonstrate that the proposed analytic procedure is more powerful than the t-test when the primary interest is to test for the treatment effect at the last time point. Simulations also reveal that the proposed method outperforms the usual mixed-effects model for testing the overall treatment effects across time. In addition, the proposed framework is more robust and flexible in dealing with missing data compared with several competing methods. The utility of the proposed method is demonstrated by analyzing a clinical trial on the cognitive effect of testosterone in geriatric men with low baseline testosterone levels. Copyright © 2015 John Wiley & Sons, Ltd.
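The model-selection scheme above is likelihood-based. As a self-contained illustration of the underlying idea only (the authors use mixed-effects models, not the ordinary least-squares fits below), this sketch compares an intercept-only model against a linear-trend model on toy repeated measurements via AIC; all data and candidate models are hypothetical.

```python
import math

def aic_gaussian(residuals, n_params):
    """AIC for a least-squares fit under i.i.d. Gaussian errors.

    AIC = 2k - 2 ln L; with the error variance profiled out,
    ln L = -n/2 * (ln(2*pi*rss/n) + 1).
    """
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    k = n_params + 1  # +1 for the estimated error variance
    log_lik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    return 2 * k - 2 * log_lik

# Toy repeated measurements with a clear upward trend over time.
times = [0, 1, 2, 3, 4, 5, 6, 7]
values = [1.0, 1.4, 2.1, 2.4, 3.2, 3.3, 4.1, 4.4]

# Candidate 1: intercept only (no time effect).
mean = sum(values) / len(values)
res_flat = [y - mean for y in values]

# Candidate 2: straight line fitted by closed-form least squares.
n = len(times)
tbar = sum(times) / n
slope = sum((t - tbar) * (y - mean) for t, y in zip(times, values)) / \
        sum((t - tbar) ** 2 for t in times)
intercept = mean - slope * tbar
res_line = [y - (intercept + slope * t) for t, y in zip(times, values)]

aic_flat = aic_gaussian(res_flat, n_params=1)
aic_line = aic_gaussian(res_line, n_params=2)
best = "linear" if aic_line < aic_flat else "intercept-only"
print(best)  # the trend model should win on these data
```

The same comparison logic carries over when the candidates are mixed-effects models fitted by maximum likelihood.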
Optimal experiment design for model selection in biochemical networks.
Vanlier, Joep; Tiemann, Christian A; Hilbers, Peter A J; van Riel, Natal A W
2014-02-20
Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors.
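The design criterion above ranks candidate experiments by the Jensen-Shannon divergence between competing models' predictive samples. The paper estimates it with a k-nearest-neighbour method; the sketch below substitutes a crude histogram estimate on toy Gaussian predictive samples, purely to illustrate that an experiment whose model predictions are well separated scores higher.

```python
import math
import random

def js_divergence(samples_a, samples_b, bins=20):
    """Histogram-based Jensen-Shannon divergence between two sample sets.

    JSD(P, Q) = 0.5*KL(P||M) + 0.5*KL(Q||M), M = (P + Q)/2.
    (The cited paper uses a k-nearest-neighbour estimator instead.)
    """
    lo = min(min(samples_a), min(samples_b))
    hi = max(max(samples_a), max(samples_b))
    width = (hi - lo) / bins or 1.0

    def hist(samples):
        counts = [0] * bins
        for x in samples:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [c / len(samples) for c in counts]

    p, q = hist(samples_a), hist(samples_b)
    jsd = 0.0
    for pi, qi in zip(p, q):
        m = 0.5 * (pi + qi)
        if pi > 0:
            jsd += 0.5 * pi * math.log(pi / m)
        if qi > 0:
            jsd += 0.5 * qi * math.log(qi / m)
    return jsd

random.seed(0)
# Hypothetical predictive samples from competing models for one experiment.
model_1 = [random.gauss(0.0, 1.0) for _ in range(2000)]
model_2 = [random.gauss(3.0, 1.0) for _ in range(2000)]  # well separated
model_3 = [random.gauss(0.1, 1.0) for _ in range(2000)]  # nearly identical

# The experiment whose predictions differ strongly is the informative one.
print(js_divergence(model_1, model_2) > js_divergence(model_1, model_3))
```

JSD (in nats) is bounded by ln 2, which makes scores from different candidate experiments directly comparable.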
Quantile hydrologic model selection and model structure deficiency assessment : 1. Theory
Pande, S.
2013-01-01
A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies
Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach
Directory of Open Access Journals (Sweden)
Haifeng Guo
2012-01-01
This paper employs fuzzy set theory to solve the unintuitive problem of the Markowitz mean-variance (MV) portfolio model and extend it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus can find the optimal resolution for each interval. In the empirical part, we test this model on Chinese stocks and find that it can fulfil different kinds of investors' objectives. Finally, investment risk can be decreased when we add an investment limit to each stock in the portfolio, which indicates our model is useful in practice.
Development of an Environment for Software Reliability Model Selection
1992-09-01
… now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling. … Hardware can be repaired by spare modules, which is not the case for software; preventive maintenance is very important …
Analytical Modelling Of Milling For Tool Design And Selection
Fontaine, M.; Devillez, A.; Dudzinski, D.
2007-05-01
This paper presents an efficient analytical model which makes it possible to simulate a large panel of milling operations. A geometrical description of common end mills and of their engagement in the workpiece material is proposed. The internal radius of the rounded part of the tool envelope is used to define the considered type of mill. The cutting edge position is described for a constant lead helix and for a constant local helix angle. A thermomechanical approach of oblique cutting is applied to predict forces acting on the tool, and these results are compared with experimental data obtained from milling tests on a 42CrMo4 steel for three classical types of mills. The influence of some of the tool's geometrical parameters on the predicted cutting forces is presented in order to propose optimisation criteria for the design and selection of cutting tools.
Testing exclusion restrictions and additive separability in sample selection models
DEFF Research Database (Denmark)
Huber, Martin; Mellace, Giovanni
2014-01-01
Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non…
Selection Bias in Educational Transition Models: Theory and Empirical Evidence
DEFF Research Database (Denmark)
Holm, Anders; Jæger, Mads
Most studies using Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the “waning coefficients” in the Mare model are driven by selection on unobserved … the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which …
Novel web service selection model based on discrete group search.
Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng
2014-01-01
In our earlier work, we present a novel formal method for the semiautomatic verification of specifications and for describing web service composition components by using abstract concepts. After verification, the instantiations of components were selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach on the basis of the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.
Muir, William M; Bijma, P; Schinckel, A
2013-01-01
An experiment was conducted comparing multilevel selection in Japanese quail for 43-day weight and survival, with birds housed in either kin (K) or random (R) groups. Multilevel selection significantly reduced mortality (6.6% K vs. 8.5% R) and increased weight (1.30 g/MG K vs. 0.13 g/MG R), resulting in a response an order of magnitude greater with kin than with random grouping. Thus, multilevel selection was effective in reducing detrimental social interactions, which contributed to improved weight gain. The...
Multi-Criteria Decision Making For Determining A Simple Model of Supplier Selection
Harwati
2017-06-01
Supplier selection is a decision involving many criteria. Supplier selection models usually involve more than five main criteria and more than 10 sub-criteria; in fact, many models include more than 20 criteria. Involving too many criteria sometimes makes supplier selection models difficult to apply in many companies. This research focuses on designing a supplier selection model that is easy and simple to apply in a company. The Analytical Hierarchy Process (AHP) is used to weight the criteria. The analysis shows that four criteria are easy and simple to use for selecting suppliers: price (weight 0.4), shipment (weight 0.3), quality (weight 0.2) and service (weight 0.1). A real-case simulation shows that the simple model provides the same decision as a more complex model.
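Once AHP has produced the weights, the resulting four-criterion model reduces to a weighted sum. A minimal sketch using the weights reported above; the supplier names and their criterion scores are hypothetical, not data from the study.

```python
# Criterion weights from the abstract; supplier scores (0-1) are hypothetical.
WEIGHTS = {"price": 0.4, "shipment": 0.3, "quality": 0.2, "service": 0.1}

suppliers = {
    "Supplier A": {"price": 0.9, "shipment": 0.6, "quality": 0.7, "service": 0.8},
    "Supplier B": {"price": 0.7, "shipment": 0.9, "quality": 0.8, "service": 0.6},
    "Supplier C": {"price": 0.5, "shipment": 0.7, "quality": 0.9, "service": 0.9},
}

def weighted_score(scores):
    """Simple additive model: sum of criterion score times AHP weight."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Rank suppliers by total weighted score, best first.
ranking = sorted(suppliers, key=lambda s: weighted_score(suppliers[s]),
                 reverse=True)
print(ranking[0])  # Supplier B: 0.77 edges out Supplier A's 0.76
```

With only four criteria the weighting is transparent: a supplier strong on the heavily weighted price and shipment criteria wins even with middling service.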
Hyperopt: a Python library for model selection and hyperparameter optimization
Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.
2015-01-01
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
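Hyperopt's central idea of folding the classifier choice itself into the hyperparameter search space can be sketched without the library. The snippet below uses plain random search over a nested space with a stand-in loss function (a real setup would call `hyperopt.fmin` with `tpe.suggest` and a cross-validated objective); the space, the loss, and the "ideal" parameter values are all illustrative assumptions.

```python
import random

random.seed(42)

# One nested search space: the classifier choice is itself a hyperparameter.
SPACE = {
    "svm": {"C": (0.01, 100.0)},
    "knn": {"k": (1, 30)},
}

def sample_config():
    """Draw one configuration: first a classifier, then its hyperparameters."""
    clf = random.choice(list(SPACE))
    if clf == "svm":
        return {"clf": "svm", "C": random.uniform(*SPACE["svm"]["C"])}
    return {"clf": "knn", "k": random.randint(*SPACE["knn"]["k"])}

def loss(cfg):
    """Stand-in validation loss; a real setup would cross-validate here."""
    if cfg["clf"] == "svm":
        return abs(cfg["C"] - 10.0) / 100.0    # pretend C = 10 is ideal
    return abs(cfg["k"] - 5) / 30.0 + 0.05     # pretend k = 5 is ideal, worse

# Random search: evaluate 200 sampled configurations, keep the best.
best = min((sample_config() for _ in range(200)), key=loss)
print(best["clf"], round(loss(best), 3))
```

Hyperopt replaces the uniform sampling above with sequential model-based proposals, but the interface is the same: a space, an objective, and a budget of evaluations.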
Selection of climate change scenario data for impact modelling
DEFF Research Database (Denmark)
Sloth Madsen, M; Fox Maule, C; MacKellar, N
2012-01-01
Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables such as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...
Directory of Open Access Journals (Sweden)
Sarah A. Birken
2017-10-01
Abstract Background Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories. Methods We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results. Results Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, the criteria used by the most respondents to select theory included analytic level (58%), logical consistency/plausibility (56%), empirical support (53%), and description of a change process (54%). The criteria used by the fewest respondents included fecundity (10%), uniqueness (12%), and falsifiability (15%). Conclusions Implementation scientists use a large number of criteria to select theories, but there is little
Adverse Selection Models with Three States of Nature
Directory of Open Access Journals (Sweden)
Daniela MARINESCU
2011-02-01
In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.
A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING
Directory of Open Access Journals (Sweden)
Hancu Lucian-Viorel
2010-12-01
This paper presents a multi-criteria decision-making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, since the Internet plays an important role in business management. Companies have to concentrate their efforts on their core activities, and the other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision-making methods rather than considering a single factor, cost.
Selection for costly sexual traits results in a vacant mating niche and male dimorphism.
Hendrickx, Frederik; Vanthournout, Bram; Taborsky, Michael
2015-08-01
The expected strong directional selection for traits that increase a male's mating ability conflicts with the frequent observation that within species, males may show extreme variation in sexual traits. These male reproductive polymorphisms are usually attributed to direct male-male competition. It is currently unclear, however, how directional selection for sexually selected traits may convert into disruptive selection, and if female preference for elaborate traits may be an alternative mechanism driving the evolution of male polymorphism. Here, we explore this mechanism using the polyandric dwarf spider Oedothorax gibbosus as a model. We first show that males characterized by conspicuous cephalic structures serving as a nuptial feeding device ("gibbosus males") significantly outperform other males in siring offspring of previously fertilized females. However, significant costs in terms of development time of gibbosus males open a mating niche for an alternative male type lacking expensive secondary sexual traits. These "tuberosus males" obtain virtually all fertilizations early in the breeding season. Individual-based simulations demonstrate a hitherto unknown general principle, by which males selected for high investment to attract females suffer constrained mating opportunities. This creates a vacant mating niche of unmated females for noninvesting males and, consequently, disruptive selection on male secondary sexual traits. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
A DNA-based system for selecting and displaying the combined result of two input variables
DEFF Research Database (Denmark)
Liu, Huajie; Wang, Jianbang; Song, S
2015-01-01
… demonstrate this capability in a DNA-based system that takes two input numbers, represented in DNA strands, and returns the result of their multiplication, writing this as a number in a display. Unlike a conventional calculator, this system operates by selecting the result from a library of solutions rather...
Modeling quality attributes and metrics for web service selection
Oskooei, Meysam Ahmadi; Daud, Salwani binti Mohd; Chua, Fang-Fang
2014-06-01
Since service-oriented architecture (SOA) has been designed to develop systems as distributed applications, service selection has become a vital aspect of service-oriented computing (SOC). Selecting the appropriate web service with respect to quality of service (QoS), using a mathematical solution for optimization, turns service selection into a common concern for service users. Nowadays, the number of web services that provide the same functionality has increased, and selecting services from a set of alternatives which differ in quality parameters can be difficult for service consumers. In this paper, a new model for QoS attributes and metrics is proposed to provide a suitable solution for optimizing web service selection and composition with low complexity.
[On selection criteria in spatially distributed models of competition].
Il'ichev, V G; Il'icheva, O A
2014-01-01
Discrete models of competitors (an initial population and mutants) are considered, in which reproduction is described by an increasing, concave function and migration in a space consisting of a set of areas is described by a Markov matrix. This allows the use of the theory of monotone operators to study problems of selection, coexistence and stability. It is shown that as the number of areas increases, progressively more severe constraints on the selective advantage of the initial population are required.
Comparing the staffing models of outsourcing in selected companies
Chaloupková, Věra
2010-01-01
This thesis deals with the problems of the takeover of employees in outsourcing. The main purpose is to compare the staffing models of outsourcing in selected companies. To compare the selected companies, I chose multi-criteria analysis. This thesis is divided into six chapters. The first chapter is devoted to the theoretical part. This chapter describes basic concepts such as outsourcing, personnel aspects, the phases of outsourcing projects, communications and culture. The rest of the thesis is devoted...
Economic assessment model architecture for AGC/AVLIS selection
International Nuclear Information System (INIS)
Hoglund, R.L.
1984-01-01
The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint
Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions
DEFF Research Database (Denmark)
Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.
2011-01-01
…, propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today's optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all …
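The correlation problem described above is easy to reproduce: on a toy relation with correlated attributes, the attribute-value-independence estimate of a conjunctive predicate's selectivity is badly off. The relation and predicate below are hypothetical, chosen only to make the error visible.

```python
# A toy relation of 100 rows with two strongly correlated attributes.
rows = [("Honda", "Civic")] * 40 + [("Honda", "Accord")] * 10 + \
       [("Toyota", "Corolla")] * 40 + [("Toyota", "Camry")] * 10

n = len(rows)

def sel(pred):
    """Exact selectivity of a predicate over the relation."""
    return sum(1 for r in rows if pred(r)) / n

# Predicate: make = 'Honda' AND model = 'Civic'
true_sel = sel(lambda r: r[0] == "Honda" and r[1] == "Civic")

# What an optimizer with one-dimensional summaries and the
# attribute-value-independence assumption would estimate:
est_independent = sel(lambda r: r[0] == "Honda") * sel(lambda r: r[1] == "Civic")

print(true_sel, est_independent)  # 0.4 vs 0.2: a 2x underestimate
```

Since join-order costs compound multiplicatively, such per-predicate factors of two quickly turn into order-of-magnitude errors over a multi-join plan, which is the motivation for modelling the joint distribution instead.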
Birken, Sarah A; Powell, Byron J; Shea, Christopher M; Haines, Emily R; Alexis Kirk, M; Leeman, Jennifer; Rohweder, Catherine; Damschroder, Laura; Presseau, Justin
2017-10-30
Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories. We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results. Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, the criteria used by the most respondents to select theory included analytic level (58%), logical consistency/plausibility (56%), empirical support (53%), and description of a change process (54%). The criteria used by the fewest respondents included fecundity (10%), uniqueness (12%), and falsifiability (15%). Implementation scientists use a large number of criteria to select theories, but there is little consensus on which are most important. Our results suggest that the
Genetic signatures of natural selection in a model invasive ascidian
Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin
2017-03-01
Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta.
Ecohydrological model parameter selection for stream health evaluation.
Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein
2015-04-01
Variable selection is a critical step in development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of best variable sets, models were developed to predict the measures of biological integrity using adaptive-neuro fuzzy inference systems (ANFIS), a technique well-suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI ranging from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our
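Of the three selection methods compared above, Spearman's rank correlation is the simplest to sketch. The helper below is a minimal, dependency-free illustration; the function and variable names are invented, and the real study screened hundreds of HIT/SWAT variables rather than a toy dictionary:

```python
def rank(values):
    """1-based average ranks, with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def screen(predictors, response, top_k=5):
    """Keep the top_k predictors by absolute rank correlation with the response."""
    ranked = sorted(predictors,
                    key=lambda name: abs(spearman(predictors[name], response)),
                    reverse=True)
    return ranked[:top_k]
```

Screening a dictionary of candidate flow-regime variables against an IBI score vector with `screen(...)` would then return the most rank-correlated subset for downstream ANFIS modeling.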
Financial applications of a Tabu search variable selection model
Directory of Open Access Journals (Sweden)
Zvi Drezner
2001-01-01
We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu Search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
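The core tabu-search loop for subset selection can be sketched generically. This is an illustrative implementation only (adjusted R² as the fitness, a flip-one-variable neighborhood, fixed tenure), not the exact procedure of Drezner, Marcoulides and Salhi:

```python
import numpy as np

def _adj_r2(X, y, cols):
    # adjusted R^2 of an OLS fit of y on the chosen columns (plus intercept)
    n, k = len(y), len(cols)
    if k == 0 or k >= n - 2:
        return -np.inf
    A = np.column_stack([np.ones(n), X[:, sorted(cols)]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def tabu_select(X, y, n_iter=60, tenure=3):
    """Tabu search over variable subsets, maximizing adjusted R^2."""
    p = X.shape[1]
    current = frozenset(range(min(2, p)))        # arbitrary starting subset
    best, best_score = current, _adj_r2(X, y, current)
    tabu = {}                                    # variable -> iteration its tabu status expires
    for it in range(n_iter):
        moves = []
        for j in range(p):
            cand = current ^ {j}                 # flip variable j in or out
            s = _adj_r2(X, y, cand)
            # a tabu move is still allowed if it beats the best found (aspiration)
            if tabu.get(j, -1) <= it or s > best_score:
                moves.append((s, j))
        if not moves:
            break
        s, j = max(moves)                        # best admissible neighbor, even if worse
        current = current ^ {j}
        tabu[j] = it + tenure
        if s > best_score:
            best, best_score = current, s
    return sorted(best), best_score
```

Because the search may accept worsening moves while recently flipped variables are tabu, it can escape local optima that trap stepwise regression.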
Selecting an appropriate genetic evaluation model for selection in a developing dairy sector
McGill, D.M.; Mulder, H.A.; Thomson, P.C.; Lievaart, J.J.
2014-01-01
This study aimed to identify genetic evaluation models (GEM) to accurately select cattle for milk production when only limited data are available. It is based on a data set from the Pakistani Sahiwal progeny testing programme which includes records from five government herds, each consisting of 100
Multi-scale habitat selection modeling: A review and outlook
Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman
2016-01-01
Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.
Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till
2012-01-01
Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
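The logic of a Heckman-type correction can be illustrated with a minimal two-step estimator on synthetic data. Everything here (the data-generating process, coefficients, and names) is invented for the sketch; the paper itself fits full selection models to survey data, not this toy:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 5000
z = rng.normal(size=n)                        # covariate driving test participation
x = rng.normal(size=n)                        # covariate in the outcome equation
# correlated errors: selection on unobservables (rho = 0.6)
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T
tested = (0.2 + 0.8 * z + u) > 0              # selection equation: who gets tested
y = 1.0 + 0.5 * x + e                         # outcome, observed only if tested

# Step 1: probit of participation on z, fit by maximum likelihood.
W = np.column_stack([np.ones(n), z])
def probit_nll(g):
    p = np.clip(norm.cdf(W @ g), 1e-10, 1 - 1e-10)
    return -np.sum(tested * np.log(p) + (~tested) * np.log(1 - p))
g_hat = minimize(probit_nll, np.zeros(2)).x

# Step 2: OLS on the observed outcomes, augmented with the inverse Mills ratio.
imr = norm.pdf(W @ g_hat) / norm.cdf(W @ g_hat)
A_naive = np.column_stack([np.ones(tested.sum()), x[tested]])
A_heck = np.column_stack([A_naive, imr[tested]])
b_naive, *_ = np.linalg.lstsq(A_naive, y[tested], rcond=None)
b_heck, *_ = np.linalg.lstsq(A_heck, y[tested], rcond=None)
# b_naive[0] overstates the mean outcome level among the tested;
# b_heck[0] recovers something close to the true intercept of 1.0
```

The naive intercept absorbs E[e | tested] > 0 and is biased upward, mirroring how ignoring selective non-participation can distort prevalence estimates; the Mills-ratio term soaks up that selection effect.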
The Properties of Model Selection when Retaining Theory Variables
DEFF Research Database (Denmark)
Hendry, David F.; Johansen, Søren
Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when embedding a theory model that specifies the correct set of m relevant exogenous variables, x{t}, within the larger set of m+k candidate variables, (x{t},w{t}), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w{t} are relevant.
SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model
International Nuclear Information System (INIS)
Zhou, Z; Folkert, M; Wang, J
2016-01-01
Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as the objective functions simultaneously. A Pareto solution set with many feasible solutions will result from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method used the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility was chosen as the optimal solution. In SMOLER, an optimal learning model coupled with the clonal selection algorithm was used to optimize model parameters. In this study, PET, CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: In total, 126 solution sets were generated by adjusting predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00%, and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.
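The final selection step can be illustrated with a much-simplified scalar utility standing in for the evidential-reasoning calculus; the function names, weights, and example pairs below are invented:

```python
def pareto_front(solutions):
    """Keep the non-dominated (sensitivity, specificity) pairs.
    Assumes no duplicate solutions in the input."""
    return [s for s in solutions
            if not any(o != s and o[0] >= s[0] and o[1] >= s[1] for o in solutions)]

def select_optimal(front, w_sens=0.5, w_spec=0.5):
    """Crude stand-in for SMOLER's evidential-reasoning utility:
    pick the Pareto solution with the highest weighted sum of objectives."""
    return max(front, key=lambda s: w_sens * s[0] + w_spec * s[1])
```

Shifting the weights encodes a selection rule, e.g. favoring sensitivity when missing a distant failure is costlier than a false positive.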
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
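The backward-elimination procedure examined above can be sketched generically. In this illustration a linear probability model stands in for the random forest and coefficient magnitude for the importance measure (both simplifying assumptions); the loop structure, iteratively dropping the least important predictor while tracking cross-validated accuracy, is the same:

```python
import numpy as np

def fit_lpm(X, y):
    """Linear probability model: OLS of the 0/1 response on the predictors."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def accuracy(beta, X, y):
    A = np.column_stack([np.ones(len(y)), X])
    return float((((A @ beta) > 0.5).astype(float) == y).mean())

def cv_accuracy(X, y, k=5):
    """k-fold cross-validated classification accuracy."""
    idx = np.arange(len(y))
    return float(np.mean([
        accuracy(fit_lpm(X[np.setdiff1d(idx, fold)], y[np.setdiff1d(idx, fold)]),
                 X[fold], y[fold])
        for fold in np.array_split(idx, k)]))

def backward_eliminate(X, y, min_vars=1):
    vars_ = list(range(X.shape[1]))
    history = [(cv_accuracy(X[:, vars_], y), list(vars_))]
    while len(vars_) > min_vars:
        beta = fit_lpm(X[:, vars_], y)
        importance = np.abs(beta[1:]) * X[:, vars_].std(axis=0)  # crude importance
        vars_.pop(int(np.argmin(importance)))                    # drop least important
        history.append((cv_accuracy(X[:, vars_], y), list(vars_)))
    return max(history)  # (best cross-validated accuracy, its variable subset)
```

Note the folds here are external to nothing: a fully honest assessment, as the paper stresses, would nest the entire elimination loop inside an outer cross-validation.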
Sample selection and taste correlation in discrete choice transport modelling
DEFF Research Database (Denmark)
Mabit, Stefan Lindhard
2008-01-01
… many issues that deserve attention. This thesis investigates how sample selection can affect estimation of discrete choice models and how taste correlation should be incorporated into applied mixed logit estimation. Sampling in transport modelling is often based on an observed trip. This may cause a sample to be choice-based or governed by a self-selection mechanism. In both cases, there is a possibility that sampling affects the estimation of a population model. It was established in the seventies how choice-based sampling affects the estimation of multinomial logit models. The thesis examines … of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset, the approach taken is to use theory on the value of travel time as guidance …
Spatial Fleming-Viot models with selection and mutation
Dawson, Donald A
2014-01-01
This book constructs a rigorous framework for analysing selected phenomena in the evolutionary theory of populations arising from the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions, and new forms of duality (for state-dependent mutation and multitype selection), which are used to prove ergodic theorems in this context and are applicable to many other questions, as well as renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection, and which make it necessary to mathematically bridge the gap (in the limit) between time and space scales.
Tumor-Selective Cytotoxicity of Nitidine Results from Its Rapid Accumulation into Mitochondria
Directory of Open Access Journals (Sweden)
Hironori Iwasaki
2017-01-01
We identified a nitidine- (NTD-) accumulating organelle and evaluated the net cytotoxicity of accumulated NTD. To evaluate tumor cell selectivity of the drug, we evaluated its selective cytotoxicity against 39 human cancer cell lines (JFCR39 panel), and the profile was compared with those of known anticancer drugs. Organelle specificity of NTD was visualized using organelle-targeted fluorescent proteins. Real-time analysis of cell growth, proliferation, and cytotoxicity was performed using the xCELLigence system. Selectivity of NTD in the JFCR39 panel was evaluated. Mitochondria-specific accumulation of NTD was observed. Real-time cytotoxicity analysis suggested that the mechanism of NTD-induced cell death is independent of the cell cycle. Short-term treatment indicated that this cytotoxicity only resulted from the accumulation of NTD into the mitochondria. The results from the JFCR39 panel indicated that NTD-mediated cytotoxicity resulted from unique mechanisms compared with those of other known anticancer drugs. These results suggested that the cytotoxicity of NTD is only induced by its accumulation in mitochondria. The drug triggered mitochondrial dysfunction in less than 2 h. Similarity analysis of the selectivity of NTD in 39 tumor cell lines strongly supported the unique tumor cell specificity of NTD. Thus, these features indicate that NTD may be a promising antitumor drug for new combination chemotherapies.
Tumor-Selective Cytotoxicity of Nitidine Results from Its Rapid Accumulation into Mitochondria.
Iwasaki, Hironori; Inafuku, Masashi; Taira, Naoyuki; Saito, Seikoh; Oku, Hirosuke
2017-01-01
We identified a nitidine- (NTD-) accumulating organelle and evaluated the net cytotoxicity of accumulated NTD. To evaluate tumor cell selectivity of the drug, we evaluated its selective cytotoxicity against 39 human cancer cell lines (JFCR39 panel), and the profile was compared with those of known anticancer drugs. Organelle specificity of NTD was visualized using organelle-targeted fluorescent proteins. Real-time analysis of cell growth, proliferation, and cytotoxicity was performed using the xCELLigence system. Selectivity of NTD in the JFCR39 panel was evaluated. Mitochondria-specific accumulation of NTD was observed. Real-time cytotoxicity analysis suggested that the mechanism of NTD-induced cell death is independent of the cell cycle. Short-term treatment indicated that this cytotoxicity only resulted from the accumulation of NTD into the mitochondria. The results from the JFCR39 panel indicated that NTD-mediated cytotoxicity resulted from unique mechanisms compared with those of other known anticancer drugs. These results suggested that the cytotoxicity of NTD is only induced by its accumulation in mitochondria. The drug triggered mitochondrial dysfunction in less than 2 h. Similarity analysis of the selectivity of NTD in 39 tumor cell lines strongly supported the unique tumor cell specificity of NTD. Thus, these features indicate that NTD may be a promising antitumor drug for new combination chemotherapies.
Xu, Zhiqiang
2017-02-16
Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance-based or model-based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
Lubke, Gitta H.; Campbell, Ian
2016-01-01
Inference and conclusions drawn from model fitting analyses are commonly based on a single “best-fitting” model. If model selection and inference are carried out using the same data, model selection uncertainty is ignored. We illustrate the Type I error inflation that can result from using the same data for model selection and inference, and we then propose a simple bootstrap-based approach to quantify model selection uncertainty in terms of model selection rates. A selection rate can be interpreted as an estimate of the replication probability of a fitted model. The benefits of bootstrapping model selection uncertainty are demonstrated in a growth mixture analysis of data from the National Longitudinal Study of Youth, and a 2-group measurement invariance analysis of the Holzinger-Swineford data. PMID:28663687
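The proposal can be sketched with two nested candidate models selected by AIC on each bootstrap resample; the data-generating model, sample sizes, and names are invented for the illustration (the paper works with growth mixture and measurement invariance models, not this toy):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 150
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)          # truth: the linear model

def aic_linear(xb, yb):
    # AIC of an intercept + slope model (2 mean parameters)
    A = np.column_stack([np.ones(len(yb)), xb])
    beta, *_ = np.linalg.lstsq(A, yb, rcond=None)
    rss = float(((yb - A @ beta) ** 2).sum())
    return len(yb) * np.log(rss / len(yb)) + 2 * 2

def aic_constant(yb):
    # AIC of the intercept-only model (1 mean parameter)
    rss = float(((yb - yb.mean()) ** 2).sum())
    return len(yb) * np.log(rss / len(yb)) + 2 * 1

def selection_rates(x, y, B=300):
    """Select by AIC on each bootstrap resample; report how often each model wins."""
    wins = {"constant": 0, "linear": 0}
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))
        winner = "linear" if aic_linear(x[idx], y[idx]) < aic_constant(y[idx]) else "constant"
        wins[winner] += 1
    return {k: v / B for k, v in wins.items()}
```

A selection rate well below 1.0 for the winning model is a warning that the "best-fitting" model might not replicate in a fresh sample.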
The Selection of ARIMA Models with or without Regressors
DEFF Research Database (Denmark)
Johansen, Søren; Riani, Marco; Atkinson, Anthony C.
We develop a $C_{p}$ statistic for the selection of regression models with stationary and nonstationary ARIMA error term. We derive the asymptotic theory of the maximum likelihood estimators and show they are consistent and asymptotically Gaussian. We also prove that the distribution of the sum o...
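For reference, the classical OLS Mallows' $C_{p}$, the special case that the paper generalizes to stationary and nonstationary ARIMA error terms, is easy to state in code (the function layout is illustrative):

```python
import numpy as np

def mallows_cp(X_full, y, subset):
    """Mallows' C_p = RSS_p / s^2 - n + 2p, with s^2 estimated from the full model."""
    n, k = X_full.shape
    def rss(cols):
        A = np.column_stack([np.ones(n)] + [X_full[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = y - A @ beta
        return float(r @ r)
    s2 = rss(range(k)) / (n - k - 1)   # error variance from the full model
    p = len(subset) + 1                # parameters in the submodel, incl. intercept
    return rss(subset) / s2 - n + 2 * p
```

A well-specified submodel has C_p close to p, so model selection amounts to scanning subsets for the smallest C_p near that line; the full model satisfies C_p = k + 1 identically.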
Selecting candidate predictor variables for the modelling of post ...
African Journals Online (AJOL)
Selecting candidate predictor variables for the modelling of post-discharge mortality from sepsis: a protocol development project. Afri. Health Sci. … Initial list of candidate predictor variables (N = 17):
Clinical: vital signs (HR, RR, BP, T); oxygen saturation
Laboratory: hemoglobin; blood culture
Social/Demographic: age; sex
Computationally efficient thermal-mechanical modelling of selective laser melting
Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam
2017-01-01
Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method used to produce high-density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is
Multivariate time series modeling of selected childhood diseases in ...
African Journals Online (AJOL)
This paper is focused on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), Pneumonia, anaemia and tetanus were extracted from five randomly selected hospitals in ...
Uehara, Yuki; Yagoshi, Michiko; Tanimichi, Yumiko; Yamada, Hiroko; Shimoguchi, Kazuo; Yamamoto, Sachiyo; Yanai, Mitsuru; Kumasaka, Kazunari
2009-07-01
We assessed the usefulness of reporting direct blood Gram stain results compared with the results of positive blood cultures in 482 episodes and monitored the impact on selection of antimicrobial treatment. We found that the reporting groups "Staphylococcus spp," "Pseudomonas spp and related organisms," and "yeasts" identified in this way matched perfectly with later culture identification. When the report indicated Staphylococcus spp or Pseudomonas spp and related organisms, physicians started or changed antimicrobials suitable for these bacteria more frequently than when "other streptococci" and "family Enterobacteriaceae" were reported. Gram stain results that definitively identify Staphylococcus spp, Pseudomonas spp and related organisms, and yeasts can be rapidly and reliably provided by clinical laboratories; this information has a significant impact on early selection of effective antimicrobials. Further investigation is needed to assess the clinical impact of reporting Gram stain results in bacteremia.
International Nuclear Information System (INIS)
Alves de Freitas, Antonio; Abrao, Alcidio; Godoy dos Santos, Adir Janete; Pecequilo, Brigitte Roxana Soreanu
2008-01-01
An analytical procedure was established in order to obtain selective fractions containing radium isotopes (228Ra), thorium (232Th), and rare earths from RETOTER (REsiduo de TOrio e TErras Raras), a solid residue rich in rare earth elements, thorium isotopes and a small amount of natural uranium, generated from the operation of a thorium pilot plant for the purification and production of pure thorium nitrate at IPEN-CNEN/SP. The paper presents preliminary results of 228Ra, 226Ra, 238U, 210Pb, and 40K concentrations in the selective fractions and total residue, determined by high-resolution gamma spectroscopy, considering radioactive equilibrium of the samples.
Modeling and Solving the Liner Shipping Service Selection Problem
DEFF Research Database (Denmark)
Karsten, Christian Vad; Balakrishnan, Anant
We address a tactical planning problem, the Liner Shipping Service Selection Problem (LSSSP), facing container shipping companies. Given estimated demand between various ports, the LSSSP entails selecting the best subset of non-simple cyclic sailing routes from a given pool of candidate routes … requirements and the hop limits to reduce problem size, and describe techniques to accelerate the solution procedure. We present computational results for realistic problem instances from the benchmark suite LINER-LIB.
On selection of optimal stochastic model for accelerated life testing
International Nuclear Information System (INIS)
Volf, P.; Timková, J.
2014-01-01
This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is still developing. We shall present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test
Model building strategy for logistic regression: purposeful selection.
Zhang, Zhongheng
2016-03-01
Logistic regression is one of the most commonly used models to account for confounders in the medical literature. The article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interaction should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
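The likelihood-ratio test at the heart of this strategy (demonstrated in R in the article) can be sketched in Python; the Newton-Raphson fitter, synthetic data, and names below are illustrative, not the article's code:

```python
import numpy as np
from scipy.stats import chi2

def logit_fit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson."""
    A = np.column_stack([np.ones(len(y)), X]) if X.shape[1] else np.ones((len(y), 1))
    beta = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ beta))
        H = A.T @ (A * (p * (1 - p))[:, None])           # Fisher information
        beta = beta + np.linalg.solve(H, A.T @ (y - p))  # Newton step
    p = np.clip(1.0 / (1.0 + np.exp(-A @ beta)), 1e-12, 1 - 1e-12)
    return beta, float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def lr_test(X, y, drop):
    """Likelihood-ratio test for deleting column `drop`: 2*(llf - llr) ~ chi2(1)."""
    keep = [j for j in range(X.shape[1]) if j != drop]
    _, ll_full = logit_fit(X, y)
    _, ll_reduced = logit_fit(X[:, keep], y)
    stat = 2.0 * (ll_full - ll_reduced)
    return stat, float(chi2.sf(stat, df=1))
```

A large p-value supports deleting the variable; per the purposeful-selection strategy, the deleted variable should then also be checked for whether its removal materially shifts the remaining coefficients.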
Statistical modelling in biostatistics and bioinformatics selected papers
Peng, Defen
2014-01-01
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...
Bayesian Variable Selection on Model Spaces Constrained by Heredity Conditions.
Taylor-Rodriguez, Daniel; Womack, Andrew; Bliznyuk, Nikolay
2016-01-01
This paper investigates Bayesian variable selection when there is a hierarchical dependence structure on the inclusion of predictors in the model. In particular, we study the type of dependence found in polynomial response surfaces of orders two and higher, whose model spaces are required to satisfy weak or strong heredity conditions. These conditions restrict the inclusion of higher-order terms depending upon the inclusion of lower-order parent terms. We develop classes of priors on the model space, investigate their theoretical and finite sample properties, and provide a Metropolis-Hastings algorithm for searching the space of models. The tools proposed allow fast and thorough exploration of model spaces that account for hierarchical polynomial structure in the predictors and provide control of the inclusion of false positives in high posterior probability models.
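For a second-order response surface in two predictors, the heredity-constrained model spaces are small enough to enumerate directly; the term names and encoding below are illustrative:

```python
from itertools import combinations

# Second-order response surface in two predictors; each higher-order term
# lists the lower-order "parent" terms it depends on.
TERMS = ["x1", "x2", "x1^2", "x2^2", "x1*x2"]
PARENTS = {"x1^2": {"x1"}, "x2^2": {"x2"}, "x1*x2": {"x1", "x2"}}

def satisfies_strong_heredity(model):
    """Every parent of each included term must also be included."""
    return all(PARENTS.get(t, set()) <= model for t in model)

def satisfies_weak_heredity(model):
    """At least one parent of each included higher-order term must be included."""
    return all(not PARENTS.get(t) or (PARENTS[t] & model) for t in model)

def model_space(check):
    """Enumerate all subsets of TERMS passing the heredity check."""
    return [frozenset(c)
            for r in range(len(TERMS) + 1)
            for c in combinations(TERMS, r)
            if check(set(c))]
```

Strong heredity admits 13 of the 32 subsets here and weak heredity 17, illustrating how sharply these conditions prune the space the Metropolis-Hastings sampler must search.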
Bakker, Eric
2010-02-15
A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper includes and extends previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant in modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface may be assumed in these cases.
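The Nicolsky-Eisenman equation that serves as the comparison point can be computed directly; the function signature and defaults here are invented for the sketch:

```python
import math

def nicolsky_eisenman(a_primary, z_primary, interferents, E0=0.0, T=298.15):
    """EMF of an ion-selective electrode by the Nicolsky-Eisenman equation:
        E = E0 + (R*T / (z_i*F)) * ln( a_i + sum_j K_ij * a_j ** (z_i / z_j) )
    `interferents` is a list of (activity, charge, selectivity coefficient K_ij)
    tuples; activities are used in place of concentrations throughout.
    """
    R, F = 8.314462618, 96485.332   # gas constant, Faraday constant (SI units)
    s = a_primary + sum(K * a ** (z_primary / z) for a, z, K in interferents)
    return E0 + (R * T) / (z_primary * F) * math.log(s)
```

With no interferents this reproduces the Nernstian slope of about 59.16 mV per decade of activity for a monovalent ion at 25 degrees C; adding an interfering ion raises the measured EMF above the primary-ion response, which is exactly the error the paper's phase-boundary treatment describes more accurately.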
A model for the sustainable selection of building envelope assemblies
International Nuclear Information System (INIS)
Huedo, Patricia; Mulet, Elena; López-Mesa, Belinda
2016-01-01
The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as an implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.
X-Ray Observations of Optically Selected, Radio-quiet Quasars. I. The ASCA Results
George, I. M.; Turner, T. J.; Yaqoob, T.; Netzer, H.; Laor, A.; Mushotzky, R. F.; Nandra, K.; Takahashi, T.
2000-03-01
We present the result of 27 ASCA observations of 26 radio-quiet quasars (RQQs) from the Palomar-Green (PG) survey. The sample is not statistically complete, but it is reasonably representative of RQQs in the PG survey. For many of the sources, the ASCA data are presented here for the first time. All the RQQs were detected except for two objects, both of which contain broad absorption lines in the optical band. We find the variability characteristics of the sources to be consistent with Seyfert 1 galaxies. A power law offers an acceptable description of the time-averaged spectra in the 2-10 keV (quasar frame) band for all but one data set. The best-fitting values of the photon index vary from object to object over a range extending from ~1.5, with a mean ~=2 and dispersion σ(Γ2-10)~=0.25. The distribution of Γ2-10 is therefore similar to that observed in other RQ active galactic nuclei (AGNs) and seems to be unrelated to X-ray luminosity. No single model adequately describes the full 0.6-10 keV (observed frame) continuum of all the RQQs. Approximately 50% of the sources can be adequately described by a single power law or by a power law with only very subtle deviations. All but one of the remaining data sets were found to have convex spectra (flattening as one moves to higher energies). The exception is PG 1411+442, in which a substantial column density (NH,z ~ 2x10^23 cm^-2) obscures ~98% of the continuum. We find only five (maybe six) of 14 objects with z<~0.25 to have "soft excesses" at energies <~1 keV, but we find no universal shape for these spectral components. The spectrum of PG 1244+026 contains a rather narrow emission feature centered at an energy ~1 keV (quasar frame). The detection rate of absorption due to ionized material in these RQQs is lower than that seen in Seyfert 1 galaxies. In part, this may be due to selection effects. However, when detected, the absorbers in the RQQs exhibit a similar range of column density and ionization parameter as Seyfert 1 galaxies. We find
A concurrent optimization model for supplier selection with fuzzy quality loss
Energy Technology Data Exchange (ETDEWEB)
Rosyidi, C.; Murtisari, R.; Jauhari, W.
2017-07-01
The purpose of this research is to develop a concurrent supplier selection model to minimize the purchasing cost and fuzzy quality loss, considering process capability and assembled-product specification. Design/methodology/approach: This research integrates fuzzy quality loss into the model to concurrently solve the decision making in the detailed design stage and the manufacturing stage. Findings: The resulting model can be used to concurrently select the optimal supplier and determine the tolerances of the components; it balances the purchasing cost against the fuzzy quality loss. Originality/value: An assembled product consists of many components which must be purchased from suppliers. Fuzzy quality loss is integrated into the supplier selection model to allow for vagueness in the final assembly by grouping assemblies into several grades according to the resulting assembly tolerance.
Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman
2017-11-01
A robust supplier selection problem in a scenario-based approach is proposed for the case where demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then its robust counterpart is presented using recent extensions of robust optimization theory. We model the decision variables, respectively, with a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performance of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties we should consider them in advance in our planning. In our case study, different suppliers were selected because of these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.
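The worst-case logic behind such a robust model can be illustrated with a minimal sketch (the supplier names, scenario costs and selection rule below are hypothetical illustrations, not taken from the paper): choose the supplier whose maximum cost across demand/exchange-rate scenarios is smallest.

```python
# Hypothetical illustration of worst-case (min-max) supplier selection:
# pick the supplier whose worst cost over all scenarios is lowest.

def robust_choice(costs):
    """costs: dict mapping supplier -> list of costs, one per scenario."""
    return min(costs, key=lambda s: max(costs[s]))

scenario_costs = {
    "A": [100, 140, 180],   # cheap on average, but bad in the worst scenario
    "B": [120, 130, 150],   # slightly dearer on average, robust worst case
}

best = robust_choice(scenario_costs)   # selects "B"
```

A deterministic (expected-cost) model would pick supplier "A" here; the robust criterion trades a little average cost for protection against the worst scenario, which mirrors the paper's finding that different suppliers are selected once uncertainty is modelled.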
Decision support model for selecting and evaluating suppliers in the construction industry
Directory of Open Access Journals (Sweden)
Fernando Schramm
2012-12-01
A structured evaluation of the construction industry's suppliers, considering aspects which make their quality and credibility evident, can be a strategic tool for managing this specific supply chain. This study proposes a multi-criteria decision model for supplier selection in the construction industry, as well as an efficient evaluation procedure for the selected suppliers. The model is based on the SMARTER (Simple Multi-Attribute Rating Technique Exploiting Ranks) method, and its main contribution is a new approach to structuring the supplier selection process, establishing the explicit strategic policies on which the company's management system relies to select suppliers. The model was applied to a civil construction company in Brazil, and the main results demonstrate the efficiency of the proposed model. The study also developed an approach for the construction industry able to foster a better relationship among its managers, suppliers and partners.
Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H
2014-12-30
For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
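The coupling principle described above rests on Fisher's method for combining independent p-values; a minimal sketch (the example p-values are hypothetical, and the closed-form chi-square tail applies because the degrees of freedom, 2k, are even):

```python
import math

def fisher_combine(pvalues):
    """Combine k independent p-values with Fisher's method.
    Statistic: -2 * sum(ln p_i), distributed chi-square with 2k df under H0.
    Returns (statistic, combined p-value)."""
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    # Chi-square survival function with even df 2k has the closed form
    # exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = stat / 2.0
    p_comb = math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))
    return stat, p_comb

# Hypothetical score-test p-values for one covariate across three
# cause-specific Cox models
stat, p = fisher_combine([0.04, 0.20, 0.60])
```

A covariate is then ranked (or entered in a stepwise scheme) by this combined p-value instead of by any single model's test, which is the linking idea of the paper.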
Selection of Models for Ingestion Pathway and Relocation Radii Determination
International Nuclear Information System (INIS)
Blanchard, A.
1998-01-01
The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities.
Modelling Technical and Economic Parameters in Selection of Manufacturing Devices
Directory of Open Access Journals (Sweden)
Naqib Daneshjo
2017-11-01
Sustainable development of science and technology is also conditioned by the continuous development of the means of production, which play a key role in the structure of each production system. In the context of intelligent industry, the mechanical nature of the means of production is complemented by control and electronic devices. Until now, the selection of production machines for a technological process or project has in practice often been resolved only intuitively. With increasing intelligence, the number of variable parameters that must be considered when choosing a production device also grows, so the selection requires computing techniques and decision-making methods, ranging from heuristics to more precise methodological procedures. The authors present an innovative model for optimizing the technical and economic parameters in the selection of manufacturing devices for Industry 4.0.
Directory of Open Access Journals (Sweden)
Salabura Piotr
2017-01-01
The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes in order to diagnose the properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.
The European Integrated Tokamak Modelling (ITM) effort: achievements and first physics results
Falchetto, G.L.; Coster, D.; Coelho, R.; Scott, B.D.; Figini, L.; Kalupin, D.; Nardon, E.; Nowak, S.; Alves, L.L.; Artaud, J.F.; Basiuk, V.; Bizarro, J.P.S.; Boulbe, C.; Dinklage, A.; Farina, D.; Faugeras, B.; Ferreira, J.; Figueiredo, A.; Huynh, P.; Imbeaux, F.; Ivanova-Stanik, I.; Jonsson, T.; Klingshirn, H.-J.; Konz, C.; Kus, A.; Marushchenko, N.B.; Pereverzev, G.; Owsiak, M.; Poli, E.; Peysson, Y.; Reimer, R.; Signoret, J.; Sauter, O.; Stankiewicz, R.; Strand, P.; Voitsekhovitch, I.; Westerhof, E.; Zok, T.; Zwingmann, W.; ITM-TF contributors; ASDEX Upgrade team; JET-EFDA Contributors
2014-01-01
A selection of achievements and first physics results of the European Integrated Tokamak Modelling Task Force (EFDA ITM-TF) simulation framework is presented. The framework aims to provide a standardized platform and an integrated modelling suite of validated numerical codes for the simulation and
Selected Results on the CKM Angle $\gamma$ Measurement at the LHCb
Krupa, Wojciech
2017-01-01
LHCb is a single-arm forward spectrometer designed to study heavy-flavour physics at the LHC. Its very precise tracking and excellent particle identification currently play a major role in providing the world's best measurements of the Unitarity Triangle parameters. In this paper, selected results of the Cabibbo–Kobayashi–Maskawa (CKM) angle $\gamma$ measurements obtained at the LHCb are presented, with special attention to the $B \rightarrow DK$ family of decays.
Moehler, O.; Cziczo, D. J.; DeMott, P. J.; Hiranuma, N.; Petters, M. D.
2015-12-01
The role of aerosol particles in ice formation in clouds is one of the largest uncertainties in understanding the Earth's weather and climate systems, related to the poor knowledge of ice nucleation microphysics and of the nature and atmospheric abundance of ice nucleating particles (INPs). In recent years, new mobile instruments were developed for measuring the concentration, size and chemical composition of INPs, and these were tested during the three-part Fifth International Ice Nucleation (FIN) workshop. The FIN activities addressed not only instrument issues, but also important science topics such as the nature of atmospheric INPs and cloud ice residuals, the ice nucleation activity of relevant atmospheric aerosols, and the parameterization of ice formation in atmospheric weather and climate models. The first activity, FIN-1, was conducted during November 2014 at the AIDA cloud chamber. It involved co-locating nine single-particle mass spectrometers to evaluate how well they resolve the INP and ice residual composition and how spectra from different instruments compare for relevant atmospheric aerosols. We conducted about 90 experiments with mineral, carbonaceous and biological aerosol types, some also coated with organic and inorganic compounds. The second activity, FIN-2, was conducted during March 2015 at the AIDA facility. A total of nine mobile INP instruments directly sampled from the AIDA aerosol chambers. Wet suspension and filter samples were also taken for offline INP processing. A refereed blind intercomparison was conducted during two days of the FIN-2 activity. The third activity, FIN-3, will take place at the Desert Research Institute's Storm Peak Laboratory (SPL) in order to test the instruments' performance in the field. This contribution will introduce the FIN activities, summarize first results from the formal part of FIN-2, and discuss selected results, mainly from FIN-1, on the effect of coating on ice nucleation (IN) by mineral
Marker-assisted selection reduces expected inbreeding but can result in large effects of hitchhiking
DEFF Research Database (Denmark)
Pedersen, L D; Sørensen, A C; Berg, P
2010-01-01
We used computer simulations to investigate to what extent true inbreeding, i.e. identity-by-descent, is affected by the use of marker-assisted selection (MAS) relative to traditional best linear unbiased prediction (BLUP) selection. The effect was studied by varying the heritability (h2 = 0.04 vs. 0.25), the marker distance (MAS vs. selection on the gene, GAS), the favourable QTL allele effect (α = 0.118 vs. 0.236) and the initial frequency of the favourable QTL allele (p = 0.01 vs. 0.1) in a population resembling the breeding nucleus of a dairy cattle population. The simulated genome consisted of two chromosomes of 100 cM each in addition to a polygenic component. On chromosome 1, a biallelic QTL as well as 4 markers were simulated in linkage disequilibrium. Chromosome 2 was selectively neutral. The results showed that, while reducing pedigree-estimated inbreeding, MAS and GAS did...
Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection
DEFF Research Database (Denmark)
Bork, Lasse; Møller, Stig Vinther
2015-01-01
We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially...
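The core recursion behind Dynamic Model Averaging can be sketched generically (a minimal illustration, not the authors' implementation): model probabilities are flattened with a forgetting factor and then updated by each model's one-step predictive likelihood.

```python
def dma_update(weights, pred_liks, alpha=0.95):
    """One step of Dynamic Model Averaging weight updating.
    weights:   current model probabilities (sum to 1)
    pred_liks: one-step-ahead predictive likelihoods of each model
    alpha:     forgetting factor (alpha = 1 recovers standard Bayesian
               model averaging; alpha < 1 lets model weights drift)."""
    # Forgetting step: raise to power alpha and renormalize
    prior = [w ** alpha for w in weights]
    z = sum(prior)
    prior = [p / z for p in prior]
    # Update step: multiply by each model's predictive likelihood, renormalize
    post = [p * l for p, l in zip(prior, pred_liks)]
    z = sum(post)
    return [p / z for p in post]

# The model that predicted the latest observation better gains weight
w = dma_update([0.5, 0.5], [0.2, 0.8])
```

Dynamic Model Selection then simply forecasts with the model holding the largest current weight, while Dynamic Model Averaging combines all models' forecasts using these weights.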
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General expression of Nonlinear AutoRegressive (GNAR) model, which converts the model order problem into variable selection in a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. This result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess how both the newly introduced and the already included variables improve the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is then obtained through goodness-of-fit measurement or significance testing. Simulations and experiments on classic time-series data show that the proposed method is simple, reliable and applicable in practical engineering.
Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.
Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L
2012-03-01
Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.
Some results regarding the comparison of the Earth's atmospheric models
Directory of Open Access Journals (Sweden)
Šegan S.
2005-01-01
In this paper we examine air densities derived from our realization of aeronomic atmosphere models based on accelerometer measurements from satellites in low Earth orbit (LEO). Using the adapted algorithms we derive comparison parameters. The first results concerning the adjustment of the aeronomic models to the total-density model are given.
Parameter estimation and model selection in computational biology.
Directory of Open Access Journals (Sweden)
Gabriele Lillacci
2010-03-01
A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds in three steps. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternative models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
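The parameter-estimation idea above (a "joint" extended Kalman filter with parameters folded into the state) can be sketched on a toy system; this is an illustrative example with made-up dynamics and noise levels, not the paper's heat-shock model. An unknown decay rate k of dx/dt = -k·x is appended to the state, so the filter estimates it alongside x.

```python
import random

# Illustrative joint-EKF sketch: estimate the unknown decay rate k of
# dx/dt = -k*x from noisy observations of x, using augmented state s = [x, k].

def mat_mult(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def ekf_estimate_decay(zs, dt, x0, k0):
    s = [x0, k0]                        # augmented state estimate [x, k]
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    Q = [[1e-6, 0.0], [0.0, 1e-6]]      # small process noise
    R = 0.0025                          # measurement noise variance
    for z in zs:
        # Predict: Euler step of the dynamics; F is its Jacobian in (x, k)
        x, k = s
        s = [x - k * x * dt, k]
        F = [[1.0 - k * dt, -x * dt], [0.0, 1.0]]
        P = mat_add(mat_mult(mat_mult(F, P), transpose(F)), Q)
        # Update with a measurement z of x alone (H = [1, 0])
        S = P[0][0] + R
        K = [P[0][0] / S, P[1][0] / S]  # Kalman gain
        innov = z - s[0]
        s = [s[0] + K[0] * innov, s[1] + K[1] * innov]
        P = [[(1.0 - K[0]) * P[0][0], (1.0 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    return s

# Synthetic noisy time course with true k = 0.5
rng = random.Random(0)
true_k, dt, x = 0.5, 0.05, 5.0
zs = []
for _ in range(400):
    x -= true_k * x * dt
    zs.append(x + rng.gauss(0.0, 0.05))

x_est, k_est = ekf_estimate_decay(zs, dt, x0=5.0, k0=0.1)
```

The cross-covariance between x and k, built up through the Jacobian term -x·dt, is what lets measurements of x alone correct the parameter estimate; the paper's identifiability test then checks whether that correction is actually informative.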
Models of speciation by sexual selection on polygenic traits
Lande, Russell
1981-01-01
The joint evolution of female mating preferences and secondary sexual characters of males is modeled for polygamous species in which males provide only genetic material to the next generation and females have many potential mates to choose among. Despite stabilizing natural selection on males, various types of mating preferences may create a runaway process in which the outcome of phenotypic evolution depends critically on the genetic variation parameters and initial conditions of a population...
A Model of Social Selection and Successful Altruism
1989-10-07
D., The evolution of social behavior. Annual Review of Ecology and Systematics, 5:325-383 (1974). 2. Dawkins, R., The Selfish Gene. Oxford: Oxford University Press. ... alive and well, it will be important to re-examine this striking historical experience, not in terms of oversimplified models of the "selfish gene," but ... Darwinian Analysis: The acceptance by many modern geneticists of the axiom that the basic unit of selection is the "selfish gene" quickly led to the
A Bayesian Technique for Selecting a Linear Forecasting Model
Ramona L. Trader
1983-01-01
The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecast...
ExEP yield modeling tool and validation test results
Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul
2017-09-01
EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
Selected topics in photochemistry of nucleic acids. Recent results and perspectives
International Nuclear Information System (INIS)
Loeber, G.; Kittler, L.
1977-01-01
Recent results on the following photoreactions of nucleic acids are reported: photochemistry of aza-bases and minor bases, formation of photoproducts of the non-cyclobutane type, formation of furocoumarin-pyrimidine photoadducts, fluorescence of dye-nucleic acid complexes and their role in chromosomal fluorescence staining, and mechanisms of the photochemical reactions. Results are discussed with respect to: (i) the photobiological relevance of light-induced defects in nucleic acids; (ii) possibilities of achieving higher selectivity of light-induced defects in nucleic acids; (iii) the use of nucleic acid photochemistry to analyze genetic material. An extensive bibliography is included. (author)
Selection of key terrain attributes for SOC model
DEFF Research Database (Denmark)
Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka
As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for carrying out global warming research and for sustainable use of land resources. Digital terrain attributes are often used ... was selected; in total, 2,514,820 data mining models were constructed from 71 different grids, ranging from 12 m to 2304 m, and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute...
Selecting, weeding, and weighting biased climate model ensembles
Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.
2012-12-01
In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues in formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble of CAM3.1/slab-ocean climate models with different parameter settings to think through the issues involved in predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover what fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of your log-likelihood.
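The weighting step described here can be sketched generically (an illustrative iid Gaussian log-likelihood with hypothetical residuals, not the authors' multivariate EOF-based statistic): each ensemble member's mismatch against observations is converted into a normalized weight.

```python
import math

def loglik_weights(errors, sigma=1.0):
    """Turn each ensemble member's observation residuals into a weight
    proportional to exp(log-likelihood) under iid Gaussian errors."""
    logliks = [-0.5 * sum((e / sigma) ** 2 for e in errs) for errs in errors]
    m = max(logliks)                      # subtract max for numerical stability
    ws = [math.exp(l - m) for l in logliks]
    z = sum(ws)
    return [w / z for w in ws]

# Residuals of three hypothetical ensemble members against observations
w = loglik_weights([[0.1, 0.2], [1.0, 1.5], [0.3, 0.1]])
```

The paper's caution applies directly to a scheme like this: if the residuals entering the log-likelihood are dominated by biases that do not affect the quantity of interest, the resulting weights will not move the ensemble mean toward the right answer.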
Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.
2014-12-01
Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human dominated ecosystems the presence of a given species is the result of both its ecological suitability and human footprint on nature such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolution (from 2km to 8km). We also compared the outputs of the SSDM with outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between SSDM and classical SDMs, with contrasted patterns according to species and spatial resolutions. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong miss-estimation of the actual and future probability of presence modelled. Beyond this selection bias, the SSDM we propose represents
Optimal foraging in marine ecosystem models: selectivity, profitability and switching
DEFF Research Database (Denmark)
Visser, Andre W.; Fiksen, Ø.
2013-01-01
... ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting ... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1 ... by letting predators maximize energy intake or, more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness-maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions.
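The profitability-based inclusion rule discussed here follows the classical optimal diet model; a minimal sketch (the prey values below are hypothetical, not the paper's parameterization): prey are ranked by profitability e/h and included while their profitability exceeds the intake rate achievable from the diet assembled so far.

```python
def optimal_diet(prey):
    """prey: list of (encounter_rate, energy, handling_time) tuples.
    Returns the sorted indices of the rate-maximizing diet, using the
    Holling-disc intake rate R = sum(l*e) / (1 + sum(l*h))."""
    # Rank prey by profitability e/h, most profitable first
    order = sorted(range(len(prey)),
                   key=lambda i: prey[i][1] / prey[i][2], reverse=True)
    diet, num, den = [], 0.0, 1.0
    for i in order:
        lam, e, h = prey[i]
        # Include prey i only if its profitability beats the current rate
        if e / h > num / den:
            diet.append(i)
            num += lam * e
            den += lam * h
    return sorted(diet)

# When profitable prey is abundant, the poor prey type is dropped entirely
diet = optimal_diet([(0.5, 10.0, 1.0), (0.5, 1.0, 2.0)])
```

A step-function diet like this is exactly what can differ from the smooth ‘switching functions’ used in many ecosystem models: inclusion of a prey class depends on the abundance of more profitable classes, not on its own abundance.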
Covariate selection for the semiparametric additive risk model
DEFF Research Database (Denmark)
Martinussen, Torben; Scheike, Thomas
2009-01-01
This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages over the similar methodology for the proportional hazards model. One complication compared ... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...
An Introduction to Model Selection: Tools and Algorithms
Directory of Open Access Journals (Sweden)
Sébastien Hélie
2006-03-01
Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper's falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike's information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
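Two of the criteria surveyed here have simple closed forms; a minimal sketch with generic formulas and hypothetical log-likelihood values (not the paper's examples):

```python
import math

def aic(loglik, k):
    """Akaike's information criterion: 2k - 2*log-likelihood (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k*ln(n) - 2*log-likelihood.
    Penalizes parameters more heavily than AIC once n > e^2 ≈ 7.4."""
    return k * math.log(n) - 2 * loglik

# A richer model (k = 5) must out-fit a simpler one (k = 3) by enough
# log-likelihood to pay for its extra parameters; here it does not.
simple = aic(-120.0, 3)
rich = aic(-119.0, 5)
```

Both criteria trade goodness of fit against parametric complexity, which is the common thread running through all five comparison methods the paper reviews.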
Verification of aseismic design model by using experimental results
International Nuclear Information System (INIS)
Mizuno, N.; Sugiyama, N.; Suzuki, T.; Shibata, Y.; Miura, K.; Miyagawa, N.
1985-01-01
A lattice model is applied as the analysis model for the aseismic design of the Hamaoka nuclear reactor building. To verify the availability of this design model, two reinforced concrete blocks were constructed on the ground and forced vibration tests were carried out. The test results are well reproduced by simulation analyses using the lattice model. The damping value of the ground obtained from the tests is more conservative than the design value. (orig.)
Directory of Open Access Journals (Sweden)
Mark N Read
2016-09-01
The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Both datasets comprise heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities, a feature that significantly improves capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto
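A correlated random walk of the kind evaluated here can be simulated in a few lines (an illustrative sketch with hypothetical parameters, not the paper's calibrated model); the meandering index is the net displacement divided by the total path length.

```python
import math
import random

def crw_meandering(steps, turn_sd, speed=1.0, seed=1):
    """Simulate a 2D correlated random walk in which each step turns by a
    Gaussian angle (sd in radians) relative to the previous heading.
    Returns the meandering index = net displacement / path length."""
    rng = random.Random(seed)
    x = y = 0.0
    heading = 0.0
    for _ in range(steps):
        heading += rng.gauss(0.0, turn_sd)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return math.hypot(x, y) / (steps * speed)

straight = crw_meandering(1000, turn_sd=0.1)   # strong directional persistence
diffuse = crw_meandering(1000, turn_sd=2.0)    # nearly uncorrelated headings
```

Evaluating a model against translational speed, turn speed and meandering index simultaneously, as the paper does, amounts to comparing several such summary statistics between simulated and imaged tracks at once.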
Selective Media for Actinide Collection and Pre-Concentration: Results of FY 2006 Studies
Energy Technology Data Exchange (ETDEWEB)
Lumetta, Gregg J.; Addleman, Raymond S.; Hay, Benjamin P.; Hubler, Timothy L.; Levitskaia, Tatiana G.; Sinkov, Sergey I.; Snow, Lanee A.; Warner, Marvin G.; Latesky, Stanley L.
2006-11-17
3] > 0.3 M. Preliminary results suggest that the Kläui resins can separate Pu(IV) from sample solutions containing high concentrations of competing ions. Conceptual protocols for recovery of the Pu from the resin for subsequent analysis have been proposed, but further work is needed to perfect these techniques. Work on this subject will be continued in FY 2007. Automated laboratory equipment (in conjunction with Task 3 of the NA-22 Automation Project) will be used in FY 2007 to improve the efficiency of these experiments. The sorption of actinide ions on self-assembled monolayers on mesoporous supports materials containing diphosphonate groups was also investigated. These materials showed a very high affinity for tetravalent actinides, and they also sorbed U(VI) fairly strongly. Computational Ligand Design: An extended MM3 molecular mechanics model was developed for calculating the structures of Kläui ligand complexes. This laid the groundwork necessary to perform the computer-aided design of bis-Kläui architectures tailored for Pu(IV) complexation. Calculated structures of the Kläui ligand complexes [Pu(Kläui)2(OH2)2]2+ and [Fe(Kläui)2]+ indicate a "bent" sandwich arrangement of the Kläui ligands in the Pu(IV) complex, whereas the Fe(III) complex prefers a "linear" octahedral arrangement of the two Kläui ligands. This offers the possibility that two Kläui ligands can be tethered together to form a material with very high binding affinity for Pu(IV) over Fe(III). The next step in the design process is to use de novo molecule-building software (HostDesigner) to identify potential candidate architectures.
The Living Dead: Transformative Experiences in Modelling Natural Selection
Petersen, Morten Rask
2017-01-01
This study considers how students change their coherent conceptual understanding of natural selection through a hands-on simulation. The results show that most students change their understanding. In addition, some students also underwent a transformative experience and used their new knowledge in a leisure time activity. These transformative…
Directory of Open Access Journals (Sweden)
Vaida Marius
2009-12-01
In this study I started from the premise that elaborating orientation and initial selection models for speed skating, and applying them, would yield the superior results demanded by the current evolution of high-performance sport in general and of speed skating in particular. The target of the study was to identify a complete orientation and initial selection model based on the aptitudes favourable to speed skating. On the basis of the research, orientation and initial selection models were elaborated and tested experimentally. The study started from data on 120 subjects, the full experiment being carried out with 32 subjects divided into two groups, one using the proposed model and the other composed of randomly selected subjects. These models can serve as common working instruments both for the orientation process and for initial selection, and can be integrated into everyday practice, being easily used both by the coaches in charge of athlete selection and by the physical education or school teachers who are in contact with children at an early age.
Loywyck, V.; Bijma, P.; Pinard-van der Laan, M.H.; Arendonk, van J.A.M.; Verrier, E.
2005-01-01
Selection programmes are mainly concerned with increasing genetic gain. However, short-term progress should not be obtained at the expense of the within-population genetic variability. Different prediction models for the evolution within a small population of the genetic mean of a selected trait,
Zhou, Ligang; Keung Lai, Kin; Yen, Jerome
2014-03-01
Due to the economic significance of bankruptcy prediction of companies for financial institutions, investors and governments, many quantitative methods have been used to develop effective prediction models. Support vector machine (SVM), a powerful classification method, has been used for this task; however, the performance of SVM is sensitive to model form, parameter setting and feature selection. In this study, a new approach based on direct search and feature ranking technology is proposed to optimise feature selection and parameter setting for 1-norm and least-squares SVM models for bankruptcy prediction. This approach is also compared to SVM models with parameter optimisation and feature selection by the popular genetic algorithm technique. The experimental results on a data set with 2010 instances show that the proposed models are good alternatives for bankruptcy prediction.
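The coupling of feature ranking with model search can be made concrete with a much simpler stand-in classifier. Below is a minimal greedy forward-selection sketch on synthetic data, with a nearest-centroid classifier substituted for the paper's SVM; everything here is illustrative, not the authors' method:

```python
import random

def loo_accuracy(X, y, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier restricted
    to the feature subset `feats` (a simple stand-in for an SVM)."""
    correct = 0
    for i in range(len(X)):
        cents = {}
        for cls in set(y):
            rows = [X[j] for j in range(len(X)) if j != i and y[j] == cls]
            cents[cls] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        pred = min(cents, key=lambda c: sum((X[i][f] - m) ** 2
                                            for f, m in zip(feats, cents[c])))
        correct += pred == y[i]
    return correct / len(X)

def forward_select(X, y, n_features):
    """Greedy forward selection: add the feature that most improves
    accuracy, and stop when no remaining feature helps."""
    chosen, remaining, best = [], list(range(n_features)), 0.0
    while remaining:
        f, acc = max(((f, loo_accuracy(X, y, chosen + [f])) for f in remaining),
                     key=lambda t: t[1])
        if acc <= best:
            break
        chosen.append(f)
        remaining.remove(f)
        best = acc
    return chosen, best

# Synthetic data: feature 0 separates the classes, feature 1 is pure noise
rng = random.Random(1)
X = [[cls + rng.gauss(0, 0.3), rng.gauss(0, 1.0)]
     for cls in (0, 1) for _ in range(20)]
y = [cls for cls in (0, 1) for _ in range(20)]
chosen, acc = forward_select(X, y, 2)
```

The informative feature is picked first and the noise feature is only kept if it happens to raise accuracy, mirroring the wrapper-style search the abstract describes.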
Steel Containment Vessel Model Test: Results and Evaluation
Energy Technology Data Exchange (ETDEWEB)
Costello, J.F.; Hashimoto, T.; Hessheimer, M.F.; Luk, V.K.
1999-03-01
A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.
Differences between selection on sex versus recombination in red queen models with diploid hosts.
Agrawal, Aneil F
2009-08-01
The Red Queen hypothesis argues that parasites generate selection for genetic mixing (sex and recombination) in their hosts. A number of recent papers have examined this hypothesis using models with haploid hosts. In these haploid models, sex and recombination are selectively equivalent. However, sex and recombination are not equivalent in diploids because selection on sex depends on the consequences of segregation as well as recombination. Here I compare how parasites select on modifiers of sexual reproduction and modifiers of recombination rate. Across a wide set of parameters, parasites tend to select against both sex and recombination, though recombination is favored more often than is sex. There is little correspondence between the conditions favoring sex and those favoring recombination, indicating that the direction of selection on sex is often determined by the effects of segregation, not recombination. Moreover, when sex was favored it is usually due to a long-term advantage whereas short-term effects are often responsible for selection favoring recombination. These results strongly indicate that Red Queen models focusing exclusively on the effects of recombination cannot be used to infer the type of selection on sex that is generated by parasites on diploid hosts.
Selecting an Appropriate Upscaled Reservoir Model Based on Connectivity Analysis
Directory of Open Access Journals (Sweden)
Preux Christophe
2016-09-01
Reservoir engineers aim to build reservoir models to investigate fluid flows within hydrocarbon reservoirs. These models consist of three-dimensional grids populated by petrophysical properties. In this paper, we focus on permeability, which is known to significantly influence fluid flow. Reservoir models usually encompass a very large number of fine grid blocks to better represent heterogeneities. However, performing fluid flow simulations for such fine models is extremely CPU-time consuming. A common practice consists in converting the fine models into coarse models with fewer grid blocks: this is the upscaling process. Many upscaling methods have been proposed in the literature that all lead to distinct coarse models. The problem is how to choose the appropriate upscaling method. Various criteria have been established to evaluate the information loss due to upscaling, but none of them investigate connectivity. In this paper, we propose to first perform a connectivity analysis for the fine and candidate coarse models. This makes it possible to identify the shortest paths connecting wells. Then, we introduce two indicators to quantify the length and trajectory mismatch between the paths for the fine and the coarse models. The upscaling technique to be recommended is the one that provides the coarse model for which the shortest paths are the closest to those determined for the fine model, both in terms of length and trajectory. Lastly, the potential of this methodology is investigated through two test cases. We show that the two indicators help select suitable upscaling techniques as long as gravity is not a prominent factor driving fluid flows.
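A connectivity analysis of this kind can be sketched as a shortest-path search where each grid block costs the inverse of its permeability, so the cheapest path follows high-permeability cells. The length-mismatch indicator below is a hypothetical illustration of the comparison the paper proposes, not its exact formulation:

```python
import heapq

def shortest_path_cost(grid, start, end):
    """Dijkstra over a 2D grid; entering a cell costs 1/permeability,
    so the cheapest well-to-well path follows permeable cells."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 / grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# Fine model: a high-permeability channel (10.0) in tight rock (0.1)
fine = [[10.0 if r == 1 else 0.1 for c in range(4)] for r in range(3)]
cost_fine = shortest_path_cost(fine, (1, 0), (1, 3))

# A coarse model that averages the channel away loses the fast path
coarse = [[3.4 for c in range(4)] for r in range(3)]
cost_coarse = shortest_path_cost(coarse, (1, 0), (1, 3))
length_mismatch = abs(cost_coarse - cost_fine) / cost_fine
```

A large `length_mismatch` flags an upscaling that has destroyed the well-to-well connectivity of the fine model, which is the failure mode the indicators are designed to catch.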
A finite volume alternate direction implicit approach to modeling selective laser melting
DEFF Research Database (Denmark)
Hattel, Jesper Henri; Mohanty, Sankhya
2013-01-01
Over the last decade, several studies have attempted to develop thermal models for analyzing the selective laser melting process with a vision to predict thermal stresses, microstructures and resulting mechanical properties of manufactured products. While a holistic model addressing all involved...... is proposed for modeling single-layer and few-layers selective laser melting processes. The ADI technique is implemented and applied for two cases involving constant material properties and non-linear material behavior. The ADI FV method consumes less time while having comparable accuracy with respect to 3D...
Bioeconomic model and selection indices in Aberdeen Angus cattle.
Campos, G S; Braccini Neto, J; Oaigen, R P; Cardoso, F F; Cobuci, J A; Kern, E L; Campos, L T; Bertoli, C D; McManus, C M
2014-08-01
A bioeconomic model was developed to calculate economic values for biological traits in full-cycle production systems and propose selection indices based on selection criteria used in the Brazilian Aberdeen Angus genetic breeding programme (PROMEBO). To assess the impact of changes in the performance of the traits on the profit of the production system, the initial values of the traits were increased by 1%. The economic values for number of calves weaned (NCW) and slaughter weight (SW) were, respectively, R$ 6.65 and R$ 1.43/cow/year. The selection index at weaning showed a 44.77% emphasis on body weight, 14.24% for conformation, 30.36% for early maturing and 10.63% for muscle development. The eighteen-month index showed emphasis of 77.61% for body weight, 4.99% for conformation, 11.09% for early maturing, 6.10% for muscle development and 0.22% for scrotal circumference. NCW showed highest economic impact, and SW had important positive effect on the economics of the production system. The selection index proposed can be used by breeders and should contribute to greater profitability. © 2014 Blackwell Verlag GmbH.
Identifiability Results for Several Classes of Linear Compartment Models.
Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa
2015-08-01
Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
Directory of Open Access Journals (Sweden)
Xiaofeng Lv
2018-01-01
Sensor data-based test selection optimization is the basis for designing a test scheme, which ensures that the system is tested under the constraints of conventional indexes such as fault detection rate (FDR) and fault isolation rate (FIR). From the perspective of equipment maintenance support, ambiguity in fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that considers the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and the test group. The objective function of the proposed model is to minimize the test cost under the constraints of FDR and FIR. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real, complicated engineering systems. Experimental results verify the effectiveness of the proposed method.
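The fault-test dependency matrix and the FDR/FIR constraints fit in a few lines. The sketch below uses a cheapest-first greedy pass as a stand-in for the paper's chaotic discrete PSO, on a toy matrix that is purely illustrative:

```python
def fdr_fir(D, tests):
    """Fault detection/isolation rates for a chosen test subset.
    D[i][j] = 1 if test j can observe fault i. A fault is detected if any
    chosen test sees it, and isolated if its signature over the chosen
    tests is unique (shared signatures form an ambiguity group)."""
    signatures = [tuple(D[i][j] for j in tests) for i in range(len(D))]
    detected = sum(1 for s in signatures if any(s))
    isolated = sum(1 for s in signatures if any(s) and signatures.count(s) == 1)
    return detected / len(D), isolated / len(D)

def greedy_select(D, costs, fdr_min, fir_min):
    """Cheapest-first greedy: add the cheapest remaining test until the
    FDR/FIR constraints hold (a simple stand-in for discrete PSO)."""
    chosen = []
    for j in sorted(range(len(costs)), key=lambda j: costs[j]):
        chosen.append(j)
        fdr, fir = fdr_fir(D, chosen)
        if fdr >= fdr_min and fir >= fir_min:
            break
    return chosen, fdr_fir(D, chosen)

# Toy system: 4 faults, 3 candidate tests with different costs
D = [[1, 0, 1],
     [0, 1, 1],
     [1, 1, 0],
     [0, 0, 1]]
costs = [2.0, 1.0, 3.0]
chosen, (fdr, fir) = greedy_select(D, costs, fdr_min=1.0, fir_min=1.0)
```

Here all three tests are needed before every fault signature becomes unique, illustrating how the isolation (ambiguity) constraint, not just detection, drives the selected test set.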
Directory of Open Access Journals (Sweden)
María Varea-Sánchez
Interspecific comparative studies have shown that, in most taxa, postcopulatory sexual selection (PCSS) in the form of sperm competition drives the evolution of longer and faster swimming sperm. Work on passerine birds has revealed that PCSS also reduces variation in sperm size between males at the intraspecific level. However, the influence of PCSS upon intra-male sperm size diversity is poorly understood, since the few studies carried out to date in birds have yielded contradictory results. In mammals, PCSS increases sperm size but there is little information on the effects of this selective force on variations in sperm size and shape. Here, we test whether sperm competition is associated with a reduction in the degree of variation of sperm dimensions in rodents. We found that as sperm competition levels increase, males produce sperm that are more similar in both the size of the head and the size of the flagellum. On the other hand, whereas with increasing levels of sperm competition there is less variation in head length in relation to head width (ratio CV head length/CV head width), there is no relation between variation in head and flagellum sizes (ratio CV head length/CV flagellum length). Thus, it appears that, in addition to a selection for longer sperm, sperm competition may select for more uniform sperm heads and flagella, which together may enhance swimming velocity. Overall, sperm competition seems to drive sperm components towards an optimum design that may affect sperm performance which, in turn, will be crucial for successful fertilization.
Selected results of simulation studies in “The Smart Peninsula” project
Directory of Open Access Journals (Sweden)
Andrzej Kąkol
2012-03-01
“The Smart Peninsula” project implementation required the development of a computational model of a medium voltage grid and of a section of a low voltage grid in the Hel Peninsula. The model was used to perform many simulation analyses in the MV grid, which in turn were used to develop MV grid operation control algorithms. The paper presents results of the analyses aimed at verifying an MLDC method-based voltage control algorithm, as well as the suitability of the EC Władysławowo cogeneration plant for standalone operation in the Hel Peninsula.
How Many Separable Sources? Model Selection In Independent Components Analysis
DEFF Research Database (Denmark)
Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen
2015-01-01
Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis....../Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from...... among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though...
Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP
Directory of Open Access Journals (Sweden)
F. Pattyn
2012-05-01
Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including for fixed grid models that generally perform poorly at coarse resolution. Fixed grid models with nested grid representations of the grounding line are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.
Assessment of public acceptability in site selection process. The methodology and the results
International Nuclear Information System (INIS)
Zeleznik, N.; Kralj, M.; Polic, M.; Kos, D.; Pek Drapal, D.
2005-01-01
The site selection process for the low- and intermediate-level radioactive waste (LILW) repository in Slovenia follows the mixed mode approach according to the model proposed by the IAEA. After finishing the conceptual and planning stage in 1999, and after identification of the potentially suitable areas in the area survey stage in 2001, ARAO (the agency for radwaste management) invited all municipalities to volunteer in the procedure of siting the LILW repository. A positive response was received from eight municipalities, though three of them later withdrew. A selection among the twelve locations in the remaining five municipalities had to be made, because the Slovenian procedure provides for only three locations to be further evaluated in the stage of identification of potentially suitable sites. A pre-feasibility study of public acceptability, together with the technical aspects (safety, technical functionality, and economic, environmental and spatial aspects), was performed. The aspect of public acceptability included objective and subjective evaluation criteria. The former included information obtained from studies of demography, data on the local economy, infrastructure and eventual environmental problems, media analysis, and earlier public opinion polls. The latter included data obtained from topical workshops, a free phone line, telephone interviews with the general public and personal interviews with representatives of decision makers and public opinion leaders, as well as a public opinion poll in all included communities. The evaluated municipalities were ranked regarding their social suitability for the radioactive waste site. (author)
METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS
Directory of Open Access Journals (Sweden)
Александр Иванович МЕНЕЙЛЮК
2016-02-01
The article highlights the important task of project management in the reprofiling of buildings. In construction project management, it is expedient to focus on selecting effective engineering solutions that reduce project duration and cost. This article presents a methodology for the selection of efficient organizational and technical solutions for the reconstruction of buildings being reprofiled. The method is based on compiling project variants in Microsoft Project and on experimental-statistical analysis using the program COMPEX. The introduction of this technique in the reprofiling of buildings allows choosing efficient project models, depending on the given constraints. This technique can also be used for various other construction projects.
A Reliability Based Model for Wind Turbine Selection
Directory of Open Access Journals (Sweden)
A.K. Rajeevan
2013-06-01
A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speeds. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. To obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
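The analytical pieces named in this abstract, a cut-in/rated/cut-out power curve combined with a Weibull wind-speed distribution, can be sketched numerically. All parameter values below are illustrative, not the paper's:

```python
import math

def power_curve(v, v_in=3.0, v_rated=12.0, v_out=25.0, p_rated=2000.0):
    """Idealised turbine power curve (kW): zero below cut-in and above
    cut-out, cubic between cut-in and rated, constant at rated above."""
    if v < v_in or v > v_out:
        return 0.0
    if v >= v_rated:
        return p_rated
    return p_rated * (v ** 3 - v_in ** 3) / (v_rated ** 3 - v_in ** 3)

def expected_power(k=2.0, c=8.0, dv=0.01):
    """Expected output under a Weibull(k, c) wind-speed distribution,
    by numerically integrating power_curve against the Weibull pdf."""
    total, v = 0.0, 0.0
    while v < 30.0:
        pdf = (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))
        total += power_curve(v) * pdf * dv
        v += dv
    return total

cf = expected_power() / 2000.0  # capacity factor at this hypothetical site
```

Re-running `expected_power` with each candidate turbine's cut-in/rated/cut-out parameters is the essence of site-matching: the turbine whose curve best overlaps the site's Weibull distribution yields the highest capacity factor.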
Yang, Ziheng; Zhu, Tianqi
2018-02-20
The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.
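The polarized behaviour the authors describe is visible even in a two-coin toy. With equal priors, the posterior for one of two equally wrong models is a logistic of the log-likelihood difference, and an imbalance no larger than sampling noise already drives it to an extreme as n grows (numbers below are illustrative):

```python
import math

def posterior_model1(heads, tails, p1=0.45, p2=0.55):
    """Posterior probability of the model 'p = p1' against 'p = p2' under
    equal priors: a logistic of the binomial log-likelihood difference."""
    d = heads * math.log(p1 / p2) + tails * math.log((1 - p1) / (1 - p2))
    return 1.0 / (1.0 + math.exp(-d))

# The true coin is fair, so both models are equally wrong. A heads excess
# of the size expected from sampling noise (~sqrt(n)) polarizes the posterior:
p_small = posterior_model1(101, 99)      # n = 200: still equivocal
p_large = posterior_model1(5050, 4950)   # n = 10,000: essentially decided
```

The sample frequency barely moves (0.505 in both cases), yet the posterior collapses onto one model with near-certainty at large n, which is the "supporting one model with full force" behaviour of the abstract.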
Model selection for integrated pest management with stochasticity.
Akman, Olcay; Comar, Timothy D; Hrozencik, Daniel
2018-04-07
In Song and Xiang (2006), an integrated pest management model with periodically varying climatic conditions was introduced. In order to address a wider range of environmental effects, the authors here have embarked upon a series of studies resulting in a more flexible modeling approach. In Akman et al. (2013), the impact of randomly changing environmental conditions is examined by incorporating stochasticity into the birth pulse of the prey species. In Akman et al. (2014), the authors introduce a class of models via a mixture of two birth-pulse terms and determined conditions for the global and local asymptotic stability of the pest eradication solution. With this work, the authors unify the stochastic and mixture model components to create further flexibility in modeling the impacts of random environmental changes on an integrated pest management system. In particular, we first determine the conditions under which solutions of our deterministic mixture model are permanent. We then analyze the stochastic model to find the optimal value of the mixing parameter that minimizes the variance in the efficacy of the pesticide. Additionally, we perform a sensitivity analysis to show that the corresponding pesticide efficacy determined by this optimization technique is indeed robust. Through numerical simulations we show that permanence can be preserved in our stochastic model. Our study of the stochastic version of the model indicates that our results on the deterministic model provide informative conclusions about the behavior of the stochastic model. Copyright © 2017 Elsevier Ltd. All rights reserved.
Results on the symmetries of integrable fermionic models on chains
International Nuclear Information System (INIS)
Dolcini, F.; Montorsi, A.
2001-01-01
We investigate integrable fermionic models within the scheme of the graded quantum inverse scattering method, and prove that any symmetry imposed on the solution of the Yang-Baxter equation reflects on the constants of motion of the model; generalizations with respect to known results are discussed. This theorem is shown to be very effective when combined with the polynomial R-matrix technique (PRT): we apply both of them to the study of the extended Hubbard models, for which we find all the subcases enjoying several kinds of (super)symmetries. In particular, we derive a geometrical construction expressing any gl(2,1)-invariant model as a linear combination of EKS and U-supersymmetric models. Further, we use the PRT to obtain 32 integrable so(4)-invariant models. By joint use of the Sutherland's species technique and η-pairs construction we propose a general method to derive their physical features, and we provide some explicit results
Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model
Directory of Open Access Journals (Sweden)
Neha Gupta
2013-12-01
Full Text Available 800x600 In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In first model the reliability of subsystems are considered as different objectives. In second model the cost and time spent on repairing the components are considered as two different objectives. These two models is formulated as multi-objective Nonlinear Programming Problem (MONLPP and a Fuzzy goal programming method is used to work out the compromise allocation in multi-objective selective maintenance reliability model in which we define the membership functions of each objective function and then transform membership functions into equivalent linear membership functions by first order Taylor series and finally by forming a fuzzy goal programming model obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method. Normal 0 false false false EN-US X-NONE X-NONE MicrosoftInternetExplorer4
Selection Strategies for Social Influence in the Threshold Model
Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy
The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
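A minimal version of the Threshold model, together with a degree-rank initiator choice, can be sketched on a toy graph (the graph and threshold are hypothetical, chosen only to show how initiator placement changes the cascade):

```python
def threshold_cascade(adj, initiators, theta=0.5):
    """Linear threshold cascade: a node adopts once the fraction of its
    neighbors that have adopted exceeds theta. Nodes update asynchronously
    until no further adoption occurs; returns the final adopter set."""
    adopted = set(initiators)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node not in adopted and nbrs:
                frac = sum(1 for n in nbrs if n in adopted) / len(nbrs)
                if frac > theta:
                    adopted.add(node)
                    changed = True
    return adopted

# Small toy graph: node 0 is the hub, node 4 is a leaf
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [3]}
hub = max(adj, key=lambda n: len(adj[n]))          # degree-rank strategy
spread = threshold_cascade(adj, {hub}, theta=0.4)  # full cascade
leaf_spread = threshold_cascade(adj, {4}, theta=0.4)  # stalls early
```

Seeding the hub converts the whole graph, while seeding the leaf stalls after one step, a small-scale version of why initiator-selection strategy matters under the TM.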
Selection of models to calculate the LLW source term
International Nuclear Information System (INIS)
Sullivan, T.M.
1991-10-01
Performance assessment of an LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab
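A deliberately simplified source term of the kind such reviews survey treats wasteform leaching as first order and lets it compete against radioactive decay; the rate constants below are illustrative only, not values from the document:

```python
import math

def release_rate(t, a0, lam_leach, lam_decay):
    """Toy source term: activity leaves the wasteform by first-order
    leaching (lam_leach, per year) while also decaying (lam_decay), so the
    release rate is lam_leach * A(t) with
    A(t) = a0 * exp(-(lam_leach + lam_decay) * t)."""
    return lam_leach * a0 * math.exp(-(lam_leach + lam_decay) * t)

def total_released(a0, lam_leach, lam_decay):
    """Integral of the release rate from 0 to infinity: only the fraction
    lam_leach / (lam_leach + lam_decay) of the inventory ever escapes."""
    return a0 * lam_leach / (lam_leach + lam_decay)

a0 = 1.0e3  # initial inventory (arbitrary activity units)
released = total_released(a0, lam_leach=1e-3, lam_decay=2.31e-2)  # ~30 y half-life
```

Even this toy shows why the inventory alone overstates the source term: for a short-lived nuclide most of the activity decays in place before the slow leach can release it.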
SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.
Weight, Michael D; Harpending, Henry
2017-01-01
The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
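The breeder's equation this abstract uses as a yardstick fits in one line; the numbers below are illustrative, not those of the Amish or homicide examples:

```python
def breeders_equation(h2, selection_differential):
    """R = h^2 * S: per-generation response equals narrow-sense
    heritability times the selection differential (in trait units)."""
    return h2 * selection_differential

# Illustrative values: heritability 0.4, selected parents averaging
# 1 phenotypic SD above the population mean
response = breeders_equation(0.4, 1.0)  # 0.4 SD gained per generation
generations_per_sd = 1.0 / response     # generations to move the mean 1 SD
```

Dividing an observed between-group difference (in SD units) by such a per-generation response gives the kind of plausibility check the paper applies to social traits.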
Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.
Musso, Fabio; Feverati, Giovanni
2012-01-01
We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection, and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that the result is much more general than the model considered here and could also play a role in biological evolution. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
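The error-threshold behaviour described above can be illustrated with the classic single-peak quasispecies model (an analogy to, not a reproduction of, the paper's Turing-machine model): selection maintains the fittest genotype only while per-genome copying fidelity stays high enough relative to its fitness advantage.

```python
def master_frequency(sigma, u, L):
    """Equilibrium frequency of the fittest ('master') genotype in Eigen's
    single-peak quasispecies model, ignoring back mutation:
    x* = (Q*sigma - 1) / (sigma - 1), with copying fidelity Q = (1-u)^L,
    clipped at 0. sigma = fitness advantage, u = per-site mutation rate,
    L = genome length."""
    Q = (1 - u) ** L
    return max(0.0, (Q * sigma - 1) / (sigma - 1))

# Below the error threshold the master genotype is maintained by selection;
# past it, the maximum amount of selectable information collapses.
print(master_frequency(10.0, 0.01, 50))  # ~0.56: maintained
print(master_frequency(10.0, 0.1, 50))   # 0.0: beyond the error threshold
```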
Kodrans-Nsiah, M.; de Lange, G. J.; Zonneveld, K. A.
2007-12-01
Organic-walled dinoflagellate cysts undergo species-selective decomposition in oxic environments but are preserved under anoxic conditions. The relative degradation rate of individual species has been assessed during exposure experiments under oxic and anoxic conditions at two sites in the Eastern Mediterranean, namely the Urania and Bannock Basin areas. These areas contain anoxic brine waters and are overlain by oxic intermediate and surface waters. The exposed material consisted of sediments collected from (a) the Namibian shelf, (b) the S1 sapropel and (c) the modern eastern Mediterranean. After 15 months of exposure to oxic conditions, sub-samples from the Namibian shelf and the sapropel showed a considerable reduction in cyst concentration compared to their original abundance. Exposure to anoxic conditions did not lead to detectable differences relative to the initial composition. Our experimental results indicate that Brigantedinium spp. and Echinidinium granulatum are very sensitive to oxygen exposure, whereas Spiniferites spp., Lingulodinium machaerophorum and Echinidinium spp. appear to be moderately sensitive. Nematosphaeropsis labyrinthus, Echinidinium aculeatum, Operculodinium israelianum, and Impagidinium aculeatum are extremely resistant to aerobic decay. The observed changes in the dinocyst assemblages provide a clear and straightforward argument that species-selective degradation of dinocysts takes place rapidly under oxic conditions, independently of dinocyst production and transport processes. Our results imply that diagenetically induced composition changes have to be taken into account not only in interpretations of fossil records but also of (sub)recent dinocyst assemblages.
Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.
Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi
2016-01-01
Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of genomic selection.
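A toy island-model selection loop conveys the general idea (this is a sketch only; the paper's method uses marker-based genomic prediction on real rice genotypes, which is not reproduced here — the additive trait, population sizes, and migration schedule below are all assumptions):

```python
import random

random.seed(1)
L, N, ISLANDS = 40, 30, 4   # loci, island size, number of islands

def fitness(g):
    """Additive trait: simply the count of favourable alleles."""
    return sum(g)

def next_gen(pop):
    """Truncation selection of the top half, then uniform crossover."""
    parents = sorted(pop, key=fitness, reverse=True)[:N // 2]
    kids = []
    for _ in range(N):
        a, b = random.sample(parents, 2)
        kids.append([random.choice(pair) for pair in zip(a, b)])
    return kids

# Independent subpopulations ("islands") with occasional migration,
# which preserves variation that a single panmictic population would lose.
islands = [[[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
           for _ in range(ISLANDS)]
for gen in range(30):
    islands = [next_gen(pop) for pop in islands]
    if gen % 5 == 4:  # migration: pass one individual to the next island
        for i in range(ISLANDS):
            j = (i + 1) % ISLANDS
            islands[i][0], islands[j][0] = islands[j][0], islands[i][0]

best = max(fitness(g) for pop in islands for g in pop)
print(best)  # approaches the optimum of 40
```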
Variable Selection in Model-based Clustering: A General Variable Role Modeling
Maugis, Cathy; Celeux, Gilles; Martin-Magniette, Marie-Laure
2008-01-01
The currently available variable selection procedures in model-based clustering assume that the irrelevant clustering variables are all independent or are all linked with the relevant clustering variables. We propose a more versatile variable selection model which describes three possible roles for each variable: the relevant clustering variables, the irrelevant clustering variables dependent on a part of the relevant clustering variables, and the irrelevant clustering variables totally independent of the relevant clustering variables.
A Dual-Stage Two-Phase Model of Selective Attention
Hubner, Ronald; Steinhauser, Marco; Lehle, Carola
2010-01-01
The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…
Direction selectivity in a model of the starburst amacrine cell.
Tukker, John J; Taylor, W Rowland; Smith, Robert G
2004-01-01
The starburst amacrine cell (SBAC), found in all mammalian retinas, is thought to provide the directional inhibitory input recorded in On-Off direction-selective ganglion cells (DSGCs). While voltage recordings from the somas of SBACs have not shown robust direction selectivity (DS), the dendritic tips of these cells display direction-selective calcium signals, even when gamma-aminobutyric acid (GABAa,c) channels are blocked, implying that inhibition is not necessary to generate DS. This suggested that the distinctive morphology of the SBAC could generate a DS signal at the dendritic tips, where most of its synaptic output is located. To explore this possibility, we constructed a compartmental model incorporating realistic morphological structure, passive membrane properties, and excitatory inputs. We found robust DS at the dendritic tips but not at the soma. Two-spot apparent motion and annulus radial motion produced weak DS, but thin bars produced robust DS. For these stimuli, DS was caused by the interaction of a local synaptic input signal with a temporally delayed "global" signal, that is, an excitatory postsynaptic potential (EPSP) that spread from the activated inputs into the soma and throughout the dendritic tree. In the preferred direction the signals in the dendritic tips coincided, allowing summation, whereas in the null direction the local signal preceded the global signal, preventing summation. Sine-wave grating stimuli produced the greatest amount of DS, especially at high velocities and low spatial frequencies. The sine-wave DS responses could be accounted for by a simple mathematical model, which summed phase-shifted signals from soma and dendritic tip. By testing different artificial morphologies, we discovered DS was relatively independent of the morphological details, but depended on having a sufficient number of inputs at the distal tips and a limited electrotonic isolation. Adding voltage-gated calcium channels to the model showed that their
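The phase-shifted summation account in the final sentences above can be sketched directly: two equal-amplitude sinusoids offset by a phase lag sum to a sinusoid of amplitude 2·cos(lag/2), so the near-coincident signals of the preferred direction sum to more than the misaligned signals of the null direction. The specific lags below are hypothetical, not fitted values from the paper.

```python
import math

def summed_peak(phase_shift):
    """Peak amplitude of the sum of two unit sinusoids offset by
    phase_shift radians: sin(wt) + sin(wt + p) has amplitude 2*cos(p/2)."""
    return 2.0 * math.cos(phase_shift / 2.0)

# Hypothetical lags between the local (dendritic tip) and global (soma)
# signals: small in the preferred direction, large in the null direction.
preferred = summed_peak(0.2)   # near coincidence -> large summed response
null = summed_peak(2.0)        # misalignment -> smaller summed response
ds_index = (preferred - null) / (preferred + null)
print(round(ds_index, 3))
```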
Predictive and Descriptive CoMFA Models: The Effect of Variable Selection.
Sepehri, Bakhtyar; Omidikia, Nematollah; Kompany-Zareh, Mohsen; Ghavami, Raouf
2018-01-01
Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so for each data set 9 CoMFA models were built. The results obtained show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Among them, SPA-jackknife removes most of the variables while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS runs need only a few seconds. Also, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves the CoMFA contour map information for both fields. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Parametric pattern selection in a reaction-diffusion model.
Directory of Open Access Journals (Sweden)
Michael Stich
Full Text Available We compare spot patterns generated by Turing mechanisms with those generated by replication cascades, in a model one-dimensional reaction-diffusion system. We determine the stability region of spot solutions in parameter space as a function of a natural control parameter (feed-rate), where degenerate patterns with different numbers of spots coexist for a fixed feed-rate. While it is possible to generate identical patterns via both mechanisms, we show that replication cascades lead to a wider choice of pattern profiles that can be selected through a tuning of the feed-rate, exploiting hysteresis and directionality effects of the different pattern pathways.
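A feed-rate-controlled reaction-diffusion system of this general type can be sketched with an explicit-Euler Gray-Scott integration (an assumption: the paper's model is of this activator/depleted-substrate family; the parameter values here are illustrative, with F playing the role of the feed-rate control parameter):

```python
def gray_scott_1d(n=100, steps=2000, Du=0.08, Dv=0.04, F=0.035, k=0.06, dt=1.0):
    """Explicit-Euler integration of a 1-D Gray-Scott system with periodic
    boundaries. u is the substrate (fed at rate F), v the activator."""
    u, v = [1.0] * n, [0.0] * n
    for i in range(n // 2 - 5, n // 2 + 5):   # seed a local perturbation
        u[i], v[i] = 0.5, 0.25

    def lap(a, i):                            # discrete Laplacian, dx = 1
        return a[(i - 1) % n] - 2.0 * a[i] + a[(i + 1) % n]

    for _ in range(steps):
        # Both updates read the *old* state: the tuple is built before assignment.
        u, v = (
            [u[i] + dt * (Du * lap(u, i) - u[i] * v[i] ** 2 + F * (1.0 - u[i]))
             for i in range(n)],
            [v[i] + dt * (Dv * lap(v, i) + u[i] * v[i] ** 2 - (F + k) * v[i])
             for i in range(n)],
        )
    return u, v

u, v = gray_scott_1d()
print(round(min(u), 3), round(max(v), 3))
```

Sweeping F while reusing the previous state as the initial condition is the standard way to probe the hysteresis effects the abstract mentions.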
The Continual Reassessment Method for Multiple Toxicity Grades: A Bayesian Model Selection Approach
Yuan, Ying; Zhang, Shemin; Zhang, Wenhong; Li, Chanjuan; Wang, Ling; Xia, Jielai
2014-01-01
Grade information was considered by Yuan et al. (2007), who proposed a Quasi-CRM method to incorporate grade toxicity information in phase I trials. A potential problem with the Quasi-CRM model is that the choice of skeleton may dramatically vary the performance of the CRM model, with similar consequences for the Quasi-CRM model. In this paper, we propose a new model that utilizes a Bayesian model selection approach, the Robust Quasi-CRM model, to tackle the above-mentioned pitfall of the Quasi-CRM model. The Robust Quasi-CRM model directly inherits the BMA-CRM model proposed by Yin and Yuan (2009) to consider a parallel set of skeletons for Quasi-CRM. The superior performance of the Robust Quasi-CRM model was demonstrated by extensive simulation studies. We conclude that the proposed method can be freely used in real practice. PMID:24875783
Discounting model selection with area-based measures: A case for numerical integration.
Gilroy, Shawn P; Hantula, Donald A
2018-03-01
A novel method for analyzing delay discounting data is proposed. This newer metric, a model-based Area Under Curve (AUC) combining approximate Bayesian model selection and numerical integration, was compared to the point-based AUC methods developed by Myerson, Green, and Warusawitharana (2001) and extended by Borges, Kuang, Milhorn, and Yi (2016). Using data from computer simulation and a published study, comparisons of these methods indicated that a model-based form of AUC offered a more consistent and statistically robust measurement of area than provided by using point-based methods alone. Beyond providing a form of AUC directly from a discounting model, numerical integration methods permitted a general calculation in cases when the Effective Delay 50 (ED50) measure could not be calculated. This allowed discounting model selection to proceed in conditions where data are traditionally more challenging to model and measure, a situation where point-based AUC methods are often enlisted. Results from simulation and existing data indicated that numerical integration methods extended both the area-based interpretation of delay discounting as well as the discounting model selection approach. Limitations of point-based AUC as a first-line analysis of discounting and additional extensions of discounting model selection were also discussed. © 2018 Society for the Experimental Analysis of Behavior.
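The model-based AUC idea can be sketched for Mazur's hyperbolic discounting model: fit the model, then numerically integrate it over the delay range and normalize, so the area is available even where point-based AUC or ED50 is awkward. The parameter values below are illustrative assumptions.

```python
import math

def hyperbolic(delay, k):
    """Mazur's hyperbolic discounting model, v(D) = 1 / (1 + k*D)."""
    return 1.0 / (1.0 + k * delay)

def model_auc(k, d_max, n=10_000):
    """Normalized model-based AUC via trapezoidal numerical integration:
    (1/Dmax) * integral of v(D) over [0, Dmax], so AUC lies in (0, 1]."""
    h = d_max / n
    total = 0.5 * (hyperbolic(0, k) + hyperbolic(d_max, k))
    total += sum(hyperbolic(i * h, k) for i in range(1, n))
    return (total * h) / d_max

# Sanity check against the closed form ln(1 + k*Dmax) / (k*Dmax):
k, d_max = 0.05, 365
exact = math.log(1 + k * d_max) / (k * d_max)
print(abs(model_auc(k, d_max) - exact) < 1e-3)  # True
```

The same quadrature works unchanged for discounting models with no convenient antiderivative, which is the situation the abstract highlights.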
gamboostLSS: An R Package for Model Building and Variable Selection in the GAMLSS Framework
Directory of Open Access Journals (Sweden)
Benjamin Hofner
2016-10-01
Full Text Available Generalized additive models for location, scale and shape (GAMLSS) are a flexible class of regression models that allow multiple parameters of a distribution function, such as the mean and the standard deviation, to be modelled simultaneously. With the R package gamboostLSS, we provide a boosting method to fit these models. Variable selection and model choice are naturally available within this regularized regression framework. To introduce and illustrate the R package gamboostLSS and its infrastructure, we use a data set on stunted growth in India. In addition to the specification and application of the model itself, we present a variety of convenience functions, including methods for tuning parameter selection, prediction and visualization of results. The package gamboostLSS is available from the Comprehensive R Archive Network (CRAN) at https://CRAN.R-project.org/package=gamboostLSS.
International Nuclear Information System (INIS)
Miller, C.W.; Dunning, D.E. Jr.; Etnier, E.L.; Hoffman, F.O.; Little, C.A.; Meyer, H.R.; Shaeffer, D.L.; Till, J.E.
1979-07-01
Evaluations of selected predictive models and parameters used in the assessment of the environmental transport and dosimetry of radionuclides are summarized. Major sections of this report include a validation of the Gaussian plume dispersion model, comparison of the output of a model for the transport of ¹³¹I from vegetation to milk with field data, validation of a model for the fraction of aerosols intercepted by vegetation, an evaluation of dose conversion factors for ²³²Th, an evaluation of the effect of age dependency on population dose estimates, and a summary of validation results for hydrologic transport models.
The Baltic Sea experiment BALTEX: a brief overview and some selected results
Energy Technology Data Exchange (ETDEWEB)
Raschke, E.; Karstens, U.; Nolte-Holube, R.; Brandt, R.; Isemer, H.J.; Lohmann, D.; Lobmeyr, M.; Rockel, B.; Stuhlmann, R. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik
1997-12-31
The mechanisms responsible for the transfer of energy and water within the climate system are under worldwide investigation within the framework of the Global Energy and Water Cycle Experiment (GEWEX) to improve the predictability of natural and man-made climate changes at short and long ranges and their impact on water resources. Five continental-scale experiments have been established within GEWEX to enable a more complete coupling between atmospheric and hydrological models. One of them is the Baltic Sea Experiment (BALTEX). In this paper, the goals and structure of BALTEX are outlined. A short overview of measuring and modelling strategies is given. Atmospheric and hydrological model results of the authors are presented. This includes validation of precipitation using station measurements as well as validation of modelled cloud cover with cloud estimates from satellite data. Furthermore, results of a large-scale grid-based hydrological model to be coupled to atmospheric models are presented. (orig.) [German abstract, translated] Within the GEWEX programme (Global Energy and Water Cycle Experiment), worldwide investigations are being carried out into the mechanisms that govern the transfer of energy and water within the climate system. The aim is to improve the predictability of natural and anthropogenic climate changes over short and longer time scales, and of their impact on available water resources. A total of five continental-scale experiments have been launched within GEWEX for this purpose; their priority is the coupling of hydrological models to atmospheric models. One of these experiments is BALTEX (Baltic Sea Experiment). This paper presents the goals and structure of BALTEX and gives a short overview of the measurement and modelling strategy. First results of the authors are also presented. These include a comparison between measured and
Labonne, Jacques; Hendry, Andrew P
2010-07-01
The standard predictions of ecological speciation might be nuanced by the interaction between natural and sexual selection. We investigated this hypothesis with an individual-based model tailored to the biology of guppies (Poecilia reticulata). We specifically modeled the situation where a high-predation population below a waterfall colonizes a low-predation population above a waterfall. Focusing on the evolution of male color, we confirm that divergent selection causes the appreciable evolution of male color within 20 generations. The rate and magnitude of this divergence were reduced when dispersal rates were high and when female choice did not differ between environments. Adaptive divergence was always coupled to the evolution of two reproductive barriers: viability selection against immigrants and hybrids. Different types of sexual selection, however, led to contrasting results for another potential reproductive barrier: mating success of immigrants. In some cases, the effects of natural and sexual selection offset each other, leading to no overall reproductive isolation despite strong adaptive divergence. Sexual selection acting through female choice can thus strongly modify the effects of divergent natural selection and thereby alter the standard predictions of ecological speciation. We also found that under no circumstances did divergent selection cause appreciable divergence in neutral genetic markers.
Modeling Knowledge Resource Selection in Expert Librarian Search
KAUFMAN, David R.; MEHRYAR, Maryam; CHASE, Herbert; HUNG, Peter; CHILOV, Marina; JOHNSON, Stephen B.; MENDONCA, Eneida
2011-01-01
Providing knowledge at the point of care offers the possibility of reducing error and improving patient outcomes. However, the vast majority of physicians' information needs are not met in a timely fashion. The research presented in this paper models an expert librarian's search strategies as they pertain to the selection and use of various electronic information resources. The 10 searches conducted by the librarian to address physicians' information needs varied in terms of complexity and question type. The librarian employed a total of 10 resources and used as many as 7 in a single search. The longer-term objective is to model the sequential process in sufficient detail as to be able to contribute to the development of intelligent automated search agents. PMID:19380912
Directory of Open Access Journals (Sweden)
Rodrigo Alves Silva
2017-09-01
Full Text Available This paper aims to show the importance of using financial metrics in the selection of credit scoring models. To this end, we considered an automatic approval system approach and carried out a performance analysis of financial metrics on the theoretical portfolios generated by seven credit scoring models based on the main statistical learning techniques. The models were estimated on the German Credit dataset and the results were analyzed with four metrics: total accuracy, error cost, risk-adjusted return on capital and the Sharpe index. The results show that total accuracy, widely used as a criterion for selecting credit scoring models, is unable to select the most profitable model for the company, indicating the need to incorporate financial metrics into the credit scoring model selection process. Keywords: credit risk; model selection; statistical learning.
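The gap between accuracy and cost can be sketched with two of the metrics named above. The German Credit task is conventionally evaluated with an asymmetric cost matrix that penalizes accepting a bad applicant five times more than rejecting a good one; the tiny label vectors below are fabricated purely for illustration.

```python
def total_accuracy(y_true, y_pred):
    """Fraction of correctly classified applicants."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def error_cost(y_true, y_pred, cost_bad_as_good=5.0, cost_good_as_bad=1.0):
    """Average misclassification cost with asymmetric costs.
    Encoding assumed here: 1 = bad risk, 0 = good risk."""
    cost = 0.0
    for t, p in zip(y_true, y_pred):
        if t == 1 and p == 0:      # bad applicant approved: expensive
            cost += cost_bad_as_good
        elif t == 0 and p == 1:    # good applicant rejected: cheaper
            cost += cost_good_as_bad
    return cost / len(y_true)

# Two hypothetical models with identical accuracy but different costs:
y_true  = [1, 1, 0, 0, 0, 0]
model_a = [0, 1, 0, 0, 0, 0]   # misses one bad applicant
model_b = [1, 1, 1, 0, 0, 0]   # rejects one good applicant
print(total_accuracy(y_true, model_a) == total_accuracy(y_true, model_b))  # True
print(error_cost(y_true, model_a) > error_cost(y_true, model_b))           # True
```

Accuracy cannot separate the two models, yet their portfolio economics differ by a factor of five on the misclassified case, which is exactly the abstract's point.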
Ye, M.; Elshall, A. S.; Tang, G.; Samani, S.
2016-12-01
Bayesian Model Evidence (BME) is the measure of the average fit of the model to the data over all the parameter values that the model can take. By accounting for the trade-off between the model's ability to reproduce the observation data and model complexity, BME estimates of candidate models are used to calculate model weights, which in turn are used for model selection and model averaging. This study shows that accurate estimation of the BME is important for penalizing models with more complexity. To improve the accuracy of BME estimation, we resort to Monte Carlo numerical estimators over semi-analytical solutions (such as the Laplace approximation, BIC, KIC and others). This study examines prominent numerical estimators of BME: thermodynamic integration (TI), and the importance sampling methods of arithmetic mean (AM), harmonic mean (HM), and steppingstone sampling (SS). The AM estimator (based on prior sampling) and the HM estimator (based on posterior sampling) are straightforward to implement, yet they lead to underestimation and overestimation, respectively. TI and SS improve on this by sampling multiple intermediate distributions that link the prior and the posterior, using Markov chain Monte Carlo (MCMC). TI and SS are theoretically unbiased estimators that are mathematically rigorous. Yet a theoretically unbiased estimator can have large bias in practice arising from the numerical implementation, because MCMC sampling errors in certain intermediate distributions can introduce bias. We propose an SS variant, namely multiple one-steppingstone sampling (MOSS), which turns these intermediate stumbling "blocks" of SS into steppingstones toward BME estimation. Thus, MOSS is less sensitive to MCMC sampling errors. We evaluate these estimators on a groundwater transport model selection problem. The modeling results show that the SS and MOSS estimators gave the most accurate results. In addition, the results show that the magnitude of the estimation error is a
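The AM and HM estimators named above are easy to demonstrate on a toy conjugate-normal problem where the evidence is known in closed form (this is a textbook illustration, not the groundwater application of the study): AM averages the likelihood over prior draws, HM takes the harmonic mean of the likelihood over posterior draws. In this one-dimensional toy both land near the truth, but HM is notoriously unstable because the inverse likelihood can have infinite variance, which is the practical overestimation problem the abstract refers to.

```python
import math
import random

random.seed(0)

# Model: y ~ N(theta, 1), prior theta ~ N(0, 1), one observation y = 1.
# Exact evidence: marginal of y is N(0, 2), so p(y) = N(y; 0, 2).
y = 1.0

def lik(theta):
    """Gaussian likelihood of the single observation given theta."""
    return math.exp(-0.5 * (y - theta) ** 2) / math.sqrt(2 * math.pi)

exact = math.exp(-y * y / 4) / math.sqrt(4 * math.pi)

# AM: average the likelihood over draws from the prior N(0, 1).
prior_draws = [random.gauss(0, 1) for _ in range(200_000)]
am = sum(lik(t) for t in prior_draws) / len(prior_draws)

# HM: harmonic mean of the likelihood over draws from the posterior,
# which is conjugate here: N(y/2, 1/2).
post_draws = [random.gauss(y / 2, math.sqrt(0.5)) for _ in range(200_000)]
hm = len(post_draws) / sum(1.0 / lik(t) for t in post_draws)

print(round(exact, 4), round(am, 4), round(hm, 4))
```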
Fetal Intervention in Right Outflow Tract Obstructive Disease: Selection of Candidates and Results
Directory of Open Access Journals (Sweden)
E. Gómez Montes
2012-01-01
Full Text Available Objectives. To describe the process of selection of candidates for fetal cardiac intervention (FCI) in fetuses diagnosed with pulmonary atresia/critical stenosis with intact ventricular septum (PA/CS-IVS) and to report our own experience with FCI for this disease. Methods. We searched our database for cases of PA/CS-IVS prenatally diagnosed in 2003–2012. Data of 38 fetuses were retrieved and analyzed. FCI was offered to 6 patients (2 refused). In the remaining cases it was not offered due to the presence of either favourable prognostic echocardiographic markers (n=20) or poor prognostic indicators (n=12). Results. The outcome of fetuses with PA/CS-IVS was accurately predicted with multiparametric scoring systems. Pulmonary valvuloplasty was technically successful in all 4 fetuses. The growth of the fetal right heart and hemodynamic parameters showed a Gaussian-like behaviour, with an improvement in the first weeks and a slow worsening as pregnancy advanced, probably indicating restenosis. Conclusions. The most likely type of circulation after birth may be predicted in the second trimester of pregnancy by combining cardiac dimensions and functional parameters. Fetal pulmonary valvuloplasty in midgestation is technically feasible and in well-selected cases may improve right heart growth, fetal hemodynamics, and postnatal outcome.
Results of radiation tests at cryogenic temperature on some selected organic materials for the LHC
International Nuclear Information System (INIS)
Schoenbacher, H.; Szeless, B.; Tavlet, M.; Humer, K.; Weber, H.W.
1996-01-01
Future multi-TeV particle accelerators like the CERN Large Hadron Collider (LHC) will use superconducting magnets where organic materials will be exposed to high radiation levels at temperatures as low as 2 K. A representative selection of organic materials comprising insulating films, cable insulations, and epoxy-type impregnated resins were exposed to neutron and gamma radiation of a nuclear reactor. Depending on the type of materials, the integrated radiation doses varied between 180 kGy and 155 MGy. During irradiation, the samples were kept close to the boiling temperature of liquid nitrogen i.e. ∼ 80 K and thereafter stored in liquid nitrogen and transferred at the same temperature into the testing device for measurement of tensile and flexural strength. Tests were carried out on the same materials at similar dose rates at room temperature, and the results were compared with those obtained at cryogenic temperature. They show that, within the selected dose range, a number of organic materials are suitable for use in the radiation field of the LHC at cryogenic temperature. (orig.)
Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis
Directory of Open Access Journals (Sweden)
Lyu Kehong
2014-06-01
Full Text Available In helicopter transmission systems, it is important to monitor and track tooth damage evolution using multiple sensors and detection methods. This paper develops a novel approach for sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann–Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified with simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and that it is effective in reducing test cost and improving the system's reliability.
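The two trend statistics named above are small enough to sketch whole: the Mann–Kendall S statistic counts concordant minus discordant time-ordered pairs, and Sen's slope is the median of all pairwise slopes. The condition-indicator series below is fabricated for illustration.

```python
import statistics

def mann_kendall_s(x):
    """Mann-Kendall S statistic: sum of sign(x[j] - x[i]) over all pairs
    i < j. Large positive S indicates a monotonically rising series."""
    n = len(x)
    return sum((x[j] > x[i]) - (x[j] < x[i])
               for i in range(n - 1) for j in range(i + 1, n))

def sens_slope(x):
    """Sen's slope estimator: median of all pairwise slopes, a robust
    trend magnitude for a unit-spaced series."""
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(len(x) - 1) for j in range(i + 1, len(x))]
    return statistics.median(slopes)

# A hypothetical condition indicator rising with damage level:
ci = [0.10, 0.12, 0.11, 0.15, 0.18, 0.21, 0.20, 0.26]
print(mann_kendall_s(ci))          # 24 of a possible 28: strongly monotonic
print(round(sens_slope(ci), 3))
```

Ranking candidate CIs by such statistics, then keeping the most monotonic ones, is the spirit of the selection step the abstract describes.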
Vendor selection and order allocation using an integrated fuzzy mathematical programming model
Directory of Open Access Journals (Sweden)
Farzaneh Talebi
2015-09-01
Full Text Available In the context of supply chain management, supplier selection plays a key role in reaching desirable production planning. In today's competitive world, many enterprises have focused on selecting appropriate suppliers in an attempt to reduce purchasing costs and improve the quality of products and services. Supplier selection is a multi-criteria decision problem, which includes different qualitative and quantitative criteria such as purchase cost, on-time delivery, quality of service, etc. In this study, a fuzzy multi-objective mathematical programming model is presented to select appropriate suppliers and assign desirable orders to them. The proposed model was implemented for an organization by considering 16 different scenarios, and the results are compared with two other existing methods.
Orbital-selective Mott phase in multiorbital models for iron pnictides and chalcogenides
Yu, Rong; Si, Qimiao
2017-09-01
There is increasing recognition that the multiorbital nature of the 3d electrons is important to the proper description of the electronic states in the normal state of the iron-based superconductors. Earlier studies of the pertinent multiorbital Hubbard models identified an orbital-selective Mott phase, which anchors the orbital-selective behavior seen in the overall phase diagram. An important characteristic of the models is that the orbitals are kinetically coupled, i.e., hybridized, to each other, which makes the orbital-selective Mott phase especially nontrivial. A U(1) slave-spin method was used to analyze the model with nonzero orbital-level splittings. Here we develop a Landau free-energy functional to shed further light on this issue. We put the microscopic analysis from the U(1) slave-spin approach in this perspective, and show that the intersite spin correlations are crucial to the renormalization of the bare hybridization amplitude towards zero and the concomitant realization of the orbital-selective Mott transition. Based on this insight, we discuss additional ways to study the orbital-selective Mott physics from a dynamical competition between the interorbital hybridization and collective spin correlations. Our results demonstrate the robustness of the orbital-selective Mott phase in the multiorbital models appropriate for the iron-based superconductors.
Cliff-edge model of obstetric selection in humans.
Mitteroecker, Philipp; Huttegger, Simon M; Fischer, Barbara; Pavlicev, Mihaela
2016-12-20
The strikingly high incidence of obstructed labor due to the disproportion of fetal size and the mother's pelvic dimensions has puzzled evolutionary scientists for decades. Here we propose that these high rates are a direct consequence of the distinct characteristics of human obstetric selection. Neonatal size relative to the birth-relevant maternal dimensions is highly variable and positively associated with reproductive success until it reaches a critical value, beyond which natural delivery becomes impossible. As a consequence, the symmetric phenotype distribution cannot match the highly asymmetric, cliff-edged fitness distribution well: The optimal phenotype distribution that maximizes population mean fitness entails a fraction of individuals falling beyond the "fitness edge" (i.e., those with fetopelvic disproportion). Using a simple mathematical model, we show that weak directional selection for a large neonate, a narrow pelvic canal, or both is sufficient to account for the considerable incidence of fetopelvic disproportion. Based on this model, we predict that the regular use of Caesarean sections throughout the last decades has led to an evolutionary increase of fetopelvic disproportion rates by 10 to 20%.
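The cliff-edge argument above can be reproduced numerically: with a symmetric (Gaussian) phenotype distribution and a fitness function that rises weakly up to a fixed "edge" and drops to zero beyond it, the phenotype mean that maximizes population mean fitness sits below the edge yet still leaves a nonzero fraction of individuals beyond it. The sketch below is a minimal illustration of that qualitative claim, not the paper's actual model; the unit standard deviation, edge location, and selection slope are all arbitrary assumptions.

```python
import math

def mean_fitness(mu, sigma=1.0, edge=0.0, slope=0.2, n=1201):
    """Population mean fitness for phenotypes ~ N(mu, sigma^2) under a
    cliff-edged fitness: w(x) rises weakly up to the edge, is zero beyond."""
    lo, hi = mu - 6.0 * sigma, mu + 6.0 * sigma
    dx = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * dx
        pdf = math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
        w = max(0.0, 1.0 + slope * x) if x <= edge else 0.0
        total += w * pdf * dx
    return total

# grid-search the phenotype mean that maximizes population mean fitness
best_mu = max((i / 50 - 5.0 for i in range(501)), key=mean_fitness)
# fraction of the population falling beyond the edge (fetopelvic disproportion)
frac_beyond = 0.5 * math.erfc(-best_mu / math.sqrt(2.0))
```

Even weak directional selection (slope 0.2 here) pushes the optimal mean close enough to the cliff that a few percent of the population falls over it, which is the paper's core point.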
Developing a conceptual model for selecting and evaluating online markets
Directory of Open Access Journals (Sweden)
Sadegh Feizollahi
2013-04-01
Full Text Available There is considerable evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy customers' explicit and implicit requirements. Internet shopping is a concept developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically in the realm of the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivation and information. With the development of new technologies, credit and financial exchange facilities were built into websites to facilitate e-commerce. The study sent a total of 200 questionnaires to the target group (teachers, students, professionals and managers of commercial web sites) and managed to collect 130 questionnaires for final evaluation. Cronbach's alpha test was used to measure reliability; to evaluate the validity of the measurement instruments (questionnaires) and assure construct validity, confirmatory factor analysis was employed. In addition, the research questions were analyzed using the path analysis method to determine market selection models. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online markets in Iran. These findings provide a consistent, targeted and holistic framework for the development of the internet market in the country.
Elsheikh, A. H.
2013-12-01
Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to models of different levels of complexity. In this work, we report the first successful application of nested sampling to the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
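Nested sampling itself is more involved, but the role the Bayesian evidence plays in penalizing over-parameterized models can be shown with a brute-force toy: for low-dimensional problems the evidence integral can be computed on a grid, and a simpler model that explains the data equally well receives the higher posterior weight. The data, noise level, and uniform priors below are all made up for illustration; this is not the NS algorithm from the paper.

```python
import math

# deterministic toy "calibration data": flat truth plus alternating residuals
xs = [i / 9 for i in range(10)]
ys = [1.0 + 0.05 * (-1) ** i for i in range(10)]
SIGMA = 0.1  # assumed observation noise

def log_like(params, model):
    s = 0.0
    for x, y in zip(xs, ys):
        r = (y - model(params, x)) / SIGMA
        s += -0.5 * r * r - math.log(SIGMA * math.sqrt(2.0 * math.pi))
    return s

def evidence(model, dim, lo=-3.0, hi=3.0, n=241):
    """Brute-force marginal likelihood with independent uniform priors on [lo, hi]."""
    dx = (hi - lo) / (n - 1)
    grid = [lo + i * dx for i in range(n)]
    if dim == 1:
        tot = sum(math.exp(log_like((a,), model)) for a in grid) * dx
    else:
        tot = sum(math.exp(log_like((a, b), model)) for a in grid for b in grid) * dx * dx
    return tot / (hi - lo) ** dim

Z_const = evidence(lambda p, x: p[0], dim=1, n=1201)          # 1-parameter model
Z_line = evidence(lambda p, x: p[0] + p[1] * x, dim=2, n=241)  # 2-parameter model
w_const = Z_const / (Z_const + Z_line)  # posterior model weight under equal priors
```

Because the data carry no trend, the extra slope parameter buys almost no likelihood but dilutes the prior mass, so the constant model receives the larger evidence — the Occam's-razor behavior the abstract describes.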
The effect of template selection on diffusion tensor voxel-based analysis results.
Van Hecke, Wim; Leemans, Alexander; Sage, Caroline A; Emsell, Louise; Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Parizel, Paul M
2011-03-15
Diffusion tensor imaging (DTI) is increasingly being used to study white matter (WM) degeneration in patients with psychiatric and neurological disorders. In order to compare diffusion measures across subjects in an automated way, voxel-based analysis (VBA) methods were introduced. In VBA, all DTI data are transformed to a template, after which the diffusion measures of control subjects and patients are compared quantitatively in each voxel. Although VBA has many advantages compared to other post-processing approaches, such as region of interest analysis or tractography, VBA results need to be interpreted cautiously, since it has been demonstrated that they depend on the different parameter settings that are applied in the VBA processing pipeline. In this paper, we examine the effect of the template selection on the VBA results of DTI data. We hypothesized that the choice of template to which all data are transformed would also affect the VBA results. To this end, simulated DTI data sets as well as DTI data from control subjects and multiple sclerosis patients were aligned to (i) a population-specific DTI template, (ii) a subject-based DTI atlas in MNI space, and (iii) the ICBM-81 DTI atlas. Our results suggest that the highest sensitivity and specificity to detect WM abnormalities in a VBA setting was achieved using the population-specific DTI atlas, presumably due to the better spatial image alignment to this template. Copyright © 2010 Elsevier Inc. All rights reserved.
Observing with a space-borne gamma-ray telescope: selected results from INTEGRAL
International Nuclear Information System (INIS)
Schanne, Stephane
2006-01-01
The International Gamma-Ray Astrophysics Laboratory, i.e. the INTEGRAL satellite of ESA, in orbit for about three years, performs gamma-ray observations of the sky in the 15 keV to 8 MeV energy range. Thanks to its imager IBIS, and in particular the ISGRI detection plane based on 16384 CdTe pixels, it achieves an excellent angular resolution (12 arcmin) for point source studies with good continuum spectrum sensitivity. Thanks to its spectrometer SPI, based on 19 germanium detectors maintained at 85 K by a cryogenic system and located inside an active BGO veto shield, it achieves an excellent spectral resolution of about 2 keV for 1 MeV photons, which permits astrophysical gamma-ray line studies with good narrow-line sensitivity. In this paper we review some goals of gamma-ray astronomy from space and present the INTEGRAL satellite, in particular its instruments ISGRI and SPI. Ground and in-flight calibration results from SPI are shown, followed by selected astrophysical results from INTEGRAL. In particular, results on point source searches are presented, followed by results on nuclear astrophysics, exemplified by the study of the 1809 keV gamma-ray line from radioactive 26Al nuclei produced by ongoing stellar nucleosynthesis in the Galaxy. Finally, a review of the study of positron-electron annihilation in the Galactic center region, producing 511 keV gamma-rays, is presented.
Directory of Open Access Journals (Sweden)
Masoud Ghodrati
Full Text Available Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models, most of which try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically in several processing stages. Along these stages, a set of features of increasing complexity is extracted by different parts of the visual system: elementary features like bars and edges are processed in earlier levels of the visual pathway, and more complex features are detected further along it. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model for different object recognition tasks. In this model, a set of object parts, named patches, is extracted in the intermediate stages. These object parts are used in the training procedure of the model and play an important role in object recognition. The patches are selected indiscriminately from different positions of an image, which can lead to the extraction of non-discriminative patches and eventually reduce performance. In the proposed model we used an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, where it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features which are parts of target objects provide an efficient set for robust object recognition.
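The evolutionary selection of informative patches can be sketched with a toy genetic algorithm: individuals are fixed-size subsets of candidate patches, and selection plus mutation drives the population toward the most informative subset. The per-patch scores below are hypothetical stand-ins for the recognition-performance feedback the real model would use.

```python
import random

random.seed(5)
N, K, POP, GENS = 40, 8, 30, 80
# toy per-patch "informativeness" scores; in the real model these would come
# from recognition performance on training images (hypothetical values here)
info = [random.random() for _ in range(N)]

def fitness(ind):
    return sum(info[i] for i in ind)

def mutate(ind):
    """Swap one selected patch for a random unselected one."""
    ind = set(ind)
    ind.remove(random.choice(sorted(ind)))
    ind.add(random.choice([i for i in range(N) if i not in ind]))
    return frozenset(ind)

pop = [frozenset(random.sample(range(N), K)) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]                    # truncation selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)
optimum = sum(sorted(info, reverse=True)[:K])     # best possible subset score
```

With an additive fitness the problem is trivial by design; the point is only to show the select-and-mutate loop that replaces indiscriminate random patch sampling.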
DEFF Research Database (Denmark)
Yang, Ziheng; Nielsen, Rasmus
2008-01-01
Current models of codon substitution are formulated at the level of nucleotide substitution and do not explicitly consider the separate effects of mutation and selection. They are thus incapable of inferring whether mutation or selection is responsible for evolution at silent sites. Here we implement a few population genetics models of codon substitution that explicitly consider mutation bias and natural selection at the DNA level. Selection on codon usage is modeled by introducing codon-fitness parameters, which together with mutation-bias parameters predict optimal codon frequencies. The models are used to examine the null hypothesis that codon usage is due to mutation bias alone, not influenced by natural selection. Application of the test to the mammalian data led to rejection of the null hypothesis in most genes, suggesting that natural selection may be a driving force in the evolution of synonymous codon usage.
Directory of Open Access Journals (Sweden)
A. Pawliczek
2015-10-01
Full Text Available The paper deals with employment and other selected personnel attributes, such as employee affiliations, employee benefits, monitoring of employee satisfaction, monitoring of work productivity, investment in employee education, and obstacles to hiring qualified human resources. These characteristics are benchmarked against enterprise size, based on employee counts in the year 2013. The relevant data were collected in Czech industrial enterprises, including metallurgical companies, through a university questionnaire survey intended to induce a synergy effect from mutual communication between academia, students and industry. The most important results are presented later in the paper, complemented by a discussion based on relevant professional literature. The findings suggest that bigger companies monitor productivity and satisfaction and dismiss employees more frequently, unlike medium-sized companies, which do not reduce their workforce and respond to the impact of crises with decreased affiliations, reduced benefits and similar savings.
International Nuclear Information System (INIS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-01-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection, irrelevant or redundant variables are eliminated and a suitable subset of variables is identified as the input of a model. At the same time, input variable selection simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machines (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction. (paper)
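The idea behind mutual-information-based input selection can be illustrated in a few lines: candidate inputs are ranked by how much information they share with the target, and uninformative candidates score near zero. The sketch below uses a plain histogram MI estimate on synthetic data — a simplification of the partial mutual information (PMI) method named in the abstract, which additionally conditions on already-selected inputs.

```python
import math, random

random.seed(0)
N = 2000
x1 = [random.random() for _ in range(N)]   # relevant candidate input
x2 = [random.random() for _ in range(N)]   # irrelevant candidate input
y = [math.sin(6.0 * a) + random.gauss(0.0, 0.1) for a in x1]  # target driven by x1 only

def mutual_information(a, b, bins=12):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
    def bin_indices(v):
        lo, hi = min(v), max(v)
        return [min(bins - 1, int((x - lo) / (hi - lo + 1e-12) * bins)) for x in v]
    ia, ib = bin_indices(a), bin_indices(b)
    n = len(a)
    joint, pa, pb = {}, {}, {}
    for i, j in zip(ia, ib):
        joint[(i, j)] = joint.get((i, j), 0) + 1
        pa[i] = pa.get(i, 0) + 1
        pb[j] = pb.get(j, 0) + 1
    return sum(c / n * math.log(c * n / (pa[i] * pb[j])) for (i, j), c in joint.items())

mi1 = mutual_information(x1, y)  # should be clearly positive
mi2 = mutual_information(x2, y)  # should be near the small-sample bias floor
```

Ranking candidates by such scores (and, in PMI proper, re-scoring after each selection) yields the reduced input subset fed to the downstream model.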
The selection pressures induced non-smooth infectious disease model and bifurcation analysis
International Nuclear Information System (INIS)
Qin, Wenjie; Tang, Sanyi
2014-01-01
Highlights: • A non-smooth infectious disease model describing selection pressure is developed. • The effect of selection pressure on infectious disease transmission is addressed. • The key factors related to the threshold value are determined. • The stabilities and bifurcations of the model are revealed in detail. • Strategies for the prevention of emerging infectious disease are proposed. - Abstract: Mathematical models can assist in the design of strategies to control emerging infectious diseases. This paper deduces a non-smooth infectious disease model induced by selection pressures. Analysis of this model reveals rich dynamics, including local and global stability of equilibria and local sliding bifurcations. Model solutions ultimately stabilize at either one real equilibrium or the pseudo-equilibrium on the switching surface of the model, depending on a threshold value determined by related parameters. Our main results show that reducing the threshold value to an appropriate level could improve the efficacy of prevention and treatment of emerging infectious diseases, which indicates that selection pressures can be beneficial for preventing emerging infectious diseases under medical resource limitation.
Genomic Selection Accuracy using Multifamily Prediction Models in a Wheat Breeding Program
Directory of Open Access Journals (Sweden)
Elliot L. Heffner
2011-03-01
Full Text Available Genomic selection (GS) uses genome-wide molecular marker data to predict the genetic value of selection candidates in breeding programs. In plant breeding, the ability to produce large numbers of progeny per cross allows GS to be conducted within each family. However, this approach requires phenotypes of lines from each cross before conducting GS. This will prolong the selection cycle and may result in lower gains per year than approaches that estimate marker effects with multiple families from previous selection cycles. In this study, phenotypic selection (PS), conventional marker-assisted selection (MAS), and GS prediction accuracy were compared for 13 agronomic traits in a population of 374 winter wheat (Triticum aestivum L.) advanced-cycle breeding lines. A cross-validation approach that trained and validated prediction accuracy across years was used to evaluate effects of model selection, training population size, and marker density in the presence of genotype × environment interactions (G×E). The average prediction accuracies using GS were 28% greater than with MAS and were 95% as accurate as PS. For net merit, the average accuracy across six selection indices for GS was 14% greater than for PS. These results provide empirical evidence that multifamily GS could increase genetic gain per unit time and cost in plant breeding.
The animal model determines the results of Aeromonas virulence factors
Directory of Open Access Journals (Sweden)
Alejandro Romero
2016-10-01
Full Text Available The selection of an experimental animal model is of great importance in the study of bacterial virulence factors. Here, a bath infection of zebrafish larvae is proposed as an alternative model to study the virulence factors of A. hydrophila. Intraperitoneal infections in mice and trout were compared with bath infections in zebrafish larvae using specific mutants. The great advantage of this model is that bath immersion mimics the natural route of infection, and injury to the tail also provides a natural portal of entry for the bacteria. The involvement of T3SS in the virulence of A. hydrophila was analysed using the AH-1::aopB mutant. This mutant was less virulent than the wild-type strain when inoculated into zebrafish larvae, as described in other vertebrates. However, the zebrafish model exhibited slight differences in mortality kinetics only observed using invertebrate models. Infections using the mutant AH-1∆vapA, lacking the gene coding for the surface S-layer, suggested that this protein was not totally necessary to the bacteria once inside the host, but that it contributed to the inflammatory response. Only when healthy zebrafish larvae were infected did the mutant produce less mortality than the wild type. Variations between models were evidenced using AH-1∆rmlB, which lacks the O-antigen lipopolysaccharide (LPS), and AH-1∆wahD, which lacks the O-antigen LPS and part of the LPS outer core. Both mutants showed decreased mortality in all of the animal models, but the differences between them were only observed in injured zebrafish larvae, suggesting that residues from the LPS outer core must be important for virulence. The greatest differences were observed using AH-1ΔFlaB-J (lacking polar flagella and unable to swim) and AH-1::motX (non-motile but producing flagella). They were as pathogenic as the wild-type strain when injected into mice and trout, but no mortalities were registered in zebrafish larvae. This study
A multicriteria decision making model for assessment and selection of an ERP in a logistics context
Pereira, Teresa; Ferreira, Fernanda A.
2017-07-01
The aim of this work is to apply a decision support methodology based on a multicriteria decision analysis (MCDA) model that allows the assessment and selection of an Enterprise Resource Planning (ERP) system in a Portuguese logistics company by a Group Decision Maker (GDM). A Decision Support System (DSS) implementing the Multicriteria Methodology for the Assessment and Selection of Information Systems/Information Technologies (MMASSI/IT) is used, chosen for its features and its facility to change and adapt the model to a given scope. Using this DSS, the information system best suited to the decision context was obtained, and this result was evaluated through a sensitivity and robustness analysis.
Directory of Open Access Journals (Sweden)
Sveiczer Akos
2006-03-01
Full Text Available Abstract Background There is considerable controversy concerning the exact growth profile of size parameters during the cell cycle. Linear, exponential and bilinear models are commonly considered, and the same model may not apply to all species. Selection of the most adequate model to describe a given data set requires the use of quantitative model selection criteria, such as the partial (sequential) F-test, the Akaike information criterion and the Schwarz Bayesian information criterion, which are suitable for comparing differently parameterized models in terms of the quality and robustness of the fit but have not yet been used in cell growth-profile studies. Results Length increase data from representative individual fission yeast (Schizosaccharomyces pombe) cells measured on time-lapse films have been reanalyzed using these model selection criteria. To fit the data, an extended version of a recently introduced linearized biexponential (LinBiExp) model was developed, which makes possible a smooth, continuously differentiable transition between two linear segments and, hence, allows fully parametrized bilinear fittings. Despite relatively small differences, essentially all the quantitative selection criteria considered here indicated that the bilinear model was somewhat more adequate than the exponential model for fitting these fission yeast data. Conclusion A general quantitative framework was introduced to judge the adequacy of bilinear versus exponential models in the description of growth time-profiles. For single cell growth, because of the relatively limited data range, the statistical evidence is not strong enough to favor one model clearly over the other and to settle the bilinear versus exponential dispute. Nevertheless, for the present individual cell growth data for fission yeast, the bilinear model seems more adequate according to all metrics, especially in the case of wee1Δ cells.
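The kind of criterion-based comparison the abstract describes can be sketched end to end: fit an exponential and a continuous bilinear (hinge) model to a synthetic single-cell length record and compare Gaussian AIC values. Everything below — the growth rates, breakpoint, noise level, and the log-linear shortcut for the exponential fit — is an illustrative assumption, not the paper's LinBiExp procedure.

```python
import math, random

random.seed(3)
# synthetic single-cell length record: truly bilinear growth, rate change at t = 2
ts = [i / 10 for i in range(41)]
def true_len(t):
    return 2.0 + 0.5 * t if t < 2.0 else 3.0 + 1.0 * (t - 2.0)
ys = [true_len(t) + random.gauss(0.0, 0.03) for t in ts]

def lstsq(X, y):
    """Least squares via normal equations and Gaussian elimination (no NumPy)."""
    m = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(m)]
         + [sum(r[i] * yy for r, yy in zip(X, y))] for i in range(m)]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m + 1):
                A[r][c] -= f * A[col][c]
    b = [0.0] * m
    for i in reversed(range(m)):
        b[i] = (A[i][m] - sum(A[i][j] * b[j] for j in range(i + 1, m))) / A[i][i]
    return b

def rss(pred):
    return sum((p - y) ** 2 for p, y in zip(pred, ys))

def aic(n, k, r):
    return n * math.log(r / n) + 2 * k   # Gaussian AIC up to an additive constant

# exponential model y = A*exp(k*t): fitted on log(y), RSS taken on the original scale
b0, b1 = lstsq([[1.0, t] for t in ts], [math.log(y) for y in ys])
rss_exp = rss([math.exp(b0 + b1 * t) for t in ts])

# continuous bilinear model via a hinge basis, grid search over the breakpoint t0
best = None
for t0 in [i / 10 for i in range(5, 36)]:
    c = lstsq([[1.0, t, max(0.0, t - t0)] for t in ts], ys)
    r = rss([c[0] + c[1] * t + c[2] * max(0.0, t - t0) for t in ts])
    if best is None or r < best[0]:
        best = (r, t0)
rss_bi, t0_hat = best

n = len(ts)
aic_exp = aic(n, 2, rss_exp)   # parameter counts exclude the shared noise variance
aic_bi = aic(n, 4, rss_bi)     # 3 hinge coefficients + breakpoint
```

Because the bilinear model carries extra parameters, it only wins when its RSS improvement outweighs the 2k penalty — exactly the trade-off the F-test, AIC and BIC formalize.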
A structured approach for selecting carbon capture process models : A case study on monoethanolamine
van der Spek, Mijndert; Ramirez, Andrea
2014-01-01
Carbon capture and storage is considered a promising option to mitigate CO2 emissions. This has resulted in many R&D efforts focusing at developing viable carbon capture technologies. During carbon capture technology development, process modeling plays an important role. Selecting an appropriate
Climate Change and Agricultural Productivity in Sub-Saharan Africa: A Spatial Sample Selection Model
Ward, P.S.; Florax, R.J.G.M.; Flores-Lagunes, A.
2014-01-01
Using spatially explicit data, we estimate a cereal yield response function using a recently developed estimator for spatial error models when endogenous sample selection is of concern. Our results suggest that yields across Sub-Saharan Africa will decline with projected climatic changes, and that
Computationally efficient thermal-mechanical modelling of selective laser melting
Yang, Yabin; Ayas, Can
2017-10-01
Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution for a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. The analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers wide. In turn, the semi-analytical thermal model allows for a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.
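The superposition idea can be sketched with textbook conduction theory: a scan vector is discretized into instantaneous point sources on the surface of a semi-infinite body, and each contributes the standard kernel q/(ρc (4παt)^(3/2)) exp(−r²/4αt), doubled by an image source for an adiabatic surface. This is a simplification of the paper's method (which uses line sources plus a complementary field for the true boundary conditions), and all material and laser parameters below are illustrative assumptions.

```python
import math

# assumed material parameters (roughly steel-like, purely illustrative)
ALPHA = 5e-6       # thermal diffusivity, m^2/s
RHO_C = 4.5e6      # volumetric heat capacity, J/(m^3 K)
P, V = 200.0, 0.5  # laser power (W) and scan speed (m/s), hypothetical

def delta_T(x, y, z, t, length=1e-3, n_src=200):
    """Temperature rise at (x, y, z) and time t due to one scan vector along the
    x-axis, modelled as a train of instantaneous point sources on the surface.
    The factor 2 is the image source enforcing an adiabatic free surface."""
    dt_src = (length / V) / n_src
    T = 0.0
    for i in range(n_src):
        t_i = (i + 0.5) * dt_src        # emission time of sub-source i
        tau = t - t_i                   # elapsed time since emission
        if tau <= 0.0:
            continue
        xi = V * t_i                    # sub-source position along the scan line
        r2 = (x - xi) ** 2 + y ** 2 + z ** 2
        q = P * dt_src                  # energy deposited by this sub-source, J
        T += 2.0 * q / (RHO_C * (4.0 * math.pi * ALPHA * tau) ** 1.5) \
             * math.exp(-r2 / (4.0 * ALPHA * tau))
    return T

# temperature rise 50 um behind the end of the vector, just after the scan ends
T_peak = delta_T(0.95e-3, 0.0, 0.0, 2.1e-3)
# same point, 0.2 mm below the surface: the steep near-surface gradient
T_depth = delta_T(0.95e-3, 0.0, 2.0e-4, 2.1e-3)
```

The sharp drop from `T_peak` to `T_depth` reflects the steep gradients near the laser spot that motivate resolving the source term analytically while keeping the complementary field on a coarse grid.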
Convergence models for cylindrical caverns and the resulting ground subsidence
Energy Technology Data Exchange (ETDEWEB)
Haupt, W.; Sroka, A.; Schober, F.
1983-02-01
The authors studied the effects of different convergence characteristics on surface soil response for the case of narrow, cylindrical caverns. Maximum ground subsidence - a parameter of major importance for this type of cavern - was calculated for different convergence models. The models were established without considering the laws of rock mechanics and rheology. As a result, two limiting convergence models were obtained that describe an interval of expectation into which all other models fit. This means that ground movements over cylindrical caverns can be calculated 'on the safe side', correlating the trough resulting on the surface with the convergence characteristics of the cavern. Among other applications, the method thus permits monitoring of caverns.
A Site Selection Model for a Straw-Based Power Generation Plant with CO2 Emissions
Directory of Open Access Journals (Sweden)
Hao Lv
2014-10-01
Full Text Available The decision on the location of a straw-based power generation plant has a great influence on the plant's operation and performance. This study explores traditional theories of site selection. Using integer programming, it optimizes the economic and carbon emission outcomes of straw-based power generation as two objectives, with the supply and demand of straw as constraints, and provides a multi-objective mixed-integer programming model to solve the site selection problem for a straw-based power generation plant. A case study demonstrates the application of the model to the siting decision for a straw-based power generation plant within a Chinese region. Finally, the paper discusses the result of the model in the context of the wider aspects of straw-based power generation.
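At toy scale the siting problem can be solved by direct enumeration: each candidate site has a fixed cost plus straw-hauling costs drawn from capacity-limited collection points, with a carbon penalty folded into the transport term. This collapses the paper's two objectives into one weighted cost — a scalarization, not the actual multi-objective MIP — and all numbers are hypothetical.

```python
# hypothetical data: 4 straw collection points and 3 candidate plant sites
supply = [30_000, 45_000, 25_000, 50_000]   # straw available per point, t/yr
demand = 60_000                             # straw one plant consumes, t/yr
sites = {  # site -> (fixed cost, per-tonne haul cost from each collection point)
    "A": (4.0e6, [12, 30, 25, 40]),
    "B": (3.5e6, [35, 10, 28, 22]),
    "C": (4.2e6, [20, 24, 11, 18]),
}
CARBON_WEIGHT = 0.4  # transport-emission penalty folded into cost (illustrative)

def site_cost(name):
    fixed, haul = sites[name]
    # cheapest-first straw allocation; optimal for a single site's haul subproblem
    need, transport = demand, 0.0
    for i in sorted(range(len(supply)), key=lambda i: haul[i]):
        take = min(need, supply[i])
        transport += take * haul[i] * (1.0 + CARBON_WEIGHT)
        need -= take
        if need == 0:
            break
    return fixed + transport

best = min(sites, key=site_cost)   # enumerate all candidate sites
```

With binary site decisions a MIP solver does the same search implicitly; here site B wins because its cheap access to the two largest collection points outweighs the rivals' lower haul rates elsewhere.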
Directory of Open Access Journals (Sweden)
Rytz Andreas
2002-06-01
Full Text Available Abstract Background The biomedical community is developing new methods of data analysis to more efficiently process the massive data sets produced by microarray experiments. Systematic and global mathematical approaches that can be readily applied to a large number of experimental designs become fundamental to correctly handle the otherwise overwhelming data sets. Results The gene selection model presented herein is based on the observation that: (1) variance of gene expression is a function of absolute expression; (2) one can model this relationship in order to set an appropriate lower fold change limit of significance; and (3) this relationship defines a function that can be used to select differentially expressed genes. The model first evaluates fold change (FC) across the entire range of absolute expression levels for any number of experimental conditions. Genes are systematically binned, and those genes within the top X% of highest FCs for each bin are evaluated both with and without the use of replicates. A function is fitted through the top X% of each bin, thereby defining a limit fold change. All genes selected by the 5% FC model lie above measurement variability using a within-standard-deviation (SDwithin) confidence level of 99.9%. Real-time PCR (RT-PCR) analysis demonstrated 85.7% concordance with microarray data selected by the limit function. Conclusion The FC model can confidently select differentially expressed genes, as corroborated by variance data and RT-PCR. The simplicity of the overall process permits selecting model limits that best describe experimental data by extracting information on gene expression patterns across the range of expression levels. Genes selected by this process can be consistently compared between experiments, and the approach enables the user to globally extract information with a high degree of confidence.
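The bin-then-threshold step can be sketched directly: genes are binned by absolute expression, and each bin gets its own fold-change limit at a high quantile of |FC|, so the limit is large where expression (and variance) is low and small where expression is high. The synthetic data and the variance-vs-abundance relation below are made up for illustration, and a simple per-bin quantile stands in for the paper's fitted limit function.

```python
import math, random

random.seed(7)
# synthetic log2 fold changes: variance shrinks as absolute expression grows
data = []
for _ in range(3000):
    level = 10 ** random.uniform(1, 4)            # absolute expression level
    sigma = 0.6 / math.log10(level)               # noisier at low expression
    data.append((level, random.gauss(0.0, sigma)))  # null (unregulated) gene
for _ in range(30):                                # spike in regulated genes
    level = 10 ** random.uniform(1, 4)
    data.append((level, random.choice([-1, 1]) * random.uniform(1.0, 3.0)))

def limit_fold_change(data, n_bins=10, top_frac=0.05):
    """Bin genes by absolute expression; per bin, set the limit at the
    (1 - top_frac) quantile of |FC| and select genes above it."""
    ordered = sorted(data)                        # sort by expression level
    size = len(ordered) // n_bins
    limits, selected = [], []
    for b in range(n_bins):
        chunk = ordered[b * size:] if b == n_bins - 1 else ordered[b * size:(b + 1) * size]
        fcs = sorted(abs(fc) for _, fc in chunk)
        lim = fcs[int((1 - top_frac) * len(fcs))]  # per-bin limit fold change
        limits.append(lim)
        selected.extend((lvl, fc) for lvl, fc in chunk if abs(fc) > lim)
    return limits, selected

limits, selected = limit_fold_change(data)
```

A single global fold-change cutoff would either flood the low-expression bins with noise or miss modest but real changes among abundant genes; the expression-dependent limit avoids both.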
Meteorological Uncertainty of atmospheric Dispersion model results (MUD)
DEFF Research Database (Denmark)
Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik
The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced...
The 2013 European Seismic Hazard Model: key components and results
Jochen Woessner; Danciu Laurentiu; Domenico Giardini; Helen Crowley; Fabrice Cotton; G. Grünthal; Gianluca Valensise; Ronald Arvidsson; Roberto Basili; Mine Betül Demircioglu; Stefan Hiemer; Carlo Meletti; Roger W. Musson; Andrea N. Rovida; Karin Sesetyan
2015-01-01
The 2013 European Seismic Hazard Model (ESHM13) results from a community-based probabilistic seismic hazard assessment supported by the EU-FP7 project “Seismic Hazard Harmonization in Europe” (SHARE, 2009–2013). The ESHM13 is a consistent seismic hazard model for Europe and Turkey which overcomes the limitation of national borders and includes a thorough quantification of the uncertainties. It is the first completed regional effort contributing to the “Global Earthquake Model” initiative. It m...
Hydroclimatology of the Nile: results from a regional climate model
Directory of Open Access Journals (Sweden)
Y. A. Mohamed
2005-01-01
Full Text Available This paper presents the results of the regional coupled climatic and hydrologic model of the Nile Basin. For the first time the interaction between the climatic processes and the hydrological processes on the land surface has been fully coupled. The hydrological model is driven by the rainfall and the energy available for evaporation generated in the climate model, and the runoff generated in the catchment is again routed over the wetlands of the Nile to supply moisture for atmospheric feedback. The results obtained are quite satisfactory given the extremely low runoff coefficients in the catchment. The paper presents the validation results over the sub-basins: Blue Nile, White Nile, Atbara river, the Sudd swamps, and the Main Nile for the period 1995 to 2000. Observational datasets were used to evaluate the model results including radiation, precipitation, runoff and evaporation data. The evaporation data were derived from satellite images over a major part of the Upper Nile. Limitations in both the observational data and the model are discussed. It is concluded that the model provides a sound representation of the regional water cycle over the Nile. The sources of atmospheric moisture to the basin, and location of convergence/divergence fields could be accurately illustrated. The model is used to describe the regional water cycle in the Nile basin in terms of atmospheric fluxes, land surface fluxes and land surface-climate feedbacks. The monthly moisture recycling ratio (i.e. locally generated/total precipitation) over the Nile varies between 8 and 14%, with an annual mean of 11%, which implies that 89% of the Nile water resources originates from outside the basin physical boundaries. The monthly precipitation efficiency varies between 12 and 53%, and the annual mean is 28%. The mean annual result of the Nile regional water cycle is compared to that of the Amazon and the Mississippi basins.
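The recycling figures quoted above follow from a simple identity: whatever share of precipitation is not recycled locally must be advected from outside the basin. A minimal arithmetic check (illustrative numbers chosen to match the reported 11% annual mean, not actual model fluxes):

```python
# Moisture recycling ratio: locally generated precipitation / total precipitation.
# Flux values below are illustrative, in arbitrary units.
local_precip = 110.0
total_precip = 1000.0

recycling_ratio = local_precip / total_precip   # 0.11, i.e. 11%
external_fraction = 1.0 - recycling_ratio       # 0.89, i.e. 89% from outside the basin
```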
Results of a model for premixed combustion oscillations
Energy Technology Data Exchange (ETDEWEB)
Janus, M.C.; Richards, G.A.
1996-09-01
Combustion oscillations are receiving renewed research interest due to increasing use of lean premix (LPM) combustion in gas turbines. A simple, nonlinear model for premixed combustion is described in this paper. The model was developed to help explain specific experimental observations and to provide guidance for development of active control schemes based on nonlinear concepts. The model can be used to quickly examine instability trends associated with changes in equivalence ratio, mass flow rate, geometry, ambient conditions, etc. The model represents the relevant processes occurring in a fuel nozzle and combustor which are analogous to current LPM turbine combustors. Conservation equations for the fuel nozzle and combustor are developed from simple control volume analysis, providing a set of ordinary differential equations that can be solved on a personal computer. Combustion is modeled as a stirred reactor, with a bimolecular reaction rate between fuel and air. A variety of numerical results and comparisons to experimental data are presented to demonstrate the utility of the model. Model results are used to understand the fundamental mechanisms which drive combustion oscillations, effects of inlet air temperature and nozzle geometry on instability, and effectiveness of open loop control schemes.
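As a loose illustration of the kind of lumped-parameter model the abstract describes, the sketch below integrates a single stirred-reactor fuel-mass balance with explicit Euler. The equation, parameter values, and function name are hypothetical stand-ins, not the paper's actual nozzle/combustor conservation equations:

```python
# Hypothetical stirred-reactor fuel balance: dY/dt = (Y_in - Y)/tau - k*Y,
# where Y is fuel mass fraction, tau a residence time, k a consumption rate.
def simulate(y_in=0.05, tau=0.01, k=200.0, dt=1e-5, steps=2000):
    """Integrate the ODE with explicit Euler and return the trajectory."""
    y = 0.0
    history = []
    for _ in range(steps):
        y += dt * ((y_in - y) / tau - k * y)
        history.append(y)
    return history

trace = simulate()
# Analytic steady state of this linear ODE: y_ss = y_in / (1 + k*tau)
y_ss = 0.05 / (1.0 + 200.0 * 0.01)
```

A real instability study would couple several such control volumes with a flame time lag; this fragment only shows the ODE-integration skeleton such models share.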
Directory of Open Access Journals (Sweden)
Yoshihiro Uesawa
2016-02-01
Full Text Available Random forest (RF) is a machine-learning ensemble method with high predictive performance. Majority voting in RF uses the discrimination results of numerous decision trees produced from bootstrapped data. For the same dataset, the bootstrapping process yields different predictive capacities in each generation. As participants in the Toxicology in the 21st Century (Tox21) DATA Challenge 2014, we produced numerous RF models for predicting the structures of compounds that can activate each toxicity-related pathway, and then selected the model with the highest predictive ability. Half of the compounds in the training dataset supplied by the competition organizer were allocated to the validation dataset. The remaining compounds were used in model construction. The charged and uncharged forms of each molecule were calculated using the Molecular Operating Environment (MOE) software. Subsequently, the descriptors were computed using MOE, MarvinView, and Dragon. These combined methods yielded over 4,071 descriptors for model construction. Using these descriptors, pattern recognition analyses were performed by RF implemented in JMP Pro (a statistical software package). A hundred to two hundred RF models were generated for each pathway. The predictive performance of each model was tested against the validation dataset, and the best-performing model was selected. In the competition, this procedure selected the model, validated on the 50% hold-out set, that best predicted the structures of compounds that activate the estrogen receptor ligand-binding domain (ER-LBD).
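The bootstrap-and-select workflow described above can be sketched with toy one-dimensional "models" in place of real decision trees and chemical descriptors. Everything here (the data generator, the threshold-stump classifier, the number of models) is a hypothetical stand-in for the RF models the authors built in JMP Pro:

```python
import random

random.seed(0)

# Toy 1-D dataset: the label is 1 when x > 0.5, with 10% label noise.
def make_data(n):
    pts = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < 0.1:
            y = 1 - y
        pts.append((x, y))
    return pts

train, valid = make_data(200), make_data(200)

def fit_stump(sample):
    """Pick the threshold (among sample xs) with the best training accuracy."""
    best_t, best_acc = 0.0, -1.0
    for t, _ in sample:
        acc = sum(int(x > t) == y for x, y in sample) / len(sample)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def accuracy(t, pts):
    return sum(int(x > t) == y for x, y in pts) / len(pts)

# Each bootstrap resample yields a different model; keep the one that
# scores best on the held-out validation half, as in the abstract.
models = [fit_stump(random.choices(train, k=len(train))) for _ in range(25)]
best = max(models, key=lambda t: accuracy(t, valid))
```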
Nallikuzhy, Jiss J; Dandapat, S
2017-06-01
In this work, a new patient-specific approach to enhance the spatial resolution of ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of the standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better compared to other models. Copyright © 2017 Elsevier Ltd. All rights reserved.
Summary of FY15 results of benchmark modeling activities
Energy Technology Data Exchange (ETDEWEB)
Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-08-01
Sandia is participating, as a contributing partner, in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.
Consistency in Estimation and Model Selection of Dynamic Panel Data Models with Fixed Effects
Directory of Open Access Journals (Sweden)
Guangjie Li
2015-07-01
Full Text Available We examine the relationship between consistent parameter estimation and model selection for autoregressive panel data models with fixed effects. We find that the transformation of fixed effects proposed by Lancaster (2002) does not necessarily lead to consistent estimation of common parameters when some true exogenous regressors are excluded. We propose a data dependent way to specify the prior of the autoregressive coefficient and argue for comparing different model specifications before parameter estimation. Model selection properties of Bayes factors and the Bayesian information criterion (BIC) are investigated. When model uncertainty is substantial, we recommend the use of Bayesian Model Averaging to obtain point estimators with lower root mean squared errors (RMSE). We also study the implications of different levels of inclusion probabilities by simulations.
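The BIC comparison referred to above can be illustrated on a toy regression problem. This is a generic Gaussian-likelihood BIC sketch, not the paper's panel-data setup; the data-generating process and all names are assumptions:

```python
import math
import random

random.seed(1)

# Toy data generated from a linear model y = 2x + noise.
n = 100
xs = [random.random() for _ in range(n)]
ys = [2.0 * x + random.gauss(0.0, 0.1) for x in xs]

def gaussian_bic(residuals, k, n):
    """BIC = k*ln(n) - 2*ln(L), with Gaussian errors and MLE variance."""
    sigma2 = sum(r * r for r in residuals) / n
    loglik = -0.5 * n * (math.log(2.0 * math.pi * sigma2) + 1.0)
    return k * math.log(n) - 2.0 * loglik

# Model 1: intercept only.
mean_y = sum(ys) / n
bic1 = gaussian_bic([y - mean_y for y in ys], k=1, n=n)

# Model 2: simple OLS line y = a + b*x.
mx = sum(xs) / n
b = sum((x - mx) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = mean_y - b * mx
bic2 = gaussian_bic([y - (a + b * x) for x, y in zip(xs, ys)], k=2, n=n)
```

Since the data really are linear in x, the second model's much better fit outweighs its extra-parameter penalty, so BIC prefers it.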
Multicriteria decision group model for the selection of suppliers
Directory of Open Access Journals (Sweden)
Luciana Hazin Alencar
2008-08-01
Full Text Available Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for those cases where there is great divergence among the decision makers. This model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using the criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of the selection of suppliers in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project, if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project on behalf of an enterprise in a way that meets the multiple objectives of the decision-makers.
Improving permafrost distribution modelling using feature selection algorithms
Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail
2016-04-01
The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps reduce the number of factors required and improves the knowledge on adopted features and their relation with the studied phenomenon. Moreover, taking away irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidences (geophysical and thermal data and rock glacier inventories) that serve as training permafrost data. The FS algorithms used identified variables that appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to the permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
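A minimal sketch of the Information Gain filter described above, applied to a made-up presence/absence dataset; the predictor names are hypothetical, not the study's actual DEM-derived variables:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG = H(labels) - sum_v p(v) * H(labels | feature == v)."""
    n = len(labels)
    total = entropy(labels)
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        total -= (len(subset) / n) * entropy(subset)
    return total

# Hypothetical permafrost presence/absence vs two binary candidate predictors.
presence    = [1, 1, 1, 0, 0, 0, 1, 0]
altitude_hi = [1, 1, 1, 0, 0, 0, 1, 0]   # perfectly informative here
aspect_n    = [1, 0, 1, 1, 0, 1, 0, 0]   # uninformative here

ig_alt = information_gain(altitude_hi, presence)   # 1.0 bit
ig_asp = information_gain(aspect_n, presence)      # 0.0 bits
```

A filter method would keep `altitude_hi` and discard `aspect_n` based on these scores alone, without training any classifier.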
Sivakumar, Brahman S; Wong, Peter; Dick, Charles G; Steer, Richard A; Tetsworth, Kevin
2014-10-01
To highlight a technique combining fluoroscopy and arthroscopy to aid percutaneous reduction and internal fixation of selected displaced intra-articular calcaneal fractures, assess outcome scores, and compare this method with other previously reported percutaneous methods. Retrospective review of all patients treated by this technique between June 2009 and June 2012. A tertiary care center located in Brisbane, Queensland, Australia. Thirteen consecutive patients were treated by this method during this period. All patients had a minimum of 13 months follow-up and were available for radiological checks and assessment of complications; functional outcome scores were available for 9 patients. The patient was placed in a lateral decubitus position. Reduction was achieved with the aid of both intraoperative fluoroscopy and subtalar arthroscopy and held with cannulated screws in orthogonal planes. The patient was mobilized non-weight bearing for 10 weeks. Outcomes measured were improvement in Bohler angle, postoperative complications, and 3 functional outcome scores (American Orthopaedic Foot and Ankle Society ankle-hindfoot score, Foot Function Index, and Calcaneal Fracture Scoring System). Mean postoperative improvement in Bohler angle was 18.3 degrees, with subsidence of 1.7 degrees. Functional outcome scores compared favorably with the prior literature. Based on available postoperative computed tomography scans (8/13), maximal residual articular incongruity measured 2 mm or less in 87.5% of our cases. Early results indicate that this technique, when combined with careful patient selection, offers a valid therapeutic option for the treatment of a distinct subset of displaced intra-articular calcaneal fractures, with diminished risk of wound complications. Large, prospective multicenter studies will be necessary to better evaluate the potential benefits of this technique. Level IV Therapeutic. See Instructions for Authors for a complete description of levels of evidence.
Selective Cerebro-Myocardial Perfusion in Complex Neonatal Aortic Arch Pathology: Midterm Results.
Hoxha, Stiljan; Abbasciano, Riccardo Giuseppe; Sandrini, Camilla; Rossetti, Lucia; Menon, Tiziano; Barozzi, Luca; Linardi, Daniele; Rungatscher, Alessio; Faggian, Giuseppe; Luciani, Giovanni Battista
2018-03-06
Aortic arch repair in newborns and infants has traditionally been accomplished using a period of deep hypothermic circulatory arrest. To reduce neurologic and cardiac dysfunction related to circulatory arrest and myocardial ischemia during complex aortic arch surgery, an alternative and novel strategy for cerebro-myocardial protection was recently developed, where regional low-flow perfusion is combined with controlled and independent coronary perfusion. The aim of the present retrospective study was to assess short-term and mid-term results of selective and independent cerebro-myocardial perfusion in neonatal aortic arch surgery. From April 2008 to August 2015, 28 consecutive neonates underwent aortic arch surgery under cerebro-myocardial perfusion. There were 17 males and 11 females, with a median age of 15 days (3-30 days) and a median body weight of 3 kg (1.6-4.2 kg), 9 (32%) of whom with low body weight. The duration of cerebro-myocardial perfusion was 30 ± 11 min (15-69 min). Renal dysfunction requiring a period of peritoneal dialysis was observed in 10 (36%) patients, while liver dysfunction was noted only in 3 (11%). There were three (11%) early and two late deaths during a median follow-up of 2.9 years (range 6 months-7.7 years), with an actuarial survival of 82% at 7 years. At latest follow-up, no patient showed signs of cardiac or neurologic dysfunction. The present experience shows that a strategy of selective and independent cerebro-myocardial perfusion is safe, versatile, and feasible in high-risk neonates with complex congenital arch pathology. Encouraging outcomes were noted in terms of cardiac and neurological function, with limited end-organ morbidity. © 2018 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
MHC allele frequency distributions under parasite-driven selection: A simulation model
Directory of Open Access Journals (Sweden)
Radwan Jacek
2010-10-01
Full Text Available Abstract Background The extreme polymorphism that is observed in major histocompatibility complex (MHC) genes, which code for proteins involved in recognition of non-self oligopeptides, is thought to result from a pressure exerted by parasites because parasite antigens are more likely to be recognized by MHC heterozygotes (heterozygote advantage) and/or by rare MHC alleles (negative frequency-dependent selection). The Ewens-Watterson test (EW) is often used to detect selection acting on MHC genes over the recent history of a population. EW is based on the expectation that allele frequencies under balancing selection should be more even than under neutrality. We used computer simulations to investigate whether this expectation holds for selection exerted by parasites on host MHC genes under conditions of heterozygote advantage and negative frequency-dependent selection acting either simultaneously or separately. Results In agreement with simple models of symmetrical overdominance, we found that heterozygote advantage acting alone in populations does, indeed, result in more even allele frequency distributions than expected under neutrality, and this is easily detectable by EW. However, under negative frequency-dependent selection, or under the joint action of negative frequency-dependent selection and heterozygote advantage, distributions of allele frequencies were less predictable: the majority of distributions were indistinguishable from neutral expectations, while the remaining runs resulted in either more even or more skewed distributions than under neutrality. Conclusions Our results indicate that, as long as negative frequency-dependent selection is an important force maintaining MHC variation, the EW test has limited utility in detecting selection acting on these genes.
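The Ewens-Watterson intuition, that balancing selection evens out allele frequencies and thereby lowers the observed homozygosity relative to neutrality, can be illustrated with a short sketch (the allele counts are made up for illustration):

```python
def homozygosity(counts):
    """Watterson's homozygosity statistic F = sum of squared allele frequencies.
    Lower F means a more even allele frequency distribution."""
    n = sum(counts)
    return sum((c / n) ** 2 for c in counts)

# Even frequencies (balancing-selection-like) vs skewed (drift-like) samples,
# both with four alleles in a sample of 100 gene copies.
f_even = homozygosity([25, 25, 25, 25])   # 0.25, the minimum for 4 alleles
f_skew = homozygosity([70, 10, 10, 10])   # about 0.52
```

The EW test compares the observed F against its neutral distribution (given sample size and allele number); the simulations in the abstract show why skewed F values can arise even under frequency-dependent selection.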
Multiphysics modeling of selective laser sintering/melting
Ganeriwala, Rishi Kumar
A significant percentage of total global employment is due to the manufacturing industry. However, manufacturing also accounts for nearly 20% of total energy usage in the United States according to the EIA. In fact, manufacturing accounted for 90% of industrial energy consumption and 84% of industry carbon dioxide emissions in 2002. Clearly, advances in manufacturing technology and efficiency are necessary to curb emissions and help society as a whole. Additive manufacturing (AM) refers to a relatively recent group of manufacturing technologies whereby one can 3D print parts, which has the potential to significantly reduce waste, reconfigure the supply chain, and generally disrupt the whole manufacturing industry. Selective laser sintering/melting (SLS/SLM) is one type of AM technology with the distinct advantage of being able to 3D print metals and rapidly produce net shape parts with complicated geometries. In SLS/SLM parts are built up layer-by-layer out of powder particles, which are selectively sintered/melted via a laser. However, in order to produce defect-free parts of sufficient strength, the process parameters (laser power, scan speed, layer thickness, powder size, etc.) must be carefully optimized. Obviously, these process parameters will vary depending on material, part geometry, and desired final part characteristics. Running experiments to optimize these parameters is costly, energy intensive, and extremely material specific. Thus a computational model of this process would be highly valuable. In this work a three dimensional, reduced order, coupled discrete element - finite difference model is presented for simulating the deposition and subsequent laser heating of a layer of powder particles sitting on top of a substrate. Validation is provided and parameter studies are conducted showing the ability of this model to help determine appropriate process parameters and an optimal powder size distribution for a given material. Next, thermal stresses upon
Costa e Silva, João; Potts, Brad M.; Lopez, Gustavo A.
2014-01-01
Using native trees from near the northern and southern extremities of the relatively continuous eastern distribution of Eucalyptus globulus in Tasmania, we compared the progenies derived from natural open-pollination (OP) with those generated from within-region and long-distance outcrossing. Controlled outcrossing amongst eight parents - with four parents from each of the northern and southern regions - was undertaken using a diallel mating scheme. The progeny were planted in two field trials located within the species native range in southern Tasmania, and their survival and diameter growth were monitored over a 13-year-period. The survival and growth performances of all controlled cross types exceeded those of the OP progenies, consistent with inbreeding depression due to a combination of selfing and bi-parental inbreeding. The poorer survival of the northern regional (♀N♂N) outcrosses compared with the local southern regional outcrosses (♀S♂S) indicated differential selection against the former. Despite this mal-adaptation of the non-local ♀N♂N crosses at both southern sites, the survival of the inter-regional hybrids (♀N♂S and ♀S♂N) was never significantly different from that of the local ♀S♂S crosses. Significant site-dependent heterosis was detected for the growth of the surviving long-distance hybrids. This was expressed as mid-parent heterosis, particularly at the more northern planting site. Heterosis increased with age, while the difference between the regional ♀N♂N and ♀S♂S crosses remained insignificant at any age at either site. Nevertheless, the results for growth suggest that the fitness of individuals derived from long-distance crossing may be better at the more northern of the planting sites. Our results demonstrate the potential for early-age assessments of pollen dispersal to underestimate realised gene flow, with local inbreeding under natural open-pollination resulting in selection favouring the products of
Directory of Open Access Journals (Sweden)
João Costa E Silva
Full Text Available Using native trees from near the northern and southern extremities of the relatively continuous eastern distribution of Eucalyptus globulus in Tasmania, we compared the progenies derived from natural open-pollination (OP) with those generated from within-region and long-distance outcrossing. Controlled outcrossing amongst eight parents - with four parents from each of the northern and southern regions - was undertaken using a diallel mating scheme. The progeny were planted in two field trials located within the species native range in southern Tasmania, and their survival and diameter growth were monitored over a 13-year-period. The survival and growth performances of all controlled cross types exceeded those of the OP progenies, consistent with inbreeding depression due to a combination of selfing and bi-parental inbreeding. The poorer survival of the northern regional (♀N♂N) outcrosses compared with the local southern regional outcrosses (♀S♂S) indicated differential selection against the former. Despite this mal-adaptation of the non-local ♀N♂N crosses at both southern sites, the survival of the inter-regional hybrids (♀N♂S and ♀S♂N) was never significantly different from that of the local ♀S♂S crosses. Significant site-dependent heterosis was detected for the growth of the surviving long-distance hybrids. This was expressed as mid-parent heterosis, particularly at the more northern planting site. Heterosis increased with age, while the difference between the regional ♀N♂N and ♀S♂S crosses remained insignificant at any age at either site. Nevertheless, the results for growth suggest that the fitness of individuals derived from long-distance crossing may be better at the more northern of the planting sites. Our results demonstrate the potential for early-age assessments of pollen dispersal to underestimate realised gene flow, with local inbreeding under natural open-pollination resulting in selection favouring the
Model selection for convolutive ICA with an application to spatiotemporal analysis of EEG
DEFF Research Database (Denmark)
Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai
2007-01-01
We present a new algorithm for maximum likelihood convolutive independent component analysis (ICA) in which components are unmixed using stable autoregressive filters determined implicitly by estimating a convolutive model of the mixing process. By introducing a convolutive mixing model for the components, we show how the order of the filters in the model can be correctly detected using Bayesian model selection. We demonstrate a framework for deconvolving a subspace of independent components in electroencephalography (EEG). Initial results suggest that in some cases, convolutive mixing may...
Estimating a dynamic model of sex selection in China.
Ebenstein, Avraham
2011-05-01
High ratios of males to females in China, which have historically concerned researchers (Sen 1990), have increased in the wake of China's one-child policy, which began in 1979. Chinese policymakers are currently attempting to correct the imbalance in the sex ratio through initiatives that provide financial compensation to parents with daughters. Other scholars have advocated a relaxation of the one-child policy to allow more parents to have a son without engaging in sex selection. In this article, I present a model of fertility choice when parents have access to a sex-selection technology and face a mandated fertility limit. By exploiting variation in fines levied in China for unsanctioned births, I estimate the relative price of a son and daughter for mothers observed in China's census data (1982-2000). I find that a couple's first son is worth 1.42 years of income more than a first daughter, and the premium is highest among less-educated mothers and families engaged in agriculture. Simulations indicate that a subsidy of 1 year of income to families without a son would reduce the number of "missing girls" by 67% but impose an annual cost of 1.8% of Chinese gross domestic product (GDP). Alternatively, a three-child policy would reduce the number of "missing girls" by 56% but increase the fertility rate by 35%.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method
Directory of Open Access Journals (Sweden)
Jun-He Yang
2017-01-01
Full Text Available Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model, based on estimating missing values followed by variable selection, to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset, ordered by date, as the research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods rather than directly deleting the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecast methods used here to improve their forecasting capability.
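As a sketch of the imputation step described above, the snippet below fills gaps in a toy series with the mean of the observed values. The readings and function name are hypothetical, and mean imputation is only one of several simple choices a study like this might compare:

```python
from statistics import mean

def impute_mean(series):
    """Replace None gaps with the mean of the observed values
    (one simple imputation choice; illustrative only)."""
    observed = [v for v in series if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in series]

# Hypothetical daily water-level readings (m) with two missing days.
levels = [244.1, None, 243.8, 244.0, None, 244.3]
filled = impute_mean(levels)   # gaps filled with 244.05
```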
Relationship Marketing results: proposition of a cognitive mapping model
Directory of Open Access Journals (Sweden)
Iná Futino Barreto
2015-12-01
Full Text Available Objective - This research sought to develop a cognitive model that expresses how marketing professionals understand the relationship between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions - The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.
On theoretical models of gene expression evolution with random genetic drift and natural selection.
Ogasawara, Osamu; Okubo, Kousaku
2009-11-20
The relative contributions of natural selection and random genetic drift are a major source of debate in the study of gene expression evolution, which is hypothesized to serve as a bridge from molecular to phenotypic evolution. It has been suggested that the conflict between views is caused by the lack of a definite model of the neutral hypothesis, which can describe the long-run behavior of evolutionary change in mRNA abundance. Therefore previous studies have used inadequate analogies with the neutral prediction of other phenomena, such as amino acid or nucleotide sequence evolution, as the null hypothesis of their statistical inference. In this study, we introduced two novel theoretical models, one based on neutral drift and the other assuming natural selection, by focusing on a common property of the distribution of mRNA abundance among a variety of eukaryotic cells, which reflects the result of long-term evolution. Our results demonstrated that (1) our models can reproduce two independently found phenomena simultaneously: the time development of gene expression divergence and Zipf's law of the transcriptome; (2) cytological constraints can be explicitly formulated to describe long-term evolution; (3) the model assuming that natural selection optimized relative mRNA abundance was more consistent with previously published observations than the model of optimized absolute mRNA abundances. The models introduced in this study give a formulation of evolutionary change in the mRNA abundance of each gene as a stochastic process, on the basis of previously published observations. This model provides a foundation for interpreting observed data in studies of gene expression evolution, including identifying an adequate time scale for discriminating the effect of natural selection from that of random genetic drift of selectively neutral variations.
Marginal production in the Gulf of Mexico - II. Model results
International Nuclear Information System (INIS)
Kaiser, Mark J.; Yu, Yunke
2010-01-01
In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the Gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and limitations of the analysis. (author)
Coarsening in an interfacial equation without slope selection revisited: Analytical results
Energy Technology Data Exchange (ETDEWEB)
Guedda, M., E-mail: guedda@u-picardie.f [LAMFA, CNRS UMR 6140, Universite de Picardie Jules Verne, Amiens (France); Trojette, H. [LAMFA, CNRS UMR 6140, Universite de Picardie Jules Verne, Amiens (France)
2010-09-20
In this Letter, we re-examine a one-dimensional model of epitaxial growth that describes pyramidal structures characterized by the absence of a preferred slope [L. Golubovic, Phys. Rev. Lett. 78 (1997) 90]. A similarity approach shows that the typical mound lateral size and the interfacial width grow with time as t{sup 1/2} and t{sup 1/4}, respectively. This result was previously presented by Golubovic. Our contribution provides a mathematical justification for the existence of similarity solutions which correspond to, or predict, the typical coarsening process.
Evaluating experimental design for soil-plant model selection with Bayesian model averaging
Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang; Gayler, Sebastian
2013-04-01
The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), the model weights in BMA are perceived as uncertain quantities with assigned probability distributions that narrow down as more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. The models were then conditioned on field measurements of soil moisture, leaf-area index (LAI), and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at the Nellingen site in Southwestern Germany. Following our new method, we derived the BMA model weights (and their distributions) when using all data or different subsets thereof. We discuss to which degree the posterior BMA mean outperformed the prior BMA
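The BMA weighting described above assigns each candidate model a posterior probability given the data. A common shortcut, not necessarily the authors' exact formulation, approximates the weights from BIC scores; the log-likelihoods and parameter counts below are invented for illustration:

```python
import math

def bma_weights(log_likelihoods, n_params, n_obs):
    """Approximate posterior model weights from BIC scores.

    BIC = -2 ln L + k ln n; with equal prior model probabilities the
    posterior weight of model i is proportional to exp(-BIC_i / 2).
    """
    bics = [-2.0 * ll + k * math.log(n_obs)
            for ll, k in zip(log_likelihoods, n_params)]
    bmin = min(bics)   # shift by the minimum for numerical stability
    raw = [math.exp(-(b - bmin) / 2.0) for b in bics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical fits of four crop models to the same soil-moisture data:
weights = bma_weights(log_likelihoods=[-120.3, -118.9, -119.5, -125.0],
                      n_params=[4, 6, 5, 3], n_obs=200)
print([round(w, 3) for w in weights])
```

Repeating such a computation on subsets of the data shows how the weight distribution narrows as more observations are conditioned on, which is the core of the design-evaluation idea above.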
Zawbaa, Hossam M.; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander
2016-01-01
Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205
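The NRMSE criterion quoted above is a simple computation; note that the normalization convention (here, the observed range) is an assumption on our part, and the values below are toy numbers, not the PLGA dissolution data:

```python
def nrmse(observed, predicted):
    """Root mean square error normalized by the observed range,
    expressed as a percentage (one common convention; the paper's
    exact definition may differ)."""
    n = len(observed)
    mse = sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n
    rng = max(observed) - min(observed)
    return 100.0 * (mse ** 0.5) / rng

# Toy dissolution profile (% drug released) vs a model prediction:
obs = [5.0, 20.0, 45.0, 70.0, 85.0, 95.0]
pred = [7.0, 18.0, 48.0, 68.0, 88.0, 93.0]
print(round(nrmse(obs, pred), 2))
```

In the multiobjective setting above, this error term is traded off against the number of selected features, so a slightly higher NRMSE (15.97% vs 15.4%) can be acceptable when it buys a smaller feature set (nine vs eleven).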
International Nuclear Information System (INIS)
Piechowski, J.; Menoux, B.
1995-01-01
The interpretation methods used for estimating individual doses from measurement results present some difficulties in adapting the metabolic models to situations encountered in practice. In essence, these difficulties concern choosing the right parameter values for the cases encountered. Moreover, apart from well-characterised incidents, they are very often related to a lack of knowledge of the contamination routes and the chronology of the episodes. As a consequence, we were led to consider how the interpretation of monitoring data could be simplified by treating separately the results specific to the routes of entry and those relating to systemic contamination, i.e. after the radionuclides have been transferred to the blood. The study first develops an approach to the interpretation of the systemic contamination measurement results. Using the systemic contamination results and the appropriate retention and excretion functions, the values of the activity absorbed daily from the routes of entry to the blood are calculated. A day-to-day follow-up of absorbed activity is thus made possible, providing easy-to-consult results in real time. Some applications of the method are proposed for acute tritium and uranium contamination cases, and for chronic tritium, uranium and iodine contaminations. The conditions and constraints affecting the validity of the proposed approach are discussed. (authors). 10 refs., 5 figs., 3 tabs
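The day-by-day follow-up described above amounts to deconvolving daily measurements with a systemic excretion function. A hedged sketch of that step (the single-exponential excretion function and all numbers are illustrative stand-ins, not the reference metabolic models):

```python
import math

def daily_absorbed_activity(measurements, excretion_fn):
    """Recover the activity A_d absorbed into blood on each day d from
    daily excretion measurements m(t), given an excretion function e(t)
    (fraction of a unit absorption excreted t days later):

        m(t) = sum_{d <= t} A_d * e(t - d)

    a lower-triangular system solved by forward substitution."""
    activities = []
    for t, m in enumerate(measurements):
        past = sum(a * excretion_fn(t - d) for d, a in enumerate(activities))
        activities.append((m - past) / excretion_fn(0))
    return activities

# Hypothetical single-exponential excretion function (10-day half-time):
e = lambda t: 0.1 * math.exp(-math.log(2) * t / 10.0)

# Forward-simulate a known absorption pattern, then recover it:
true_intakes = [100.0, 0.0, 0.0, 50.0, 0.0]
meas = [sum(a * e(t - d) for d, a in enumerate(true_intakes[:t + 1]))
        for t in range(len(true_intakes))]
est = daily_absorbed_activity(meas, e)
print([round(a, 1) for a in est])
```

Separating the systemic stage this way is what lets the method work without knowing the contamination route in advance: only the post-transfer retention and excretion functions enter the calculation.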
Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations
Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.
2017-09-01
This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
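The estimate-then-select workflow above (least-squares parameter estimation followed by AIC/BIC model choice) can be sketched on a toy problem; the competing polynomial models and the data are stand-ins of our own, not the paper's floorplan model:

```python
import math

def fit_polynomial(xs, ys, degree):
    """Ordinary least squares for a polynomial model (the Gauss-Markov
    estimator under uncorrelated, equal-variance noise), via the normal
    equations and naive Gauss-Jordan elimination (fine for tiny k)."""
    k = degree + 1
    X = [[x ** j for j in range(k)] for x in xs]
    A = [[sum(X[i][r] * X[i][c] for i in range(len(xs))) for c in range(k)]
         for r in range(k)]
    b = [sum(X[i][r] * ys[i] for i in range(len(xs))) for r in range(k)]
    for col in range(k):
        piv = A[col][col]
        for c2 in range(col, k):
            A[col][c2] /= piv
        b[col] /= piv
        for r2 in range(k):
            if r2 != col and A[r2][col]:
                f = A[r2][col]
                for c2 in range(col, k):
                    A[r2][c2] -= f * A[col][c2]
                b[r2] -= f * b[col]
    return b

def aic_bic(xs, ys, coeffs):
    """Gaussian AIC and BIC computed from the residual sum of squares."""
    n = len(xs)
    pred = [sum(c * x ** j for j, c in enumerate(coeffs)) for x in xs]
    rss = sum((y - p) ** 2 for y, p in zip(ys, pred))
    k = len(coeffs) + 1                      # + 1 for the noise variance
    ll = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    return 2 * k - 2 * ll, k * math.log(n) - 2 * ll

# Toy observations that are roughly linear; compare constant vs linear model:
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.0]
scores = {deg: aic_bic(xs, ys, fit_polynomial(xs, ys, deg)) for deg in (0, 1)}
best = min(scores, key=lambda d: scores[d][1])   # select by BIC
print(best)
```

BIC's `k ln n` penalty is what keeps the selection from overfitting sparse observations, the same role it plays for the competing floorplan hypotheses above.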
Comparison of Agricultural Trade in Selected Groups of Countries – Comparison of Real Results
Directory of Open Access Journals (Sweden)
Luboš Smutka
2015-01-01
Full Text Available World trade is changing dynamically over the long term; its total value as well as its share in the global economy are continuously growing. Despite the growth in agricultural trade, the gaps among various groups of countries and regions are becoming deeper. More and more countries lose their self-sufficiency or their net-export status and become dependent on imports. On the other hand, a limited group of countries controls most of the world's exports. The aim of the article is to identify differences in the changing values of agricultural trade among selected groups of countries. An accent is given primarily to the identification of differences relating to the real value of trading streams. These differences are defined not only in relation to absolute value, but also to values recalculated per capita, per active farmer, and per hectare of agricultural or arable land, respectively. The results indicate extreme differences between developed and developing countries, in favour of the developed countries, which control an important share of world agricultural trade. It is worth noting that despite the fact that developed countries essentially shape the character of the world agricultural market, there are huge differences among them, as can be illustrated by the EU-15 and EU-13 countries. The differences relate not only to the value of agricultural trade but can also be observed when analysing trade dynamics and productivity in relation to the production factors labour and land.
Energy Technology Data Exchange (ETDEWEB)
Nakamura, K. [Department of Radiology, Nagoya University School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya 466 (Japan); Ishiguchi, T. [Department of Radiology, Nagoya University School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya 466 (Japan); Maekoshi, H. [Department of Radiological Technology, Nagoya University College of Medical Technology, Nagoya (Japan); Ando, Y. [Department of Radiology, Nagoya University School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya 466 (Japan); Tsuzaka, M. [Department of Radiological Technology, Nagoya University College of Medical Technology, Nagoya (Japan); Tamiya, T. [Department of Radiological Technology, Nagoya University College of Medical Technology, Nagoya (Japan); Suganuma, N. [Department of Obstetrics and Gynecology, Branch Hospital, Nagoya University School of Medicine, Nagoya (Japan); Ishigaki, T. [Department of Radiology, Nagoya University School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya 466 (Japan)
1996-08-01
Clinical results of fluoroscopic fallopian tube catheterisation and absorbed radiation doses during the procedure were evaluated in 30 infertility patients with unilateral or bilateral tubal obstruction documented on hysterosalpingography. The staged technique consisted of contrast injection through an intrauterine catheter with a vacuum cup device, ostial salpingography with the wedged catheter, and selective salpingography with a coaxial microcatheter. Of 45 fallopian tubes examined, 35 (78 %) were demonstrated by the procedure, and at least one tube was newly demonstrated in 26 patients (87 %). Six of these patients conceived spontaneously in the follow-up period of 1-11 months. Four pregnancies were intrauterine and 2 were ectopic. This technique provided accurate and detailed information in the diagnosis and treatment of tubal obstruction in infertility patients. The absorbed radiation dose to the ovary in the average standardised procedure was estimated to be 0.9 cGy. Further improvement in the X-ray equipment and technique is required to reduce the radiation dose. (orig.). With 3 figs., 3 tabs.
Energy Technology Data Exchange (ETDEWEB)
Barros, Livia F.; Pecequilo, Brigitte R.S.; Aquino, Reginaldo R., E-mail: lfbarros@ipen.b, E-mail: brigitte@ipen.b, E-mail: raquino@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-07-01
The variation of natural radioactivity along the surface of the beach sands of Camburi, located in Vitoria, capital of Espirito Santo, southeastern Brazil, was determined from the contents of {sup 226}Ra, {sup 232}Th and {sup 40}K. Eleven collecting points were selected along the full 6 km extension of Camburi beach. Sand samples collected from all established points in January 2011 were dried and sealed in standard 100 mL polyethylene flasks and measured by high-resolution gamma spectrometry after a 4-week ingrowth period, in order to allow secular equilibrium in the {sup 238}U and {sup 232}Th series. The {sup 226}Ra concentration was determined from the weighted average concentrations of {sup 214}Pb and {sup 214}Bi. The {sup 232}Th concentration was determined from the weighted average concentrations of {sup 228}Ac, {sup 212}Pb and {sup 212}Bi, and the {sup 40}K concentration from its single gamma transition. Preliminary results show activity concentrations varying from 5 Bq.kg{sup -1} to 222 Bq.kg{sup -1} for {sup 226}Ra and from 14 Bq.kg{sup -1} to 1074 Bq.kg{sup -1} for {sup 232}Th, both with the highest values for Camburi South and Central. For {sup 40}K, the activity concentrations ranged from 14 Bq.kg{sup -1} to 179 Bq.kg{sup -1}, and the highest values were obtained for Camburi South. (author)
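The weighted averages over progeny gamma lines mentioned above are typically inverse-variance weighted means; a small sketch with invented values (not the Camburi measurements):

```python
def weighted_average(values_and_errors):
    """Inverse-variance weighted mean of several gamma-line estimates
    of the same activity concentration, with its standard uncertainty."""
    weights = [1.0 / (e * e) for _, e in values_and_errors]
    total = sum(weights)
    mean = sum(w * v for w, (v, _) in zip(weights, values_and_errors)) / total
    return mean, (1.0 / total) ** 0.5

# Hypothetical 226Ra estimates (Bq/kg) from the 214Pb and 214Bi lines:
mean, err = weighted_average([(48.0, 3.0), (52.0, 4.0)])
print(round(mean, 1), round(err, 1))  # prints 49.4 2.4
```

The more precise line dominates the combined value, and the combined uncertainty is smaller than either input, which is why several progeny lines are pooled per nuclide.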
The results of selective cytogenetic monitoring of Chernobyl accident victims in the Ukraine
International Nuclear Information System (INIS)
Pilinskaya, M.A.
1996-01-01
Selective cytogenetic monitoring of the highest-priority groups of Chernobyl disaster victims has been carried out since 1987. In 1992-1993, 125 liquidators (irradiated mainly in 1986) and 42 persons recovering from acute radiation sickness of the second and third degrees of severity were examined. Cytogenetic effects (an elevated level of unstable as well as stable markers of radiation exposure) were found in all groups, and showed a positive correlation with the initial degree of irradiation severity even 6-7 y after the accident. Comparative scoring of conventional staining vs. G-banding in 10 liquidators showed an identical rate of unstable aberrations. At the same time, the yield of stable aberrations for G-banded slides exceeded the frequency for conventional staining. In order to study the possible mutagenic activity of chronic low levels of irradiation, cytogenetic monitoring of some critical groups of the population (especially children and the occupational groups of tractor drivers and foresters) living in areas of the Ukraine contaminated by radionuclides was carried out. In all the examined groups, a significant increase in the frequency of aberrant metaphases, chromosome aberrations (both unstable and stable), and chromatid aberrations was observed. Data gathered from groups of children reflect the intensity of mutagenic impact on the studied populations and demonstrate a positive correlation with the duration of exposure. Results of the cytogenetic examination of adults confirmed the importance of considering the contribution of occupational radiation exposure to the genetic effects of Chernobyl accident factors on the population of contaminated areas. 17 refs., 3 tabs
Energy Technology Data Exchange (ETDEWEB)
Fisk, W.J.; Faulkner, D.; Sullivan, D. [and others]
1998-02-17
To test proposed methods for reducing SBS symptoms and to learn about the causes of these symptoms, a double-blind controlled intervention study was designed and implemented. This study utilized two different interventions designed to reduce occupants' exposures to airborne particles: (1) high efficiency filters in the building's HVAC systems; and (2) thorough cleaning of carpeted floors and fabric-covered chairs with an unusually powerful vacuum cleaner. The study population was the workers on the second and fourth floors of a large office building with mechanical ventilation, air conditioning, and sealed windows. Interventions were implemented on one floor while the occupants on the other floor served as a control group. For the enhanced-filtration intervention, a multiple crossover design was used (a crossover is a repeat of the experiment with the former experimental group as the control group and vice versa). Demographic and health symptom data were collected via an initial questionnaire on the first study week and health symptom data were obtained each week, for eight additional weeks, via weekly questionnaires. A large number of indoor environmental parameters were measured during the study including air temperatures and humidities, carbon dioxide concentrations, particle concentrations, concentrations of several airborne bioaerosols, and concentrations of several microbiologic compounds within the dust sampled from floors and chairs. This report describes the study methods and summarizes the results of selected environmental measurements.
Fruit, Michel; Gussarov, Andrei; Berghmans, Francis; Doyle, Dominic; Ulbrich, Gerd
2017-11-01
It is well known within the space optics community that radiation may significantly affect the transmittance of glasses. To overcome this drawback, glass manufacturers have developed Cerium-doped counterparts of classical glasses. These doped glasses display much lower transmittance sensitivity to radiation. Still, the impact of radiation on refractive index is less well known and may affect classical and Cerium-doped glasses alike. ESTEC has initiated an R&D programme with the aim of establishing a comprehensive database gathering radiation sensitivity data, called dose coefficients, for all the glass optical parameters (transmittance, refractive index, compaction, etc.). The first part of this study, to define the methodology for such a database, is run by ASTRIUM SAS in co-operation with SCK CEN. This covers theoretical studies associated with testing of a selected set of classical and "radiation hardened" glasses. It is proposed here to present first the theoretical background of this study and then to give the results which have been obtained so far.
Modeling Results For the ITER Cryogenic Fore Pump. Final Report
Energy Technology Data Exchange (ETDEWEB)
Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)
2014-03-31
A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. In addition, the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model's development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.
Fuel assembly bow: analytical modeling and resulting design improvements
International Nuclear Information System (INIS)
Stabel, J.; Huebsch, H.P.
1995-01-01
The bowing of fuel assemblies may result in contact between neighbouring fuel assemblies and, in combination with vibration, in wear or even perforation at the corners of the spacer grids of neighbouring assemblies. Such events allowed reinsertion of a few fuel assemblies in Germany only after spacer repair. In order to identify the most sensitive parameters causing the observed bowing of fuel assemblies, a new computer model was developed which takes into account the highly nonlinear behaviour of the interaction between fuel rods and spacers. As a result of the studies performed with this model, design improvements, such as a more rigid connection between guide thimbles and spacer grids, could be defined. First experience with this improved design shows significantly better fuel behaviour. (author). 5 figs., 1 tab.
Model unspecific search in CMS. Results at 8 TeV
Energy Technology Data Exchange (ETDEWEB)
Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Knutzen, Simon; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)
2016-07-01
In the year 2012, CMS collected a total data set of approximately 20 fb{sup -1} in proton-proton collisions at √(s)=8 TeV. Dedicated searches for physics beyond the standard model are commonly designed with the signatures of a given theoretical model in mind. While this approach allows for an optimised sensitivity to the sought-after signal, it may cause unexpected phenomena to be overlooked. In a complementary approach, the Model Unspecific Search in CMS (MUSiC) analyses CMS data in a general way. Depending on the reconstructed final state objects (e.g. electrons), collision events are sorted into classes. In each of the classes, the distributions of selected kinematic variables are compared to standard model simulation. An automated statistical analysis is performed to quantify the agreement between data and prediction. In this talk, the analysis concept is introduced and selected results of the analysis of the 2012 CMS data set are presented.
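Quantifying the agreement between an event class and the standard-model prediction, as MUSiC does, reduces at its simplest to a Poisson tail probability. This sketch is our own simplification: it handles only excesses and ignores systematic uncertainties and deficit scans, which the real analysis includes:

```python
import math

def poisson_p_value(observed, expected):
    """Probability of observing at least `observed` events when the
    standard-model simulation predicts a mean of `expected`:
    P(N >= observed) = 1 - sum_{k < observed} e^-mu mu^k / k!"""
    cdf = sum(math.exp(-expected) * expected ** k / math.factorial(k)
              for k in range(observed))
    return 1.0 - cdf

# Hypothetical event class, e.g. two electrons plus missing energy:
p = poisson_p_value(observed=12, expected=6.4)
print(round(p, 4))
```

In a scan over thousands of classes and kinematic regions, such per-region p-values must additionally be corrected for the look-elsewhere effect before any excess can be called significant.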
Wöhling, T.; Schöniger, A.; Geiges, A.; Nowak, W.; Gayler, S.
2013-12-01
The objective selection of appropriate models for realistic simulations of coupled soil-plant processes is a challenging task since the processes are complex, not fully understood at larger scales, and highly non-linear. Also, comprehensive data sets are scarce, and measurements are uncertain. In the past decades, a variety of different models have been developed that exhibit a wide range of complexity regarding their approximation of processes in the coupled model compartments. We present a method for evaluating experimental design for maximum confidence in the model selection task. The method considers uncertainty in parameters, measurements and model structures. Advancing the ideas behind Bayesian Model Averaging (BMA), we analyze the changes in posterior model weights and posterior model choice uncertainty when more data are made available. This allows assessing the power of different data types, data densities and data locations in identifying the best model structure from among a suite of plausible models. The models considered in this study are the crop models CERES, SUCROS, GECROS and SPASS, which are coupled to identical routines for simulating soil processes within the modelling framework Expert-N. The four models considerably differ in the degree of detail at which crop growth and root water uptake are represented. Monte-Carlo simulations were conducted for each of these models considering their uncertainty in soil hydraulic properties and selected crop model parameters. Using a Bootstrap Filter (BF), the models were then conditioned on field measurements of soil moisture, matric potential, leaf-area index, and evapotranspiration rates (from eddy-covariance measurements) during a vegetation period of winter wheat at a field site at the Swabian Alb in Southwestern Germany. Following our new method, we derived model weights when using all data or different subsets thereof. We discuss to which degree the posterior mean outperforms the prior mean and all
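The Bootstrap Filter conditioning step described above re-weights Monte-Carlo realizations by the likelihood of a field measurement. A minimal sketch with invented numbers (a full filter would also resample the ensemble):

```python
import math

def bootstrap_filter_weights(predictions, observation, obs_sigma):
    """One conditioning step of a Bootstrap Filter: weight each
    Monte-Carlo realization by the Gaussian likelihood of the
    observation, then normalize."""
    likes = [math.exp(-0.5 * ((p - observation) / obs_sigma) ** 2)
             for p in predictions]
    total = sum(likes)
    return [l / total for l in likes]

# Hypothetical ensemble of simulated soil-moisture values vs one measurement:
sims = [0.21, 0.25, 0.31, 0.28, 0.35]
w = bootstrap_filter_weights(sims, observation=0.27, obs_sigma=0.02)
print([round(x, 3) for x in w])
```

Summing such weights per model (rather than per realization) is what turns the conditioned ensemble into the model weights discussed above.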
Model Selection in the Analysis of Photoproduction Data
Landay, Justin
2017-01-01
Scattering experiments provide one of the most powerful and useful tools for probing matter to better understand its fundamental properties governed by the strong interaction. As the spectroscopy of the excited states of nucleons enters a new era of precision ushered in by improved experiments at Jefferson Lab and other facilities around the world, traditional partial-wave analysis methods must be adjusted accordingly. In this poster, we present a rigorous set of statistical tools and techniques that we implemented; most notably, the LASSO method, which selects the simplest adequate model and allows us to avoid overfitting. In the case of establishing the spectrum of excited baryons, it avoids overpopulation of the spectrum and thus the occurrence of false positives. This is a prerequisite to reliably comparing theories like lattice QCD or quark models to experiments. Here, we demonstrate the principle by simultaneously fitting three observables in neutral pion photoproduction, namely the differential cross section, beam asymmetry and target polarization, across thousands of data points. Other authors include Michael Doring, Bin Hu, and Raquel Molina.
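The LASSO idea invoked above, an L1 penalty that drives superfluous couplings exactly to zero, can be sketched with cyclic coordinate descent on a toy regression (our own illustration, not the photoproduction fit itself):

```python
def lasso_coordinate_descent(X, y, lam, n_iter=100):
    """LASSO via cyclic coordinate descent with soft-thresholding.

    Minimizes (1/2n)*||y - Xb||^2 + lam*||b||_1; the L1 penalty sets
    unhelpful coefficients exactly to zero, which is what makes LASSO
    usable as a model-selection device."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * b[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            if rho > lam:            # soft-thresholding operator
                b[j] = (rho - lam) / z
            elif rho < -lam:
                b[j] = (rho + lam) / z
            else:
                b[j] = 0.0
    return b

# y depends on the first feature only; the second is pure noise:
X = [[1.0, 0.3], [2.0, -0.1], [3.0, 0.2], [4.0, -0.4], [5.0, 0.1]]
y = [2.0, 4.1, 5.9, 8.2, 9.8]
b = lasso_coordinate_descent(X, y, lam=0.5)
print([round(c, 2) for c in b])
```

In the partial-wave context the "features" are resonance couplings, so a coefficient forced to zero corresponds to a state removed from the spectrum, which is how the method suppresses false positives.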
muMAB: A Multi-Armed Bandit Model for Wireless Network Selection
Directory of Open Access Journals (Sweden)
Stefano Boldrini
2018-01-01
Full Text Available Multi-armed bandit (MAB) models are a viable approach to describing the problem of best wireless network selection by a multi-Radio Access Technology (multi-RAT) device, with the goal of maximizing the quality perceived by the final user. The classical MAB model does not, however, properly describe the problem of wireless network selection by a multi-RAT device, in which a device typically performs a set of measurements to collect information on available networks before a selection takes place. The classical MAB model in fact allows only one possible action for the player, the selection of one among different arms at each time step; existing arm-selection algorithms thus mainly differ in the rule according to which a specific arm is selected. This work proposes a new MAB model, named measure-use-MAB (muMAB), aiming to provide higher flexibility, and thus better accuracy, in describing the network selection problem. The muMAB model extends the classical MAB model in two ways: first, it allows two different actions, to measure and to use; second, it allows actions to span multiple time steps. Two new algorithms designed to take advantage of the higher flexibility provided by the muMAB model are also introduced. The first, referred to as measure-use-UCB1 (muUCB1), is derived from the well-known UCB1 algorithm, while the second, referred to as Measure with Logarithmic Interval (MLI), is designed specifically for the new model to take advantage of the new measure action while aggressively using the best arm. The new algorithms are compared against existing ones from the literature in the context of the muMAB model, by means of computer simulations using both synthetic and captured data. Results show that the performance of the algorithms heavily depends on the Probability Density Function (PDF) of the reward received on each arm, with different algorithms leading to the best performance depending on the PDF
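For reference, the classical UCB1 rule that muUCB1 extends can be sketched as follows. This is an illustrative implementation of plain UCB1 only; the paper's muMAB variants, with their separate measure and use actions spanning multiple time steps, are not reproduced here:

```python
import math

def ucb1(n_arms, pull, horizon):
    """Classical UCB1: play each arm once, then pick the arm maximizing
    empirical mean + sqrt(2 ln t / n_a) (exploration bonus)."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(horizon):
        if t < n_arms:
            arm = t  # initialization: play each arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: means[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        r = pull(arm)  # reward, e.g. measured link quality
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running mean
    return counts, means
```

In the network-selection reading, each arm is a candidate network and the reward is the quality obtained by using it; UCB1 conflates measuring and using, which is exactly the limitation muMAB addresses.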
Energy Technology Data Exchange (ETDEWEB)
De Lucia, Frank C., E-mail: frank.delucia@us.army.mil; Gottfried, Jennifer L.
2011-02-15
Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser-induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate them through the model, so that one can make a predictive estimate with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of nuclear reactor models. We employ this simple heat model to illustrate verification
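One simple proxy for the parameter-selection step described above is to rank parameters by the local sensitivity of the model response and flag low-sensitivity parameters as candidates for removal. The sketch below is an assumption-laden illustration (finite-difference sensitivities, a generic model callable), not the dissertation's verified method:

```python
import numpy as np

def sensitivity_ranking(model, theta, h=1e-6):
    """Rank parameters by ||d model / d theta_i|| (forward differences).

    model: callable mapping a parameter vector to a response vector.
    Returns parameter indices ordered from most to least influential;
    the tail of the ranking lists candidates for removal.
    """
    base = np.asarray(model(theta))
    scores = []
    for i in range(len(theta)):
        tp = np.array(theta, dtype=float)
        tp[i] += h
        scores.append(np.linalg.norm((np.asarray(model(tp)) - base) / h))
    return np.argsort(scores)[::-1]
```

A parameter whose perturbation barely moves the response cannot be identified from observations of that response, which is the phenomenon the dissertation's parameter-selection step formalizes.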
Fuzzy Multicriteria Model for Selection of Vibration Technology
Directory of Open Access Journals (Sweden)
María Carmen Carnero
2016-01-01
The benefits of applying a vibration analysis program have been well known for decades. A large number of contributions have been produced discussing new diagnostic, signal treatment, technical parameter analysis, and prognosis techniques. However, to obtain the expected benefits from a vibration analysis program, it is necessary to choose the instrumentation which guarantees the best results. Despite its importance, there are no models in the literature to assist in taking this decision. This research describes an objective model using the Fuzzy Analytic Hierarchy Process (FAHP) to choose the most suitable technology among portable vibration analysers. The aim is to create an easy-to-use model for processing, manufacturing, services, and research organizations, to guarantee adequate decision-making in the choice of vibration analysis technology. The model described recognises that judgements are often based on ambiguous, imprecise, or inadequate information that cannot provide precise values. The model incorporates judgements from several decision-makers who are experts in the field of vibration analysis, maintenance, and electronic devices. The model has been applied to a Health Care Organization.
Methodology and Results of Mathematical Modelling of Complex Technological Processes
Mokrova, Nataliya V.
2018-03-01
The methodology of system analysis allows us to derive a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product is confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate, and temperature in order to achieve the optimal mode of hardening. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.
Modeling vertical loads in pools resulting from fluid injection. [BWR
Energy Technology Data Exchange (ETDEWEB)
Lai, W.; McCauley, E.W.
1978-06-15
Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peachbottom Mark I boiling water reactor containment system. The results guided subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high-frequency oscillation superimposed on a low-frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.
Some results on the dynamics generated by the Bazykin model
Directory of Open Access Journals (Sweden)
Georgescu, R M
2006-07-01
A predator-prey model formerly proposed by A. Bazykin et al. [Bifurcation diagrams of planar dynamical systems (1985)] is analyzed in the case when two of the four parameters are kept fixed. Dynamics and bifurcation results are deduced by using the methods developed by D. K. Arrowsmith and C. M. Place [Ordinary differential equations (1982)], S.-N. Chow et al. [Normal forms and bifurcation of planar vector fields (1994)], Y. A. Kuznetsov [Elements of applied bifurcation theory (1998)], and A. Georgescu [Dynamic bifurcation diagrams for some models in economics and biology (2004)]. The global dynamic bifurcation diagram is constructed and graphically represented. The biological interpretation is presented, too.
PRESEMO - a predictive model of codend selectivity - a tool for fishery managers
DEFF Research Database (Denmark)
O'Neill, F.G.; Herrmann, Bent
2007-01-01
parameters are expressed in terms of the gear design parameters and in terms of both catch size and gear design parameters. The potential use of these results in a management context and for the development of more selective gears is highlighted by plotting iso-L50 and iso-SR curves used to identify gear … design parameters that give equal estimates of the 50% retention length (L50) and the selection range (SR), respectively. It is emphasized that this approach can be extended to consider the influence of other design parameters and, if sufficient relevant quantitative information exists, biological and behavioural … parameters. As such, the model presented here will provide a better understanding of the selection process, permit a more targeted approach to codend selectivity experiments, and assist fishery managers in assessing the impact of proposed technical measures that are introduced to reduce the catch of undersized …
Results of the eruptive column model inter-comparison study
Costa, Antonio; Suzuki, Yujiro; Cerminara, M.; Devenish, Ben J.; Esposti Ongaro, T.; Herzog, Michael; Van Eaton, Alexa; Denby, L.C.; Bursik, Marcus; de' Michieli Vitturi, Mattia; Engwell, S.; Neri, Augusto; Barsotti, Sara; Folch, Arnau; Macedonio, Giovanni; Girault, F.; Carazzo, G.; Tait, S.; Kaminski, E.; Mastin, Larry G.; Woodhouse, Mark J.; Phillips, Jeremy C.; Hogg, Andrew J.; Degruyter, Wim; Bonadonna, Costanza
2016-01-01
This study compares and evaluates one-dimensional (1D) and three-dimensional (3D) numerical models of volcanic eruption columns in a set of different inter-comparison exercises. The exercises were designed as a blind test in which a set of common input parameters was given for two reference eruptions, representing a strong and a weak eruption column under different meteorological conditions. Comparing the results of the different models allows us to evaluate their capabilities and target areas for future improvement. Despite their different formulations, the 1D and 3D models provide reasonably consistent predictions of some of the key global descriptors of the volcanic plumes. Variability in plume height, estimated from the standard deviation of model predictions, is within ~ 20% for the weak plume and ~ 10% for the strong plume. Predictions of neutral buoyancy level are also in reasonably good agreement among the different models, with a standard deviation ranging from 9 to 19% (the latter for the weak plume in a windy atmosphere). Overall, these discrepancies are in the range of observational uncertainty of column height. However, there are important differences amongst models in terms of local properties along the plume axis, particularly for the strong plume. Our analysis suggests that the simplified treatment of entrainment in 1D models is adequate to resolve the general behaviour of the weak plume. However, it is inadequate to capture complex features of the strong plume, such as large vortices, partial column collapse, or gravitational fountaining that strongly enhance entrainment in the lower atmosphere. We conclude that there is a need to more accurately quantify entrainment rates, improve the representation of plume radius, and incorporate the effects of column instability in future versions of 1D volcanic plume models.
A simple model of group selection that cannot be analyzed with inclusive fitness
van Veelen, M.; Luo, S.; Simon, B.
2014-01-01
A widespread claim in evolutionary theory is that every group selection model can be recast in terms of inclusive fitness. Although there are interesting classes of group selection models for which this is possible, we show that it is not true in general. With a simple set of group selection models,
The effect of mis-specification on mean and selection between the Weibull and lognormal models
Jia, Xiang; Nadarajah, Saralees; Guo, Bo
2018-02-01
The lognormal and Weibull models are commonly used to analyse data. Although selection procedures have been extensively studied, it is possible that the lognormal model could be selected when the true model is Weibull, or vice versa. As the mean is important in applications, we focus on the effect of mis-specification on the mean. The effect on the lognormal mean is first considered when a lognormal sample is wrongly fitted by a Weibull model. The maximum likelihood estimate (MLE) and quasi-MLE (QMLE) of the lognormal mean are obtained based on the lognormal and Weibull models. Then, the impact is evaluated by computing the ratio of biases and the ratio of mean squared errors (MSEs) between the MLE and QMLE. For completeness, the theoretical results are demonstrated by simulation studies. Next, the effect of the reverse mis-specification on the Weibull mean is discussed. It is found that the ratio of biases and the ratio of MSEs are independent of the location and scale parameters of the lognormal and Weibull models. The influence can be ignored if some special conditions hold. Finally, a model selection method is proposed by comparing the ratios concerning biases and MSEs. We also present published data to illustrate the study in this paper.
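The competing fits at the core of this mis-specification question can be illustrated by maximum-likelihood estimation of both models on the same sample. The fixed-point iteration for the Weibull shape and the log-likelihood comparison below are a standard textbook sketch, not the paper's ratio-based selection procedure:

```python
import math
import numpy as np

def lognormal_mle(x):
    """MLE of a lognormal sample: mu, sigma of log(x), plus log-likelihood."""
    lx = np.log(x)
    mu, sig = lx.mean(), lx.std()
    ll = (-len(x) * math.log(sig * math.sqrt(2 * math.pi))
          - lx.sum() - ((lx - mu) ** 2).sum() / (2 * sig ** 2))
    return mu, sig, ll

def weibull_mle(x, iters=200):
    """MLE of a Weibull sample via the standard fixed-point for the shape k:
    1/k = sum(x^k ln x)/sum(x^k) - mean(ln x)."""
    lx = np.log(x)
    k = 1.0
    for _ in range(iters):
        k = 1.0 / ((x ** k * lx).sum() / (x ** k).sum() - lx.mean())
    lam = ((x ** k).mean()) ** (1.0 / k)  # scale MLE given shape k
    n = len(x)
    ll = (n * math.log(k) - n * k * math.log(lam)
          + (k - 1) * lx.sum() - ((x / lam) ** k).sum())
    return k, lam, ll
```

The fitted means follow as exp(mu + sigma^2/2) for the lognormal and lam * Gamma(1 + 1/k) for the Weibull; fitting the wrong family biases exactly these quantities, which is the effect the paper quantifies.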
The Sim-SEQ Project: Comparison of Selected Flow Models for the S-3 Site
Energy Technology Data Exchange (ETDEWEB)
Mukhopadhyay, Sumit; Doughty, Christine A.; Bacon, Diana H.; Li, Jun; Wei, Lingli; Yamamoto, Hajime; Gasda, Sarah E.; Hosseini, Seyyed; Nicot, Jean-Philippe; Birkholzer, Jens
2015-05-23
Sim-SEQ is an international initiative on model comparison for geologic carbon sequestration, with an objective to understand and, if possible, quantify model uncertainties. Model comparison efforts in Sim-SEQ are at present focusing on one specific field test site, hereafter referred to as the Sim-SEQ Study site (or S-3 site). Within Sim-SEQ, different modeling teams are developing conceptual models of CO2 injection at the S-3 site. In this paper, we select five flow models of the S-3 site and provide a qualitative comparison of their attributes and predictions. These models are based on five different simulators or modeling approaches: TOUGH2/EOS7C, STOMP-CO2e, MoReS, TOUGH2-MP/ECO2N, and VESA. In addition to model-to-model comparison, we perform a limited model-to-data comparison, and illustrate how model choices impact model predictions. We conclude the paper by making recommendations for model refinement that are likely to result in less uncertainty in model predictions.
DISCRETE DEFORMATION WAVE DYNAMICS IN SHEAR ZONES: PHYSICAL MODELLING RESULTS
Directory of Open Access Journals (Sweden)
S. A. Bornyakov
2016-01-01
Observations of earthquake migration along active fault zones [Richter, 1958; Mogi, 1968] and related theoretical concepts [Elsasser, 1969] have laid the foundation for studying the problem of slow deformation waves in the lithosphere. Despite the fact that this problem has been under study for several decades and discussed in numerous publications, convincing evidence for the existence of deformation waves is still lacking. One of the causes is that comprehensive field studies to register such waves with special tools and equipment, which require sufficient organizational and technical resources, have not yet been conducted. The authors attempted to find a solution to this problem by physical simulation of a major shear zone in an elastic-viscous-plastic model of the lithosphere. The experiment setup is shown in Figure 1 (A). The model material and boundary conditions were specified in accordance with the similarity criteria (described in detail in [Sherman, 1984; Sherman et al., 1991; Bornyakov et al., 2014]). The montmorillonite clay-and-water paste was placed evenly on the two stamps of the installation and subjected to deformation as the active stamp (1) moved relative to the passive stamp (2) at a constant speed. The upper model surface was covered with fine sand in order to obtain high-contrast photos. Photos of the emerging shear zone were taken every second by a Basler acA2000-50gm digital camera. Figure 1 (B) shows an optical image of a fragment of the shear zone. The photos were processed by the digital image correlation method described in [Sutton et al., 2009]. This method estimates the distribution of components of displacement vectors and strain tensors on the model surface and their evolution over time [Panteleev et al., 2014, 2015]. Strain fields and displacements recorded in the optical images of the model surface were estimated in a rectangular box (220.00×72.17 mm) shown by a dot-and-dash line in Fig. 1 (A). To ensure a sufficient level of
Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests
Energy Technology Data Exchange (ETDEWEB)
Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-07
The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One of the purposes of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.
Directory of Open Access Journals (Sweden)
Kadhim Raheem Erzaij
2016-06-01
Engineering equipment is an essential part of a construction project and is usually manufactured with long lead times, large costs, and special engineering requirements. The construction manager wants that equipment to be delivered on the site-need date in the right quantity, at an appropriate cost, and with the required quality, and this calls for an efficient supplier who can satisfy these targets. Selection of an engineering equipment supplier is a crucial managerial process. It requires evaluation of multiple suppliers according to multiple criteria. This process is usually performed manually and based on just a limited set of evaluation criteria, so better alternatives may be neglected. Three stages of a survey comprising a number of public and private companies in the Iraqi construction sector were employed to identify the main criteria and sub-criteria for supplier selection and their priorities. The main criteria identified were quality of product, commercial aspect, delivery, reputation and position, and system quality. An effective technique in multiple criteria decision making (MCDM), the analytic hierarchy process (AHP), has been used to obtain importance weights of the criteria based on expert judgment. Thereafter, a management software system for Evaluation and Selection of Engineering Equipment Suppliers (ESEES) has been developed based on the results obtained from the AHP. This model was validated in a case study at the municipality of Baghdad involving actual cases of selecting pump suppliers for infrastructure projects. According to experts, this model can improve the current process followed in supplier selection and aid decision makers in adopting better choices in the domain of selecting engineering equipment suppliers.
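The AHP weighting step used here can be sketched via the principal eigenvector of a pairwise comparison matrix, with Saaty's consistency ratio as a sanity check. This is an illustrative implementation, not the ESEES system's code, and the random-index table is truncated to small matrix sizes:

```python
import numpy as np

def ahp_weights(A):
    """Criteria weights from a reciprocal pairwise comparison matrix A.

    Returns (weights, consistency_ratio); CR < 0.1 is the usual
    acceptability threshold for expert judgements.
    """
    A = np.asarray(A, float)
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)              # principal eigenvalue lambda_max
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                          # normalize to sum to 1
    n = A.shape[0]
    ci = (vals[i].real - n) / (n - 1)     # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n)  # Saaty random indices (subset)
    cr = ci / ri if ri else None
    return w, cr
```

For example, a judgement matrix stating that "quality of product" moderately dominates "delivery" and strongly dominates "commercial aspect" yields a weight vector with the corresponding ordering, and an inconsistent set of judgements is flagged by CR exceeding 0.1.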
CSIR Research Space (South Africa)
Mbawala, SJ
2017-12-01
Three soil samples commonly found on construction sites in Tanzania were sampled and submitted to the five selected laboratories, which were requested to perform the foundation indicator tests (particle size distribution, liquid limit and plastic limit...
Results of the Selection of Breeding Samples of Carrot Based on Biochemical Composition
V. K. Cherkasova; O. N. Shabetya
2014-01-01
Twelve samples of carrot were analyzed for biochemical components in roots. Five genotypes with high contents of vitamin C, β-carotene, and total sugar were selected as genetic sources of high biochemical components.
Interaction between subducting plates: results from numerical and analogue modeling
Kiraly, Agnes; Capitanio, Fabio A.; Funiciello, Francesca; Faccenna, Claudio
2016-04-01
The tectonic setting of the Alpine-Mediterranean area was achieved during the late Cenozoic subduction, collision, and suturing of several oceanic fragments and continental blocks. In this stage, processes such as interactions among subducting slabs, slab migrations, and related mantle flow played a relevant role in the resulting tectonics. Here, we use numerical models to first address the mantle flow characteristics in 3D. During the subduction of a single plate, the strength of the return flow strongly depends on the slab pull force, that is, on the plate's buoyancy; however, the physical properties of the slab, such as density, viscosity, or width, do not greatly affect the morphology of the toroidal cell. Instead, dramatic effects on the geometry and dynamics of the toroidal cell result in models where the thickness of the mantle is varied. The vertical component of the vorticity vector is used to define the characteristic size of the toroidal cell, which is ~1.2-1.3 times the mantle depth. This latter defines the range of viscous stress propagation through the mantle and consequent interactions with other slabs. We thus further investigate a setup in which two separate lithospheric plates subduct in opposite senses, developing opposite polarities and convergent slab retreat, and model different initial sideways distances between the plates. The stress profiles in time illustrate that the plates interact when the slabs are at the characteristic distance and the two slab toroidal cells merge. Increased stress and delayed slab migration are the results. Analogue models of double-sided subduction show a similar maximum distance and allow testing the additional role of stress propagated through the plates. We use a silicone plate subducting at its two opposite margins, which is either homogeneous or comprises oceanic and continental lithospheres differing in buoyancy. The modeling results show that the double-sided subduction is strongly affected by changes in plate
Modeling of cesium sorption on biotite using cation exchange selectivity coefficients
Energy Technology Data Exchange (ETDEWEB)
Kylloenen, Jarkko; Hakanen, Martti; Harjula, Risto; Lehto, Jukka [Helsinki Univ. (Finland). Lab. of Radiochemistry; Lindberg, Antero [Geological Survey of Finland, Espoo (Finland); Vehkamaeki, Marko [Helsinki Univ. (Finland). Lab. of Inorganic Chemistry
2014-07-01
For the modeling of cesium sorption on biotite, samples of natural biotite separated from gneissic rocks were converted into monoionic potassium, sodium, and calcium forms, and sorption isotherms for Cs/K, Cs/Na and Cs/Ca exchange were determined at pH 6 and 8 in 10^-4 to 10^-8 M Cs solutions. Selectivity coefficients for Cs/K, Cs/Na, and Cs/Ca ion exchange reactions were calculated from the isotherm data, using the Gaines-Thomas convention. At Cs loadings below 1% of the total ion exchange capacity, the overall selectivity coefficient for Cs/Ca exchange was approximately five and seven orders of magnitude higher than those for Cs/Na and Cs/K exchange, respectively. Based on the selectivity coefficients, the ion exchange isotherms were modeled with the U.S. Geological Survey PhreeqC program, assuming three different types of ion exchange site: sites on the basal planes on biotite crystal surfaces with 95% site abundance, probable interlayer sites on crystal edges [frayed edge sites (FESs)] (0.02%) and third-type sites (5%), the physical background of which is unclear. Of these three types, the FES sites were superior in Cs selectivity, while the planar sites exhibited the lowest selectivity, and the third-type sites had selectivity between these two. The functionality of the model was successfully verified by modeling the Cs sorption isotherms on crushed mica gneiss rock in saline groundwater. Determination of the exchangeable ions K, Na, Ca, and Cs on the basal plane and edge surfaces by scanning electron microscopy-energy-dispersive x-ray spectroscopy (SEM-EDX) supports the results of modeling: edge sites highly prefer Cs ions and also Ca and Na ions but not K ions.
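Under the Gaines-Thomas convention used in this study, a homovalent selectivity coefficient such as K(Cs/K) is formed from equivalent fractions on the exchanger and aqueous concentrations. The one-liner below is an illustrative simplification that ignores activity corrections and the multi-site structure of the actual PhreeqC model:

```python
def gaines_thomas_cs_k(e_cs, e_k, c_cs, c_k):
    """Gaines-Thomas selectivity coefficient for Cs+ + K-X -> Cs-X + K+.

    e_cs, e_k: equivalent fractions of Cs and K on the exchanger (solid);
    c_cs, c_k: aqueous molar concentrations (activities assumed equal to
    concentrations for this sketch).
    """
    return (e_cs * c_k) / (e_k * c_cs)
```

A large value means the exchanger strongly prefers Cs over K at that loading; in the study, this preference differs by orders of magnitude between the frayed edge sites and the basal-plane sites.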
First experiments results about the engineering model of Rapsodie
International Nuclear Information System (INIS)
Chalot, A.; Ginier, R.; Sauvage, M.
1964-01-01
This report deals with the first series of experiments carried out on the engineering model of Rapsodie and on an associated sodium facility set up in a laboratory hall at Cadarache. More precisely, it covers: 1/ - The difficulties encountered during the erection and assembly of the engineering model, and a compilation of the results of the first series of experiments and tests carried out on this installation (loading of the subassemblies, preheating, thermal shocks...). 2/ - The experiments and tests carried out on the two prototype control rod drive mechanisms, which led to the choice of the design of the definitive drive mechanism. As a whole, the results proved the validity of the general design principles adopted for Rapsodie. (authors) [fr
Workshop to transfer VELMA watershed model results to ...
An EPA Western Ecology Division (WED) watershed modeling team has been working with the Snoqualmie Tribe Environmental and Natural Resources Department to develop VELMA watershed model simulations of the effects of historical and future restoration and land use practices on streamflow, stream temperature, and other habitat characteristics affecting threatened salmon populations in the 100 square mile Tolt River watershed in Washington state. To date, the WED group has fully calibrated the watershed model to simulate Tolt River flows with a high degree of accuracy under current and historical conditions and practices, and is in the process of simulating long-term responses to specific watershed restoration practices conducted by the Snoqualmie Tribe and partners. On July 20-21 WED Researchers Bob McKane, Allen Brookes and ORISE Fellow Jonathan Halama will be attending a workshop at the Tolt River site in Carnation, WA, to present and discuss modeling results with the Snoqualmie Tribe and other Tolt River watershed stakeholders and land managers, including the Washington Departments of Ecology and Natural Resources, U.S. Forest Service, City of Seattle, King County, and representatives of the Northwest Indian Fisheries Commission. The workshop is being co-organized by the Snoqualmie Tribe, EPA Region 10 and WED. The purpose of this 2-day workshop is two-fold. First, on Day 1, the modeling team will perform its second site visit to the watershed, this time focus
Meteorological uncertainty of atmospheric dispersion model results (MUD)
International Nuclear Information System (INIS)
Havskov Soerensen, J.; Amstrup, B.; Feddersen, H.
2013-08-01
The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can also be utilised for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from, e.g., limits in the meteorological observations used to initialise meteorological forecast series. By perturbing, e.g., the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed, from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
Selection of antioxidants against ovarian oxidative stress in mouse model.
Li, Bojiang; Weng, Qiannan; Liu, Zequn; Shen, Ming; Zhang, Jiaqing; Wu, Wangjun; Liu, Honglin
2017-12-01
Oxidative stress (OS) plays an important role in the process of ovarian granulosa cell apoptosis and follicular atresia. The aim of this study was to select antioxidants against OS in ovary tissue. First, we chose six antioxidants and analyzed the reactive oxygen species (ROS) level in ovary tissue. The results showed that proanthocyanidins, gallic acid, curcumin, and carotene decrease the ROS level compared with the control group. We further demonstrated that both proanthocyanidins and gallic acid increase antioxidant enzyme activity. Moreover, no change in the ROS level was observed in the proanthocyanidins and gallic acid groups in brain, liver, spleen, and kidney tissues. Finally, we found that proanthocyanidins and gallic acid inhibit pro-apoptotic gene expression in granulosa cells. Taken together, proanthocyanidins and gallic acid may be the most acceptable and optimal antioxidants specifically against ovarian OS and may also be involved in the inhibition of granulosa cell apoptosis in the mouse ovary. © 2017 Wiley Periodicals, Inc.
Some results on hyperscaling in the 3D Ising model
Energy Technology Data Exchange (ETDEWEB)
Baker, G.A. Jr. [Los Alamos National Lab., NM (United States). Theoretical Div.; Kawashima, Naoki [Univ. of Tokyo (Japan). Dept. of Physics
1995-09-01
The authors review exact studies on finite-sized 2-dimensional Ising models and show that the point for an infinite-sized model at the critical temperature is a point of nonuniform approach in the temperature-size plane. They also illuminate some strong effects of finite size on quantities which do not diverge at the critical point. They then review Monte Carlo studies for 3-dimensional Ising models of various sizes (L = 2-100) at various temperatures. From these results they find that the data for the renormalized coupling constant collapse nicely when plotted against the correlation length, determined in a system of edge length L, divided by L. They also find that ζ_L/L ≥ 0.26 is definitely too large for reliable studies of the critical value, g*, of the renormalized coupling constant. They have reasonable evidence that ζ_L/L ≈ 0.1 is adequate for results that are within one percent of those for the infinite system size. On this basis, they have conducted a series of Monte Carlo calculations with this condition imposed. These calculations were made practical by the development of improved estimators for use in the Swendsen-Wang cluster method. The authors found from these results, coupled with a reversed-limit computation (size increases with the temperature fixed at the critical temperature), that g* > 0, although there may well be a sharp downward drop in g as the critical temperature is approached, in accord with the predictions of series analysis. The results support the validity of hyperscaling in the 3-dimensional Ising model.
The effects of modeling contingencies in the treatment of food selectivity in children with autism.
Fu, Sherrene B; Penrod, Becky; Fernand, Jonathan K; Whelan, Colleen M; Griffith, Kristin; Medved, Shannon
2015-11-01
The current study investigated the effectiveness of stating and modeling contingencies in increasing food consumption for two children with food selectivity. Results suggested that stating and modeling a differential reinforcement (DR) contingency for food consumption was effective in increasing consumption of two target foods for one child, and stating and modeling a DR plus nonremoval of the spoon contingency was effective in increasing consumption of the remaining food for the first child and all target foods for the second child. © The Author(s) 2015.
Fight against malnutrition (FAM): Selected results of 2006-2012 nutrition day survey in Poland.
Ostrowska, Joanna; Jeznach-Steinhagen, Anna
Prevalence of malnutrition among hospitalized patients is a common problem that increases morbidity and mortality rates. In response to this problem, the European Society for Clinical Nutrition and Metabolism (ESPEN) drew up an action plan to fight malnutrition and in 2004 created the global health project NutritionDay (nD), a single-day, population-based, standardized, multinational cross-sectional audit performed worldwide in hospitals and nursing homes. The aim was to present selected NutritionDay (nD) results from Poland describing the nutritional situation of hospitalized patients in 2006-2012 compared with other countries participating in the nD study. Data were collected in the nD study through voluntary participation all over the world over seven years, from 2006 to 2012. Data collection was performed at ward level by staff members and patients using standardized questionnaires. The data were analyzed by the Vienna coordinating centre using the Structured Query Language ("MySQL"), an open-source relational database management system, as well as the Statistical Analysis System version 9.2 (SAS). In Poland 2,830 patients were included in the study during the 7-year survey, while 5,597 units recruited 103,920 patients worldwide (nD reference). About 45% of the patients had lost weight within the last 3 months prior to admission (the same for the nD reference); 58.34% reported a decrease in eating during the last week (54.85% for the nD reference). Food intake at nD showed that 60.55% of the patients ate half to nothing of the served meal (58.37% for the nD reference). For both Poland and the other countries participating in the audit, on half of the hospital wards no action aimed at combating this phenomenon was reported at the time malnutrition was detected. Malnutrition of hospitalized patients in Poland was found to be comparable to the rest of the world. These results reflect the fact that malnutrition is a common issue among hospitalized
Challenges in validating model results for first year ice
Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent
2017-04-01
In order to assess the quality of model results for the distribution of first-year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed after ice formation as sea ice grows and deteriorates while it is advected inside the model domain. Ice that is younger than 365 days is classified as first-year ice. The fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions which includes "ice type", a representation of the separation between regions covered by first-year ice and those covered by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations, and on the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data, and on the product's confidence level, which have a strong seasonal dependency.
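The age-tracking scheme described above can be sketched as simple per-cell bookkeeping: ice-covered cells accumulate age, ice-free cells reset, and cells younger than 365 days count as first-year ice. The following is an illustrative sketch only, not the TOPAZ implementation; the function names and the flat cell-list representation are assumptions:

```python
FIRST_YEAR_MAX_DAYS = 365  # ice younger than this is classified as first-year

def step_ice_age(age, concentration, dt=1.0):
    """Advance per-cell ice age by one step: ice-covered cells age by dt
    days, ice-free cells reset to zero (new ice starts at age zero)."""
    return [a + dt if c > 0 else 0.0 for a, c in zip(age, concentration)]

def first_year_fraction(age, concentration):
    """Fraction of ice-covered cells holding first-year ice (< 365 days)."""
    covered = [a for a, c in zip(age, concentration) if c > 0]
    if not covered:
        return 0.0
    return sum(1 for a in covered if a < FIRST_YEAR_MAX_DAYS) / len(covered)

age = [100.0, 400.0, 0.0, 20.0]       # days since ice formation, per cell
conc = [0.9, 0.8, 0.0, 0.5]           # sea ice concentration, per cell
age = step_ice_age(age, conc)         # ages become [101.0, 401.0, 0.0, 21.0]
fyi = first_year_fraction(age, conc)  # 2 of 3 covered cells are first-year
```

In the real model this tracer is also advected with the ice drift; the sketch keeps only the local aging and classification rule.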
Energy Technology Data Exchange (ETDEWEB)
Rummukainen, M.; Raeisaenen, J.; Bringfelt, B.; Ullerstig, A.; Omstedt, A.; Willen, U.; Hansson, U.; Jones, C. [Rossby Centre, Swedish Meteorological and Hydrological Inst., Norrkoeping (Sweden)
2001-03-01
This work presents a regional climate model, the Rossby Centre regional Atmospheric model (RCA1), recently developed from the High Resolution Limited Area Model (HIRLAM). The changes in the HIRLAM parametrizations, necessary for climate-length integrations, are described. A regional Baltic Sea ocean model and a modeling system for the Nordic inland lake systems have been coupled with RCA1. The coupled system has been used to downscale 10-year time slices from two different general circulation model (GCM) simulations to provide high-resolution regional interpretation of large-scale modeling. A selection of the results from the control runs, i.e. the present-day climate simulations, are presented: large-scale free atmospheric fields, the surface temperature and precipitation results and results for the on-line simulated regional ocean and lake surface climates. The regional model modifies the surface climate description compared to the GCM simulations, but it is also substantially affected by the biases in the GCM simulations. The regional model also improves the representation of the regional ocean and the inland lakes, compared to the GCM results. (orig.)
Directory of Open Access Journals (Sweden)
Marcin Luczak
2014-01-01
This paper presents selected results and aspects of multidisciplinary and interdisciplinary research oriented towards the experimental and numerical study of the structural dynamics of a bend-twist coupled full-scale section of a wind turbine blade structure. The main goal of the research is to validate a finite element model of the modified wind turbine blade section mounted in the flexible support structure against the experimental results. Bend-twist coupling was implemented by adding angled unidirectional layers on the suction and pressure sides of the blade. Dynamic tests and simulations were performed on a section of a full-scale wind turbine blade provided by Vestas Wind Systems A/S. The numerical results are compared to the experimental measurements and the discrepancies are assessed by natural frequency differences and the modal assurance criterion. Based on sensitivity analysis, a set of model parameters was selected for the model updating process. Design of experiments and the response surface method were used to find values of the model parameters yielding results closest to the experimental ones. The updated finite element model produces results more consistent with the measurement outcomes.
Thermal-Chemical Model Of Subduction: Results And Tests
Gorczyk, W.; Gerya, T. V.; Connolly, J. A.; Yuen, D. A.; Rudolph, M.
2005-12-01
Seismic structures with strong positive and negative velocity anomalies in the mantle wedge above subduction zones have been interpreted as thermally and/or chemically induced phenomena. We have developed a thermal-chemical model of subduction, which constrains the dynamics of seismic velocity structure beneath volcanic arcs. Our simulations were calculated over a finite-difference grid with (201×101) to (201×401) regularly spaced Eulerian points, using 0.5 million to 10 billion markers. The model couples a numerical thermo-mechanical solution with Gibbs energy minimization to investigate the dynamic behavior of partially molten upwellings from slabs (cold plumes) and structures associated with their development. The model demonstrates two chemically distinct types of plumes (mixed and unmixed), and various rigid-body rotation phenomena in the wedge (subduction wheel, fore-arc spin, wedge pin-ball). These thermal-chemical features strongly perturb seismic structure. Their occurrence depends on the age of the subducting slab and the rate of subduction. The model has been validated through a series of test cases and its results are consistent with a variety of geological and geophysical data. In contrast to models that attribute a purely thermal origin to mantle wedge seismic anomalies, the thermal-chemical model is able to simulate the strong variations of seismic velocity existing beneath volcanic arcs which are associated with the development of cold plumes. In particular, molten regions that form beneath volcanic arcs as a consequence of vigorous cold wet plumes are manifested by > 20% variations in the local Poisson ratio, as compared to variations of ~ 2% expected as a consequence of temperature variation within the mantle wedge.
Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen
2017-12-27
Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 - π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP
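The SVD shortcut at the heart of this approach can be illustrated with plain SNP-BLUP (ridge) estimation of marker effects, the linear model that the BayesC approximation builds on. This is a minimal sketch with simulated data, not the authors' implementation; the shrinkage parameter `lam` and the data dimensions are arbitrary assumptions:

```python
import numpy as np

def svd_snp_blup(Z, y, lam):
    """Estimate marker effects b in y = Z b + e by ridge regression
    (SNP-BLUP) using the SVD of the genotype matrix Z, non-iteratively:
    b = V diag(s / (s^2 + lam)) U' y."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    shrink = s / (s**2 + lam)          # per-singular-value shrinkage
    return Vt.T @ (shrink * (U.T @ y))

rng = np.random.default_rng(0)
n, m = 200, 50                          # individuals, markers (toy sizes)
Z = rng.standard_normal((n, m))         # centered/scaled genotypes (simulated)
b_true = np.zeros(m)
b_true[:5] = 1.0                        # five markers with real effects
y = Z @ b_true + 0.1 * rng.standard_normal(n)

b_hat = svd_snp_blup(Z, y, lam=1.0)     # recovers the large effects
```

Once `U`, `s`, `Vt` are computed, re-solving for a different `lam` (or marker-specific variances, as in the SVD-BayesC approximation) costs only matrix-vector products, which is the computational appeal.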
Heat transfer modelling and stability analysis of selective laser melting
International Nuclear Information System (INIS)
Gusarov, A.V.; Yadroitsev, I.; Bertrand, Ph.; Smurov, I.
2007-01-01
The process of direct manufacturing by selective laser melting basically consists of laser beam scanning over a thin powder layer deposited on a dense substrate. Complete remelting of the powder in the scanned zone and its good adhesion to the substrate ensure obtaining functional parts with improved mechanical properties. Experiments with single-line scanning indicate that an interval of scanning velocities exists where the remelted tracks are uniform. The tracks become broken if the scanning velocity is outside this interval. This is extremely undesirable and referred to as the 'balling' effect. A numerical model of coupled radiation and heat transfer is proposed to analyse the observed instability. The 'balling' effect at high scanning velocities (above ∼20 cm/s for the present conditions) can be explained by the Plateau-Rayleigh capillary instability of the melt pool. Two factors stabilize the process as the scanning velocity decreases: reducing the length-to-width ratio of the melt pool and increasing the width of its contact with the substrate
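The capillary instability invoked here has a simple classical criterion: a free liquid cylinder breaks up when its length exceeds its circumference. A rough, illustrative check follows; the pool dimensions are made-up numbers, and, as the abstract notes, a real melt pool in contact with a substrate is further stabilized, so this is only a proxy:

```python
import math

def plateau_rayleigh_unstable(pool_length, pool_width):
    """Classical Plateau-Rayleigh criterion for a free liquid cylinder:
    unstable to capillary break-up when length > circumference (pi * d).
    Used here only as a rough proxy for melt-pool 'balling'."""
    return pool_length > math.pi * pool_width

# Faster scanning stretches the pool: longer and narrower -> balling
fast_pool = plateau_rayleigh_unstable(pool_length=1.0, pool_width=0.1)   # True
slow_pool = plateau_rayleigh_unstable(pool_length=0.3, pool_width=0.15)  # False
```

This captures why reducing the length-to-width ratio of the melt pool (one of the two stabilizing factors above) suppresses balling at lower scanning velocities.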
DEFF Research Database (Denmark)
Savietto, D; Cervera, C; Rodenas, L
2014-01-01
This study examined the effect of long-term selection of a maternal rabbit line, solely for a reproductive criterion, on the ability of female rabbits to deal with constrained environmental conditions. Female rabbits from generations 16 and 36 (n=72 and 79, respectively) of a line founded … and selected to increase litter size at weaning were compared simultaneously. Female rabbits were subjected to normal (NC), nutritional (NF) or heat (HC) challenging conditions from 1st to 3rd parturition. Animals in NC and NF were housed at normal room temperatures (18°C to 25°C) and respectively fed … different resource allocation strategies in the animals from the different generations. Selection to increase litter size at weaning led to increased reproductive robustness at the onset of an environmental constraint, but failure to sustain the reproductive liability when the challenge was maintained …
International Nuclear Information System (INIS)
Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.
2007-01-01
Using modeled WWER scram rod drop experiments performed at the Rostov NPP as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95 and WWER program packages. Parameters of delayed neutrons were acquired from the ENDF/B-VI and BNAB-78 validated data files. It was demonstrated that using delayed-fraction data from different databases in reactivity meters led to significantly different reactivity results. Based on the results of the numerically modeled experiments, delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations (Authors)
Application Of Decision Tree Approach To Student Selection Model- A Case Study
Harwati; Sudiya, Amby
2016-01-01
The main purpose of the institution is to provide quality education to students and to improve the quality of managerial decisions. One way to improve the quality of students is to make the selection of new students more selective. This research takes as its case the selection of new students at the Islamic University of Indonesia, Yogyakarta, Indonesia. One of the university's selection routes is administrative filtering based on the records of prospective students at high school, without written testing. Currently, that kind of selection does not yet have a standard model or criteria. Selection is done only by comparing candidates' application files, so subjective assessment is very likely because of the lack of standard criteria that can differentiate the quality of one student from another. By applying data mining classification techniques, a selection model for new students can be built which includes criteria with certain standards, such as the region of origin, the status of the school, the average grade, and so on. These criteria are determined using rules that emerge from classifying the academic achievement (GPA) of students in previous years who entered the university through the same route. The decision tree method with the C4.5 algorithm is used here. The results show that students given priority for admission are those who meet the following criteria: they come from the island of Java, from a public school, majoring in science, with an average grade above 75, and have at least one achievement during their study in high school.
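The attribute-selection step of C4.5 ranks candidate criteria by gain ratio (information gain normalized by split information). A minimal, self-contained sketch on made-up admission records; the attribute names and class labels are illustrative only, not the study's data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    """C4.5-style gain ratio of splitting `rows` (list of dicts) on `attr`."""
    n = len(rows)
    groups = {}
    for row, lab in zip(rows, labels):
        groups.setdefault(row[attr], []).append(lab)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - remainder          # information gain
    split_info = entropy([row[attr] for row in rows])
    return gain / split_info if split_info > 0 else 0.0

# Hypothetical admission records (attribute names are illustrative)
rows = [
    {"school": "public",  "major": "science"},
    {"school": "public",  "major": "science"},
    {"school": "private", "major": "social"},
    {"school": "private", "major": "science"},
]
labels = ["good", "good", "poor", "poor"]       # later GPA class
best = max(["school", "major"], key=lambda a: gain_ratio(rows, labels, a))
# best == "school": it separates the classes perfectly in this toy data
```

C4.5 applies this ranking recursively, splitting on the best attribute at each node until the leaves are (nearly) pure.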
Measurement model choice influenced randomized controlled trial results.
Gorter, Rosalie; Fox, Jean-Paul; Apeldoorn, Adri; Twisk, Jos
2016-11-01
In randomized controlled trials (RCTs), outcome variables are often patient-reported outcomes measured with questionnaires. Ideally, all available item information is used for score construction, which requires an item response theory (IRT) measurement model. However, in practice, the classical test theory measurement model (sum scores) is mostly used, and differences between response patterns leading to the same sum score are ignored. The enhanced differentiation between scores with IRT enables more precise estimation of individual trajectories over time and group effects. The objective of this study was to show the advantages of using IRT scores instead of sum scores when analyzing RCTs. Two studies are presented, a real-life RCT, and a simulation study. Both IRT and sum scores are used to measure the construct and are subsequently used as outcomes for effect calculation. The bias in RCT results is conditional on the measurement model that was used to construct the scores. A bias in estimated trend of around one standard deviation was found when sum scores were used, where IRT showed negligible bias. Accurate statistical inferences are made from an RCT study when using IRT to estimate construct measurements. The use of sum scores leads to incorrect RCT results. Copyright © 2016 Elsevier Inc. All rights reserved.
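The abstract's point that IRT distinguishes response patterns with equal sum scores can be seen with a tiny two-parameter logistic (2PL) example: two patterns with the same sum score get different ability estimates. This is a grid-search maximum-likelihood sketch with made-up item parameters, not the trial's actual instrument:

```python
import math

def p_2pl(theta, a, b):
    """2PL item response probability P(correct | ability theta),
    with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_theta(pattern, items):
    """Maximum-likelihood ability estimate on a coarse grid [-4, 4]."""
    def loglik(theta):
        return sum(
            math.log(p if x else 1.0 - p)
            for x, (a, b) in zip(pattern, items)
            for p in [p_2pl(theta, a, b)]
        )
    grid = [g / 100.0 for g in range(-400, 401)]
    return max(grid, key=loglik)

# Hypothetical items: (discrimination a, difficulty b); the last is hardest
items = [(1.0, -1.0), (1.0, 0.0), (2.0, 1.5)]
easy_two = ml_theta([1, 1, 0], items)  # passed the two easy items
hard_one = ml_theta([0, 1, 1], items)  # same sum score (2), passed the hard item
# Equal sum scores, but different IRT ability estimates (hard_one > easy_two)
```

A sum-score analysis would treat these two respondents as identical; the IRT estimate credits the pattern that includes the highly discriminating hard item.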
Chi, John H; Gokaslan, Ziya; McCormick, Paul; Tibbs, Phillip A; Kryscio, Richard J; Patchell, Roy A
2009-03-01
Randomized clinical trial. OBJECTIVE: To determine if age affects outcomes from differing treatments in patients with spinal metastases. Recently, class I data were published supporting surgery with radiation over radiation alone for patients with malignant epidural spinal cord compression (MESCC). However, the criteria to properly select candidates for surgery remain controversial and few independent variables which predict success after treatment have been identified. Data for this study were obtained in a randomized clinical trial comparing surgery versus radiation for MESCC. Hazard ratios were determined for the effect of age and the interaction between age and treatment. Age estimates at which prespecified relative risks could be expected were calculated with greater than 95% confidence to suggest possible age cut points for further stratification. Multivariate models and Kaplan-Meier curves were tested using stratified cohorts for both treatment groups in the randomized trial, each divided into 2 age groups. Secondary data analysis with age stratification demonstrated a strong interaction between age and treatment (hazard ratio = 1.61, P = 0.01), such that as age increases, the chances of surgery being equal to radiation alone increase. The best estimate for the age at which surgery is no longer superior to radiation alone was calculated to be between 60 and 70 years of age (95% CI), using sequential prespecified relative risk ratios. Multivariate modeling and Kaplan-Meier curves for stratified treatment groups showed that there was no difference in outcome between treatments for patients ≥65 years of age. Ambulation preservation was significantly prolonged in patients <65 years of age. Age is a significant variable in predicting preservation of ambulation and survival for patients being treated for spinal metastases. Our results provide compelling evidence for the first time that particular age cut points may help in selecting patients for surgical or nonsurgical intervention based on outcome.
Directory of Open Access Journals (Sweden)
N. Sczygiol
2007-12-01
This paper evaluates the influence of selected parameters on the sensitivity of a numerical model of solidification. The investigated model is based on the heat conduction equation with a heat source and is solved using the finite element method (FEM). The model is built using the enthalpy formulation for solidification and an intermediate solid fraction growth model. The model sensitivity is studied with the Morris method, one of the global sensitivity methods. A characteristic feature of the global methods is the need to conduct a series of simulations with the investigated model using appropriately chosen model parameters. An advantage of the Morris method is the possibility of reducing the number of necessary simulations. The results of the presented work answer the question of how generic sensitivity analysis results are, in particular whether they depend only on model characteristics and not on factors such as the density of the finite element mesh or the shape of the region. The results of this research allow us to conclude that sensitivity analysis with the Morris method depends only on the characteristics of the investigated model.
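The Morris method screens parameters by averaging absolute "elementary effects" from one-at-a-time perturbations. The sketch below is a simplified radial variant (the full method builds trajectories through parameter space and also reports the spread σ of the effects); the toy model and all settings are illustrative assumptions:

```python
import random

def morris_mu_star(model, n_params, r=20, delta=0.25, seed=1):
    """Mean absolute elementary effect (mu*) per parameter: a minimal
    one-at-a-time sketch of Morris screening on the unit hypercube,
    using r random base points."""
    rng = random.Random(seed)
    mu = [0.0] * n_params
    for _ in range(r):
        x = [rng.uniform(0, 1 - delta) for _ in range(n_params)]
        base = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta                       # perturb one parameter
            mu[i] += abs(model(xp) - base) / delta
    return [m / r for m in mu]

# Toy model: first parameter dominates, third parameter is inert
f = lambda x: 10 * x[0] + 2 * x[1] + 0 * x[2]
mu_star = morris_mu_star(f, 3)   # approximately [10.0, 2.0, 0.0]
```

Only r * (n_params + 1) model runs are needed, which is why Morris screening is attractive when each simulation (here, an FEM solidification run) is expensive.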
Indian Academy of Sciences (India)
With reference to the detailed evaluation of bids submitted, the following agencies have been selected for award of the contract on an L1 (lowest bidder) basis. 1. M/s CITO INFOTECH, Bengaluru (for procurement of desktop computers). 2. M/s MCCANNINFO SOLUTION, Mumbai (for procurement of laptop computers)
RESULTS OF THE SELECTION OF BREEDING SAMPLES OF CARROT BASED ON BIOCHEMICAL COMPOSITION
Directory of Open Access Journals (Sweden)
V. K. Cherkasova
2014-01-01
Twelve samples of carrot were analyzed for biochemical components in roots. Five genotypes with high content of vitamin C, β-carotene, and total sugar were selected as genetic sources of high biochemical components.
Results of 24 years of selection for post-weaning weight on the ...
African Journals Online (AJOL)
Tobias
Being exposed to a natural selection process for many generations, the Caracu cattle adapted to local conditions and developed traits that allowed them to survive on a diet generally poor in nutrients, and exposed to high ectoparasite infestation and high ambient temperatures (Spritze et al., 2003). Up to the start of the 20th ...
SR-Site groundwater flow modelling methodology, setup and results
Energy Technology Data Exchange (ETDEWEB)
Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))
2010-12-15
As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in an SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.
Morin, Benjamin R; Perrings, Charles; Levin, Simon; Kinzig, Ann
2014-12-21
The personal choices affecting the transmission of infectious diseases include the number of contacts an individual makes, and the risk-characteristics of those contacts. We consider whether these different choices have distinct implications for the course of an epidemic. We also consider whether choosing contact mitigation (how much to mix) and affinity mitigation (with whom to mix) strategies together has different epidemiological effects than choosing each separately. We use a set of differential equation compartmental models of the spread of disease, coupled with a model of selective mixing. We assess the consequences of varying contact or affinity mitigation as a response to disease risk. We do this by comparing disease incidence and dynamics under varying contact volume, contact type, and both combined across several different disease models. Specifically, we construct a change of variables that allows one to transition from contact mitigation to affinity mitigation, and vice versa. In the absence of asymptomatic infection we find no difference in the epidemiological impacts of the two forms of disease risk mitigation. Furthermore, since models that include both mitigation strategies are underdetermined, varying both results in no outcome that could not be reached by choosing either separately. Which strategy is actually chosen then depends not on their epidemiological consequences, but on the relative cost of reducing contact volume versus altering contact type. Although there is no fundamental epidemiological difference between the two forms of mitigation, the social cost of alternative strategies can be very different. From a social perspective, therefore, whether one strategy should be promoted over another depends on economic not epidemiological factors. Copyright © 2014 Elsevier Ltd. All rights reserved.
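The equivalence the authors find can be seen in the simplest compartmental setting: contact volume and per-contact risk enter the force of infection only through their product, so halving either one is epidemiologically indistinguishable. A minimal Euler-stepped SIR sketch (all parameter values are arbitrary illustrations, and this ignores the asymptomatic-infection case where the paper's models can differ):

```python
def sir_epidemic(contacts, risk, gamma=0.2, days=300, dt=0.1, i0=0.01):
    """Basic SIR model; the transmission rate is beta = contacts * risk,
    so contact mitigation (fewer contacts) and affinity/risk mitigation
    (safer contacts) are interchangeable in this formulation."""
    beta = contacts * risk
    s, i, r = 1.0 - i0, i0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
    return r  # final epidemic size (fraction ever infected)

base = sir_epidemic(contacts=10.0, risk=0.06)
fewer_contacts = sir_epidemic(contacts=5.0, risk=0.06)   # contact mitigation
safer_contacts = sir_epidemic(contacts=10.0, risk=0.03)  # risk mitigation
# fewer_contacts and safer_contacts give the same epidemic outcome
```

Since the epidemiological outcomes coincide, the choice between the two strategies turns on their relative social and economic costs, which is the paper's conclusion.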
Deriving user-informed climate information from climate model ensemble results
Directory of Open Access Journals (Sweden)
H. Huebener
2017-07-01
Communication between providers and users of climate model simulation results still needs to be improved. In the German regional climate modelling project ReKliEs-De, a midterm user workshop was conducted to allow the intended users of the project results to assess the preliminary results and to streamline the final project results to their needs. The user feedback highlighted, in particular, the still considerable gap between climate research output and user-tailored input for climate impact research. Two major requests from the user community addressed the selection of sub-ensembles and some condensed, easy-to-understand information on the strengths and weaknesses of the climate models involved in the project.
Results of the benchmark for blade structural models, part A
DEFF Research Database (Denmark)
Lekou, D.J.; Chortis, D.; Belen Fariñas, A.
2013-01-01
A benchmark on structural design methods for blades was performed within the InnWind.Eu project under WP2 "Lightweight Rotor" Task 2.2 "Lightweight structural design". The benchmark is based on the reference wind turbine and the reference blade provided by DTU [1]. "Structural Concept developers/modelers" of WP2 were provided with the necessary input for a comparison numerical simulation run, upon definition of the reference blade. The present document describes the results of the comparison simulation runs that were performed by the partners involved.
Preliminary results of steel containment vessel model test
International Nuclear Information System (INIS)
Matsumoto, T.; Komine, K.; Arai, S.
1997-01-01
A high pressure test of a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of a steel containment vessel (SCV), representing an improved boiling water reactor (BWR) Mark II containment, was conducted on December 11-12, 1996 at Sandia National Laboratories. This paper describes the preliminary results of the high pressure test. In addition, the preliminary post-test measurement data and the preliminary comparison of test data with pretest analysis predictions are also presented
Garmer, D R; Gresh, N; Roques, B P
1998-04-01
We investigated the binding properties of the metalloprotease inhibitors hydroxamate, methanethiolate, and methylphosphoramidate to a model coordination site occurring in several Zn2+ metalloproteases, including thermolysin. This was carried out using both the SIBFA (sum of interactions between fragments ab initio-computed) molecular mechanics and the SCF/MP2 procedures for the purpose of evaluating SIBFA as a metalloenzyme modeling tool. The energy-minimized structures were closely similar to the X-ray crystallographic structures of related thermolysin-inhibitor complexes. We found that selectivity between alternative geometries and between inhibitors usually stemmed from multiple interaction components included in SIBFA. The binding strength sequence is hydroxamate > methanethiolate ≥ methylphosphoramidate. The trends in interaction energy components, rankings, and preferences for mono- or bidentate binding were consistent in both computational procedures. We also compared the Zn2+ vs. Mg2+ selectivities in several other polycoordinated sites having various "hard" and "soft" qualities. This included a hexahydrate, a model representing Mg2+/Ca2+ binding sites, a chlorophyll-like structure, and a zinc finger model. The latter three favor Zn2+ over Mg2+ by a greater degree than the hydrated state, but the selectivity varies widely according to the ligand "softness." SIBFA was able to match the ab initio binding energies to within 2%, with the SIBFA terms representing dispersion and charge-transfer contributing the most to Zn2+/Mg2+ selectivity. These results showed this procedure to be a very capable modeling tool for metalloenzyme problems, in this case giving valuable information about details and limitations of "hard" and "soft" selectivity trends.
Peer selection and influence effects on adolescent alcohol use: a stochastic actor-based model
Directory of Open Access Journals (Sweden)
Mundt Marlon P
2012-08-01
Background: Early adolescent alcohol use is a major public health challenge. Without clear guidance on the causal pathways between peers and alcohol use, adolescent alcohol interventions may be incomplete. The objective of this study is to disentangle selection and influence effects associated with the dynamic interplay of adolescent friendships and alcohol use. Methods: The study analyzes data from Add Health, a longitudinal survey of seventh through eleventh grade U.S. students enrolled between 1995 and 1996. A stochastic actor-based model is used to model the co-evolution of alcohol use and friendship connections. Results: Selection effects play a significant role in the creation of peer clusters with similar alcohol use. Friendship nominations between two students who shared the same alcohol use frequency were 3.60 (95% CI: 2.01-9.62) times more likely than between otherwise identical students with differing alcohol use frequency. The model controlled for alternative pathways to friendship nomination including reciprocity, transitivity, and similarities in age, gender, and race/ethnicity. The simulation model did not support a significant friends' influence effect on alcohol behavior. Conclusions: The findings suggest that peer selection plays a major role in alcohol use behavior among adolescent friends. Our simulation results lend themselves to adolescent alcohol abuse interventions that leverage adolescent social network characteristics.
A Formal Model of Corruption, Dishonesty and Selection into Public Service
DEFF Research Database (Denmark)
Barfort, Sebastian; Harmon, Nikolaj Arpe; Hjorth, Frederik Georg
2015-01-01
Recent empirical studies have found that in high corruption countries, inherently more dishonest individuals are more likely to want to enter into public service, while the reverse is true in low corruption countries. In this note, we provide a simple formal model that rationalizes this empirical … pattern as the result of countries being stuck in different self-sustaining equilibria where high levels of corruption and negative selection into public service are mutually reinforcing. …
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the usage of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well-established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria like AIC.
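The AIC comparison described above can be illustrated with a minimal sketch. The data, the candidate model set (a null constant-height model versus a uniform tilt), and the noise level are all hypothetical, and the Gaussian least-squares form of AIC is assumed; the paper's actual deformation models are richer.

```python
import numpy as np

def aic(rss, n, k):
    """AIC for a least-squares fit with Gaussian errors:
    n*ln(RSS/n) + 2k (additive constants dropped)."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical levelling-style data: one height profile with a small tilt.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
obs = 0.5 + 0.3 * x + rng.normal(0.0, 0.05, x.size)

# Candidate deformation models and their parameter counts.
candidates = {"null (no deformation)": 1, "uniform tilt": 2}
scores = {}
for name, k in candidates.items():
    coef = np.polyfit(x, obs, k - 1)                  # least-squares fit
    rss = float(np.sum((obs - np.polyval(coef, x)) ** 2))
    scores[name] = aic(rss, x.size, k)

best = min(scores, key=scores.get)                    # lowest AIC wins
```

Unlike a hypothesis test, no decision error rate has to be chosen here; the penalty term 2k plays that role implicitly.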
Structure and selection in an autocatalytic binary polymer model
DEFF Research Database (Denmark)
Tanaka, Shinpei; Fellermann, Harold; Rasmussen, Steen
2014-01-01
a pool of monomers, highly ordered populations with particular sequence patterns are dynamically selected out of a vast number of possible states. The interplay between the selected microscopic sequence patterns and the macroscopic cooperative structures is examined both analytically and in simulation...
DEFF Research Database (Denmark)
Finlay, Chris; Olsen, Nils; Tøffner-Clausen, Lars
Ten months of data from ESA's Swarm mission, together with recent ground observatory monthly means, are used to update the CHAOS series of geomagnetic field models with a focus on time-changes of the core field. As for previous CHAOS field models quiet-time, night-side, data selection criteria … th order spline representation with knot points spaced at 0.5 year intervals. The resulting field model is able to consistently fit data from six independent low Earth orbit satellites: Oersted, CHAMP, SAC-C and the three Swarm satellites. As an example, we present comparisons of the excellent model … fit obtained to both the Swarm data and the CHAMP data. The new model also provides a good description of observatory secular variation, capturing rapid field evolution events during the past decade. Maps of the core surface field and its secular variation can already be extracted in the Swarm-era. We …
Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.
Directory of Open Access Journals (Sweden)
Isabela Rodrigues Nogueira Forti
Full Text Available In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as: blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because, in comparison to many other physical traits (e.g., hair colour), it is hard to modify, hide or disguise, and it is highly polymorphic.
Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.
Forti, Isabela Rodrigues Nogueira; Young, Robert John
2016-01-01
In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as: blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because in comparison to many other physical traits (e.g., hair colour) it is hard to modify, hide or disguise, and it is highly polymorphic.
Performance Measurement Model for the Supplier Selection Based on AHP
Directory of Open Access Journals (Sweden)
Fabio De Felice
2015-10-01
Full Text Available The performance of the supplier is a crucial factor for the success or failure of any company. Rational and effective decision making in terms of the supplier selection process can help the organization to optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no single best way of evaluating and selecting suppliers; the process varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied for the supplier selection process.
Impact Flash Physics: Modeling and Comparisons With Experimental Results
Rainey, E.; Stickle, A. M.; Ernst, C. M.; Schultz, P. H.; Mehta, N. L.; Brown, R. C.; Swaminathan, P. K.; Michaelis, C. H.; Erlandson, R. E.
2015-12-01
horizontal. High-speed radiometer measurements were made of the time-dependent impact flash at wavelengths of 350-1100 nm. We will present comparisons between these measurements and the output of APL's model. The results of this validation allow us to determine basic relationships between observed optical signatures and impact conditions.
Atypical at skew in Firmicute genomes results from selection and not from mutation.
Directory of Open Access Journals (Sweden)
Catherine A Charneski
2011-09-01
Full Text Available The second parity rule states that, if there is no bias in mutation or selection, then within each strand of DNA complementary bases are present at approximately equal frequencies. In bacteria, however, there is commonly an excess of G (over C) and, to a lesser extent, T (over A) in the replicatory leading strand. The low G+C Firmicutes, such as Staphylococcus aureus, are unusual in displaying an excess of A over T on the leading strand. As mutation has been established as a major force in the generation of such skews across various bacterial taxa, this anomaly has been assumed to reflect unusual mutation biases in Firmicute genomes. Here we show that this is not the case and that mutation bias does not explain the atypical AT skew seen in S. aureus. First, recently arisen intergenic SNPs predict the classical replication-derived equilibrium enrichment of T relative to A, contrary to what is observed. Second, sites predicted to be under weak purifying selection display only weak AT skew. Third, AT skew is primarily associated with largely non-synonymous first and second codon sites and is seen with respect to their sense direction, not which replicating strand they lie on. We show the atypical AT skew to be a consequence of the strong bias for genes to be co-oriented with the replicating fork, coupled with the selective avoidance of both stop codons and costly amino acids, which tend to have T-rich codons. That intergenic sequence has more A than T, while at mutational equilibrium a preponderance of T is expected, points to a possible further unresolved selective source of skew.
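The skew statistics discussed above are simple to compute; the sketch below evaluates both GC and AT skew on a toy leading-strand fragment (the sequence is invented for illustration).

```python
def strand_skews(seq):
    """GC and AT skew of one DNA strand: (G-C)/(G+C) and (A-T)/(A+T).
    Under parity rule 2 (no mutation or selection bias), both are ~0."""
    g, c, a, t = (seq.count(b) for b in "GCAT")
    gc = (g - c) / (g + c) if g + c else 0.0
    at = (a - t) / (a + t) if a + t else 0.0
    return gc, at

# Toy leading-strand fragment showing the Firmicute-like pattern:
# the usual excess of G over C, plus the atypical excess of A over T.
seq = "GGGAGAATGACGAAGGTAGCAAAGGA"
gc_skew, at_skew = strand_skews(seq)
```

A positive AT skew here is exactly the anomaly the paper attributes to selection (gene co-orientation plus codon avoidance) rather than to mutation bias.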
Anheuser, P; Kranz, J; Dieckmann, K P; Steffens, J; Oubaid, V
2017-11-01
As in aviation and other organizations requiring high levels of safety, medical complications and errors can in most cases be traced back to the human factor as a main cause. The correct selection of medical students and physicians is therefore very important, especially in leadership and key positions. This is not only a necessary safety aspect but also the prerequisite for the stipulated efficiency of modern medicine.
Genome-wide selection by mixed model ridge regression and extensions based on geostatistical models.
Schulz-Streeck, Torben; Piepho, Hans-Peter
2010-03-31
The success of genome-wide selection (GS) approaches will depend crucially on the availability of efficient and easy-to-use computational tools. Therefore, approaches that can be implemented using mixed models hold particular promise and deserve detailed study. A particular class of mixed models suitable for GS is given by geostatistical mixed models, when genetic distance is treated analogously to spatial distance in geostatistics. We consider various spatial mixed models for use in GS. The analyses presented for the QTL-MAS 2009 dataset pay particular attention to the modelling of residual errors as well as of polygenic effects. It is shown that geostatistical models are viable alternatives to ridge regression, one of the common approaches to GS. Correlations between genome-wide estimated breeding values and true breeding values were between 0.879 and 0.889. In the example considered, we did not find a large effect of the residual error variance modelling, largely because error variances were very small. A variance components model reflecting the pedigree of the crosses did not provide an improved fit. We conclude that geostatistical models deserve further study as a tool for GS that is easily implemented in a mixed model package.
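The ridge regression baseline mentioned above can be sketched on simulated marker data. The marker matrix, effect sizes, and shrinkage parameter below are invented; the paper's geostatistical extensions replace this simple ridge penalty with spatial-style covariance models of genetic distance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, n_mark = 200, 500

# Simulated 0/1/2 genotype matrix and true (small, normally distributed)
# marker effects; phenotypes = true breeding values + residual noise.
X = rng.integers(0, 3, size=(n_ind, n_mark)).astype(float)
beta_true = rng.normal(0.0, 0.1, n_mark)
tbv = X @ beta_true                         # true breeding values
y = tbv + rng.normal(0.0, 1.0, n_ind)       # phenotypes

# Ridge regression: beta_hat = (X'X + lambda*I)^(-1) X'y on centred data.
lam = 100.0
Xc = X - X.mean(axis=0)
yc = y - y.mean()
beta_hat = np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_mark), Xc.T @ yc)
gebv = Xc @ beta_hat                        # genome-wide estimated breeding values

# Accuracy: correlation between estimated and true breeding values.
r = np.corrcoef(gebv, tbv - tbv.mean())[0, 1]
```

In a mixed-model formulation the same estimator arises as BLUP of random marker effects, with lambda set by the ratio of residual to marker variance rather than chosen by hand.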
A Model for Service Life Control of Selected Device Systems
Directory of Open Access Journals (Sweden)
Zieja Mariusz
2014-04-01
Full Text Available This paper presents a way of determining the distribution of limit state exceedance time for a diagnostic parameter which determines the accuracy of maintaining the zero state. For the calculations it was assumed that the diagnostic parameter is the deviation from the nominal value (zero state). The deviation changes as a result of destructive processes which occur during service. To estimate the rate of deviation growth in the probabilistic sense, a difference equation was used, from which, after transformation, the Fokker-Planck differential equation was obtained [4, 11]. A particular solution of this equation is the density function of the deviation growth rate, which was used to determine the probability of exceeding the limit state. The probability so determined was then used to derive the density function of the limit state exceedance time under increasing deviation. With this density function available, the service life of the system subject to maladjustment was determined. Finally, a numerical example based on operational data of selected aircraft [weapon] sights is presented. The elaborated method can also be applied to determining the residual life of shipboard devices whose technical state is determined on the basis of analysis of the values of diagnostic parameters.
Coakley, Kevin J.; Qu, Jifeng
2017-04-01
In the electronic measurement of the Boltzmann constant based on Johnson noise thermometry, the ratio of the power spectral densities of thermal noise across a resistor at the triple point of water, and pseudo-random noise synthetically generated by a quantum-accurate voltage-noise source is constant to within 1 part in a billion for frequencies up to 1 GHz. Given knowledge of this ratio, and the values of other parameters that are known or measured, one can determine the Boltzmann constant. Due, in part, to mismatch between transmission lines, the experimental ratio spectrum varies with frequency. We model this spectrum as an even polynomial function of frequency where the constant term in the polynomial determines the Boltzmann constant. When determining this constant (offset) from experimental data, the assumed complexity of the ratio spectrum model and the maximum frequency analyzed (fitting bandwidth) dramatically affects results. Here, we select the complexity of the model by cross-validation—a data-driven statistical learning method. For each of many fitting bandwidths, we determine the component of uncertainty of the offset term that accounts for random and systematic effects associated with imperfect knowledge of model complexity. We select the fitting bandwidth that minimizes this uncertainty. In the most recent measurement of the Boltzmann constant, results were determined, in part, by application of an earlier version of the method described here. Here, we extend the earlier analysis by considering a broader range of fitting bandwidths and quantify an additional component of uncertainty that accounts for imperfect performance of our fitting bandwidth selection method. For idealized simulated data with additive noise similar to experimental data, our method correctly selects the true complexity of the ratio spectrum model for all cases considered. A new analysis of data from the recent experiment yields evidence for a temporal trend in the offset
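A stripped-down version of the cross-validation idea can be sketched as follows. The simulated ratio spectrum, noise level, and candidate degrees are assumptions for illustration; only even powers of frequency enter the fit, and the constant term plays the role of the offset that determines the Boltzmann constant.

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.linspace(0.0, 1.0, 200)                            # frequency (arbitrary units)
ratio = 1.0 + 0.5 * f**2 + rng.normal(0.0, 0.02, f.size)  # true spectrum: quadratic

def cv_error(x, y, degree, n_folds=5):
    """k-fold cross-validated squared prediction error of an even
    polynomial fit (fit in u = x**2 so only even powers appear)."""
    idx = np.random.default_rng(0).permutation(x.size)
    err = 0.0
    for hold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, hold)
        coef = np.polyfit(x[train] ** 2, y[train], degree // 2)
        err += np.sum((y[hold] - np.polyval(coef, x[hold] ** 2)) ** 2)
    return err / x.size

degrees = [0, 2, 4, 6, 8]
errors = {d: cv_error(f, ratio, d) for d in degrees}
best_degree = min(errors, key=errors.get)   # data-driven complexity choice

# The constant (offset) term of the selected fit is the quantity of interest.
offset = np.polyfit(f ** 2, ratio, best_degree // 2)[-1]
```

The paper's additional step, repeating this selection over many fitting bandwidths and minimizing the resulting offset uncertainty, would wrap this whole procedure in one more loop.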
Hidalgo, Homero, Jr.
2000-01-01
An innovative methodology for structural target mode selection based on a specific criterion is presented. An effective approach to single out modes which interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). We present a Root-Sum-Square (RSS) displacement method that computes the resultant modal displacement for each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes which most influenced specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valve and engine points, for use in flight control stability analysis and in flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
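The RSS ranking itself is a small computation. The mode-shape values below are invented; in practice they would come from the FEM eigenvectors at the selected DOFs.

```python
import numpy as np

# Hypothetical mode-shape matrix: rows = modes, columns = translational
# DOFs (x, y, z) at one selected grid point (e.g. an actuator location).
phi = np.array([
    [0.02, 0.01, 0.00],   # mode 1
    [0.80, 0.10, 0.05],   # mode 2: strong local motion
    [0.05, 0.40, 0.30],   # mode 3
    [0.01, 0.02, 0.01],   # mode 4
])

# Root-sum-square displacement per mode at the selected DOFs, then sort
# descending to find the modes that most influence this location.
rss = np.sqrt(np.sum(phi ** 2, axis=1))
ranking = np.argsort(rss)[::-1] + 1       # 1-based mode numbers, highest first
```

The top-ranked modes would then be retained as targets for the flight control and POGO stability analyses.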
Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.
Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin
2015-01-01
Cloud computing technology plays a very important role in many areas, such as in the construction and development of the smart city. Meanwhile, numerous cloud services appear on the cloud-based platform. Therefore, how to select trustworthy cloud services remains a significant problem in such platforms, and the question has been extensively investigated owing to the ever-growing needs of users. However, the trust relationship in social networks has not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability measured by Jaccard's Coefficient and the numerical distance computed by the Pearson Correlation Coefficient. Then, using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach is able to obtain optimal results by adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments.
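The similarity combination can be sketched roughly as follows. The ratings, the trust degree, and the final blending rule are assumptions for illustration; the paper's exact trust-modification formula may differ.

```python
import numpy as np

def jaccard(a, b):
    """Experience usability: overlap of the sets of services two users rated."""
    return len(a & b) / len(a | b) if a | b else 0.0

def pearson(x, y):
    """Numerical closeness of co-rated QoS values."""
    if len(x) < 2:
        return 0.0
    r = np.corrcoef(x, y)[0, 1]
    return 0.0 if np.isnan(r) else r

# Hypothetical QoS ratings: user -> {service: normalized response-time score}.
u1 = {"s1": 0.9, "s2": 0.7, "s3": 0.4}
u2 = {"s1": 0.8, "s2": 0.6, "s4": 0.5}

common = sorted(set(u1) & set(u2))
base_sim = jaccard(set(u1), set(u2)) * pearson(
    np.array([u1[s] for s in common]), np.array([u2[s] for s in common]))

# Hybrid trust degree (in the paper, derived from interaction frequencies;
# a fixed value is assumed here). A simple linear blend stands in for the
# paper's trust-modification rule:
trust = 0.8
enhanced_sim = 0.5 * base_sim + 0.5 * trust
```

Neighbors with the highest `enhanced_sim` would then supply the missing QoS values used for ranking and recommendation.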
Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.
2016-01-01
Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change
Animal genetic resources in Brazil: result of five centuries of natural selection.
Mariante, A da S; Egito, A A
2002-01-01
Brazil has various species of domestic animals, which developed from breeds brought by the Portuguese settlers soon after their discovery. For five centuries, these breeds have been subjected to natural selection in specific environments. Today, they present characteristics adapted to the specific Brazilian environmental conditions. These breeds developed in Brazil are known as "Crioulo," "local," or naturalized. From the beginning of the 20th century, some exotic breeds, selected in temperate regions, have begun to be imported. Although more productive, these breeds do not have adaptive traits, such as resistance to disease and parasites found in breeds considered to be "native." Even so, little by little, they replaced the native breeds, to such an extent that the latter are in danger of extinction. In 1983, to avoid the loss of this important genetic material, the National Research Center for Genetic Resources and Biotechnology (Cenargen) of the Brazilian Agricultural Research Corporation (Embrapa) decided to include conservation of animal genetic resources in its research program Conservation and Utilization of Genetic Resources. Until this time, they were only concerned with conservation of native plants. Conservation has been carried out by various research centers of Embrapa, universities, state research corporations, and private farmers, with a single coordinator at the national level, Cenargen. Specifically, conservation is being carried out by conservation nuclei, which are specific herds in which the animals are being conserved, situated in the habitats where the animals have been subjected to natural selection. This involves storage of semen and embryos from cattle, horses, buffaloes, donkeys, goats, sheep, and pigs. The Brazilian Animal Germplasm Bank is kept at Cenargen, which is responsible for the storage of semen and embryos of various breeds of domestic animals threatened with extinction, where almost 45,000 doses of semen and more than 200
Position-sensitive transition edge sensor modeling and results
Energy Technology Data Exchange (ETDEWEB)
Hammock, Christina (chammock@milkyway.gsfc.nasa.gov); Figueroa-Feliciano, Enectali; Apodaca, Emmanuel; Bandler, Simon; Boyce, Kevin; Chervenak, Jay; Finkbeiner, Fred; Kelley, Richard; Lindeman, Mark; Porter, Scott; Saab, Tarek; Stahle, Caroline
2004-03-11
We report the latest design and experimental results for a Position-Sensitive Transition-Edge Sensor (PoST). The PoST is motivated by the desire to achieve a larger field-of-view without increasing the number of readout channels. A PoST consists of a one-dimensional array of X-ray absorbers connected on each end to a Transition Edge Sensor (TES). Position differentiation is achieved through a comparison of pulses between the two TESs and X-ray energy is inferred from a sum of the two signals. Optimizing such a device involves studying the available parameter space which includes device properties such as heat capacity and thermal conductivity as well as TES read-out circuitry parameters. We present results for different regimes of operation and the effects on energy resolution, throughput, and position differentiation. Results and implications from a non-linear model developed to study the saturation effects unique to PoSTs are also presented.
International Nuclear Information System (INIS)
Ng, Y.C.; Hoffman, F.O.
1983-01-01
A parameter value for a radioecological assessment model is not a single value but a distribution of values about a central value. The sources that contribute to the variability of transfer factors to predict foodchain transport of radionuclides are enumerated. Knowledge of these sources, judgement in interpreting the available data, consideration of collateral information, and established criteria that specify the desired level of conservatism in the resulting predictions are essential elements when selecting appropriate parameter values for radioecological assessment models and regulatory guides. 39 references, 4 figures, 5 tables
An Integrated DEMATEL-QFD Model for Medical Supplier Selection
Mehtap Dursun; Zeynep Şener
2014-01-01
Supplier selection is considered as one of the most critical issues encountered by operations and purchasing managers to sharpen the company’s competitive advantage. In this paper, a novel fuzzy multi-criteria group decision making approach integrating quality function deployment (QFD) and decision making trial and evaluation laboratory (DEMATEL) method is proposed for supplier selection. The proposed methodology enables to consider the impacts of inner dependence among supplier assessment cr...
Evaluation of uncertainties in selected environmental dispersion models
International Nuclear Information System (INIS)
Little, C.A.; Miller, C.W.
1979-01-01
Compliance with standards of radiation dose to the general public has necessitated the use of dispersion models to predict radionuclide concentrations in the environment due to releases from nuclear facilities. Because these models are only approximations of reality and because of inherent variations in the input parameters used in these models, their predictions are subject to uncertainty. Quantification of this uncertainty is necessary to assess the adequacy of these models for use in determining compliance with protection standards. This paper characterizes the capabilities of several dispersion models to predict accurately pollutant concentrations in environmental media. Three types of models are discussed: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations
Smith, Graham C; Delahay, Richard J; McDonald, Robbie A; Budgey, Richard
2016-01-01
Bovine tuberculosis (bTB) causes substantial economic losses to cattle farmers and taxpayers in the British Isles. Disease management in cattle is complicated by the role of the European badger (Meles meles) as a host of the infection. Proactive, non-selective culling of badgers can reduce the incidence of disease in cattle but may also have negative effects in the area surrounding culls that have been associated with social perturbation of badger populations. The selective removal of infected badgers would, in principle, reduce the number culled, but the effects of selective culling on social perturbation and disease outcomes are unclear. We used an established model to simulate non-selective badger culling, non-selective badger vaccination and a selective trap and vaccinate or remove (TVR) approach to badger management in two distinct areas: South West England and Northern Ireland. TVR was simulated with and without social perturbation in effect. The lower badger density in Northern Ireland caused no qualitative change in the effect of management strategies on badgers, although the absolute number of infected badgers was lower in all cases. However, probably due to differing herd density in Northern Ireland, the simulated badger management strategies caused greater variation in subsequent cattle bTB incidence. Selective culling in the model reduced the number of badgers killed by about 83% but this only led to an overall benefit for cattle TB incidence if there was no social perturbation of badgers. We conclude that the likely benefit of selective culling will be dependent on the social responses of badgers to intervention but that other population factors including badger and cattle density had little effect on the relative benefits of selective culling compared to other methods, and that this may also be the case for disease management in other wild host populations.
Czech Academy of Sciences Publication Activity Database
Šlampová, Andrea; Kubáň, Pavel; Boček, Petr
2014-01-01
Vol. 35, No. 17 (2014), pp. 2429-2437 ISSN 0173-0835 R&D Projects: GA ČR(CZ) GA13-05762S Institutional support: RVO:68081715 Keywords: electromembrane extraction * chlorophenols * extraction selectivity Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 3.028, year: 2014
Comparison of blade-strike modeling results with empirical data
Energy Technology Data Exchange (ETDEWEB)
Ploskey, Gene R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Carlson, Thomas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2004-03-01
This study is the initial stage of further investigation into the dynamics of injury to fish during passage through a turbine runner. As part of the study, Pacific Northwest National Laboratory (PNNL) estimated the probability of blade strike, and associated injury, as a function of fish length and turbine operating geometry at two adjacent turbines in Powerhouse 1 of Bonneville Dam. Units 5 and 6 had identical intakes, stay vanes, wicket gates, and draft tubes, but Unit 6 had a new runner and curved discharge ring to minimize gaps between the runner hub and blades and between the blade tips and discharge ring. We used a mathematical model to predict blade strike associated with two Kaplan turbines and compared results with empirical data from biological tests conducted in 1999 and 2000. Blade-strike models take into consideration the geometry of the turbine blades and discharges as well as fish length, orientation, and distribution along the runner. The first phase of this study included a sensitivity analysis to consider the effects of difference in geometry and operations between families of turbines on the strike probability response surface. The analysis revealed that the orientation of fish relative to the leading edge of a runner blade and the location that fish pass along the blade between the hub and blade tip are critical uncertainties in blade-strike models. Over a range of discharges, the average prediction of injury from blade strike was two to five times higher than average empirical estimates of visible injury from shear and mechanical devices. Empirical estimates of mortality may be better metrics for comparison to predicted injury rates than other injury measures for fish passing at mid-blade and blade-tip locations.
Peer selection and influence effects on adolescent alcohol use: a stochastic actor-based model.
Mundt, Marlon P; Mercken, Liesbeth; Zakletskaia, Larissa
2012-08-06
Early adolescent alcohol use is a major public health challenge. Without clear guidance on the causal pathways between peers and alcohol use, adolescent alcohol interventions may be incomplete. The objective of this study is to disentangle selection and influence effects associated with the dynamic interplay of adolescent friendships and alcohol use. The study analyzes data from Add Health, a longitudinal survey of seventh through eleventh grade U.S. students enrolled between 1995 and 1996. A stochastic actor-based model is used to model the co-evolution of alcohol use and friendship connections. Selection effects play a significant role in the creation of peer clusters with similar alcohol use. Friendship nominations between two students who shared the same alcohol use frequency were 3.60 (95% CI: 2.01-9.62) times more likely than between otherwise identical students with differing alcohol use frequency. The model controlled for alternative pathways to friendship nomination including reciprocity, transitivity, and similarities in age, gender, and race/ethnicity. The simulation model did not support a significant friends' influence effect on alcohol behavior. The findings suggest that peer selection plays a major role in alcohol use behavior among adolescent friends. Our simulation results would lend themselves to adolescent alcohol abuse interventions that leverage adolescent social network characteristics.
Interval-valued intuitionistic fuzzy multi-criteria model for design concept selection
Directory of Open Access Journals (Sweden)
Daniel Osezua Aikhuele
2017-09-01
Full Text Available This paper presents a new approach for design concept selection by using an integrated Fuzzy Analytical Hierarchy Process (FAHP) and an Interval-valued intuitionistic fuzzy modified TOPSIS (IVIF-modified TOPSIS) model. The integrated model, which uses the improved score function and a weighted normalized Euclidean distance method for the calculation of the separation measures of alternatives from the positive and negative intuitionistic ideal solutions, provides a new approach for the computation of intuitionistic fuzzy ideal solutions. The results of the two approaches are integrated using a reflection defuzzification integration formula. To ensure the feasibility and the rationality of the integrated model, the method is successfully applied for evaluating and selecting some design related problems, including a real-life case study for the selection of the best concept design for a new printed-circuit-board (PCB) and a hypothetical example. The model, which provides a novel alternative, has been compared with similar computational methods in the literature.
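The separation-measure logic that IVIF-modified TOPSIS generalizes can be shown with a classic crisp TOPSIS sketch (the decision matrix and weights are invented; interval-valued intuitionistic memberships and the FAHP weighting step are omitted).

```python
import numpy as np

# Hypothetical decision matrix: 3 concept designs x 3 benefit criteria,
# with assumed criteria weights.
X = np.array([
    [7.0, 8.0, 6.0],
    [8.0, 7.0, 9.0],
    [6.0, 9.0, 7.0],
])
w = np.array([0.5, 0.3, 0.2])

# Classic crisp TOPSIS steps.
R = X / np.sqrt((X ** 2).sum(axis=0))               # vector-normalize columns
V = R * w                                           # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)          # positive / negative ideal
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))     # distance to ideal
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))      # distance to anti-ideal
closeness = d_neg / (d_pos + d_neg)                 # relative closeness
best = int(np.argmax(closeness)) + 1                # 1-based design index
```

In the IVIF version, each matrix entry becomes an interval-valued membership/non-membership pair and the Euclidean distances are taken between those structures, but the ranking principle is the same.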
N-mix for fish: estimating riverine salmonid habitat selection via N-mixture models
Som, Nicholas A.; Perry, Russell W.; Jones, Edward C.; De Juilio, Kyle; Petros, Paul; Pinnix, William D.; Rupert, Derek L.
2018-01-01
Models that formulate mathematical linkages between fish use and habitat characteristics are applied for many purposes. For riverine fish, these linkages are often cast as resource selection functions with variables including depth and velocity of water and distance to nearest cover. Ecologists are now recognizing the role that detection plays in observing organisms, and failure to account for imperfect detection can lead to spurious inference. Herein, we present a flexible N-mixture model to associate habitat characteristics with the abundance of riverine salmonids that simultaneously estimates detection probability. Our formulation has the added benefits of accounting for demographic variation and generating probabilistic statements regarding intensity of habitat use. In addition to the conceptual benefits, model application to data from the Trinity River, California, yields interesting results. Detection was estimated to vary among surveyors, but there was little spatial or temporal variation. Additionally, a weaker effect of water depth on resource selection is estimated than that reported by previous studies not accounting for detection probability. N-mixture models show great promise for applications to riverine resource selection.
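The core of an N-mixture model, latent abundance N with a Poisson prior and repeated binomial count observations, can be sketched in a few lines. This is a generic single-likelihood illustration with invented counts and a crude grid search, not the authors' full formulation with surveyor effects and habitat covariates.

```python
import math

def site_loglik(counts, lam, p, n_max=150):
    """Log-likelihood of repeated counts y_t at one site under an N-mixture
    model: N ~ Poisson(lam), y_t | N ~ Binomial(N, p), via log-sum-exp."""
    terms = []
    for n in range(max(counts), n_max + 1):
        lp = -lam + n * math.log(lam) - math.lgamma(n + 1)  # log Poisson pmf
        for y in counts:                                     # log Binomial pmfs
            lp += (math.lgamma(n + 1) - math.lgamma(y + 1)
                   - math.lgamma(n - y + 1)
                   + y * math.log(p) + (n - y) * math.log(1 - p))
        terms.append(lp)
    m = max(terms)
    return m + math.log(sum(math.exp(t - m) for t in terms))

# Hypothetical repeat counts (three survey passes at each of four habitat
# units), assuming abundance intensity and detection are shared across units.
sites = [[12, 9, 11], [5, 7, 6], [14, 10, 13], [8, 8, 9]]
grid = [(lam, p) for lam in range(5, 61) for p in (0.2, 0.4, 0.6, 0.8)]
lam_hat, p_hat = max(grid, key=lambda t: sum(site_loglik(s, *t) for s in sites))
```

Marginalizing over the unobserved N is what lets abundance and detection probability be estimated jointly from the replicated counts.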
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation on the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of the nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs with the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by the Chemical Reaction Optimization, into the neighbouring searching strategy of the Firefly Algorithm method. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, Akaike Information Criterion was employed to evaluate the model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data.
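The Akaike Information Criterion step mentioned above can be illustrated independently of the swarm optimizer: fit competing models to noisy synthetic data and keep the one with the lowest AIC. The polynomial candidates and noise level here are invented for illustration; the paper applies AIC to biological ODE models fitted by Swarm-based Chemical Reaction Optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations from a quadratic "true" process (synthetic stand-in
# for noisy, incomplete experimental data).
t = np.linspace(0, 4, 40)
y = 1.5 * t**2 - 2.0 * t + 0.5 + rng.normal(0, 1.0, t.size)

def aic(y, y_hat, k):
    # Gaussian-error AIC: n*ln(RSS/n) + 2k, with k fitted parameters.
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

candidates = {}
for degree in (1, 2, 3):
    coef = np.polyfit(t, y, degree)
    candidates[degree] = aic(y, np.polyval(coef, t), degree + 1)

best_degree = min(candidates, key=candidates.get)
```

With a clear quadratic signal the criterion typically selects degree 2, since the cubic term buys too little residual reduction to offset its penalty.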
Directory of Open Access Journals (Sweden)
Huamin Zhu
2016-01-01
Full Text Available Nowadays more and more cloud infrastructure service providers are providing large numbers of service instances which are a combination of diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and of systematic discovery and selection methods makes it difficult for users to discover and choose services. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on this description, a systematic discovery and selection method for cloud infrastructure services is proposed. The automatic analysis techniques of the feature model are introduced to verify the model's validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where the subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.
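The final ranking step, combining subjective and objective weights and scoring matched instances, can be sketched with hypothetical metrics. The instance values, criteria, and the simple averaging of the two weight vectors below are all assumptions, not the paper's specific weighting scheme.

```python
import numpy as np

# Hypothetical decision metrics for three candidate cloud service instances:
# columns = [price (lower better), vCPU count, memory GB, reliability score].
metrics = np.array([
    [0.08, 4, 16, 0.95],
    [0.12, 8, 32, 0.99],
    [0.06, 2,  8, 0.90],
])
cost_criterion = np.array([True, False, False, False])

# Min-max normalize, inverting cost criteria so "higher is better" throughout.
lo, hi = metrics.min(axis=0), metrics.max(axis=0)
norm = (metrics - lo) / (hi - lo)
norm[:, cost_criterion] = 1 - norm[:, cost_criterion]

# Combine subjective (expert-assigned) and objective (e.g. entropy-derived)
# weights by simple averaging, one common compromise scheme.
subjective = np.array([0.4, 0.2, 0.2, 0.2])
objective = np.array([0.25, 0.25, 0.25, 0.25])
weights = (subjective + objective) / 2

scores = norm @ weights
ranking = np.argsort(-scores)  # best matching instance first
```

Averaging the weight vectors is only one way to blend subjective and objective weighting; multiplicative combination with renormalization is another common choice.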
Villanea, Fernando A.; Safi, Kristin N.; Busch, Jeremiah W.
2015-01-01
The ABO locus in humans is characterized by elevated heterozygosity and very similar allele frequencies among populations scattered across the globe. Using knowledge of ABO protein function, we generated a simple model of asymmetric negative frequency dependent selection and genetic drift to explain the maintenance of ABO polymorphism and its loss in human populations. In our simulations, regardless of the strength of selection, large effective population sizes result in ABO allele frequencies that closely match those observed in most continental populations. Populations must be moderately small to fall out of equilibrium and lose either the A or B allele (Ne ≤ 50) and much smaller (Ne ≤ 25) for the complete loss of diversity, which nearly always involved the fixation of the O allele. A pattern of low heterozygosity at the ABO locus where loss of polymorphism occurs in our model is consistent with small populations, such as Native American populations. This study provides a general evolutionary model to explain the observed global patterns of polymorphism at the ABO locus and the pattern of allele loss in small populations. Moreover, these results inform the range of population sizes associated with the recent human colonization of the Americas. PMID:25946124
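A toy Wright-Fisher simulation conveys the interplay the authors model: negative frequency-dependent selection pushing A, B, and O frequencies toward balance while binomial drift, stronger at small Ne, perturbs them. The fitness function, selection coefficient, and starting frequencies below are illustrative, not the paper's asymmetric model.

```python
import random

def step(freqs, ne, s=0.05):
    """One Wright-Fisher generation for allele frequencies (A, B, O) with
    negative frequency-dependent selection: rarer alleles get a fitness boost."""
    mean = 1.0 / len(freqs)
    # Fitness rises as an allele drops below the mean frequency (toy form).
    fit = [1.0 + s * (mean - f) for f in freqs]
    w = [f * x for f, x in zip(freqs, fit)]
    total = sum(w)
    probs = [x / total for x in w]
    # Drift: resample 2*Ne gene copies from the selection-adjusted frequencies.
    draws = random.choices(range(len(freqs)), weights=probs, k=2 * ne)
    return [draws.count(i) / (2 * ne) for i in range(len(freqs))]

random.seed(1)
freqs = [0.3, 0.1, 0.6]  # rough A/B/O starting point (illustrative)
for _ in range(500):
    freqs = step(freqs, ne=1000)
```

Rerunning with much smaller `ne` (e.g. 25 to 50) makes allele loss by drift far more likely, the qualitative behavior the abstract reports.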
Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao
2012-01-01
Biologically-inspired models and algorithms are considered as promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with the increase of the dimensions of the input feature vector (outer factor) as well as its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and five classes of green tea derived from five different provinces of China, were used for experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6-8 parallel channels, with a principal-component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3-5 pattern classes, considering the trade-off between time consumption and classification rate.
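The 90%-cumulative-variance rule used for the feature vector can be sketched with a NumPy-only PCA; the sensor responses here are synthetic stand-ins, not the wine or tea measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for sensor-array responses: 60 samples x 16 sensors,
# with most variance concentrated in a few latent directions.
latent = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 16))
X = latent + 0.1 * rng.normal(size=(60, 16))

# PCA via SVD of the mean-centered data matrix.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
cumulative = np.cumsum(explained)

# Smallest number of principal components reaching 90% cumulative variance.
k = int(np.searchsorted(cumulative, 0.90) + 1)
```

The selected `k` leading components then form the reduced feature vector fed to the classifier.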
Demographic model selection using random forests and the site frequency spectrum.
Smith, Megan L; Ruffley, Megan; Espíndola, Anahí; Tank, David C; Sullivan, Jack; Carstens, Bryan C
2017-09-01
Phylogeographic data sets have grown from tens to thousands of loci in recent years, but extant statistical methods do not take full advantage of these large data sets. For example, approximate Bayesian computation (ABC) is a commonly used method for the explicit comparison of alternate demographic histories, but it is limited by the "curse of dimensionality" and issues related to the simulation and summarization of data when applied to next-generation sequencing (NGS) data sets. We implement here several improvements to overcome these difficulties. We use a Random Forest (RF) classifier for model selection to circumvent the curse of dimensionality and apply a binned representation of the multidimensional site frequency spectrum (mSFS) to address issues related to the simulation and summarization of large SNP data sets. We evaluate the performance of these improvements using simulation and find low overall error rates (~7%). We then apply the approach to data from Haplotrema vancouverense, a land snail endemic to the Pacific Northwest of North America. Fifteen demographic models were compared, and our results support a model of recent dispersal from coastal to inland rainforests. Our results demonstrate that binning is an effective strategy for the construction of a mSFS and imply that the statistical power of RF when applied to demographic model selection is at least comparable to traditional ABC algorithms. Importantly, by combining these strategies, large sets of models with differing numbers of populations can be evaluated. © 2017 John Wiley & Sons Ltd.
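The binning strategy for the site frequency spectrum can be sketched directly; a full pipeline would simulate many such binned vectors under each demographic model and train a Random Forest classifier on them. The SNP counts below are random stand-ins, not coalescent output.

```python
import numpy as np

rng = np.random.default_rng(7)

# Minor-allele counts for 5000 simulated SNPs in a sample of 20 chromosomes
# (hypothetical stand-in for one simulation replicate under one model).
n_chrom = 20
counts = rng.integers(1, n_chrom // 2 + 1, size=5000)

def binned_sfs(allele_counts, n_chrom, n_bins=5):
    """Collapse the folded site frequency spectrum into a few coarse bins,
    reducing dimensionality before feeding summaries to a classifier."""
    edges = np.linspace(0.5, n_chrom // 2 + 0.5, n_bins + 1)
    hist, _ = np.histogram(allele_counts, bins=edges)
    return hist / hist.sum()  # proportions, comparable across data sets

summary = binned_sfs(counts, n_chrom)
```

Binning trades spectral resolution for a fixed, low-dimensional summary, which is what lets the classifier scale to large SNP data sets and many candidate models.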
2013-04-03
... mathematical modeling methods used in predicting the dispersion of heated effluent in natural water bodies. The... COMMISSION Reporting Procedure for Mathematical Models Selected To Predict Heated Effluent Dispersion in... Mathematical Models Selected to Predict Heated Effluent Dispersion in Natural Water Bodies.'' The guide is...
1975-06-01
[Table-of-contents excerpt: physical and biological characteristics and cleanup/no-cleanup case results for Unimak Pass, Port Möller, Kamishak Bay, Valdez Narrows, Drift River, Port Graham, and Kvichak.]
Statistical power of model selection strategies for genome-wide association studies.
Directory of Open Access Journals (Sweden)
Zheyang Wu
2009-07-01
Full Text Available Genome-wide association studies (GWAS) aim to identify genetic variants related to diseases by examining the associations between phenotypes and hundreds of thousands of genotyped markers. Because many genes are potentially involved in common diseases and a large number of markers are analyzed, it is crucial to devise an effective strategy to identify truly associated variants that have individual and/or interactive effects, while controlling false positives at the desired level. Although a number of model selection methods have been proposed in the literature, including marginal search, exhaustive search, and forward search, their relative performance has only been evaluated through limited simulations due to the lack of an analytical approach to calculating the power of these methods. This article develops a novel statistical approach for power calculation, derives accurate formulas for the power of different model selection strategies, and then uses the formulas to evaluate and compare these strategies in genetic model spaces. In contrast to previous studies, our theoretical framework allows for random genotypes, correlations among test statistics, and a false-positive control based on GWAS practice. After the accuracy of our analytical results is validated through simulations, they are utilized to systematically evaluate and compare the performance of these strategies in a wide class of genetic models. For a specific genetic model, our results clearly reveal how different factors, such as effect size, allele frequency, and interaction, jointly affect the statistical power of each strategy. An example is provided for the application of our approach to empirical research. The statistical approach used in our derivations is general and can be employed to address the model selection problems in other random predictor settings. We have developed an R package markerSearchPower to implement our formulas, which can be downloaded from the
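For the marginal (single-marker) search strategy, a standard normal approximation to power under a Bonferroni threshold can be written down directly. The effect size, sample size, and marker count below are hypothetical, and the paper's framework additionally handles random genotypes and correlated test statistics.

```python
import math

def norm_sf(x):
    # Standard normal survival function via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def inv_norm_cdf(q, lo=-10.0, hi=10.0):
    # Simple bisection inverse of the standard normal CDF.
    for _ in range(200):
        mid = (lo + hi) / 2
        if 1 - norm_sf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def marginal_power(beta, maf, sigma, n, n_markers, alpha=0.05):
    """Approximate power to detect one additive variant in a marginal scan.
    Two-sided Z test at Bonferroni-corrected level alpha / n_markers."""
    # Per-allele effect beta; genotype variance 2*maf*(1-maf) under HWE.
    ncp = beta * math.sqrt(n * 2 * maf * (1 - maf)) / sigma
    z_crit = inv_norm_cdf(1 - alpha / (2 * n_markers))
    # P(|Z + ncp| > z_crit), dominated by the upper tail for positive ncp.
    return norm_sf(z_crit - ncp) + norm_sf(z_crit + ncp)

power = marginal_power(beta=0.15, maf=0.3, sigma=1.0, n=5000, n_markers=500_000)
```

As the abstract emphasizes, power depends jointly on effect size, allele frequency, sample size, and the multiplicity correction; halving `beta` here collapses the power dramatically.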
Directory of Open Access Journals (Sweden)
David S. Younger
2010-01-01
Full Text Available Lyme neuroborreliosis, or “neurological Lyme disease,” was evidenced in 2 of 23 patients who met the strict case-selection criteria of the Centers for Disease Control and Prevention, employing a two-tier test to detect antibodies to Borrelia burgdorferi, at a single institution. One patient had symptomatic polyradiculoneuritis, dysautonomia, and serological evidence of early infection; another had symptomatic small fiber sensory neuropathy, distal polyneuropathy, dysautonomia, and serological evidence of late infection. In the remaining patients, symptoms initially ascribed to Lyme disease were probably unrelated to B. burgdorferi infection. Our findings suggest early susceptibility and protracted involvement of the nervous system, most likely due to the immunological effects of B. burgdorferi infection, although the exact mechanisms remain uncertain.
Selecting an interprofessional education model for a tertiary health care setting.
Menard, Prudy; Varpio, Lara
2014-07-01
The World Health Organization describes interprofessional education (IPE) and collaboration as necessary components of all health professionals' education - in curriculum and in practice. However, no standard framework exists to guide healthcare settings in developing or selecting an IPE model that meets the learning needs of licensed practitioners in practice and that suits the unique needs of their setting. Initially, a broad review of the grey literature (organizational websites, government documents and published books) and healthcare databases was undertaken for existing IPE models. Subsequently, database searches of published papers using Scopus, Scholars Portal and Medline were undertaken. Through this search process five IPE models were identified in the literature. This paper attempts to: briefly outline the five different models of IPE that are presently offered in the literature; and illustrate how a healthcare setting can select the IPE model within their context using Reeves' seven key trends in developing IPE. In presenting these results, the paper contributes to the interprofessional literature by offering an overview of possible IPE models that can be used to inform the implementation or modification of interprofessional practices in a tertiary healthcare setting.
Muller, Benjamin J.; Cade, Brian S.; Schwarzkopf, Lin
2018-01-01
Many different factors influence animal activity. Often, the value of an environmental variable may influence significantly the upper or lower tails of the activity distribution. For describing relationships with heterogeneous boundaries, quantile regressions predict a quantile of the conditional distribution of the dependent variable. A quantile count model extends linear quantile regression methods to discrete response variables, and is useful if activity is quantified by trapping, where there may be many tied (equal) values in the activity distribution, over a small range of discrete values. Additionally, different environmental variables in combination may have synergistic or antagonistic effects on activity, so examining their effects together, in a modeling framework, is a useful approach. Thus, model selection on quantile counts can be used to determine the relative importance of different variables in determining activity, across the entire distribution of capture results. We conducted model selection on quantile count models to describe the factors affecting activity (numbers of captures) of cane toads (Rhinella marina) in response to several environmental variables (humidity, temperature, rainfall, wind speed, and moon luminosity) over eleven months of trapping. Environmental effects on activity are understudied in this pest animal. In the dry season, model selection on quantile count models suggested that rainfall positively affected activity, especially near the lower tails of the activity distribution. In the wet season, wind speed limited activity near the maximum of the distribution, while minimum activity increased with minimum temperature. This statistical methodology allowed us to explore, in depth, how environmental factors influenced activity across the entire distribution, and is applicable to any survey or trapping regime, in which environmental variables affect activity.
Results of EPRI/ANL DCH investigations and model development
International Nuclear Information System (INIS)
Spencer, B.W.; Sienicki, J.J.; Sehgal, B.R.; Merilo, M.
1988-01-01
The results of a series of five experiments are described addressing the severity and mitigation of direct containment heating (DCH). The tests were performed in a 1:30 linear-scale mockup of the Zion PWR containment system using a reactor-material corium melt consisting of 60% UO2, 16% ZrO2, and 24% SSt at a nominal initial temperature of 2800°C. A ''worst-case'' type test involving unimpeded corium dispersal through an air atmosphere in a closed vessel produced an atmosphere heatup of 323 K, equivalent to a DCH efficiency of 62%. With the addition of structural features which impeded the corium dispersal, representative of dispersal pathway features at Zion, the DCH efficiency was reduced to 1-5%. (This important result is scale dependent and requires larger scale tests such as the SURTSEY program at SNL plus mechanistic modeling for application to the reactor system.) With the addition of water in the cavity region, there was no measurable heatup of the atmosphere. This was attributable to the vigorous codispersal of water with corium, which prevented the temperature of the atmosphere from significantly exceeding T_sat. In this case the DCH load was replaced by the more benign ''steam spike'' from corium quench. Significant oxidation of the corium constituents occurred in the tests, adding chemical energy to the system and producing hydrogen. Overall, the results suggest that with consideration of realistic, plant-specific, mitigating features, DCH may be no worse and possibly far less severe than the previously examined steam spike. Implications for accident management are addressed. 17 refs., 7 figs., 4 tabs
Akman, Olcay; Hallam, Joshua W.
2010-01-01
We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency. PMID:20661297
Models of Aire-dependent gene regulation for thymic negative selection
Directory of Open Access Journals (Sweden)
Dina Danso-Abeam
2011-05-01
Full Text Available Mutations in the Autoimmune Regulator (AIRE) gene lead to Autoimmune Polyendocrinopathy Syndrome type 1 (APS1), characterized by the development of multi-organ autoimmune damage. The mechanism by which defects in AIRE result in autoimmunity has been the subject of intense scrutiny. At the cellular level, the working model explains most of the clinical and immunological characteristics of APS1, with AIRE driving the expression of tissue restricted antigens (TRAs) in the epithelial cells of the thymic medulla. This TRA expression results in effective negative selection of TRA-reactive thymocytes, preventing autoimmune disease. At the molecular level, the mechanism by which AIRE initiates TRA expression in the thymic medulla remains unclear. Multiple different models for the molecular mechanism have been proposed, ranging from classical transcriptional activity, to random induction of gene expression, to epigenetic tag recognition effect, to altered cell biology. In this review, we evaluate each of these models and discuss their relative strengths and weaknesses.
A Duality Result for the Generalized Erlang Risk Model
Directory of Open Access Journals (Sweden)
Lanpeng Ji
2014-11-01
Full Text Available In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.
Implications of allometric model selection for county-level biomass mapping
Directory of Open Access Journals (Sweden)
Laura Duncanson
2017-10-01
Full Text Available Abstract. Background: Carbon accounting in forests remains a large area of uncertainty in the global carbon cycle. Forest aboveground biomass is therefore an attribute of great interest for the forest management community, but the accuracy of aboveground biomass maps depends on the accuracy of the underlying field estimates used to calibrate models. These field estimates depend on the application of allometric models, which often have unknown and unreported uncertainties outside of the size class or environment in which they were developed. Results: Here, we test three popular allometric approaches to field biomass estimation, and explore the implications of allometric model selection for county-level biomass mapping in Sonoma County, California. We test three allometric models: Jenkins et al. (For Sci 49(1): 12–35, 2003), Chojnacky et al. (Forestry 87(1): 129–151, 2014) and the US Forest Service's Component Ratio Method (CRM). We found that the Jenkins and Chojnacky models perform comparably, but that at both the field plot level and the total county level there was a ~ 20% difference between these estimates and the CRM estimates. Further, we show that discrepancies are greater in high-biomass areas with high canopy cover and relatively moderate heights (25–45 m). The CRM models, although on average ~ 20% lower than Jenkins and Chojnacky, produce higher estimates in the tallest forest samples (> 60 m), while Jenkins generally produces higher estimates of biomass in forests < 50 m tall. Discrepancies do not continually increase with increasing forest height, suggesting that inclusion of height in allometric models is not primarily driving discrepancies. Models developed using all three allometric models underestimate high biomass and overestimate low biomass, as expected with random forest biomass modeling. However, these deviations were generally larger using the Jenkins and Chojnacky allometries, suggesting that the CRM approach may be more
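The generalized allometric form shared by Jenkins-style equations, ln(biomass) = b0 + b1·ln(dbh), makes it easy to see how coefficient choices diverge with tree size. The coefficient pairs below are invented for illustration and are not the published Jenkins or Chojnacky values.

```python
import math

def jenkins_form(dbh_cm, b0, b1):
    """Generalized allometric form: ln(biomass_kg) = b0 + b1 * ln(dbh_cm)."""
    return math.exp(b0 + b1 * math.log(dbh_cm))

# Illustrative coefficient pairs for two hypothetical allometric models
# (NOT the published Jenkins/Chojnacky coefficients).
model_a = (-2.5, 2.5)
model_b = (-2.2, 2.4)

for dbh in (10.0, 30.0, 60.0):
    est_a = jenkins_form(dbh, *model_a)
    est_b = jenkins_form(dbh, *model_b)
    # Percent difference between model estimates at this diameter.
    pct_diff = 100.0 * (est_a - est_b) / est_b
```

Because the exponents differ, the percent difference between the two models changes sign and grows with diameter, which is why model choice matters most in large-tree, high-biomass stands.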
Directory of Open Access Journals (Sweden)
Vishwanath Varma
2014-06-01
Full Text Available Since the ability to time rhythmic behaviours in accordance with cyclic environments is likely to confer adaptive advantage to organisms, the underlying clocks are believed to be selected for stability in timekeeping over evolutionary time scales. Here we report the results of a study aimed at assessing fitness consequences of a long-term laboratory selection for tighter circadian organisation using fruit fly Drosophila melanogaster populations. We selected flies emerging in a narrow window of 1 h in the morning for several generations and assayed their life history traits such as pre-adult development time, survivorship, adult lifespan and lifetime fecundity. We chose flies emerging during the selection window (in the morning and another window (in the evening to represent adaptive and non-adaptive phenotypes, respectively, and examined the correlation of emergence time with adult fitness traits. Adult lifespan of males from the selected populations does not differ from the controls, whereas females from the selected populations have significantly shorter lifespan and produce more eggs during their mid-life compared to the controls. Although there is no difference in the lifespan of males of the selected populations, whether they emerge in morning or evening window, morning emerging females live slightly shorter and lay more eggs during the mid-life stage compared to those emerging in the evening. Interestingly, such a time of emergence dependent difference in fitness is not seen in flies from the control populations. These results, therefore, suggest reduced lifespan and enhanced mid-life reproductive output in females selected for narrow gate of emergence, and a sex-dependent genetic correlation between the timing of emergence and key fitness traits in these populations.
Lammers, Jeroen; Goossens, Ferry; Conrod, Patricia; Engels, Rutger; Wiers, Reinout W; Kleinjan, Marloes
2017-08-01
To explore whether specific groups of adolescents (i.e., scoring high on personality risk traits, having a lower education level, or being male) benefit more from the Preventure intervention with regard to curbing their drinking behaviour. A clustered randomized controlled trial, with participants randomly assigned to a 2-session coping skills intervention or a control no-intervention condition. Fifteen secondary schools throughout The Netherlands; 7 schools in the intervention and 8 schools in the control condition. 699 adolescents aged 13-15; 343 allocated to the intervention and 356 to the control condition; with drinking experience and elevated scores in either negative thinking, anxiety sensitivity, impulsivity or sensation seeking. Differential effectiveness of the Preventure program was examined for the personality traits group, education level and gender on past-month binge drinking (main outcome), binge frequency, alcohol use, alcohol frequency and problem drinking, at 12 months post-intervention. Preventure is a selective school-based alcohol prevention programme targeting personality risk factors. The comparator was a no-intervention control. Intervention effects were moderated by the personality traits group and by education level. More specifically, significant intervention effects were found on reducing alcohol use within the anxiety sensitivity group (OR=2.14, CI=1.40, 3.29) and reducing binge drinking (OR=1.76, CI=1.38, 2.24) and binge drinking frequency (β=0.24, p=0.04) within the sensation seeking group at 12 months post-intervention. Also, lower educated young adolescents reduced binge drinking (OR=1.47, CI=1.14, 1.88), binge drinking frequency (β=0.25, p=0.04), alcohol use (OR=1.32, CI=1.06, 1.65) and alcohol use frequency (β=0.47, p=0.01), but not those in the higher education group. Post hoc latent-growth analyses revealed significant effects on the development of binge drinking (β=-0.19, p=0.02) and binge drinking frequency (β=-0.10, p=0
DEFF Research Database (Denmark)
Mikkelsen, Frederik Vissing
Broadly speaking, this thesis is devoted to model selection applied to ordinary differential equations and risk estimation under model selection. A model selection framework was developed for modelling time course data by ordinary differential equations. The framework is accompanied by the R software...... effective computational tools for estimating unknown structures in dynamical systems, such as gene regulatory networks, which may be used to predict downstream effects of interventions in the system. A recommended algorithm based on the computational tools is presented and thoroughly tested in various...... simulation studies and applications. The second part of the thesis also concerns model selection, but focuses on risk estimation, i.e., estimating the error of mean estimators involving model selection. An extension of Stein's unbiased risk estimate (SURE), which applies to a class of estimators with model...
Model selection criteria : how to evaluate order restrictions
Kuiper, R.M.
2012-01-01
Researchers often have ideas about the ordering of model parameters. They frequently have one or more theories about the ordering of the group means, in analysis of variance (ANOVA) models, or about the ordering of coefficients corresponding to the predictors, in regression models. A researcher might
The Selection of Turbulence Models for Prediction of Room Airflow
DEFF Research Database (Denmark)
Nielsen, Peter V.
This paper discusses the use of different turbulence models and their advantages in given situations. As an example, it is shown that a simple zero-equation model can be used for the prediction of special situations such as flow with a low level of turbulence. A zero-equation model with compensation...
A Four-Step Model for Teaching Selection Interviewing Skills
Kleiman, Lawrence S.; Benek-Rivera, Joan
2010-01-01
The topic of selection interviewing lends itself well to experience-based teaching methods. Instructors often teach this topic by using a two-step process. The first step consists of lecturing students on the basic principles of effective interviewing. During the second step, students apply these principles by role-playing mock interviews with…
Modelling the negative effects of landscape fragmentation on habitat selection
Langevelde, van F.
2015-01-01
Landscape fragmentation constrains movement of animals between habitat patches. Fragmentation may, therefore, limit the possibilities to explore and select the best habitat patches, and some animals may have to cope with low-quality patches due to these movement constraints. If so, these individuals
Selecting Human Error Types for Cognitive Modelling and Simulation
Mioch, T.; Osterloh, J.P.; Javaux, D.
2010-01-01
This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the
RUC at TREC 2014: Select Resources Using Topic Models
2014-11-01
preprocess the data by parsing the pages (html, txt, doc, xls, ppt, pdf, xml files) into tokens, removing the stopwords listed in the Indri's...Gravano. Classification-Aware Hidden-Web Text Database Selection. ACM Trans. Inf. Syst. Vol. 26, No. 2, Article 6, April 2008. [8] J. Seo and B. W
Waste glass corrosion modeling: Comparison with experimental results
International Nuclear Information System (INIS)
Bourcier, W.L.
1994-01-01
Models for borosilicate glass dissolution must account for the processes of (1) kinetically-controlled network dissolution, (2) precipitation of secondary phases, (3) ion exchange, (4) rate-limiting diffusive transport of silica through a hydrous surface reaction layer, and (5) specific glass surface interactions with dissolved cations and anions. Current long-term corrosion models for borosilicate glass employ a rate equation consistent with transition state theory embodied in a geochemical reaction-path modeling program that calculates aqueous phase speciation and mineral precipitation/dissolution. These models are currently under development. Future experimental and modeling work to better quantify the rate-controlling processes and validate these models is necessary before the models can be used in repository performance assessment calculations
Argonne Fuel Cycle Facility ventilation system -- modeling and results
International Nuclear Information System (INIS)
Mohr, D.; Feldman, E.E.; Danielson, W.F.
1995-01-01
This paper describes an integrated study of the Argonne-West Fuel Cycle Facility (FCF) interconnected ventilation systems during various operations. Analyses and test results include first a nominal condition reflecting balanced pressures and flows followed by several infrequent and off-normal scenarios. This effort is the first study of the FCF ventilation systems as an integrated network wherein the hydraulic effects of all major air systems have been analyzed and tested. The FCF building consists of many interconnected regions in which nuclear fuel is handled, transported and reprocessed. The ventilation systems comprise a large number of ducts, fans, dampers, and filters which together must provide clean, properly conditioned air to the worker occupied spaces of the facility while preventing the spread of airborne radioactive materials to clean areas or the atmosphere. This objective is achieved by keeping the FCF building at a partial vacuum in which the contaminated areas are kept at lower pressures than the other worker occupied spaces. The ventilation systems of FCF and the EBR-II reactor are analyzed as an integrated totality, as demonstrated. We then developed the network model shown in Fig. 2 for the TORAC code. The scope of this study was to assess the measured results from the acceptance/flow balancing testing and to predict the effects of power failures, hatch and door openings, single-failure faulted conditions, EBR-II isolation, and other infrequent operations. The studies show that the FCF ventilation systems are very controllable and remain stable following off-normal events. In addition, the FCF ventilation system complex is essentially immune to reverse flows and spread of contamination to clean areas during normal and off-normal operation
A hierarchy of models for simulating experimental results from a 3D heterogeneous porous medium
Vogler, Daniel; Ostvar, Sassan; Paustian, Rebecca; Wood, Brian D.
2018-04-01
In this work we examine the dispersion of conservative tracers (bromide and fluorescein) in an experimentally-constructed three-dimensional dual-porosity porous medium. The medium is highly heterogeneous (σ_Y^2 = 5.7), and consists of spherical, low-hydraulic-conductivity inclusions embedded in a high-hydraulic-conductivity matrix. The bimodal medium was saturated with tracers, and then flushed with tracer-free fluid while the effluent breakthrough curves were measured. The focus for this work is to examine a hierarchy of four models (in the absence of adjustable parameters) with decreasing complexity to assess their ability to accurately represent the measured breakthrough curves. The most information-rich model was (1) a direct numerical simulation of the system in which the geometry, boundary and initial conditions, and medium properties were fully independently characterized experimentally with high fidelity. The reduced-information models included: (2) a simplified numerical model identical to the fully-resolved direct numerical simulation (DNS) model, but using a domain that was one-tenth the size; (3) an upscaled mobile-immobile model that allowed for a time-dependent mass-transfer coefficient; and (4) an upscaled mobile-immobile model that assumed a space-time constant mass-transfer coefficient. The results illustrated that all four models provided accurate representations of the experimental breakthrough curves as measured by global RMS error. The primary component of error induced in the upscaled models appeared to arise from the neglect of convection within the inclusions. We discuss the necessity to assign value (via a utility function or other similar method) to outcomes if one is to further select from among model options. Interestingly, these results suggested that the conventional convection-dispersion equation, when applied in a way that resolves the heterogeneities, yields models with high fidelity without requiring the imposition of a more
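For orientation, the constant-coefficient mobile-immobile formulation that model (4) above refers to is usually written, in one dimension, as follows (a textbook form with generic symbols, not the authors' exact notation):

```latex
\theta_m \frac{\partial c_m}{\partial t} + \theta_{im} \frac{\partial c_{im}}{\partial t}
  = \theta_m D \frac{\partial^2 c_m}{\partial x^2} - q \frac{\partial c_m}{\partial x},
\qquad
\theta_{im} \frac{\partial c_{im}}{\partial t} = \alpha \left( c_m - c_{im} \right),
```

where c_m and c_im are the tracer concentrations in the mobile and immobile domains, θ_m and θ_im the corresponding volume fractions, D the dispersion coefficient, q the Darcy flux, and α the first-order mass-transfer coefficient. Model (3) in the hierarchy relaxes the last assumption by allowing α = α(t).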
Final model independent result of DAMA/LIBRA-phase1
Energy Technology Data Exchange (ETDEWEB)
Bernabei, R.; D'Angelo, S.; Di Marco, A. [Universita di Roma ''Tor Vergata'', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma ''Tor Vergata'', Rome (Italy); Belli, P. [INFN, sez. Roma ''Tor Vergata'', Rome (Italy); Cappella, F.; D'Angelo, A.; Prosperi, D. [Universita di Roma ''La Sapienza'', Dipartimento di Fisica, Rome (Italy); INFN, sez. Roma, Rome (Italy); Caracciolo, V.; Castellano, S.; Cerulli, R. [INFN, Laboratori Nazionali del Gran Sasso, Assergi (Italy); Dai, C.J.; He, H.L.; Kuang, H.H.; Ma, X.H.; Sheng, X.D.; Wang, R.G. [Chinese Academy, IHEP, Beijing (China); Incicchitti, A. [INFN, sez. Roma, Rome (Italy); Montecchia, F. [INFN, sez. Roma ''Tor Vergata'', Rome (Italy); Universita di Roma ''Tor Vergata'', Dipartimento di Ingegneria Civile e Ingegneria Informatica, Rome (Italy); Ye, Z.P. [Chinese Academy, IHEP, Beijing (China); University of Jing Gangshan, Jiangxi (China)
2013-12-15
The results obtained with the total exposure of 1.04 ton x yr collected by DAMA/LIBRA-phase1 deep underground at the Gran Sasso National Laboratory (LNGS) of the I.N.F.N. during 7 annual cycles (i.e. adding a further 0.17 ton x yr exposure) are presented. The DAMA/LIBRA-phase1 data give evidence for the presence of Dark Matter (DM) particles in the galactic halo, on the basis of the exploited model independent DM annual modulation signature by using highly radio-pure NaI(Tl) target, at 7.5σ C.L. Including also the first generation DAMA/NaI experiment (cumulative exposure 1.33 ton x yr, corresponding to 14 annual cycles), the C.L. is 9.3σ and the modulation amplitude of the single-hit events in the (2-6) keV energy interval is: (0.0112±0.0012) cpd/kg/keV; the measured phase is (144±7) days and the measured period is (0.998±0.002) yr, values well in agreement with those expected for DM particles. No systematic or side reaction able to mimic the exploited DM signature has been found or suggested by anyone over more than a decade. (orig.)
Innovation ecosystem model for commercialization of research results
Directory of Open Access Journals (Sweden)
Vlăduţ Gabriel
2017-07-01
Innovation means creativity and added value recognised by the market. The first step in creating a sustainable commercialization of research results through a Technological Transfer (TT) mechanism is, on the one hand, to define the "technology" which will be transferred and, on the other hand, to define the context in which the TT mechanism works: the ecosystem. The focus must be set on technology as an entity, not as a science or a study of the practical industrial arts, and certainly not any specific applied science. The transfer object, the technology, must rely on a subjectively determined but specifiable set of processes and products. Focusing on the product alone is not sufficient for the transfer and diffusion of technology: it is not merely the product that is transferred but also knowledge of its use and application. The innovation ecosystem model brings together new companies, experienced business leaders, researchers, government officials, established technology companies, and investors. This environment provides those new companies with a wealth of technical expertise, business experience, and access to capital that supports innovation in the early stages of growth.
Directory of Open Access Journals (Sweden)
Qian Wang
2016-01-01
Spectroscopy is an efficient and widely used quantitative analysis method. In this paper, a spectral quantitative analysis model combining wavelength selection and topology-structure optimization is proposed. A backpropagation neural network is adopted for building the component prediction model, and the simultaneous optimization of the wavelength selection and the topology structure of the neural network is realized by nonlinear adaptive evolutionary programming (NAEP). The hybrid chromosome in the binary scheme of NAEP has three parts: the first part represents the topology structure of the neural network, the second part represents the selection of wavelengths in the spectral data, and the third part represents the mutation parameters of NAEP. Two real flue gas datasets are used in the experiments. To demonstrate the effectiveness of the method, the partial least squares with full spectrum, the partial least squares combined with a genetic algorithm, the uninformative variable elimination method, the backpropagation neural network with full spectrum, the backpropagation neural network combined with a genetic algorithm, and the proposed method are all used to build the component prediction model. Experimental results verify that the proposed method predicts more accurately and robustly and is a practical spectral analysis tool.
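The three-part hybrid chromosome described in this abstract can be sketched concretely. The sizes, ranges, and function names below are illustrative assumptions for exposition, not values taken from the paper:

```python
import numpy as np

# Hypothetical sketch of a three-part hybrid chromosome:
#   part 1 - binary mask for the network topology (hidden units on/off)
#   part 2 - binary mask selecting wavelengths from the spectrum
#   part 3 - real-valued self-adaptive mutation parameters
N_HIDDEN_MAX = 20    # assumed upper bound on hidden-layer size
N_WAVELENGTHS = 256  # assumed number of spectral channels

rng = np.random.default_rng(0)

def random_chromosome():
    topology = rng.integers(0, 2, N_HIDDEN_MAX)      # part 1
    wavelengths = rng.integers(0, 2, N_WAVELENGTHS)  # part 2
    mutation_rates = rng.uniform(0.01, 0.2, 2)       # part 3
    return topology, wavelengths, mutation_rates

def decode(chromosome):
    """Translate a chromosome into a network size and a channel subset."""
    topology, wavelengths, _ = chromosome
    n_hidden = int(topology.sum())          # number of active hidden units
    selected = np.flatnonzero(wavelengths)  # indices of retained channels
    return n_hidden, selected

chrom = random_chromosome()
n_hidden, selected = decode(chrom)
print(n_hidden, len(selected))
```

An evolutionary loop would then train a backpropagation network on the `selected` channels with `n_hidden` units and use the prediction error as the fitness of the chromosome.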
A model of sexual selection and female use of refuge in a coercive mating system.
Bokides, Dessa; Lou, Yuan; Hamilton, Ian M
2012-08-22
In many non-monogamous systems, males invest less in progeny than do females. This leaves males with higher potential rates of reproduction, and a likelihood of sexual conflict, including, in some systems, coercive matings. If coercive matings are costly, the best female strategy may be to avoid male interaction. We present a model that demonstrates female movement in response to male harassment as a mechanism to lower the costs associated with male coercion, and the effect that female movement has on selection in males for harassment. We found that, when females can move from a habitat patch to a refuge to which males do not have access, there may be selection for either the high- or the low-harassment male phenotype, or both, depending on the relationship between the harassment level of male types in the population and a threshold level of male harassment. This threshold harassment level depends on the relative number of males and females in the population, and the relative resource values of the habitats; the threshold increases as the sex ratio favours females, and decreases with the value of the refuge patch or total population. Our model predicts that selection will favour the harassment level that lies closest to this threshold, and differing harassment levels will coexist within the population only if they lie on opposite sides of the threshold. Our model is consistent with empirical results suggesting that an intermediate harassment level provides maximum reproductive fitness to males when females are mobile.
Selected Results from the ATLAS Experiment on its 25th Anniversary
Djama, Fares; The ATLAS collaboration
2018-01-01
The Lomonosov Conference and the ATLAS Collaboration celebrated their 25th anniversaries at a few week interval. This gave us the opportunity to present a brief history of ATLAS and to discuss some of its more important results.
Models of frequency-dependent selection with mutation from parental alleles.
Trotter, Meredith V; Spencer, Hamish G
2013-09-01
Frequency-dependent selection (FDS) remains a common heuristic explanation for the maintenance of genetic variation in natural populations. The pairwise-interaction model (PIM) is a well-studied general model of frequency-dependent selection, which assumes that a genotype's fitness is a function of within-population intergenotypic interactions. Previous theoretical work indicated that this type of model is able to sustain large numbers of alleles at a single locus when it incorporates recurrent mutation. These studies, however, have ignored the impact of the distribution of fitness effects of new mutations on the dynamics and end results of polymorphism construction. We suggest that a natural way to model mutation would be to assume mutant fitness is related to the fitness of the parental allele, i.e., the existing allele from which the mutant arose. Here we examine the numbers and distributions of fitnesses and alleles produced by construction under the PIM with mutation from parental alleles and the impacts on such measures due to different methods of generating mutant fitnesses. We find that, in comparison with previous results, generating mutants from existing alleles lowers the average number of alleles likely to be observed in a system subject to FDS, but produces polymorphisms that are highly stable and have realistic allele-frequency distributions.
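The frequency-dependent dynamics underlying the pairwise-interaction model can be illustrated with a minimal simulation. This is a deliberately simplified allele-level sketch with an invented payoff matrix, not the genotype-level PIM of the paper:

```python
import numpy as np

def pim_step(p, A):
    """One generation of frequency-dependent selection.
    p: allele frequencies, shape (k,); A: interaction payoffs, shape (k, k).
    The marginal fitness of each allele depends on the current population
    composition, which is the defining feature of the PIM."""
    w = A @ p             # marginal fitness of each allele
    w_bar = p @ w         # mean population fitness
    return p * w / w_bar  # replicator-style frequency update

# Illustrative payoffs with a rare-type advantage (negative FDS),
# which maintains a stable interior polymorphism.
A = np.array([[1.0, 1.4],
              [1.6, 1.0]])
p = np.array([0.9, 0.1])
for _ in range(200):
    p = pim_step(p, A)
print(p)  # converges toward the interior equilibrium p ≈ (0.4, 0.6)
```

Extending this sketch toward the paper's setting would mean adding recurrent mutation in which each mutant's row and column of `A` are drawn from a distribution centred on those of its parental allele.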
Shirk, Andrew J; Landguth, Erin L; Cushman, Samuel A
2018-01-01
Anthropogenic migration barriers fragment many populations and limit the ability of species to respond to climate-induced biome shifts. Conservation actions designed to conserve habitat connectivity and mitigate barriers are needed to unite fragmented populations into larger, more viable metapopulations, and to allow species to track their climate envelope over time. Landscape genetic analysis provides an empirical means to infer landscape factors influencing gene flow and thereby inform such conservation actions. However, there are currently many methods available for model selection in landscape genetics, and considerable uncertainty as to which provide the greatest accuracy in identifying the true landscape model influencing gene flow among competing alternative hypotheses. In this study, we used population genetic simulations to evaluate the performance of seven regression-based model selection methods on a broad array of landscapes that varied by the number and type of variables contributing to resistance, the magnitude and cohesion of resistance, as well as the functional relationship between variables and resistance. We also assessed the effect of transformations designed to linearize the relationship between genetic and landscape distances. We found that linear mixed effects models had the highest accuracy in every way we evaluated model performance; however, other methods also performed well in many circumstances, particularly when landscape resistance was high and the correlation among competing hypotheses was limited. Our results provide guidance for which regression-based model selection methods provide the most accurate inferences in landscape genetic analysis and thereby best inform connectivity conservation actions. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Modeling and Experimental Validation of the Electron Beam Selective Melting Process
Directory of Open Access Journals (Sweden)
Wentao Yan
2017-10-01
Electron beam selective melting (EBSM) is a promising additive manufacturing (AM) technology. The EBSM process consists of three major procedures: ① spreading a powder layer, ② preheating to slightly sinter the powder, and ③ selectively melting the powder bed. The highly transient multi-physics phenomena involved in these procedures pose a significant challenge for in situ experimental observation and measurement. To advance the understanding of the physical mechanisms in each procedure, we leverage high-fidelity modeling and post-process experiments. The models resemble the actual fabrication procedures, including ① a powder-spreading model using the discrete element method (DEM), ② a phase field (PF) model of powder sintering (solid-state sintering), and ③ a powder-melting (liquid-state sintering) model using the finite volume method (FVM). Comprehensive insights into all the major procedures are provided, which have rarely been reported. Preliminary simulation results (including powder particle packing within the powder bed, sintering neck formation between particles, and single-track defects) agree qualitatively with experiments, demonstrating the ability to understand the mechanisms and to guide the design and optimization of the experimental setup and manufacturing process.
Blade element momentum modeling of inflow with shear in comparison with advanced model results
DEFF Research Database (Denmark)
Aagaard Madsen, Helge; Riziotis, V.; Zahle, Frederik
2012-01-01
shear is present in the inflow. This gives guidance to how the BEM modeling of shear should be implemented. Another result from the advanced vortex model computations is a clear indication of influence of the ground, and the general tendency is a speed up effect of the flow through the rotor giving...
Regionalization of climate model results for the North Sea
Energy Technology Data Exchange (ETDEWEB)
Kauker, F.
1999-07-01
A dynamical downscaling is presented that allows an estimation of potential effects of climate change on the North Sea. Therefore, the ocean general circulation model OPYC is adapted for application on a shelf by adding a lateral boundary formulation and a tide model. In this set-up the model is forced, first, with data from the ECMWF reanalysis for model validation and the study of the natural variability, and, second, with data from climate change experiments to estimate the effects of climate change on the North Sea. (orig.)
Selected Aspects of Computer Modeling of Reinforced Concrete Structures
Directory of Open Access Journals (Sweden)
Szczecina M.
2016-03-01
The paper presents some important aspects concerning material constants of concrete and stages of modeling of reinforced concrete structures. The problems taken into account are: a choice of a proper material model for concrete, establishing the compressive and tensile behavior of concrete, and establishing the values of dilation angle, fracture energy and relaxation time for concrete. Proper values of material constants are fixed in simple compression and tension tests. The effectiveness and correctness of the applied model is checked on the example of reinforced concrete frame corners under an opening bending moment. Calculations are performed in Abaqus software using the Concrete Damaged Plasticity model of concrete.
Santing, R.E; de Boer, J; Rohof, A.A B; van der Zee, N.M; Zaagsma, Hans
2001-01-01
In a guinea pig model of allergic asthma, we investigated the effects of the selective phosphodiesterase inhibitors rolipram (phosphodiesterase 4-selective), Org 9935 (phosphodiesterase 3-selective) and Org 20241 (dual phosphodiesterase 4/phosphodiesterase 3-selective), administered by aerosol
Computer-aided test selection and result validation-opportunities and pitfalls
DEFF Research Database (Denmark)
McNair, P; Brender, J; Talmon, J
1998-01-01
Dynamic test scheduling is concerned with pre-analytical preprocessing of the individual samples within a clinical laboratory production by means of decision algorithms. The purpose of such scheduling is to provide maximal information with minimal data production (to avoid data pollution and...... implementing such dynamic test scheduling within a Laboratory Information System (and/or an advanced analytical workstation). The challenge is related to 1) generation of appropriately validated decision models, and 2) mastering consequences of analytical imprecision and bias......./or to increase cost-efficiency). Our experience shows that there is a practical limit to the extent of exploitation of the principle of dynamic test scheduling, unless it is automated in one way or the other. This paper analyses some issues of concern related to the profession of clinical biochemistry, when...