Guo, Gang; Hou, Yali; Zhang, Yuan; Su, Guosheng
2017-04-01
Number of inseminations to conception (NINS), an important fertility trait, requires appropriate approaches for genetic evaluation due to its non-normal distribution and censored records. In this study, we analyzed NINS in 474 837 Danish Holstein cows in their first lactation using seven models that handle the categorical phenotypes and censored records in different manners, and further assessed these models with regard to stability, lack of bias and accuracy of prediction. The heritabilities estimated from the four models based on original NINS, specified as a linear Gaussian model, a categorical threshold model, a threshold linear model and a survival model, were similar (0.031-0.037). For the other three models, based on a binary response derived from NINS and referred to as the threshold model (TM), logistic model (LOGM) and probit model (PROM), the heritabilities were estimated as 0.027, 0.063 and 0.027, respectively. The model comparison concluded that different models could lead to slightly different sire rankings in terms of breeding values; that a more complicated model led to less stable predictions; and that the models based on the binary response derived from NINS (TM, LOGM and PROM) performed slightly better in terms of unbiased and accurate prediction of breeding values. © 2016 Japanese Society of Animal Science.
Efficiency evaluation of a small number of DMUs: an approach based on Li and Reeves's model
Directory of Open Access Journals (Sweden)
João Carlos Correia Baptista Soares de Mello
2009-04-01
This paper deals with the evaluation of Decision Making Units (DMUs) when their number is not large enough to allow the use of classic Data Envelopment Analysis (DEA) models. To do so, we take advantage of the TRIMAP software when used to study the Li and Reeves MultiCriteria DEA (MCDEA) model. We introduce an evaluation measure obtained by integrating one of the objective functions along the weight space. This measure allows a joint evaluation of the DMUs. The approach is exemplified with numerical data from some Brazilian electrical companies.
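As a sketch of the underlying DEA machinery, the plain input-oriented CCR envelopment model (not the Li and Reeves MCDEA extension explored with TRIMAP) can be solved with a linear programming routine; the DMU data below are invented for illustration and assume numpy and scipy are available:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA efficiencies via linear programming.
# Illustrative data: one input and one output per DMU (invented values,
# not the Brazilian electrical-company data of the paper).
inputs = np.array([[2.0], [4.0], [6.0]])
outputs = np.array([[2.0], [3.0], [6.0]])
n = len(inputs)

efficiencies = []
for k in range(n):
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]                       # minimize theta
    # input constraints: sum_j lambda_j * x_j - theta * x_k <= 0
    A_in = np.hstack([-inputs[k].reshape(-1, 1), inputs.T])
    # output constraints: -sum_j lambda_j * y_j <= -y_k
    A_out = np.hstack([np.zeros((outputs.shape[1], 1)), -outputs.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(inputs.shape[1]), -outputs[k]],
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    efficiencies.append(res.fun)
print(efficiencies)  # DMUs on the CRS frontier score 1.0
```

Here the second DMU uses 4 units of input for 3 of output while the frontier achieves a 1:1 ratio, so its efficiency is 0.75.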
Fermion number in supersymmetric models
International Nuclear Information System (INIS)
Mainland, G.B.; Tanaka, K.
1975-01-01
The two known methods for introducing a conserved fermion number into supersymmetric models are discussed. While the introduction of a conserved fermion number often requires that the Lagrangian be massless or that bosons carry fermion number, a model is discussed in which masses can be introduced via spontaneous symmetry breaking and fermion number is conserved at all stages without assigning fermion number to bosons. (U.S.)
From Concurrency Models to Numbers
DEFF Research Database (Denmark)
Hermanns, Holger; Zhang, Lijun
2011-01-01
Discrete-state Markov processes are very common models used for performance and dependability evaluation of, for example, distributed information and communication systems. Over the last fifteen years, compositional model construction and model checking algorithms have been studied for these proc...
Directory of Open Access Journals (Sweden)
M. Ketzel
2007-08-01
A field measurement campaign was conducted near a major road, "Itäväylä", in an urban area of Helsinki on 17–20 February 2003. Aerosol measurements were conducted using a mobile laboratory, "Sniffer", at various distances from the road and at an urban background location. Measurements included the particle size distribution in the size range of 7 nm–10 μm (aerodynamic diameter) by the Electrical Low Pressure Impactor (ELPI) and in the size range of 3–50 nm (mobility diameter) by a Scanning Mobility Particle Sizer (SMPS), the total number concentration of particles larger than 3 nm detected by an ultrafine condensation particle counter (UCPC), temperature, relative humidity, wind speed and direction, the driving route of the mobile laboratory, and traffic density on the studied road. In this study, we have compared the measured concentration data with the predictions of the road network dispersion model CAR-FMI used in combination with the aerosol process model MONO32. For model comparison purposes, one of the cases was additionally computed using the aerosol process model UHMA combined with the CAR-FMI model. The vehicular exhaust emissions and the atmospheric dispersion and transformation of fine and ultrafine particles were evaluated within a distance scale of 200 m (corresponding to a time scale of a couple of minutes). We computed the temporal evolution of the number concentrations, size distributions and chemical compositions of various particle size classes. The atmospheric dilution rate of particles was obtained from the roadside dispersion model CAR-FMI. Considering the evolution of total number concentration, dilution was shown to be the most important process. The influence of coagulation and condensation on the number concentrations of particle size modes was found to be negligible on this distance scale. Condensation was found to affect the evolution of particle diameter in the two smallest particle modes. The assumed value of the concentration of
Tomas, Jose M.; Hontangas, Pedro M.; Oliver, Amparo
2000-01-01
Assessed two models for confirmatory factor analysis of multitrait-multimethod data through Monte Carlo simulation. The correlated traits-correlated methods (CTCM) and the correlated traits-correlated uniqueness (CTCU) models were compared. Results suggest that CTCU is a good alternative to CTCM in the typical multitrait-multimethod matrix, but…
Directory of Open Access Journals (Sweden)
Julia Kravchenko
BACKGROUND: Adenocarcinomas (ACs) and squamous cell carcinomas (SCCs) differ by clinical and molecular characteristics. We evaluated the characteristics of carcinogenesis by modeling the age patterns of incidence rates of ACs and SCCs of various organs to test whether these characteristics differed between cancer subtypes. METHODOLOGY/PRINCIPAL FINDINGS: Histotype-specific incidence rates of 14 ACs and 12 SCCs from the SEER Registry (1973-2003) were analyzed by fitting several biologically motivated models to observed age patterns. A frailty model with the Weibull baseline was applied to each age pattern to provide the best fit for the majority of cancers. For each cancer, model parameters describing the underlying mechanisms of carcinogenesis, including the number of stages occurring during an individual's life and leading to cancer (m-stages), were estimated. For sensitivity analysis, the age-period-cohort model was incorporated into the carcinogenesis model to test the stability of the estimates. For the majority of studied cancers, the numbers of m-stages were similar within each group (i.e., AC and SCC). When cancers of the same organs were compared (i.e., lung, esophagus, and cervix uteri), the numbers of m-stages were more strongly associated with the AC/SCC subtype than with the organ: 9.79±0.09, 9.93±0.19 and 8.80±0.10 for lung, esophageal, and cervical ACs, compared to 11.41±0.10, 12.86±0.34 and 12.01±0.51 for SCCs of the respective organs (p<0.05 between subtypes). Most SCCs had more than ten m-stages while ACs had fewer than ten m-stages. The sensitivity analyses of the model parameters demonstrated the stability of the obtained estimates. CONCLUSIONS/SIGNIFICANCE: A model containing parameters capable of representing the number of stages of cancer development occurring during an individual's life was applied to the large population data on incidence of ACs and SCCs. The model revealed that the number of m-stages differed by cancer subtype
Kravchenko, Julia; Akushevich, Igor; Abernethy, Amy P; Lyerly, H Kim
2012-01-01
Adenocarcinomas (ACs) and squamous cell carcinomas (SCCs) differ by clinical and molecular characteristics. We evaluated the characteristics of carcinogenesis by modeling the age patterns of incidence rates of ACs and SCCs of various organs to test whether these characteristics differed between cancer subtypes. Histotype-specific incidence rates of 14 ACs and 12 SCCs from the SEER Registry (1973-2003) were analyzed by fitting several biologically motivated models to observed age patterns. A frailty model with the Weibull baseline was applied to each age pattern to provide the best fit for the majority of cancers. For each cancer, model parameters describing the underlying mechanisms of carcinogenesis, including the number of stages occurring during an individual's life and leading to cancer (m-stages), were estimated. For sensitivity analysis, the age-period-cohort model was incorporated into the carcinogenesis model to test the stability of the estimates. For the majority of studied cancers, the numbers of m-stages were similar within each group (i.e., AC and SCC). When cancers of the same organs were compared (i.e., lung, esophagus, and cervix uteri), the numbers of m-stages were more strongly associated with the AC/SCC subtype than with the organ: 9.79±0.09, 9.93±0.19 and 8.80±0.10 for lung, esophageal, and cervical ACs, compared to 11.41±0.10, 12.86±0.34 and 12.01±0.51 for SCCs of the respective organs (p<0.05 between subtypes). Most SCCs had more than ten m-stages while ACs had fewer than ten m-stages. The sensitivity analyses of the model parameters demonstrated the stability of the obtained estimates. A model containing parameters capable of representing the number of stages of cancer development occurring during an individual's life was applied to the large population data on the incidence of ACs and SCCs. The model revealed that the number of m-stages differed by cancer subtype, being more strongly associated with the AC/SCC histotype than with the organ/site.
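The m-stage estimate rests on the Armitage-Doll-type property that a Weibull baseline hazard h(t) = a·t^(m-1) is linear in log-log coordinates with slope m - 1. A minimal numpy sketch of this idea, with an invented stage number and scale rather than the paper's SEER fits:

```python
import numpy as np

# Under a Weibull / multistage (Armitage-Doll-type) model the hazard is
# h(t) = a * t**(m - 1), so the slope of log-incidence vs. log-age
# recovers m - 1.  Parameters here are hypothetical.
m_true, a = 10.0, 1e-20           # invented stage number and scale
ages = np.arange(40.0, 80.0, 5.0) # age-group midpoints
incidence = a * ages ** (m_true - 1)

# Least-squares fit of the log-log slope
slope, intercept = np.polyfit(np.log(ages), np.log(incidence), 1)
m_est = slope + 1.0
print(m_est)  # recovers m_true for noise-free data
```

Real incidence data are noisy and age-period-cohort effects distort the slope, which is why the paper fits the full frailty model rather than a raw log-log regression.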
Khair, Fauzi; Sopha, Bertha Maya
2017-12-01
One of the crucial phases in disaster management is the response phase, or emergency response phase. It requires a sustainable, well-integrated management system. Any errors in the system in this phase will result in a significant increase in the number of victims as well as in the material damage caused. Policies related to the location of aid posts are important decisions. The facts show that many failures in the process of providing assistance to refugees are due to a lack of preparation in determining the facilities and locations of aid posts. Therefore, this study aims to evaluate the number and location of aid posts for the Merapi eruption in 2010. The study integrates Agent Based Modeling (ABM) and a Geographic Information System (GIS) to evaluate the number and location of aid posts under several scenarios. The ABM approach describes the behaviour of the agents (refugees and volunteers) in the event of a disaster, each with their respective characteristics, while the GIS spatial data describe the real condition of the Sleman regency road network. The simulation results show that alternative scenarios combining the DERU UGM post, Maguwoharjo Stadium, the Tagana Post and the Pakem Main Post handle and distribute aid to the evacuation barracks better than the initial scenario. The alternative scenarios indicate that unmet demands are lower than in the initial scenario.
Evaluating Number Sense in Workforce Students
Steinke, Dorothea A.
2015-01-01
Earlier institution-sponsored research revealed that about 20% of students in community college basic math and pre-algebra programs lacked a sense of part-whole relationships with whole numbers. Using the same tool with a group of 86 workforce students, about 75% placed five whole numbers on an empty number line in a way that indicated lack of…
Investigative Journalism Techniques. Evaluation Guide Number 6.
St. John, Mark
Noting that program evaluators can profit by adopting the investigative journalist's goal of discovering hidden information, this guide explores the journalist's investigative process--without its element of suspicion--and discusses how components of this process can be applied to program evaluation. After listing the major characteristics of the…
Reproduction numbers of infectious disease models
Directory of Open Access Journals (Sweden)
Pauline van den Driessche
2017-08-01
This primer article focuses on the basic reproduction number, ℛ0, for infectious diseases, and on other reproduction numbers related to ℛ0 that are useful in guiding control strategies. Beginning with a simple population model, the concept of a threshold value of ℛ0 determining whether or not the disease dies out is developed. The next generation matrix method of calculating ℛ0 in a compartmental model is described and illustrated. To address control strategies, type and target reproduction numbers are defined, as well as sensitivity and elasticity indices. These theoretical ideas are then applied to models formulated for West Nile virus in birds (a vector-borne disease), cholera in humans (a disease with two transmission pathways), anthrax in animals (a disease that can be spread by dead carcasses and spores), and Zika in humans (spread by mosquitoes and sexual contacts). Some parameter values from literature data are used to illustrate the results. Finally, references for other ways to calculate ℛ0 are given; these are useful for more complicated models that, for example, take account of environmental fluctuations or stochasticity. Keywords: Basic reproduction number, Disease control, West Nile virus, Cholera, Anthrax, Zika virus
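The next generation matrix method can be sketched in a few lines of numpy: linearize the infected compartments at the disease-free equilibrium, split the Jacobian into a new-infection matrix F and a transition matrix V, and take ℛ0 as the spectral radius of FV⁻¹. The SEIR-type example below uses invented parameter values:

```python
import numpy as np

# Next generation matrix calculation of R0 for a simple SEIR-type model.
# Hypothetical rates: beta = transmission, sigma = progression E -> I,
# gamma = recovery.
beta, sigma, gamma = 0.6, 0.2, 0.3

# Infected compartments ordered (E, I), linearized at the
# disease-free equilibrium: F holds new infections, V holds transitions.
F = np.array([[0.0, beta],
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],
              [-sigma, gamma]])

# R0 is the spectral radius (largest eigenvalue modulus) of F V^{-1}.
K = F @ np.linalg.inv(V)
R0 = max(abs(np.linalg.eigvals(K)))
print(R0)  # for this SEIR structure R0 = beta/gamma (here 2.0)
```

The disease invades when ℛ0 > 1 and dies out when ℛ0 < 1, which is the threshold behaviour the primer develops.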
Stochastic modeling of sunshine number data
Energy Technology Data Exchange (ETDEWEB)
Brabec, Marek, E-mail: mbrabec@cs.cas.cz [Department of Nonlinear Modeling, Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodarenskou vezi 2, 182 07 Prague 8 (Czech Republic); Paulescu, Marius [Physics Department, West University of Timisoara, V. Parvan 4, 300223 Timisoara (Romania); Badescu, Viorel [Candida Oancea Institute, Polytechnic University of Bucharest, Spl. Independentei 313, 060042 Bucharest (Romania)
2013-11-13
In this paper, we present a unified statistical modeling framework for estimating and forecasting sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and has since been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has, however, been a challenging problem. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and their functions of interest, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity while keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identifying its parameters, we illustrate its use and performance on high resolution SSN data from the Solar
Stochastic modeling of sunshine number data
International Nuclear Information System (INIS)
Brabec, Marek; Paulescu, Marius; Badescu, Viorel
2013-01-01
In this paper, we present a unified statistical modeling framework for estimating and forecasting sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and has since been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has, however, been a challenging problem. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and their functions of interest, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time, etc.) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity while keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and the general approach for identifying its parameters, we illustrate its use and performance on high resolution SSN data from the Solar
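A minimal sketch of the first-order Markov idea underlying the sunshine-number records above: for a homogeneous binary chain, the maximum likelihood transition probabilities are simple transition counts. The short series below is invented; the papers' logistic regression/GAM machinery for covariate-dependent probabilities is not reproduced here.

```python
from collections import Counter

# Binary sunshine number series (1 = sunny, 0 = not sunny) -- toy data.
ssn = [1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1]

# Count transitions (previous state -> current state).
counts = Counter(zip(ssn[:-1], ssn[1:]))

# Maximum likelihood estimates of P(next = 1 | previous = s).
p = {}
for s in (0, 1):
    total = counts[(s, 0)] + counts[(s, 1)]
    p[s] = counts[(s, 1)] / total if total else float("nan")
print(p)
```

Making p depend on covariates such as solar elevation is exactly what replacing these raw counts with a logistic regression on the lagged state achieves.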
Delisser, P J; McCombe, G P; Trask, R S; Etches, J A; German, A J; Holden, S L; Wallace, A M; Burton, N J
2013-01-01
To compare the biomechanical behaviour of plate-rod constructs with varying numbers of monocortical screws applied to an ex vivo canine femoral-gap ostectomy model. Twenty Greyhound dog cadaveric femurs. Bone mineral density (BMD) was assessed with dual x-ray absorptiometry. Bones were assigned to four groups. Bones had a 12-hole 3.5 mm locking compression plate with one bicortical non-locking cortical screw in the most proximal and distal plate holes and an intramedullary Steinmann pin applied across a 20 mm mid-diaphyseal ostectomy. Additionally, one to four monocortical non-locking cortical screws were then placed (Groups 1-4 respectively) in the proximal and distal fragments. Stiffness and axial collapse were determined before and after cyclic axial loading (6000 cycles at 20%, 40%, and 60% of mean bodyweight [total: 18000 cycles]). Constructs subsequently underwent an additional 45000 cycles at 60% of bodyweight (total: 63000 cycles). Loading to failure was then performed and ultimate load and mode of failure recorded. The BMD did not differ significantly between groups. Construct stiffness for group 1 was significantly less than group 4 (p = 0.008). Stiffness showed a linear increase with an increasing number of monocortical screws (p = 0.001). All constructs survived fatigue loading. Load-to-failure was not significantly different between groups. Mean load-to-failure of all groups was >1350N. Ex vivo canine large-breed femurs showed adequate biomechanical stability and gradually increasing stiffness with increasing monocortical screw numbers.
Nowcasting sunshine number using logistic modeling
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Badescu, V.; Paulescu, M.
2013-01-01
Roč. 120, č. 1-2 (2013), s. 61-71 ISSN 0177-7971 R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional research plan: CEZ:AV0Z1030915 Keywords : logistic regression * Markov model * sunshine number Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.245, year: 2013
Intellectual Capital Evaluation Models
Agoston Simona; Puia Ramona Stefania; Orzea Ivona
2010-01-01
The evaluation and measurement of intellectual capital is an issue of increasing importance for companies because traditional accounting systems do not provide relevant information regarding the value of a company. Thus, specialists are working to identify a model for assessing intellectual capital that can be easily implemented and used. The large number of proposed models, as well as the major differences between them, emphasizes the fact that the specialists are s...
Evaluation models and evaluation use
Contandriopoulos, Damien; Brousselle, Astrid
2012-01-01
The use of evaluation results is at the core of evaluation theory and practice. Major debates in the field have emphasized the importance of both the evaluator’s role and the evaluation process itself in fostering evaluation use. A recent systematic review of interventions aimed at influencing policy-making or organizational behavior through knowledge exchange offers a new perspective on evaluation use. We propose here a framework for better understanding the embedded relations between evaluation context, choice of an evaluation model and use of results. The article argues that the evaluation context presents conditions that affect both the appropriateness of the evaluation model implemented and the use of results. PMID:23526460
A general model framework for multisymbol number comparison.
Huber, Stefan; Nuerk, Hans-Christoph; Willmes, Klaus; Moeller, Korbinian
2016-11-01
Different models have been proposed for the processing of multisymbol numbers such as two- and three-digit numbers, but also negative numbers and decimals. However, these multisymbol numbers are assembled from the same set of Arabic digits and comply with the place-value structure of the Arabic number system. Considering these shared properties, we suggest that the processing of multisymbol numbers can be described in one general model framework. Accordingly, we first developed a computational model framework realizing componential representations of multisymbol numbers and evaluated its validity by simulating standard empirical effects of number magnitude comparison. We observed that the model framework successfully accounted for most of these effects. Moreover, our simulations provided first evidence supporting the notion of a fully componential processing of multisymbol numbers for the specific case of comparing two negative numbers. Thus, our general model framework indicates that the processing of different kinds of multisymbol integer and decimal numbers shares common characteristics (e.g., componential representation). The relevance and applicability of our model go beyond the case of basic number processing. In particular, we also successfully simulated effects from applied marketing and consumer research by accounting for the left-digit effect found in the processing of prices. Finally, we provide evidence that our model framework can be integrated into the more general context of multiattribute decision making. In sum, this indicates that our model framework captures a general scheme of separate processing of different attributes weighted by their saliency for the task at hand. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
International Nuclear Information System (INIS)
Glass, R.J.; Yarrington, L.; Nicholl, M.J.
1997-09-01
The major results from SNL's Conceptual Model Development and Validation Task (WBS 1.2.5.4.6), as developed through exploration of small-scale processes, were synthesized in Glass et al. to give guidance to Performance Assessment on improving conceptual models for isothermal flow in unsaturated, fractured rock. There, pressure-saturation and relative permeability curves for single fractures were proposed to be a function of both fracture orientation within the gravity field and initial conditions. We refer the reader to Glass et al. for a discussion of the implications of this behavior for Performance Assessment. The scientific research we report here substantiates this proposed behavior. We address the modeling of phase structure within fractures under natural gradient conditions relevant to unsaturated flow through fractures. This phase structure underlies the calculation of effective properties for individual fractures, and hence fracture networks, as required for Performance Assessment. Standard Percolation (SP) and Invasion Percolation (IP) approaches have recently been proposed to model the underlying phase saturation structures within individual fractures during conditions of two-phase flow. Subsequent analysis of these structures yields effective two-phase pressure-saturation and relative permeability relations for the fracture. However, both of these approaches yield structures that are at odds with the physical reality we see in experiments, and thus effective properties calculated from these structures are in error. Here we develop and evaluate a Modified Invasion Percolation (MIP) approach to better model quasi-static immiscible displacement in fractures. The effects of gravity, contact angle, local aperture field geometry, and local in-plane interfacial curvature between phases are included in the calculation of invasion pressure for individual sites in a discretized aperture field
Strackee, S. D.; Kroon, F. H.; Jaspers, J. E.; Bos, K. E.
2001-01-01
The fibula osteocutaneous free flap has become the preferred method for most cases of mandibular reconstruction after oncologic surgical ablation. To recreate the parabolic form of the mandible, the fibula has to be divided up into segments using a closed wedge osteotomy technique. The number of
Modeling the number of car theft using Poisson regression
Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura
2016-10-01
Regression analysis is among the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. The paper focuses on the number of car thefts that occurred in districts in Peninsular Malaysia. Two groups of factors were considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, the number of residents aged between 25 and 64, the number of employed persons and the number of unemployed persons are the factors that most influence car theft cases. This information is very useful for law enforcement departments, insurance companies and car owners in order to reduce and limit car theft cases in Peninsular Malaysia.
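A Poisson regression of counts on a covariate can be fitted by Newton/IRLS iterations on the log-link likelihood. The sketch below uses invented data, not the Malaysian district data of the paper, and assumes numpy is available:

```python
import numpy as np

# Minimal Poisson GLM (log link) fitted by Newton / IRLS iterations.
# Invented data: counts rising with a single covariate.
X = np.column_stack([np.ones(6), np.arange(6.0)])  # intercept + covariate
y = np.array([1.0, 2.0, 4.0, 9.0, 20.0, 45.0])

beta = np.array([np.log(y.mean()), 0.0])  # stable starting point
for _ in range(50):
    mu = np.exp(X @ beta)                 # Poisson means
    grad = X.T @ (y - mu)                 # score vector
    hess = X.T @ (mu[:, None] * X)        # Fisher information
    beta = beta + np.linalg.solve(hess, grad)
print(beta)  # beta[1] is the log-rate increase per unit of covariate
```

At convergence the score equations X'(y - mu) = 0 hold, so with an intercept the fitted means sum to the observed counts, a handy sanity check for any Poisson regression fit.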
Automated data model evaluation
International Nuclear Information System (INIS)
Kazi, Zoltan; Kazi, Ljubica; Radulovic, Biljana
2012-01-01
The modeling process is an essential phase within information systems development and implementation. This paper presents methods and techniques for the analysis and evaluation of data model correctness. Recent methodologies and development results regarding the automation of model correctness analysis, and its relations with ontology tools, have been presented. Key words: Database modeling, Data model correctness, Evaluation
Mean photon number dependent variational method to the Rabi model
International Nuclear Information System (INIS)
Liu, Maoxin; Ying, Zu-Jian; Luo, Hong-Gang; An, Jun-Hong
2015-01-01
We present a mean photon number dependent variational method, which works well in the whole coupling regime if the photon energy is dominant over the spin-flipping, to evaluate the properties of the Rabi model for both the ground state and excited states. For the ground state, it is shown that the previous approximate methods, the generalized rotating-wave approximation (only working well in the strong coupling limit) and the generalized variational method (only working well in the weak coupling limit), can be recovered in the corresponding coupling limits. The key point of our method is to tailor the merits of these two existing methods by introducing a mean photon number dependent variational parameter. For the excited states, our method yields considerable improvements over the generalized rotating-wave approximation. The variational method proposed could be readily applied to more complex models, for which it is difficult to formulate an analytic formula. (paper)
DEFF Research Database (Denmark)
Borlund, Pia
2003-01-01
An alternative approach to the evaluation of interactive information retrieval (IIR) systems, referred to as the IIR evaluation model, is proposed. The model provides a framework for the collection and analysis of IR interaction data. The aim of the model is two-fold: 1) to facilitate the evaluation of IIR systems as realistically as possible with reference to actual information searching and retrieval processes, though still in a relatively controlled evaluation environment; and 2) to calculate the IIR system performance taking into account the non-binary nature of the assigned relevance assessments. The IIR evaluation model is presented as an alternative to the system-driven Cranfield model (Cleverdon, Mills & Keen, 1966; Cleverdon & Keen, 1966) which still is the dominant approach to the evaluation of IR and IIR systems. Key elements of the IIR evaluation model are the use of realistic
Evaluation of a number skills development programme | Pietersen ...
African Journals Online (AJOL)
A pre-test post-test correlated-groups design was used to evaluate the effectiveness of the Shuttleworth/Rotary Number Skills Development Programme in enhancing the numeracy of Grade 2 learners (N = 169) from five primary schools (a private school, a school for auditory-impaired learners, and three rural schools).
Directory of Open Access Journals (Sweden)
Wei Xu
2018-01-01
How to maximize customer satisfaction is an important research topic in service quality evaluation. This paper proposes an evaluation method for comprehensive product quality for customer satisfaction based on intuitionistic fuzzy numbers. In this method, we first design a questionnaire and collect the customers' linguistic evaluations of product quality, including product expectations and product perception. Then, the product quality evaluation model is obtained by the Delphi method; that is, the first-level and second-level evaluation indexes are obtained and the weight vector of each evaluation index is determined. Next, the linguistic evaluation information is translated into corresponding intuitionistic fuzzy numbers. The results of the product quality evaluation of the production system are then obtained using the weighted mean method. Finally, an example is used to illustrate the feasibility and effectiveness of the proposed method.
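A hedged sketch of the aggregation step, using the standard intuitionistic fuzzy weighted averaging (IFWA) operator over (membership, non-membership) pairs; the ratings and weights below are invented, and the paper's full Delphi/weighted-mean pipeline is not reproduced:

```python
# IFWA aggregation of intuitionistic fuzzy numbers (mu = membership /
# satisfaction degree, nu = non-membership / dissatisfaction degree):
#   mu_agg = 1 - prod((1 - mu_i) ** w_i),  nu_agg = prod(nu_i ** w_i)
def ifwa(ratings, weights):
    """Intuitionistic fuzzy weighted average of (mu, nu) pairs."""
    mu_part, nu_part = 1.0, 1.0
    for (mu_i, nu_i), w in zip(ratings, weights):
        mu_part *= (1.0 - mu_i) ** w
        nu_part *= nu_i ** w
    return 1.0 - mu_part, nu_part

# Two quality indexes rated as intuitionistic fuzzy numbers,
# with hypothetical index weights summing to one.
ratings = [(0.7, 0.2), (0.5, 0.4)]
weights = [0.6, 0.4]
mu, nu = ifwa(ratings, weights)
print(mu, nu)  # aggregated satisfaction and dissatisfaction degrees
```

The result remains a valid intuitionistic fuzzy number (mu + nu ≤ 1), with the residual 1 - mu - nu read as hesitation.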
Energy Technology Data Exchange (ETDEWEB)
Barchet, W.R. (Pacific Northwest Lab., Richland, WA (United States)); Dennis, R.L. (Environmental Protection Agency, Research Triangle Park, NC (United States)); Seilkop, S.K. (Analytical Sciences, Inc., Durham, NC (United States)); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. (Atmospheric Environment Service, Downsview, ON (Canada)); Byun, D.; McHenry, J.N.
1991-12-01
The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
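The difference statistics and correlations used to quantify model performance can be sketched generically. The deposition values below are made up for illustration and are not EMEFS data.

```python
import math

def evaluation_stats(observed, predicted):
    """Difference statistics of the kind used in operational model
    evaluations: mean bias, RMSE, and Pearson correlation."""
    n = len(observed)
    bias = sum(p - o for o, p in zip(observed, predicted)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return {"bias": bias, "rmse": rmse, "corr": cov / (so * sp)}

# Illustrative weekly deposition values (invented, not network observations):
obs = [1.2, 0.8, 2.5, 1.9, 0.4]
pred = [1.0, 1.1, 2.2, 2.4, 0.6]
stats = evaluation_stats(obs, pred)
print(stats)
```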
International Nuclear Information System (INIS)
Barchet, W.R.; Dennis, R.L.; Seilkop, S.K.; Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K.; Byun, D.; McHenry, J.N.; Karamchandani, P.; Venkatram, A.; Fung, C.; Misra, P.K.; Hansen, D.A.; Chang, J.S.
1991-12-01
The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs
Modeling number of claims and prediction of total claim amount
Acar, Aslıhan Şentürk; Karabey, Uǧur
2017-07-01
In this study we focus on the annual number of claims in a private health insurance data set belonging to a local insurance company in Turkey. In addition to the Poisson model and the negative binomial model, the zero-inflated Poisson model and the zero-inflated negative binomial model are used to model the number of claims in order to take excess zeros into account. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, the predictive performances of the candidate models are compared using root mean square error (RMSE) and mean absolute error (MAE) criteria.
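The zero-inflated Poisson idea and the RMSE/MAE comparison can be sketched in a few lines. The claim counts and the ZIP parameters below are hypothetical, not the company's data or the paper's fitted values.

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf: extra probability mass pi at zero,
    an ordinary Poisson(lam) draw with probability 1 - pi."""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return (pi if k == 0 else 0.0) + (1.0 - pi) * pois

def zip_mean(lam, pi):
    # Expected number of claims under the zero-inflated model.
    return (1.0 - pi) * lam

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical annual claim counts and two candidate fits:
claims = [0, 0, 1, 3, 0, 2, 0, 0, 1, 0]
poisson_mean = sum(claims) / len(claims)      # plain Poisson fitted mean
zip_fit = zip_mean(lam=1.75, pi=0.6)          # illustrative ZIP parameters

print(rmse(claims, [poisson_mean] * len(claims)),
      mae(claims, [zip_fit] * len(claims)))
```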
Introducing Program Evaluation Models
Directory of Open Access Journals (Sweden)
Raluca GÂRBOAN
2008-02-01
Full Text Available Programs and project evaluation models can be extremely useful in project planning and management. The aim is to set the right questions as soon as possible in order to see in time and deal with the unwanted program effects, as well as to encourage the positive elements of the project impact. In short, different evaluation models are used in order to minimize losses and maximize the benefits of the interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.
An Evaluation of App-Based and Paper-Based Number Lines for Teaching Number Comparison
Weng, Pei-Lin; Bouck, Emily C.
2016-01-01
Number comparison is a fundamental skill required for academic and functional mathematics (e.g., time, money, purchasing) for students with disabilities. The most commonly used method to teach number comparison is number lines. Although historically paper number lines are used, app-based number lines may offer greater flexibility. This study…
Prediction of cloud droplet number in a general circulation model
Energy Technology Data Exchange (ETDEWEB)
Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)]
1996-04-01
We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.
Dual Numbers Approach in Multiaxis Machines Error Modeling
Directory of Open Access Journals (Sweden)
Jaroslav Hrdina
2014-01-01
Full Text Available Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus the calculus of dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
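The dual-number calculus the abstract refers to can be illustrated with a minimal class: a dual number a + b·eps with eps² = 0 carries a nominal value plus a first-order error term that propagates automatically through arithmetic. This is a toy stand-in for the matrices over dual numbers used in the paper; the error values are invented.

```python
class Dual:
    """Dual number re + eps*ε with ε**2 == 0."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps

    def __add__(self, other):
        return Dual(self.re + other.re, self.eps + other.eps)

    def __mul__(self, other):
        # (a + bε)(c + dε) = ac + (ad + bc)ε, because ε**2 = 0
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)

# A nominal quantity with a small first-order error in its ε part:
x = Dual(2.0, 0.01)
y = Dual(3.0, -0.02)
prod = x * y
print(prod.re, prod.eps)   # nominal product 6.0; error term ≈ -0.01
```

The nilpotency ε² = 0 is exactly what makes first-order (geometric) error terms close under multiplication, which is why matrices over dual numbers classify the machine's geometric errors algebraically.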
Lepton number violation in theories with a large number of standard model copies
International Nuclear Information System (INIS)
Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich
2011-01-01
We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, the violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since due to the low quantum gravity scale black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.
The Influence of Investor Number on a Microscopic Market Model
Hellthaler, T.
The stock market model of Levy, Persky and Solomon is simulated for much larger numbers of investors. While small markets can lead to realistic-looking prices, the resulting prices of large markets oscillate smoothly in a semi-regular fashion.
Athanasou, James A.; Langan, Dianne
The purpose of this study was to evaluate the roles of interest, knowledge, and learning strategies on recall within a specific subject domain at an early stage of learning. Students (n=17) at two levels in a postgraduate music therapy course were assessed for their levels of prior knowledge, interest, and the number of strategies they used to…
Modelling the dispersion of particle numbers in five European cities
Kukkonen, J.; Karl, M.; Keuken, M.P.; Denier van der Gon, H.A.C.; Denby, B.R.; Singh, V.; Douros, J.; Manders, A.M.M.; Samaras, Z.; Moussiopoulos, N.; Jonkers, S.; Aarnio, M.; Karppinen, A.; Kangas, L.; Lutzenkirchen, S.; Petaja, T.; Vouitsis, I.; Sokhi, R.S.
2016-01-01
We present an overview of the modelling of particle number concentrations (PNCs) in five major European cities, namely Helsinki, Oslo, London, Rotterdam and Athens in 2008. Novel emission inventories of particle numbers have been compiled both on urban and European scales. We used atmospheric
Training effectiveness evaluation model
International Nuclear Information System (INIS)
Penrose, J.B.
1993-01-01
NAESCO's Training Effectiveness Evaluation Model (TEEM) integrates existing evaluation procedures with new procedures. The new procedures are designed to measure training impact on organizational productivity. TEEM seeks to enhance organizational productivity through proactive training focused on operational results. These results can be identified and measured by establishing and tracking performance indicators. Relating training to organizational productivity is not easy. TEEM is a team process. It offers strategies for assessing the organizational costs and benefits of training more effectively. TEEM is one organization's attempt to refine, manage and extend its training evaluation program.
CMAQ Model Evaluation Framework
CMAQ is tested to establish the modeling system’s credibility in predicting pollutants such as ozone and particulate matter. Evaluation of CMAQ has been designed to assess the model’s performance for specific time periods and for specific uses.
Mayer–Jensen Shell Model and Magic Numbers
Indian Academy of Sciences (India)
Mayer-Jensen Shell Model and Magic Numbers – An Independent Nucleon Model with Spin-Orbit Coupling. R Velusamy. General Article, Resonance – Journal of Science Education, Volume 12, Issue 12, December 2007, pp. 12–24.
Optimal Number of States in Hidden Markov Models and its ...
African Journals Online (AJOL)
In this paper, Hidden Markov Model is applied to model human movements as to facilitate an automatic detection of the same. A number of activities were simulated with the help of two persons. The four movements considered are walking, sitting down-getting up, fall while walking and fall while standing. The data is ...
On the vacuum baryon number in the chiral bag model
International Nuclear Information System (INIS)
Jaroszewicz, T.
1984-01-01
We give a rederivation, generalization and interpretation of the result of Goldstone and Jaffe on the vacuum baryon number in the chiral bag model. Our results are based on considering the bag model as a theory of free quarks, massless inside and infinitely massive outside the bag. (orig.)
Evaluating Number Sense in Community College Developmental Math Students
Steinke, Dorothea A.
2017-01-01
Community college developmental math students (N = 657) from three math levels were asked to place five whole numbers on a line that had only endpoints 0 and 20 marked. How the students placed the numbers revealed the same three stages of behavior that Steffe and Cobb (1988) documented in determining young children's number sense. 23% of the…
Conserved number fluctuations in a hadron resonance gas model
International Nuclear Information System (INIS)
Garg, P.; Mishra, D.K.; Netrakanti, P.K.; Mohanty, B.; Mohanty, A.K.; Singh, B.K.; Xu, N.
2013-01-01
Net-baryon, net-charge and net-strangeness number fluctuations in high energy heavy-ion collisions are discussed within the framework of a hadron resonance gas (HRG) model. Ratios of the conserved number susceptibilities calculated in HRG are compared to the corresponding experimental measurements to extract information about the freeze-out condition and the phase structure of systems with strong interactions. We emphasize the importance of considering the actual experimental acceptances in terms of kinematics (pseudorapidity (η) and transverse momentum (p_T)), the detected charge state, the effect of collective motion of particles in the system, and the resonance decay contributions before comparisons are made to the theoretical calculations. In this work, based on the HRG model, we report that the net-baryon number fluctuations are least affected by experimental acceptances compared to the net-charge and net-strangeness number fluctuations
Electroweak phase transition in a model with gauged lepton number
International Nuclear Information System (INIS)
Aranda, Alfredo; Jiménez, Enrique; Vaquera-Araujo, Carlos A.
2015-01-01
In this work we study the electroweak phase transition in a model with gauged lepton number. Here, a family of vector-like leptons is required in order to cancel the gauge anomalies. Furthermore, these leptons can play an important role in the transition process. We find that this framework is able to provide a strong transition, but only for a very limited number of cases.
On the Reproduction Number of a Gut Microbiota Model.
Barril, Carles; Calsina, Àngel; Ripoll, Jordi
2017-11-01
A spatially structured linear model of the growth of intestinal bacteria is analysed from two generational viewpoints. Firstly, the basic reproduction number associated with the bacterial population, i.e. the expected number of daughter cells per bacterium, is given explicitly in terms of biological parameters. Secondly, an alternative quantity is introduced based on the number of bacteria produced within the intestine by one bacterium originally in the external media. The latter depends on the parameters in a simpler way and provides more biological insight than the standard reproduction number, allowing the design of experimental procedures. Both quantities coincide and are equal to one at the extinction threshold, below which the bacterial population becomes extinct. Optimal values of both reproduction numbers are derived assuming parameter trade-offs.
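The R0 = 1 extinction threshold can be illustrated with a Monte Carlo sketch of a plain branching process in which every bacterium leaves a Poisson(R0) number of daughters. This is an unstructured toy, not the paper's spatially structured model; the trial counts and cap are chosen only to bound run time.

```python
import random

def poisson(lam, rng):
    # Knuth's method; fine for the small offspring means used here
    threshold, k, p = 2.718281828459045 ** (-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def extinct_fraction(r0, trials=400, max_gen=25, cap=80, seed=1):
    """Monte Carlo estimate of the extinction probability of a
    Galton-Watson process with Poisson(r0) offspring; `cap` bounds the
    work spent on lineages that have clearly survived."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        n = 1
        for _ in range(max_gen):
            if n == 0:
                break
            n = sum(poisson(r0, rng) for _ in range(min(n, cap)))
        extinct += (n == 0)
    return extinct / trials

sub = extinct_fraction(0.5)   # subcritical: dies out almost surely
sup = extinct_fraction(2.0)   # supercritical: survives with positive probability
print(sub, sup)
```

Below the threshold the estimated extinction fraction approaches one, above it it stays well below one, mirroring the abstract's statement that both reproduction numbers equal one exactly at the extinction threshold.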
Toward a model framework of generalized parallel componential processing of multi-symbol numbers.
Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph
2015-05-01
In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining and investigating a sign-decade compatibility effect for the comparison of positive and negative numbers, which extends the unit-decade compatibility effect in 2-digit number processing. Then, we evaluated whether the model is capable of accounting for previous findings in negative number processing. In a magnitude comparison task, in which participants had to single out the larger of 2 integers, we observed a reliable sign-decade compatibility effect with prolonged reaction times for incompatible (e.g., -97 vs. +53; in which the number with the larger decade digit has the smaller, i.e., negative polarity sign) as compared with sign-decade compatible number pairs (e.g., -53 vs. +97). Moreover, an analysis of participants' eye fixation behavior corroborated our model of parallel componential processing of multi-symbol numbers. These results are discussed in light of concurrent theoretical notions about negative number processing. On the basis of the present results, we propose a generalized integrated model framework of parallel componential multi-symbol processing. (c) 2015 APA, all rights reserved.
Overfitting Bayesian Mixture Models with an Unknown Number of Components.
Directory of Open Access Journals (Sweden)
Zoé van Havre
Full Text Available This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally light-weight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
Overfitting Bayesian Mixture Models with an Unknown Number of Components.
van Havre, Zoé; White, Nicole; Rousseau, Judith; Mengersen, Kerrie
2015-01-01
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally light-weight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
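The label-switching problem the abstract mentions can be made concrete with a toy relabelling step: across MCMC draws, component labels are arbitrary, so per-draw parameters must be permuted into a common order before summarizing. The sketch below uses a simple order-by-mean identifiability constraint, which is far cruder than Zswitch's loss-based relabelling; the draw values are invented.

```python
def relabel_by_mean(draws):
    """Reorder each posterior draw's mixture components by their mean.
    A crude identifiability constraint illustrating why relabelling is
    needed at all; Zswitch instead uses label-invariant loss functions."""
    relabelled = []
    for means, weights in draws:
        order = sorted(range(len(means)), key=lambda k: means[k])
        relabelled.append(([means[k] for k in order],
                           [weights[k] for k in order]))
    return relabelled

# Two MCMC draws whose component labels have switched between iterations:
draws = [([0.1, 5.2], [0.4, 0.6]),
         ([5.1, 0.2], [0.58, 0.42])]   # labels swapped in the second draw
fixed = relabel_by_mean(draws)
print([d[0][0] for d in fixed])   # component 1 is now consistently the low-mean group
```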
Application of Z-Number Based Modeling in Psychological Research
Directory of Open Access Journals (Sweden)
Rafik Aliev
2015-01-01
Full Text Available Pilates exercises have been shown to have beneficial impact on physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measuring of psychological parameters is performed using internationally recognized instruments: Academic Motivation Scale (AMS), Test of Attention (D2 Test), and Spielberger's Anxiety Test completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase precision and reliability of data processing results in the presence of uncertainty of input data created from completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented.
Application of Z-Number Based Modeling in Psychological Research.
Aliev, Rafik; Memmedova, Konul
2015-01-01
Pilates exercises have been shown beneficial impact on physical, physiological, and mental characteristics of human beings. In this paper, Z-number based fuzzy approach is applied for modeling the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The measuring of psychological parameters is performed using internationally recognized instruments: Academic Motivation Scale (AMS), Test of Attention (D2 Test), and Spielberger's Anxiety Test completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase precision and reliability of data processing results in the presence of uncertainty of input data created from completed questionnaires. The basic steps of Z-number based modeling with numerical solutions are presented.
Directory of Open Access Journals (Sweden)
Kardia Sharon LR
2011-05-01
Full Text Available Abstract. Background: Copy number data are routinely being extracted from genome-wide association study chips using a variety of software. We empirically evaluated and compared four freely-available software packages designed for Affymetrix SNP chips to estimate copy number: Affymetrix Power Tools (APT), Aroma.Affymetrix, PennCNV and CRLMM. Our evaluation used 1,418 GENOA samples that were genotyped on the Affymetrix Genome-Wide Human SNP Array 6.0. We compared bias and variance in the locus-level copy number data, the concordance amongst regions of copy number gains/deletions and the false-positive rate amongst deleted segments. Results: APT had median locus-level copy numbers closest to a value of two, whereas PennCNV and Aroma.Affymetrix had the smallest variability associated with the median copy number. Of those evaluated, only PennCNV provides copy-number-specific quality-control metrics and identified 136 poor CNV samples. Regions of copy number variation (CNV) were detected using the hidden Markov models provided within PennCNV and CRLMM/VanillaIce. PennCNV detected more CNVs than CRLMM/VanillaIce; the median number of CNVs detected per sample was 39 and 30, respectively. PennCNV detected most of the regions that CRLMM/VanillaIce did as well as additional CNV regions. The median concordance between PennCNV and CRLMM/VanillaIce was 47.9% for duplications and 51.5% for deletions. The estimated false-positive rate associated with deletions was similar for PennCNV and CRLMM/VanillaIce. Conclusions: If the objective is to perform statistical tests on the locus-level copy number data, our empirical results suggest that PennCNV or Aroma.Affymetrix is optimal. If the objective is to perform statistical tests on the summarized segmented data then PennCNV would be preferred over CRLMM/VanillaIce. Specifically, PennCNV allows the analyst to estimate locus-level copy number, perform segmentation and evaluate CNV-specific quality-control metrics within a
Eckel-Passow, Jeanette E; Atkinson, Elizabeth J; Maharjan, Sooraj; Kardia, Sharon L R; de Andrade, Mariza
2011-05-31
Copy number data are routinely being extracted from genome-wide association study chips using a variety of software. We empirically evaluated and compared four freely-available software packages designed for Affymetrix SNP chips to estimate copy number: Affymetrix Power Tools (APT), Aroma.Affymetrix, PennCNV and CRLMM. Our evaluation used 1,418 GENOA samples that were genotyped on the Affymetrix Genome-Wide Human SNP Array 6.0. We compared bias and variance in the locus-level copy number data, the concordance amongst regions of copy number gains/deletions and the false-positive rate amongst deleted segments. APT had median locus-level copy numbers closest to a value of two, whereas PennCNV and Aroma.Affymetrix had the smallest variability associated with the median copy number. Of those evaluated, only PennCNV provides copy number specific quality-control metrics and identified 136 poor CNV samples. Regions of copy number variation (CNV) were detected using the hidden Markov models provided within PennCNV and CRLMM/VanillaIce. PennCNV detected more CNVs than CRLMM/VanillaIce; the median number of CNVs detected per sample was 39 and 30, respectively. PennCNV detected most of the regions that CRLMM/VanillaIce did as well as additional CNV regions. The median concordance between PennCNV and CRLMM/VanillaIce was 47.9% for duplications and 51.5% for deletions. The estimated false-positive rate associated with deletions was similar for PennCNV and CRLMM/VanillaIce. If the objective is to perform statistical tests on the locus-level copy number data, our empirical results suggest that PennCNV or Aroma.Affymetrix is optimal. If the objective is to perform statistical tests on the summarized segmented data then PennCNV would be preferred over CRLMM/VanillaIce. Specifically, PennCNV allows the analyst to estimate locus-level copy number, perform segmentation and evaluate CNV-specific quality-control metrics within a single software package. PennCNV has relatively small bias
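The region-level concordance compared in the study can be sketched as an interval-overlap computation: the fraction of one caller's CNV territory that another caller also flags. The interval coordinates below are hypothetical, and real pipelines work per chromosome with merged, non-overlapping calls.

```python
def overlap_fraction(calls_a, calls_b):
    """Fraction of the total length of calls_a covered by calls_b.
    Intervals are (start, end) pairs on one chromosome; calls_b is
    assumed to contain mutually non-overlapping intervals."""
    covered = 0
    total = 0
    for a_start, a_end in calls_a:
        total += a_end - a_start
        for b_start, b_end in calls_b:
            lo = max(a_start, b_start)
            hi = min(a_end, b_end)
            if hi > lo:
                covered += hi - lo
    return covered / total

# Hypothetical deletion calls from two callers on one chromosome:
caller_a = [(100, 200), (500, 650), (900, 1000)]
caller_b = [(120, 210), (600, 640)]
print(overlap_fraction(caller_a, caller_b))
```

Note the measure is asymmetric, which is one reason concordance between callers like PennCNV and CRLMM/VanillaIce is reported separately for each direction and event type.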
Characterizing and modelling persistence in the number of lottery winners
Antonio, Fernando J.; Mendes, Renio S.; Itami, Andreia S.; Picoli, Sergio
2015-06-01
Lottery is the most famous branch among all the games of chance. By analysing data from Mega-Sena, the major lottery in Brazil, we investigated the presence of persistent behaviour in the time series of the number of winners. We found that the demand for tickets grew collectively as an exponential function driven by the size of the accumulated jackpot. Finally, we identified that a stochastic model grounded on the rolling-over feature of lotteries can generate correlations qualitatively similar to those observed empirically. The model is consistent with the idea that the growth in the number of bets, motivated by the size of the expected jackpot, is a mechanism that generates correlations in an apparently random scenario.
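A minimal simulation conveys the rolling-over mechanism: the expected number of winners per draw grows geometrically with the number of consecutive roll-overs (a proxy for the accumulated jackpot attracting more bets) and resets after a win. All parameter values are illustrative, not fitted to Mega-Sena data.

```python
import random

def poisson(lam, rng):
    # Knuth's method; adequate for the modest rates that occur here
    threshold, k, p = 2.718281828459045 ** (-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_lottery(draws=2000, base_lam=0.05, growth=0.25, seed=7):
    """Toy rolling-over lottery: winners per draw ~ Poisson(lam), where
    lam grows by a factor (1 + growth) per consecutive roll-over and
    resets once someone wins."""
    rng = random.Random(seed)
    rollovers, series = 0, []
    for _ in range(draws):
        winners = poisson(base_lam * (1 + growth) ** rollovers, rng)
        series.append(winners)
        rollovers = 0 if winners > 0 else rollovers + 1
    return series

series = simulate_lottery()
print(sum(1 for w in series if w == 0) / len(series))  # fraction of roll-over draws
```

Because long zero runs end in draws with inflated winner counts, the series carries the kind of persistence-like correlations the abstract describes, despite each individual draw being random.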
Evaluation of R and D volume 2 number 3
International Nuclear Information System (INIS)
Anderson, F.; Cheah, C.; Dalpe, R.; O'Brecht, M.
1994-01-01
A Canadian newsletter on the evaluation of research and development (R&D). This issue contains an econometric assessment of the impact of R&D programs, the choice of location for pharmaceutical R&D, the industry's scientific publications, standards as a strategic instrument, and a discussion of how much future R&D an organization can justify.
Number of generations in free fermionic string models
Giannakis, Ioannis; Nanopoulos, D.V.; Yuan, Kajia
1995-01-01
In string theory there seems to be an intimate connection between spacetime and world-sheet physics. Following this line of thought we investigate the family problem in a particular class of string solutions, namely the free fermionic string models. We find that the number of generations N_g is related to the index of the supersymmetry generator of the underlying N=2 internal superconformal field theory which is always present in any N=1 spacetime supersymmetric string vacuum. We also derive a formula for the index and thus for the number of generations which is sensitive to the boundary condition assignments of the internal fermions and to certain coefficients which determine the weight with which each spin-structure of the model contributes to the one-loop partition function. Finally we apply our formula to several realistic string models in order to derive N_g and we verify our results by constructing explicitly the massless spectrum of these string models.
Modeling users' activity on Twitter networks: validation of Dunbar's number
Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro
2012-02-01
Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
Modeling users' activity on twitter networks: validation of Dunbar's number.
Directory of Open Access Journals (Sweden)
Bruno Gonçalves
Full Text Available Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
Fuzzy model for predicting the number of deformed wheels
Directory of Open Access Journals (Sweden)
Ž. Đorđević
2015-10-01
Full Text Available Deformation of the wheels damages cars and rails and affects vehicle stability and safety. Repair and replacement cause high costs and a shortage of wagons. Maintenance of wagons cannot be planned without estimates of the number of wheels that will be replaced due to wear and deformation in a given period of time. There are many influencing factors, the most important being weather conditions, quality of materials, operating conditions, and the distance between two replacements. The fuzzy logic model uses the collected data as input variables to predict the output variable, the number of deformed wheels for a certain type of vehicle in the defined period on a particular section of the railway.
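The flavor of such a fuzzy predictor can be sketched with triangular membership functions and one weighted-output defuzzification step. The rules, membership shapes, and output values below are hypothetical illustrations, not the paper's actual system.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def predict_deformed(distance_km, temperature_c):
    """Minimal two-rule fuzzy sketch: longer service distance combined with
    harsher (colder) weather raises the predicted number of deformed wheels.
    Representative outputs: 'few' = 5 wheels, 'many' = 40 wheels."""
    long_dist = tri(distance_km, 50_000, 150_000, 250_000)
    cold = tri(temperature_c, -30, -15, 0)
    many = min(long_dist, cold)   # AND of the two antecedents
    few = 1.0 - many
    # Weighted-average defuzzification over the two rule outputs:
    return (few * 5 + many * 40) / (few + many)

print(predict_deformed(150_000, -15))   # both memberships peak, so the output leans 'many'
```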
Baryon number dissipation at finite temperature in the standard model
International Nuclear Information System (INIS)
Mottola, E.; Raby, S.; Starkman, G.
1990-01-01
We analyze the phenomenon of baryon number violation at finite temperature in the standard model, and derive the relaxation rate for the baryon density in the high temperature electroweak plasma. The relaxation rate γ is given in terms of real-time correlation functions of the operator E·B, and is directly proportional to the sphaleron transition rate Γ: γ ≲ n_f Γ/T^3. Hence it is not instanton suppressed, as claimed by Cohen, Dugan and Manohar (CDM). We show explicitly how this result is consistent with the methods of CDM, once it is recognized that a new anomalous commutator is required in their approach. 19 refs., 2 figs
Modeling of dynamically loaded hydrodynamic bearings at low Sommerfeld numbers
DEFF Research Database (Denmark)
Thomsen, Kim
and failure risk of rolling element bearings do, however, grow exponentially with the size. Therefore hydrodynamic bearings can prove to be a competitive alternative to the current practice of rolling element bearings and ultimately help reduce the cost and carbon footprint of renewable energy generation. The challenging main bearing operation conditions in a wind turbine pose a demanding development task for the design of a hydrodynamic bearing. In general these conditions include operation at low Reynolds numbers with frequent starts and stops at high loads, as well as difficult operating conditions dictated by the environment and other wind turbine components. In this work a numerical multiphysics bearing model is developed in order to allow for accurate performance prediction of hydrodynamic bearings subjected to the challenging conditions that exist in modern wind turbines. This requires the coupling of several…
Modelling the number of olive groves in Spanish municipalities
Energy Technology Data Exchange (ETDEWEB)
Huete, M.D.; Marmolejo, J.A.
2016-11-01
The univariate generalized Waring distribution (UGWD) is presented as a new model, applicable in the context of agriculture, for describing goodness of fit. In this paper, it was used to model the number of olive groves recorded in Spain in the 8,091 municipalities included in the 2009 Agricultural Census, according to which the production of oil olives accounted for 94% of total output, while that of table olives represented 6% (with an average of 44.84 and 4.06 holdings per Spanish municipality, respectively). UGWD is suitable for fitting this type of discrete data, with strong left-sided asymmetry. This novel use of UGWD can provide the foundation for future research in agriculture, with the advantage over other discrete distributions that it enables the analyst to split the variance. After defining the distribution, we analysed various methods for fitting its parameters, namely estimation by maximum likelihood, estimation by the method of moments and a variant of the latter, estimation by the method of frequencies and moments. For oil olives, the chi-square goodness of fit test gives p-values of 0.9992, 0.9967 and 0.9977, respectively. However, a poor fit was obtained for the table olive distribution. Finally, the variance was split, following Irwin, into three components related to random factors, external factors and internal differences. For the distribution of the number of olive grove holdings, this splitting showed that random and external factors account for only about 0.22% and 0.05% of the variability, respectively. Therefore, internal differences within municipalities play an important role in determining total variability. (Author)
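The chi-square goodness-of-fit comparison reported in this record can be sketched as follows. The class counts, expected frequencies, and the assumed number of fitted parameters are illustrative, not the Spanish olive-grove data:

```python
# Pearson chi-square goodness-of-fit check of a fitted discrete distribution,
# mirroring the comparison of estimation methods above. The class counts and
# expected frequencies below are illustrative, not the olive-grove data.
observed = [410, 280, 160, 90, 40, 20]               # municipalities per class
expected = [400.0, 290.0, 165.0, 85.0, 42.0, 18.0]   # from a fitted model

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Degrees of freedom = classes - 1 - fitted parameters; with 6 classes and,
# say, 2 fitted parameters, df = 3. The 5% critical value for df = 3 is 7.815
# (standard chi-square tables), so chi2 below it means the fit is not rejected.
good_fit = chi2 < 7.815
```

A small statistic relative to the critical value (equivalently, a large p-value, as in the record's 0.99+ values) indicates the fitted distribution reproduces the observed class frequencies well.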
Modelling the number of olive groves in Spanish municipalities
Directory of Open Access Journals (Sweden)
María-Dolores Huete
2016-03-01
Full Text Available The univariate generalized Waring distribution (UGWD) is presented as a new model, applicable in the context of agriculture, for describing goodness of fit. In this paper, it was used to model the number of olive groves recorded in Spain in the 8,091 municipalities included in the 2009 Agricultural Census, according to which the production of oil olives accounted for 94% of total output, while that of table olives represented 6% (with an average of 44.84 and 4.06 holdings per Spanish municipality, respectively). UGWD is suitable for fitting this type of discrete data, with strong left-sided asymmetry. This novel use of UGWD can provide the foundation for future research in agriculture, with the advantage over other discrete distributions that it enables the analyst to split the variance. After defining the distribution, we analysed various methods for fitting its parameters, namely estimation by maximum likelihood, estimation by the method of moments and a variant of the latter, estimation by the method of frequencies and moments. For oil olives, the chi-square goodness of fit test gives p-values of 0.9992, 0.9967 and 0.9977, respectively. However, a poor fit was obtained for the table olive distribution. Finally, the variance was split, following Irwin, into three components related to random factors, external factors and internal differences. For the distribution of the number of olive grove holdings, this splitting showed that random and external factors account for only about 0.22% and 0.05% of the variability, respectively. Therefore, internal differences within municipalities play an important role in determining total variability.
Implementation of the Numbered Head Together Strategy in the Setting of the STAD Learning Model
Directory of Open Access Journals (Sweden)
Muhammad Mifta Fausan
2016-07-01
Full Text Available This study aims to determine the improvement in students' motivation and biology learning outcomes through the implementation of the Numbered Head Together (NHT) strategy in the setting of the Student Teams Achievement Division (STAD) learning model based on Lesson Study (LS). The subjects in this study were students of class X IS 1 at MAN 3 Malang. The study consisted of two cycles, each comprising three meetings. The data obtained were analyzed using qualitative and quantitative descriptive statistics. The research instruments used were observation sheets, tests, LS monitoring sheets and questionnaires. The results of this study indicate that implementing the NHT strategy in the setting of the STAD learning model based on LS can improve students' motivation and biology learning outcomes. Students' motivation was 69% in the first cycle and increased to 86% in the second cycle. For cognitive learning, classical completeness was 74% in the first cycle and increased to 93% in the second cycle. Students' affective learning outcomes were 93% in the first cycle and increased to 100% in the second cycle. Furthermore, students' psychomotor learning outcomes also increased, from 74% in the first cycle to 93% in the second cycle.
DEFF Research Database (Denmark)
Olesen, H. R.
1998-01-01
Proceedings of the Twenty-Second NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held June 6-10, 1997, in Clermont-Ferrand, France.
A POD reduced order unstructured mesh ocean modelling method for moderate Reynolds number flows
Fang, F.; Pain, C. C.; Navon, I. M.; Gorman, G. J.; Piggott, M. D.; Allison, P. A.; Farrell, P. E.; Goddard, A. J. H.
Herein a new approach to enhance the accuracy of a novel Proper Orthogonal Decomposition (POD) model applied to moderate Reynolds number flows (of the type typically encountered in ocean models) is presented. This approach develops the POD model of Fang et al. [Fang, F., Pain, C.C., Navon, I.M., Piggott, M.D., Gorman, G.J., Allison, P., Goddard, A.J.H., 2008. Reduced-order modelling of an adaptive mesh ocean model. International Journal for Numerical Methods in Fluids. doi:10.1002/fld.1841] used in conjunction with the Imperial College Ocean Model (ICOM), an adaptive, non-hydrostatic finite element model. Both the velocity and vorticity results of the POD reduced order model (ROM) exhibit an overall good agreement with those obtained from the full model. The accuracy of the POD-Galerkin model with the use of adaptive meshes is first evaluated using the Munk gyre flow test case with Reynolds numbers ranging between 400 and 2000. POD models using the L2 norm become oscillatory when the Reynolds number exceeds Re=400. This is because the low-order truncation of the POD basis generally inhibits all the transfers between the large and the small (unresolved) scales of the fluid flow. Accuracy is improved by using the H1 POD projector in preference to the L2 POD projector. The POD bases are constructed by incorporating gradients as well as function values in the H1 Sobolev norm. The accuracy of numerical results is further enhanced by increasing the number of snapshots and POD bases. Error estimation was used to assess the effect of truncation (involved in the POD-Galerkin approach) when adaptive meshes are used in conjunction with POD/ROM. The RMSE of velocity results between the full model and the POD-Galerkin model is reduced by as much as 50% by using the H1 norm and increasing the number of snapshots and POD bases.
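The snapshot-POD construction underlying such reduced order models can be sketched with an SVD. The synthetic two-mode "flow" data and the 99.9% energy threshold below are illustrative assumptions, not the ICOM setup:

```python
import numpy as np

# Snapshot-POD sketch: columns of `snapshots` are flow states at successive
# times. The two-mode synthetic "flow" plus noise is an illustrative stand-in
# for real simulation snapshots.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)   # spatial degrees of freedom
t = np.linspace(0.0, 1.0, 40)    # snapshot times
snapshots = (np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * t))
             + 0.1 * np.outer(np.sin(4 * np.pi * x), np.sin(4 * np.pi * t))
             + 1e-3 * rng.standard_normal((x.size, t.size)))

# POD basis = left singular vectors; modal "energy" = squared singular values.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
# Number of modes needed to capture 99.9% of the energy (assumed threshold).
r = int(np.searchsorted(np.cumsum(energy), 0.999)) + 1

# Project onto the truncated basis and reconstruct (the Galerkin subspace).
Ur = U[:, :r]
reconstruction = Ur @ (Ur.T @ snapshots)
rel_err = (np.linalg.norm(snapshots - reconstruction)
           / np.linalg.norm(snapshots))
```

With this synthetic data, two modes capture essentially all of the energy and the truncated reconstruction error is well below 1%; the paper's H1-norm variant additionally incorporates gradients when forming the basis.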
International Nuclear Information System (INIS)
Castellano, G.; Trincavelli, J.; Del Giorgio, M.; Riveros, J.
1987-01-01
Recent models for the distribution function given by Sewell, Love and Scott (1985) and by Pouchou and Pichoir (1986) are compared with those models which have shown good agreement with experimental data. The validity of the basis on which the different models have been developed is discussed. (Author)
Model Program Evaluations. Fact Sheet
Arkansas Safe Schools Initiative Division, 2002
2002-01-01
There are probably thousands of programs and courses intended to prevent or reduce violence in this nation's schools. Evaluating these many programs has become a problem or goal in itself. There are now many evaluation programs, with many levels of designations, such as model, promising, best practice, exemplary and noteworthy. "Model program" is…
Nuclear models relevant to evaluation
International Nuclear Information System (INIS)
Arthur, E.D.; Chadwick, M.B.; Hale, G.M.; Young, P.G.
1992-01-01
The widespread use of nuclear models continues in the creation of data evaluations. The reasons include extension of data evaluations to higher energies, creation of data libraries for isotopic components of natural materials, and production of evaluations for radioactive target species. In these cases, experimental data are often sparse or nonexistent. As this trend continues, the nuclear models employed in evaluation work move towards more microscopically-based theoretical methods, prompted in part by the availability of increasingly powerful computational resources. Advances in nuclear models applicable to evaluation will be reviewed. These include advances in optical model theory, microscopic and phenomenological state and level density theory, unified models that consistently describe both equilibrium and nonequilibrium reaction mechanisms, and improved methodologies for calculation of prompt radiation from fission. (orig.)
Nuclear models relevant to evaluation
International Nuclear Information System (INIS)
Arthur, E.D.; Chadwick, M.B.; Hale, G.M.; Young, P.G.
1991-01-01
The widespread use of nuclear models continues in the creation of data evaluations. The reasons include extension of data evaluations to higher energies, creation of data libraries for isotopic components of natural materials, and production of evaluations for radioactive target species. In these cases, experimental data are often sparse or nonexistent. As this trend continues, the nuclear models employed in evaluation work move towards more microscopically-based theoretical methods, prompted in part by the availability of increasingly powerful computational resources. Advances in nuclear models applicable to evaluation will be reviewed. These include advances in optical model theory, microscopic and phenomenological state and level density theory, unified models that consistently describe both equilibrium and nonequilibrium reaction mechanisms, and improved methodologies for calculation of prompt radiation from fission. 84 refs., 8 figs
Global gridded crop model evaluation
Müller, Christoph; Elliott, Joshua; Chryssanthacopoulos, James; Arneth, Almut; Balkovic, Juraj; Ciais, Philippe; Deryng, Delphine; Folberth, Christian; Glotter, Michael; Hoek, Steven; Iizumi, Toshichika; Izaurralde, Roberto C.; Jones, Curtis; Khabarov, Nikolay; Lawrence, Peter; Liu, Wenfeng; Olin, Stefan; Pugh, Thomas A.M.; Ray, Deepak K.; Reddy, Ashwan; Rosenzweig, Cynthia; Ruane, Alex C.; Sakurai, Gen; Schmid, Erwin; Skalsky, Rastislav; Song, Carol X.; Wang, Xuhui; Wit, De Allard; Yang, Hong
2017-01-01
Crop models are increasingly used to simulate crop yields at the global scale, but so far there is no general framework on how to assess model performance. Here we evaluate the simulation results of 14 global gridded crop modeling groups that have contributed historic crop yield simulations for
Deng, Xinyang; Jiang, Wen
2017-09-12
Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, fuzzy risk evaluation in FMEA is studied from the perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables for failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D-numbers-based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model.
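The traditional RPN approach that this paper improves upon multiplies severity, occurrence, and detection ratings. A minimal sketch with hypothetical failure modes and ratings:

```python
# Traditional risk priority number: RPN = severity x occurrence x detection,
# each rated on a 1-10 scale. Failure modes and ratings are hypothetical.
failure_modes = {
    "seal leakage": {"severity": 7, "occurrence": 5, "detection": 3},
    "bearing wear": {"severity": 6, "occurrence": 6, "detection": 6},
    "sensor drift": {"severity": 4, "occurrence": 7, "detection": 8},
}

def rpn(ratings):
    """Classic crisp RPN of one failure mode."""
    return ratings["severity"] * ratings["occurrence"] * ratings["detection"]

# Rank failure modes by descending RPN (highest risk priority first).
ranked = sorted(failure_modes, key=lambda m: rpn(failure_modes[m]),
                reverse=True)
```

A known shortcoming that motivates the fuzzy alternatives discussed above is that different (severity, occurrence, detection) combinations can yield identical RPNs, and the crisp product treats uncertain linguistic judgments as exact numbers.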
The Air Quality Model Evaluation International Initiative ...
This presentation provides an overview of the Air Quality Model Evaluation International Initiative (AQMEII). It contains a synopsis of the three phases of AQMEII, including objectives, logistics, and timelines. It also provides a number of examples of analyses conducted through AQMEII with a particular focus on past and future analyses of deposition. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.
Evaluation of cell number and DNA content in mouse embryos cultivated with uranium
International Nuclear Information System (INIS)
Kundt, Mirian S.; Cabrini, Romulo L.
2000-01-01
The evaluation of the degree of development, the number of cells and the DNA content were used to evaluate the embryotoxicity of uranium. Embryos at the one cell stage were cultured with uranyl nitrate hexahydrate (UN) at final uranium (U) concentrations of 26, 52 and 104 μgU/ml. At 24 hs of culture, the embryos at the 2 cell stage were put in new wells with the same concentrations of U as the previous day, until the end of the incubation period at 72 hs. At 72 hs of culture, 87% of the original one cell embryos were at the morula stage, and in those cultivated with uranium the percentage decreased significantly to 77%, 63.24% and 40.79%, respectively, for the different U concentrations. Those embryos that exhibited a normal morphology were selected and fixed on slides. The number of cells per embryo was evaluated in Giemsa stained preparations. The DNA content was evaluated cytophotometrically in Feulgen stained nuclei. The number of cells decreased significantly from 20.3 ± 5.6 in the control to 19 ± 6, 14 ± 3 and 13.9 ± 5.6 for the different concentrations. All the embryos evaluated showed one easily recognizable polar body, which was used as a haploid indicator (n). The DNA content was measured in a total of 20 control embryos and 16 embryos cultivated with UN. In control embryos, 92.7% of the nuclei presented normal ploidy from 2n to 4n, 2.9% of the nuclei were hypoploid and 4.4% were hyperploid. The percentage of hypoploid nuclei rose in a dose-dependent fashion to 3.45%, 44.45% and 50.34%, respectively, for the embryos cultured at the different U concentrations. The results indicate that U is embryotoxic, that its effects are dose dependent at the concentrations used in this study, and that even those embryos that show a normal morphology can be genetically affected. We show that the model employed is extremely sensitive. It is possible to use preimplantation embryos as a model to test the effect of possibly mutagenic agents of the nuclear industry. (author)
A Cyberspace Command and Control Model (Maxwell Paper, Number 47)
2009-08-01
Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine [Cambridge, MA: MIT Press, 1948]; and Ludwig von Bertalanffy… or external change has successfully aligned with the system's goal. Cybernetic Model: the Basic Cybernetic Model (sources: Wiener, 1948; von Bertalanffy…) emerged out of WWII and is the foundation of the overwhelming majority of C2 models. Figure 4: The Cybernetic Model (adapted from Wiener, 1948).
Rock mechanics models evaluation report
International Nuclear Information System (INIS)
1987-08-01
This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The primary recommendations of the analysis are that the DOT code be used for two-dimensional thermal analysis and that the STEALTH and HEATING 5/6 codes be used for three-dimensional and complicated two-dimensional thermal analysis. STEALTH and SPECTROM 32 are recommended for thermomechanical analyses. The other evaluated codes should be considered for use in certain applications. A separate review of salt creep models indicates that the commonly used exponential time law model is appropriate for use in repository design studies. 38 refs., 1 fig., 7 tabs
International Nuclear Information System (INIS)
Petersen, K.E.
1999-01-01
The model evaluation group (MEG) was launched in 1992, growing out of the Major Technological Hazards Programme with EU/DG XII. The goal of MEG was to improve the culture in which models were developed, particularly by encouraging voluntary model evaluation procedures based on a formalised and consensus protocol. The evaluation was intended to assess the fitness-for-purpose of the models being used as a measure of their quality. The approach adopted focused on developing a generic model evaluation protocol and subsequently targeting it onto specific areas of application. Five such developments have been initiated, on heavy gas dispersion, liquid pool fires, gas explosions, human factors and momentum fires. The quality of models is an important element when complying with the 'Seveso Directive', which requires that the safety reports submitted to the authorities comprise an assessment of the extent and severity of the consequences of identified major accidents. Further, the quality of models becomes important in the land use planning process, where the proximity of industrial sites to vulnerable areas may be critical. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
Modelling of high-enthalpy, high-Mach number flows
International Nuclear Information System (INIS)
Degrez, G; Lani, A; Panesi, M; Chazot, O; Deconinck, H
2009-01-01
A review is made of the computational models of high-enthalpy flows developed over the past few years at the von Karman Institute and Universite Libre de Bruxelles, for the modelling of high-enthalpy hypersonic (re-)entry flows. Both flows in local thermo-chemical equilibrium (LTE) and flows in thermo-chemical non-equilibrium (TCNEQ) are considered. First, the physico-chemical models are described, i.e. the set of conservation laws, the thermodynamics, transport phenomena and chemical kinetics models. Particular attention is given to the correct modelling of elemental (LTE flows) and species (chemical non-equilibrium-CNEQ-flows) transport. The numerical algorithm, based on a state-of-the-art finite volume discretization, is then briefly described. Finally, selected examples are included to illustrate the capabilities of the developed solver. (review article)
Study and discretization of kinetic models and fluid models at low Mach number
International Nuclear Information System (INIS)
Dellacherie, Stephane
2011-01-01
This thesis summarizes our work between 1995 and 2010. It concerns the analysis and the discretization of Fokker-Planck or semi-classical Boltzmann kinetic models and of Euler or Navier-Stokes fluid models at low Mach number. The studied Fokker-Planck equation models the collisions between ions and electrons in a hot plasma, and is here applied to inertial confinement fusion. The studied semi-classical Boltzmann equations are of two types. The first one models the thermonuclear reaction between a deuterium ion and a tritium ion producing an α particle and a neutron, and is also in our case used to describe inertial confinement fusion. The second one (known as the Wang-Chang and Uhlenbeck equations) models the transitions between quantified electronic energy levels of uranium and iron atoms in the AVLIS isotopic separation process. The basic properties of these two Boltzmann equations are studied, and, for the Wang-Chang and Uhlenbeck equations, a kinetic-fluid coupling algorithm is proposed. This kinetic-fluid coupling algorithm led us to study the relaxation concept for gas and immiscible fluid mixtures, and to underline connections with classical kinetic theory. Then, a diphasic low Mach number model without acoustic waves is proposed to model the deformation of the interface between two immiscible fluids induced by high heat transfers at low Mach number. In order to increase the accuracy of the results without increasing the computational cost, an AMR algorithm is studied on a simplified interface deformation model. These low Mach number studies also led us to analyse, on Cartesian meshes, the inaccuracy of Godunov schemes at low Mach number. Finally, the LBM algorithm applied to the heat equation is justified.
The Baryon Number Two System in the Chiral Soliton Model
International Nuclear Information System (INIS)
Mantovani-Sarti, V.; Drago, A.; Vento, V.; Park, B.-Y.
2013-01-01
We study the interaction between two B = 1 states in a chiral soliton model where baryons are described as non-topological solitons. By using the hedgehog solution for the B = 1 states we construct three possible B = 2 configurations to analyze the role of the relative orientation of the hedgehog quills in the dynamics. The strong dependence of the inter-soliton interaction on these relative orientations reveals that studies of dense hadronic matter using this model should take their implications into account. (author)
The Number of Atomic Models of Uncountable Theories
Ulrich, Douglas
2016-01-01
We show there exists a complete theory in a language of size continuum possessing a unique atomic model which is not constructible. We also show it is consistent with $ZFC + \\aleph_1 < 2^{\\aleph_0}$ that there is a complete theory in a language of size $\\aleph_1$ possessing a unique atomic model which is not constructible. Finally we show it is consistent with $ZFC + \\aleph_1 < 2^{\\aleph_0}$ that for every complete theory $T$ in a language of size $\\aleph_1$, if $T$ has uncountable atomic mod...
Özcan, Zeynep; Başkan, Oğuz; Düzgün, H Şebnem; Kentel, Elçin; Alp, Emre
2017-10-01
Fate and transport models are powerful tools that aid authorities in making unbiased decisions for developing sustainable management strategies. Application of pollution fate and transport models in semi-arid regions has been challenging because of unique hydrological characteristics and limited data availability. Significant temporal and spatial variability in rainfall events, complex interactions between soil, vegetation and topography, and limited water quality and hydrological data due to an insufficient monitoring network make it a difficult task to develop reliable models in semi-arid regions. The performance of these models governs the final use of the outcomes, such as policy implementation, screening, economic analysis, etc. In this study, a deterministic distributed fate and transport model, SWAT, is applied in the Lake Mogan Watershed, a semi-arid region dominated by dry agricultural practices, to estimate nutrient loads and to develop the water budget of the watershed. To minimize the discrepancy due to the limited availability of historical water quality data, extensive efforts were made to collect site-specific data for model inputs such as soil properties, agricultural practice information and land use. Moreover, calibration parameter ranges suggested in the literature are utilized during calibration in order to obtain a more realistic representation of the Lake Mogan Watershed in the model. Model performance is evaluated by comparing the measured data with the 95%CI for the simulated data and comparing unit pollution load estimates with those provided in the literature for similar catchments, in addition to commonly used evaluation criteria such as Nash-Sutcliffe simulation efficiency, coefficient of determination and percent bias. These evaluations demonstrated that even though the model's predictive power is not high according to the commonly used model performance criteria, the calibrated model may provide useful information in the comparison of the
Realistic Mathematic Approach through Numbered Head Together Learning Model
Sugihatno, A. C. M. S.; Budiyono; Slamet, I.
2017-09-01
Recently, teaching conducted in a teacher-centred way has affected students' interaction in the class, causing students to become less interested in participating. That is why teachers should be more creative in designing learning using other types of cooperative learning models. Therefore, this research is aimed at implementing NHT with RMA in the teaching process. We utilize NHT since it is a variant of group discussion whose aim is to give students a chance to share their ideas related to the teacher's question. By using NHT in the class, a teacher can give a better understanding of the material with the help of the Realistic Mathematics Approach (RMA), which is known for its real-world problem context. Meanwhile, the researchers assume that, besides the choice of teaching model, students' Adversity Quotient (AQ) also influences their achievement. This research used a quasi-experimental design. The sample was 60 junior high school students, taken using the stratified cluster random sampling technique. The results show that NHT-RMA gives better learning achievement in mathematics than the direct teaching model, and that under NHT-RMA, students categorized as high AQ show different learning achievement from students categorized as moderate and low AQ.
Increased mast cell numbers in a calcaneal tendon overuse model
DEFF Research Database (Denmark)
Pingel, Jessica; Wienecke, Jacob; Kongsgaard Madsen, Mads
2013-01-01
Tendinopathy is often discovered late because the initial development of tendon pathology is asymptomatic. The aim of this study was to examine the potential role of mast cell involvement in early tendinopathy using a high-intensity uphill running (HIUR) exercise model. Twenty-four male Wistar ra...
COST EVALUATION: STRUCTURING OF A MODEL
Directory of Open Access Journals (Sweden)
Altair Borgert
2010-07-01
Full Text Available This study’s purpose was to build a cost evaluation model with a view to providing managers and decision makers with information to support the resolution process. From a strategic positioning standpoint, the pondering of the variables involved in a cost system is key to corporate success. To this extent, overall consideration was given to contemporary cost approaches – the Theory of Constraints, Balanced Scorecard and Strategic Cost Management – and cost evaluation was analysed. It is understood that this is a relevant factor and that it ought to be taken into account when taking corporate decisions. Furthermore, considering that the MCDA methodology is recommended for the construction of cost evaluation models, some of its aspects were emphasised. Finally, the construction of the model itself complements this study. At this stage, cost variables for the three approaches were compiled. Thus, a repository of several variables was created, and its use and combination are subject to the interests and needs of those responsible for its structuring within corporations. In so proceeding, the number of variables to ponder follows the complexity of the issue and of the required solution. Once meetings were held with the study groups, the model was built, revised and reconstructed until consensus was reached. Thereafter, the conclusion was that a cost evaluation model, when built according to the characteristics and needs of each organization, might become the groundwork for ensuring accounting becomes increasingly useful at companies. Key-words: Cost evaluation. Cost measurement. Strategy.
Number of Clusters and the Quality of Hybrid Predictive Models in Analytical CRM
Directory of Open Access Journals (Sweden)
Łapczyński Mariusz
2014-08-01
Full Text Available Making more accurate marketing decisions requires managers to build effective predictive models. Typically, these models specify the probability of a customer belonging to a particular category, group or segment. The analytical CRM categories refer to customers interested in starting cooperation with the company (acquisition models), customers who purchase additional products (cross- and up-sell models) or customers intending to resign from the cooperation (churn models). When building predictive models, researchers use analytical tools from various disciplines with an emphasis on their best performance. This article attempts to build a hybrid predictive model combining decision trees (the C&RT algorithm) and cluster analysis (k-means). In the experiments, five different cluster validity indices and eight datasets were used. The performance of the models was evaluated using popular measures such as accuracy, precision, recall, G-mean, F-measure and lift in the first and second deciles. The authors tried to find a connection between the number of clusters and model quality.
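The evaluation measures listed in this record can be computed directly from a binary confusion matrix; the counts here are hypothetical, not from the article's datasets:

```python
import math

# Evaluation measures for a binary (e.g. churn) model from a confusion
# matrix. The counts tp/fp/fn/tn are hypothetical.
tp, fp, fn, tn = 60, 20, 15, 105

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)                   # a.k.a. sensitivity
specificity = tn / (tn + fp)
g_mean = math.sqrt(recall * specificity)  # balances both class accuracies
f_measure = 2 * precision * recall / (precision + recall)
```

Lift in the first decile would additionally require the model's score ranking of customers: the positive rate among the top 10% of scored customers divided by the overall positive rate.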
Directory of Open Access Journals (Sweden)
Flávia Barbosa Abreu
2006-01-01
Full Text Available This study presents the minimum number and the best combination of tomato harvests needed to compare tomato accessions from germplasm banks. Number and weight of fruit in tomato plants are important as auxiliary traits in the evaluation of germplasm banks and should be studied simultaneously with other desirable characteristics such as pest and disease resistance, improved flavor and early production. Brazilian tomato breeding programs should consider not only the number of fruit but also fruit size, because Brazilian consumers value fruit that are homogeneous, large and heavy. Our experiment was a randomized block design with three replicates of 32 tomato accessions from the Vegetable Germplasm Bank (Banco de Germoplasma de Hortaliças) at the Federal University of Viçosa, Minas Gerais, Brazil, plus two control cultivars (Debora Plus and Santa Clara). Nine harvests were evaluated for four production-related traits. The results indicate that six successive harvests are sufficient to compare tomato genotypes and germplasm bank accessions. Evaluation of genotypes according to the number of fruit requires analysis from the second to the seventh harvest. Evaluation of fruit weight by genotype requires analysis from the fourth to the ninth harvest. Evaluation of both number and weight of fruit requires analysis from the second to the ninth harvest.
Evaluation Methodology. The Evaluation Exchange. Volume 11, Number 2, Summer 2005
Coffman, Julia, Ed.
2005-01-01
This is the third issue of "The Evaluation Exchange" devoted entirely to the theme of methodology, though every issue tries to identify new methodological choices, the instructive ways in which people have applied or combined different methods, and emerging methodological trends. For example, lately "theories of change" have gained almost…
Testing a model of componential processing of multi-symbol numbers-evidence from measurement units.
Huber, Stefan; Bahnmueller, Julia; Klein, Elise; Moeller, Korbinian
2015-10-01
Research on numerical cognition has addressed the processing of nonsymbolic quantities and symbolic digits extensively. However, magnitude processing of measurement units is still a neglected topic in numerical cognition research. Hence, we investigated the processing of measurement units to evaluate whether typical effects of multi-digit number processing such as the compatibility effect, the string length congruity effect, and the distance effect are also present for measurement units. In three experiments, participants had to single out the larger one of two physical quantities (e.g., lengths). In Experiment 1, the compatibility of number and measurement unit (compatible: 3 mm_6 cm with 3 mm) as well as string length congruity (congruent: 1 m_2 km with m 2 characters) were manipulated. We observed reliable compatibility effects with prolonged reaction times (RT) for incompatible trials. Moreover, a string length congruity effect was present in RT with longer RT for incongruent trials. Experiments 2 and 3 served as control experiments showing that compatibility effects persist when controlling for holistic distance and that a distance effect for measurement units exists. Our findings indicate that numbers and measurement units are processed in a componential manner and thus highlight that processing characteristics of multi-digit numbers generalize to measurement units. Thereby, our data lend further support to the recently proposed generalized model of componential multi-symbol number processing.
Recommendations and illustrations for the evaluation of photonic random number generators
Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi
2017-09-01
The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
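The Cohen-Procaccia estimate referenced here admits a compact numerical sketch: estimate the probability of each length-d pattern by counting ε-neighbours (Chebyshev distance) in a delay embedding of the series, then take the entropy rate as the increment of the block entropy from d to d+1, divided by the sampling time τ. The code below is a minimal illustration under that reading, not the authors' implementation; the function names and the fair-coin sanity check are mine.

```python
import numpy as np

def block_entropy(x, d, eps):
    """H(d): mean negative log-probability of length-d patterns, with the
    pattern probabilities estimated by eps-neighbour counting (Chebyshev
    distance) in a delay embedding of the series x."""
    patterns = np.lib.stride_tricks.sliding_window_view(x, d)
    probs = np.empty(len(patterns))
    for i, p in enumerate(patterns):
        probs[i] = np.mean(np.max(np.abs(patterns - p), axis=1) < eps)
    return -np.mean(np.log(probs))

def entropy_rate(x, d, eps, tau=1.0):
    """Cohen-Procaccia style estimate h(eps, tau) ~ [H(d+1) - H(d)] / tau."""
    return (block_entropy(x, d + 1, eps) - block_entropy(x, d, eps)) / tau

# sanity check: for i.i.d. fair bits and eps < 1, h should approach ln 2
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 3000).astype(float)
h = entropy_rate(bits, 1, eps=0.5)
```

For independent fair bits each new symbol contributes one bit of entropy, so the estimate should come out near ln 2 ≈ 0.693 nats per sample; real optical sources require the ε- and τ-dependence to be mapped out, which this toy check does not attempt.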
Recommendations and illustrations for the evaluation of photonic random number generators
Directory of Open Access Journals (Sweden)
Joseph D. Hart
2017-09-01
Full Text Available The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
A model evaluation checklist for process-based environmental models
Jackson-Blake, Leah
2015-04-01
Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent
Gene Copy Number Analysis for Family Data Using Semiparametric Copula Model
Directory of Open Access Journals (Sweden)
Ao Yuan
2008-01-01
Full Text Available Gene copy number changes are common characteristics of many genetic disorders. A new technology, array comparative genomic hybridization (a-CGH), is widely used today to screen for gains and losses in cancers and other genetic diseases with high resolution at the genome level or for specific chromosomal regions. Statistical methods for analyzing such a-CGH data have been developed. However, most of the existing methods are for unrelated individual data and the results from them provide explanations for horizontal variations in copy number changes. It is potentially meaningful to develop a statistical method that will allow for the analysis of family data to investigate the vertical kinship effects as well. Here we consider a semiparametric model based on a clustering method in which the marginal distributions are estimated nonparametrically, and the familial dependence structure is modeled by a copula. The model is illustrated and evaluated using simulated data. Our results show that the proposed method is more robust than the commonly used multivariate normal model. Finally, we demonstrated the utility of our method using a real dataset.
Air Force Operational Test and Evaluation Center, Volume 2, Number 2
1988-01-01
Often simulation can be done at significantly less cost within a reasonable time frame. In some cases, a combination of physical testing and…
A Taxonomy of Evaluation Models: Use of Evaluation Models in Program Evaluation.
Carter, Wayne E.
In the nine years following the passage of the Elementary Secondary Education Act (ESEA), several models have been developed to attempt to remedy the deficiencies in existing educational evaluation and decision theory noted by Stufflebeam and co-workers. Compilations of evaluation models have been undertaken and listings exist of models available…
Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies: Evaluation Number 18
Burkholder, J. B.; Sander, S. P.; Abbatt, J. P. D.; Barker, J. R.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Orkin, V. L.; Wilmouth, D. M.; Wine, P. H.
2015-01-01
This is the eighteenth in a series of evaluated sets of rate constants, photochemical cross sections, heterogeneous parameters, and thermochemical parameters compiled by the NASA Panel for Data Evaluation. The data are used primarily to model stratospheric and upper tropospheric processes, with particular emphasis on the ozone layer and its possible perturbation by anthropogenic and natural phenomena. The evaluation is available in electronic form from the following Internet URL: http://jpldataeval.jpl.nasa.gov/
Strifler, Lisa; Cardoso, Roberta; McGowan, Jessie; Cogo, Elise; Nincic, Vera; Khan, Paul A; Scott, Alistair; Ghassemi, Marco; MacDonald, Heather; Lai, Yonda; Treister, Victoria; Tricco, Andrea C; Straus, Sharon E
2018-04-13
To conduct a scoping review of knowledge translation (KT) theories, models and frameworks that have been used to guide dissemination or implementation of evidence-based interventions targeted to prevention and/or management of cancer or other chronic diseases. We used a comprehensive multistage search process from 2000-2016, which included traditional bibliographic database searching, searching using names of theories, models and frameworks, and cited reference searching. Two reviewers independently screened the literature and abstracted data. We found 596 studies reporting on the use of 159 KT theories, models or frameworks. A majority (87%) of the identified theories, models or frameworks were used in five or fewer studies, with 60% used once. The theories, models and frameworks were most commonly used to inform planning/design, implementation and evaluation activities, and least commonly used to inform dissemination and sustainability/scalability activities. Twenty-six were used across the full implementation spectrum (from planning/design to sustainability/scalability) either within or across studies. All were used for at least individual-level behavior change, while 48% were used for organization-level, 33% for community-level and 17% for system-level change. We found a significant number of KT theories, models and frameworks with a limited evidence base describing their use. Copyright © 2018. Published by Elsevier Inc.
Improving Civic Education (PKn) Learning Outcomes on Organization Material through the Numbered Head Together Model in Grade V
Directory of Open Access Journals (Sweden)
Endah Tri Wahyuni
2017-11-01
Full Text Available This research on improving civic education learning outcomes through the Numbered Heads Together model aims to describe the application of the Numbered Heads Together model to civic education material on organizations and to describe the improvement in student learning outcomes achieved with the model. This should be very useful for students and teachers in learning. The Numbered Heads Together model can also increase teacher and student activity. This study is a classroom action research. The results show that (1) the teacher implemented the Numbered Heads Together learning model in PKn lessons well and in accordance with the model's learning steps, and (2) in cycle I, classical learning completeness increased to 67% (qualification: adequate), while in cycle II classical completeness rose to 92% (qualification: very good).
Educational Program Evaluation Using CIPP Model
Warju, Warju
2016-01-01
There are many models of evaluation that can be used to evaluate a program. However, the most commonly used is the context, input, process, product (CIPP) evaluation model. The CIPP evaluation model was developed by Stufflebeam and Shinkfield in 1985. Context evaluation is used to give a rational reason for a selected program or curriculum to be implemented. On a wide scale, context can be evaluated on: the program's objectives, policies that support the vision and mission of the institution, the releva...
Number-average size model for geological systems and its application in economic geology
Directory of Open Access Journals (Sweden)
Q. F. Wang
2011-07-01
Full Text Available Various natural objects follow a number-size relationship in the fractal domain. In such a relationship, the accumulative number of the objects beyond a given size shows a power-law relationship with the size. Yet in most cases, we also need to know the relationship between the accumulative number of the objects and their average size. A generalized number-size model and a number-average size model are constructed in this paper. In the number-average size model, the accumulative number shows a power-law relationship with the average size when the given size is much less than the maximum size of the objects. When the fractal dimension D_{s} of the number-size model is smaller than 1, the fractal dimension D_{m} of the number-average size model is almost equal to 1; and when D_{s} > 1, D_{m} is approximately equal to D_{s}. In mineral deposits, according to the number-average size model, the ore tonnage may show a fractal relationship with the grade, as the cutoff changes for a single ore deposit. This is demonstrated by a study of the relationship between tonnage and grade in the Reshuitang epithermal hot-spring gold deposit, China.
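The number-size relationship described above, N(≥ r) = C·r^(−D), can be checked directly on data by sorting the object sizes, counting how many objects meet or exceed each size, and fitting the slope on log-log axes. The following is a minimal numpy sketch on synthetic Pareto-distributed sizes; the function name and the test dimension D = 1.5 are illustrative, not values from the deposit study.

```python
import numpy as np

def number_size_dimension(sizes):
    """Estimate the fractal dimension D in N(>= r) = C * r**(-D)
    by log-log least squares on the empirical cumulative counts."""
    r = np.sort(np.asarray(sizes))
    n_ge = np.arange(len(r), 0, -1)  # number of objects with size >= r
    slope, _intercept = np.polyfit(np.log(r), np.log(n_ge), 1)
    return -slope

# synthetic sizes drawn from a classical Pareto law with D = 1.5
rng = np.random.default_rng(1)
sizes = rng.pareto(1.5, 20000) + 1.0   # survival function r**(-1.5), r >= 1
d_hat = number_size_dimension(sizes)   # expected near 1.5
```

The same log-log fit applied to cumulative ore tonnage against cutoff grade is what reveals the fractal tonnage-grade relationship the abstract refers to.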
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
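For readers who want the arithmetic behind such repeatability studies: the classical ANOVA route estimates r as the ratio of genotypic to total variance, and the minimum number of measurements for a target coefficient of determination R² is m = R²(1−r) / (r(1−R²)). The sketch below uses this standard ANOVA estimator, not the principal-component procedure the study found most accurate; the simulated data and function names are illustrative.

```python
import numpy as np

def repeatability_anova(data):
    """Repeatability r = var_g / (var_g + var_e) from a genotypes x
    measurements table, via one-way ANOVA variance components."""
    g, m = data.shape
    ms_g = m * np.sum((data.mean(axis=1) - data.mean()) ** 2) / (g - 1)
    ms_e = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (g * (m - 1))
    var_g = (ms_g - ms_e) / m
    return var_g / (var_g + ms_e)

def min_measurements(r, r2_target=0.90):
    """Measurements needed to predict the true value with determination R^2:
    m = R2 * (1 - r) / (r * (1 - R2))."""
    return r2_target * (1 - r) / (r * (1 - r2_target))

# simulated genotypes with equal genotypic and residual variance (true r = 0.5)
rng = np.random.default_rng(2)
data = rng.normal(size=(400, 1)) + rng.normal(size=(400, 10))
r_hat = repeatability_anova(data)        # expected near 0.5
m_needed = min_measurements(0.5, 0.90)   # 9 measurements
```

With a trait of moderate repeatability (r = 0.5), nine measurements are needed to reach R² = 0.90, which illustrates why identifying the earliest informative measurements matters for perennial fruit crops.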
Simulation of a directed random-walk model: the effect of pseudo-random-number correlations
Shchur, L. N.; Heringa, J. R.; Blöte, H. W. J.
1996-01-01
We investigate the mechanism that leads to systematic deviations in cluster Monte Carlo simulations when correlated pseudo-random numbers are used. We present a simple model, which enables an analysis of the effects due to correlations in several types of pseudo-random-number sequences. This model provides qualitative understanding of the bias mechanism in a class of cluster Monte Carlo algorithms.
Directory of Open Access Journals (Sweden)
Trabelsi Soraya
2013-01-01
Full Text Available Combined convection and radiation in simultaneously developing laminar flow and heat transfer is numerically considered with a discrete-direction method. Coupled heat transfer in absorbing, emitting, but not scattering gases is presented for practical situations such as the combustion of natural gas, propane and heavy fuel. Numerical calculations are performed to evaluate the thermal radiation effects on heat transfer through combustion products flowing inside circular ducts. The radiative properties of the flowing gases are modeled by using the absorption distribution function (ADF) model. The fluid is a mixture of carbon dioxide, water vapor, and nitrogen. The flow and energy balance equations are solved simultaneously with temperature-dependent fluid properties. The bulk mean temperature variations and Nusselt numbers are shown for a uniform inlet temperature. Total, radiative and convective mean Nusselt numbers and their axial evolution for different gas mixtures produced by combustion with oxygen are explored.
Evaluating predictive models of software quality
International Nuclear Information System (INIS)
Ciaschini, V; Canaparo, M; Ronchieri, E; Salomoni, D
2014-01-01
Applications from the High Energy Physics scientific community are constantly growing and implemented by a large number of developers. This implies a strong churn on the code and an associated risk of faults, which is unavoidable as long as the software undergoes active evolution. However, the necessities of production systems run counter to this. Stability and predictability are of paramount importance; in addition, a short turn-around time for the defect discovery-correction-deployment cycle is required. A way to reconcile these opposite foci is to use a software quality model to obtain an approximation of the risk before releasing a program, so as to only deliver software with a risk lower than an agreed threshold. In this article we evaluated two quality predictive models to identify the operational risk and the quality of some software products. We applied these models to the development history of several EMI packages with the intent of discovering the risk factor of each product and comparing it with its real history. We attempted to determine whether the models reasonably map reality for the applications under evaluation, and finally we conclude by suggesting directions for further studies.
Evaluation of Computational Method of High Reynolds Number Slurry Flow for Caverns Backfilling
Energy Technology Data Exchange (ETDEWEB)
Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-05-01
The abandonment of salt caverns used for brining or product storage poses a significant environmental and economic risk. Risk mitigation can in part be addressed by the process of backfilling, which can improve the cavern geomechanical stability and reduce the risk of fluid loss to the environment. This study evaluates a currently available computational tool, Barracuda, to simulate such processes as slurry flow at high Reynolds number with high particle loading. Using Barracuda software, a parametric sequence of simulations evaluated slurry flow at Reynolds numbers up to 15000 and loading up to 25%. Limitations come in the long time required to run these simulations, due in particular to the mesh size requirement at the jet nozzle. This study has found that slurry-jet width and centerline velocities are functions of Reynolds number and volume fraction. The solid phase was found to spread less than the water phase, with a spreading rate smaller than 1, dependent on the volume fraction. Particle size distribution does seem to have a large influence on the jet flow development. This study constitutes a first step to understand the behavior of highly loaded slurries and their ultimate application to cavern backfilling.
International Nuclear Information System (INIS)
Lindgren, E.R.; Mattson, E.D.
1997-01-01
Electrokinetic remediation is generally an in situ method using direct current electric potentials to move ionic contaminants and/or water to collection electrodes. The method has been extensively studied for application in saturated clayey soils. Over the past few years, an electrokinetic extraction method specific for sandy, unsaturated soils has been developed and patented by Sandia National Laboratories. A RCRA RD&D-permitted demonstration of this technology for the in situ removal of chromate contamination from unsaturated soils in a former chromic acid disposal pit was operated during the summer and fall of 1996. This large-scale field test represents the first use of electrokinetics for the removal of heavy metal contamination from unsaturated soils in the United States and is part of the US EPA Superfund Innovative Technology Evaluation (SITE) Program. Guidelines for characterizing a site for electrokinetic remediation are lacking, especially for applications in unsaturated soil. The transference number of an ion is the fraction of the current carried by that ion in an electric field and represents the best measure of contaminant removal efficiency in most electrokinetic remediation processes. In this paper we compare the transference number of chromate initially present in the contaminated unsaturated soil with the transference number in the electrokinetic process effluent to demonstrate the utility of evaluating this parameter.
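The transference number defined above is simply each ion's share of the conduction current; for a dilute solution it can be written t_i = |z_i| c_i u_i / Σ_j |z_j| c_j u_j, with concentrations c, mobilities u and charges z. A minimal sketch with invented values (not data from the SITE demonstration):

```python
def transference_numbers(conc, mobility, charge):
    """Fraction of the current carried by each ionic species:
    t_i = |z_i| * c_i * u_i / sum_j |z_j| * c_j * u_j."""
    contrib = [abs(z) * c * u for c, u, z in zip(conc, mobility, charge)]
    total = sum(contrib)
    return [x / total for x in contrib]

# two hypothetical species with equal concentration and unit charge but a
# 3:1 mobility ratio carry the current in a 3:1 ratio
t = transference_numbers([1.0, 1.0], [3.0, 1.0], [1, -1])  # [0.75, 0.25]
```

A chromate transference number that is much lower in the effluent than in the initial soil would indicate, as the paper's comparison does, that other ions are increasingly carrying the current and removal efficiency is dropping.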
An applied model for the evaluation of multiple physiological stressors.
Constable, S H; Sherry, C J; Walters, T J
1991-01-01
In everyday life, a human is likely to be exposed to the combined effects of a number of different stressors simultaneously. Consequently, if an applied model is to ultimately provide the best 'fit' between the modeling and modeled phenomena, it must be able to accommodate the evaluation of multiple stressors. Therefore, a multidimensional, primate model is described that can fully accommodate a large number of conceivably stressful, real life scenarios that may be encountered by civilian or military workers. A number of physiological measurements were made in female rhesus monkeys in order to validate the model against previous reports. These evaluations were further expanded to include the experimental perturbation of physical work (exercise). Physiological profiles during activity were extended with the incorporation of radio telemetry. In conclusion, this model allows maximal extrapolation of the potential deleterious or ergogenic effects on systemic physiological function under conditions of realistic operational demands and environments.
Optimization model using Markowitz model approach for reducing the number of dengue cases in Bandung
Yong, Benny; Chin, Liem
2017-05-01
Dengue fever is one of the most serious diseases and this disease can cause death. Currently, Indonesia is the country with the highest number of dengue cases in Southeast Asia. Bandung is one of the cities in Indonesia that is vulnerable to dengue disease. The sub-districts in Bandung had different levels of relative risk of dengue disease. Dengue disease is transmitted to people by the bite of an Aedes aegypti mosquito that is infected with a dengue virus. Prevention of dengue disease is by controlling the vector mosquito. It can be done by various methods; one of the methods is fogging. The efforts made by the Health Department of Bandung through fogging had constraints in terms of limited funds. This problem causes the Health Department to be selective in fogging, which is only done for certain locations. As a result, many sub-districts are not handled properly by the Health Department because of the unequal distribution of activities to prevent the spread of dengue disease. Thus, a proper allocation of funds to each sub-district in Bandung is needed to prevent dengue transmission optimally. In this research, an optimization model using the Markowitz model approach will be applied to determine the allocation of funds that should be given to each sub-district in Bandung. Some constraints will be added to this model and the numerical solution will be solved with the generalized reduced gradient method using Solver software. The expected result of this research is that the proportion of funds given to each sub-district in Bandung corresponds to the level of risk of dengue disease in that sub-district, so that the number of dengue cases in this city can be reduced significantly.
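The abstract does not spell out the model's objective or constraints, so the following is only a generic sketch of the Markowitz machinery it borrows: a budget is split across sub-districts by minimising a quadratic risk form x'Cx subject to the funds summing to the budget, which has the closed form x ∝ C⁻¹1 when no further constraints bind. The covariance values and budget below are invented for illustration; the actual study adds constraints and solves numerically with the generalized reduced gradient method, and its risk weighting may run in the opposite direction (more funds to riskier sub-districts).

```python
import numpy as np

def min_variance_allocation(cov, budget=1.0):
    """Markowitz-style minimum-variance split of a budget:
    x = argmin x'Cx  subject to  sum(x) = budget  (x proportional to C^-1 1)."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return budget * w / w.sum()

# three hypothetical sub-districts with independent risk "variances"
cov = np.diag([1.0, 2.0, 4.0])
x = min_variance_allocation(cov, budget=700.0)  # [400., 200., 100.]
```

With a diagonal covariance the allocation is proportional to the inverse variances, which makes the closed form easy to verify by hand before adding the realistic constraints.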
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
Model Performance Evaluation and Scenario Analysis (MPESA)
Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed using the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).
An Instructional Model for Teaching Proof Writing in the Number Theory Classroom
Schabel, Carmen
2005-01-01
I discuss an instructional model that I have used in my number theory classes. Facets of the model include using small group work and whole class discussion, having students generate examples and counterexamples, and giving students the opportunity to write proofs and make conjectures in class. The model is designed to actively engage students in…
Baryon-number generation in supersymmetric unified models: the effect of supermassive fermions
International Nuclear Information System (INIS)
Kolb, E.W.; Raby, S.
1983-01-01
In supersymmetric unified models, baryon-number-violating reactions may be mediated by supermassive fermions in addition to the usual supermassive bosons. The effective low-energy baryon-number-violating cross section for fermion-mediated reactions is σ_ΔB ≈ g⁴/m², where g is a coupling constant and m is the supermassive fermion mass, as opposed to σ_ΔB ≈ g⁴s/m⁴ for scalar- or vector-mediated reactions (√s is the center-of-mass energy). Since the fermion-mediated cross section is larger at low energy, it is more effective at damping the baryon number produced in decay of the supermassive particles. In this paper we calculate baryon-number generation in models with fermion-mediated baryon-number-violating reactions, and discuss implications for supersymmetric model building.
Refined open intersection numbers and the Kontsevich-Penner matrix model
International Nuclear Information System (INIS)
Alexandrov, Alexander; Buryak, Alexandr; Tessler, Ran J.
2017-01-01
A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. Evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.
Refined open intersection numbers and the Kontsevich-Penner matrix model
Energy Technology Data Exchange (ETDEWEB)
Alexandrov, Alexander [Center for Geometry and Physics, Institute for Basic Science (IBS),Pohang 37673 (Korea, Republic of); Centre de Recherches Mathématiques (CRM), Université de Montréal,Montréal (Canada); Department of Mathematics and Statistics, Concordia University,Montréal (Canada); Institute for Theoretical and Experimental Physics (ITEP),Moscow (Russian Federation); Buryak, Alexandr [Department of Mathematics, ETH Zurich, Zurich (Switzerland); Tessler, Ran J. [Institute for Theoretical Studies, ETH Zurich,Zurich (Switzerland)
2017-03-23
A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. Evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.
Thiem, Alrik
2014-12-01
In recent years, the method of Qualitative Comparative Analysis (QCA) has been enjoying increasing levels of popularity in evaluation and directly neighboring fields. Its holistic approach to causal data analysis resonates with researchers whose theories posit complex conjunctions of conditions and events. However, due to QCA's relative immaturity, some of its technicalities and objectives have not yet been well understood. In this article, I seek to raise awareness of six pitfalls of employing QCA with regard to the following three central aspects: case numbers, necessity relations, and model ambiguities. Most importantly, I argue that case numbers are irrelevant to the methodological choice of QCA or any of its variants, that necessity is not as simple a concept as it has been suggested by many methodologists, and that doubt must be cast on the determinacy of virtually all results presented in past QCA research. By means of empirical examples from published articles, I explain the background of these pitfalls and introduce appropriate procedures, partly with reference to current software, that help avoid them. QCA carries great potential for scholars in evaluation and directly neighboring areas interested in the analysis of complex dependencies in configurational data. If users beware of the pitfalls introduced in this article, and if they avoid mechanistic adherence to doubtful "standards of good practice" at this stage of development, then research with QCA will gain in quality, as a result of which a more solid foundation for cumulative knowledge generation and well-informed policy decisions will also be created. © The Author(s) 2014.
Gifford, Sue
2014-01-01
This article sets out to evaluate the English Early Years Foundation Stage Goal for Numbers, in relation to research evidence. The Goal, which sets out to provide "a good foundation in mathematics", has greater breadth of content and higher levels of difficulty than previous versions. Research suggests that the additional expectations…
Numerical modeling of the impact of regenerator housing on the determination of Nusselt numbers
DEFF Research Database (Denmark)
Nielsen, Kaspar Kirstein; Nellis, G.F.; Klein, S.A.
2013-01-01
It is suggested that the housing of regenerators may have a significant impact when experimentally determining Nusselt numbers at low Reynolds and large Prandtl numbers. In this paper, a numerical model that takes the regenerator housing into account as a domain that is thermally coupled to the r...
Popović, Jovan; Mikov, Momir; Sabo, Ana; Jakovljević, Vida
2009-01-01
This study applies the statistical power function for the t-test and the ANOVA F-test to the evaluation of diclofenac bioequivalence in trials with widely varying sample sizes (N = 12, 18 and 24). The power function, together with the appropriate equations, tables and figures, is used to calculate the power of the ANOVA for a crossover design, the number of subjects required for a given power, and the minimum detectable difference in treatment means for different pharmacokinetic parameters of the formulations. The power of the trial with a small sample size (N = 12) to detect a 20% difference between diclofenac formulations is shown to be more than 0.9, almost the same as the power of the trial with a large sample size (N = 24). In all trials and for all pharmacokinetic parameters, the power to detect a 20% difference is more than 0.8. For a power of 0.8, the number of subjects needed to detect a 20% difference in treatment means is the same as or smaller than the number used, and the minimum detectable difference is smaller than 20% in all our trials. This investigation shows that bioequivalence studies with a small number of subjects (N = 12) may be quite adequate for valid conclusions.
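The power calculation described above can also be checked by direct simulation. The sketch below estimates the power of a paired (crossover) t-test by Monte Carlo; the effect size (a 20% true difference against a 10% within-subject standard deviation) is a hypothetical choice for illustration, not the diclofenac data.

```python
import numpy as np
from scipy import stats

def mc_power(n, true_diff, sd, alpha=0.05, trials=4000, rng=None):
    """Estimate the power of a paired t-test by Monte Carlo: the fraction
    of simulated trials in which the mean within-subject difference is
    declared significantly different from zero at level alpha."""
    rng = rng or np.random.default_rng(0)
    hits = 0
    for _ in range(trials):
        diffs = rng.normal(true_diff, sd, size=n)  # within-subject differences
        _, p = stats.ttest_1samp(diffs, 0.0)
        hits += p < alpha
    return hits / trials

# Hypothetical effect: 20% difference, 10% within-subject SD.
power_small = mc_power(n=12, true_diff=0.20, sd=0.10)
power_large = mc_power(n=24, true_diff=0.20, sd=0.10)
```

With an effect this large relative to the variability, both designs are essentially fully powered, matching the abstract's observation that N = 12 can already be adequate.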
Klewicki, J. C.; Chini, G. P.; Gibson, J. F.
2017-01-01
Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585
Dynamic model of cage induction motor with number of rotor bars as parameter
Directory of Open Access Journals (Sweden)
Gojko Joksimović
2017-05-01
Full Text Available A dynamic mathematical model of the cage induction motor, with the number of rotor bars as a parameter, is derived through the use of coupled circuits and the concept of winding functions. The exact MMF waveforms are accounted for by the model, which is formulated in natural frames of reference. Given the initial motor parameters for an a priori adopted number of stator slots and rotor bars, the model allows the number of rotor bars to be changed, which results in a new set of model parameters. During this process, the rated machine power, the number of stator slots and the stator winding scheme remain the same. Although the presented model has a potentially broad application area, it is primarily suitable for analysing the effect of different stator/rotor slot combinations on motor behaviour during transients or in the steady-state regime. The model is significant in its potential to provide analysis of dozens of different numbers of rotor bars in a few tens of minutes. A numerical example on a cage rotor induction motor illustrates this application, including three variants of the number of rotor bars.
Application of random number generators in genetic algorithms to improve rainfall-runoff modelling
Chlumecký, Martin; Buchtele, Josef; Richta, Karel
2017-10-01
The efficient calibration of rainfall-runoff models is a difficult issue, even for experienced hydrologists. Therefore, fast and high-quality model calibration is a valuable improvement. This paper describes a novel methodology and software for the optimisation of rainfall-runoff modelling using a genetic algorithm (GA) with a newly developed random number generator (HRNG), which is the core of the optimisation. The GA estimates model parameters using evolutionary principles, which requires a high-quality random number generator. The new HRNG generates random numbers based on hydrological information and provides better numbers than pure software generators. The GA enhances the model calibration considerably, and the goal is to optimise the calibration of the model with a minimum of user interaction. This article focuses on improving the internal structure of the GA, which is shielded from the user. The results we obtained indicate that the HRNG provides a stable trend in the output quality of the model, despite various configurations of the GA. In contrast to previous research, the HRNG speeds up the calibration of the model and improves rainfall-runoff modelling.
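The evolutionary calibration principle the abstract relies on can be sketched generically. The following minimal real-coded GA (truncation selection, arithmetic crossover, Gaussian mutation) recovers the parameters of a toy objective; the objective and all settings are hypothetical stand-ins for a real rainfall-runoff model and the authors' HRNG.

```python
import random

def ga_calibrate(objective, bounds, pop_size=40, generations=60,
                 mutation=0.1, rng=None):
    """Minimal real-coded genetic algorithm. Evolves a population of
    parameter vectors to minimise `objective` (lower is better)."""
    rng = rng or random.Random(42)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        elite = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # crossover
            for i, (lo, hi) in enumerate(bounds):                # mutation
                if rng.random() < mutation:
                    child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Toy "calibration": recover parameters (2.0, 0.5) by minimising squared error.
def sse(params):
    a, b = params
    return (a - 2.0) ** 2 + (b - 0.5) ** 2

best = ga_calibrate(sse, bounds=[(0.0, 5.0), (0.0, 1.0)])
```

In a real application, `objective` would be the error between observed and simulated runoff, and the quality of the random draws (the paper's HRNG versus a software generator) affects how reliably the search converges.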
Prediction Model of Interval Grey Numbers with a Real Parameter and Its Application
Directory of Open Access Journals (Sweden)
Bo Zeng
2014-01-01
Full Text Available Grey prediction models have become common methods that are widely employed to solve problems with "small samples and poor information." However, the modelling objects of existing grey prediction models are limited to homogeneous data sequences that contain only a single data type. This paper studies the methodology of building prediction models for sequences of interval grey numbers (grey heterogeneous data sequences) that also contain a real parameter. First, the position of the real parameter in the interval grey number sequence is discussed, and the real number is expanded into an interval grey number by adopting the method of grey generation. On this basis, a prediction model of interval grey numbers with a real parameter is deduced and built. Finally, this novel model is successfully applied to forecast the concentration of the organic pollutant DDT in the atmosphere. The analysis and research results in this paper extend the object of grey prediction from homogeneous data sequences to grey heterogeneous data sequences. These research findings are of positive significance for enriching and improving the theoretical system of grey prediction models.
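The building block behind such models is the classical GM(1,1) grey model for a single positive sequence; the interval-grey-number model in the abstract extends it to upper and lower bound sequences. A minimal GM(1,1) sketch (accumulated generating operation, least-squares estimation of the development coefficient, inverse AGO) is shown below; the toy data are illustrative, not the paper's DDT series.

```python
import numpy as np

def gm11(x0, horizon=0):
    """Fit the classical GM(1,1) grey model to a positive sequence x0 and
    return fitted values plus `horizon` out-of-sample forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                           # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background (mean) values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development coeff. a, grey input b
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse AGO

# Near-exponential toy data, which GM(1,1) tracks closely.
data = [2.0 * 1.1 ** k for k in range(6)]
fit = gm11(data, horizon=2)
```

For interval grey numbers, the same machinery is typically applied to the lower-bound and upper-bound sequences (or to midpoint and radius), which is where the paper's treatment of the embedded real parameter comes in.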
Directory of Open Access Journals (Sweden)
Prasenjit Chatterjee
2012-04-01
Full Text Available Evaluation of a proper supplier for manufacturing organizations is one of the most challenging problems in a real-time manufacturing environment, owing to the wide variety of customer demands. Meeting the challenges of international competitiveness has become more and more complicated, as decision makers need to assess a wide range of alternative suppliers based on a set of conflicting criteria. Thus, the main objective of supplier selection is to select a highly capable supplier through which all the set goals regarding purchasing and manufacturing activity can be achieved. For these reasons, supplier selection has received considerable attention from academicians and researchers. This paper presents a combined multi-criteria decision making methodology for supplier evaluation in industrial applications. The proposed methodology is based on a compromise ranking method combined with grey interval numbers, considering different cardinal and ordinal criteria and their relative importance. A 'supplier selection index' is also proposed to help in evaluating and ranking the alternative suppliers. Two examples are illustrated to demonstrate the potential and applicability of the proposed method.
Low Mach and Peclet number limit for a model of stellar tachocline and upper radiative zones
Directory of Open Access Journals (Sweden)
Donatella Donatelli
2016-09-01
Full Text Available We study a hydrodynamical model describing the motion of internal stellar layers based on the compressible Navier-Stokes-Fourier-Poisson system. We suppose that the medium is electrically charged, we include energy exchanges through radiative transfer, and we assume that the system is rotating. We analyze the singular limit of this system when the Mach number, the Alfven number, the Peclet number and the Froude number approach zero in a certain way, and prove convergence to a 3D incompressible MHD system with a stationary linear transport equation for the radiation intensity. Finally, we show that the energy equation reduces to a steady equation for the temperature corrector.
Baryon number fluctuations and the phase structure in the PNJL model
Shao, Guo-yun; Tang, Zhan-duo; Gao, Xue-yan; He, Wei-bo
2018-02-01
We investigate the kurtosis and skewness of net-baryon number fluctuations in the Polyakov loop extended Nambu-Jona-Lasinio (PNJL) model, and discuss the relations between fluctuation distributions and the phase structure of quark-gluon matter. The calculation shows that the traces of chiral and deconfinement transitions can be effectively reflected by the kurtosis and skewness of net-baryon number fluctuations not only in the critical region but also in the crossover region. The contour plot of baryon number kurtosis derived in the PNJL model can qualitatively explain the behavior of net-proton number kurtosis in the STAR beam energy scan experiments. Moreover, the three-dimensional presentations of the kurtosis and skewness in this study are helpful to understand the relations between baryon number fluctuations and QCD phase structure.
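In beam-energy-scan analyses, net-baryon (or net-proton) fluctuations are routinely benchmarked against a Skellam distribution, the difference of two independent Poisson variables, whose skewness and excess kurtosis have closed forms. The sketch below samples such a baseline and checks the sampled moments against the analytic values; the multiplicities are hypothetical, and this is the statistical baseline, not the PNJL calculation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_b, mu_ab = 5.0, 3.0         # assumed mean baryon / antibaryon multiplicities
n = 1_000_000
net = rng.poisson(mu_b, n) - rng.poisson(mu_ab, n)  # Skellam-distributed net number

# Closed-form Skellam cumulant ratios:
var_th = mu_b + mu_ab                     # kappa_2
skew_th = (mu_b - mu_ab) / var_th ** 1.5  # S = kappa_3 / kappa_2^{3/2}
kurt_th = 1.0 / var_th                    # excess kurtosis = kappa_4 / kappa_2^2

skew_mc = stats.skew(net)
kurt_mc = stats.kurtosis(net)             # Fisher (excess) kurtosis by default
```

Deviations of measured kurtosis and skewness from such Poisson/Skellam baselines are exactly the signals the abstract relates to the chiral and deconfinement transitions.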
Directory of Open Access Journals (Sweden)
F. Agha Hosseini
2004-09-01
Full Text Available Statement of Problem: Amalgam is the most widely used dental restorative material. However, because of the continuous low-level release of mercury from amalgam fillings, its safety has been questioned. Purpose: The aim of this study was to evaluate the concentration of mercury in saliva before and after amalgam filling and its relation to the number and surface area of amalgam fillings. Materials and Methods: In an analytic interventional study, we surveyed the concentration of mercury in saliva before and after amalgam filling. Twenty-five patients (9 male, 16 female) who were referred to the oral medicine department of Tehran University of Medical Science and the Haj-Abdol-Vahab medical center and who had no amalgam fillings were selected, and samples of saliva (5 cc) were collected before filling. All posterior decayed teeth were then filled with amalgam in a single appointment and, 24 hours later, second samples of saliva (5 cc) were collected. The amount of salivary mercury before and after filling was measured, and the difference was analyzed by paired t-test. Results: The mean concentration of mercury in saliva was 0.00896 μg/ml before and 0.16404 μg/ml after amalgam filling. The mean number of fillings was 1.96, the mean surface area was 76.43 mm2, and the mean amount of amalgam used was 4.1 units. Conclusion: There was no significant correlation of age (P=0.677), sex, number of fillings (P=0.055), number of filled surfaces (P=0.059) or surface area of fillings (P=0.072) with mercury levels in saliva after amalgam filling. There was a significant relation between the salivary mercury level after filling and the amount of amalgam used (P=0.036). Therefore, amalgam may be a significant source of mercury release into saliva. Since this is a preliminary study, supplementary evaluations of saliva, blood and urine at different periods after amalgam filling are needed.
EPA Corporate GHG Goal Evaluation Model
The EPA Corporate GHG Goal Evaluation Model provides companies with a transparent and publicly available benchmarking resource to help evaluate and establish new or existing GHG goals that go beyond business as usual for their individual sectors.
Model for modulated and chaotic waves in zero-Prandtl-number ...
Indian Academy of Sciences (India)
The effects of time-periodic forcing in a few-mode model for zero-Prandtl-number convection with rigid body rotation is investigated. The time-periodic modulation of the rotation rate about the vertical axis and gravity modulation are considered separately. In the presence of periodic variation of the rotation rate, the model ...
438 Optimal Number of States in Hidden Markov Models and its ...
African Journals Online (AJOL)
In this paper, a Hidden Markov Model is applied to model human movements so as to facilitate their automatic detection. A number of activities were simulated with the help of two persons. The four movements considered are walking, sitting down and getting up, falling while walking, and falling while standing. The…
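Choosing the number of hidden states is usually done by comparing the sequence likelihood (penalized, e.g., by BIC) across candidate state counts, and that likelihood is computed with the forward algorithm. A minimal sketch with hypothetical two-state parameters for "moving"/"still" sensor symbols:

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the scaled forward algorithm.
    pi: (S,) initial state probs; A: (S,S) transitions; B: (S,M) emissions."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]        # predict, then weight by emission
        loglik += np.log(alpha.sum())        # accumulate log P(o_t | o_<t)
        alpha = alpha / alpha.sum()          # rescale to avoid underflow
    return loglik

# Hypothetical 2-state model; symbols 0 = "still", 1 = "moving".
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],
              [0.3, 0.7]])
obs = [0, 0, 1, 0, 1, 1]
ll = forward_loglik(pi, A, B, obs)
```

Given fitted models with S states and k free parameters each, BIC = -2·ll + k·log(T) would then be compared across S to pick the optimal state count.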
Unsuppressed fermion-number violation at high temperature: An O(3) model
International Nuclear Information System (INIS)
Mottola, E.; Wipf, A.
1989-01-01
The O(3) nonlinear σ model in 1+1 dimensions, modified by an explicit symmetry-breaking term, is presented as a model for baryon- and lepton-number violation in the standard electroweak theory. Although arguments based on the Atiyah-Singer index theorem and instanton physics apply to the model, we show by explicit calculations that the rate of chiral fermion-number violation due to the axial anomaly is entirely unsuppressed at sufficiently high temperatures. Our results apply to unbroken gauge theories as well and may require reevaluation of the role of instantons in high-temperature QCD
[Evaluation of variable number of tandem repeats (VNTR) isolates of Mycobacterium bovis in Algeria].
Sahraoui, Naima; Muller, Borna; Djamel, Yala; Fadéla, Boulahbal; Rachid, Ouzrout; Jakob, Zinsstag; Djamel, Guetarni
2010-01-01
The discriminatory power of variable number of tandem repeats (VNTR) typing, based on 7 loci (MIRU 26, MIRU 27 and the 5 ETRs A, B, C, D, E), was assayed on Mycobacterium bovis strains obtained from tuberculous lesions in two slaughterhouses in Algeria. The MIRU-VNTR technique was evaluated on 88 strains of M. bovis and one strain of M. caprae and revealed 41 different profiles. Results showed that VNTR typing was highly discriminatory overall, with an allelic diversity of 0.930; four loci (ETR A, B, C and MIRU 27) were highly discriminatory (h>0.25) and three loci (ETR D, ETR E and MIRU 26) moderately discriminatory (0.11<h<0.25). VNTR typing proved adequate for a first differentiation of strains of M. bovis in Algeria, and it is a valuable tool for the further development and application of epidemiological research on tuberculosis transmission in Algeria.
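The allelic diversity h used to grade the loci is commonly computed as Nei's diversity index, one minus the sum of squared allele frequencies at the locus. A short sketch (the copy numbers below are hypothetical, not the study's data):

```python
from collections import Counter

def allelic_diversity(alleles):
    """Nei's diversity index h = 1 - sum(p_i^2) for one VNTR locus,
    where p_i is the frequency of allele (repeat copy number) i
    among the typed strains."""
    counts = Counter(alleles)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical repeat copy numbers observed at one locus in 10 strains:
locus = [3, 3, 4, 4, 4, 5, 5, 6, 7, 7]
h = allelic_diversity(locus)
```

By the thresholds quoted in the abstract, a locus with h > 0.25 would be classed as highly discriminatory.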
Evaluation of lymph node numbers for adequate staging of Stage II and III colon cancer
Directory of Open Access Journals (Sweden)
Bumpers Harvey L
2011-05-01
Full Text Available Abstract Background Although evaluation of at least 12 lymph nodes (LNs) is recommended as the minimum number required for accurate staging of colon cancer patients, there is disagreement on what constitutes adequate identification of such LNs. Methods To evaluate the minimum number of LNs for adequate staging of Stage II and III colon cancer, 490 patients were categorized into groups based on 1-6, 7-11, 12-19, and ≥20 LNs collected. Results For patients with Stage II or III disease, examination of 12 LNs was not significantly associated with recurrence or mortality. For Stage II (HR = 0.33; 95% CI, 0.12-0.91), but not for Stage III patients (HR = 1.59; 95% CI, 0.54-4.64), examination of ≥20 LNs was associated with a reduced risk of recurrence within 2 years. Examination of ≥20 LNs was associated with a 55% (Stage II, HR = 0.45; 95% CI, 0.23-0.87) and a 31% (Stage III, HR = 0.69; 95% CI, 0.38-1.26) decreased risk of mortality, respectively. For each six additional LNs examined from Stage III patients, there was a 19% increased probability of finding a positive LN (parameter estimate = 0.18510). Conclusions Thus, the 12 LN cut-off point cannot be supported as requisite in determining adequate staging of colon cancer based on current data. However, a minimum of 6 LNs should be examined for adequate staging of Stage II and III colon cancer patients.
Dotan, Dror; Friedmann, Naama
2018-04-01
We propose a detailed cognitive model of multi-digit number reading. The model postulates separate processes for visual analysis of the digit string and for oral production of the verbal number. Within visual analysis, separate sub-processes encode the digit identities and the digit order, and additional sub-processes encode the number's decimal structure: its length, the positions of 0, and the way it is parsed into triplets (e.g., 314987 → 314,987). Verbal production consists of a process that generates the verbal structure of the number, and another process that retrieves the phonological forms of each number word. The verbal number structure is first encoded in a tree-like structure, similar to syntactic trees of sentences, and then linearized into a sequence of number-word specifiers. This model is based on an investigation of the number processing abilities of seven individuals with different selective deficits in number reading. We report participants with impairment in specific sub-processes of the visual analysis of digit strings: in encoding the digit order, in encoding the number length, or in parsing the digit string into triplets. Other participants were impaired in verbal production, making errors in the number structure (shifts of digits to another decimal position, e.g., 3,040 → 30,004). Their selective deficits yielded several dissociations: first, we found a double dissociation between visual analysis deficits and verbal production deficits. Second, several dissociations were found within visual analysis: a double dissociation between errors in digit order and errors in the number length; a dissociation between order/length errors and errors in parsing the digit string into triplets; and a dissociation between the processing of different digits, namely impaired order encoding of the digits 2-9 without errors in the 0 position. Third, within verbal production, a dissociation was found between digit shifts and substitutions of number words.
The EMEFS model evaluation. An interim report
Energy Technology Data Exchange (ETDEWEB)
Barchet, W.R. [Pacific Northwest Lab., Richland, WA (United States); Dennis, R.L. [Environmental Protection Agency, Research Triangle Park, NC (United States); Seilkop, S.K. [Analytical Sciences, Inc., Durham, NC (United States); Banic, C.M.; Davies, D.; Hoff, R.M.; Macdonald, A.M.; Mickle, R.E.; Padro, J.; Puckett, K. [Atmospheric Environment Service, Downsview, ON (Canada); Byun, D.; McHenry, J.N. [Computer Sciences Corp., Research Triangle Park, NC (United States); Karamchandani, P.; Venkatram, A. [ENSR Consulting and Engineering, Camarillo, CA (United States); Fung, C.; Misra, P.K. [Ontario Ministry of the Environment, Toronto, ON (Canada); Hansen, D.A. [Electric Power Research Inst., Palo Alto, CA (United States); Chang, J.S. [State Univ. of New York, Albany, NY (United States). Atmospheric Sciences Research Center
1991-12-01
The binational Eulerian Model Evaluation Field Study (EMEFS) consisted of several coordinated data gathering and model evaluation activities. In the EMEFS, data were collected by five air and precipitation monitoring networks between June 1988 and June 1990. Model evaluation is continuing. This interim report summarizes the progress made in the evaluation of the Regional Acid Deposition Model (RADM) and the Acid Deposition and Oxidant Model (ADOM) through the December 1990 completion of a State of Science and Technology report on model evaluation for the National Acid Precipitation Assessment Program (NAPAP). Because various assessment applications of RADM had to be evaluated for NAPAP, the report emphasizes the RADM component of the evaluation. A protocol for the evaluation was developed by the model evaluation team and defined the observed and predicted values to be used and the methods by which the observed and predicted values were to be compared. Scatter plots and time series of predicted and observed values were used to present the comparisons graphically. Difference statistics and correlations were used to quantify model performance. 64 refs., 34 figs., 6 tabs.
The Influence of the Number of Different Stocks on the Levy-Levy-Solomon Model
Kohl, R.
The stock market model of Levy, Levy and Solomon is simulated for more than one stock, to analyze its behavior for a large number of investors. Small markets can lead to realistic-looking prices for one or more stocks. With a large number of investors, the simulation of a single stock behaves in a semi-regular fashion. With many stocks, three of the stocks are semi-regular and dominant, while the rest are chaotic. In addition, we changed the utility function and checked the results.
Particle-number-projected Hartree-Fock-Bogoliubov study with effective shell model interactions
Maqbool, I.; Sheikh, J. A.; Ganai, P. A.; Ring, P.
2011-04-01
We perform the particle-number-projected mean-field study using the recently developed symmetry-projected Hartree-Fock-Bogoliubov (HFB) equations. Realistic calculations have been performed in sd- and fp-shell nuclei using the shell model empirical interactions, USD and GXPFIA. It is demonstrated that the mean-field results for energy surfaces, obtained with these shell model interactions, are quite similar to those obtained using the density functional approaches. Further, it is shown that particle-number-projected results, for neutron-rich isotopes, can lead to different ground-state shapes in comparison to bare HFB calculations.
Evaluating topic models with stability
CSIR Research Space (South Africa)
De Waal, A
2008-11-01
Full Text Available Evaluating topic models is difficult because (a) they are trained on unlabelled data, so that a ground truth does not exist, and (b) "soft" (probabilistic) document clusters are created by state-of-the-art topic models, which complicates comparisons even when ground truth labels are available. Perplexity has often been used...
Site descriptive modelling - strategy for integrated evaluation
International Nuclear Information System (INIS)
Andersson, Johan
2003-02-01
The current document establishes the strategy to be used for achieving sufficient integration between disciplines in producing Site Descriptive Models during the Site Investigation stage. The Site Descriptive Model should be a multidisciplinary interpretation of geology, rock mechanics, thermal properties, hydrogeology, hydrogeochemistry, transport properties and ecosystems, using site investigation data from deep boreholes and from the surface as input. The modelling comprises the following iterative steps: evaluation of primary data, descriptive and quantitative modelling (in 3D), and overall confidence evaluation. Data are first evaluated within each discipline, and then the evaluations are checked between the disciplines. Three-dimensional modelling (i.e. estimating the distribution of parameter values in space and its uncertainty) is made in a sequence, where the geometrical framework is taken from the geological model and in turn used by the rock mechanics, thermal and hydrogeological modelling, etc. The three-dimensional description should present the parameters with their spatial variability over a relevant and specified scale, with the uncertainty included in the description. Different alternative descriptions may be required. After the individual discipline modelling and uncertainty assessment, a phase of overall confidence evaluation follows. Relevant parts of the different modelling teams assess the suggested uncertainties and evaluate the feedback. These discussions should assess overall confidence by checking that all relevant data are used, checking that information in past model versions is considered, checking that the different kinds of uncertainty are addressed, checking whether suggested alternatives make sense and whether there is potential for additional alternatives, and by discussing, if appropriate, how additional measurements (i.e. more data) would affect confidence. The findings, as well as the modelling results, are to be documented in a Site Description.
Development of a Statistical Model for Seasonal Prediction of North Atlantic Hurricane Numbers
Davis, K.; Zeng, X.
2014-12-01
Tropical cyclones cause more financial distress to insurance companies than any other natural disaster. From 1970 to 2002, hurricanes are estimated to have caused 44 billion dollars in damage, more than 2.5 times the cost of the next costliest catastrophe. These damages do not go without effect: a string of major catastrophes from 1991 to 1994 drove nine property insurance firms into bankruptcy and caused serious financial strain on others. The public was affected not only by the loss of life and property but also by the increase in tax dollars for disaster relief. Providing better seasonal predictions of North Atlantic hurricane activity farther in advance will help alleviate some of the financial strain these major catastrophes put on the nation. A statistical model to predict the total number of hurricanes over the North Atlantic was first developed by Bill Gray's team in 1984, followed by other statistical methods, dynamical modeling, and hybrid methods in recent years. However, all these methods have shown little to no skill for forecasts made by June 1 in recent years. In contrast to the relatively small year-to-year change in seasonal hurricane numbers before 1980, there have been much greater interannual changes since, especially since the year 2000. For instance, while hurricane numbers were very high in 2005 and 2010, 2013 saw one of the lowest in history. Recognizing these interdecadal changes in the dispersion of hurricane numbers, we have developed a new statistical model to more realistically predict (by June 1 each year) the seasonal hurricane number over the North Atlantic. It is based on the Multivariate ENSO Index (MEI) conditioned on the Atlantic Multidecadal Oscillation (AMO) index, the zonal wind stress and the sea surface temperature over the Atlantic. It provides both a deterministic number and a range of hurricane numbers. The details of the model and its performance from 1950 to 2014, in comparison with other methods, will be presented.
Reike, Dennis; Schwarz, Wolf
2016-01-01
The time required to determine the larger of 2 digits decreases with their numerical distance, and, for a given distance, increases with their magnitude (Moyer & Landauer, 1967). One detailed quantitative framework to account for these effects is provided by random walk models. These chronometric models describe how number-related noisy…
PREDICTIVE CAPACITY OF INSOLVENCY MODELS BASED ON ACCOUNTING NUMBERS AND DESCRIPTIVE DATA
Directory of Open Access Journals (Sweden)
Rony Petson Santana de Souza
2012-09-01
Full Text Available In Brazil, research into models to predict insolvency started in the 1970s, with most authors using discriminant analysis as the statistical tool in their models. In more recent years, authors have increasingly tried to verify whether it is possible to forecast insolvency using descriptive data contained in firms' reports. This study examines the capacity of some insolvency models to predict the failure of Brazilian companies that have gone bankrupt. The study is descriptive in nature with a quantitative approach, based on documentary research. The sample is composed of 13 companies that were declared bankrupt between 1997 and 2003. The results indicate that the majority of the insolvency prediction models tested showed high rates of correct forecasts. The models relying on descriptive reports were, on average, more likely to succeed than those based on accounting figures. These findings demonstrate that although some studies indicate a lack of validity of predictive models created in different business settings, some of these models have good capacity to forecast insolvency in Brazil. We can conclude that both models based on accounting numbers and those relying on descriptive reports can predict the failure of firms. Therefore, it can be inferred that the majority of bankruptcy prediction models that make use of accounting numbers can succeed in predicting the failure of firms.
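The discriminant-analysis models the abstract refers to project a firm's accounting ratios onto a single discriminant axis and classify by a cut-off score. The sketch below is a two-class Fisher linear discriminant on synthetic "accounting ratios"; the data and ratio names are hypothetical, not the study's sample or any published model's coefficients.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant: returns weight vector w and
    threshold c such that sign(x @ w - c) separates the classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    c = 0.5 * (m0 + m1) @ w        # midpoint threshold
    return w, c

# Synthetic (liquidity, profitability) ratios for bankrupt vs. solvent firms.
rng = np.random.default_rng(7)
bankrupt = rng.normal([0.8, -0.10], 0.15, size=(30, 2))
solvent = rng.normal([1.4, 0.12], 0.15, size=(30, 2))
w, c = fisher_lda(bankrupt, solvent)
score = lambda X: X @ w - c       # positive => classified as solvent
```

Each firm's discriminant score plays the role of the insolvency index; the "high rates of correct forecasts" in the abstract correspond to the classification accuracy of such scores on the bankrupt sample.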
Words or numbers? The evaluation of probability expressions in general practice.
O'Brien, B J
1989-03-01
A sample of 56 general practitioners were asked to rate, on a percentage scale, 23 words or phrases which denote frequency or likelihood. The hypothetical context of the exercise was that of communicating to patients the probability of a side-effect (headache) arising from an unspecified prescription medicine. Median phrase ratings ranged from 'never' at 0% to 'certain' at 95% with a 50% rating given to the phrase 'reasonable chance'. Despite relatively large variance in ratings between respondents, the median ratings of a number of phrases were similar, and some identical, to other studies from different medical professionals. Although the clinical context in which a given expression of probability is used may affect its meaning, the results are encouraging and suggest that phrases denoting likelihood might be systematically codified to enhance communication between doctor and patient. To move towards this objective more research is needed to evaluate how patients interpret expressions of probability, and the relative effectiveness of different modes of communicating likelihood.
International Nuclear Information System (INIS)
Liu, Haoyang Haven; Lanphere, Jacob; Walker, Sharon; Cohen, Yoram
2015-01-01
The effect of hydration repulsion on the agglomeration of nanoparticles in aqueous suspensions was investigated by describing agglomeration with the Smoluchowski coagulation equation, using constant-number Monte Carlo simulation and the classical DLVO theory extended to include the hydration repulsion energy. Evaluation of experimental DLS measurements for TiO2, CeO2, SiO2, and α-Fe2O3 (hematite) at high ionic strength (IS, up to 900 mM) or low |ζ-potential| (≥1.35 mV) demonstrated that the hydration repulsion energy can exceed the electrostatic repulsion energy, such that the increased overall repulsion energy can significantly lower the agglomerate diameter relative to the classical DLVO prediction. While the classical DLVO theory is reasonably applicable to the agglomeration of NPs of high |ζ-potential| (∼>35 mV) in suspensions of low IS (∼<1 mM), it can overpredict agglomerate sizes by up to a factor of 5 at high IS or low |ζ-potential|. Given the potentially important role of hydration repulsion over a range of relevant conditions, there is merit in quantifying this repulsion energy over a wide range of conditions as part of the overall characterization of NP suspensions. Such information would be of relevance to improved understanding of NP agglomeration in aqueous suspensions and its correlation with NP physicochemical and solution properties.
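The extended-DLVO pair energy can be sketched for two equal spheres in the Derjaguin approximation: van der Waals attraction, electric double layer (EDL) repulsion, plus an exponentially decaying hydration term. All parameter values below (Hamaker constant, hydration coefficient and decay length, the 10 mV potential and 0.3 nm Debye length representing low |ζ| and high IS) are assumed for illustration, not taken from the paper.

```python
import numpy as np

kT = 1.380649e-23 * 298            # thermal energy [J]
eps = 78.5 * 8.854e-12             # permittivity of water [F/m]
R = 50e-9                          # particle radius [m]
A_H = 1e-20                        # Hamaker constant [J] (assumed)
zeta = 10e-3                       # low zeta potential [V] (assumed)
kappa = 1 / 0.3e-9                 # inverse Debye length at high IS [1/m]
C_hyd, lam = 0.03, 0.6e-9          # hydration coeff. [J/m^2], decay length [m] (assumed)

h = np.linspace(0.2e-9, 10e-9, 500)                 # surface separation
V_vdw = -A_H * R / (12 * h)                         # sphere-sphere van der Waals
V_edl = 2 * np.pi * eps * R * zeta**2 * np.exp(-kappa * h)  # EDL repulsion
V_hyd = np.pi * R * lam * C_hyd * np.exp(-h / lam)  # hydration repulsion (Derjaguin)

barrier_dlvo = (V_vdw + V_edl).max() / kT           # classical DLVO barrier [kT]
barrier_xdlvo = (V_vdw + V_edl + V_hyd).max() / kT  # extended-DLVO barrier [kT]
```

Under these conditions the classical DLVO barrier essentially vanishes (fast agglomeration), while the hydration term restores a substantial barrier, which is the mechanism by which the extended theory predicts smaller agglomerates than classical DLVO.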
Further Evaluation of a Brief, Intensive Teacher-Training Model
Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Strobel, Margaret; Garro, Joanie
2008-01-01
The purpose of this study was to further evaluate the outcomes of a model program that was designed to train current teachers of children with autism. Nine certified special education teachers participating in an intensive 5-day summer training program were taught a relatively large number of specific skills in two areas (preference assessment and…
Winahju, W. S.; Mukarromah, A.; Putri, S.
2015-03-01
Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 people (8.7% of the world total). This figure automatically places Indonesia as the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of both multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so the modeling is conducted using the bivariate Poisson regression method. The observational units are located in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the result indicates that all predictors have a significant influence.
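The regression machinery behind the abstract above can be sketched for a single count response. This is only the marginal building block of a bivariate Poisson model (the full model adds a shared covariance term linking the two counts), and the data, seed, and coefficient values below are simulated assumptions, not the East Java data.

```python
import numpy as np

# Simulate covariates and one Poisson-distributed count response.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5, -0.3])  # assumed, for illustration only
y = rng.poisson(np.exp(X @ beta_true))

# Fit a Poisson regression with log link by Newton-Raphson (IRLS).
beta = np.zeros(3)
for _ in range(25):
    mu = np.exp(X @ beta)               # current mean under the model
    grad = X.T @ (y - mu)               # score vector
    hess = X.T @ (X * mu[:, None])      # Fisher information
    beta = beta + np.linalg.solve(hess, grad)

print(beta)  # should land close to beta_true
```

A bivariate extension would fit both leprosy counts jointly, sharing a common latent Poisson term across the two responses.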
A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications
Grauer, Jared A.
2017-01-01
Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability for aircraft dynamic modeling applications. The first generator considered was the MATLAB® implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences of 601 random numbers each were collected for each generator and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
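The sequence-level checks described above (mean, variance, lag-1 autocorrelation of nominally Gaussian white noise) can be sketched as follows. The generator here is NumPy's MT19937 (Mersenne Twister), not the study's MATLAB implementation, and the tolerances are illustrative assumptions rather than the paper's acceptance criteria.

```python
import numpy as np

# 200 sequences of 601 samples each, mirroring the study's setup.
rng = np.random.Generator(np.random.MT19937(0))  # Mersenne-Twister backend
sequences = rng.standard_normal((200, 601))

means = sequences.mean(axis=1)
variances = sequences.var(axis=1, ddof=1)

def lag1_autocorr(x):
    """Lag-1 sample autocorrelation; ~0 for white noise."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

r1 = np.array([lag1_autocorr(s) for s in sequences])

# Aggregate diagnostics across all sequences: for unit-variance Gaussian
# white noise these should be near 0, 1, and 0 respectively.
print(means.mean(), variances.mean(), r1.mean())
```

Normality and power spectral density checks would follow the same pattern, e.g. via a quantile-quantile comparison and a periodogram averaged over the 200 sequences.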
Serfling, Robert; Ogola, Gerald
2016-02-10
Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
Rock mechanics models evaluation report: Draft report
International Nuclear Information System (INIS)
1985-10-01
This report documents the evaluation of the thermal and thermomechanical models and codes for repository subsurface design and for design constraint analysis. The evaluation was based on a survey of the thermal and thermomechanical codes and models that are applicable to subsurface design, followed by a Kepner-Tregoe (KT) structured decision analysis of the codes and models. The end result of the KT analysis is a balanced, documented recommendation of the codes and models which are best suited to conceptual subsurface design for the salt repository. The various laws for modeling the creep of rock salt are also reviewed in this report. 37 refs., 1 fig., 7 tabs
A new method to determine the number of experimental data using statistical modeling methods
Energy Technology Data Exchange (ETDEWEB)
Jung, Jung-Ho; Kang, Young-Jin; Lim, O-Kaung; Noh, Yoojeong [Pusan National University, Busan (Korea, Republic of)
2017-06-15
For analyzing the statistical performance of physical systems, statistical characteristics of physical parameters such as material properties need to be estimated by collecting experimental data. For accurate statistical modeling, many such experiments may be required, but data are usually quite limited owing to the cost and time constraints of experiments. In this study, a new method for determining a reasonable number of experimental data is proposed using an area metric, after obtaining statistical models using the information on the underlying distribution, the sequential statistical modeling (SSM) approach, and the kernel density estimation (KDE) approach. The area metric is used as a convergence criterion to determine the necessary and sufficient number of experimental data to be acquired. The proposed method is validated in simulations, using different statistical modeling methods, different true models, and different convergence criteria. An example data set with 29 data points describing the fatigue strength coefficient of SAE 950X is used for demonstrating the performance of the obtained statistical models that use a pre-determined number of experimental data in predicting the probability of failure for a target fatigue life.
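One way to picture the area-metric convergence criterion described above: compute the area between the empirical CDF of the collected data and the current statistical model, and stop acquiring data once that area stabilizes below a tolerance. The sketch below is a simplified illustration under assumed names and a known true distribution, not the paper's SSM/KDE procedure.

```python
import math

import numpy as np

def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def area_metric(samples, cdf):
    """Area between the empirical CDF of `samples` and a reference CDF,
    accumulated over the sorted sample points (rectangle rule)."""
    x = np.sort(samples)
    ecdf = np.arange(1, len(x) + 1) / len(x)
    gaps = np.abs(ecdf[:-1] - np.array([cdf(v) for v in x[:-1]]))
    return float(np.sum(gaps * np.diff(x)))

# As more data are collected, the metric is expected to shrink toward 0;
# a tolerance on it then fixes the number of experiments to run.
rng = np.random.default_rng(1)
areas = [area_metric(rng.standard_normal(n), normal_cdf) for n in (10, 100, 1000)]
print(areas)
```

In the paper's setting the reference CDF would itself be the fitted statistical model rather than a known distribution.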
Diffusion Processes in the A-Model of Vector Admixture: Turbulent Prandtl Number
Jurčišinová, Eva; Jurčišin, Marián; Remecky, Richard
2018-02-01
Using the analytical approach of the field-theoretic renormalization-group technique in the two-loop approximation, we model a fully developed turbulent system with vector characteristics driven by the stochastic Navier-Stokes equation. The behaviour of the turbulent Prandtl number PrA,t is investigated as a function of the parameter A and spatial dimension d > 2 for three cases, namely, kinematic MHD turbulence (A = 1), the admixture of a vector impurity by the Navier-Stokes turbulent flow (A = 0), and the model of the linearized Navier-Stokes equation (A = -1). It is shown that for A = -1 the turbulent Prandtl number is given already in the one-loop approximation and does not depend on d, while the turbulent Prandtl numbers in the first two cases show very similar behaviour as functions of the dimension d in the two-loop approximation.
Reduction of the number of parameters needed for a polynomial random regression test-day model
Pool, M.H.; Meuwissen, T.H.E.
2000-01-01
Legendre polynomials were used to describe the (co)variance matrix within a random regression test day model. The goodness of fit depended on the polynomial order of fit, i.e., number of parameters to be estimated per animal but is limited by computing capacity. Two aspects: incomplete lactation
Modelling the number of viable vegetative cells of Bacillus cereus passing through the stomach
Wijnands, L.M.; Pielaat, A.; Dufrenne, J.B.; Zwietering, M.H.; Leusden, van F.M.
2009-01-01
Aims: Model the number of viable vegetative cells of B. cereus surviving the gastric passage after experiments in simulated gastric conditions. Materials and Methods: The inactivation of stationary and exponential phase vegetative cells of twelve different strains of Bacillus cereus, both mesophilic
Dependence of the number of dealers in a stochastic dealer model
Yamada, Kenta; Takayasu, Hideki; Takayasu, Misako
2010-04-01
We numerically analyze an artificial market model consisting of N dealers with time-dependent stochastic strategies. Observing the change of market price statistics for different values of N, it is shown that the statistical properties are almost the same when the number of dealers is larger than about 30.
Deliyianni, Eleni; Gagatsis, Athanasios; Elia, Iliada; Panaoura, Areti
2016-01-01
The aim of this study was to propose and validate a structural model in fraction and decimal number addition, which is founded primarily on a synthesis of major theoretical approaches in the field of representations in Mathematics and also on previous research on the learning of fractions and decimals. The study was conducted among 1,701 primary…
Baryon number fluctuations in chiral effective models and their phenomenological implications
Almási, Gábor András; Friman, Bengt; Redlich, Krzysztof
2017-07-01
We study the critical properties of net-baryon-number fluctuations at the chiral restoration transition in a medium at finite temperature and net baryon density. The chiral dynamics of quantum chromodynamics (QCD) is modeled by the Polyakov-loop extended quark-meson Lagrangian, which includes the coupling of quarks to vector meson and temporal gauge fields. The functional renormalization group is employed to properly account for the O(4) criticality at the phase boundary. We focus on the properties and systematics of ratios of the net-baryon-number cumulants χ_n^B, for 1 ≤ n ≤ 6, near the phase boundary. The results are presented in the context of the recent experimental data of the STAR Collaboration on fluctuations of the net proton number in heavy-ion collisions at RHIC. We show that the model results for the energy dependence of the cumulant ratios are in good overall agreement with the data, with one exception. At center-of-mass energies below 19.6 GeV, we find that the measured fourth-order cumulant deviates considerably from the model results, which incorporate the expected O(4) and Z(2) criticality. We assess the influence of model assumptions, and in particular of repulsive vector interactions, which are used to modify the location of the critical end point in the model, on the cumulant ratios. Finally, we discuss a possibility to test to what extent the fluctuations are affected by nonequilibrium dynamics by comparing certain ratios of cumulants.
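A useful reference point for the cumulant ratios discussed above is the Skellam baseline for uncorrelated baryon/antibaryon production, where every even (odd) cumulant equals the sum (difference) of the two Poisson means. The sketch below estimates χ3/χ2 and χ4/χ2 from simulated samples with assumed means; it is a baseline illustration, not the paper's FRG model calculation.

```python
import numpy as np

# Net-baryon number as the difference of two independent Poisson yields
# (assumed means b and bbar); its distribution is Skellam, for which
# chi3/chi2 = (b - bbar)/(b + bbar) and chi4/chi2 = 1 exactly.
rng = np.random.default_rng(7)
b, bbar = 4.0, 1.5
net = rng.poisson(b, 2_000_000) - rng.poisson(bbar, 2_000_000)

# Cumulants from central moments: k2 = m2, k3 = m3, k4 = m4 - 3*m2^2.
c = net - net.mean()
k2 = np.mean(c**2)
k3 = np.mean(c**3)
k4 = np.mean(c**4) - 3.0 * k2**2

# Estimates should land near the Skellam values (b-bbar)/(b+bbar) ≈ 0.45 and 1;
# critical fluctuations show up as deviations from this baseline.
print(k3 / k2, k4 / k2)
```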
Net-baryon number fluctuations in the hybrid quark-meson-nucleon model at finite density
Marczenko, Michał; Sasaki, Chihiro
2018-02-01
We study the mean-field thermodynamics and the characteristics of the net-baryon number fluctuations at the phase boundaries for the chiral and deconfinement transitions in the hybrid quark-meson-nucleon model. The chiral dynamics is described in the linear sigma model, whereas the quark confinement is manipulated by a medium-dependent modification of the particle distribution functions, where an additional scalar field is introduced. At low temperature and finite baryon density, the model predicts a first-, second-order chiral phase transition, or a crossover, depending on the expectation value of the scalar field, and a first-order deconfinement phase transition. We focus on the influence of the confinement over higher-order cumulants of the net-baryon number density. We find that the cumulants show a substantial enhancement around the chiral phase transition; they are not as sensitive to the deconfinement transition.
Tan Rodney H. G.; Teow Matthew Y. W.
2016-01-01
This paper presents the evaluation of horizontal-axis wind turbine torque and mechanical power generation and their relation to the number of blades at a given wind speed. The relationships of wind turbine rotational frequency, tip speed, minimum wind speed, mechanical power, and torque to the number of blades are derived. The purpose of this study is to determine the wind energy extraction efficiency achieved for every increment of blade number. An effective factor is introduced to interpre...
Directory of Open Access Journals (Sweden)
Ardisa U. Pradita
2014-04-01
Green tea leaf (Camellia sinensis) is one of the herbal plants used in traditional medicine. Epigallocatechin gallate (EGCG) in green tea is the most potent polyphenol component and has the strongest biological activity. EGCG is known to have a potential effect on wound healing. Objective: This study aimed to determine the effect of adding green tea EGCG to a periodontal dressing on the number of fibroblasts after gingival artificial wounding in an animal model. Methods: A gingival artificial wound model was created using a 2 mm punch biopsy on 24 rabbits (Oryctolagus cuniculus). The animals were divided into two groups. Periodontal dressing with and without EGCG was applied to the experimental and control groups, respectively. Decapitation was scheduled at days 3, 5, and 7 after treatment. Histological analysis to count the number of fibroblasts was performed. Results: The number of fibroblasts increased significantly over time in the experimental group treated with the EGCG periodontal dressing compared to the control (p<0.05). Conclusion: EGCG periodontal dressing could increase the number of fibroblasts, and therefore has a role in wound healing after periodontal surgery in this animal model. DOI: 10.14693/jdi.v20i3.197
Individual model evaluation and probabilistic weighting of models
International Nuclear Information System (INIS)
Atwood, C.L.
1994-01-01
This note stresses the importance of trying to assess the accuracy of each model individually. Putting a Bayesian probability distribution on a population of models faces conceptual and practical complications, and apparently can come only after the work of evaluating the individual models. Moreover, the primary issue is "How good is this model?" Therefore, the individual evaluations are first in both chronology and importance. They are not easy, but some ideas are given here on how to perform them.
Numerical Calculations of Atmospheric Conditions over the Tibetan Plateau by Using the WRF Model
International Nuclear Information System (INIS)
Qian, Xuan; Yao, Yongqiang; Wang, Hongshuai; Liu, Liyong; Li, Junrong; Yin, Jia
2015-01-01
The wind field and precipitable water vapor are analyzed using the mesoscale numerical model WRF over the Tibetan Plateau, and aerosol is analyzed using the WRF-CHEM model. The spatial and vertical distributions of the relevant atmospheric factors are summarized, providing evidence for selecting and further evaluating an astronomical site. It has been shown that this method can provide a good evaluation of atmospheric conditions. This study serves as a further step towards astro-climate regionalization and provides an essential database for astronomical site surveys over the Tibetan Plateau. (paper)
Buela-Casal, Gualberto; Zych, Izabela
2010-05-01
The study analyzes the relationship between the number of citations as calculated by the IN-RECS database and the quality evaluated by experts. The articles published in journals of the Spanish Psychological Association between 1996 and 2008 and selected by the Editorial Board of Psychology in Spain were the subject of the study. Psychology in Spain is a journal that includes the best papers published throughout the previous year, chosen by the Editorial Board made up of fifty specialists of acknowledged prestige within Spanish psychology and translated into English. The number of the citations of the 140 original articles republished in Psychology in Spain was compared to the number of the citations of the 140 randomly selected articles. Additionally, the study searched for a relationship between the number of the articles selected from each journal and their mean number of citations. The number of citations received by the best articles as evaluated by experts is significantly higher than the number of citations of the randomly selected articles. Also, the number of citations is higher in the articles from the most frequently selected journals. A statistically significant relation between the quality evaluated by experts and the number of the citations was found.
[Evaluation model for municipal health planning management].
Berretta, Isabel Quint; Lacerda, Josimari Telino de; Calvo, Maria Cristina Marino
2011-11-01
This article presents an evaluation model for municipal health planning management. The basis was a methodological study using the health planning theoretical framework to construct the evaluation matrix, in addition to an understanding of the organization and functioning designed by the Planning System of the Unified National Health System (PlanejaSUS) and definition of responsibilities for the municipal level under the Health Management Pact. The indicators and measures were validated using the consensus technique with specialists in planning and evaluation. The applicability was tested in 271 municipalities (counties) in the State of Santa Catarina, Brazil, based on population size. The proposed model features two evaluative dimensions which reflect the municipal health administrator's commitment to planning: the guarantee of resources and the internal and external relations needed for developing the activities. The data were analyzed using indicators, sub-dimensions, and dimensions. The study concludes that the model is feasible and appropriate for evaluating municipal performance in health planning management.
An Evaluation Model of Digital Educational Resources
Directory of Open Access Journals (Sweden)
Abderrahim El Mhouti
2013-05-01
Today, the use of digital educational resources in teaching and learning is expanding considerably. Such expansion calls on educators and computer scientists to reflect more on the design of such products. However, this reflection exposes a number of criteria and recommendations that can guide and direct the design of any teaching tool, be it campus-based or online (e-learning). Our work is at the heart of this issue. Through this article, we suggest examining academic, pedagogical, didactic, and technical criteria to conduct this study, which aims to evaluate the quality of digital educational resources. Our approach consists in addressing the specific and relevant factors of each evaluation criterion. We then explain the detailed structure of the evaluation instrument used: the "evaluation grid". Finally, we show the evaluation outcomes based on the conceived grid and then establish an analytical evaluation of the state of the art of digital educational resources.
Evaluation of green house gas emissions models.
2014-11-01
The objective of the project is to evaluate the GHG emissions models used by transportation agencies and industry leaders. Factors in the vehicle : operating environment that may affect modal emissions, such as, external conditions, : vehicle fleet c...
Vogt, R. A.
1979-01-01
The application of using the Mission Planning and Analysis Division (MPAD) common-format trajectory data tape to predict temperatures for preflight and postflight mission analysis is presented and evaluated. All of the analyses utilized the latest Space Transportation System 1 flight (STS-1) MPAD trajectory tape and the simplified '136-node' midsection/payload bay thermal math model. For the first 6.7 hours of the STS-1 flight profile, transient temperatures are presented for selected nodal locations with the current standard method and the trajectory tape method. Whether the differences are considered significant or not depends upon the viewpoint. Other transient temperature predictions are also presented. These results were obtained to investigate an initial concern that the predicted temperature differences between the two methods would be caused not only by the inaccuracies of the current method's assumed nominal attitude profile but also by a lack of a sufficient number of orbit points in the current method. Comparison between 6-, 12-, and 24-orbit-point parameters showed a surprising insensitivity to the number of orbit points.
COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION
Lončarić, Zdenko; Vukobratović, Marija; Ragaly, Peter; Filep, Tibor; Popović, Brigita; Karalić, Krunoslav; Vukobratović, Želimir
2009-01-01
Evaluation of manures, composts and growing media quality should include enough properties to enable an optimal use from productivity and environmental points of view. The aim of this paper is to describe basic structure of organic fertilizer (and growing media) evaluation model to present the model example by comparison of different manures as well as example of using plant growth experiment for calculating impact of pH and EC of growing media on lettuce plant growth. The basic structure of ...
Yu, Cheng-He; Zhang, Ruo-Peng; Li, Juan; A, Zhou-Cun
2018-03-03
The aim of this study was to create a predictive model for high-quality blastocyst progression based on the traditional morphology parameters of embryos. A total of 1564 embryos from 234 women underwent conventional in vitro fertilization and were involved in the present study. High-quality blastocysts were defined as having a grade of at least 3BB, and all embryos were divided based on the development of high-quality blastocysts (group HQ) or the failure to develop high-quality blastocysts (group NHQ). A retrospective analysis of day-3 embryo parameters, focused on blastomere number, fragmentation, the presence of a vacuole, symmetry, and the presence of multinucleated blastomeres, was conducted. All parameters were related to high-quality blastocysts (p < 0.05). Parameters are indicated by s_bn (blastomere number), s_f (fragmentation), s_pv (presence of a vacuole), s_s (symmetry), and s_MNB (multinucleated blastomeres). Subsequently, univariate and multivariate logistic regression analyses were conducted to explore their relationship. In the multivariate logistic regression analysis, a predictive model was constructed, and a parameter Hc was created based on the s_bn, s_f, and s_s parameters and their corresponding odds ratios. The value of Hc in group HQ was significantly higher than that in group NHQ. A receiver operating characteristic curve was used to test the effectiveness of the model. An area under the curve of 0.790, with a 95% confidence interval of 0.766-0.813, was calculated. A dataset was used to validate the predictive utility of the model. Moreover, another dataset was used to ensure that the model can be applied to predict the implantation of day-3 embryos. A predictive model for high-quality blastocysts was created based on blastomere number, fragmentation, and symmetry. This model provides novel information on the selection of potential embryos.
A MICROCOMPUTER MODEL FOR IRRIGATION SYSTEM EVALUATION
Williams, Jeffery R.; Buller, Orlan H.; Dvorak, Gary J.; Manges, Harry L.
1988-01-01
ICEASE (Irrigation Cost Estimator and System Evaluator) is a microcomputer model designed and developed to meet the need for conducting economic evaluation of adjustments to irrigation systems and management techniques to improve the use of irrigated water. ICEASE can calculate the annual operating costs for irrigation systems and has five options that can be used to economically evaluate improvements in the pumping plant or the way the irrigation system is used for crop production.
Revamping the Teacher Evaluation Process. Education Policy Brief. Volume 9, Number 4, Fall 2011
Whiteman, Rodney S.; Shi, Dingjing; Plucker, Jonathan A.
2011-01-01
This policy brief explores Senate Enrolled Act 001 (SEA 1), specifically the provisions for how teachers must be evaluated. After a short summary of SEA 1 and its direct changes to evaluation policies and practices, the brief reviews literature in teacher evaluation and highlights important issues for school corporations to consider when selecting…
Universal reduction of effective coordination number in the quasi-one-dimensional Ising model
Todo, Synge
2006-09-01
Critical temperature of quasi-one-dimensional general-spin Ising ferromagnets is investigated by means of the cluster Monte Carlo method performed on infinite-length strips, L×∞ or L×L×∞. We find that in the weak interchain coupling regime the critical temperature as a function of the interchain coupling is well described by a chain mean-field formula with a reduced effective coordination number, as for the quantum Heisenberg antiferromagnets recently reported by Yasuda et al. [Phys. Rev. Lett. 94, 217201 (2005)]. It is also confirmed that the effective coordination number is independent of the spin size. We show that in the weak interchain coupling limit the effective coordination number is, irrespective of the spin size, rigorously given by the quantum critical point of a spin-1/2 transverse-field Ising model.
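The chain mean-field description referred to above can be written compactly. The notation below (intrachain coupling J, interchain coupling J', effective coordination number z_eff) is assumed for illustration rather than copied from the paper.

```latex
% Self-consistency condition fixing T_c in the chain mean-field picture:
% the interchain field, weighted by an effective coordination number,
% must be amplified by the single-chain susceptibility to unity.
\begin{equation}
  z_{\mathrm{eff}}\, J' \, \chi_{\mathrm{chain}}(T_c) = 1,
  \qquad
  \chi_{\mathrm{chain}}(T) \sim \frac{C}{T}\,
  \exp\!\left(\frac{2J}{T}\right)
  \quad \text{(spin-}\tfrac{1}{2}\text{ Ising chain)},
\end{equation}
```

With the exponentially diverging chain susceptibility, weakening J' lowers T_c only logarithmically, which is why the weak-coupling regime is numerically delicate.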
A new modeling and solution approach for the number partitioning problem
Directory of Open Access Journals (Sweden)
Bahram Alidaee
2005-01-01
The number partitioning problem has proven to be a challenging problem for both exact and heuristic solution methods. We present a new modeling and solution approach that consists of recasting the problem as an unconstrained quadratic binary program that can be solved by efficient metaheuristic methods. Our approach readily accommodates both the common two-subset partition case as well as the more general case of multiple subsets. Preliminary computational experience is presented illustrating the attractiveness of the method.
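The recasting described in the abstract can be sketched on a toy instance: with x ∈ {0,1}^n indicating subset membership, the squared subset-sum difference (2a·x − S)² expands to a quadratic binary form x'Qx + S², which a metaheuristic would then minimize. Brute-force enumeration stands in for the metaheuristic here, and the instance values are invented for illustration.

```python
import itertools

import numpy as np

# Toy instance: partition these numbers into two subsets with equal sums.
a = np.array([4, 7, 13, 1, 5, 8])
S = int(a.sum())

# QUBO matrix for (2*a.x - S)^2 = x'Qx + S^2, using x_i^2 = x_i:
Q = 4 * np.outer(a, a)
np.fill_diagonal(Q, 4 * a**2 - 4 * S * a)

# Brute force over all 2^n binary vectors (a metaheuristic such as tabu
# search replaces this loop at realistic problem sizes).
best = min(
    (np.array(bits) for bits in itertools.product((0, 1), repeat=len(a))),
    key=lambda x: x @ Q @ x,
)
diff_sq = int(best @ Q @ best + S**2)
print(diff_sq)  # squared difference between the two subset sums; 0 here
```

The multi-subset generalization follows the same expansion with one binary variable per (number, subset) pair plus penalty terms enforcing that each number is assigned exactly once.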
A continuous-index hidden Markov jump process for modeling DNA copy number data.
Stjernqvist, Susann; Rydén, Tobias
2009-10-01
The number of copies of DNA in human cells can be measured using array comparative genomic hybridization (aCGH), which provides intensity ratios of sample to reference DNA at genomic locations corresponding to probes on a microarray. In the present paper, we devise a statistical model, based on a latent continuous-index Markov jump process, that aims to capture certain features of aCGH data, including probes that are unevenly long, unevenly spaced, and overlapping. The model has a continuous state space, with one state representing a normal copy number of 2, and the rest of the states being either amplifications or deletions. We adopt a Bayesian approach and apply Markov chain Monte Carlo (MCMC) methods for estimating the parameters and the Markov process. The model can be applied to data from both tiling bacterial artificial chromosome arrays and oligonucleotide arrays. We also compare a model with normally distributed noise to a model with t-distributed noise, showing that the latter is more robust to outliers.
Advanced Daily Prediction Model for National Suicide Numbers with Social Media Data.
Lee, Kyung Sang; Lee, Hyewon; Myung, Woojae; Song, Gil-Young; Lee, Kihwang; Kim, Ho; Carroll, Bernard J; Kim, Doh Kwan
2018-04-01
Suicide is a significant public health concern worldwide. Social media data have a potential role in identifying high suicide risk individuals and also in predicting suicide rate at the population level. In this study, we report an advanced daily suicide prediction model using social media data combined with economic/meteorological variables along with observed suicide data lagged by 1 week. The social media data were drawn from weblog posts. We examined a total of 10,035 social media keywords for suicide prediction. We made predictions of national suicide numbers 7 days in advance daily for 2 years, based on a daily moving 5-year prediction modeling period. Our model predicted the likely range of daily national suicide numbers with 82.9% accuracy. Among the social media variables, words denoting economic issues and mood status showed high predictive strength. Observed number of suicides one week previously, recent celebrity suicide, and day of week followed by stock index, consumer price index, and sunlight duration 7 days before the target date were notable predictors along with the social media variables. These results strengthen the case for social media data to supplement classical social/economic/climatic data in forecasting national suicide events.
Evaluation of constitutive models for crushed salt
Energy Technology Data Exchange (ETDEWEB)
Callahan, G.D.; Loken, M.C. [RE/SPEC, Inc., Rapid City, SD (United States); Hurtado, L.D.; Hansen, F.D.
1996-05-01
Three constitutive models are recommended as candidates for describing the deformation of crushed salt. These models are generalized to three-dimensional states of stress to include the effects of mean and deviatoric stress and modified to include effects of temperature, grain size, and moisture content. A database including hydrostatic consolidation and shear consolidation tests conducted on Waste Isolation Pilot Plant (WIPP) and southeastern New Mexico salt is used to determine material parameters for the models. To evaluate the capability of the models, parameter values obtained from fitting the complete database are used to predict the individual tests. Finite element calculations of a WIPP shaft with emplaced crushed salt demonstrate the model predictions.
CSIR Research Space (South Africa)
Dorasamy, K
2015-09-01
Directional Patterns, which are formed by grouping regions of orientation fields falling within a specific range, vary under rotation and with the number of regions. For fingerprint classification schemes, this can result in misclassification due...
Determination model for cetane number of biodiesel at different fatty acid composition: a review
Directory of Open Access Journals (Sweden)
Michal Angelovič
2014-05-01
The most accepted definition of biodiesel is stated in the EU technical regulation EN 14214 (2008) or, in the USA, in ASTM 6751-02. As a result of this highly strict description, only methyl esters of fatty acids conform to these definitions; nevertheless, the term "biodiesel" is extended to other alkyl fatty esters. Some countries have adopted bioethanol to replace methanol in biodiesel transesterification, thus assuring a fully biological fuel. Of course, such a position brings some problems in fulfilling the technical requirements of EN 14214 or ASTM 6751-02. Biodiesel is actually a less complex mixture than petrodiesel, but different feedstock origins and the effect of seasonality may impose difficulties in fuel quality control. Since biodiesel is an alternative diesel fuel derived from the transesterification of triacylglycerol-comprising materials, such as vegetable oils or animal fats, with simple alcohols to furnish the corresponding mono-alkyl esters, its composition depends on the raw material used, the cultivation area location, and the harvest time. The choice of the raw material is usually the most important factor in fluctuations of biodiesel composition, because different vegetable oils and animal fats may contain different types of fatty acids. Important properties of this fuel vary significantly with the composition of the mixture. Cetane number, melting point, degree of saturation, density, cloud point, pour point, viscosity, and nitrogen oxides exhaust emission (NOx), for instance, deserve to be mentioned. One of the most important fuel quality indicators is the cetane number; however, its experimental determination may be an expensive and lengthy task. To worsen the situation concerning biodiesel, the availability of data in the literature is also scarce. In such a scenario, the use of reliable models to predict the cetane number or any other essential characteristic may be of great utility. We reviewed available literature to
Modeling for Green Supply Chain Evaluation
Directory of Open Access Journals (Sweden)
Elham Falatoonitoosi
2013-01-01
Full Text Available Green supply chain management (GSCM) has become a practical approach to improving environmental performance. Under strict regulations and stakeholder pressures, enterprises need to enhance and improve GSCM practices, which are influenced by both traditional and green factors. This study developed a causal evaluation model to guide the selection of qualified suppliers by prioritizing various criteria and mapping causal relationships to find effective criteria for improving the green supply chain. The aim of the case study was to model and examine the influential and important main GSCM practices, namely, green logistics, organizational performance, green organizational activities, environmental protection, and green supplier evaluation. In the case study, the decision-making trial and evaluation laboratory (DEMATEL) technique is applied to test the developed model. The result of the case study shows that only the “green supplier evaluation” and “green organizational activities” criteria of the model are in the cause group, while the other criteria are in the effect group.
Evaluation of models in performance assessment
International Nuclear Information System (INIS)
Dormuth, K.W.
1993-01-01
The reliability of models used for performance assessment for high-level waste repositories is a key factor in making decisions regarding the management of high-level waste. Model reliability may be viewed as a measure of the confidence that regulators and others have in the use of these models to provide information for decision making. The degree of reliability required for the models will increase as implementation of disposal proceeds and decisions become increasingly important to safety. Evaluation of the models by using observations of real systems provides information that assists the assessment analysts and reviewers in establishing confidence in the conclusions reached in the assessment. A continuing process of model calibration, evaluation, and refinement should lead to increasing reliability of models as implementation proceeds. However, uncertainty in the model predictions cannot be eliminated, so decisions will always be made under some uncertainty. Examples from the Canadian program illustrate the process of model evaluation using observations of real systems and its relationship to performance assessment. 21 refs., 2 figs
Radcliffe, Susan; Novak, Virginia E.
As part of an internal marketing effort, a study was conducted at Howard Community College (HCC) to determine employees' evaluation of key educational services provided by the college. All full-time faculty, administrators, and support staff were asked to evaluate 13 areas of service on a scale of 1 (poor) to 5 (excellent) and to identify HCC's…
Multi-criteria evaluation of hydrological models
Rakovec, Oldrich; Clark, Martyn; Weerts, Albrecht; Hill, Mary; Teuling, Ryan; Uijlenhoet, Remko
2013-04-01
Over the last years, there has been a tendency in the hydrological community to move from simple conceptual models towards more complex, physically/process-based hydrological models. This is because conceptual models often fail to simulate the dynamics of the observations. However, there is little agreement on how much complexity needs to be considered within the complex process-based models. One way to proceed is to improve understanding of what is important and unimportant in the models considered. The aim of this ongoing study is to evaluate structural model adequacy using alternative conceptual and process-based models of hydrological systems, with an emphasis on understanding how model complexity relates to observed hydrological processes. Some of the models require considerable execution time, and computationally frugal sensitivity analysis, model calibration and uncertainty quantification methods are well-suited to providing important insights for models with lengthy execution times. The current experiment evaluates two versions of the Framework for Understanding Structural Errors (FUSE), both of which enable running model inter-comparison experiments. One supports computationally efficient conceptual models, and the second supports more process-based models that tend to have longer execution times. The conceptual FUSE combines components of 4 existing conceptual hydrological models. The process-based framework consists of different forms of Richards' equation, numerical solutions, groundwater parameterizations and hydraulic conductivity distributions. The hydrological analysis of the model processes has evolved from focusing only on simulated runoff (final model output) to also including other criteria such as soil moisture and groundwater levels. Parameter importance and associated structural importance are evaluated using different types of sensitivity analysis techniques, making use of both robust global methods (e.g. Sobol') as well as several
Saphire models and software for ASP evaluations
International Nuclear Information System (INIS)
Sattison, M.B.
1997-01-01
The Idaho National Engineering Laboratory (INEL) has over the past three years created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented
Directory of Open Access Journals (Sweden)
Ari Bintari
2017-12-01
Full Text Available Low mathematics learning outcomes and low student engagement are due to the lack of an effective learning model for optimizing students' ability and motivation. The problem formulated in this research is whether the Numbered Heads Together (NHT) learning model, with the help of the discovery method, is effective for the learning results of third-grade students. This study aims to determine the effectiveness of the model, with the help of the discovery method, for third-grade mathematics learning at SDN Mewek. A pre-experimental method was used in this research, with tests, a documentation study, and observation sheets as its instruments. Based on the results of the final analysis, the percentages of students' learning mastery show that, when learning without the NHT model, 5 students completed (25%) and 15 students did not complete (75%). After being given the treatment (posttest) using the NHT learning model and pizza fraction media, 16 students completed (80%) and 4 students did not complete (20%). This is reinforced by a t-test showing that students' achievement increased significantly with the NHT model.
Modeling Energy and Development : An Evaluation of Models and Concepts
Ruijven, Bas van; Urban, Frauke; Benders, René M.J.; Moll, Henri C.; Sluijs, Jeroen P. van der; Vries, Bert de; Vuuren, Detlef P. van
2008-01-01
Most global energy models are developed by institutes from developed countries, focusing primarily on issues that are important in industrialized countries. Evaluation of the results for Asia of the IPCC/SRES models shows that broad concepts of energy and development, the energy ladder and the
Directory of Open Access Journals (Sweden)
Nicholas J. Sexton
2014-07-01
Full Text Available Random number generation (RNG) is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage), the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behaviour (e.g., counting) increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control ('executive') processes. We present a computational model of RNG within the framework of a novel, neuropsychologically inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes.
Evaluation of R and D, volume 1 number 1 Fall 1992
International Nuclear Information System (INIS)
1992-01-01
A newsletter on the evaluation of research and development in Canada. It is published every four months. This issue has information on a variety of topics including a new database for NSERC research grants available, national research and development expenditure targets, an assessment of Canada's biotechnology programs, the Manufacturing Research Corporation of Ontario assesses the research and development needs of industry plus a summary of the May 1992 Conference of the Canadian Evaluation Society
Directory of Open Access Journals (Sweden)
Nasim Karimi
2016-12-01
Conclusion: According to the results of this study, it can be concluded that occupational factors are associated with the number of MSDs developing among carpet weavers. Thus, using standard tools and decreasing hours of work per day can reduce the frequency of MSDs among carpet weavers.
Instanton-mediated baryon number violation in non-universal gauge extended models
Fuentes-Martín, J.; Portolés, J.; Ruiz-Femenía, P.
2015-01-01
Instanton solutions of non-abelian Yang-Mills theories generate an effective action that may induce lepton and baryon number violations, namely ΔB = ΔL = n_f, where n_f is the number of families coupled to the gauge group. In this article we study instanton-mediated processes in a SU(2)_ℓ ⊗ SU(2)_h ⊗ U(1) extension of the Standard Model that breaks universality by singling out the third family. In the construction of the instanton Green functions we account systematically for the inter-family mixing. This allows us to use the experimental bounds on proton decay to constrain the gauge coupling of SU(2)_h. Tau lepton non-leptonic and radiative decays with ΔB = ΔL = 1 are also analysed.
[Evaluation of the Dresden Tympanoplasty Model (DTM)].
Beleites, T; Neudert, M; Lasurashvili, N; Kemper, M; Offergeld, C; Hofmann, G; Zahnert, T
2011-11-01
The training of microsurgical motor skills is essential for surgical education if the interests of the patient are to be safeguarded. In otosurgery, the complex anatomy of the temporal bone and its variations necessitate special training before performing surgery on a patient. We therefore developed and evaluated a simplified middle ear model for acquiring first microsurgical skills in tympanoplasty. The simplified tympanoplasty model consists of the outer ear canal and a tympanic cavity. A stapes model is placed in projection of the upper posterior tympanic membrane quadrant at the medial wall of the simulated tympanic cavity. To imitate the flexibility of the annular ligament, the stapes is fixed on a soft plastic pad. 41 subjects evaluated the model's anatomical analogy, its comparability to the real surgical situation and the general model properties using a special questionnaire. The tympanoplasty model was rated very highly by all participants. It is a reasonably priced model and a useful tool in microsurgical skills training. Thereby, it closes the gap between theoretical training and real operating conditions. © Georg Thieme Verlag KG Stuttgart · New York.
Directory of Open Access Journals (Sweden)
Tan Rodney H. G.
2016-01-01
Full Text Available This paper presents the evaluation of horizontal-axis wind turbine torque and mechanical power generation and their relation to the number of blades at a given wind speed. The relationships of wind turbine rotational frequency, tip speed, minimum wind speed, mechanical power and torque to the number of blades are derived. The purpose of this study is to determine the wind energy extraction efficiency achieved for every increment in blade number. An effective factor is introduced to interpret the effectiveness of the wind turbine in extracting wind energy below and above the minimum wind speed for a given number of blades. An improvement factor is introduced to indicate the improvement achieved for every increment in blades. The evaluation was performed with wind turbines of 1 to 6 blades. The evaluation results show that the higher the number of blades, the lower the minimum wind speed needed to achieve a unity effective factor. High improvement factors are achieved for the 1-to-2 and 2-to-3 blade increments. This contributes to a better understanding and determination of the choice of the number of blades for wind turbine design.
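The abstract does not reproduce the derived relations, but the standard horizontal-axis rotor relations they build on can be sketched as follows. This is a minimal illustration using the usual actuator-disc quantities; the tip-speed ratio and power coefficient values below are assumptions for the example, not figures from the paper.

```python
import math

def rotor_relations(v_wind, radius, tsr, cp, rho=1.225):
    """Standard horizontal-axis wind turbine relations.

    v_wind : wind speed [m/s]
    radius : rotor radius [m]
    tsr    : tip-speed ratio (blade tip speed / wind speed);
             typical design values fall as blade number rises
    cp     : power coefficient (Betz limit is 16/27 ~ 0.593)
    rho    : air density [kg/m^3]
    """
    tip_speed = tsr * v_wind                       # m/s
    rot_freq = tip_speed / (2 * math.pi * radius)  # rotations per second
    p_wind = 0.5 * rho * math.pi * radius**2 * v_wind**3  # power in the wind
    p_mech = cp * p_wind                           # extracted mechanical power
    torque = p_mech / (tip_speed / radius)         # from P = T * omega
    return tip_speed, rot_freq, p_mech, torque

# Illustrative values: 2 m radius rotor in an 8 m/s wind, tsr 7, cp 0.4
tip, f, p, t = rotor_relations(8.0, 2.0, 7.0, 0.4)
```

Because torque scales as P/ω, increasing blade number (which lowers the design tip-speed ratio) trades rotational speed for torque at the same extracted power, which is the trade-off the effective and improvement factors quantify.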
A model for estimating the minimum number of offspring to sample in studies of reproductive success.
Anderson, Joseph H; Ward, Eric J; Carlson, Stephanie M
2011-01-01
Molecular parentage permits studies of selection and evolution in fecund species with cryptic mating systems, such as fish, amphibians, and insects. However, there exists no method for estimating the number of offspring that must be assigned parentage to achieve robust estimates of reproductive success when only a fraction of offspring can be sampled. We constructed a 2-stage model that first estimated the mean (μ) and variance (v) in reproductive success from published studies on salmonid fishes and then sampled offspring from reproductive success distributions simulated from the μ and v estimates. Results provided strong support for modeling salmonid reproductive success via the negative binomial distribution and suggested that few offspring samples are needed to reject the null hypothesis of uniform offspring production. However, the sampled reproductive success distributions deviated significantly (χ² goodness-of-fit test p value < 0.05) from the true reproductive success distribution at rates often > 0.05 and as high as 0.24, even when hundreds of offspring were assigned parentage. In general, reproductive success patterns were less accurate when offspring were sampled from cohorts with larger numbers of parents and greater variance in reproductive success. Our model can be reparameterized with data from other species and will aid researchers in planning reproductive success studies by providing explicit sampling targets required to accurately assess reproductive success.
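The second stage of the abstract's approach can be sketched as a small simulation: draw per-parent offspring counts from a negative binomial matched to a given mean μ and variance v, then subsample offspring as parentage assignment would. The parameterization and example values (μ = 5, v = 40, 50 parents) are our illustrative assumptions, not the paper's fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_parentage(mu, var, n_parents, n_offspring_sampled):
    """Draw per-parent offspring counts from a negative binomial with
    mean mu and variance var (var > mu), then sample a subset of the
    pooled offspring and count how many are assigned to each parent."""
    # NumPy's NB(n, p) has mean n(1-p)/p; match moments:
    n = mu**2 / (var - mu)
    p = mu / var
    counts = rng.negative_binomial(n, p, size=n_parents)  # true reproductive success
    # pool all offspring, tagged by parent, and sample without replacement
    parents = np.repeat(np.arange(n_parents), counts)
    k = min(n_offspring_sampled, counts.sum())
    sampled = rng.choice(parents, size=k, replace=False)
    return counts, np.bincount(sampled, minlength=n_parents)

true_rs, sampled_rs = simulate_parentage(mu=5.0, var=40.0,
                                         n_parents=50, n_offspring_sampled=100)
```

Comparing `sampled_rs` against `true_rs` (e.g. with a χ² goodness-of-fit test) over many replicates is the kind of experiment the abstract describes for choosing a sampling target.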
CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE
Directory of Open Access Journals (Sweden)
José Luis Bernal Agudo
2015-06-01
Full Text Available The evaluation model that the LOMCE projects sinks its roots into neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation at the center of the teaching-learning processes. It presents flawed planning, since the theory that justifies the model is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.
Patient specific respiratory motion modeling using a limited number of 3D lung CT images.
Cui, Xueli; Gao, Xin; Xia, Wei; Liu, Yangchuan; Liang, Zhiyuan
2014-01-01
To build a patient-specific respiratory motion model with a low dose, a novel method was proposed that uses a limited number of 3D lung CT volumes with an external respiratory signal. 4D lung CT volumes were acquired for patients with in vitro labeling on the upper abdominal surface. Meanwhile, 3D coordinates of the in vitro labels were measured as external respiratory signals. A sequential correspondence between the 4D lung CT and the external respiratory signal was built using the distance correlation method, and a 3D displacement for every registration control point in the CT volumes with respect to time can be obtained by 4D lung CT deformable registration. A temporal fitting was performed for every registration control point displacement and for the external respiratory signal in the anterior-posterior direction to draw their fitting curves. Finally, a linear regression was used to fit the corresponding samples of the control point displacement fitting curves and the external respiratory signal fitting curve to complete the pulmonary respiration model. Compared to a B-spline-based method using the respiratory signal phase, the proposed method is highly advantageous as it offers comparable modeling accuracy and target modeling error (TME) while requiring 70% fewer 3D lung CTs. When using a similar amount of 3D lung CT data, the mean TME of the proposed method is smaller than the mean TMEs of the PCA (principal component analysis)-based methods. The results indicate that the proposed method is successful in striking a balance between modeling accuracy and the number of 3D lung CT volumes.
Directory of Open Access Journals (Sweden)
Subur Riyono
2016-12-01
Full Text Available The Implementation of the Numbered Heads Together (NHT) Learning Model to Enhance Students' Active Role in Learning the Brake System. A thesis of the Machine Engineering Education Study Program, Faculty of Teacher Training and Education, Sarjanawiyata Tamansiswa University Yogyakarta, 2016. This research is action research comprising three cycles. Each cycle consists of four stages: 1. Planning, 2. Implementing, 3. Observing, and 4. Reflecting. In collecting the data, the researcher applied tests, observation, and documents. The technique used in analyzing the observation sheets and tests is quantitative descriptive. The results of this research showed that the implementation of the Numbered Heads Together (NHT) learning model enhanced both the students' active role in learning and the students' learning results for the brake system subject in each cycle. This is proved by the increasing results of the observation sheets of the students' active role in learning: from 44.57% in the first cycle, an increase of 16.57% to 61.14% in the second cycle, and an increase of 25.57% to 86.71% in the third cycle. Furthermore, the learning result test of the first cycle gave an average pre-test grade of 60.71% and an average post-test grade of 69.57%, so the learning result increased by 8.86%; the second cycle gave an average pre-test grade of 62.28% and an average post-test grade of 75.42%, an increase of 13.14%; and in the test of the third cycle, the average pre-test was 65.14% and the average post-test 83.42%. Based on the research findings, it can be concluded that the implementation of the Numbered Heads Together (NHT) learning model can enhance the students' active role in learning as well as the students' learning results for the brake system.
Development of KAERI LBLOCA realistic evaluation model
International Nuclear Information System (INIS)
Lee, W.J.; Lee, Y.J.; Chung, B.D.; Lee, S.Y.
1994-01-01
A realistic evaluation model (REM) for LBLOCA licensing calculations is developed and proposed for application to pressurized light water reactors. The developmental aim of the KAERI-REM is to provide a systematic methodology, simple in structure and easy to use and built upon sound logical reasoning, for improving the code capability to realistically describe LBLOCA phenomena and for evaluating the associated uncertainties. The method strives to be faithful to the intention of being best-estimate; that is, it aims to evaluate the best-estimate values and the associated uncertainties while complying with the requirements in the ECCS regulations. (author)
Study on team evaluation. Team process model for team evaluation
International Nuclear Information System (INIS)
Sasou Kunihide; Ebisu, Mitsuhiro; Hirose, Ayako
2004-01-01
Several studies have been done to evaluate or improve team performance in the nuclear and aviation industries. Crew resource management is a typical example. In addition, team evaluation has recently gathered interest for other teams of lawyers, medical staff, accountants, psychiatrists, executives, etc. However, most evaluation methods focus on the results of team behavior that can be observed through training or actual business situations. What is expected of a team is not only resolving problems but also training younger members destined to lead the next generation. Therefore, the authors set the final goal of this study as establishing a series of methods to evaluate and improve teams inclusively, covering decision making, motivation, staffing, etc. As the first step, this study develops a team process model describing viewpoints for the evaluation. The team process is defined as the kinds of power that activate or inactivate the competencies of individuals, which are the components of a team's competency. To find the team processes, the authors discussed the merits of team behavior with experienced training instructors and shift supervisors of nuclear/thermal power plants. The discussion found four team merits and many components that realize those team merits. Classifying those components into eight groups of team processes, namely 'Orientation', 'Decision Making', 'Power and Responsibility', 'Workload Management', 'Professional Trust', 'Motivation', 'Training' and 'Staffing', the authors propose a Team Process Model with two to four sub-processes in each team process. In the future, the authors will develop methods to evaluate some of the team processes for nuclear/thermal power plant operation teams. (author)
Evaluation in the resonance range of nuclei with a mass number above 220
International Nuclear Information System (INIS)
Ribon, P.
1970-01-01
The author discusses the problems posed by the evaluation of neutron data for fissile or fertile nuclei in the range of resolved or unresolved resonances. It appears to take several years until the data of an experiment are used by reactor physicists. If one wants to have recent data at one's disposal, one cannot have recourse to evaluated-data libraries. Moreover, the existing parameter sets are only fragmentary. A new evaluation is therefore necessary for nearly all of these nuclei, but it cannot be based upon different parameter sets; these are indeed contradictory, and the evaluator will have to go back to the original data. The author shows, for the set of σ_f data of ²³⁵U, that a careful comparison of the data reveals unsuspected local defects. Some examples illustrate the deviations between analyses carried out by different methods and the divergences established between the results. The parameters and cross-sections are far from being known with the precision one would desire. This fact gives rise to anomalies in the interpretation of data necessary for understanding and simulation in the range of unresolved resonances. However, the introduction of concepts connected with sub-threshold fission noticeably furthers this understanding. Therefore a comparison of the methods of analysis must be made on more and more accurate measurements (evaluation and correction of systematic errors). (author) [fr
Experimental Models for Evaluation of Nanoparticles in Cancer Therapy.
Kesharwani, Prashant; Ghanghoria, Raksha; Jain, Narendra K
2017-01-01
Nanoparticles (NPs), submicron-sized colloidal particles, have recently generated enormous interest among biomedical scientists, particularly in cancer therapy. A number of models are being used for exploring the safety and efficacy of NPs. Recently, cancer cell lines have been explored as prominent experimental models for evaluating pharmacokinetic parameters, cell viability, cytotoxicity and drug efficacy in tumor cells. This review aims at a thorough compilation of various cancer cell lines and in vivo models for evaluation of the efficacy of NPs on one platform. This will provide a basis to explore and improve pre-clinical models as a prelude to successful cancer research. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Number-conserving interacting fermion models with exact topological superconducting ground states
Wang, Zhiyuan; Xu, Youjiang; Pu, Han; Hazzard, Kaden R. A.
2017-09-01
We present a method to construct number-conserving Hamiltonians whose ground states exactly reproduce an arbitrarily chosen BCS-type mean-field state. Such parent Hamiltonians can be constructed not only for the usual s-wave BCS state, but also for more exotic states of this form, including the ground states of Kitaev wires and two-dimensional topological superconductors. This method leads to infinite families of locally interacting fermion models with exact topological superconducting ground states. After explaining the general technique, we apply this method to construct two specific classes of models. The first one is a one-dimensional double-wire lattice model with Majorana-like degenerate ground states. The second one is a two-dimensional p_x + ip_y superconducting model, where we also obtain analytic expressions for topologically degenerate ground states in the presence of vortices. Our models may provide a deeper conceptual understanding of how Majorana zero modes could emerge in condensed matter systems, as well as inspire novel routes to realize them in experiment.
Equivalent Alkane Carbon Number of Live Crude Oil: A Predictive Model Based on Thermodynamics
Directory of Open Access Journals (Sweden)
Creton Benoit
2016-09-01
Full Text Available We took advantage of recently published works and new experimental data to propose a model for the prediction of the Equivalent Alkane Carbon Number of live crude oil (EACNlo) for EOR processes. The model requires a priori knowledge of reservoir pressure and temperature conditions as well as the initial gas to oil ratio. Additionally, some required volumetric properties of hydrocarbons were predicted using an equation of state. The model has been validated both on our own experimental data and on data from the literature. These various case studies cover broad ranges of conditions in terms of API gravity index, gas to oil ratio, reservoir pressure and temperature, and composition of representative gas. The predicted EACNlo values reasonably agree with experimental EACN values, i.e. those determined by comparison with salinity scans for a series of n-alkanes from nC8 to nC18. The model has been used to generate high pressure high temperature data, showing competing effects of the gas to oil ratio, pressure and temperature. The proposed model allows one to strongly narrow down the spectrum of possibilities in terms of EACNlo values, and thus enables a more rational use of equipment.
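The abstract does not give its equations, but EACN modeling conventionally starts from the classical linear (molar-average) mixing rule for mixtures. The sketch below shows that textbook rule only; the paper's live-oil model goes further, accounting for dissolved gas, pressure and temperature, so this is an assumption-laden illustration rather than the proposed model.

```python
def eacn_mixture(mole_fractions, eacn_values):
    """Classical linear (molar-average) mixing rule:
    EACN_mix = sum_i x_i * EACN_i.
    For n-alkanes, EACN equals the carbon number (nC10 -> 10)."""
    if abs(sum(mole_fractions) - 1.0) > 1e-9:
        raise ValueError("mole fractions must sum to 1")
    return sum(x * e for x, e in zip(mole_fractions, eacn_values))

# Example: a 60/40 molar mix of n-decane (EACN 10) and n-hexadecane (EACN 16)
print(eacn_mixture([0.6, 0.4], [10, 16]))  # → 12.4
```

The live-oil extension essentially amounts to including the gas-derived light components (with low or negative effective EACN contributions) in such an average, which is why the gas to oil ratio competes with pressure and temperature in the abstract's results.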
Econometric Evaluation of Asset Pricing Models
Lars Peter Hansen; John Heaton; Erzo Luttmer
1993-01-01
In this article we provide econometric tools for the evaluation of intertemporal asset pricing models using specification-error and volatility bounds. We formulate analog estimators of these bounds, give conditions for consistency, and derive the limiting distribution of these estimators. The analysis incorporates market frictions such as short-sale constraints and proportional transactions costs. Among several applications we show how to use the methods to assess specific asset pricing model...
Novel methods for evaluation of the Reynolds number of synthetic jets
Czech Academy of Sciences Publication Activity Database
Kordík, Jozef; Broučková, Zuzana; Vít, T.; Pavelka, Miroslav; Trávníček, Zdeněk
2014-01-01
Roč. 55, č. 6 (2014), 1757_1-1757_16 ISSN 0723-4864 R&D Projects: GA ČR GPP101/12/P556 Institutional support: RVO:61388998 Keywords : synthetic jet * synthetic jet actuator * Reynolds number Subject RIV: BK - Fluid Dynamics Impact factor: 1.670, year: 2014 http://link.springer.com/article/10.1007%2Fs00348-014-1757-x
PERFORMANCE EVALUATION OF EMPIRICAL MODELS FOR VENTED LEAN HYDROGEN EXPLOSIONS
Anubhav Sinha; Vendra C. Madhav Rao; Jennifer X. Wen
2017-01-01
Explosion venting is a method commonly used to prevent or minimize damage to an enclosure caused by an accidental explosion. An estimate of the maximum overpressure generated through an explosion is an important parameter in the design of the vents. Various engineering models (Bauwens et al., 2012; Molkov and Bragin, 2015) and European (EN 14994) and USA standards (NFPA 68) are available to predict such overpressure. In this study, their performance is evaluated using a number of published exper...
Evaluation of a Mysis bioenergetics model
Chipps, S.R.; Bennett, D.H.
2002-01-01
Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
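The "decomposition of mean square error" mentioned above is commonly done with a Theil-style partition into mean-bias, slope, and random components; the fraction in the random component is what the abstract reports as 70%. The sketch below is that standard partition, not necessarily the exact decomposition the authors used, and the example data are invented.

```python
import statistics

def mse_decomposition(observed, predicted):
    """Theil-style partition of mean squared error into bias, slope and
    random components (the fractions sum to 1). Uses population statistics
    so that the identity MSE = bias + slope + random holds exactly."""
    n = len(observed)
    mo, mp = statistics.fmean(observed), statistics.fmean(predicted)
    so, sp = statistics.pstdev(observed), statistics.pstdev(predicted)
    r = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted)) / (n * so * sp)
    mse = sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n
    bias = (mp - mo) ** 2                # systematic offset
    slope = (sp - r * so) ** 2           # systematic slope error
    random_err = (1 - r ** 2) * so ** 2  # unexplained (random) error
    return {k: v / mse for k, v in
            {"bias": bias, "slope": slope, "random": random_err}.items()}

# Invented example: four observed/predicted growth values
parts = mse_decomposition([1.0, 2.0, 3.0, 4.0], [1.2, 1.9, 3.3, 4.0])
```

A large `random` fraction, as reported in the abstract, indicates the model errors are mostly noise rather than a correctable systematic bias.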
Evaluating the AS-level Internet models: beyond topological characteristics
International Nuclear Information System (INIS)
Fan Zheng-Ping
2012-01-01
A surge of models has been proposed to model the Internet in the past decades. However, which models best represent the Internet remains an open problem. By analysing the evolving dynamics of the Internet, we suggest that at the autonomous system (AS) level, a suitable Internet model should at least be heterogeneous and have a linearly growing mechanism. More importantly, we show that the role of topological characteristics in evaluating and differentiating Internet models is apparently over-estimated from an engineering perspective. Also, we find that an assortative network is not necessarily more robust than a disassortative network and that a smaller average shortest path length does not necessarily mean higher robustness, which differs from previous observations. Our analytic results are helpful not only for the Internet, but also for other general complex networks. (interdisciplinary physics and related areas of science and technology)
Evaluation of Workflow Management Systems - A Meta Model Approach
Directory of Open Access Journals (Sweden)
Michael Rosemann
1998-11-01
Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction to the topic of meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users specify their requirements for a workflow management system.
An Efficient Dynamic Trust Evaluation Model for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Zhengwang Ye
2017-01-01
Full Text Available Trust evaluation is an effective method to detect malicious nodes and ensure security in wireless sensor networks (WSNs). In this paper, an efficient dynamic trust evaluation model (DTEM) for WSNs is proposed, which implements accurate, efficient, and dynamic trust evaluation by dynamically adjusting the weights of direct trust and indirect trust and the parameters of the update mechanism. To achieve accurate trust evaluation, the direct trust is calculated considering multitrust, including communication trust, data trust, and energy trust, with a punishment factor and regulating function. The indirect trust is evaluated conditionally from the trusted recommendations of a third party. Moreover, the integrated trust is measured by assigning dynamic weights to direct trust and indirect trust and combining them. Finally, we propose an update mechanism based on a sliding window and the induced ordered weighted averaging operator to enhance flexibility. We can dynamically adapt the parameters and the number of interaction-history windows according to the actual needs of the network to realize dynamic updating of the direct trust value. Simulation results indicate that the proposed dynamic trust model is an efficient, dynamic, and attack-resistant trust evaluation model. Compared with existing approaches, the proposed dynamic trust model performs better in defending against multiple malicious attacks.
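A minimal sketch of how direct, indirect and integrated trust might be combined with a sliding window. The class name, weights and the recency-weighted average (a simple stand-in for the induced ordered weighted averaging operator) are illustrative assumptions, not taken from the paper:

```python
from collections import deque

class TrustEvaluator:
    """Toy trust-fusion sketch: multitrust direct trust, weighted
    combination with indirect trust, sliding-window update."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # recent direct-trust values

    def direct_trust(self, comm, data, energy, weights=(0.4, 0.4, 0.2)):
        # multitrust: blend communication, data and energy trust
        return sum(w * t for w, t in zip(weights, (comm, data, energy)))

    def integrated_trust(self, direct, indirect, confidence):
        # more interaction history (confidence in [0, 1]) ->
        # rely more on direct observation, less on recommendations
        return confidence * direct + (1 - confidence) * indirect

    def update(self, value):
        # sliding window with recency weighting: newer samples weigh more
        self.history.append(value)
        weights = [i + 1 for i in range(len(self.history))]
        return (sum(w * v for w, v in zip(weights, self.history))
                / sum(weights))
```

The `maxlen` deque gives the sliding window for free: old interactions fall out automatically, so a node cannot coast forever on past good behaviour.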
Acoustic model order reduction for the lowest condition number in inverse method
Madoliat, Reza; Nouri, Nowrouz Mohammad; Rahrovi, Ali
2017-06-01
Acoustic sources with wide surfaces can be broken down in a fluid environment into smaller acoustic sources. In this study, a general model is presented, indicating the type, number, direction, position and strength of these sources in such a way that the main sound and the sound of the equivalent sources match each other acceptably. When the position and direction of the source are determined, the strength of the source can be found using the inverse method. However, since the solution is not unique in the inverse method, a different acoustic strength is obtained for the sources if different positions are selected. By selecting an arrangement of general sources and using an optimization algorithm, the least possible mismatch between the main sound and the sound of equivalent sources can be achieved. In the inverse method, it is important to reduce the effects of measurement errors. The sensor placement and acoustic model order reduction (AMOR) are studied for reducing these effects.
Blocking probability in the hose-model optical VPN with different number of wavelengths
Roslyakov, Alexander V.
2017-04-01
Connection setup with guaranteed quality of service (QoS) in an optical virtual private network (OVPN) is a major goal for network providers. To support this, we propose a QoS-based OVPN connection setup mechanism over a WDM network to the end customer. The proposed WDM network model can be specified in terms of a QoS parameter such as blocking probability. We estimated this QoS parameter based on the hose-model OVPN. In this mechanism, OVPN connections can also be created or deleted according to the availability of wavelengths in the optical path. In this paper we consider the impact of the number of wavelengths on the computation of blocking probability. The goal of this work is to dynamically provide the best OVPN connection during frequent arrivals of connection requests with QoS requirements.
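Blocking probability as a function of the number of wavelengths is classically computed with the Erlang B recursion; the sketch below assumes Poisson connection arrivals and full wavelength conversion, and is not the paper's exact hose-model computation:

```python
def erlang_b(traffic, channels):
    """Blocking probability for `traffic` erlangs offered to
    `channels` wavelengths, via the numerically stable recursion
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, channels + 1):
        b = traffic * b / (n + traffic * b)
    return b
```

For a fixed offered load, adding wavelengths drives the blocking probability down monotonically, which is the dependence studied in the abstract.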
Monten, Chris; Veldeman, Liv; Verhaeghe, Nick; Lievens, Yolande
2017-11-01
Evolving practice in adjuvant breast radiotherapy inevitably impacts healthcare budgets. This is reflected in a rise of health economic evaluations (HEE) in this domain. The available HEE literature was analysed qualitatively and quantitatively, using available instruments. HEEs published between 1/1/2000 and 31/10/2016 were retrieved through a systematic search in Medline, Cochrane and Embase. A quality-assessment using CHEERS (Consolidated Health Economic Evaluation Reporting Standards) was translated into a quantitative score and compared with Tufts Medical Centre CEA registry and Quality of Health Economic Studies (QHES) results. Twenty cost-effectiveness analyses (CEA) and thirteen cost comparisons (CC) were analysed. In qualitative evaluation, valuation or justification of data sources, population heterogeneity and discussion on generalizability, in addition to declaration on funding, were often absent or incomplete. After quantification, the average CHEERS-scores were 74% (CI 66.9-81.1%) and 75.6% (CI 70.7-80.5%) for CEAs and CCs respectively. CEA-scores did not differ significantly from Tufts and QHES-scores. Quantitative CHEERS evaluation is feasible and yields comparable results to validated instruments. HEE in adjuvant breast radiotherapy is of acceptable quality; however, further efforts are needed to improve comprehensive reporting of all data, indispensable for assessing relevance, reliability and generalizability of results. Copyright © 2017 Elsevier B.V. All rights reserved.
Wang, Wentao
2012-03-01
Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model for this error as a function of normalized concentration difference and Peclet number in micro electroosmotic flow. The analytical predictions of the errors are consistent with the numerical simulations. © 2012 IEEE.
Retrieving infinite numbers of patterns in a spin-glass model of immune networks
Agliari, E.; Annibale, A.; Barra, A.; Coolen, A. C. C.; Tantari, D.
2017-01-01
The similarity between neural and (adaptive) immune networks has been known for decades, but the mechanism that allows the immune system, unlike associative neural networks, to recall and execute a large number of memorized defense strategies in parallel has so far remained unclear. The explanation turns out to lie in the network topology. Neurons typically interact with a large number of other neurons, whereas interactions among lymphocytes in immune networks are very specific, and described by graphs with finite connectivity. In this paper we use replica techniques to solve a statistical mechanical immune network model with “coordinator branches” (T-cells) and “effector branches” (B-cells), and show how the finite connectivity enables the coordinators to manage an extensive number of effectors simultaneously, even above the percolation threshold (where clonal cross-talk is not negligible). A consequence of its underlying topological sparsity is that the adaptive immune system exhibits only weak ergodicity breaking, so that spontaneous switch-like effects such as bi-stabilities are also present: the latter may play a significant role in the maintenance of immune homeostasis.
Model evaluation methodology applicable to environmental assessment models
International Nuclear Information System (INIS)
Shaeffer, D.L.
1979-08-01
A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
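Latin hypercube sampling, offered above as a candidate for sensitivity analyses, can be sketched in a few stdlib lines, together with a multiplicative-chain transform for independent lognormal inputs as described. Function names and parameter values are illustrative:

```python
import math
import random
from statistics import NormalDist

def latin_hypercube(n_samples, n_dims, seed=42):
    """One sample per equal-probability stratum in each dimension,
    with strata randomly paired across dimensions."""
    rng = random.Random(seed)
    cols = []
    for _ in range(n_dims):
        pts = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(pts)  # decouple the stratum order between dimensions
        cols.append(pts)
    return [[cols[d][i] for d in range(n_dims)] for i in range(n_samples)]

def multiplicative_chain(u_row, mus, sigmas):
    """Product of independent lognormal factors driven by one LHS row
    of uniforms; such a product is itself lognormal, with the mus and
    sigma^2 values summed."""
    nd = NormalDist()
    return math.exp(sum(mu + sigma * nd.inv_cdf(u)
                        for u, mu, sigma in zip(u_row, mus, sigmas)))
```

The stratification is what makes LHS attractive for sensitivity analysis: every input's range is covered evenly even with few model runs.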
Multicriterial evaluation of spallation reaction models
International Nuclear Information System (INIS)
Andrianov, A.A.; Gritsyuk, S.V.; Korovin, Yu.A.; Kuptsov, I.S.
2013-01-01
Results of an evaluation of the predictive ability of spallation reaction models for high-energy proton interactions, based on methods of discrete decision analysis, are presented. It is shown that the results obtained using different methods are highly consistent. Recommendations are given on the use of discrete decision analysis methods for providing constants to be employed in calculations for future nuclear power facilities.
Credit Risk Evaluation : Modeling - Analysis - Management
Wehrspohn, Uwe
2002-01-01
An analysis and further development of the building blocks of modern credit risk management: -Definitions of default -Estimation of default probabilities -Exposures -Recovery Rates -Pricing -Concepts of portfolio dependence -Time horizons for risk calculations -Quantification of portfolio risk -Estimation of risk measures -Portfolio analysis and portfolio improvement -Evaluation and comparison of credit risk models -Analytic portfolio loss distributions The thesis contributes to the evaluatio...
Evaluating Performances of Traffic Noise Models | Oyedepo ...
African Journals Online (AJOL)
Traffic noise in decibels dB(A) was measured at six locations using a 407780A Integrating Sound Level Meter, while spot speed and traffic volume were collected with a cine-camera. The predicted sound exposure level (SEL) was evaluated using the Burgess, British and FWHA models. The average noise levels obtained are 77.64 ...
Performance Evaluation Model for Application Layer Firewalls.
Directory of Open Access Journals (Sweden)
Shichang Xuan
Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
Performance Evaluation Model for Application Layer Firewalls.
Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan
2016-01-01
Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
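The Erlangian queuing analysis referred to above can be illustrated with the standard Erlang C (M/M/c) formulas for a pool of service desks; the function name and parameters are illustrative, not the paper's implementation:

```python
import math

def mmc_metrics(arrival_rate, service_rate, servers):
    """Steady-state M/M/c metrics: Erlang C probability that an
    arriving packet must queue, and its mean waiting time.
    Stable only if arrival_rate < servers * service_rate."""
    a = arrival_rate / service_rate          # offered load (erlangs)
    rho = a / servers                        # per-server utilisation
    if rho >= 1:
        raise ValueError("unstable: utilisation must be < 1")
    idle_terms = sum(a ** k / math.factorial(k) for k in range(servers))
    wait_term = a ** servers / math.factorial(servers) / (1 - rho)
    p_wait = wait_term / (idle_terms + wait_term)    # Erlang C
    mean_wait = p_wait / (servers * service_rate - arrival_rate)
    return p_wait, mean_wait
```

Sweeping `servers` under a fixed total budget is the shape of the resource-allocation question the abstract describes: more desks at one layer cut queueing there but starve another layer.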
Marketing evaluation model of the territorial image
Directory of Open Access Journals (Sweden)
Bacherikova M. L.
2017-08-01
Full Text Available This article analyzes the existing models for assessing the image of a territory and concludes that it is necessary to develop a model that makes it possible to assess the image of a territory taking into account all the main target audiences. The study of models of the image of the territory considered in the scientific literature was carried out by the method of traditional (non-formalized) analysis of documents on the basis of scientific publications of Russian and foreign authors. The author suggests using the «ideal point» model to assess the image of the territory. At the same time, the assessment of the image of the territory should be carried out for all groups of consumers, taking into account weight coefficients reflecting the importance of opinions and the number of respondents in each group.
Implementing the Serial Number Tracking model in telecommunications: a case study of Croatia
Directory of Open Access Journals (Sweden)
Neven Polovina
2012-01-01
Full Text Available Background: The case study describes the implementation of the SNT (Serial Number Tracking) model in an integrated information system, as a means of business support in a Croatian mobile telecommunications company. Objectives: The goal was to show best practice in SNT implementation in the telecommunications industry, with reference to problems which arose during the implementation. Methods/Approach: The case study approach was used, based on the documentation about the SNT model and the business intelligence system in the Croatian mobile telecommunications company. Results: Economic aspects of the effectiveness of the SNT model are described and confirmed on the basis of actual tangible and, predominantly, intangible benefits. Conclusions: The advantages of the SNT model are multiple: operating costs for storage and transit of goods were reduced; accuracy of deliveries and physical inventory was improved; a new source of information for the business intelligence system was obtained; operating processes in the distribution of goods were advanced; transit insurance costs decreased; and there were fewer cases of fraudulent behaviour.
Network modeling of the transcriptional effects of copy number aberrations in glioblastoma
Jörnsten, Rebecka; Abenius, Tobias; Kling, Teresia; Schmidt, Linnéa; Johansson, Erik; Nordling, Torbjörn E M; Nordlander, Bodil; Sander, Chris; Gennemark, Peter; Funa, Keiko; Nilsson, Björn; Lindahl, Linda; Nelander, Sven
2011-01-01
DNA copy number aberrations (CNAs) are a hallmark of cancer genomes. However, little is known about how such changes affect global gene expression. We develop a modeling framework, EPoC (Endogenous Perturbation analysis of Cancer), to (1) detect disease-driving CNAs and their effect on target mRNA expression, and to (2) stratify cancer patients into long- and short-term survivors. Our method constructs causal network models of gene expression by combining genome-wide DNA- and RNA-level data. Prognostic scores are obtained from a singular value decomposition of the networks. By applying EPoC to glioblastoma data from The Cancer Genome Atlas consortium, we demonstrate that the resulting network models contain known disease-relevant hub genes, reveal interesting candidate hubs, and uncover predictors of patient survival. Targeted validations in four glioblastoma cell lines support selected predictions, and implicate the p53-interacting protein Necdin in suppressing glioblastoma cell growth. We conclude that large-scale network modeling of the effects of CNAs on gene expression may provide insights into the biology of human cancer. Free software in MATLAB and R is provided. PMID:21525872
Evaluating the TD model of classical conditioning.
Ludvig, Elliot A; Sutton, Richard S; Kehoe, E James
2012-09-01
The temporal-difference (TD) algorithm from reinforcement learning provides a simple method for incrementally learning predictions of upcoming events. Applied to classical conditioning, TD models suppose that animals learn a real-time prediction of the unconditioned stimulus (US) on the basis of all available conditioned stimuli (CSs). In the TD model, similar to other error-correction models, learning is driven by prediction errors--the difference between the change in US prediction and the actual US. With the TD model, however, learning occurs continuously from moment to moment and is not artificially constrained to occur in trials. Accordingly, a key feature of any TD model is the assumption about the representation of a CS on a moment-to-moment basis. Here, we evaluate the performance of the TD model with a heretofore unexplored range of classical conditioning tasks. To do so, we consider three stimulus representations that vary in their degree of temporal generalization and evaluate how the representation influences the performance of the TD model on these conditioning tasks.
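The moment-to-moment error-correction rule described above can be sketched as tabular TD(0) on a fixed-length CS followed by a US. This toy chain (a stand-in for a complete-serial-compound-style representation) is a minimal illustration, not the authors' simulation:

```python
def td_zero(episodes=500, steps=5, alpha=0.1, gamma=1.0):
    """Tabular TD(0): learn V(s), the prediction of the US (reward 1)
    delivered after a CS lasting `steps` time steps.

    Each time step applies the error-correction update
        delta = r + gamma * V(s') - V(s);  V(s) += alpha * delta,
    so learning is continuous in time rather than trial-bound."""
    V = [0.0] * (steps + 1)            # V[steps] is the terminal state
    for _ in range(episodes):
        for s in range(steps):
            r = 1.0 if s == steps - 1 else 0.0   # US at the final step
            delta = r + gamma * V[s + 1] - V[s]
            V[s] += alpha * delta
    return V[:steps]
```

With gamma = 1 the asymptotic prediction is 1.0 at every point of the CS; with gamma < 1 the same code yields the rising prediction curve toward US onset that TD models are known for.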
Evaluating the Safety In Numbers effect for pedestrians at urban intersections.
Murphy, Brendan; Levinson, David M; Owen, Andrew
2017-09-01
Assessment of collision risk between pedestrians and automobiles offers a powerful and informative tool in urban planning applications, and can be leveraged to inform proper placement of improvements and treatment projects to improve pedestrian safety. Such assessment can be performed using existing datasets of crashes, pedestrian counts, and automobile traffic flows to identify intersections or corridors characterized by elevated collision risks to pedestrians. The Safety In Numbers phenomenon, which refers to the observable effect that pedestrian safety is positively correlated with increased pedestrian traffic in a given area (i.e. that the individual per-pedestrian risk of a collision decreases with additional pedestrians), is a readily observed phenomenon that has been studied previously, though its directional causality is not yet known. A sample of 488 intersections in Minneapolis were analyzed, and statistically-significant log-linear relationships between pedestrian traffic flows and the per-pedestrian crash risk were found, indicating the Safety In Numbers effect. Potential planning applications of this analysis framework towards improving pedestrian safety in urban environments are discussed. Copyright © 2017 Elsevier Ltd. All rights reserved.
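The log-linear relationship between pedestrian flow and crash frequency can be illustrated by fitting a power law, crashes ≈ a·P^b, by ordinary least squares on logs; an exponent b < 1 is the Safety In Numbers signature (per-pedestrian risk falls as volume rises). A stdlib-only sketch, not the authors' estimation procedure:

```python
import math

def fit_power_law(exposure, crashes):
    """OLS fit of log(crashes) = log(a) + b*log(exposure).

    Returns (a, b); b < 1 indicates a Safety In Numbers effect,
    since crashes/exposure then decreases with exposure."""
    xs = [math.log(x) for x in exposure]
    ys = [math.log(y) for y in crashes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b
```

On data generated with b = 0.5 the fit recovers the exponent exactly, which makes the behaviour easy to sanity-check before applying it to noisy intersection counts.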
Directory of Open Access Journals (Sweden)
Jichul Ryu
2016-04-01
Full Text Available In this study, 52 asymptotic Curve Number (CN) regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA) model when it is applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN) regression equation model (watershed-scale L-THIA ACN model) was developed by integrating the asymptotic CN regressions and various modules for direct runoff/baseflow/channel routing. The watershed-scale L-THIA ACN model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow prediction. The coefficient of determination (R2) and Nash–Sutcliffe Efficiency (NSE) values for observed versus simulated streamflows over intervals of eight days were greater than 0.6 for all four of the watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters that have been added for the characterization of streamflow.
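The Nash–Sutcliffe Efficiency used above to judge the streamflow predictions has a compact definition; a minimal sketch:

```python
def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2).

    1.0 is a perfect fit; 0.0 means the model is no better than
    always predicting the observed mean; negative values are worse."""
    mo = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mo) ** 2 for o in obs)
    return 1 - num / den
```

The > 0.6 thresholds reported for all four watersheds correspond to the commonly used "satisfactory" band for hydrologic simulations.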
Directory of Open Access Journals (Sweden)
Stefan eHuber
2014-04-01
Full Text Available Decimal fractions comply with the base-10 notational system of natural Arabic numbers. Nevertheless, recent research suggested that decimal fractions may be represented differently than natural numbers because two number processing effects (i.e., semantic interference and compatibility effects) differed in their size between decimal fractions and natural numbers. In the present study, we examined whether these differences indeed indicate that decimal fractions are represented differently from natural numbers. Therefore, we provided an alternative explanation for the semantic congruity effect, namely a string length congruity effect. Moreover, we suggest that the smaller compatibility effect for decimal fractions compared to natural numbers was driven by differences in processing strategy (sequential vs. parallel). To evaluate this claim, we manipulated the tenth and hundredth digits in a magnitude comparison task with participants' eye movements recorded, while the unit digits remained identical. In addition, we evaluated whether our empirical findings could be simulated by an extended version of our computational model originally developed to simulate magnitude comparisons of two-digit natural numbers. In the eye-tracking study, we found evidence that participants processed decimal fractions more sequentially than natural numbers because of the identical leading digit. Importantly, our model was able to account for the smaller compatibility effect found for decimal fractions. Moreover, string length congruity was an alternative account for the prolonged reaction times for incongruent decimal pairs. Consequently, we suggest that representations of natural numbers and decimal fractions do not differ.
A multi-model assessment of the impact of sea spray geoengineering on cloud droplet number
Directory of Open Access Journals (Sweden)
K. J. Pringle
2012-12-01
Full Text Available Artificially increasing the albedo of marine boundary layer clouds by the mechanical emission of sea spray aerosol has been proposed as a geoengineering technique to slow the warming caused by anthropogenic greenhouse gases. A previous global model study (Korhonen et al., 2010) found that only modest increases (< 20%) and sometimes even decreases in cloud drop number (CDN) concentrations would result from emission scenarios calculated using a windspeed dependent geoengineering flux parameterisation. Here we extend that work to examine the conditions under which decreases in CDN can occur, and use three independent global models to quantify maximum achievable CDN changes. We find that decreases in CDN can occur when at least three of the following conditions are met: the injected particle number is < 100 cm^{−3}, the injected diameter is > 250–300 nm, the background aerosol loading is large (≥ 150 cm^{−3}) and the in-cloud updraught velocity is low (< 0.2 m s^{−1}). With lower background loadings and/or increased updraught velocity, significant increases in CDN can be achieved. None of the global models predict a decrease in CDN as a result of geoengineering, although there is considerable diversity in the calculated efficiency of geoengineering, which arises from the diversity in the simulated marine aerosol distributions. All three models show a small dependence of geoengineering efficiency on the injected particle size and the geometric standard deviation of the injected mode. However, the achievability of significant cloud drop enhancements is strongly dependent on the cloud updraught speed. With an updraught speed of 0.1 m s^{−1} a global mean CDN of 375 cm^{−3} (previously estimated to cancel the forcing caused by CO_{2} doubling) is achievable in only about 50% of grid boxes which have > 50% cloud cover, irrespective of the amount of aerosol injected. But at stronger updraft speeds (0
Modelling and evaluation of surgical performance using hidden Markov models.
Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo
2006-10-01
Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
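Scoring a candidate gesture sequence against a trained expert model, as in the metric above, amounts to computing the sequence likelihood under the HMM with the forward algorithm. A minimal discrete-HMM sketch (scaled to avoid underflow on long kinematic sequences); this is the generic algorithm, not the authors' implementation:

```python
import math

def forward_loglik(pi, A, B, obs):
    """Forward algorithm: log P(obs | model) for a discrete HMM.

    pi[i]: initial state probabilities; A[i][j]: transition probs;
    B[i][k]: probability of emitting symbol k in state i."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(1, len(obs)):
        c = sum(alpha)               # scale factor for this step
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                 for j in range(n)]
    return loglik + math.log(sum(alpha))
```

A performance metric can then be the (length-normalized) log-likelihood of a trainee's quantized instrument trajectory under the expert model: higher means closer to expert gesture statistics.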
Evaluating software architecture using fuzzy formal models
Directory of Open Access Journals (Sweden)
Payman Behbahaninejad
2012-04-01
Full Text Available The Unified Modeling Language (UML) has been recognized as one of the most popular techniques for describing the static and dynamic aspects of software systems. One of the primary issues in designing software packages is the uncertainty associated with such models. Fuzzy-UML describes software architecture from both static and dynamic perspectives simultaneously. Evaluating the software architecture at the design phase always helps us find additional requirements, which helps reduce the cost of design. In this paper, we use a fuzzy data model to describe the static aspects of software architecture and a fuzzy sequence diagram to illustrate its dynamic aspects. We also transform these diagrams into Petri Nets and evaluate the reliability of the architecture. A web-based hotel reservation system is studied for further illustration.
Rinaldo, A.; Gatto, M.; Mari, L.; Casagrandi, R.; Righetto, L.; Bertuzzo, E.; Rodriguez-Iturbe, I.
2012-12-01
still lacking. Here, we show that the requirement that all the local reproduction numbers R0 be larger than unity is neither necessary nor sufficient for outbreaks to occur when local settlements are connected by networks of primary and secondary infection mechanisms. To determine onset conditions, we derive general analytical expressions for a reproduction matrix G0 explicitly accounting for spatial distributions of human settlements and pathogen transmission via hydrological and human mobility networks. At disease onset, a generalized reproduction number Λ0 (the dominant eigenvalue of G0) must be larger than unity. We also show that geographical outbreak patterns in complex environments are linked to the dominant eigenvector and to spectral properties of G0. Tests against data and computations for the 2010 Haiti and 2000 KwaZulu-Natal cholera outbreaks, as well as against computations for metapopulation networks, demonstrate that eigenvectors of G0 provide a synthetic and effective tool for predicting the disease course in space and time. Networked connectivity models, describing the interplay between hydrology, epidemiology and social behavior sustaining human mobility, thus prove to be key tools for emergency management of waterborne infections.
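The dominant eigenvalue and eigenvector of a reproduction matrix G0 can be approximated by power iteration, as in this stdlib-only sketch (matrix values are illustrative; an outbreak is possible when the returned dominant eigenvalue exceeds unity):

```python
def dominant_eigenpair(G, iters=200):
    """Power iteration on a non-negative matrix G.

    For an irreducible non-negative reproduction matrix this converges
    to the Perron root (the generalized reproduction number) and a
    positive eigenvector (the geography of the emerging outbreak)."""
    n = len(G)
    v = [1.0 / n] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(G[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)       # infinity-norm estimate
        v = [x / lam for x in w]
    return lam, v
```

In the metapopulation setting, the entries of the returned eigenvector rank the settlements by their expected share of early infections, which is how the eigenvector serves as a spatial forecast.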
Atmospheric Model Evaluation Tool for meteorological and air quality simulations
The Atmospheric Model Evaluation Tool compares model predictions to observed data from various meteorological and air quality observation networks to help evaluate meteorological and air quality simulations.
CMAQ Involvement in Air Quality Model Evaluation International Initiative
Description of Air Quality Model Evaluation International Initiative (AQMEII). Different chemical transport models are applied by different groups over North America and Europe and evaluated against observations.
Constrained minimization problems for the reproduction number in meta-population models.
Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N
2018-02-14
The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9 ) reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Rv), which consists of the partial derivatives of Rv with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006 ; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013 ). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of n = 2 sub-populations are obtained, and the bounds for optimal solutions are derived for n > 2 sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both R0 and Rv are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
International Nuclear Information System (INIS)
Coveyou, R.R.
1974-01-01
The subject of random number generation is currently controversial. Differing opinions on this subject seem to stem from implicit or explicit differences in philosophy; in particular, from differing ideas concerning the role of probability in the real world of physical processes, electronic computers, and Monte Carlo calculations. An attempt is made here to reconcile these views. The role of stochastic ideas in mathematical models is discussed. In illustration of these ideas, a mathematical model of the use of random number generators in Monte Carlo calculations is constructed. This model is used to set up criteria for the comparison and evaluation of random number generators. (U.S.)
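The abstract above speaks of criteria for the comparison and evaluation of random number generators. As a concrete (and much simpler) stand-in for such criteria, the sketch below applies one classical empirical test, a chi-square goodness-of-fit against the uniform distribution; the bin count and sample size are arbitrary choices, not from the report:

```python
import random

def chi_square_uniformity(samples, bins=10):
    # Bin samples from [0, 1) and compare observed counts with the expected
    # uniform count via the chi-square statistic. For k bins, a sound
    # generator should give a statistic near k - 1 degrees of freedom.
    counts = [0] * bins
    for x in samples:
        counts[min(int(x * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(12345)
stat = chi_square_uniformity([rng.random() for _ in range(100_000)])
print(stat)  # expected to be on the order of bins - 1 = 9
```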
Number of Children and Telomere Length in Women: A Prospective, Longitudinal Evaluation
Barha, Cindy K.; Hanna, Courtney W.; Salvante, Katrina G.; Wilson, Samantha L.; Robinson, Wendy P.; Altman, Rachel M.; Nepomnaschy, Pablo A.
2016-01-01
Life history theory (LHT) predicts a trade-off between reproductive effort and the pace of biological aging. Energy invested in reproduction is not available for tissue maintenance, thus having more offspring is expected to lead to accelerated senescence. Studies conducted in a variety of non-human species are consistent with this LHT prediction. Here we investigate the relationship between the number of surviving children born to a woman and telomere length (TL, a marker of cellular aging) over 13 years in a group of 75 Kaqchikel Mayan women. Contrary to LHT’s prediction, women who had fewer children exhibited shorter TLs than those who had more children (p = 0.045) after controlling for TL at the onset of the 13-year study period. An “ultimate” explanation for this apparently protective effect of having more children may lie with humans’ cooperative-breeding strategy. In a number of socio-economic and cultural contexts, having more children appears to be linked to an increase in social support for mothers (e.g., allomaternal care). Higher social support has been argued to reduce the costs of further reproduction. Lower reproductive costs may make more metabolic energy available for tissue maintenance, resulting in a slower pace of cellular aging. At a “proximate” level, the mechanisms involved may include the actions of the gonadal steroid estradiol, which increases dramatically during pregnancy. Estradiol is known to protect TL from the effects of oxidative stress and to increase the activity of telomerase, an enzyme that maintains TL. Future research should explore the potential role of social support, as well as that of estradiol and other potential biological pathways, in the trade-offs between reproductive effort and the pace of cellular aging within and among human as well as non-human populations. PMID:26731744
2016-10-01
expressing cells suffer from a “bystander effect” upon ganciclovir treatment (REF). We therefore reasoned that outgrowth of wild type clones...progress in subaim 1a, substantially improving the design of our proposed transgenic animal , the “deletion reporter mouse”, and are finalizing cloning... animal model, two highly potent small hairpin RNAs (shRNAs) located on chromosome 19 suppress the expression of GFP-Luciferase (GFP-Luc) and RFP
Model description and evaluation of model performance: DOSDIM model
International Nuclear Information System (INIS)
Lewyckyj, N.; Zeevaert, T.
1996-01-01
DOSDIM was developed to assess the impact to man from routine and accidental atmospheric releases. It is a compartmental, deterministic, radiological model. For an accidental release, dynamic transfer factors are used, in contrast to a routine release, for which equilibrium transfer factors are used. Parameter values were chosen to be conservative. Transfers between compartments are described by first-order differential equations. 2 figs
Office for Analysis and Evaluation of Operational Data 1993 annual report: Volume 8, Number 1
International Nuclear Information System (INIS)
1994-11-01
This annual report of the US Nuclear Regulatory Commission's Office for Analysis and Evaluation of Operational Data (AEOD) describes activities conducted during 1993. The report is published in two parts. NUREG-1272, Vol. 8, No. 1, covers power reactors and presents an overview of the operating experience of the nuclear power industry from the NRC perspective, including comments about the trends of some key performance measures. The report also includes the principal findings and issues identified in AEOD studies over the past year and summarizes information from such sources as licensee event reports, diagnostic evaluations, and reports to the NRC's Operations Center. NUREG-1272, Vol. 8, No. 2, covers nuclear materials and presents a review of the events and concerns during 1993 associated with the use of licensed material in nonreactor applications, such as personnel overexposures and medical misadministrations. Both reports also contain a discussion of the Incident Investigation Team program and summarize both the Incident Investigation Team and Augmented Inspection Team reports. Each volume contains a list of the AEOD reports issued from 1980 through 1993
AN INTEGRATED FUZZY AHP AND TOPSIS MODEL FOR SUPPLIER EVALUATION
Directory of Open Access Journals (Sweden)
Željko Stević
2016-05-01
Full Text Available In modern supply chains, the choice of adequate suppliers is of strategic importance to a company's entire business. The aim of this paper is to evaluate different suppliers using an integrated model that combines fuzzy AHP (Analytical Hierarchy Process) and the TOPSIS method. An expert team was formed to compare six criteria, and the significance of the criteria was determined with the fuzzy AHP method. The expert team also compared the suppliers on each criterion using triangular fuzzy numbers. Based on these inputs, the TOPSIS method was used to rank the potential solutions. The suggested model offers certain advantages over the traditional models previously used for supplier evaluation and selection.
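The TOPSIS step of the integrated model above can be sketched as follows. This is the standard crisp TOPSIS procedure, not the paper's exact fuzzy variant; the decision matrix, criteria, and weights (which in the paper would come from fuzzy AHP) are hypothetical:

```python
def topsis(matrix, weights, benefit):
    # TOPSIS: rank alternatives by relative closeness to the ideal solution.
    # Rows = suppliers, columns = criteria; `benefit[j]` marks whether
    # criterion j is a benefit (higher is better) or a cost criterion.
    m, n = len(matrix), len(matrix[0])
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sum((x - a) ** 2 for x, a in zip(row, ideal)) ** 0.5
        d_neg = sum((x - a) ** 2 for x, a in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical suppliers scored on (cost, quality, delivery); cost is a
# cost criterion, the others are benefit criteria. Weights are illustrative.
scores = topsis([[200, 8, 7], [250, 9, 9], [220, 7, 8]],
                weights=[0.5, 0.3, 0.2], benefit=[False, True, True])
print(scores)  # the highest closeness score identifies the best supplier
```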
Lifetime-Aware Cloud Data Centers: Models and Performance Evaluation
Directory of Open Access Journals (Sweden)
Luca Chiaraviglio
2016-06-01
Full Text Available We present a model to evaluate the server lifetime in cloud data centers (DCs. In particular, when the server power level is decreased, the failure rate tends to be reduced as a consequence of the limited number of components powered on. However, the variation between the different power states triggers a failure rate increase. We therefore consider these two effects in a server lifetime model, subject to an energy-aware management policy. We then evaluate our model in a realistic case study. Our results show that the impact on the server lifetime is far from negligible. As a consequence, we argue that a lifetime-aware approach should be pursued to decide how and when to apply a power state change to a server.
Performance Evaluation and Modelling of Container Terminals
Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh
2018-02-01
The present paper evaluates and analyzes the performance of 28 container terminals of South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method used to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals are carried out through response surface methodology (RSM).
Probabilistic evaluation of competing climate models
Directory of Open Access Journals (Sweden)
A. Braverman
2017-10-01
Full Text Available Climate models produce output over decades or longer at high spatial and temporal resolution. Starting values, boundary conditions, greenhouse gas emissions, and so forth make the climate model an uncertain representation of the climate system. A standard paradigm for assessing the quality of climate model simulations is to compare what these models produce for past and present time periods, to observations of the past and present. Many of these comparisons are based on simple summary statistics called metrics. In this article, we propose an alternative: evaluation of competing climate models through probabilities derived from tests of the hypothesis that climate-model-simulated and observed time sequences share common climate-scale signals. The probabilities are based on the behavior of summary statistics of climate model output and observational data over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components, and using a parametric bootstrap to create pseudo-realizations of the noise sequences. The statistics we choose come from working in the space of decorrelated and dimension-reduced wavelet coefficients. Here, we compare monthly sequences of CMIP5 model output of average global near-surface temperature anomalies to similar sequences obtained from the well-known HadCRUT4 data set as an illustration.
Probabilistic evaluation of competing climate models
Braverman, Amy; Chatterjee, Snigdhansu; Heyman, Megan; Cressie, Noel
2017-10-01
Climate models produce output over decades or longer at high spatial and temporal resolution. Starting values, boundary conditions, greenhouse gas emissions, and so forth make the climate model an uncertain representation of the climate system. A standard paradigm for assessing the quality of climate model simulations is to compare what these models produce for past and present time periods, to observations of the past and present. Many of these comparisons are based on simple summary statistics called metrics. In this article, we propose an alternative: evaluation of competing climate models through probabilities derived from tests of the hypothesis that climate-model-simulated and observed time sequences share common climate-scale signals. The probabilities are based on the behavior of summary statistics of climate model output and observational data over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components, and using a parametric bootstrap to create pseudo-realizations of the noise sequences. The statistics we choose come from working in the space of decorrelated and dimension-reduced wavelet coefficients. Here, we compare monthly sequences of CMIP5 model output of average global near-surface temperature anomalies to similar sequences obtained from the well-known HadCRUT4 data set as an illustration.
THE ANTICIPATION OF THE NUMBER OF TOURISTS ARRIVED IN MAMAIA USING THE TYPE OF MODELS ARIMA
Directory of Open Access Journals (Sweden)
Kamer Ainur M. AIVAZ
2016-06-01
Full Text Available The Mamaia station is, at the moment, the biggest and most sought-after tourist station on the Romanian seaside of the Black Sea. From the analysis of the evolution of the main indicators of tourist circulation over the last 10 years (2006-2015), we can notice a significant increase, but we are also interested in knowing the tendency of their evolution in the near future. For this reason, in the present study, we tested the contribution of ARIMA models to producing a forecast for the indicators of tourist arrivals, in total and by structure (Romanians and foreigners), for the Mamaia station. We consider that the results obtained in this study may contribute to defining the station's development strategy and to ensuring the conditions necessary for hosting a significantly greater number of tourists in the following years.
Genotype copy number variations using Gaussian mixture models: theory and algorithms.
Lin, Chang-Yun; Lo, Yungtai; Ye, Kenny Q
2012-10-12
Copy number variations (CNVs) are important in disease association studies and are usually targeted by most recent microarray platforms developed for GWAS studies. However, the probes targeting the same CNV region can vary greatly in performance, with some probes carrying little information beyond pure noise. In this paper, we investigate how best to combine measurements from multiple probes to estimate the copy numbers of individuals under the framework of the Gaussian mixture model (GMM). First, we show that under two regularity conditions, and assuming all parameters except the mixing proportions are known, optimal weights can be obtained so that the univariate GMM based on the weighted average gives exactly the same classification as the multivariate GMM does. We then develop an algorithm that iteratively estimates the parameters, obtains the optimal weights, and uses them for classification. The algorithm performs well on simulated data and on two sets of real data, showing a clear advantage over classification based on the equally weighted average.
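The classification step built on the weighted average can be sketched as follows. The two-class setting, component parameters, and inverse-variance weights are illustrative assumptions (the paper derives the truly optimal weights; inverse-variance weighting is used here only as a plausible stand-in):

```python
import math

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def classify(probes, weights, comps, mix):
    # Combine the probe measurements into one weighted average, then assign
    # the copy-number class with the highest posterior under a univariate GMM.
    z = sum(w * x for w, x in zip(weights, probes)) / sum(weights)
    post = [pi * normal_pdf(z, mu, sd) for pi, (mu, sd) in zip(mix, comps)]
    return max(range(len(post)), key=lambda k: post[k])

# Hypothetical two-probe, two-class setting: class 0 ~ copy-neutral,
# class 1 ~ copy-number gain.
comps = [(0.0, 0.3), (1.0, 0.3)]          # per-class (mean, sd) of the average
mix = [0.7, 0.3]                          # mixing proportions
weights = [1 / 0.2 ** 2, 1 / 0.8 ** 2]    # noisy second probe is downweighted
print(classify([0.1, 1.5], weights, comps, mix))  # 0: reliable probe dominates
```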
CTBT integrated verification system evaluation model supplement
Energy Technology Data Exchange (ETDEWEB)
EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.
2000-03-02
Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.
CTBT integrated verification system evaluation model supplement
International Nuclear Information System (INIS)
EDENBURN, MICHAEL W.; BUNTING, MARCUS; PAYNE, ARTHUR C. JR.; TROST, LAWRENCE C.
2000-01-01
Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0
Transport properties site descriptive model. Guidelines for evaluation and modelling
International Nuclear Information System (INIS)
Berglund, Sten; Selroos, Jan-Olof
2004-04-01
This report describes a strategy for the development of Transport Properties Site Descriptive Models within the SKB Site Investigation programme. Similar reports have been produced for the other disciplines in the site descriptive modelling (Geology, Hydrogeology, Hydrogeochemistry, Rock mechanics, Thermal properties, and Surface ecosystems). These reports are intended to guide the site descriptive modelling, but also to provide the authorities with an overview of modelling work that will be performed. The site descriptive modelling of transport properties is presented in this report and in the associated 'Strategy for the use of laboratory methods in the site investigations programme for the transport properties of the rock', which describes laboratory measurements and data evaluations. Specifically, the objectives of the present report are to: Present a description that gives an overview of the strategy for developing Site Descriptive Models, and which sets the transport modelling into this general context. Provide a structure for developing Transport Properties Site Descriptive Models that facilitates efficient modelling and comparisons between different sites. Provide guidelines on specific modelling issues where methodological consistency is judged to be of special importance, or where there is no general consensus on the modelling approach. The objectives of the site descriptive modelling process and the resulting Transport Properties Site Descriptive Models are to: Provide transport parameters for Safety Assessment. Describe the geoscientific basis for the transport model, including the qualitative and quantitative data that are of importance for the assessment of uncertainties and confidence in the transport description, and for the understanding of the processes at the sites. Provide transport parameters for use within other discipline-specific programmes. Contribute to the integrated evaluation of the investigated sites. The site descriptive modelling of
Evaluating spatial patterns in hydrological modelling
DEFF Research Database (Denmark)
Koch, Julian
is not fully exploited by current modelling frameworks due to the lack of suitable spatial performance metrics. Furthermore, the traditional model evaluation using discharge is found unsuitable to lay confidence on the predicted catchment inherent spatial variability of hydrological processes in a fully...... the contiguous United States (10^6 km2). To this end, the thesis at hand applies a set of spatial performance metrics on various hydrological variables, namely land-surface-temperature (LST), evapotranspiration (ET) and soil moisture. The inspiration for the applied metrics is found in related fields...
Sepúlveda, Nuno
2013-02-26
Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of important micro-organisms for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. © 2013 Sepúlveda et al.; licensee BioMed Central Ltd.
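The core idea of overdispersion-aware coverage screening can be sketched in a few lines. The Poisson-Gamma (negative binomial) model implies var = mu + mu^2/k; the sketch below fits mu and the shape k by the method of moments and flags windows outside a central band. It is a simplification of the paper's full hierarchical approach, and the coverage values and threshold z are invented for illustration:

```python
import statistics

def fit_nb_moments(cov):
    # Method-of-moments fit of the Poisson-Gamma (negative binomial) model:
    # var = mu + mu^2 / k. Overdispersed coverage gives a finite shape k;
    # var <= mu degenerates to the plain Poisson case (k = infinity).
    mu = statistics.mean(cov)
    var = statistics.variance(cov)
    k = mu ** 2 / (var - mu) if var > mu else float("inf")
    return mu, k

def flag_cnv(cov, mu, k, z=2.5):
    # Flag windows whose coverage deviates from mu by more than z negative-
    # binomial standard deviations: low coverage suggests a deletion, high
    # coverage an amplification.
    sd = (mu + mu ** 2 / k) ** 0.5
    calls = []
    for i, c in enumerate(cov):
        if c < mu - z * sd:
            calls.append((i, "deletion"))
        elif c > mu + z * sd:
            calls.append((i, "amplification"))
    return calls

# Synthetic per-window coverages: a 3-window deletion and a 2-window gain.
cov = [30] * 50 + [2, 3, 2] + [30] * 50 + [95, 90]
mu, k = fit_nb_moments(cov)
print(flag_cnv(cov, mu, k))  # flags windows 50-52 and 103-104
```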
Variance of the number of tumors in a model for the induction of osteosarcoma by alpha radiation
International Nuclear Information System (INIS)
Groer, P.G.; Marshall, J.H.
1976-01-01
An earlier report on a model for the induction of osteosarcoma by alpha radiation gave differential equations for the mean numbers of normal, transformed, and malignant cells. In this report we show that for a constant dose rate the variance of the number of cells at each stage and time is equal to the corresponding mean, so the numbers of tumors predicted by the model have a Poisson distribution about their mean values
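The Poisson property stated above (variance of the number of cells equal to the mean under a constant dose rate) can be checked with a small simulation. The event rate, time horizon, and trial count below are arbitrary illustrative choices, not parameters from the report:

```python
import random
import statistics

def simulate_event_counts(rate, t, trials, rng):
    # Under a constant dose rate, transformation events form a Poisson
    # process: count events of intensity `rate` over time t, per trial,
    # using exponential waiting times between events.
    counts = []
    for _ in range(trials):
        n, clock = 0, 0.0
        while True:
            clock += rng.expovariate(rate)
            if clock > t:
                break
            n += 1
        counts.append(n)
    return counts

rng = random.Random(7)
counts = simulate_event_counts(rate=2.0, t=1.5, trials=20000, rng=rng)
m, v = statistics.mean(counts), statistics.variance(counts)
print(m, v)  # both close to rate * t = 3.0: the variance equals the mean
```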
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-12-01
This annual report of the US Nuclear Regulatory Commission's Office for Analysis and Evaluation of Operational Data (AEOD) describes activities conducted during 1996. The report is published in three parts. NUREG-1272, Vol. 10, No. 1, covers power reactors and presents an overview of the operating experience of the nuclear power industry from the NRC perspective, including comments about trends of some key performance measures. The report also includes the principal findings and issues identified in AEOD studies over the past year and summarizes information from such sources as licensee event reports and reports to the NRC's Operations Center. NUREG-1272, Vol. 10, No. 2, covers nuclear materials and presents a review of the events and concerns during 1996 associated with the use of licensed material in nonreactor applications, such as personnel overexposures and medical misadministrations. Both reports also contain a discussion of the Incident Investigation Team program and summarize both the Incident Investigation Team and Augmented Inspection Team reports. Each volume contains a list of the AEOD reports issued from CY 1980 through 1996. NUREG-1272, Vol. 10, No. 3, covers technical training and presents the activities of the Technical Training Center in support of the NRC's mission in 1996.
International Nuclear Information System (INIS)
1997-12-01
This annual report of the US Nuclear Regulatory Commission's Office for Analysis and Evaluation of Operational Data (AEOD) describes activities conducted during 1996. The report is published in three parts. NUREG-1272, Vol. 10, No. 1, covers power reactors and presents an overview of the operating experience of the nuclear power industry from the NRC perspective, including comments about trends of some key performance measures. The report also includes the principal findings and issues identified in AEOD studies over the past year and summarizes information from such sources as licensee event reports and reports to the NRC's Operations Center. NUREG-1272, Vol. 10, No. 2, covers nuclear materials and presents a review of the events and concerns during 1996 associated with the use of licensed material in nonreactor applications, such as personnel overexposures and medical misadministrations. Both reports also contain a discussion of the Incident Investigation Team program and summarize both the Incident Investigation Team and Augmented Inspection Team reports. Each volume contains a list of the AEOD reports issued from CY 1980 through 1996. NUREG-1272, Vol. 10, No. 3, covers technical training and presents the activities of the Technical Training Center in support of the NRC's mission in 1996
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
An Optimization Model for Design of Asphalt Pavements Based on IHAP Code Number 234
Directory of Open Access Journals (Sweden)
Ali Reza Ghanizadeh
2016-01-01
Full Text Available Pavement construction is one of the most costly parts of transportation infrastructure. Improper design and construction of pavements, in addition to the loss of the initial investment, imposes indirect costs on road users and reduces road safety. This paper proposes an optimization model to determine the optimal configuration as well as the optimum thickness of the different pavement layers based on the Iran Highway Asphalt Paving Code Number 234 (IHAP Code 234). After developing the optimization model, the optimum thickness of pavement layers for secondary rural roads, major rural roads, and freeways was determined based on the recommended prices in the “Basic Price List for Road, Runway and Railway” of Iran in 2015, and several charts were developed to determine the optimum thickness of pavement layers, including asphalt concrete, granular base, and granular subbase, with respect to road classification, design traffic, and resilient modulus of the subgrade. The design charts confirm that in the current situation (material prices in 2015), application of an asphalt-treated layer in the pavement structure is not cost effective. It was also shown that, as the strength of the subgrade soil increases, the subbase layer may be removed from the optimum pavement structure.
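The optimization the abstract describes can be sketched as a small cost-minimization search. This uses an AASHTO-style structural-number constraint as a stand-in for the actual IHAP Code 234 requirements, and all layer coefficients, unit costs, and thickness ranges below are hypothetical:

```python
import itertools

def optimize_layers(sn_required, layers):
    # Each layer: (name, structural coefficient per cm, cost per cm per m^2,
    # candidate thicknesses in cm). Exhaustively search the small grid for
    # the cheapest combination meeting the required structural number.
    best = None
    for combo in itertools.product(*(opts for _, _, _, opts in layers)):
        sn = sum(a * d for (_, a, _, _), d in zip(layers, combo))
        if sn < sn_required:
            continue
        cost = sum(c * d for (_, _, c, _), d in zip(layers, combo))
        if best is None or cost < best[0]:
            best = (cost, dict(zip((name for name, *_ in layers), combo)))
    return best

# Hypothetical layer data: expensive asphalt buys the most structural number
# per cm, cheap subbase the least.
layers = [
    ("asphalt concrete", 0.17,  9.0, range(8, 21)),
    ("granular base",    0.055, 1.6, range(15, 41, 5)),
    ("granular subbase", 0.04,  1.0, range(15, 51, 5)),
]
print(optimize_layers(6.0, layers))
```

The optimum predictably maxes out the cheap granular layers and uses the minimum asphalt thickness that still satisfies the constraint, which mirrors the abstract's finding that costlier bound layers are avoided when cheaper material suffices.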
Birch, Gabriel C.; Woo, Bryana L.; Sanchez, Andres L.; Knapp, Haley
2017-08-01
The evaluation of optical system performance in fog conditions typically requires field testing. This can be challenging due to the unpredictable nature of fog generation and the temporal and spatial nonuniformity of the phenomenon itself. We describe the Sandia National Laboratories fog chamber, a new test facility that enables the repeatable generation of fog within a 55 m×3 m×3 m (L×W×H) environment, and demonstrate the fog chamber through a series of optical tests. These tests are performed to evaluate system image quality, determine meteorological optical range (MOR), and measure the number of particles in the atmosphere. Relationships between typical optical quality metrics, MOR values, and total number of fog particles are described using the data obtained from the fog chamber and repeated over a series of three tests.
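The meteorological optical range (MOR) measured in such fog tests relates transmission to visibility through Koschmieder's relation. A minimal sketch, assuming a transmissometer-style measurement over a known path (the 55 m figure below simply reuses the chamber length; the transmission value is invented):

```python
import math

def mor_from_transmission(t, path_m):
    # Koschmieder's relation: derive the extinction coefficient from a
    # measured transmission over a known path, then return the range at
    # which contrast falls to 5% (MOR = -ln(0.05) / sigma ≈ 3 / sigma).
    sigma = -math.log(t) / path_m   # extinction coefficient [1/m]
    return -math.log(0.05) / sigma

print(round(mor_from_transmission(0.7, 55.0)))  # 462 (metres)
```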
Sander, S. P.; Friedl, R. R.; Barker, J. R.; Golden, D. M.; Kurylo, M. J.; Wine, P. H.; Abbatt, J.; Burkholder, J. B.; Kolb, C. E.; Moortgat, G. K.;
2009-01-01
This is the supplement to the fifteenth in a series of evaluated sets of rate constants and photochemical cross sections compiled by the NASA Panel for Data Evaluation. The data are used primarily to model stratospheric and upper tropospheric processes, with particular emphasis on the ozone layer and its possible perturbation by anthropogenic and natural phenomena. Copies of this evaluation are available in electronic form and may be printed from the following Internet URL: http://jpldataeval.jpl.nasa.gov/.
Simulations, evaluations and models. Vol. 1
International Nuclear Information System (INIS)
Brehmer, B.; Leplat, J.
1992-01-01
Papers presented at the Fourth MOHAWC (Models of Human Activities in Work Context) workshop. The general theme was simulations, evaluations and models. The emphasis was on time in relation to the modelling of human activities in modern, high-tech work. Such work often requires people to control dynamic systems, and the behaviour and misbehaviour of these systems in time is a principal focus of work in, for example, a modern process plant. The papers report on microworlds and their innovative uses, both in experiments and in a new kind of application: testing a program that performs diagnostic reasoning. They present new perspectives on the problem of time in process control, showing the importance of considering the time scales of dynamic tasks, both in individual and in distributed decision making, and they provide new formalisms, both for the representation of time and for reasoning about time in diagnosis. (AB)
A methodology for spectral wave model evaluation
Siqueira, S. A.; Edwards, K. L.; Rogers, W. E.
2017-12-01
Model evaluation is accomplished by comparing bulk parameters (e.g., significant wave height, energy period, and mean square slope (MSS)) calculated from the model energy spectra with those calculated from buoy energy spectra. Quality control of the observed data and choice of the frequency range from which the bulk parameters are calculated are critical steps in ensuring the validity of the model-data comparison. The compared frequency range of each observation and the analogous model output must be identical, and the optimal frequency range depends in part on the reliability of the observed spectra. National Data Buoy Center 3-m discus buoy spectra are unreliable above 0.3 Hz due to a non-optimal buoy response function correction. As such, the upper end of the spectrum should not be included when comparing a model to these data. Biofouling of Waverider buoys must be detected, as it can harm the hydrodynamic response of the buoy at high frequencies, thereby rendering the upper part of the spectrum unsuitable for comparison. An important consideration is that the intentional exclusion of high frequency energy from a validation due to data quality concerns (above) can have major implications for validation exercises, especially for parameters such as the third and fourth moments of the spectrum (related to Stokes drift and MSS, respectively); final conclusions can be strongly altered. We demonstrate this by comparing outcomes with and without the exclusion, in a case where a Waverider buoy is believed to be free of biofouling. Determination of the appropriate frequency range is not limited to the observed spectra. Model evaluation involves considering whether all relevant frequencies are included. Guidance to make this decision is based on analysis of observed spectra. Two model frequency lower limits were considered. Energy in the observed spectrum below the model lower limit was calculated for each. For locations where long swell is a component of the wave
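The bulk parameters discussed above are moments of the frequency spectrum restricted to a trusted band. A minimal sketch, assuming deep-water dispersion for the mean square slope and a trapezoidal integration; the band limits and the synthetic flat spectrum are illustrative, with 0.3 Hz chosen as the upper cutoff the text recommends for 3-m discus buoys:

```python
import math

def spectral_moment(freq, s, n):
    # n-th frequency moment m_n = integral of f^n * S(f) df (trapezoid rule).
    total = 0.0
    for i in range(len(freq) - 1):
        df = freq[i + 1] - freq[i]
        total += 0.5 * df * (freq[i] ** n * s[i] + freq[i + 1] ** n * s[i + 1])
    return total

def bulk_parameters(freq, s, f_min=0.04, f_max=0.3):
    # Restrict to the trusted band before integrating; the choice of f_max
    # strongly affects the higher moments (MSS in particular).
    g = 9.81
    band = [(f, e) for f, e in zip(freq, s) if f_min <= f <= f_max]
    fb, sb = [f for f, _ in band], [e for _, e in band]
    m0 = spectral_moment(fb, sb, 0)
    hs = 4.0 * math.sqrt(m0)                  # significant wave height
    te = spectral_moment(fb, sb, -1) / m0     # energy period m_-1 / m_0
    mss = (2 * math.pi) ** 4 / g ** 2 * spectral_moment(fb, sb, 4)  # deep water
    return hs, te, mss

freq = [i / 100 for i in range(5, 31)]   # 0.05-0.30 Hz
s = [1.0] * len(freq)                    # flat unit spectrum (m^2/Hz)
hs, te, mss = bulk_parameters(freq, s)
print(hs)  # ≈ 2.0 for this flat band: Hs = 4 * sqrt(m0) = 4 * sqrt(0.25)
```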
Diagnosis code assignment: models and evaluation metrics.
Perotte, Adler; Pivovarov, Rimma; Natarajan, Karthik; Weiskopf, Nicole; Wood, Frank; Elhadad, Noémie
2014-01-01
The volume of healthcare data is growing rapidly with the adoption of health information technology. We focus on automated ICD9 code assignment from discharge summary content and methods for evaluating such assignments. We study ICD9 diagnosis codes and discharge summaries from the publicly available Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC II) repository. We experiment with two coding approaches: one that treats each ICD9 code independently (flat classifier), and one that leverages the hierarchical nature of ICD9 codes in its modeling (hierarchy-based classifier). We propose novel evaluation metrics, which reflect the distances between gold-standard and predicted codes and their locations in the ICD9 tree. Experimental setup, code for modeling, and evaluation scripts are made available to the research community. The hierarchy-based classifier outperforms the flat classifier with F-measures of 39.5% and 27.6%, respectively, when trained on 20,533 documents and tested on 2282 documents. While recall is improved at the expense of precision, our novel evaluation metrics show a more refined assessment: for instance, the hierarchy-based classifier identifies the correct sub-tree of gold-standard codes more often than the flat classifier. Error analysis reveals that gold-standard codes are not perfect, and as such the recall and precision are likely underestimated. Hierarchy-based classification yields better ICD9 coding than flat classification for MIMIC patients. Automated ICD9 coding is an example of a task for which data and tools can be shared and for which the research community can work together to build on shared models and advance the state of the art.
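The tree-aware evaluation idea can be sketched as a path-length distance between codes in a hierarchy; the parent map below is a hypothetical miniature of the ICD9 tree, and this is not the exact metric the authors define:

```python
def ancestor_path(code, parent):
    """Path from a code up to its root, inclusive, via a child->parent map."""
    path = [code]
    while code in parent:
        code = parent[code]
        path.append(code)
    return path

def tree_distance(a, b, parent):
    """Number of edges between two codes in a hierarchy (via first common ancestor)."""
    pa = ancestor_path(a, parent)
    pb = ancestor_path(b, parent)
    rank_b = {c: i for i, c in enumerate(pb)}
    for i, c in enumerate(pa):
        if c in rank_b:                 # first common ancestor found
            return i + rank_b[c]        # steps up from a plus steps up from b
    return len(pa) + len(pb)            # disjoint sub-hierarchies

# hypothetical slice of an ICD9-like hierarchy
parent = {"428.0": "428", "428.1": "428", "428": "420-429"}
d = tree_distance("428.0", "428.1", parent)  # siblings: 2 edges apart
```

A metric like this rewards a prediction in the correct sub-tree (small distance) over a prediction in an unrelated chapter, which flat precision/recall cannot distinguish.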
Directory of Open Access Journals (Sweden)
Mohsen Sayyah Markabi
2014-10-01
Full Text Available Purpose: Evaluation and selection of efficient suppliers is one of the key issues in supply chain management, which depends on a wide range of qualitative and quantitative criteria. The aim of this research is to develop a mathematical model for evaluating and selecting efficient suppliers when faced with supply and demand uncertainties. Design/methodology/approach: In this research, Grey Relational Analysis (GRA) and Data Envelopment Analysis (DEA) are used to evaluate and select efficient suppliers under uncertainties. Furthermore, a novel ranking method is introduced for units whose efficiencies are obtained in the form of interval grey numbers. Findings: The study indicates that the proposed model, in addition to providing satisfactory and acceptable results, avoids time-consuming computations and consequently reduces the solution time. Another advantage of the proposed model is that it enables decision making based on different levels of risk. Originality/value: The paper presents a mathematical model for evaluating and selecting efficient suppliers in a stochastic environment that companies can use to make better decisions.
Energy Technology Data Exchange (ETDEWEB)
Yamamoto, Yoshinobu, E-mail: yamamotoy@yamanashi.ac.jp [Division of Mechanical Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu 400-8511 (Japan); Kunugi, Tomoaki, E-mail: kunugi@nucleng.kyoto-u.ac.jp [Department of Nuclear Engineering, Kyoto University, C3-d2S06, Kyoto-Daigaku Katsura, Nishikyo-Ku 615-8540, Kyoto (Japan)
2016-11-01
Highlights: • We show the applicability of the zero-equation heat transfer model to predict the heat transfer under a uniform wall-normal magnetic field. • Quasi-theoretical turbulent Prandtl numbers for fluids with various molecular Prandtl numbers were obtained. • Improvements of the prediction accuracy in turbulent kinetic energy and turbulent dissipation rate under the magnetic fields were accomplished. - Abstract: Zero-equation heat transfer models based on a constant turbulent Prandtl number are evaluated using direct numerical simulation (DNS) data for fully developed channel flows subjected to a uniform wall-normal magnetic field. Quasi-theoretical turbulent Prandtl numbers are estimated from DNS data for fluids with various molecular Prandtl numbers. From the viewpoint of highly accurate magneto-hydrodynamic (MHD) heat transfer prediction, the parameters of the turbulent eddy viscosity of the k–ε model are optimized under the magnetic fields. Consequently, we use the zero-equation model based on a constant turbulent Prandtl number to demonstrate MHD heat transfer, and show the applicability of this model to predict the heat transfer.
Evaluation of NOx Emissions and Modeling
Henderson, B. H.; Simon, H. A.; Timin, B.; Dolwick, P. D.; Owen, R. C.; Eyth, A.; Foley, K.; Toro, C.; Baker, K. R.
2017-12-01
Studies focusing on ambient measurements of NOy have concluded that NOx emissions are overestimated, and some have attributed the error to the onroad mobile sector. We investigate this conclusion to identify the cause of observed bias. First, we compare DISCOVER-AQ Baltimore ambient measurements to fine-scale modeling with NOy tagged by sector. Sector-based relationships with bias are present, but these are sensitive to simulated vertical mixing. This is evident both in sensitivity to mixing parameterization and the seasonal patterns of bias. We also evaluate observation-based indicators, like CO:NOy ratios, that are commonly used to diagnose emissions inventories. Second, we examine the sensitivity of predicted NOx and NOy to temporal allocation of emissions. We investigate alternative temporal allocations for EGUs without CEMS, on-road mobile, and several non-road categories. These results show some location-specific sensitivity and will lead to some improved temporal allocations. Third, near-road studies have inherently fewer confounding variables, and have been examined for more direct evaluation of emissions and dispersion models. From 2008-2011, the EPA and FHWA conducted near-road studies in Las Vegas and Detroit. These measurements are used to more directly evaluate the emissions and dispersion using site-specific traffic data. In addition, the site-specific emissions are being compared to the emissions used in larger-scale photochemical modeling to identify key discrepancies. These efforts are part of a larger coordinated effort by EPA scientists to ensure the highest quality in emissions and model processes. We look forward to sharing the state of these analyses and expected updates.
Intuitionistic fuzzy (IF) evaluations of multidimensional model
International Nuclear Information System (INIS)
Valova, I.
2012-01-01
There are different logical methods for data structuring, but none is perfect. The multidimensional model (MD) of data presents data in the form of a cube (referred to also as an info-cube or hypercube) or in the form of a 'star'-type scheme (referred to as a multidimensional scheme), by use of F-structures (Facts) and a set of D-structures (Dimensions), based on the notion of a hierarchy of D-structures. The data being subject of analysis in a specific multidimensional model is located in a Cartesian space restricted by the D-structures. In fact, the data is either dispersed or 'concentrated', therefore the data cells are not distributed evenly within the respective space. The moment of occurrence of any event is difficult to predict, and the data is concentrated by time period, location of the performed business event, etc. To process such dispersed or concentrated data, various technical strategies are needed, and the basic methods for presentation of such data should be selected. The approaches to data processing and the respective calculations are connected with different options for data representation. The use of intuitionistic fuzzy evaluations (IFE) provides new possibilities for alternative presentation and processing of data subject to analysis in any OLAP application. The use of IFE in the evaluation of multidimensional models results in the following advantages: analysts have more complete information for processing and analysis of the respective data; managers benefit from more effective final decisions; and the design of more functional multidimensional schemes is enabled. The purpose of this work is to apply intuitionistic fuzzy evaluations to a multidimensional model of data. (authors)
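An intuitionistic fuzzy evaluation attaches to each assessed item a membership degree and a non-membership degree whose sum is at most 1, the remainder being hesitancy; a minimal sketch (function name and example values are illustrative, not from the paper):

```python
def if_pair(mu, nu):
    """An intuitionistic fuzzy evaluation: membership mu, non-membership nu,
    with mu + nu <= 1; the remainder pi = 1 - mu - nu is the degree of
    uncertainty (hesitancy)."""
    if mu < 0 or nu < 0 or mu + nu > 1:
        raise ValueError("require mu, nu >= 0 and mu + nu <= 1")
    return {"mu": mu, "nu": nu, "pi": 1.0 - mu - nu}

# e.g. a data cell judged 0.6 relevant, 0.3 irrelevant, 0.1 undecided
cell = if_pair(0.6, 0.3)
```

The explicit hesitancy component is what distinguishes IFE from ordinary fuzzy membership: an analyst can record how much of an evaluation is genuinely undetermined rather than forcing it into one of the two degrees.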
Training Module on the Evaluation of Best Modeling Practices
Building upon the fundamental concepts outlined in previous modules, the objectives of this module are to explore the topic of model evaluation and identify the 'best modeling practices' and strategies for the Evaluation Stage of the model life-cycle.
Evaluation of onset of nucleate boiling models
Energy Technology Data Exchange (ETDEWEB)
Huang, LiDong [Heat Transfer Research, Inc., College Station, TX (United States)], e-mail: lh@htri.net
2009-07-01
This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube side and shell side flow boiling and falling film evaporation. The article also evaluates literature models and correlations based on the data: no single model in the open literature predicts all data well. The prediction uncertainty is especially high in vacuum conditions. Surface roughness is another critical criterion in determining which model should be used. However, most models do not directly account for surface roughness, and most investigators do not provide surface roughness information in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleate boiling for engineering design purposes. (author)
Moisture evaluation by dynamic thermography data modeling
Energy Technology Data Exchange (ETDEWEB)
Bison, P.G.; Grinzato, E.; Marinetti, S. [ITEF-CNR, Padova (Italy)
1994-12-31
This paper is the continuation of previous works on the design of a non-destructive method for in situ detection of moistened areas in buildings and the evaluation of the water content in porous materials by thermographic analysis. The use of a heat transfer model to interpret the data improves the measurement accuracy by taking into account the actual boundary conditions. The relative increase of computation time is balanced by the additional advantage of optimizing the testing procedure of different objects by simulating the heat transfer. Two models are tested both analytically and experimentally: (1) the semi-infinite body, to evaluate the thermal inertia and water content; (2) the slab, to measure the sample's diffusivity and the dependence of conductivity on the water content, and to correct the water content estimation. The fitting of the experimental data on the model is carried out according to the least squares method, which is linear in the first case and nonlinear in the second. The Levenberg-Marquardt procedure is followed in the nonlinear fitting to search the parameter space for the optimum point that minimizes the chi-square estimator. Experimental results on bricks used in building restoration activities are discussed. The water content measured in different hygrometric conditions is compared with known values. A correction of the absorptivity coefficient dependent on water content is introduced.
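The nonlinear fitting step described above can be sketched with a minimal Levenberg-Marquardt loop; the cooling-curve model below is a hypothetical stand-in for the paper's heat-transfer models, and the damping schedule is one common simple choice:

```python
import numpy as np

def levenberg_marquardt(model, p0, t, y, n_iter=100, lam=1e-3):
    """Minimal Levenberg-Marquardt least-squares fit (illustrative sketch).
    model(t, p) -> predictions; p0 is the initial parameter guess."""
    p = np.asarray(p0, dtype=float)

    def chi2(q):
        r = y - model(t, q)
        return float(r @ r)

    for _ in range(n_iter):
        r = y - model(t, p)
        # forward-difference Jacobian of the model w.r.t. the parameters
        J = np.empty((len(t), len(p)))
        eps = 1e-6
        for j in range(len(p)):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (model(t, p + dp) - model(t, p)) / eps
        # damped normal equations: (J^T J + lam I) step = J^T r
        A = J.T @ J + lam * np.eye(len(p))
        step = np.linalg.solve(A, J.T @ r)
        if chi2(p + step) < chi2(p):
            p, lam = p + step, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 2.0                     # reject step, damp harder
    return p

# hypothetical noise-free cooling curve y = a * exp(-b * t)
t = np.linspace(0.0, 5.0, 50)
y = 2.0 * np.exp(-0.7 * t)
fit = levenberg_marquardt(lambda t, p: p[0] * np.exp(-p[1] * t), [1.0, 1.0], t, y)
```

The damping factor interpolates between Gauss-Newton (small lam, fast near the optimum) and gradient descent (large lam, robust far from it), which is why the method suits chi-square surfaces like the one in the paper.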
European Cohesion Policy: A Proposed Evaluation Model
Directory of Open Access Journals (Sweden)
Alina Bouroşu (Costăchescu)
2012-06-01
Full Text Available The current approach of European Cohesion Policy (ECP) is intended to be a bridge between different fields of study, emphasizing the intersection between "the public policy cycle, theories of new institutionalism and the new public management". ECP can be viewed as a focal point between putting into practice the principles of the new governance theory, theories of economic convergence and divergence, and the governance of common goods. After a short introduction defining the concepts used, the author discusses the image of ECP created by applying three different theories, focusing on the structural funds implementation system (SFIS), and directs the discussion to the evaluation part of this policy by proposing a model of performance evaluation of the system, in order to outline key principles for creating effective management mechanisms of ECP.
Al Amir, Issam; Dubayle, David; Héron, Anne; Delayre-Orthez, Carine; Anton, Pauline M
2017-12-01
Links between food and inflammatory bowel diseases (IBDs) are often suggested, but the role of food processing has not been extensively studied. Heat treatment is known to cause the loss of nutrients and the appearance of neoformed compounds such as Maillard reaction products. Their involvement in gut inflammation is equivocal, as some may have proinflammatory effects, whereas others seem to be protective. As IBDs are associated with the recruitment of immune cells, including mast cells, we raised the hypothesis that dietary Maillard reaction products generated through heat treatment of food may limit the colitic response and its associated recruitment of mast cells. An experimental model of colitis was used in mice submitted to mildly and highly heated rodent food. Adult male mice were divided into 3 groups and received nonheated, mildly heated, or highly heated chow during 21 days. In the last week of the study, each group was split into 2 subgroups, submitted or not (controls) to dextran sulfate sodium (DSS) colitis. Weight variations, macroscopic lesions, colonic myeloperoxidase activity, and mucosal mast cell number were evaluated at the end of the experiment. Only highly heated chow significantly prevented DSS-induced weight loss, myeloperoxidase activity, and mast cell number increase in the colonic mucosa of DSS-colitic mice. We suggest that Maillard reaction products from highly heated food may limit the occurrence of inflammatory phases in IBD patients. Copyright © 2017 Elsevier Inc. All rights reserved.
Incorporating a 360 Degree Evaluation Model IOT Transform the USMC Performance Evaluation System
2005-02-08
Incorporating a 360 Evaluation Model IOT Transform the USMC Performance Evaluation System. EWS 2005. Subject Area: Manpower.
Development and Evaluation of a Dynamic, 3-Degree-of-Freedom (DOF) Wind Tunnel Model
2016-11-01
ARL-CR-0807 ● NOV 2016. US Army Research Laboratory. Development and Evaluation of a Dynamic, 3-Degree-of-Freedom (DOF) Wind Tunnel Model. Contract Number: W911-QX-14-C-0016.
Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches
Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward
2015-01-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log KM values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
Chmielewski, Jacek
2017-10-01
Nowadays, feasibility studies need to be prepared for all planned transport investments, mainly those co-financed with EU grants. One of the fundamental aspects of a feasibility study is the economic justification of an investment, evaluated through so-called cost-benefit analysis (CBA). The main goal of CBA calculation is to prove that a transport investment is really important for the society and should be implemented as an economically efficient one. It can be said that the number of hours in trips (PH - passenger hours) and travelled kilometres (PK - passenger kilometres) are the most important inputs for CBA results. The differences between PH and PK calculated for particular investment scenarios are the base for benefits calculation. Typically, transport simulation models are the best source of such data. Transport simulation models are among the most powerful tools for transport network planning. They make it possible to evaluate forecast traffic volume and passenger flows in a public transport system for defined scenarios of transport and area development. There are many different transport models. Their construction is often similar, and they mainly differ in the level of their accuracy; even models for the same area may differ in this matter. Typically, such differences come from the accuracy of supply side representation: road and public transport network representation. In many cases only main roads and a public transport network are represented, while local and service roads are eliminated as a way of simplifying reality. This also enables a faster and more effective calculation process. On the other hand, the description of the demand part of these models, based on transport zones, is often stable. Difficulties with data collection, mainly data on land use, have resulted in a lack of changes in the analysed land division into so-called transport zones. In this paper the author presents the influence of land division on the results of traffic analyses, and hence
Sadi, M; Dabir, B
2003-01-01
The Monte Carlo method is one of the most powerful techniques to model different processes, such as polymerization reactions. By this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. The number of algorithm repetitions (the selected volume of reactor for modelling, which determines the number of initial molecules) is very important in this method, because Monte Carlo calculations are based on random number generation and reaction probability determinations. In this paper, the initiation reaction was considered alone, and the importance of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not big enough, because in that case the selected volume would not be representative of the whole system.
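The point about the number of simulated molecules can be illustrated with a toy Monte Carlo estimate of the decomposed-initiator fraction; the probability value and function name are illustrative, not from the paper:

```python
import random

def mc_initiation_fraction(n_molecules, p_decompose, seed=0):
    """Monte Carlo estimate of the fraction of initiator molecules that
    have decomposed, given a per-molecule decomposition probability.
    The estimate's standard error shrinks as 1/sqrt(n_molecules)."""
    rng = random.Random(seed)
    decomposed = sum(rng.random() < p_decompose for _ in range(n_molecules))
    return decomposed / n_molecules

p = 0.3  # illustrative per-molecule decomposition probability
small = mc_initiation_fraction(100, p)        # few molecules: noisy estimate
large = mc_initiation_fraction(1_000_000, p)  # many molecules: close to p
```

With too few simulated molecules the estimate fluctuates strongly around the true fraction, which is exactly the non-representative-volume failure mode the abstract warns about.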
Butler, Doug; Bauman, David; Johnson-Throop, Kathy
2011-01-01
The Integrated Medical Model (IMM) Project has been developing a probabilistic risk assessment tool, the IMM, to help evaluate in-flight crew health needs and impacts to the mission due to medical events. This package is a follow-up to a data package provided in June 2009. The IMM currently represents 83 medical conditions and associated ISS resources required to mitigate medical events. IMM end state forecasts relevant to the ISS PRA model include evacuation (EVAC) and loss of crew life (LOCL). The current version of the IMM provides the basis for the operational version of IMM expected in the January 2011 timeframe. The objectives of this data package are: 1. To provide a preliminary understanding of medical risk data used to update the ISS PRA Model. The IMM has had limited validation and an initial characterization of maturity has been completed using NASA STD 7009 Standard for Models and Simulation. The IMM has been internally validated by IMM personnel but has not been validated by an independent body external to the IMM Project. 2. To support a continued dialogue between the ISS PRA and IMM teams. To ensure accurate data interpretation, and that IMM output format and content meets the needs of the ISS Risk Management Office and ISS PRA Model, periodic discussions are anticipated between the risk teams. 3. To help assess the differences between the current ISS PRA and IMM medical risk forecasts of EVAC and LOCL. Follow-on activities are anticipated based on the differences between the current ISS PRA medical risk data and the latest medical risk data produced by IMM.
CTBT Integrated Verification System Evaluation Model
Energy Technology Data Exchange (ETDEWEB)
Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.
1997-10-01
Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level, modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
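A top-level integration of subsystem detection probabilities might be sketched as follows, under the simplifying assumption of independent subsystems (IVSEM's actual treatment of synergy, interfaces, and evasion is more detailed; the probability values are illustrative):

```python
def integrated_detection_probability(p_detect):
    """System-level probability that at least one technology detects an
    event, assuming independent subsystems: P = 1 - prod(1 - p_i)."""
    miss = 1.0
    for p in p_detect:
        miss *= (1.0 - p)
    return 1.0 - miss

# seismic, infrasound, radionuclide, hydroacoustic (illustrative values)
p_system = integrated_detection_probability([0.8, 0.5, 0.3, 0.6])
```

Even under this crude independence assumption, the combined system outperforms its best single subsystem, which is the qualitative synergy effect the model is built to quantify.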
Automated expert modeling for automated student evaluation.
Energy Technology Data Exchange (ETDEWEB)
Abbott, Robert G.
2006-01-01
The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITSs in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITSs are best suited. At the same time, researchers explore how to expand and improve ITS/student communications, for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.
Estimating quasi-loglinear models for a Rasch table if the number of items is large
Kelderman, Henk
1987-01-01
The Rasch Model and various extensions of this model can be formulated as a quasi loglinear model for the incomplete subgroup x score x item response 1 x ... x item response k contingency table. By comparing various loglinear models, specific deviations of the Rasch model can be tested. Parameter
DEFF Research Database (Denmark)
Skovgaard, M.; Nielsen, Peter V.
In this paper it is investigated whether it is possible to simulate and capture some of the low Reynolds number effects numerically using time-averaged momentum equations and a low Reynolds number k-ε model. The test case is the laminar to turbulent transitional flow over a backward facing step...
Designing and evaluating representations to model pedagogy
Directory of Open Access Journals (Sweden)
Elizabeth Masterman
2013-08-01
Full Text Available This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit that attend to: (1) the underlying ontology of the domain, (2) the purpose of the task that the representation is intended to facilitate, (3) how best to support the cognitive processes of the users of the representations, (4) users' differing needs and preferences, and (5) the tool and environment in which the representations are constructed and manipulated. Through showing how epistemic efficacy can be applied to the design and evaluation of representations, the article presents the Learning Designer, a constructionist microworld in which teachers can both assemble their learning designs and model their pedagogy in terms of students' potential learning experience. Although the activity of modelling may add to the cognitive task of design, the article suggests that the insights thereby gained can additionally help a lecturer who wishes to reuse a particular learning design to make informed decisions about its value to their practice.
Directory of Open Access Journals (Sweden)
Fatemeh Rezaei
2018-01-01
Full Text Available Background: The methodology of Failure Mode and Effects Analysis (FMEA) is known as an important risk assessment tool and an accreditation requirement of many organizations. For prioritizing failures, the index of "risk priority number" (RPN) is used, valued especially for its ease of use and its subjective evaluations of the occurrence, severity, and detectability of each failure. In this study, we have tried to make the FMEA model more compatible with health-care systems by redefining the RPN index to be closer to reality. Methods: We used a quantitative and qualitative approach in this research. In the qualitative domain, focused group discussions were used to collect data. A quantitative approach was used to calculate the RPN score. Results: We have studied the patient's journey in the surgery ward from the holding area to the operating room. The highest-priority failures were determined based on (1) defining inclusion criteria as severity of incident (clinical effect, claim consequence, waste of time and financial loss), occurrence of incident (time-unit occurrence and degree of exposure to risk) and preventability (degree of preventability and defensive barriers), and then (2) quantifying the risk priority criteria by using the RPN index (361 for the highest-rated failure). The improved RPN scores, reassessed by root cause analysis, showed some variations. Conclusions: We concluded that standard criteria should be developed consistent with clinical language and specific scientific fields. Therefore, cooperation and partnership of technical and clinical groups are necessary to modify these models.
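The classic RPN index referred to above is the product of three ratings; a minimal sketch (the study redefines the index, so this is the conventional formula with conventional 1-10 scales, and the example ratings are hypothetical):

```python
def risk_priority_number(severity, occurrence, detectability):
    """Classic FMEA risk priority number: the product of the three
    ratings, each conventionally scored on a 1-10 scale."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("ratings are expected on a 1-10 scale")
    return severity * occurrence * detectability

# e.g. a failure rated severity 9, occurrence 5, detectability 8
rpn = risk_priority_number(9, 5, 8)  # 360
```

Because the three ratings are subjective ordinal scores, different rating triples can yield identical products, which is one of the weaknesses that motivates redefinitions like the one in this study.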
[Evaluation of the quality of nursing care in a number of homes for the aged in Orange Free State].
du Rand, P P; Vermaak, M V
1999-06-01
A study was undertaken to evaluate the quality of nursing care in a number of homes for the aged in the Orange Free State. Ten homes were visited and 45 frail aged patients observed. Data was collected by means of a standardised instrument. Essential physical needs such as hygiene and nutrition were found to receive the necessary attention. However, aspects such as stimulation, socialisation, reality orientation, habit training programmes and exercise did not receive enough attention. In the light of these findings it was concluded that a custodial care approach was followed in these homes.
Chorin, Frédéric; Rahmani, Abderrahmane; Beaune, Bruno; Cornu, Christophe
2015-08-01
Sit-to-stand (STS) movement is useful for evaluating lower limb muscle function, especially from force platforms. Nevertheless, due to a lack of standardization of the STS movement (e.g., position, subject's instructions, etc.), it is difficult to compare results obtained in previous studies. The aim of the present study was to determine the most relevant condition, parameters, and number of trials to perform STS movements. In this study, STS mechanical (maximal and mean force, impulse) and temporal parameters were measured in the vertical, medio-lateral and antero-posterior axes using a force platform. Five STS conditions (i.e., with or without armrests, variation of the height of the chair and the movement speed) were analyzed to evaluate the repeatability of different standardized procedures. Most of the mechanical and temporal parameters were influenced by the STS condition.
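The mechanical parameters named (maximal and mean force, impulse) can be sketched from a vertical force-time signal; the function, the net-impulse convention, and the toy signal below are illustrative, not the study's processing pipeline:

```python
import numpy as np

def sts_parameters(t, fz, body_weight):
    """Mechanical parameters of a sit-to-stand push from the vertical
    ground-reaction force (axis and baseline conventions vary by platform)."""
    f_max = float(np.max(fz))
    f_mean = float(np.mean(fz))
    # net impulse: force above body weight, integrated over the movement
    net = fz - body_weight
    impulse = float(np.sum(0.5 * (net[1:] + net[:-1]) * np.diff(t)))
    return {"f_max": f_max, "f_mean": f_mean,
            "impulse": impulse, "duration": float(t[-1] - t[0])}

t = np.linspace(0.0, 1.0, 101)   # 1 s push phase sampled at 100 Hz (toy data)
fz = np.full(101, 800.0)         # flat 800 N vertical force
res = sts_parameters(t, fz, body_weight=700.0)
```

Subtracting body weight before integrating is what makes the impulse comparable across subjects of different mass, which matters when standardizing STS protocols across studies.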
modelling for optimal number of line storage reservoirs in a water
African Journals Online (AJOL)
user
reservoirs and the source pipe network both increase, while the costs of the demand pipe network decrease. Consequently, a trade-off exists between the storage reservoir and source network costs and the demand network costs. The optimal number of storage reservoirs is that number which gives a system of least total ...
Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.
2007-01-01
Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size, with bias decreasing as effort increased. Based on the simulation results, we recommend the Chao estimator for model M_h be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
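The Chao estimator for model M_h adds to the observed count a correction built from the numbers of individuals sighted exactly once and exactly twice; a minimal sketch with hypothetical sighting data (the real analysis involves more, e.g. bias correction and variance estimation):

```python
from collections import Counter

def chao_mh(sightings):
    """Chao estimator for model M_h: observed count plus f1^2 / (2 * f2),
    where f_k is the number of individuals sighted exactly k times."""
    counts = Counter(sightings)          # sightings: list of individual IDs
    freq = Counter(counts.values())
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f2 == 0:
        raise ValueError("use a bias-corrected variant when f2 == 0")
    return len(counts) + f1 * f1 / (2.0 * f2)

# hypothetical season: 6 bears seen once, 2 seen twice, 1 seen three times
data = ["A", "B", "C", "D", "E", "F", "G", "G", "H", "H", "I", "I", "I"]
n_hat = chao_mh(data)  # 9 observed + 36 / 4 = 18.0
```

The f1/f2 correction grows when many individuals are seen only once, which is exactly the signature of heterogeneous sighting probabilities that model M_h is meant to absorb.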
Directory of Open Access Journals (Sweden)
Lluïsa Jordi Nebot
2013-03-01
Full Text Available This article examines new tutoring evaluation methods to be adopted in the course, Machine Theory, in the Escola Tècnica Superior d’Enginyeria Industrial de Barcelona (ETSEIB), Universitat Politècnica de Catalunya. These new methods have been developed in order to facilitate teaching staff work and include students in the evaluation process. Machine Theory is a required course with a large number of students. These students are divided into groups of three, and required to carry out a supervised work constituting 20% of their final mark. These new evaluation methods were proposed in response to the significant increase of students in the spring semester of 2010-2011, and were pilot tested during the fall semester of academic year 2011-2012, in the previous Industrial Engineering degree program. Pilot test results were highly satisfactory for students and teachers alike, and met the proposed educational objectives. For this reason, the new evaluation methodology was adopted in the spring semester of 2011-2012, in the current bachelor’s degree program in Industrial Technology (Grau en Enginyeria en Tecnologies Industrials, GETI), where it has also achieved highly satisfactory results.
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-12-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
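The brute-force Monte Carlo reference method the authors use can be illustrated on a toy conjugate Gaussian case where the exact BME is known. All numbers and names below are illustrative, not from the study:

```python
import math
import random

random.seed(0)

sigma = 1.0   # known measurement noise std. dev.
tau = 2.0     # prior std. dev. of the single model parameter theta
y = 1.5       # one observed datum

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Brute-force Monte Carlo BME: average the likelihood over prior draws,
# BME = integral of p(y | theta) p(theta) d theta  ~  mean of p(y | theta_m).
M = 50000
bme_mc = sum(norm_pdf(y, random.gauss(0.0, tau), sigma) for _ in range(M)) / M

# Exact analytical BME for this conjugate case: y ~ N(0, sigma^2 + tau^2).
bme_exact = norm_pdf(y, 0.0, math.sqrt(sigma ** 2 + tau ** 2))

print(bme_mc, bme_exact)  # the two values agree closely
```

In higher dimensions this naive averaging degrades quickly, which is exactly why the abstract stresses that numerical BME evaluation becomes infeasible for expensive models.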
Quantum distance and the Euler number index of the Bloch band in a one-dimensional spin model.
Ma, Yu-Quan
2014-10-01
We study the Riemannian metric and the Euler characteristic number of the Bloch band in a one-dimensional spin model with multisite spin exchange interactions. The Euler number of the Bloch band originates from the Gauss-Bonnet theorem on the topological characterization of the closed Bloch states manifold in the first Brillouin zone. We study this approach analytically in a transverse field XY spin chain with three-site spin coupled interactions. We define a class of cyclic quantum distance on the Bloch band and on the ground state, respectively, as a local characterization for quantum phase transitions. Specifically, we give a general formula for the Euler number by means of the Berry curvature in the case of two-band models, which reveals its essential relation to the first Chern number of the band insulators. Finally, we show that the ferromagnetic-paramagnetic phase transition at zero temperature can be distinguished by the Euler number of the Bloch band.
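In standard notation, the topological characterization the abstract describes rests on the Gauss-Bonnet theorem applied to the closed manifold of Bloch states over the Brillouin zone (a schematic statement; the paper's specific two-band formula in terms of the Berry curvature is not reproduced here):

```latex
% Euler characteristic of the Bloch band via Gauss-Bonnet:
% g is the Riemannian (quantum) metric on the Bloch states,
% K its Gaussian curvature, and the integral runs over the
% first Brillouin zone, a closed two-dimensional manifold.
\chi \;=\; \frac{1}{2\pi} \int_{\mathrm{BZ}} K \,\sqrt{\det g}\;\mathrm{d}^{2}k
```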
Shields, Matt
The development of Micro Aerial Vehicles has been hindered by the poor understanding of the aerodynamic loading and the stability and control properties of the low Reynolds number regime in which the inherent low aspect ratio (LAR) wings operate. This thesis experimentally evaluates the static and damping aerodynamic stability derivatives to provide a complete aerodynamic model for canonical flat plate wings of aspect ratios near unity at Reynolds numbers under 1 × 10^5. This permits the complete functionality of the aerodynamic forces and moments to be expressed and the equations of motion to be solved, thereby identifying the inherent stability properties of the wing. This provides a basis for characterizing the stability of full vehicles. The influence of the tip vortices during sideslip perturbations is found to induce a loading condition referred to as roll stall, a significant roll moment created by the spanwise induced velocity asymmetry related to the displacement of the vortex cores relative to the wing. Roll stall is manifested by a linearly increasing roll moment with low to moderate angles of attack and a subsequent stall event similar to a lift polar; this behavior is not experienced by conventional (high aspect ratio) wings. The resulting large magnitude of the roll stability derivative, C_l,beta, and lack of roll damping, C_l,p, create significant modal responses of the lateral state variables; a linear model used to evaluate these modes is shown to accurately reflect the solution obtained by numerically integrating the nonlinear equations. An unstable Dutch roll mode dominates the behavior of the wing for small perturbations from equilibrium, and in the presence of angle of attack oscillations a previously unconsidered coupled mode, referred to as roll resonance, is seen to develop and drive the bank angle away from equilibrium. Roll resonance requires a linear time variant (LTV) model to capture the behavior of the bank angle, which is attributed to the
Shibata, Hiroko; Saito, Haruna; Yomota, Chikako; Kawanishi, Toru
2009-08-13
There are two generics of a parenteral lipid emulsion of prostaglandin E1 (PGE(1)) (Lipo-PGE(1)) in addition to two innovators. It was reported that the change from innovator to generic in clinical practice caused slowing of the drip rate and formation of aggregates in the infusion line. Thus, we investigated differences in pharmaceutical quality among these Lipo-PGE(1) formulations. After mixing with some infusion solutions, the mean diameter and number of large particles were determined. Although the mean diameter did not change in any infusion solution, the number of large particles (diameter >1.0 µm) dramatically increased in generics mixed with Hartmann's solution pH 8 or Lactec injection with 7% sodium bicarbonate. Next, we investigated the effect of these infusion solutions on the retention rate of PGE(1) in lipid particles. The retention rate of PGE(1) in these two infusion solutions decreased more quickly than that in normal saline. Nevertheless, there were no significant differences among the formulations tested. Our results suggest that there is no difference between innovators and generics except when mixed with these infusion solutions. Furthermore, monitoring the number of large particles can be an effective means of evaluating pharmaceutical interactions and/or the stability of lipid emulsions.
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha
2017-06-22
The aim was to predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. This is a computational model using fuzzy logic based on Mamdani's inference method. For the fuzzification of the input variables particulate matter, ozone, sulfur dioxide and apparent temperature, we considered two membership functions for each variable, with the linguistic terms good and bad. For the output variable, number of hospitalizations for asthma and pneumonia, we considered five membership functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in the year 2007, and the result provided by the model was correlated with the actual hospitalization data with lags from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant at those lags. In 2007, 1,710 hospitalizations for pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output showed a positive and significant correlation (r = 0.38) with the actual data; accuracy was higher for sulfur dioxide at lags 0 and 2 and for particulate matter at lag 1. Fuzzy modeling proved accurate for relating pollutant exposure to hospitalizations for pneumonia and asthma.
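A Mamdani-style fuzzy system of the kind described can be sketched in a few lines: triangular membership functions, min for rule firing, max for aggregation, and centroid defuzzification. The universes, set shapes, and the two rules below are illustrative assumptions, not the paper's actual configuration (which uses four inputs and five output sets):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani(pm):
    """Predict a hospitalization level (0-10) from a hypothetical
    particulate-matter reading on a 0-100 scale, via Mamdani inference."""
    mu_good = tri(pm, -20, 0, 60)    # 'good' air quality
    mu_bad = tri(pm, 40, 100, 120)   # 'bad' air quality
    num = den = 0.0
    for i in range(101):             # discretized output universe 0..10
        y = i * 0.1
        low = min(mu_good, tri(y, -2, 0, 5))    # rule 1: good air -> low admissions
        high = min(mu_bad, tri(y, 5, 10, 12))   # rule 2: bad air -> high admissions
        agg = max(low, high)                     # Mamdani max aggregation
        num += y * agg
        den += agg
    return num / den if den else 0.0             # centroid defuzzification

print(mamdani(0), mamdani(100))  # low level for clean air, high for polluted air
```

Clean air (input 0) defuzzifies to a low admission level and heavily polluted air (input 100) to a high one, mirroring the good/bad linguistic terms in the abstract.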
Evaluation of Student's Environment by DEA Models
Directory of Open Access Journals (Sweden)
F. Moradi
2016-11-01
Full Text Available The important question here is: is there real evaluation of educational advancement? In other words, if a student has been successful or unsuccessful in mathematics, is it possible to find the reasons behind his advancement or weakness? To respond to this significant question, the factors of educational advancement should be divided into five main groups: (1) family, (2) teacher, (3) student, (4) school and (5) school manager. It can then be said that a student's score does not depend on just a single factor, as people have imagined. From this, it can be concluded that by using the DEA and SBM models, each student's efficiency can be researched and the factors behind the student's strengths and weaknesses can be analyzed.
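DEA efficiency scoring, which the abstract applies to students, reduces in the single-input single-output case to a ratio of productivities; the general multi-factor CCR or SBM case requires solving a small linear program per DMU. A minimal sketch with made-up numbers:

```python
def ccr_efficiency(inputs, outputs, o):
    """Single-input, single-output CCR (DEA) efficiency of unit o:
    its output/input ratio relative to the best observed ratio.
    Efficient units score 1.0; all others score in (0, 1)."""
    best = max(y / x for x, y in zip(inputs, outputs))
    return (outputs[o] / inputs[o]) / best

# Three hypothetical students: hours of study (input) vs. test score (output).
hours, scores = [2, 4, 5], [4, 4, 10]
print([round(ccr_efficiency(hours, scores, o), 2) for o in range(3)])
# student 1 achieves only half the best observed score-per-hour
```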
RTMOD: Real-Time MODel evaluation
Energy Technology Data Exchange (ETDEWEB)
Graziani, G; Galmarini, S. [Joint Research centre, Ispra (Italy); Mikkelsen, T. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept. (Denmark)
2000-01-01
The 1998 - 1999 RTMOD project is a system based on an automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project that involved about 50 models run in several Institutes around the world to simulate two real tracer releases involving a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. simulations in real-time of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web on a one-day warning in advance. They then accessed the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results would be recalculated to include the influence by all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for realtime
Model for modulated and chaotic waves in zero-Prandtl-number ...
Indian Academy of Sciences (India)
KCD) [20] for thermal convection in zero-Prandtl-number fluids in the presence of Coriolis force showed the possibility of self-tuned temporal quasiperiodic waves at the onset of thermal convection. However, the effect of modulation when the.
National Aeronautics and Space Administration — Shock Wave / Turbulent Boundary Layer Flows at High Mach Numbers. This web page provides data from experiments that may be useful for the validation of turbulence...
Model Experiments with Low Reynolds Number Effects in a Ventilated Room
DEFF Research Database (Denmark)
Nielsen, Peter V.; Filholm, Claus; Topp, Claus
The flow in a ventilated room will not always be a fully developed turbulent flow. Reduced air change rates owing to energy considerations and the application of natural ventilation with openings in the outer wall will give room air movements with low turbulence effects. This paper discusses the isothermal low Reynolds number flow from a slot inlet in the end wall of the room. The experiments are made on the scale of 1 to 5. Measurements indicate a low Reynolds number effect in the wall jet flow. The virtual origin of the wall jet moves forward in front of the opening at a small Reynolds number, an effect that is also known from measurements on free jets. The growth rate of the jet, or the length scale, increases and the velocity decay factor decreases at small Reynolds numbers.
ZATPAC: a model consortium evaluates teen programs.
Owen, Kathryn; Murphy, Dana; Parsons, Chris
2009-09-01
How do we advance the environmental literacy of young people, support the next generation of environmental stewards and increase the diversity of the leadership of zoos and aquariums? We believe it is through ongoing evaluation of zoo and aquarium teen programming and have founded a consortium to pursue those goals. The Zoo and Aquarium Teen Program Assessment Consortium (ZATPAC) is an initiative by six of the nation's leading zoos and aquariums to strengthen institutional evaluation capacity, model a collaborative approach toward assessing the impact of youth programs, and bring additional rigor to evaluation efforts within the field of informal science education. Since its beginning in 2004, ZATPAC has researched, developed, pilot-tested and implemented a pre-post program survey instrument designed to assess teens' knowledge of environmental issues, skills and abilities to take conservation actions, self-efficacy in environmental actions, and engagement in environmentally responsible behaviors. Findings from this survey indicate that teens who join zoo/aquarium programs are already actively engaged in many conservation behaviors. After participating in the programs, teens showed a statistically significant increase in their reported knowledge of conservation and environmental issues and their abilities to research, explain, and find resources to take action on conservation issues of personal concern. Teens also showed statistically significant increases pre-program to post-program for various conservation behaviors, including "I talk with my family and/or friends about things they can do to help the animals or the environment," "I save water...," "I save energy...," "When I am shopping I look for recycled products," and "I help with projects that restore wildlife habitat."
2002-01-01
Business models and cost recovery are the critical factors for determining the sustainability of the 511 traveler information service. In March 2001 the Policy Committee directed the 511 Working Group to investigate plausible business models and...
World Integrated Nuclear Evaluation System: Model documentation
International Nuclear Information System (INIS)
1991-12-01
The World Integrated Nuclear Evaluation System (WINES) is an aggregate demand-based partial equilibrium model used by the Energy Information Administration (EIA) to project long-term domestic and international nuclear energy requirements. WINES follows a top-down approach in which economic growth rates, delivered energy demand growth rates, and electricity demand are projected successively to ultimately forecast total nuclear generation and nuclear capacity. WINES could be potentially used to produce forecasts for any country or region in the world. Presently, WINES is being used to generate long-term forecasts for the United States, and for all countries with commercial nuclear programs in the world, excluding countries located in centrally planned economic areas. Projections for the United States are developed for the period from 2010 through 2030, and for other countries for the period starting in 2000 or 2005 (depending on the country) through 2010. EIA uses a pipeline approach to project nuclear capacity for the period between 1990 and the starting year for which the WINES model is used. This approach involves a detailed accounting of existing nuclear generating units and units under construction, their capacities, their actual or estimated time of completion, and the estimated date of retirements. Further detail on this approach can be found in Appendix B of Commercial Nuclear Power 1991: Prospects for the United States and the World
Minimum required number of specimen records to develop accurate species distribution models
Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, J.J.; Raes, N.
2016-01-01
Species distribution models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being
Minimum required number of specimen records to develop accurate species distribution models
Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, Jan; Raes, N.
2015-01-01
Species Distribution Models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being
COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION
Directory of Open Access Journals (Sweden)
Zdenko Lončarić
2009-12-01
Full Text Available Evaluation of manures, composts and growing media quality should include enough properties to enable optimal use from productivity and environmental points of view. The aim of this paper is to describe the basic structure of an organic fertilizer (and growing media) evaluation model, to present the model by an example comparing different manures, and to give an example of using a plant growth experiment for calculating the impact of pH and EC of growing media on lettuce plant growth. The basic structure of the model includes selection of quality indicators, interpretation of indicator values, and integration of interpreted values into new indexes. The first step includes data input and selection of available data as basic or additional indicators, depending on possible use as fertilizer or growing media. The second part of the model uses inputs for calculation of derived quality indicators. The third step integrates values into three new indexes: fertilizer, growing media, and environmental index. All three indexes are calculated on the basis of three different groups of indicators: basic value indicators, additional value indicators and limiting factors. The possible range of index values is 0-10, where 0-3 means low, 3-7 medium and 7-10 high quality. Comparing fresh and composted manures, higher fertilizer and environmental indexes were determined for composted manures, and the highest fertilizer index was determined for composted pig manure (9.6) whereas the lowest was for fresh cattle manure (3.2). Composted manures had a high environmental index (6.0-10) for conventional agriculture, but some had no value (environmental index = 0) for organic agriculture because of too high zinc, copper or cadmium concentrations. Growing media indexes were determined according to their impact on lettuce growth. Growing media with different pH and EC resulted in very significant impacts on height, dry matter mass and leaf area of lettuce seedlings. The highest lettuce
Evaluation of clinical information modeling tools.
Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak
2016-11-01
Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus and applied to evaluate the currently available tools. Of the 11 initiatives identified that are developing tools for clinical information modeling, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria related to displaying semantic relationships between concepts and communication with terminology servers had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization.
Evaluation of the Current State of Integrated Water Quality Modelling
Arhonditsis, G. B.; Wellen, C. C.; Ecological Modelling Laboratory
2010-12-01
Environmental policy and management implementation require robust methods for assessing the contribution of various point and non-point pollution sources to water quality problems as well as methods for estimating the expected and achieved compliance with the water quality goals. Water quality models have been widely used for creating the scientific basis for management decisions by providing a predictive link between restoration actions and ecosystem response. Modelling water quality and nutrient transport is challenging due to a number of constraints associated with the input data and existing knowledge gaps related to the mathematical description of landscape and in-stream biogeochemical processes. While enormous effort has been invested to make watershed models process-based and spatially-distributed, there has not been a comprehensive meta-analysis of model credibility in the watershed modelling literature. In this study, we evaluate the current state of integrated water quality modelling across the range of temporal and spatial scales typically utilized. We address several common modelling questions by providing a quantitative assessment of model performance and by assessing how model performance depends on model development. The data compiled represent a heterogeneous group of modelling studies, especially with respect to complexity, spatial and temporal scales and model development objectives. Beginning from 1992, the year when Beven and Binley published their seminal paper on uncertainty analysis in hydrological modelling, and ending in 2009, we selected over 150 papers fitting a number of criteria. These criteria involved publications that: (i) employed distributed or semi-distributed modelling approaches; (ii) provided predictions on flow and nutrient concentration state variables; and (iii) reported fit to measured data. Model performance was quantified with the Nash-Sutcliffe Efficiency, the relative error, and the coefficient of determination. Further, our
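The fit statistics compiled in this meta-analysis are simple to compute; a minimal sketch (function and variable names are illustrative):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the model
    predicts no better than the mean of the observations; negative is worse."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def relative_error(obs, sim):
    """Total simulated-minus-observed bias as a fraction of the observed sum."""
    return sum(s - o for o, s in zip(obs, sim)) / sum(obs)

flows_obs = [1.0, 2.0, 3.0, 4.0]
print(nash_sutcliffe(flows_obs, flows_obs))        # perfect simulation -> 1.0
print(nash_sutcliffe(flows_obs, [2.5] * 4))        # mean-only simulation -> 0.0
```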
A Model for the Evaluation of Educational Products.
Bertram, Charles L.
A model for the evaluation of educational products based on experience with development of three such products is described. The purpose of the evaluation model is to indicate the flow of evaluation activity as products undergo development. Evaluation is given Stufflebeam's definition as the process of delineating, obtaining, and providing useful…
A Model for Evaluating Student Clinical Psychomotor Skills.
And Others; Fiel, Nicholas J.
1979-01-01
A long-range plan to evaluate medical students' physical examination skills was undertaken at the Ingham Family Medical Clinic at Michigan State University. The development of the psychomotor skills evaluation model to evaluate the skill of blood pressure measurement, tests of the model's reliability, and the use of the model are described. (JMD)
Austin, Peter C
2010-04-22
Multilevel logistic regression models are increasingly being used to analyze clustered data in medical, public health, epidemiological, and educational research. Procedures for estimating the parameters of such models are available in many statistical software packages. There is currently little evidence on the minimum number of clusters necessary to reliably fit multilevel regression models. We conducted a Monte Carlo study to compare the performance of different statistical software procedures for estimating multilevel logistic regression models when the number of clusters was low. We examined procedures available in BUGS, HLM, R, SAS, and Stata. We found that there were qualitative differences in the performance of different software procedures for estimating multilevel logistic models when the number of clusters was low. Among the likelihood-based procedures, estimation methods based on adaptive Gauss-Hermite approximations to the likelihood (glmer in R and xtlogit in Stata) or adaptive Gaussian quadrature (Proc NLMIXED in SAS) tended to have superior performance for estimating variance components when the number of clusters was small, compared to software procedures based on penalized quasi-likelihood. However, only Bayesian estimation with BUGS allowed for accurate estimation of variance components when there were fewer than 10 clusters. For all statistical software procedures, estimation of variance components tended to be poor when there were only five subjects per cluster, regardless of the number of clusters.
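The data-generating process behind such Monte Carlo studies is easy to reproduce; below is a sketch of a random-intercept logistic simulator plus a deliberately crude variance-component estimate. All names and the continuity-correction choice are illustrative; the study's actual estimators live in BUGS, HLM, R, SAS, and Stata:

```python
import math
import random

random.seed(1)

def simulate_clusters(n_clusters, n_per_cluster, sigma_u=1.0, beta0=0.0):
    """Simulate binary outcomes from a random-intercept logistic model:
    logit P(y=1) = beta0 + u_j, with cluster effects u_j ~ N(0, sigma_u^2)."""
    data = []
    for _ in range(n_clusters):
        u = random.gauss(0.0, sigma_u)
        p = 1.0 / (1.0 + math.exp(-(beta0 + u)))
        data.append([1 if random.random() < p else 0
                     for _ in range(n_per_cluster)])
    return data

def crude_sigma2(data):
    """Naive variance-component estimate: variance of per-cluster empirical
    log-odds (0.5 continuity correction). With few clusters this is very
    unstable, illustrating why cluster count matters."""
    logits = []
    for cluster in data:
        k, n = sum(cluster), len(cluster)
        logits.append(math.log((k + 0.5) / (n - k + 0.5)))
    mean = sum(logits) / len(logits)
    return sum((l - mean) ** 2 for l in logits) / (len(logits) - 1)

print(crude_sigma2(simulate_clusters(5, 10)))    # noisy with 5 clusters
print(crude_sigma2(simulate_clusters(100, 50)))  # steadier with 100 clusters
```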
Generating a lexicon without a language model: Do words for number count?
Spaepen, Elizabet; Coppola, Marie; Flaherty, Molly; Spelke, Elizabeth; Goldin-Meadow, Susan
2013-11-01
Homesigns are communication systems created by deaf individuals without access to conventional linguistic input. To investigate how homesign gestures for number function in short-term memory compared to homesign gestures for objects, actions, or attributes, we conducted memory span tasks with adult homesigners in Nicaragua, and with comparison groups of unschooled hearing Spanish speakers and deaf Nicaraguan Sign Language signers. There was no difference between groups in recall of gestures or words for objects, actions or attributes; homesign gestures therefore can function as word units in short-term memory. However, homesigners showed poorer recall of numbers than the other groups. Increasing the numerical value of the to-be-remembered quantities negatively affected recall in homesigners, but not in the other groups. When developed without linguistic input, gestures for number do not seem to function as summaries of the cardinal values of the sets ("four"), but rather as indexes of items within a set ("one-one-one-one").
Directory of Open Access Journals (Sweden)
S. Zengah
2013-06-01
Full Text Available Fatigue damage increases with applied load cycles in a cumulative manner. Fatigue damage models play a key role in life prediction of components and structures subjected to random loading. The aim of this paper is to examine the performance of the previously proposed and validated “Damaged Stress Model” against other fatigue models under random loading, before and after reconstruction of the load histories. To achieve this objective, several linear and nonlinear fatigue life estimation models are considered, and a batch of specimens made of 6082-T6 aluminum alloy is subjected to random loading. Damage was cumulated by Miner’s rule, the Damaged Stress Model (DSM), the Henry model and the Unified Theory (UT), and random cycles were counted with a rain-flow algorithm. Experimental data on high-cycle fatigue under complex loading histories with different mean and amplitude stress values are analyzed for life calculation, and model predictions are compared.
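Miner's linear rule used here to cumulate damage is a one-line sum; a minimal sketch (the cycle counts and S-N lives below are invented):

```python
def miner_damage(cycle_counts, cycles_to_failure):
    """Palmgren-Miner linear damage sum D = sum(n_i / N_i).

    cycle_counts: applied cycles n_i at each stress amplitude (e.g. from
    rain-flow counting of a random load history);
    cycles_to_failure: S-N curve life N_i at that same amplitude.
    Failure is predicted when D reaches 1.
    """
    return sum(n / N for n, N in zip(cycle_counts, cycles_to_failure))

# 1000 cycles at a mild amplitude (N = 10000) plus 500 at a severe one (N = 1000):
print(miner_damage([1000, 500], [10000, 1000]))  # 0.6 of the fatigue life consumed
```

Miner's rule ignores load-sequence effects, which is precisely what the nonlinear models compared in the paper (DSM, Henry, UT) try to capture.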
DTIC Review: Human, Social, Cultural and Behavior Modeling. Volume 9, Number 1 (CD-ROM)
National Research Council Canada - National Science Library
2008-01-01
...: Human, Social, Cultural and Behavior (HSCB) models are designed to help understand the structure, interconnections, dependencies, behavior, and trends associated with any collection of individuals...
Directory of Open Access Journals (Sweden)
Tetsuya Oda
2012-01-01
Full Text Available Node placement problems have long been investigated in the optimization field owing to numerous applications in location science and classification. Recently, facility location problems have shown their usefulness to communication networks, especially in the Wireless Mesh Networks (WMNs) field, where facilities could be servers or routers offering connectivity services to clients. In this paper, we deal with the effect of mutation and crossover operators in a GA for the node placement problem. We evaluate the performance of the proposed system using different selection operators and different distributions of router nodes, considering the number of covered users as the evaluation parameter. The simulation results show that for Linear and Exponential ranking methods, the system has a good performance for all rates of crossover and mutation.
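Linear ranking selection, one of the operators evaluated, assigns selection probability by fitness rank rather than by raw fitness; a sketch (the selection-pressure value and function names are illustrative assumptions):

```python
import random

random.seed(42)

def linear_ranking_probs(n, pressure=1.8):
    """Selection probabilities for ranks 0 (worst) .. n-1 (best) under the
    standard linear ranking scheme; pressure in [1, 2] tunes elitism."""
    return [(2 - pressure) / n + 2 * i * (pressure - 1) / (n * (n - 1))
            for i in range(n)]

def select(population, fitness, pressure=1.8):
    """Pick one individual by linear ranking selection (higher fitness = better)."""
    ranked = sorted(population, key=fitness)          # worst -> best
    probs = linear_ranking_probs(len(ranked), pressure)
    return random.choices(ranked, weights=probs, k=1)[0]

print(linear_ranking_probs(5))  # probabilities rise linearly from worst to best
```

With pressure 1.8 and five individuals the probabilities are 0.04, 0.12, 0.20, 0.28, 0.36 from worst to best, summing to 1 by construction.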
Directory of Open Access Journals (Sweden)
Margareth Regina Dibo
2013-07-01
Full Text Available Introduction Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods III/IV instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results The five-sweep netting technique was more suitable for drums and water-tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.
Zhao, Quanhua; Li, Xiaoli; Li, Yu
2017-05-12
This paper presents a novel multilook SAR image segmentation algorithm for an unknown number of clusters. First, the marginal probability distribution of a given SAR image is defined by a Gamma mixture model (GaMM), in which the number of components corresponds to the number of homogeneous regions to be segmented and the spatial relationship among neighboring pixels is characterized by a Markov Random Field (MRF) defined on the weighting coefficients of the GaMM components. During the algorithm's iteration procedure, the number of clusters is gradually reduced by merging pairs of components until only one remains. For each fixed number of clusters, the parameters of the GaMM are estimated and the optimal segmentation for that number is obtained by maximizing the marginal probability. Finally, the number of clusters with minimum global energy, defined as the negative logarithm of the marginal probability, is taken as the expected number of homogeneous regions to be segmented, and the corresponding segmentation is considered the final optimal one. Experimental results from the proposed and competing algorithms on simulated and real multilook SAR images show that the proposed algorithm can find the true number of clusters and obtain more accurate segmentation results simultaneously.
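The model-selection loop described in this abstract can be sketched in miniature. The following is an illustrative sketch only, not the paper's algorithm: it fits a 1-D Gamma mixture with a moment-matching EM variant and picks the number of clusters by BIC, omitting the MRF spatial prior and the component-merging schedule; the data and all parameter values are hypothetical.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(1)
# Hypothetical stand-in for pixel intensities from two homogeneous regions.
x = np.concatenate([rng.gamma(2.0, 1.0, 400), rng.gamma(50.0, 1.0, 400)])

def gamma_logpdf(v, shape, scale):
    # log of the Gamma(shape, scale) density, vectorized over v
    return (shape - 1) * np.log(v) - v / scale - lgamma(shape) - shape * np.log(scale)

def fit_gamma_mixture(x, k, iters=100):
    """Moment-matching EM for a k-component Gamma mixture; returns log-likelihood."""
    n = len(x)
    parts = np.array_split(np.sort(x), k)            # quantile initialization
    shape, scale, weight = np.empty(k), np.empty(k), np.empty(k)
    for j, p in enumerate(parts):
        m, v = p.mean(), p.var() + 1e-9
        shape[j], scale[j], weight[j] = m * m / v, v / m, len(p) / n

    def joint_logp():                                # (k, n): log w_j + log f_j(x)
        return np.stack([np.log(weight[j]) + gamma_logpdf(x, shape[j], scale[j])
                         for j in range(k)])

    for _ in range(iters):
        logp = joint_logp()
        top = logp.max(axis=0)
        log_mix = top + np.log(np.exp(logp - top).sum(axis=0))
        resp = np.exp(logp - log_mix)                # responsibilities (E-step)
        for j in range(k):                           # moment-matching M-step
            w = resp[j]
            tot = w.sum()
            if tot < 1e-8:                           # component died; keep params
                continue
            m = (w * x).sum() / tot
            v = (w * (x - m) ** 2).sum() / tot + 1e-9
            shape[j], scale[j], weight[j] = m * m / v, v / m, tot / n
    logp = joint_logp()
    top = logp.max(axis=0)
    return (top + np.log(np.exp(logp - top).sum(axis=0))).sum()

def best_k(x, kmax=4):
    # BIC stands in here for the paper's MRF-regularized "global energy" criterion.
    n = len(x)
    bic = [-2.0 * fit_gamma_mixture(x, k) + (3 * k - 1) * np.log(n)
           for k in range(1, kmax + 1)]
    return int(np.argmin(bic)) + 1

print("chosen number of clusters:", best_k(x))
```

On well-separated synthetic data the penalized criterion recovers the generating number of components, which is the qualitative behavior the paper reports for its energy-based selection.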
Directory of Open Access Journals (Sweden)
Fernando Augusto de Souza
2014-07-01
Full Text Available The aim of this research was to evaluate the influence of the number and position of nutrient levels used in dose-response trials on the estimation of the optimal level (OL) and the goodness of fit of the models: quadratic polynomial (QP), exponential (EXP), linear response plateau (LRP) and quadratic response plateau (QRP). Data from dose-response trials carried out at FCAV-Unesp Jaboticabal were used, assuming homogeneity of variances and normal distribution. The fit of the models was evaluated using the following statistics: adjusted coefficient of determination (R²adj), coefficient of variation (CV) and the sum of squared deviations (SSD). It was verified that in the QP and EXP models small changes in the placement and distribution of the levels caused great changes in the estimation of the OL. The LRP model was deeply influenced by the absence or presence of a level between the response and stabilization phases (the change from the straight line to the plateau). The QRP needed more levels in the response phase, and the last level in the stabilization phase, to correctly estimate the plateau. It was concluded that the OL and the fit of the models depend on the positioning and number of the levels and on the specific characteristics of each model; levels defined near the true requirement and not too widely spaced are better for estimating the OL.
Modelling Problem-Solving Situations into Number Theory Tasks: The Route towards Generalisation
Papadopoulos, Ioannis; Iatridou, Maria
2010-01-01
This paper examines the way two 10th graders cope with a non-standard generalisation problem that involves elementary concepts of number theory (more specifically linear Diophantine equations) in the geometrical context of a rectangle's area. Emphasis is given on how the students' past experience of problem solving (expressed through interplay…
Directory of Open Access Journals (Sweden)
Takehisa Yamamoto
Full Text Available Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
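The core of the bootstrap comparison described above can be illustrated with a toy simulation. This is a hedged sketch, not the study's code: the Beta-distributed farm-level prevalences and all parameter values are invented for illustration; it only reproduces the qualitative finding that, for a fixed total of 12 samples, taking 1 animal per farm yields the most precise prevalence estimate when resistance clusters by farm.

```python
import numpy as np

rng = np.random.default_rng(42)

total_samples = 12                       # fixed overall sample size
# Hypothetical between-farm heterogeneity: each farm has its own
# resistance prevalence, drawn from a Beta(2, 8) distribution (mean 0.2).
farm_prev = rng.beta(2.0, 8.0, size=30)

def estimator_sd(per_farm, reps=20000):
    """SD of the overall-prevalence estimator for a strategy that samples
    `per_farm` animals from each of total_samples // per_farm farms."""
    n_farms = total_samples // per_farm
    farms = rng.choice(farm_prev, size=(reps, n_farms), replace=True)
    positives = rng.binomial(per_farm, farms)        # per-farm positive counts
    return (positives.sum(axis=1) / total_samples).std()

sd_one = estimator_sd(per_farm=1)    # 12 farms x 1 animal each
sd_four = estimator_sd(per_farm=4)   # 3 farms x 4 animals each
print(f"SD, 1 animal/farm:  {sd_one:.4f}")
print(f"SD, 4 animals/farm: {sd_four:.4f}")
```

Because animals on the same farm share that farm's prevalence, concentrating the 12 samples on fewer farms inflates the between-replicate variance, matching the study's conclusion.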
Comparison of the 1981 INEL dispersion data with results from a number of different models
Energy Technology Data Exchange (ETDEWEB)
Lewellen, W S; Sykes, R I; Parker, S F
1985-05-01
The results from simulations by 12 different dispersion models are compared with observations from an extensive field experiment conducted by the Nuclear Regulatory Commission at the Idaho National Engineering Laboratory in July 1981. Comparisons were made on the basis of hourly SF₆ samples taken at the surface, out to approximately 10 km from the 46 m release tower, both during and following 7 different 8-hour releases. Comparisons are also made for total integrated doses collected out to approximately 40 km. Three classes of models are used. Within the limited range appropriate for Class A models, this data comparison shows that neither the puff models nor the transport-and-diffusion models agree with the data any better than the simple Gaussian plume models. The puff and transport-and-diffusion models do show a slight edge in performance in comparison with the total dose over the extended range appropriate for Class B models. The best model results for the hourly samples show approximately 40% calculated within a factor of two when a 15° uncertainty in plume position is permitted and it is assumed that higher data samples may occur at stations between the actual sample sites. This increases to 60% for the 12-hour integrated dose and 70% for the total integrated dose when the same performance measure is used. None of the models reproduce the observed patchy dose patterns. This patchiness is consistent with the discussion of the inherent uncertainty associated with time-averaged plume observations contained in our companion reports on the scientific critique of available models.
Evaluating Quality in Model-Driven Engineering
Mohagheghi, Parastoo; Aagedal, Jan
2007-01-01
In Model-Driven Engineering (MDE), models are the prime artifacts, and developing high-quality systems depends on developing high-quality models and performing transformations that preserve quality or even improve it. This paper presents quality goals in MDE and states that the quality of models is affected by the quality of modeling languages, tools, modeling processes, the knowledge and experience of modelers, and the quality assurance techniques applied. The paper further presents related ...
Kirsch, V. A.; Volkov, V. V.; Bildukevich, A. V.
A method for calculating the external mass transfer in a contactor with a transverse confined flow of a viscous incompressible liquid (gas) past hollow fibers at low Reynolds numbers is proposed. The method is based on the concept of regular arrays of parallel fibers with a well-defined flowfield. As a simplest model system, a row of parallel fibers is considered, for which dependences of a drag force and an efficiency of a solute retention on the inter-fiber distance, membrane mass transfer coefficient, Peclet and Reynolds numbers are computed. The influence of the fluid inertia on the mass transport is studied. It is shown that a linear Stokes equations can be used for as higher Re numbers, as denser is the fiber array. In this case the flow field is independent on the Re number, and analytical solutions for the flowfield and fiber sorption efficiency (fiber Sherwood number) can be used.
Liu, Mei; Luo, Tao; Yang, Chongguang; Liu, Qingyun; Gao, Qian
2015-10-01
To identify a variable number of tandem repeats (VNTR) typing method suitable for molecular epidemiological studies of tuberculosis in China, we systematically evaluated the commonly used VNTR typing methods, including 4 methods (MIRU-12, VNTR-15/VNTR-24 and VNTR "24+4") proposed by foreign colleagues and 2 methods (VNTR-L15 and VNTR "9+3") developed by domestic researchers, using a population-based collection of 891 clinical isolates from 5 provinces across the country. The order (from high to low) of discriminatory power for the 6 VNTR typing methods was VNTR "24+4", VNTR "9+3", VNTR-24, VNTR-15, VNTR-L15 and MIRU-12. The discriminatory power of VNTR "9+3" was comparable with that of VNTR "24+4" and higher than that of VNTR-15/24. The concordance in defining clustered and unique genotypes between VNTR "9+3" and VNTR "24+4" was 96.59%. Our results suggest that VNTR "9+3" is a suitable method for molecular typing of M. tuberculosis in China, considering its high discriminatory power, high consistency with VNTR "24+4" and relatively small number of VNTR loci.
Yaroslavsky, Leonid P.
1996-11-01
We show that one can treat pseudo-random generators, evolutionary models of texture images, iterative local adaptive filters for image restoration and enhancement and growth models in biology and material sciences in a unified way as special cases of dynamic systems with a nonlinear feedback.
Energy Technology Data Exchange (ETDEWEB)
Vigren, E.; Edberg, N. J. T.; Eriksson, A. I.; Johansson, F.; Odelstad, E. [Swedish Institute of Space Physics, Uppsala (Sweden); Altwegg, K.; Tzou, C.-Y. [Physikalisches Institut, University of Bern, Bern (Switzerland); Galand, M. [Department of Physics, Imperial College London, London (United Kingdom); Henri, P.; Valliéres, X., E-mail: erik.vigren@irfu.se [Laboratoire de Physique et Chimie de l’Environnement et de l’Espace, Orleans (France)
2016-09-01
During 2015 January 9–11, at a heliocentric distance of ∼2.58–2.57 au, the ESA Rosetta spacecraft resided at a cometocentric distance of ∼28 km from the nucleus of comet 67P/Churyumov–Gerasimenko, sweeping the terminator at northern latitudes of 43°N–58°N. Measurements by the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/Comet Pressure Sensor (ROSINA/COPS) provided neutral number densities. We have computed modeled electron number densities using the neutral number densities as input into a Field Free Chemistry Free model, assuming H₂O dominance and ion-electron pair formation by photoionization only. A good agreement (typically within 25%) is found between the modeled electron number densities and those observed from measurements by the Mutual Impedance Probe (RPC/MIP) and the Langmuir Probe (RPC/LAP), both being subsystems of the Rosetta Plasma Consortium. This indicates that ions along the nucleus-spacecraft line were strongly coupled to the neutrals, moving radially outward with about the same speed. Such a statement, we propose, can be further tested by observations of H₃O⁺/H₂O⁺ number density ratios and associated comparisons with model results.
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.
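One way to make the magnitude/sequence distinction concrete is the following hedged sketch (an illustrative decomposition, not necessarily the metric MPESA implements): comparing the sorted observed and simulated series isolates magnitude error, since sorting removes any sequencing mismatch, and the residual of the full RMSE beyond that component is attributed to sequence error.

```python
import numpy as np

obs = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
sim_shift = obs + 1.0           # magnitude error only: right order, all offset
sim_perm = obs[::-1].copy()     # sequence error only: same values, wrong order

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def magnitude_rmse(a, b):
    # Sorting both series removes sequencing mismatch, leaving magnitude error.
    return rmse(np.sort(a), np.sort(b))

def sequence_rmse(a, b):
    # Residual error beyond the magnitude component (clamped at zero).
    return float(np.sqrt(max(rmse(a, b) ** 2 - magnitude_rmse(a, b) ** 2, 0.0)))

print(magnitude_rmse(obs, sim_shift), sequence_rmse(obs, sim_shift))  # 1.0 0.0
print(magnitude_rmse(obs, sim_perm), sequence_rmse(obs, sim_perm))    # 0.0, then > 0
```

The two constructed cases show the intended separation: a uniformly shifted simulation registers only magnitude error, while a reordered simulation with the correct values registers only sequence error.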
Baryon number and lepton universality violation in leptoquark and diquark models
Directory of Open Access Journals (Sweden)
Nima Assad
2018-02-01
Full Text Available We perform a systematic study of models involving leptoquarks and diquarks with masses well below the grand unification scale and demonstrate that a large class of them is excluded due to rapid proton decay. After singling out the few phenomenologically viable color triplet and sextet scenarios, we show that there exist only two leptoquark models which do not suffer from tree-level proton decay and which have the potential for explaining the recently discovered anomalies in B meson decays. Both of those models, however, contain dimension five operators contributing to proton decay and require a new symmetry forbidding them to emerge at a higher scale. This has a particularly nice realization for the model with the vector leptoquark (3,1,2/3), which points to a specific extension of the Standard Model, namely the Pati–Salam unification model, where this leptoquark naturally arises as the new gauge boson. We explore this possibility in light of recent B physics measurements. Finally, we analyze also a vector diquark model, discussing its LHC phenomenology and showing that it has nontrivial predictions for neutron–antineutron oscillation experiments.
Directory of Open Access Journals (Sweden)
Fateme Tajik
2013-01-01
Full Text Available Introduction: Lack of knowledge about root canal anatomy can cause mistakes in diagnosis and treatment planning and lead to treatment failure. The mandibular canine is usually single-rooted, but it may have two roots or more root canals. The purpose of this study was to evaluate the number of roots and root canals of the mandibular canine using digital radiography at different angles and to compare it with the clearing method. Materials & Methods: This study was a diagnostic test on two hundred human mandibular canine teeth. Digital radiographs of the teeth were prepared from mesiodistal, buccolingual and 20° mesial views. Radiographic evaluation was done separately by two observers (an oral radiologist and an endodontist). Dental clearing was then performed. Data analysis was done using SPSS ver. 17 software and McNemar's statistical test. P0.001. Findings of digital radiography in the mesiodistal view showed that 180 teeth (90%) were single-canal and 20 teeth (10%) had two canals, which did not differ from the clearing method (P=0.25). In the 20° mesial view, 192 teeth (96%) were single-canal and 8 teeth (4%) had two canals, which differed from the clearing method (P=0.012). Conclusion: Despite the low prevalence of anatomical variations of the mandibular canine in this in vitro study, due to the lack of a significant difference between the radiographic mesiodistal view and the clearing technique, CBCT is recommended for obtaining a fast and complete diagnosis of unusual root canals.
Directory of Open Access Journals (Sweden)
Ilse Storch
2002-06-01
Full Text Available This paper explores the effects of spatial resolution on the performance and applicability of habitat models in wildlife management and conservation. A Habitat Suitability Index (HSI) model for the Capercaillie (Tetrao urogallus) in the Bavarian Alps, Germany, is presented. The model was built exclusively on non-spatial, small-scale variables of forest structure, without any consideration of landscape patterns. The main goal was to assess whether an HSI model developed from small-scale habitat preferences can explain differences in population abundance at larger scales. To validate the model, habitat variables and indirect sign of Capercaillie use (such as feathers or feces) were mapped in six study areas based on a total of 2901 sample plots of 20 m radius (for habitat variables) and 5 m radius (for Capercaillie sign). First, the model's representation of Capercaillie habitat preferences was assessed. Habitat selection, as expressed by Ivlev's electivity index, was closely related to HSI scores, increased from poor to excellent habitat suitability, and was consistent across all study areas. Then, habitat use was related to HSI scores at different spatial scales. Capercaillie use was best predicted from HSI scores at the small scale. Lowering the spatial resolution of the model stepwise to 36-ha, 100-ha, 400-ha, and 2000-ha areas and relating Capercaillie use to aggregated HSI scores resulted in a deterioration of fit at larger scales. Most importantly, there were pronounced differences in Capercaillie abundance at the scale of study areas, which could not be explained by the HSI model. The results illustrate that even if a habitat model correctly reflects a species' smaller scale habitat preferences, its potential to predict population abundance at larger scales may remain limited.
Model for modulated and chaotic waves in zero-Prandtl-number ...
Indian Academy of Sciences (India)
…c + 2q²)/8, f₇ = αβη(3π² − q²)/2, f₈ = αη(4π²β − q²)/4, f₉ = (3 + α²)…
2.3 Results and discussion: The above dynamical system (eqs (7)–(19)) is integrated using a standard fourth-order Runge–Kutta scheme. The external parameters are the Rayleigh number R, … (Pramana – J. Phys., Vol. 71, No. 3, September 2008, p. 549.)
DEFF Research Database (Denmark)
Kumar, Prashant; Garmory, Andrew; Ketzel, Matthias
2009-01-01
Pollution Model (OSPM) and Computational Fluid Dynamics (CFD) code FLUENT. All models disregarded any particle dynamics. CFD simulations have been carried out in a simplified geometry of the selected street canyon. Four different sizes of emission sources have been used in the CFD simulations to assess...... simulations showed that selection of the source size was critical to determine PNC distributions. A source size scaling the vehicle dimensions was found to better represent the measured PNC profiles in the lowest part of the canyon. The OSPM and Box model produced similar shapes of PNC profile across...... differences were largest between idealised (CFD and Box) and operational (OSPM) models at upper sampling heights; these were attributed to weaker exchange of air between street and roof-above in the upper part of the canyon in the CFD calculations. Possible reasons for these discrepancies are given....
Total cross sections of hadron interactions at high energies in low constituents number model
International Nuclear Information System (INIS)
Abramovskij, V.A.; Radchenko, N.V.
2009-01-01
We consider a QCD hadron interaction model in which the gluon density in the initial-state wave function is low in rapidity space and real hadrons are produced from the decay of color strings. This model describes well the behavior of the total cross sections of pp, pp̄, π±p, K±p, γp, and γγ interactions. The value of the proton-proton total cross section at the LHC energy is predicted.
Impact of model defect and experimental uncertainties on evaluated output
International Nuclear Information System (INIS)
Neudecker, D.; Capote, R.; Leeb, H.
2013-01-01
One of the current major problems in nuclear data evaluation is the unreasonably small evaluated uncertainties often obtained. These small uncertainties are partly attributed to missing correlations of experimental uncertainties as well as to deficiencies of the model employed for the prior information. In this article, both uncertainty sources are included in an evaluation of 55Mn cross-sections for incident neutrons. Their impact on the evaluated output is studied using a prior obtained by the Full Bayesian Evaluation Technique and a prior obtained by the nuclear model program EMPIRE. It is shown analytically and by means of an evaluation that unreasonably small evaluated uncertainties can be obtained not only if correlated systematic uncertainties of the experiment are neglected but also if prior uncertainties are smaller than, or of about the same magnitude as, the experimental ones. Furthermore, it is shown that including model defect uncertainties in the evaluation of 55Mn leads to larger evaluated uncertainties for channels where the model is deficient. It is concluded that including correlated experimental uncertainties is as important as including model defect uncertainties if the model calculations deviate significantly from the measurements. -- Highlights: • We study possible causes of unreasonably small evaluated nuclear data uncertainties. • Two different formulations of model defect uncertainties are presented and compared. • Smaller prior than experimental uncertainties cause too small evaluated ones. • Neglected correlations of experimental uncertainties cause too small evaluated ones. • Including model defect uncertainties in the prior improves the evaluated output
Modeling and designing of variable-period and variable-pole-number undulator
Directory of Open Access Journals (Sweden)
I. Davidyuk
2016-02-01
Full Text Available The concept of permanent-magnet variable-period undulator (VPU was proposed several years ago and has found few implementations so far. The VPUs have some advantages as compared with conventional undulators, e.g., a wider range of radiation wavelength tuning and the option to increase the number of poles for shorter periods. Both these advantages will be realized in the VPU under development now at Budker INP. In this paper, we present the results of 2D and 3D magnetic field simulations and discuss some design features of this VPU.
Occupancy Models, Bell-Type Polynomials and Numbers and Applications to Probability.
1983-06-01
expressed in terms of such ratios. These recurrences bypass the computational difficulties which come from the fact that the numbers themselves (but not the ratios…). Basic properties and recurrences facilitate their computation. Reference: Sobel, M., Uppuluri, V.R.R. and Frankowski, K. (1977). Dirichlet distributions, type 1. In Selected Tables in Mathematical Statistics, Vol. 4.
International Nuclear Information System (INIS)
Alvarez, Gabriel; Martinez Alonso, Luis; Medina, Elena
2011-01-01
We present a method to compute the genus expansion of the free energy of Hermitian matrix models from the large N expansion of the recurrence coefficients of the associated family of orthogonal polynomials. The method is based on the Bleher-Its deformation of the model, on its associated integral representation of the free energy, and on a method for solving the string equation which uses the resolvent of the Lax operator of the underlying Toda hierarchy. As a byproduct we obtain an efficient algorithm to compute generating functions for the enumeration of labeled k-maps which does not require the explicit expressions of the coefficients of the topological expansion. Finally we discuss the regularization of singular one-cut models within this approach.
Haghani, Shima; Sedehi, Morteza; Kheiri, Soleiman
2017-09-02
Traditional statistical models are often based on presuppositions and limitations that may not hold in actual data and can lead to instability in estimation or prediction. In such situations, artificial neural networks (ANNs) can be a suitable alternative to classical statistical methods. A prospective cohort study was conducted in the Shahrekord Blood Transfusion Center, Shahrekord, central Iran, on blood donors from 2008-2009. The accuracy of the proposed model in predicting the number of returns for blood donation was compared with classical statistical models. A total of 864 donors who had a first-time successful donation were followed for five years. The number of returns for blood donation was considered the response variable. Poisson regression (PR), negative binomial regression (NBR), zero-inflated Poisson regression (ZIPR) and zero-inflated negative binomial regression (ZINBR) models, as well as an ANN model, were fitted to the data; the MSE criterion was used to compare the models, which were fitted using STATISTICA 10 and R 3.2.2. RESULTS: The MSE of the PR, NBR, ZIPR, ZINBR and ANN models was 2.71, 1.01, 1.54, 0.094 and 0.056 for the training data and 4.05, 9.89, 3.99, 2.53 and 0.27 for the test data, respectively. The ANN model had the least MSE in both the training and test data sets and performed better than the classic models. ANNs could be a suitable alternative for modeling such data because of their fewer restrictions.
Increased numbers of orexin/hypocretin neurons in a genetic rat depression model
DEFF Research Database (Denmark)
Mikrouli, Elli; Wörtwein, Gitta; Soylu, Rana
2011-01-01
The Flinders Sensitive Line (FSL) rat is a genetic animal model of depression that displays characteristics similar to those of depressed patients, including lower body weight, decreased appetite and reduced REM sleep latency. Hypothalamic neuropeptides such as orexin/hypocretin and melanin-concentrating hormone…
A Validated All-Pressure Fluid Drop Model and Lewis Number Effects for a Binary Mixture
Harstad, K.; Bellan, J.
1999-01-01
The differences between subcritical liquid drop and supercritical fluid drop behavior are discussed. Under subcritical, evaporative, high-emission-rate conditions, a film layer is present in the inner part of the drop surface which contributes to the unique determination of the boundary conditions; it is this film layer which gives the solution its convective-diffusive character. In contrast, under supercritical conditions the boundary conditions contain a degree of arbitrariness due to the absence of a surface, and the solution then has a purely diffusive character. Results from simulations of a free fluid drop under no-gravity conditions are compared to microgravity experimental data from suspended, large-drop experiments at high, low and intermediate temperatures and in a range of pressures encompassing the sub- and supercritical regimes. Despite the difference between the conditions of the simulations and experiments (suspension vs. free floating), the time rate of variation of the square of the drop diameter is remarkably well predicted in the linear-curve regime. The drop diameter is determined in the simulations from the location of the maximum density gradient, and agrees well with the data. It is also shown that the classical calculation of the Lewis number gives qualitatively erroneous results at supercritical conditions, but that a previously defined effective Lewis number gives qualitatively correct estimates of the length scales for heat and mass transfer at all pressures.
Nuclear safety culture evaluation model based on SSE-CMM
International Nuclear Information System (INIS)
Yang Xiaohua; Liu Zhenghai; Liu Zhiming; Wan Yaping; Peng Guojian
2012-01-01
Safety culture, which is of great significance for establishing safety objectives, characterizes the level of enterprise safety production and development. Traditional safety culture evaluation models emphasize the thinking and behavior of individuals and organizations, and pay attention to evaluation results while ignoring the process. Moreover, the determination of evaluation indicators lacks objective evidence. A novel, scientific and complete multidimensional safety culture evaluation model is presented by building a preliminary mapping between safety culture and the process areas and generic practices of the SSE-CMM (Systems Security Engineering Capability Maturity Model). The model focuses on evaluating the enterprise system security engineering process and provides new ideas and scientific evidence for the study of safety culture. (authors)
Spectral evaluation of Earth geopotential models and an experiment ...
Indian Academy of Sciences (India)
and an experiment on its regional improvement for geoid modelling. B. Erol, Department of Geomatics Engineering, Civil Engineering Faculty, Istanbul Technical University, Maslak 34469, Istanbul, Turkey. e-mail: bihter@itu.edu.tr
As the number of Earth geopotential models (EGM) grows with the increasing number of data collected by dedicated satellite gravity missions, CHAMP, GRACE and GOCE, measuring the differences among the models and monitoring the improvements in gravity field recovery are required. This study assesses the ...
Ball, Frank; Pellis, Lorenzo; Trapman, Pieter
2016-04-01
In this paper we consider epidemic models of directly transmissible SIR (susceptible → infective → recovered) and SEIR (with an additional latent class) infections in fully susceptible populations with a social structure, consisting either of households or of households and workplaces. We review most reproduction numbers defined in the literature for these models, including the basic reproduction number R0 introduced in the companion paper to this one, for which we provide a simpler, more elegant derivation. Extending previous work, we provide a complete overview of the inequalities among these reproduction numbers and resolve some open questions. Special focus is put on the exponential-growth-associated reproduction number Rr, which is loosely defined as the estimate of R0 based on the observed exponential growth of an emerging epidemic obtained when the social structure is ignored. We show that for the vast majority of the models considered in the literature Rr ≥ R0 when R0 ≥ 1 and Rr ≤ R0 when R0 ≤ 1. We show that, in contrast to models without social structure, vaccination of a fraction 1 − 1/R0 of the population, chosen uniformly at random, with a perfect vaccine is usually insufficient to prevent large epidemics. In addition, we provide significantly sharper bounds than the existing ones for bracketing the critical vaccination coverage between two analytically tractable quantities, which we illustrate by means of extensive numerical examples. Copyright © 2016 Elsevier Inc. All rights reserved.
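The homogeneous-mixing baseline against which this vaccination result is stated can be written down directly. A minimal sketch, assuming only the classical unstructured model: with a perfect vaccine given to a uniformly chosen fraction v of the population, the reproduction number scales to (1 − v)·R0, so the classical critical coverage is v_c = 1 − 1/R0; the abstract's point is that with household or workplace structure this coverage is usually insufficient.

```python
def critical_coverage(r0: float) -> float:
    """Classical critical vaccination coverage 1 - 1/R0 for a perfect vaccine
    under homogeneous mixing (no household or workplace structure)."""
    if r0 <= 1.0:
        return 0.0      # below threshold, no large epidemic even unvaccinated
    return 1.0 - 1.0 / r0

for r0 in (0.8, 1.5, 2.0, 4.0):
    print(f"R0 = {r0}: classical v_c = {critical_coverage(r0):.3f}")
```

For structured populations the paper instead brackets the true critical coverage between analytically tractable bounds, so this formula should be read as a lower-bound heuristic in that setting.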
Don C. Bragg; Jeffrey L. Kershner
2004-01-01
Riparian large woody debris (LWD) recruitment simulations have traditionally applied a random angle of tree fall from two well-forested stream banks. We used a riparian LWD recruitment model (CWD, version 1.4) to test the validity of these assumptions. Both the number of contributing forest banks and the predominant tree fall direction significantly influenced simulated...
Energy Technology Data Exchange (ETDEWEB)
Guo, Y. [School of Astronomy and Space Science and Key Laboratory of Modern Astronomy and Astrophysics in Ministry of Education, Nanjing University, Nanjing 210023 (China); Pariat, E.; Moraitis, K. [LESIA, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Université, UPMC Univ. Paris 06, Univ. Paris Diderot, Sorbonne Paris Cité, F-92190 Meudon (France); Valori, G. [University College London, Mullard Space Science Laboratory, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Anfinogentov, S. [Institute of Solar-Terrestrial Physics SB RAS 664033, Irkutsk, P.O. box 291, Lermontov Street, 126a (Russian Federation); Chen, F. [Max-Plank-Institut für Sonnensystemforschung, D-37077 Göttingen (Germany); Georgoulis, M. K. [Research Center for Astronomy and Applied Mathematics of the Academy of Athens, 4 Soranou Efesiou Street, 11527 Athens (Greece); Liu, Y. [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States); Thalmann, J. K. [Institute of Physics, Univeristy of Graz, Universitätsplatz 5/II, A-8010 Graz (Austria); Yang, S., E-mail: guoyang@nju.edu.cn [Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)
2017-05-01
We study the writhe, twist, and magnetic helicity of different magnetic flux ropes, based on models of the solar coronal magnetic field structure. These include an analytical force-free Titov–Démoulin equilibrium solution, non-force-free magnetohydrodynamic simulations, and nonlinear force-free magnetic field models. The geometrical boundary of the magnetic flux rope is determined by the quasi-separatrix layer and the bottom surface, and the axis curve of the flux rope is determined by its overall orientation. The twist is computed by the Berger–Prior formula, which is suitable for arbitrary geometry and both force-free and non-force-free models. The magnetic helicity is estimated by the twist multiplied by the square of the axial magnetic flux. We compare the obtained values with those derived by a finite volume helicity estimation method. We find that the magnetic helicity obtained with the twist method agrees with the helicity carried by the purely current-carrying part of the field within uncertainties for most test cases. It is also found that the current-carrying part of the model field is relatively significant at the very location of the magnetic flux rope. This qualitatively explains the agreement between the magnetic helicity computed by the twist method and the helicity contributed purely by the current-carrying magnetic field.
The Caribbean News Agency: Third World Model. Journalism Monographs Number 71.
Cuthbert, Marlene
This monograph is a history of the Caribbean News Agency (CANA), which is jointly owned by private and public mass media of its region and independent of both governments and foreign news agencies. It is proposed that CANA may provide a unique model of an independent, regional third-world news agency. Sections of the monograph examine (1) CANA's…
Presenting an Evaluation Model for the Cancer Registry Software.
Moghaddasi, Hamid; Asadi, Farkhondeh; Rabiei, Reza; Rahimi, Farough; Shahbodaghi, Reihaneh
2017-12-01
As cancer incidence is increasing, the cancer registry, as the main core of cancer control programs, is of great importance, and many different software systems have been designed for this purpose. Establishing a comprehensive evaluation model is therefore essential for evaluating and comparing a wide range of such software. In this study, the criteria for cancer registry software were determined by studying the relevant documents and two functional software systems in this field. The evaluation tool was a checklist, and to validate the model, this checklist was presented to experts in the form of a questionnaire. To analyze the validation results, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, once the model was approved, the final version of the evaluation model for cancer registry software was presented. The evaluation model of this study comprises an evaluation tool and an evaluation method. The evaluation tool is a checklist including the general and specific criteria of cancer registry software along with their sub-criteria. Based on the findings, a criteria-based evaluation method was chosen. The model encompasses the various dimensions of cancer registry software and a proper method for evaluating it. The strong point of this evaluation model is the separation between general criteria and specific ones while aiming for comprehensive coverage of the criteria. Since this model has been validated, it can be used as a standard to evaluate cancer registry software.
Evaluation of Data Used for Modelling the Stratosphere of Saturn
Armstrong, Eleanor Sophie; Irwin, Patrick G. J.; Moses, Julianne I.
2015-11-01
Planetary atmospheres are modeled through the use of a photochemical and kinetic reaction scheme constructed from experimentally and theoretically determined rate coefficients, photoabsorption cross sections and branching ratios for the molecules described within them. The KINETICS architecture has previously been developed to model planetary atmospheres and is applied here to Saturn’s stratosphere. We consider the pathways that comprise the reaction scheme of a current model, and update the reaction scheme according to the findings of a literature investigation. We evaluate contemporary photochemical literature, studying recent data sets of cross sections and branching ratios for a number of hydrocarbons used in the photochemical scheme of Model C of KINETICS. In particular, we evaluate new photodissociation branching ratios for CH4, C2H2, C2H4, C3H3, C3H5 and C4H2, and new cross-section data for C2H2, C2H4, C2H6, C3H3, C4H2, C6H2 and C8H2. By evaluating the techniques used and the data sets obtained, a new reaction scheme selection was drawn up. These data are then used within the preferred reaction scheme of the thesis and applied to the KINETICS atmospheric model to produce a model of the stratosphere of Saturn in a steady state. The total output of the preferred reaction scheme is presented, and the data are compared both with the previous reaction scheme and with data from the Cassini spacecraft in orbit around Saturn. One of the key findings of this work is that there is significant change in the model’s output as a result of temperature-dependent data determination. Although only shown within the changes to the photochemical portion of the preferred reaction scheme, it is suggested that an equally important temperature dependence will be exhibited in the kinetic section of the reaction scheme. The photochemical model output is shown to be highly dependent on the preferred reaction scheme used within it by this thesis. The importance of correct…
Directory of Open Access Journals (Sweden)
M Sedaghati
2015-12-01
Full Text Available In this paper, a new model is presented to determine the number of spare transformers and their locations for distribution stations. The number of spare transformers must require minimum investment while remaining sufficient to replace transformers that fail. For this reason, a new objective function is presented to maximize profit in the distribution company’s budgeting and planning. To determine the number of spares that must be available in a stock room, this paper considers the number of spares and transformer faults at the same time. The number of spare transformers is determined so that at least one spare transformer is available to replace each failed transformer. The time required to purchase or repair a failed transformer is taken into account when determining the number of required spares. Furthermore, as the number of spare units increases, the cost of maintenance increases, so an economic comparison must be made between the costs saved by reducing outage time and the added costs of holding spare transformers.
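The stock-sizing logic described above, keeping enough spares on hand while failed units are being repaired or purchased, can be sketched with a simple service-level rule. This is an illustrative assumption (Poisson failures over a fixed lead time), not the optimization model of the paper:

```python
import math

def spares_needed(failure_rate_per_year: float, lead_time_years: float,
                  service_level: float = 0.95) -> int:
    """Smallest spare count s such that the probability that failures
    during the repair/purchase lead time do not exceed s meets the
    target service level. Poisson failures are an assumption of this
    sketch, not a detail taken from the paper."""
    mean = failure_rate_per_year * lead_time_years
    prob = math.exp(-mean)   # P(N = 0)
    cum = prob
    s = 0
    while cum < service_level:
        s += 1
        prob *= mean / s     # Poisson recurrence P(N = s)
        cum += prob
    return s

# e.g. 4 failures/year across the stations, 6-month lead time:
print(spares_needed(4.0, 0.5))
```

Raising the service level or lengthening the lead time increases the spare count, which is exactly the trade-off against holding cost that the paper's economic comparison addresses.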
Rao, Mathukumalli Srinivasa; Swathi, Pettem; Rao, Chitiprolu Anantha Rama; Rao, K V; Raju, B M K; Srinivas, Karlapudi; Manimanjari, Dammu; Maheswari, Mandapaka
2015-01-01
The present study features the estimation of the number of generations of tobacco caterpillar, Spodoptera litura Fab., on peanut crop at six locations in India using MarkSim, which provides General Circulation Model (GCM) future data on daily maximum (T.max) and minimum (T.min) air temperatures from six models viz., BCCR-BCM2.0, CNRM-CM3, CSIRO-Mk3.5, ECHams5, INCM-CM3.0 and MIROC3.2, along with an ensemble of the six, for three emission scenarios (A2, A1B and B1). These data were used to predict future pest scenarios following the growing degree days approach in four climate periods viz., Baseline-1975, Near future (NF)-2020, Distant future (DF)-2050 and Very Distant future (VDF)-2080. It is predicted that more generations would occur during the three future climate periods, with significant variation among scenarios and models. Among the seven models, 1-2 additional generations were predicted during DF and VDF due to higher future temperatures in the CNRM-CM3, ECHams5 and CSIRO-Mk3.5 models. The temperature projections of these models indicated that the generation time would decrease by 18-22% over baseline. Analysis of variance (ANOVA) was used to partition the variation in the predicted number of generations and generation time of S. litura on peanut during the crop season. Geographical location explained 34% of the total variation in number of generations, followed by time period (26%), model (1.74%) and scenario (0.74%). The remaining 14% of the variation was explained by interactions. The increased number of generations and reduced generation time across the six peanut-growing locations of India suggest that the incidence of S. litura may increase under the projected temperature increases in future climate change periods.
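The growing-degree-days approach used above can be sketched as follows; the development threshold and per-generation heat requirement below are illustrative placeholders, not the values calibrated for S. litura in the study:

```python
def degree_days(t_max: float, t_min: float, t_base: float) -> float:
    """One day's growing degree days by the simple averaging method:
    mean daily temperature above a lower development threshold."""
    return max(((t_max + t_min) / 2.0) - t_base, 0.0)

def generations(daily_temps, t_base: float, gdd_per_generation: float) -> float:
    """Generations completed in a season: accumulated degree days
    divided by the heat units one generation requires."""
    total = sum(degree_days(tmax, tmin, t_base) for tmax, tmin in daily_temps)
    return total / gdd_per_generation

# Hypothetical 120-day season of 34/22 degC days, 10 degC base,
# 500 degree days per generation:
season = [(34.0, 22.0)] * 120
print(generations(season, 10.0, 500.0))
```

Warmer projected temperatures raise the daily degree-day increment, which is why the study predicts extra generations and shorter generation times in the future climate periods.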
Modeling a support system for the evaluator
International Nuclear Information System (INIS)
Lozano Lima, B.; Ilizastegui Perez, F; Barnet Izquierdo, B.
1998-01-01
This work gives evaluators a tool they can employ to lend more soundness to their review of operational limits and conditions. The system establishes the most adequate method for carrying out the evaluation, as well as for evaluating the bases of the technical operational specifications. It also generates alternative questions to be supplied to the operating entity to support it in decision-making activities
Ivins, E. R.; Unti, T. W. J.; Phillips, R. J.
1982-01-01
It has long been known that the earth behaves viscoelastically. Viscoelasticity may be of importance in two aspects of mantle convection, including time-dependent behavior and local storage of recoverable work. The present investigation makes use of thermal convection in a box as a prototype of mantle flow. It is demonstrated that recoverable work can be important to the local mechanical energy balance in the descending lithosphere. It is shown that, even when assuming large viscoelastic parameters, an inherent time-dependence of viscoelastic convection appears only in local exchanges of mechanical energy. There is no strong exchange between buoyant potential energy and recoverable strain energy in the Rayleigh number range investigated. The investigation is mainly concerned with viscoelastic effects occurring on a buoyant time scale. It is found that viscoelastic effects have a negligible influence on the long term thermal energetics of mantle convection.
Statistical models of shape optimisation and evaluation
Davies, Rhodri; Taylor, Chris
2014-01-01
Deformable shape models have wide application in computer vision and biomedical image analysis. This book addresses a key issue in shape modelling: establishment of a meaningful correspondence between a set of shapes. Full implementation details are provided.
Naturalness and lepton number/flavor violation in inverse seesaw models
Energy Technology Data Exchange (ETDEWEB)
Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University,1060, Nishikawatsu, Matsue, Shimane (Japan); Ishida, Hiroyuki [Graduate School of Science and Engineering, Shimane University,1060, Nishikawatsu, Matsue, Shimane (Japan); Physics Division, National Center for Theoretical Sciences,101, Section 2 Kuang Fu Road, Hsinchu, 300 Taiwan (China); Yamaguchi, Yuya [Graduate School of Science and Engineering, Shimane University,1060, Nishikawatsu, Matsue, Shimane (Japan); Department of Physics, Faculty of Science, Hokkaido University,Kita 9 Nishi 8, Kita-ku, Sapporo, Hokkaido (Japan)
2016-11-02
We introduce three right-handed neutrinos and three sterile neutrinos, and consider an inverse seesaw mechanism for neutrino mass generation. From a naturalness point of view, their Majorana masses should be small, while this induces a large neutrino Yukawa coupling. Then, the neutrinoless double beta decay rate can be enhanced, and a sizable Higgs mass correction is inevitable. We find that the enhancement can be more than ten times the standard prediction from the light neutrino contribution alone, and we derive an analytic form for the heavy neutrino contributions to the Higgs mass correction. In addition, we analyze the model numerically, and find that almost all of its parameter space can be complementarily searched by future experiments on neutrinoless double beta decay and μ→e conversion.
Cerebellar plasticity and motor learning deficits in a copy-number variation mouse model of autism.
Piochon, Claire; Kloth, Alexander D; Grasselli, Giorgio; Titley, Heather K; Nakayama, Hisako; Hashimoto, Kouichi; Wan, Vivian; Simmons, Dana H; Eissa, Tahra; Nakatani, Jin; Cherskov, Adriana; Miyazaki, Taisuke; Watanabe, Masahiko; Takumi, Toru; Kano, Masanobu; Wang, Samuel S-H; Hansel, Christian
2014-11-24
A common feature of autism spectrum disorder (ASD) is the impairment of motor control and learning, occurring in a majority of children with autism, consistent with perturbation in cerebellar function. Here we report alterations in motor behaviour and cerebellar synaptic plasticity in a mouse model (patDp/+) for the human 15q11-13 duplication, one of the most frequently observed genetic aberrations in autism. These mice show ASD-resembling social behaviour deficits. We find that in patDp/+ mice delay eyeblink conditioning--a form of cerebellum-dependent motor learning--is impaired, and observe deregulation of a putative cellular mechanism for motor learning, long-term depression (LTD) at parallel fibre-Purkinje cell synapses. Moreover, developmental elimination of surplus climbing fibres--a model for activity-dependent synaptic pruning--is impaired. These findings point to deficits in synaptic plasticity and pruning as potential causes for motor problems and abnormal circuit development in autism.
Baryon number generation in a flipped SU(5) x U(1) model
International Nuclear Information System (INIS)
Campbell, B.; Hagelin, J.; Nanopoulos, D.V.; Olive, K.A.
1988-01-01
We consider the possibilities for generating a baryon asymmetry in the early universe in a flipped SU(5) x U(1) model inspired by the superstring. Depending on the temperature of the radiation background after inflation, we can distinguish between two scenarios for baryogenesis: (1) After reheating the original SU(5) x U(1) symmetry is restored, or there was no inflation at all; (2) reheating after inflation is rather weak and SU(5) x U(1) is broken. In either case the asymmetry is generated by the out-of-equilibrium decays of a massive SU(3) x SU(2) x U(1) singlet field φ{sub m}. In the flipped SU(5) x U(1) model, gauge symmetry breaking is triggered by strong coupling phenomena, and is in general accompanied by the production of entropy. We examine constraints on the reheating temperature and the strong coupling scale in each of the scenarios. (orig.)
Evaluation of EOR Processes Using Network Models
DEFF Research Database (Denmark)
Winter, Anatol; Larsen, Jens Kjell; Krogsbøll, Anette
1998-01-01
The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels), including estimation of their "petrophysical" properties (e.g. absolute permeability). 3) Mathematical modelling and computer studies of multiphase transport through pore space using mathematical network models. 4) Investigation of the link between pore-scale and macroscopic recovery mechanisms.
The Relevance of the CIPP Evaluation Model for Educational Accountability.
Stufflebeam, Daniel L.
The CIPP Evaluation Model was originally developed to provide timely information in a systematic way for decision making, which is a proactive application of evaluation. This article examines whether the CIPP model also serves the retroactive purpose of providing information for accountability. Specifically, can the CIPP Model adequately assist…
The Use of AMET and Automated Scripts for Model Evaluation
The Atmospheric Model Evaluation Tool (AMET) is a suite of software designed to facilitate the analysis and evaluation of meteorological and air quality models. AMET matches the model output for particular locations to the corresponding observed values from one or more networks ...
SIMPLEBOX: a generic multimedia fate evaluation model
van de Meent D
1993-01-01
This document describes the technical details of the multimedia fate model SimpleBox, version 1.0 (930801). SimpleBox is a multimedia box model of what is commonly referred to as a "Mackay-type" model; it assumes spatially homogeneous environmental compartments (air, water, suspended…
Educational game models: conceptualization and evaluation ...
African Journals Online (AJOL)
The relationship between educational theories, game design and game development are used to develop models for the creation of complex learning environments. The Game Object Model (GOM), that marries educational theory and game design, forms the basis for the development of the Persona Outlining Model (POM) ...
2017-09-01
MODELING TO IMPROVE HUMAN DECISION-MAKING DURING TEST AND EVALUATION RANGE CONTROL. Thesis by William Carlson, September 2017. Test and evaluation (T&E) managers often control testing via heuristics (i.e., using experience and lessons learned from previous testing to modify existing…
Ghassoun, Yahya; Löwner, Marc-Oliver
2017-10-01
Total particle number concentration (TNC) was studied in a 1 × 2 km area in Berlin, the capital of Germany, using three Land Use Regression (LUR) models. TNC estimates were established and compared using one 2D-LUR model and two 3D-LUR models. All models predict TNC using urban morphological (2D or 3D, respectively) and additional semantic parameters. The 2D and semantic parameters were derived from OpenStreetMap (OSM) data, whereas the 3D parameters were derived from a CityGML-based 3D city model. While all models are capable of depicting the spatial variation of TNC across the study area, the two 3D-LUR models showed better results than the 2D-LUR model. The 2D-LUR model explained 74% of the variance of TNC for the full data set with a root mean square error (RMSE) of 4014 cm-3, while the 3D-LUR explained 79% of the variance with an RMSE of 3477 cm-3. The further introduction of a new spatial parameter, the Frontal Area Index (FAI), which represents the dynamic factor of wind direction, improved the 3D-LUR to explain 82% of the variance with an RMSE of 3389 cm-3. Furthermore, the semantic parameters (e.g. street type) played a significant role in all models.
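A land use regression of this kind is, at its core, a linear model from morphological predictors to TNC, scored by explained variance and RMSE. A minimal one-predictor sketch on synthetic data (the predictor name and all values are hypothetical, not from the study):

```python
import math

# Hypothetical morphological predictor and observed TNC (cm^-3) at six sites.
street_density = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
tnc = [11000.0, 13000.0, 15200.0, 16800.0, 19100.0, 21000.0]

# Ordinary least squares for a single predictor via the normal equations.
n = len(street_density)
mean_x = sum(street_density) / n
mean_y = sum(tnc) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(street_density, tnc))
sxx = sum((x - mean_x) ** 2 for x in street_density)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# Score the fit with RMSE, the same error measure reported in the abstract.
pred = [intercept + slope * x for x in street_density]
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, tnc)) / n)
print(round(slope, 1), round(rmse, 1))
```

The study's 2D vs. 3D comparison amounts to swapping in richer predictor sets (building volumes, FAI) and checking whether RMSE drops and explained variance rises, as it did from 74% to 82%.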
Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco
2017-10-01
The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique, to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably from the PFU counts determined by the ISO method after only 3 hours of incubation. When the number of plaques detected after 3 hours was between 4 and 26 PFU, the linear fit was (1.48 × Counts(3 h) + 1.97); for values >26 PFU, the fit was (1.18 × Counts(3 h) + 2.95). If fewer than 4 PFU were detected after 3 hours, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
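The two calibration lines quoted above can be combined into a single predictor. The coefficients are those reported in the abstract; the handling of counts below 4 PFU (fall back to full incubation) is our reading of the text, not a formula from the paper:

```python
def predict_final_pfu(counts_3h: int):
    """Predict the (18 +/- 3) h PFU count from a 3 h reading using the
    two linear fits reported in the abstract. Returns None when the
    3 h count is too low for either fit to apply."""
    if counts_3h < 4:
        return None  # too few plaques at 3 h: incubate the full (18 +/- 3) h
    if counts_3h <= 26:
        return 1.48 * counts_3h + 1.97   # fit for 4-26 PFU at 3 h
    return 1.18 * counts_3h + 2.95       # fit for >26 PFU at 3 h

print(predict_final_pfu(10))  # roughly 16.8 predicted PFU
```

The two slopes being above 1 reflects that slower-appearing plaques continue to emerge after the 3-hour read.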
Issues in Value-at-Risk Modeling and Evaluation
J. Daníelsson (Jón); C.G. de Vries (Casper); B.N. Jorgensen (Bjørn); P.F. Christoffersen (Peter); F.X. Diebold (Francis); T. Schuermann (Til); J.A. Lopez (Jose); B. Hirtle (Beverly)
1998-01-01
Discusses the issues in value-at-risk modeling and evaluation: the value of value at risk; horizon problems and extreme events in financial risk management; methods of evaluating value-at-risk estimates.
Application of random number generators in genetic algorithms to improve rainfall-runoff modelling
Czech Academy of Sciences Publication Activity Database
Chlumecký, M.; Buchtele, Josef; Richta, K.
2017-01-01
Roč. 553, October (2017), s. 350-355 ISSN 0022-1694 Institutional support: RVO:67985874 Keywords : genetic algorithm * optimisation * rainfall-runoff modeling * random generator Subject RIV: DA - Hydrology ; Limnology OBOR OECD: Hydrology Impact factor: 3.483, year: 2016 https://ac.els-cdn.com/S0022169417305516/1-s2.0-S0022169417305516-main.pdf?_tid=fa1bad8a-bd6a-11e7-8567-00000aab0f27&acdnat=1509365462_a1335d3d997e9eab19e23b1eee977705
Autism spectrum disorder model mice: Focus on copy number variation and epigenetics.
Nakai, Nobuhiro; Otsuka, Susumu; Myung, Jihwan; Takumi, Toru
2015-10-01
Autism spectrum disorder (ASD) is a growing concern in socially developed countries. ASD is a neuropsychiatric disorder of genetic origin with a high prevalence of 1%-2%. Patients with ASD characteristically show impaired social skills. Today, many genetic studies have identified numerous susceptibility genes and genetic loci associated with ASD. Although some genetic factors can lead to the abnormal brain function linked to ASD phenotypes, the pathogenic mechanism of ASD is still unclear. Here, we discuss a new mouse model for ASD as an advanced tool for understanding the mechanism of ASD.
Aschbacher, Kirstin; Milush, Jeffrey M; Gilbert, Amanda; Almeida, Carlos; Sinclair, Elizabeth; Epling, Lorrie; Grenon, S Marlene; Marco, Elysa J; Puterman, Eli; Epel, Elissa
2017-01-01
Chronic psychological stress is a risk factor for cardiovascular disease and mortality. Circulating hematopoietic progenitor cells (CPCs) maintain vascular homeostasis, correlate with preclinical atherosclerosis, and prospectively predict cardiovascular events. We hypothesized that (1) chronic caregiving stress is related to reduced CPC number, and (2) this may be explained in part by negative interactions within the family. We investigated levels of stress and CPCs in 68 healthy mothers: 31 of these had children with an autism spectrum disorder (M-ASD) and 37 had neurotypical children (M-NT). Participants provided fasting blood samples, and CD45 + CD34 + KDR + and CD45 + CD133 + KDR + CPCs were assayed by flow cytometry. We averaged the blom-transformed scores of both CPC types to create one index. Participants completed the perceived stress scale (PSS) and the inventory for depressive symptoms (IDS), and reported on daily interactions with their children and partners, averaged over 7 nights. M-ASD exhibited lower CPCs than M-NT (Cohen's d=0.83; p⩽0.01), controlling for age, BMI, and physical activity. Across the whole sample, positive interactions were related to higher CPCs, and negative interactions to lower CPCs (all p's significant). Among caregivers, child-related interpersonal stress appears to be a key psychological predictor of stress-related CVD risk. Copyright © 2016 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Y. I. Troitskaya
2006-01-01
Full Text Available The objective of the present paper is to develop a theoretical model describing the evolution of a turbulent wake behind a towed sphere in a stably stratified fluid at large Froude and Reynolds numbers. The wake flow is considered as a quasi-two-dimensional (2-D) turbulent jet flow whose dynamics is governed by the momentum transfer from the mean flow to a quasi-2-D sinuous mode growing due to hydrodynamic instability. The model employs a quasi-linear approximation to describe this momentum transfer. The model scaling coefficients are defined with the use of available experimental data, and the performance of the model is verified by comparison with the results of a direct numerical simulation of a 2-D turbulent jet flow. The model prediction for the temporal development of the wake axis mean velocity is found to be in good agreement with the experimental data obtained by Spedding (1997).
Cerebellar Plasticity and Motor Learning Deficits in a Copy Number Variation Mouse Model of Autism
Piochon, Claire; Kloth, Alexander D; Grasselli, Giorgio; Titley, Heather K; Nakayama, Hisako; Hashimoto, Kouichi; Wan, Vivian; Simmons, Dana H; Eissa, Tahra; Nakatani, Jin; Cherskov, Adriana; Miyazaki, Taisuke; Watanabe, Masahiko; Takumi, Toru; Kano, Masanobu; Wang, Samuel S-H; Hansel, Christian
2014-01-01
A common feature of autism spectrum disorder (ASD) is the impairment of motor control and learning, occurring in a majority of children with autism, consistent with perturbation in cerebellar function. Here we report alterations in motor behavior and cerebellar synaptic plasticity in a mouse model (patDp/+) for the human 15q11-13 duplication, one of the most frequently observed genetic aberrations in autism. These mice show ASD-resembling social behavior deficits. We find that in patDp/+ mice delay eyeblink conditioning—a form of cerebellum-dependent motor learning—is impaired, and observe deregulation of a putative cellular mechanism for motor learning, long-term depression (LTD) at parallel fiber-Purkinje cell synapses. Moreover, developmental elimination of surplus climbing fibers—a model for activity-dependent synaptic pruning—is impaired. These findings point to deficits in synaptic plasticity and pruning as potential causes for motor problems and abnormal circuit development in autism. PMID:25418414
A simple model for straggling evaluation
Wilson, J W; Tai, H; Tripathi, R K
2002-01-01
Simple straggling models have largely been abandoned in favor of Monte Carlo simulations of straggling, which are accurate but time consuming, limiting their application in practice. The difficulty with simple analytic models is their failure to give accurate values past 85% of the particle range. A simple model is derived herein, based on a second-order approximation, upon which rapid analysis tools are developed for improved understanding of the charged-particle transmission properties of materials.
A Descriptive Evaluation of Software Sizing Models
1987-09-01
compensate for a lack of understanding of a software job to be done. 1.3 REPORT OUTLINE: The guiding principle for model selection for this paper was… Model size estimates for the CAiSS sensitivity model (MODEL: SLOC): ESD 37,600+; SPQR 35,910; BYL 22,402; PRICE SZ 21,410; ASSET-R 11,943; SSM 11,700. Note: Each of the nine quantitative…
A Regional Climate Model Evaluation System
National Aeronautics and Space Administration — Develop a packaged data management infrastructure for the comparison of generated climate model output to existing observational datasets that includes capabilities...
A Regional Climate Model Evaluation System Project
National Aeronautics and Space Administration — Develop a packaged data management infrastructure for the comparison of generated climate model output to existing observational datasets that includes capabilities...
Katagiri, Kenta; Matsukura, Yu; Muneta, Takeshi; Ozeki, Nobutake; Mizuno, Mitsuru; Katano, Hisako; Sekiya, Ichiro
2017-04-01
To develop an in vitro model, the "suspended synovium culture model," to demonstrate the mobilization of mesenchymal stem cells (MSCs) from the synovium into a noncontacting culture dish through culture medium; in addition, to examine which synovium, fibrous or adipose, released more MSCs in knees with osteoarthritis. Human synovial tissue was harvested during total knee arthroplasty from the knee joints of 34 patients with osteoarthritis (28 patients: only fibrous synovium; 6 patients: fibrous and adipose synovium). One gram of synovium was suspended with a thread in a bottle containing 40 mL of culture medium, with a 3.5-cm-diameter culture dish at the bottom. After 7 days, the culture dish in the bottle was examined. For the cells harvested, multipotentiality and surface epitopes were analyzed. The numbers of colonies derived from fibrous synovium and adipose synovium were also compared. Colonies of spindle-shaped cells were observed in the culture dish in all 28 donors. Colonies numbered 26 on average, and the cells derived from colony-forming cells had multipotentiality for chondrogenesis, adipogenesis and calcification, and surface epitopes similar to MSCs. The number of colonies was significantly higher for fibrous synovium than for adipose synovium (P < .05, n = 6). We developed a suspended synovium culture model. Suspended synovium was able to release MSCs into a noncontacting culture dish through the medium in a bottle. Fibrous synovium was found to release greater numbers of MSCs than adipose synovium in our culture model. CLINICAL RELEVANCE: This model could be a valuable tool for screening drugs capable of releasing MSCs from the synovium into synovial fluid. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Evaluation of long-range transport models in NOVANA; Evaluering af langtransportmodeller i NOVANA
Energy Technology Data Exchange (ETDEWEB)
Frohn, L.M.; Brandt, J.; Christensen, J.H.; Geels, C.; Hertel, O.; Skjoeth, C.A.; Ellemann, T.
2007-06-15
The Lagrangian model ACDEP, which was applied in BOP/NOVA/NOVANA during the period 1995-2004, has been replaced by the more modern Eulerian model DEHM. The new model has a number of advantages, such as a better description of the three-dimensional atmospheric transport, a larger domain, the possibility of high spatial resolution in the calculations and a more detailed description of photochemical processes and dry deposition. Prior to the replacement, the results of the two models were compared and evaluated using European and Danish measurements. Calculations were performed with both models applying the same meteorological and emission input, for Europe for the year 2000 as well as for Denmark for the period 2000-2003. The European measurements applied in the present evaluation were obtained through EMEP. Using these measurements, DEHM and ACDEP have been compared with respect to daily and yearly mean concentrations of ammonia (NH{sub 3}), ammonium (NH{sub 4}{sup +}), the sum of NH{sub 3} and NH{sub 4}{sup +} (SNH), nitric acid (HNO{sub 3}), nitrate (NO{sub 3}{sup -}), the sum of HNO{sub 3} and NO{sub 3}{sup -} (SNO{sub 3}), nitrogen dioxide (NO{sub 2}), ozone (O{sub 3}), sulphur dioxide (SO{sub 2}) and sulphate (SO{sub 4}{sup 2-}), as well as the hourly mean and daily maximum concentrations of O{sub 3}. Furthermore, the daily and yearly total values of precipitation and wet deposition of NH{sub 4}{sup +}, NO{sub 3}{sup -} and SO{sub 4}{sup 2-} have been compared for the two models. The statistical parameters applied in the comparison are correlation, bias and fractional bias. The result of the comparison with the EMEP data is that DEHM achieves better correlation coefficients for all chemical parameters (16 parameters in total) when daily values are analysed, and for 15 out of 16 parameters when yearly values are taken into account. With respect to the fractional bias, the results obtained with DEHM are better than the corresponding results…
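The comparison statistics named above (correlation, bias, fractional bias) are standard model-evaluation metrics. Since the report does not spell out its formulas, the common definitions are assumed in this sketch:

```python
import math

def bias(model, obs):
    """Mean difference between modelled and observed values."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def fractional_bias(model, obs):
    """Mean difference normalised by the average of the two means,
    so over- and under-prediction are bounded between -2 and 2."""
    mean_m = sum(model) / len(model)
    mean_o = sum(obs) / len(obs)
    return 2.0 * (mean_m - mean_o) / (mean_m + mean_o)

def correlation(model, obs):
    """Pearson correlation between modelled and observed series."""
    n = len(obs)
    mean_m = sum(model) / n
    mean_o = sum(obs) / n
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs))
    sd_m = math.sqrt(sum((m - mean_m) ** 2 for m in model))
    sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs))
    return cov / (sd_m * sd_o)

obs = [1.0, 2.0, 3.0, 4.0]   # illustrative observed concentrations
mod = [1.2, 1.9, 3.4, 4.1]   # illustrative modelled concentrations
print(bias(mod, obs), fractional_bias(mod, obs), correlation(mod, obs))
```

Fractional bias is preferred over raw bias when comparing species whose concentrations span different magnitudes, as the nitrogen and sulphur compounds here do.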
Holt, Carson; Losic, Bojan; Pai, Deepa; Zhao, Zhen; Trinh, Quang; Syam, Sujata; Arshadi, Niloofar; Jang, Gun Ho; Ali, Johar; Beck, Tim; McPherson, John; Muthuswamy, Lakshmi B
2014-03-15
Copy number variations (CNVs) are a major source of genomic variability and are especially significant in cancer. Until recently, microarray technologies were used to characterize CNVs in genomes. However, advances in next-generation sequencing technology offer significant opportunities to deduce copy number directly from genome sequencing data. Unfortunately, cancer genomes differ from normal genomes in several aspects that make them far less amenable to copy number detection. For example, cancer genomes are often aneuploid and an admixture of diploid/non-tumor cell fractions. Also, patient-derived xenograft models can be laden with mouse contamination that strongly affects accurate assignment of copy number. Hence, there is a need to develop analytical tools that can take cancer-specific parameters into account when detecting CNVs directly from genome sequencing data. We have developed WaveCNV, a software package that identifies copy number alterations by detecting CNV breakpoints using translation-invariant discrete wavelet transforms and assigns digitized copy numbers to each event using next-generation sequencing data. We also assign alleles, specifying the chromosomal ratio following duplication/loss. We verified copy number calls using both microarray (correlation coefficient 0.97) and quantitative polymerase chain reaction (correlation coefficient 0.94) and found them to be highly concordant. We demonstrate its utility on pancreatic primary and xenograft sequencing data. Source code and executables are available at https://github.com/WaveCNV. The segmentation algorithm is implemented in MATLAB, and copy number assignment is implemented in Perl. lakshmi.muthuswamy@gmail.com Supplementary data are available at Bioinformatics online.
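The breakpoint-detection idea above (large translation-invariant wavelet detail coefficients marking copy-number transitions in a read-depth profile) can be illustrated with a minimal undecimated Haar sketch. This is a toy simplification of WaveCNV's actual MATLAB segmentation; all names, scales and thresholds below are assumptions for illustration only.

```python
def haar_detail(signal, scale):
    """Undecimated (translation-invariant) Haar detail coefficients:
    at every position, the mean over the next `scale` points minus the
    mean over the previous `scale` points. Large magnitudes flag
    candidate copy-number breakpoints in a read-depth log-ratio."""
    n = len(signal)
    out = [0.0] * n
    for i in range(scale, n - scale):
        left = sum(signal[i - scale:i]) / scale
        right = sum(signal[i:i + scale]) / scale
        out[i] = right - left
    return out

def call_breakpoints(signal, scale=4, thresh=0.5):
    """Positions where the detail coefficient is a local maximum in
    magnitude and exceeds the (illustrative) threshold."""
    d = haar_detail(signal, scale)
    return [i for i in range(1, len(d) - 1)
            if abs(d[i]) > thresh
            and abs(d[i]) >= abs(d[i - 1])
            and abs(d[i]) >= abs(d[i + 1])]
```

On a flat profile that steps from 0 to 1 at position 20, this sketch reports the single breakpoint at the step.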
van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik
2017-04-01
Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Four databases were searched for HEEs of PM-based strategies. Two reviewers independently selected eligible articles. A checklist was compiled to score items focusing on general characteristics of HEEs of PMs, model characteristics and quality of HEEs, evidence on PMs typically used in the HEEs, and the specific challenges in performing HEEs of PMs. After screening 791 abstracts, 171 full texts, and reference checking, 40 eligible HEEs evaluating 60 PMs were identified. In these HEEs, PM strategies were compared with current practice (n = 32; 80%), to other stratification methods for patient management (n = 19; 48%), to an extended PM (n = 9; 23%), or to alternative PMs (n = 5; 13%). The PMs guided decisions on treatment (n = 42; 70%), further testing (n = 18; 30%), or treatment prioritization (n = 4; 7%). For 36 (60%) PMs, only a single decision threshold was evaluated. Costs of risk prediction were ignored for 28 (46%) PMs. Uncertainty in outcomes was assessed using probabilistic sensitivity analyses in 22 (55%) HEEs. Despite the huge number of PMs in the medical literature, HEE of PMs remains rare. In addition, we observed great variety in their quality and methodology, which may complicate interpretation of HEE results and implementation of PMs in practice. Guidance on HEE of PMs could encourage and standardize their application and enhance methodological quality, thereby improving adequate use of PM strategies. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
QUALITY OF AN ACADEMIC STUDY PROGRAMME - EVALUATION MODEL
Directory of Open Access Journals (Sweden)
Mirna Macur
2016-01-01
Full Text Available The quality of an academic study programme is evaluated by many: by employees (internal evaluation) and by external evaluators: experts, agencies and organisations. Internal and external evaluation of an academic programme follow a written structure that resembles one of the quality models. We believe the quality models (mostly derived from the EFQM excellence model) do not fit non-profit activities, policies and programmes very well, because these are much more complex than the environments from which the quality models derive (for example, the assembly line). The quality of an academic study programme is very complex and is understood differently by various stakeholders, so we present dimensional evaluation in this article. Dimensional evaluation, as opposed to component and holistic evaluation, is a form of analytical evaluation in which the quality or value of the evaluand is determined by looking at its performance on multiple dimensions of merit or evaluation criteria. First, the stakeholders of a study programme and their views, expectations and interests are presented, followed by the evaluation criteria. Both are then joined into an evaluation model revealing which evaluation criteria can and should be evaluated by which stakeholder. The main research questions are posed and the research method for each dimension is listed.
Evaluation of models for assessing Medicago sativa L. hay quality
African Journals Online (AJOL)
UFS Campus
… model of Weiss et al. (1992), using lignin to determine truly digestible NDF … quality evaluation model for commercial application … The almost perfect relationship (r = 0.98; Table 1) between TDNlig of lucerne hay and MY, predicted …
iFlorida model deployment final evaluation report
2009-01-01
This document is the final report for the evaluation of the USDOT-sponsored Surface Transportation Security and Reliability Information System Model Deployment, or iFlorida Model Deployment. This report discusses findings in the following areas: ITS ...
evaluation of models for assessing groundwater vulnerability
African Journals Online (AJOL)
DR. AMINU
… applied models for groundwater vulnerability assessment mapping … of other models have not been applied to groundwater studies in Nigeria, unlike other parts of … [table residue, Aller et al. (1987): soil-media ratings such as clay loam 3, muck 2, nonshrinking and nonaggregated clay 1; Table 2: assigned weights for DRASTIC parameters]
Modeling, simulation and performance evaluation of parabolic ...
African Journals Online (AJOL)
A model of a parabolic trough power plant is presented, taking into consideration the different losses associated with collection of the solar irradiance as well as thermal losses. MATLAB software is employed to model the power plant at reference state points. The code is then used to find the different reference values which are ...
Karimi, Leila; Ghassemi, Abbas
2016-07-01
Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of the ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is not a scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should be also developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance in a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent ions and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal. Copyright © 2016 Elsevier Ltd. All rights reserved.
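The role of the dimensionless numbers mentioned above can be sketched as follows: a Sherwood-number correlation gives a mass-transfer coefficient, from which a limiting current density follows. The correlation coefficients and property values below are illustrative placeholders, not the fitted values from this study.

```python
F = 96485.0  # Faraday constant, C/mol

def limiting_current_density(u, c, D=1.6e-9, nu=1.0e-6, d_h=1.0e-3,
                             a=0.29, b=0.5, c_exp=0.33, z=1):
    """Limiting current density (A/m^2) from an assumed Sherwood correlation
    Sh = a * Re^b * Sc^c. u: flow velocity (m/s); c: bulk concentration
    (mol/m^3); D: diffusivity; nu: kinematic viscosity; d_h: hydraulic
    diameter; z: ion charge. All default coefficients are illustrative."""
    Re = u * d_h / nu            # Reynolds number (hydraulic conditions)
    Sc = nu / D                  # Schmidt number (fluid properties)
    Sh = a * Re ** b * Sc ** c_exp  # Sherwood number (mass transfer)
    k = Sh * D / d_h             # mass-transfer coefficient, m/s
    return z * F * k * c
```

With b > 0 the sketch reproduces the qualitative behaviour the models exploit: higher velocity thins the boundary layer and raises the limiting current density.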
The percentage of macrophage numbers in rat model of sciatic nerve crush injury
Directory of Open Access Journals (Sweden)
Satrio Wicaksono
2016-02-01
Full Text Available ABSTRACT Excessive accumulation of macrophages in sciatic nerve fascicles inhibits regeneration of peripheral nerves. The aim of this study was to determine the percentage of macrophages inside and outside the fascicles at the proximal segment, at the site of injury, and at the distal segment in a rat model of sciatic nerve crush injury. Thirty male Wistar rats (3 months old, 200-230 g) were divided into a sham-operation group and a crush injury group. Termination was performed on days 3, 7, and 14 after crush injury. Immunohistochemical examination was done using anti-CD68 antibody. Immunopositive and immunonegative cells were counted on three representative fields for the extrafascicular and intrafascicular areas of the proximal, injury and distal segments. The data are presented as percentages of immunopositive cells. The percentage of macrophages was significantly increased in the crush injury group compared to the sham-operation group in all segments of the peripheral nerve. While the percentage of macrophages outside the fascicles in all segments of the sciatic nerve, and within the fascicles in the proximal segment, peaked on day 3, the percentage of macrophages within the fascicles at the site of injury and in the distal segment peaked later, on day 7. In conclusion, accumulation of macrophages outside the nerve fascicles occurs at the beginning of the injury and is followed later by accumulation of macrophages within the nerve fascicles.
Evaluating Energy Efficiency Policies with Energy-Economy Models
Energy Technology Data Exchange (ETDEWEB)
Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.
2010-08-01
The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.
Son, Reuben S; Gowrishankar, Thiruvallur R; Smith, Kyle C; Weaver, James C
2016-03-01
Pulse trains are widely used in electroporation (EP) for both general biomedical research and clinical applications such as nonthermal tumor ablation. Here we use a computational method based on a meshed transport network to investigate a cell system model's response to a train of identical, evenly spaced electric field pulses. We obtain an unexpected result: the number of membrane pores decreases during the application of twenty 1.0 kV/cm, 100 μs pulses, delivered at 1 Hz. This pulse train initially creates 13,000 membrane pores, but pore number decreases by a factor of 15 to about 830 pores throughout subsequent pulses. We conclude that pore number can greatly diminish during a train of identical pulses, with direct consequences for the transport of solutes across an electroporated membrane. Although application of additional pulses is generally intended to increase the effects of EP, we show that these pulses do not significantly enhance calcium delivery into the cell. Instead, calcium delivery can be significantly increased by varying inter-pulse intervals. We show that inserting a 300-s interruption midway in a widely used eight-pulse train (a protocol for electrosensitization) yields a ∼ twofold delivery increase. Overall, our modeling shows support for electrosensitization, in which multiple pulse protocols that maximize pore number over time can yield significant increase of transport of calcium compared to standard pulse trains.
Yau, C; Papaspiliopoulos, O; Roberts, G O; Holmes, C
2011-01-01
We consider the development of Bayesian Nonparametric methods for product partition models such as Hidden Markov Models and change point models. Our approach uses a Mixture of Dirichlet Process (MDP) model for the unknown sampling distribution (likelihood) for the observations arising in each state and a computationally efficient data augmentation scheme to aid inference. The method uses novel MCMC methodology which combines recent retrospective sampling methods with the use of slice sampler variables. The methodology is computationally efficient, both in terms of MCMC mixing properties, and robustness to the length of the time series being investigated. Moreover, the method is easy to implement requiring little or no user-interaction. We apply our methodology to the analysis of genomic copy number variation.
Hydrologic Evaluation of Landfill Performance (HELP) Model
The program models rainfall, runoff, infiltration, and other water pathways to estimate how much water builds up above each landfill liner. It can incorporate data on vegetation, soil types, geosynthetic materials, initial moisture conditions, slopes, etc.
Benedek, Judit; Papp, Gábor; Kalmár, János
2018-04-01
Beyond the rectangular prism, the polyhedron can also be used as a discrete volume element to model the density distribution inside 3D geological structures. The evaluation of the closed formulae given for the gravitational potential and its higher-order derivatives, however, needs about twice the runtime of the rectangular prism computations. Although the "more detailed the better" principle is generally accepted, it is strictly true only for errorless data. As soon as errors are present, any forward gravitational calculation from the model is only a possible realization of the true force field, on the significance level determined by the errors. So if one really considers the reliability of the input data used in the calculations, then sometimes "less" can be equivalent to "more" in a statistical sense. As a consequence, the processing time of the related complex formulae can be significantly reduced by optimizing the number of volume elements based on accuracy estimates of the input data. New algorithms are proposed to minimize the number of model elements defined both in local and in global coordinate systems. Common gravity field modelling programs generate optimized models for every computation point (dynamic approach), whereas the static approach provides only one optimized model for all. Based on the static approach, two different algorithms were developed. The grid-based algorithm starts with the maximum-resolution polyhedral model defined by 3-3 points of each grid cell and generates a new polyhedral surface defined by points selected from the grid. The other algorithm is more general; it also works for irregularly distributed data (scattered points) connected by triangulation. Beyond the description of the optimization schemes, some applications of these algorithms in regional and local gravity field modelling are presented too. The efficiency of the static approaches may provide even more than 90% reduction in computation time in favourable
Dewi, Arianti Puspita; Kusmayadi, Tri Atmojo; Usodo, Budi
2014-01-01
The purposes of this research were to determine: (1) which students' mathematics achievement would be better: those given the NHT MM, NHT BD, or direct learning model; (2) which students' mathematics achievement would be better: those with high, medium or low interpersonal intelligence; (3) which students' mathematics achievement would be better: those with high, medium, or low interpersonal intelligence within each learning model; (4) which students' mathematics achievement would be better, ...
Evaluation model development for sprinkler irrigation uniformity ...
African Journals Online (AJOL)
A new evaluation method with accompanying software was developed to precisely calculate uniformity from catch-can test data, assuming sprinkler distribution data to be a continuous variable. Two interpolation steps are required to compute unknown water application depths at grid distribution points from radial ...
Model for Evaluating Teacher and Trainer Competences
Carioca, Vito; Rodrigues, Clara; Saude, Sandra; Kokosowski, Alain; Harich, Katja; Sau-Ek, Kristiina; Georgogianni, Nicole; Levy, Samuel; Speer, Sandra; Pugh, Terence
2009-01-01
A lack of common criteria for comparing education and training systems makes it difficult to recognise qualifications and competences acquired in different environments and levels of training. A valid basis for defining a framework for evaluating professional performance in European educational and training contexts must therefore be established.…
Inclusive integral evaluation for mammograms using the hierarchical fuzzy integral (HFI) model
International Nuclear Information System (INIS)
Amano, Takashi; Yamashita, Kazuya; Arao, Shinichi; Kitayama, Akira; Hayashi, Akiko; Suemori, Shinji; Ohkura, Yasuhiko
2000-01-01
Physical factors (physically evaluated values) and psychological factors (fuzzy measurements) of breast x-ray images were comprehensively evaluated by applying breast x-ray images to an extended stratum-type fuzzy integrating model. In addition, x-ray images were evaluated collectively by integrating the quality (sharpness, graininess, and contrast) of x-ray images and three representative shadows (fibrosis, calcification, tumor) in the breast x-ray images. We selected the most appropriate system for radiography of the breast from three kinds of intensifying screens and film systems for evaluation by this method and investigated the relationship between the breast x-ray images and noise equivalent quantum number, which is called the overall physical evaluation method, and between the breast x-ray images and psychological evaluation by a visual system with a stratum-type fuzzy integrating model. We obtained a linear relationship between the breast x-ray image and noise-equivalent quantum number, and linearity between the breast x-ray image and psychological evaluation by the visual system. Therefore, the determination of fuzzy measurement, which is a scale for fuzzy evaluation of psychological factors of the observer, and physically evaluated values with a stratum-type fuzzy integrating model enabled us to make a comprehensive evaluation of x-ray images that included both psychological and physical aspects. (author)
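A single level of such a fuzzy-integral aggregation can be sketched with the Sugeno integral, a common building block of hierarchical fuzzy-integral models; the criterion names echo the image-quality factors above, but the scores and the example fuzzy measure are invented for illustration.

```python
def sugeno_integral(scores, measure):
    """Sugeno fuzzy integral of criterion scores h(x) in [0, 1] with respect
    to a fuzzy measure g defined on subsets of the criteria.
    scores: dict criterion -> score; measure: dict frozenset -> g value.
    The integral is max over i of min(h(x_(i)), g(top-i criteria))."""
    items = sorted(scores, key=scores.get, reverse=True)  # scores descending
    best = 0.0
    subset = set()
    for x in items:
        subset.add(x)  # grow the coalition of best-scoring criteria
        best = max(best, min(scores[x], measure[frozenset(subset)]))
    return best
```

Unlike a weighted average, the fuzzy measure can value a coalition of criteria (e.g. sharpness together with contrast) more than the sum of its parts, which is what lets the model capture interacting psychological factors.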
Islam, Mohammad Mafijul; Alam, Morshed; Tariquzaman, Md; Kabir, Mohammad Alamgir; Pervin, Rokhsona; Begum, Munni; Khan, Md Mobarak Hossain
2013-01-08
Malnutrition is one of the principal causes of child mortality in developing countries including Bangladesh. To our knowledge, most of the available studies that addressed the issue of malnutrition among under-five children considered categorical (dichotomous/polychotomous) outcome variables and applied logistic regression (binary/multinomial) to find their predictors. In this study the malnutrition variable (i.e. the outcome) is defined as the number of under-five malnourished children in a family, which is a non-negative count variable. The purposes of the study are (i) to demonstrate the applicability of the generalized Poisson regression (GPR) model as an alternative to other statistical methods and (ii) to find some predictors of this outcome variable. The data are extracted from the Bangladesh Demographic and Health Survey (BDHS) 2007. Briefly, this survey employs a nationally representative sample based on a two-stage stratified sample of households. A total of 4,460 under-five children were analysed using various statistical techniques, namely the Chi-square test and the GPR model. The GPR model (as compared to the standard Poisson regression and negative binomial regression) is found to be justified for the above-mentioned outcome variable because of its under-dispersion (variance less than the mean). The significant predictors include mother's education, father's education, wealth index, sanitation status, source of drinking water, and total number of children ever born to a woman. Consistency of our findings with many other studies suggests that the GPR model is an ideal alternative to other statistical models for analysing the number of under-five malnourished children in a family. Strategies based on the significant predictors may improve the nutritional status of children in Bangladesh.
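The under-dispersion property that motivates the GPR model can be checked numerically from the generalized Poisson pmf, P(Y=y) = θ(θ+λy)^(y-1) e^(-θ-λy)/y!, where λ < 0 gives variance below the mean and λ = 0 recovers the ordinary Poisson. A minimal sketch (the truncation of the pmf at negative λ is handled crudely by zeroing, and the parameter values in the test are illustrative):

```python
import math

def gen_poisson_pmf(y, theta, lam):
    """Generalized Poisson pmf. lam < 0 -> under-dispersion; lam = 0 -> Poisson.
    For lam < 0 the support is truncated where theta + lam*y <= 0."""
    if theta + lam * y <= 0:
        return 0.0
    return (theta * (theta + lam * y) ** (y - 1)
            * math.exp(-theta - lam * y) / math.factorial(y))

def moments(theta, lam, ymax=50):
    """Numerical mean and variance over the (truncated) support."""
    ps = [gen_poisson_pmf(y, theta, lam) for y in range(ymax)]
    mean = sum(y * p for y, p in zip(range(ymax), ps))
    var = sum((y - mean) ** 2 * p for y, p in zip(range(ymax), ps))
    return mean, var
```

For λ = 0 the numerical mean and variance both equal θ, while a negative λ drives the variance below the mean, which is the pattern the count of malnourished children per family exhibited.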
Simulation of electric power conservation strategies: model of economic evaluation
International Nuclear Information System (INIS)
Pinhel, A.C.C.
1992-01-01
A methodology is presented for the economic evaluation model for energy conservation programs to be executed by the National Program of Electric Power Conservation. From data such as forecasts of conserved energy, tariffs, energy costs and budget, the model calculates economic indexes for the programs, allowing the evaluation of economic impacts on the electric sector. (C.G.C.)
Model evaluation and optimisation of nutrient removal potential for ...
African Journals Online (AJOL)
Performance of sequencing batch reactors for simultaneous nitrogen and phosphorus removal is evaluated by means of model simulation, using the activated sludge model, ASM2d, involving anoxic phosphorus uptake, recently proposed by the IAWQ Task group. The evaluation includes all major process configurations ...
The Use of AMET & Automated Scripts for Model Evaluation
Brief overview of EPA’s new CMAQ website to be launched publically in June, 2017. Details on the upcoming release of the Atmospheric Model Evaluation Tool (AMET) and the creation of automated scripts for post-processing and evaluating air quality model data.
Rhode Island Model Evaluation & Support System: Building Administrator. Edition III
Rhode Island Department of Education, 2015
2015-01-01
Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…
Energy Technology Data Exchange (ETDEWEB)
Beaujean, Frederik; Caldwell, Allen [Max-Planck-Institut fuer Physik, Muenchen (Germany); Kollar, Daniel [CERN, Genf (Switzerland); Kroeninger, Kevin [Georg-August-Universitaet, Goettingen (Germany)
2011-07-01
In the analysis of experimental results it is often necessary to pass a judgment on the validity of a model as a representation of the data. A quantitative procedure to decide whether a model provides a good description of data is often based on a specific test statistic and a p-value summarizing both the data and the statistic's sampling distribution. Although there is considerable confusion concerning the meaning of p-values, leading to their misuse, they are nevertheless of practical importance in common data analysis tasks. We motivate the application of p-values using a Bayesian argumentation. We then describe commonly and less commonly known test statistics and how they are used to define p-values. The distributions of these are then extracted for examples modeled on typical new-physics searches in high energy physics. We comment on their usefulness for determining goodness-of-fit and highlight some common pitfalls.
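The notion of a p-value as a tail probability of a test statistic's sampling distribution can be sketched by Monte Carlo; the chi-square null below is just a generic example, not a statistic from the work above.

```python
import random

def p_value_mc(observed_stat, sample_stat, n_sim=20000, seed=1):
    """Monte Carlo p-value: the fraction of statistics sampled under the
    null hypothesis that are at least as extreme as the observed one."""
    rng = random.Random(seed)
    count = sum(1 for _ in range(n_sim) if sample_stat(rng) >= observed_stat)
    return count / n_sim

def chi2_3(rng):
    """One draw from a chi-square distribution with 3 degrees of freedom,
    built as a sum of squared standard normal draws."""
    return sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(3))
```

An observed statistic of 0 is never exceeded in extremity (p = 1), while a value near the 1% critical point of chi-square(3), about 11.34, yields a Monte Carlo p-value close to 0.01.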
Systematic evaluation of atmospheric chemistry-transport model CHIMERE
Khvorostyanov, Dmitry; Menut, Laurent; Mailler, Sylvain; Siour, Guillaume; Couvidat, Florian; Bessagnet, Bertrand; Turquety, Solene
2017-04-01
Regional-scale atmospheric chemistry-transport models (CTM) are used to develop air quality regulatory measures, to support environmentally sensitive decisions in industry, and to address a variety of scientific questions involving the atmospheric composition. Model performance evaluation with measurement data is critical to understanding model limits and the degree of confidence in model results. CHIMERE CTM (http://www.lmd.polytechnique.fr/chimere/) is a French national tool for operational forecast and decision support and is widely used in the international research community in various areas of atmospheric chemistry and physics, climate, and environment (http://www.lmd.polytechnique.fr/chimere/CW-articles.php). This work presents the model evaluation framework applied systematically to new CHIMERE CTM versions in the course of continuous model development. The framework uses three of the four CTM evaluation types identified by the Environmental Protection Agency (EPA) and the American Meteorological Society (AMS): operational, diagnostic, and dynamic. It allows comparison of the overall model performance in subsequent model versions (operational evaluation), identification of specific processes and/or model inputs that could be improved (diagnostic evaluation), and testing of the model sensitivity to changes in air quality, such as emission reductions and meteorological events (dynamic evaluation). The observation datasets currently used for the evaluation are: EMEP (surface concentrations), AERONET (optical depths), and WOUDC (ozone sounding profiles). The framework is implemented as an automated processing chain and allows interactive exploration of the results via a web interface.
Carranza, Emmanuel John M.; Laborte, Alice G.
2015-01-01
Machine learning methods that have been used in data-driven predictive modeling of mineral prospectivity (e.g., artificial neural networks) invariably require a large number of training prospects/locations and are unable to handle missing values in certain evidential data. The Random Forests (RF) algorithm, a machine learning method, has recently been applied to data-driven predictive mapping of mineral prospectivity, and so it is instructive to further study its efficacy in this particular field. This case study, carried out using data from Abra (Philippines), examines (a) whether RF modeling can be used for data-driven modeling of mineral prospectivity in areas with a few (i.e., …) … individual layers of evidential data. Furthermore, RF modeling can handle missing values in evidential data through an RF-based imputation technique, whereas in weights-of-evidence (WofE) modeling missing values are simply represented by zero weights. Therefore, the RF algorithm is potentially more useful than existing methods currently used for data-driven predictive mapping of mineral prospectivity. In particular, it is not a purely black-box method like artificial neural networks in the context of data-driven predictive modeling of mineral prospectivity. However, further testing of the method in other areas with a few mineral occurrences is needed to fully investigate its usefulness in data-driven predictive modeling of mineral prospectivity.
International Nuclear Information System (INIS)
Beaujean, F.; Caldwell, A.; Kollar, D.; Kroeninger, K.
2011-01-01
Deciding whether a model provides a good description of data is often based on a goodness-of-fit criterion summarized by a p-value. Although there is considerable confusion concerning the meaning of p-values, leading to their misuse, they are nevertheless of practical importance in common data analysis tasks. We motivate their application using a Bayesian argumentation. We then describe commonly and less commonly known discrepancy variables and how they are used to define p-values. The distributions of these are then extracted for examples modeled on typical data analysis tasks, and comments on their usefulness for determining goodness-of-fit are given.
Descriptive and predictive evaluation of high resolution Markov chain precipitation models
DEFF Research Database (Denmark)
Sørup, Hjalte Jomo Danielsen; Madsen, Henrik; Arnbjerg-Nielsen, Karsten
2012-01-01
A time series of tipping-bucket recordings of very high temporal and volumetric resolution precipitation is modelled using Markov chain models. Both first- and second-order Markov models as well as seasonal and diurnal models are investigated and evaluated using likelihood-based techniques. The first-order Markov model seems to capture most of the properties of precipitation, but inclusion of seasonal and diurnal variation improves the model. Including a second-order Markov chain component does improve the descriptive capabilities of the model, but is very expensive in its parameter use. Continuous modelling of the Markov process proved attractive because of a marked decrease in the number of parameters. Inclusion of seasonality into the continuous Markov chain model proved difficult. Monte Carlo simulations with the models show that it is very difficult for all the model formulations …
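Maximum-likelihood fitting of a first-order (e.g. dry = 0 / wet = 1) Markov chain of the kind evaluated above reduces to counting observed transitions; a minimal sketch with an illustrative state sequence:

```python
def fit_first_order(seq, states=2):
    """Maximum-likelihood transition-probability matrix of a first-order
    Markov chain from a state sequence (e.g. 0 = dry, 1 = wet time steps).
    Row i gives P(next state | current state i)."""
    counts = [[0] * states for _ in range(states)]
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    # Normalize each row of transition counts (guard against empty rows)
    return [[c / max(sum(row), 1) for c in row] for row in counts]
```

A two-state first-order chain has only 2 free parameters (one per row), while a second-order chain needs 4, which illustrates the parameter expense noted in the abstract.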
Evaluation of consumer satisfaction using the tetra-class model.
Clerfeuille, Fabrice; Poubanne, Yannick; Vakrilova, Milena; Petrova, Guenka
2008-09-01
A number of studies have shown the importance of consumers' satisfaction toward pharmacy services. The measurement of patient satisfaction through different elements of services provided is challenging within the context of a dynamic economic environment. Patient satisfaction is the result of long-term established habits and expectations to the pharmacy as an institution. Few studies to date have attempted to discern whether these changes have led to increased patient satisfaction and loyalty, particularly within developing nations. The objective of this study was to evaluate the elements of the services provided in Bulgarian pharmacies and their contribution to consumer satisfaction using a tetra-class model. Three main hypotheses were tested in pharmacies to validate the model in the case of complex services. Additionally, the contribution of the different service elements to the clients' satisfaction was studied. The analysis was based on a survey of customers in central and district pharmacies in Sofia, Bulgaria. The data were analyzed through a correspondence analysis which was applied to the results of the 752 distributed questionnaires. It was observed that different dimensions of the pharmacies contribute uniquely to customer satisfaction, with consumer gender contributing greatly toward satisfaction, with type/location of pharmacy, consumer age, and educational degree also playing a part. The duration of time over which the consumers have been clients at a given pharmacy influences the subsequent service categorization. This research demonstrated that the tetra-class model is suitable for application in the pharmaceutical sector. The model results could be beneficial for both researchers and pharmacy managers.
Directory of Open Access Journals (Sweden)
Antoine Abou Rached
2017-11-01
Full Text Available Helicobacter pylori (H. pylori) can cause a wide variety of illnesses such as peptic ulcer disease, gastric adenocarcinoma and mucosa-associated lymphoid tissue (MALT) lymphoma. The diagnosis and eradication of H. pylori are crucial. The diagnosis of H. pylori is usually based on the rapid urease test (RUT) and gastric antral biopsy for histology. The aim of this study is to evaluate the number of biopsies needed and their location (antrum/fundus) to obtain the optimal result for the diagnosis of H. pylori. Three hundred fifty consecutive patients were recruited; 210 fulfilled the inclusion criteria and had nine gastric biopsies for the detection of H. pylori infection: two antral for the first RUT (RUT1), one antral and one fundic for the second (RUT2), one antral for the third (RUT3), and two antral with two fundic for histology (HES, Giemsa, PAS). The three types of RUT were read at 1 hour, 3 hours and 24 hours, and the biopsies were read by two experienced pathologists blinded to the RUT results. RUT results were considered true positive if H. pylori was found on histology of at least one biopsy. RUT1 at 1 h, 3 h and 24 h had a sensitivity of 72%, 82% and 89% and a specificity of 100%, 99% and 87%, respectively. The positive predictive value (PPV) was 100%, 99% and 85%, and the negative predictive value (NPV) 81%, 87% and 90%, respectively. RUT2 at 1 h, 3 h and 24 h had a sensitivity of 86%, 87% and 91% and a specificity of 99%, 97% and 90%, respectively. The PPV was 99%, 96% and 88%, and the NPV 89%, 90% and 94%. RUT3 at 1 h, 3 h and 24 h had a sensitivity of 70%, 74% and 84% and a specificity of 99%, 99% and 94%, respectively. The PPV was 99%, 99% and 92%, and the NPV 79%, 81% and 87%. The best sensitivity and specificity were obtained for RUT1 read at 3 h, for RUT2 read at 1 h and 3 h, and for RUT3 read at 24 h. This study demonstrates that the best sensitivity and specificity of the rapid urease test are obtained when fundic plus antral biopsy
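The reported sensitivity, specificity, PPV and NPV values all follow from a 2x2 table of test results against the histology gold standard; a minimal sketch (the counts in the test are illustrative, chosen only to reproduce RUT1-at-1-hour-like percentages, not the study's actual table):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table:
    tp/fn split the histology-positive patients, fp/tn the negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among diseased
        "specificity": tn / (tn + fp),  # true negatives among healthy
        "ppv": tp / (tp + fp),          # diseased among test positives
        "npv": tn / (tn + fn),          # healthy among test negatives
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of infection in the recruited cohort.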
Center for Integrated Nanotechnologies (CINT) Chemical Release Modeling Evaluation
Energy Technology Data Exchange (ETDEWEB)
Stirrup, Timothy Scott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-12-20
This evaluation documents the methodology and results of chemical release modeling for operations at Building 518, Center for Integrated Nanotechnologies (CINT) Core Facility. This evaluation is intended to supplement an update to the CINT [Standalone] Hazards Analysis (SHA). This evaluation also updates the original [Design] Hazards Analysis (DHA) completed in 2003 during the design and construction of the facility; since the original DHA, additional toxic materials have been evaluated and modeled to confirm the continued low hazard classification of the CINT facility and operations. This evaluation addresses the potential catastrophic release of the current inventory of toxic chemicals at Building 518 based on a standard query in the Chemical Information System (CIS).
Modelling operator cognitive interactions in nuclear power plant safety evaluation
International Nuclear Information System (INIS)
Senders, J.W.; Moray, N.; Smiley, A.; Sellen, A.
1985-08-01
The overall objectives of the study were to review methods which are applicable to the analysis of control room operator cognitive interactions in nuclear plant safety evaluations and to indicate where future research effort in this area should be directed. This report is based on an exhaustive search and review of the literature on NPP (Nuclear Power Plant) operator error, human error, human cognitive function, and on human performance. A number of methods which have been proposed for the estimation of data for probabilistic risk analysis have been examined and have been found wanting. None addresses the problem of diagnosis error per se. Virtually all are concerned with the more easily detected and identified errors of action. None addresses underlying cause and mechanism. It is these mechanisms which must be understood if diagnosis errors and other cognitive errors are to be controlled and predicted. We have attempted to overcome the deficiencies of earlier work and have constructed a model/taxonomy, EXHUME, which we consider to be exhaustive. This construct has proved to be fruitful in organizing our thinking about the kinds of error that can occur and the nature of self-correcting mechanisms, and has guided our thinking in suggesting a research program which can provide the data needed for quantification of cognitive error rates and of the effects of mitigating efforts. In addition a preliminary outline of EMBED, a causal model of error, is given based on general behavioural research into perception, attention, memory, and decision making. 184 refs
Simplified Entropic Model for the Evaluation of Suspended Load Concentration
Directory of Open Access Journals (Sweden)
Domenica Mirauda
2018-03-01
Full Text Available Suspended sediment concentration is a key aspect in the forecasting of river evolution dynamics, as well as in water quality assessment, evaluation of reservoir impacts, and management of water resources. The estimation of suspended load often relies on empirical models, whose efficiency is limited by their analytic structure or by the need for calibration parameters. The present work deals with a simplified fully-analytical formulation of the so-called entropic model for reproducing the vertical distribution of sediment concentration. The simplification consists in a leading-order expansion of the generalized spatial coordinate of the entropic velocity profile that, strictly speaking, applies to the near-bed region, but that also provides acceptable results near the free surface. The proposed closed-form solution, which highlights the interplay among channel morphology, stream power, secondary flows, and suspended transport features, reduces the number of field measurements needed and, therefore, the time spent on field activities. Its accuracy and robustness were successfully tested against laboratory data reported in the literature.
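The entropic velocity profile such models build on is commonly written in Chiu's form, u = (u_max/M) ln(1 + (e^M − 1)ξ). A minimal sketch of that profile, together with a leading-order expansion in the spirit of the simplification the abstract describes (parameter values are illustrative, not taken from the paper):

```python
import math

# Sketch of Chiu's entropy-based velocity profile and its leading-order
# (small-xi) expansion. M (entropy parameter) and u_max are invented
# illustrative values, not the paper's calibration.

def chiu_velocity(xi, u_max, M):
    """Velocity at dimensionless coordinate xi in [0, 1]."""
    return (u_max / M) * math.log(1.0 + (math.exp(M) - 1.0) * xi)

def chiu_velocity_leading_order(xi, u_max, M):
    """log(1 + a*xi) ~ a*xi near the bed (xi -> 0)."""
    return (u_max / M) * (math.exp(M) - 1.0) * xi

u_exact = chiu_velocity(0.05, u_max=2.0, M=6.0)
u_lin = chiu_velocity_leading_order(0.05, u_max=2.0, M=6.0)
```

At ξ = 1 the exact profile recovers u_max by construction; the linearized form is only meaningful very close to the bed, which is exactly the caveat the abstract raises.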
Evaluating Models of Human Performance: Safety-Critical Systems Applications
Feary, Michael S.
2012-01-01
This presentation is part of panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety critical systems. I will conclude with challenges for the future.
A Model for Telestroke Network Evaluation
DEFF Research Database (Denmark)
Storm, Anna; Günzel, Franziska; Theiss, Stephan
2011-01-01
was developed from the third-party payer perspective. In principle, it enables telestroke networks to conduct cost-effectiveness studies, because the majority of the required data can be extracted from health insurance companies’ databases and the telestroke network itself. The model presents a basis...
Evaluating a Model of Youth Physical Activity
Heitzler, Carrie D.; Lytle, Leslie A.; Erickson, Darin J.; Barr-Anderson, Daheia; Sirard, John R.; Story, Mary
2010-01-01
Objective: To explore the relationship between social influences, self-efficacy, enjoyment, and barriers and physical activity. Methods: Structural equation modeling examined relationships between parent and peer support, parent physical activity, individual perceptions, and objectively measured physical activity using accelerometers among a…
An evaluation of uncertainties in radioecological models
International Nuclear Information System (INIS)
Hoffmann, F.O.; Little, C.A.; Miller, C.W.; Dunning, D.E. Jr.; Rupp, E.M.; Shor, R.W.; Schaeffer, D.L.; Baes, C.F. III
1978-01-01
The paper presents results of analyses for seven selected parameters commonly used in environmental radiological assessment models, assuming that the available data are representative of the true distribution of parameter values and that their respective distributions are lognormal. Estimates of the most probable, median, mean, and 99th percentile for each parameter are given and compared to U.S. NRC default values. The regulatory default values are generally greater than the median values for the selected parameters, but some are associated with percentiles significantly less than the 50th. The largest uncertainties appear to be associated with aquatic bioaccumulation factors for fresh water fish. Approximately one order of magnitude separates median values and values of the 99th percentile. The uncertainty is also estimated for the annual dose rate predicted by a multiplicative chain model for the transport of molecular iodine-131 via the air-pasture-cow-milk-child's thyroid pathway. The value for the 99th percentile is ten times larger than the median value of the predicted dose normalized for a given air concentration of ¹³¹I₂. About 72% of the uncertainty in this model is contributed by the dose conversion factor and the milk transfer coefficient. Considering the difficulties in obtaining a reliable quantification of the true uncertainties in model predictions, methods for taking these uncertainties into account when determining compliance with regulatory statutes are discussed. (orig./HP) [de
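For a lognormal parameter, the ratio of the 99th percentile to the median depends only on the geometric standard deviation (GSD), which makes the abstract's "one order of magnitude" statement easy to reproduce. A sketch (the GSD below is chosen to illustrate a factor-of-ten spread, not taken from the paper):

```python
import math
from statistics import NormalDist

# For a lognormal distribution, P(q)/median = GSD**z(q), where z(q) is
# the standard-normal quantile. A GSD near 2.7 gives roughly an order
# of magnitude between the median and the 99th percentile.

def percentile_over_median(gsd, q=0.99):
    z = NormalDist().inv_cdf(q)          # ~2.326 for q = 0.99
    return math.exp(z * math.log(gsd))   # equals gsd ** z

ratio = percentile_over_median(gsd=2.7)
```

A GSD of 1 (no uncertainty) gives a ratio of exactly 1, which is a convenient sanity check.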
Evaluation Model of Tea Industry Information Service Quality
Shi , Xiaohui; Chen , Tian’en
2015-01-01
International audience; According to the characteristics of tea industry information service, this paper builds a service quality evaluation index system for tea industry information service quality; R-cluster analysis and multiple regression are used together to construct an evaluation model with high practicality and credibility. As proved by experiment, the evaluation model of information service quality has good precision, which has guidance significance to a certain extent to e...
A Model Management Approach for Co-Simulation Model Evaluation
Zhang, X.C.; Broenink, Johannes F.; Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno
2011-01-01
Simulating formal models is a common means of validating the correctness of a system design and reducing the time-to-market. In most embedded control system designs, multiple engineering disciplines and various domain-specific models are often involved, such as mechanical, control, software
Ahdika, Atina; Lusiyana, Novyan
2017-02-01
The World Health Organization (WHO) has noted Indonesia as the country with the highest number of dengue hemorrhagic fever (DHF) cases in Southeast Asia. There is no vaccine or specific treatment for DHF; one of the efforts that both government and residents can make is prevention. In statistics, several methods exist to predict the number of DHF cases for use as a reference in preventing them. In this paper, a discrete time series model, specifically the INAR(1)-Poisson model, and a Markov prediction model (MPM) are used to predict the number of DHF patients in West Java, Indonesia. The results show that the MPM is the best model, since it has the smallest values of MAE (mean absolute error) and MAPE (mean absolute percentage error).
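The INAR(1)-Poisson model named in the abstract combines binomial thinning of the previous count with Poisson arrivals, and its one-step-ahead forecast is the conditional mean. A minimal sketch (parameters are illustrative, not fitted to the West Java data):

```python
import math
import random

# Minimal sketch of an INAR(1)-Poisson count process,
# X_t = alpha o X_{t-1} + eps_t, and its one-step-ahead forecast.
# alpha and lam below are invented for illustration.

def poisson_draw(lam, rng):
    # Knuth's method, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_inar1(alpha, lam, n, x0=5, seed=0):
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(n - 1):
        # binomial thinning: each prior case "survives" with prob alpha
        survivors = sum(rng.random() < alpha for _ in range(xs[-1]))
        xs.append(survivors + poisson_draw(lam, rng))
    return xs

def one_step_forecast(alpha, lam, x_prev):
    # Conditional mean: E[X_t | X_{t-1}] = alpha * x_prev + lam
    return alpha * x_prev + lam

xs = simulate_inar1(alpha=0.6, lam=2.0, n=200)
preds = [one_step_forecast(0.6, 2.0, x) for x in xs[:-1]]
mae = sum(abs(p - a) for p, a in zip(preds, xs[1:])) / len(preds)
```

The same MAE (and MAPE) computation applied to a competing predictor is all the abstract's model comparison requires.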
International Nuclear Information System (INIS)
O’Carroll, Michael
2012-01-01
We consider the interaction of particles in weakly correlated lattice quantum field theories. In the imaginary-time functional integral formulation of these theories there is a relative-coordinate lattice Schroedinger operator H which approximately describes the interaction of these particles. Scalar and vector spin, QCD and Gross-Neveu models are included in these theories. In the weakly correlated regime H = H₀ + W, where H₀ = −γΔ_ℓ and Δ_ℓ is the d-dimensional lattice Laplacian: γ = β, the inverse temperature, for spin systems, and γ = κ³, where κ is the hopping parameter, for QCD. W is a self-adjoint potential operator which may have non-local contributions but obeys the bound ‖W(x, y)‖ ⩽ c exp(−a(‖x‖+‖y‖)), a large: exp(−a) = (β/β₀)^(1/2) ((κ/κ₀)) for spin (QCD) models. H₀, W, and H act in ℓ²(ℤᵈ), d ⩾ 1. The spectrum of H below zero is known to be discrete and we obtain bounds on the number of states below zero. This number depends on the short-range properties of W, i.e., the long-range tail does not increase the number of states.
RTMOD: Real-Time MODel evaluation
DEFF Research Database (Denmark)
Graziani, G.; Galmarini, S.; Mikkelsen, Torben
2000-01-01
the RTMOD web page for detailed information on the actual release, and as soon as possible they then uploaded their predictions to the RTMOD server and could soon after start their inter-comparison analysis with other modellers. When additional forecast data arrived, already existing statistical results ... At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained ... during the ETEX exercises suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for displaying, intercomparison and analysis of the forecasts. RTMOD has focussed on model intercomparison of concentration ...
Dekoninck, Luc; Botteldooren, Dick; Panis, Luc Int; Hankey, Steve; Jain, Grishma; S, Karthik; Marshall, Julian
2015-01-01
Several studies show that a significant portion of daily air pollution exposure, in particular black carbon (BC), occurs during transport. In a previous work, a model for the in-traffic exposure of bicyclists to BC was proposed based on spectral evaluation of mobile noise measurements and validated with BC measurements in Ghent, Belgium. In this paper, applicability of this model in a different cultural context with a totally different traffic and mobility situation is presented. In addition, a similar modeling approach is tested for particle number (PN) concentration. Indirectly assessing BC and PN exposure through a model based on noise measurements is advantageous because of the availability of very affordable noise monitoring devices. Our previous work showed that a model including specific spectral components of the noise that relate to engine and rolling emission and basic meteorological data, could be quite accurate. Moreover, including a background concentration adjustment improved the model considerably. To explore whether this model could also be used in a different context, with or without tuning of the model parameters, a study was conducted in Bangalore, India. Noise measurement equipment, data storage, data processing, continent, country, measurement operators, vehicle fleet, driving behavior, biking facilities, background concentration, and meteorology are all very different from the first measurement campaign in Belgium. More than 24h of combined in-traffic noise, BC, and PN measurements were collected. It was shown that the noise-based BC exposure model gives good predictions in Bangalore and that the same approach is also successful for PN. Cross validation of the model parameters was used to compare factors that impact exposure across study sites. A pooled model (combining the measurements of the two locations) results in a correlation of 0.84 when fitting the total trip exposure in Bangalore. Estimating particulate matter exposure with traffic
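The modeling approach described above regresses exposure on noise spectral components plus meteorology and a background adjustment. A hedged sketch of that structure with ordinary least squares (the band names, coefficients, and data are all invented stand-ins, not the published model):

```python
import numpy as np

# Hypothetical sketch: regress log BC concentration on engine- and
# rolling-related noise bands, wind speed, and a background adjustment.
# Every feature and coefficient here is simulated for illustration.

rng = np.random.default_rng(42)
n = 500
engine_band = rng.normal(65, 5, n)    # e.g. low-frequency dB level
rolling_band = rng.normal(70, 4, n)   # e.g. ~1 kHz dB level
wind_speed = rng.uniform(0, 6, n)
background = rng.normal(2.0, 0.3, n)  # background concentration term

# Synthetic "truth": exposure responds to all four predictors
log_bc = (0.03 * engine_band + 0.02 * rolling_band
          - 0.05 * wind_speed + 0.5 * background
          + rng.normal(0, 0.1, n))

X = np.column_stack([engine_band, rolling_band, wind_speed,
                     background, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_bc, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, log_bc)[0, 1]
```

The same fitted-versus-observed correlation is the statistic the abstract reports (0.84 for the pooled Ghent-Bangalore model); the value obtained here reflects only the synthetic data.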
Leasa, Marleny; Duran Corebima, Aloysius
2017-01-01
Learning models and academic ability may affect students' achievement in science. This study thus aimed to investigate the effect of the numbered heads together (NHT) cooperative learning model on elementary students' cognitive achievement in natural science. The study employed a quasi-experimental pretest-posttest non-equivalent control group design with a 2 × 2 factorial: two learning models were compared, NHT and conventional, along with two levels of academic ability, high and low. The results of an ANCOVA test confirmed a difference in the students' cognitive achievement based on learning model and general academic ability. However, the interaction between learning model and academic ability did not affect the students' cognitive achievement. In conclusion, teachers are strongly recommended to be more creative in designing learning using other types of cooperative learning models. Also, schools should create a better learning environment, one that is more cooperative and avoids unfair competition among students in the classroom, thereby improving students' academic ability. Further research is needed to explore the contribution of other aspects of cooperative learning to the cognitive achievement of students with different academic ability.
Local fit evaluation of structural equation models using graphical criteria.
Thoemmes, Felix; Rosseel, Yves; Textor, Johannes
2018-03-01
Evaluation of model fit is critically important for every structural equation model (SEM), and sophisticated methods have been developed for this task. Among them are the χ² goodness-of-fit test, decomposition of the χ², derived measures like the popular root mean square error of approximation (RMSEA) or comparative fit index (CFI), or inspection of residuals or modification indices. Many of these methods provide a global approach to model fit evaluation: A single index is computed that quantifies the fit of the entire SEM to the data. In contrast, graphical criteria like d-separation or trek-separation allow derivation of implications that can be used for local fit evaluation, an approach that is hardly ever applied. We provide an overview of local fit evaluation from the viewpoint of SEM practitioners. In the presence of model misfit, local fit evaluation can potentially help in pinpointing where the problem with the model lies. For models that do fit the data, local tests can identify the parts of the model that are corroborated by the data. Local tests can also be conducted before a model is fitted at all, and they can be used even for models that are globally underidentified. We discuss appropriate statistical local tests, and provide applied examples. We also present novel software in R that automates this type of local fit evaluation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
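A d-separation implication of the kind the abstract describes translates into a testable vanishing partial correlation, which can be checked without fitting the whole SEM. A minimal sketch on simulated data (in practice the implication would be derived from the model graph, e.g. by the R software the authors present):

```python
import numpy as np

# Local fit sketch: a chain model X -> Y -> Z implies X _||_ Z | Y,
# i.e. the partial correlation of X and Z given Y should vanish.
# Data are simulated to satisfy the chain.

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.7 * y + rng.normal(size=n)

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

pc = partial_corr(x, z, y)   # near zero if the chain model holds
```

A large |pc| would localize the misfit to this particular implication, which is the advantage over a single global fit index.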
SET-MM – A Software Evaluation Technology Maturity Model
García-Castro, Raúl
2011-01-01
The application of software evaluation technologies in different research fields to verify and validate research is a key factor in the progressive evolution of those fields. Nowadays, however, to have a clear picture of the maturity of the technologies used in evaluations or to know which steps to follow in order to improve the maturity of such technologies is not easy. This paper describes a Software Evaluation Technology Maturity Model that can be used to assess software evaluation tech...
Directory of Open Access Journals (Sweden)
H Mohamadi Monavar
2017-10-01
Full Text Available Introduction Precision agriculture (PA) is a technology that measures and manages within-field variability, such as the physical and chemical properties of soil. The nondestructive and rapid VIS-NIR technology has detected significant correlations between reflectance spectra and the physical and chemical properties of soil. On the other hand, quantitative prediction of soil factors such as nitrogen, carbon, cation exchange capacity and clay content is very important in precision farming. The emphasis of this paper is on comparing different techniques for choosing calibration samples, such as random selection, selection based on chemical data, and selection based on PCA. Since increasing the number of samples is usually time-consuming and costly, this study sought the best of the available sampling methods for building calibration models. In addition, the effect of sample size on the accuracy of the calibration and validation models was analyzed. Materials and Methods Two hundred and ten soil samples were collected from a cultivated farm located in Avarzaman in Hamedan province, Iran. The crop rotation was mostly potato and wheat. Samples were collected from a depth of 20 cm, passed through a 2 mm sieve and air dried at room temperature. Chemical analysis was performed in the soil science laboratory, faculty of agriculture engineering, Bu-Ali Sina University, Hamadan, Iran. Two spectrometers (AvaSpec-ULS 2048 UV-VIS and FT-NIR100N) were used to measure the spectral bands covering the UV-Vis and NIR region (220-2200 nm). Each soil sample was uniformly tiled in a petri dish and scanned 20 times. Then the pre-processing methods of multiplicative scatter correction (MSC) and baseline correction (BC) were applied to the raw signals using Unscrambler software. The samples were divided into two groups: 105 samples for calibration and the second group for validation. Each time, 15 samples were selected randomly and tested the accuracy of
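One common way to realize the PCA-based calibration-sample selection the abstract compares is to project the spectra onto leading principal components and keep the samples that span the spectral variability. A hedged sketch with simulated spectra (the selection rule below, largest PC-score distance, is one simple choice, not necessarily the authors'):

```python
import numpy as np

# Sketch of PCA-based calibration-sample selection: compute PC scores
# of the spectra and retain the 105 samples farthest from the center,
# so the calibration set covers the spectral variability.
# Spectra are simulated stand-ins (210 samples x 50 bands).

rng = np.random.default_rng(7)
spectra = rng.normal(size=(210, 50))

centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:3].T                  # first 3 PC scores

dist = np.linalg.norm(scores, axis=1)         # leverage-like distance
order = np.argsort(dist)
calib_idx = order[-105:]                      # most-spread samples
valid_idx = order[:-105]                      # remainder for validation
```

Random selection, the baseline method in the abstract, would simply be a shuffled split of the same 210 indices.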
What's new in the Atmospheric Model Evaluation Tool (AMET) version 1.3
A new version of the Atmospheric Model Evaluation Tool (AMET) has been released. The new version, AMET version 1.3 (AMETv1.3), contains a number of updates and changes from the previous version of AMET (v1.2), released in 2012. First, the Perl scripts used in the previous ve...
DEFF Research Database (Denmark)
Csabai, Dávid; Wiborg, Ove; Czéh, Boldizsár
2018-01-01
Stressful experiences can induce structural changes in neurons of the limbic system. These cellular changes contribute to the development of stress-induced psychopathologies like depressive disorders. In the prefrontal cortex of chronically stressed animals, reduced dendritic length and spine loss have been reported. This loss of dendritic material should consequently result in synapse loss as well, because of the reduced dendritic surface. But so far, no one has studied synapse numbers in the prefrontal cortex of chronically stressed animals. Here, we examined synaptic contacts in rats subjected to an animal model of depression, in which animals are exposed to a chronic stress protocol. Our hypothesis was that long-term stress should reduce the number of axo-spinous synapses in the medial prefrontal cortex. Adult male rats were exposed to daily stress for 9 weeks and afterward we did a post mortem
Resampling methods for evaluating classification accuracy of wildlife habitat models
Verbyla, David L.; Litvaitis, John A.
1989-11-01
Predictive models of wildlife-habitat relationships often have been developed without being tested. The apparent classification accuracy of such models can be optimistically biased and misleading. Data resampling methods exist that yield a more realistic estimate of model classification accuracy. These methods are simple and require no new sample data. We illustrate these methods (cross-validation, jackknife resampling, and bootstrap resampling) with computer simulation to demonstrate the increase in precision of the estimate. The bootstrap method is then applied to field data as a technique for model comparison. We recommend that biologists use some resampling procedure to evaluate wildlife habitat models prior to field evaluation.
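The bootstrap procedure the abstract recommends amounts to resampling (prediction, truth) pairs with replacement and examining the spread of the resulting accuracies. A minimal sketch on simulated labels (stand-ins for a habitat model's output, not real field data):

```python
import random

# Bootstrap estimate of classification accuracy: resample the
# (prediction, truth) pairs with replacement; the percentile interval
# of the resampled accuracies is more honest than the single apparent
# accuracy. Labels below are simulated.

rng = random.Random(0)
n = 200
truth = [rng.randint(0, 1) for _ in range(n)]
pred = [t if rng.random() < 0.8 else 1 - t for t in truth]  # ~80% correct

def accuracy(pairs):
    return sum(p == t for p, t in pairs) / len(pairs)

pairs = list(zip(pred, truth))
apparent = accuracy(pairs)

boot = sorted(
    accuracy([pairs[rng.randrange(n)] for _ in range(n)])
    for _ in range(500)
)
ci_low, ci_high = boot[12], boot[487]   # ~95% percentile interval
```

Cross-validation and the jackknife, the other two methods the abstract names, differ only in how the resamples are formed.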
[Evaluation on a fast weight reduction model in vitro].
Li, Songtao; Li, Ying; Wen, Ying; Sun, Changhao
2010-03-01
To establish a fast and effective in vitro model for screening weight-reducing drugs and to make a preliminary evaluation of the model, mature adipocytes of SD rats induced by oleic acid were used to establish an obesity model in vitro. Isoprel, genistein and caffeine were selected as positive agents, and curcumine as a negative agent, to evaluate the obesity model. Lipolysis of adipocytes was stimulated significantly by isoprel, genistein and caffeine but not by curcumine. This model could be used efficiently for screening weight-loss drugs.
Evaluation of Industry Standard Turbulence Models on an Axisymmetric Supersonic Compression Corner
DeBonis, James R.
2015-01-01
Reynolds-averaged Navier-Stokes computations of a shock-wave/boundary-layer interaction (SWBLI) created by a Mach 2.85 flow over an axisymmetric 30-degree compression corner were carried out. The objectives were to evaluate four turbulence models commonly used in industry, for SWBLIs, and to evaluate the suitability of this test case for use in further turbulence model benchmarking. The Spalart-Allmaras model, Menter's Baseline and Shear Stress Transport models, and a low-Reynolds number k- model were evaluated. Results indicate that the models do not accurately predict the separation location; with the SST model predicting the separation onset too early and the other models predicting the onset too late. Overall the Spalart-Allmaras model did the best job in matching the experimental data. However there is significant room for improvement, most notably in the prediction of the turbulent shear stress. Density data showed that the simulations did not accurately predict the thermal boundary layer upstream of the SWBLI. The effect of turbulent Prandtl number and wall temperature were studied in an attempt to improve this prediction and understand their effects on the interaction. The data showed that both parameters can significantly affect the separation size and location, but did not improve the agreement with the experiment. This case proved challenging to compute and should provide a good test for future turbulence modeling work.
Teachers' Development Model to Authentic Assessment by Empowerment Evaluation Approach
Charoenchai, Charin; Phuseeorn, Songsak; Phengsawat, Waro
2015-01-01
The purposes of this study were: 1) to study teachers' authentic assessment practices, their comprehension of authentic assessment, and their needs for authentic assessment development; 2) to create a teacher development model; 3) to trial the teacher development model; and 4) to evaluate the effectiveness of the teacher development model. The research is divided into 4…
Evaluation of habitat suitability index models for assessing biotic resources
John C. Rennie; Joseph D. Clark; James M. Sweeney
2000-01-01
Existing habitat suitability index (HSI) models are evaluated for assessing the biotic resources on Champion International Corporation (CIC) lands with data from a standard and an expanded timber inventory. Forty HSI models for 34 species that occur in the Southern Appalachians have been identified from the literature. All of the variables for 14 models are provided (...
EcoMark: Evaluating Models of Vehicular Environmental Impact
DEFF Research Database (Denmark)
Guo, Chenjuan; Ma, Mike; Yang, Bin
2012-01-01
the vehicle travels in. We develop an evaluation framework, called EcoMark, for such environmental impact models. In addition, we survey all eleven state-of-the-art impact models known to us. To gain insight into the capabilities of the models and to understand the effectiveness of the EcoMark, we apply...
Evaluating the double Poisson generalized linear model.
Zou, Yaotian; Geedipally, Srinivas Reddy; Lord, Dominique
2013-10-01
The objectives of this study are to: (1) examine the applicability of the double Poisson (DP) generalized linear model (GLM) for analyzing motor vehicle crash data characterized by over- and under-dispersion and (2) compare the performance of the DP GLM with the Conway-Maxwell-Poisson (COM-Poisson) GLM in terms of goodness-of-fit and theoretical soundness. The DP distribution has seldom been investigated and applied since its first introduction two decades ago. The hurdle for applying the DP is related to its normalizing constant (or multiplicative constant) which is not available in closed form. This study proposed a new method to approximate the normalizing constant of the DP with high accuracy and reliability. The DP GLM and COM-Poisson GLM were developed using two observed over-dispersed datasets and one observed under-dispersed dataset. The modeling results indicate that the DP GLM with its normalizing constant approximated by the new method can handle crash data characterized by over- and under-dispersion. Its performance is comparable to the COM-Poisson GLM in terms of goodness-of-fit (GOF), although COM-Poisson GLM provides a slightly better fit. For the over-dispersed data, the DP GLM performs similar to the NB GLM. Considering the fact that the DP GLM can be easily estimated with inexpensive computation and that it is simpler to interpret coefficients, it offers a flexible and efficient alternative for researchers to model count data. Copyright © 2013 Elsevier Ltd. All rights reserved.
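The hurdle the abstract highlights, the DP normalizing constant having no closed form, can be illustrated by comparing a brute-force truncated summation with Efron's (1986) classic closed-form approximation. A sketch (formulas follow Efron's parameterization as I recall it, and the parameter values are arbitrary examples, not from the crash data):

```python
import math

# Double Poisson (DP) normalizing constant: estimated by truncated
# summation of the unnormalized mass function, and compared with
# Efron's closed-form approximation 1/c ~ 1 + (1-theta)/(12*mu*theta)
# * (1 + 1/(mu*theta)). Work in logs to avoid overflow at large y.

def dp_log_kernel(y, mu, theta):
    """Log of the unnormalized DP probability mass at count y."""
    if y == 0:
        log_base = 0.0
    else:
        log_base = (-y + y * math.log(y) - math.lgamma(y + 1)
                    + theta * y * (1.0 + math.log(mu) - math.log(y)))
    return 0.5 * math.log(theta) - theta * mu + log_base

def norm_const_by_summation(mu, theta, y_max=200):
    total = sum(math.exp(dp_log_kernel(y, mu, theta))
                for y in range(y_max + 1))
    return 1.0 / total

def norm_const_efron(mu, theta):
    inv_c = (1.0 + (1.0 - theta) / (12.0 * mu * theta)
             * (1.0 + 1.0 / (mu * theta)))
    return 1.0 / inv_c

c_sum = norm_const_by_summation(mu=5.0, theta=0.7)   # theta < 1: over-dispersed
c_approx = norm_const_efron(mu=5.0, theta=0.7)
```

The two estimates agree closely here; the paper's contribution is a more accurate and reliable approximation than either simple route for use inside GLM estimation.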
Law of large numbers for the SIR model with random vertex weights on Erdős-Rényi graph
Xue, Xiaofeng
2017-11-01
In this paper we are concerned with the SIR model with random vertex weights on the Erdős-Rényi graph G(n, p). The Erdős-Rényi graph G(n, p) is generated from the complete graph Cn with n vertices by independently deleting each edge with probability (1 - p). We assign i.i.d. copies of a positive random variable ρ to each vertex as the vertex weights. In the SIR model, each vertex is in one of three states: 'susceptible', 'infective' or 'removed'. An infective vertex infects a given susceptible neighbor at a rate proportional to the product of the weights of these two vertices. An infective vertex becomes removed at a constant rate. A removed vertex will never be infected again. We assume that at t = 0 there is no removed vertex and the number of infective vertices follows a Bernoulli distribution B(n, θ). Our main result is a law of large numbers for the model. We give two deterministic functions HS(ψt), HV(ψt) for t ≥ 0 and show that for any t ≥ 0, HS(ψt) is the limit proportion of susceptible vertices and HV(ψt) is the limit of the mean capability of an infective vertex to infect a given susceptible neighbor at moment t as n grows to infinity.
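The dynamics in the abstract are straightforward to simulate, which is how one would check the law of large numbers empirically. A crude discrete-time sketch (the continuous-time rates are replaced by per-step probabilities, and all parameters are illustrative):

```python
import random

# Discrete-time stand-in for the weighted SIR model on G(n, p):
# i.i.d. positive vertex weights, infection probability per step
# proportional to the product of endpoint weights, constant removal
# probability, Bernoulli(theta) initial infectives.

def simulate_sir(n=400, p=0.02, beta=0.05, gamma=0.2, theta=0.05, steps=30):
    rng = random.Random(3)
    w = [rng.uniform(0.5, 1.5) for _ in range(n)]       # vertex weights
    nbrs = [[] for _ in range(n)]
    for i in range(n):                                   # build G(n, p)
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    state = ['I' if rng.random() < theta else 'S' for _ in range(n)]
    for _ in range(steps):
        new = state[:]
        for i in range(n):
            if state[i] == 'I':
                for j in nbrs[i]:
                    if state[j] == 'S' and rng.random() < beta * w[i] * w[j]:
                        new[j] = 'I'
                if rng.random() < gamma:
                    new[i] = 'R'
        state = new
    return state.count('S') / n      # proportion of susceptibles

frac_s = simulate_sir()
```

Repeating the run for growing n and watching the susceptible proportion concentrate around a deterministic curve is the finite-size picture of the paper's HS(ψt) limit.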
Design Concept Evaluation Using System Throughput Model
International Nuclear Information System (INIS)
Sequeira, G.; Nutt, W. M.
2004-01-01
The U.S. Department of Energy (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is currently developing the technical bases to support the submittal of a license application for construction of a geologic repository at Yucca Mountain, Nevada to the U.S. Nuclear Regulatory Commission. The Office of Repository Development (ORD) is responsible for developing the design of the proposed repository surface facilities for the handling of spent nuclear fuel and high level nuclear waste. Preliminary design activities are underway to sufficiently develop the repository surface facilities design for inclusion in the license application. The design continues to evolve to meet mission needs and to satisfy both regulatory and program requirements. A system engineering approach is being used in the design process since the proposed repository facilities are dynamically linked by a series of sub-systems and complex operations. In addition, the proposed repository facility is a major system element of the overall waste management process being developed by the OCRWM. Such an approach includes iterative probabilistic dynamic simulation as an integral part of the design evolution process. A dynamic simulation tool helps to determine if: (1) the mission and design requirements are complete, robust, and well integrated; (2) the design solutions under development meet the design requirements and mission goals; (3) opportunities exist where the system can be improved and/or optimized; and (4) proposed changes to the mission, and design requirements have a positive or negative impact on overall system performance and if design changes may be necessary to satisfy these changes. This paper will discuss the type of simulation employed to model the waste handling operations. It will then discuss the process being used to develop the Yucca Mountain surface facilities model. The latest simulation model and the results of the simulation and how the data were used in the design
Pleasant, Andrew
2008-01-01
There is a growing interest in health literacy and in developing curricula for health care providers and for the general public. However, developing curriculum without accompanying evaluation plans is like starting a race without a finish line, and current measures of health literacy are not up to the task of evaluating curriculum. This research…
Calibrating E-values for hidden Markov models using reverse-sequence null models.
Karplus, Kevin; Karchin, Rachel; Shackelford, George; Hughey, Richard
2005-11-15
Hidden Markov models (HMMs) calculate the probability that a sequence was generated by a given model. Log-odds scoring provides a context for evaluating this probability, by considering it in relation to a null hypothesis. We have found that using a reverse-sequence null model effectively removes biases owing to sequence length and composition and reduces the number of false positives in a database search. Any scoring system is an arbitrary measure of the quality of database matches. Significance estimates of scores are essential, because they eliminate model- and method-dependent scaling factors, and because they quantify the importance of each match. Accurate computation of the significance of reverse-sequence null model scores presents a problem, because the scores do not fit the extreme-value (Gumbel) distribution commonly used to estimate HMM scores' significance. To get a better estimate of the significance of reverse-sequence null model scores, we derive a theoretical distribution based on the assumption of a Gumbel distribution for raw HMM scores and compare estimates based on this and other distribution families. We derive estimation methods for the parameters of the distributions based on maximum likelihood and on moment matching (least-squares fit for Student's t-distribution). We evaluate the modeled distributions of scores, based on how well they fit the tail of the observed distribution for data not used in the fitting and on the effects of the improved E-values on our HMM-based fold-recognition methods. The theoretical distribution provides some improvement in fitting the tail and in providing fewer false positives in the fold-recognition test. An ad hoc distribution based on assuming a stretched exponential tail does an even better job. The use of Student's t to model the distribution fits well in the middle of the distribution, but provides too heavy a tail. The moment-matching methods fit the tails better than maximum-likelihood methods
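The baseline the abstract departs from, estimating significance by fitting a Gumbel (extreme-value) distribution to raw scores and converting a score into an E-value, can be sketched in a few lines. Moment matching is used below; the scores are simulated stand-ins, and a real pipeline would use HMM scores with the reverse-null correction the authors analyze:

```python
import math
import random

# E-value calibration sketch: fit Gumbel(mu, beta) to scores by moment
# matching (mean = mu + gamma_E * beta, var = pi^2 * beta^2 / 6), then
# E(score) = N * P(chance score >= score) for a database of size N.

rng = random.Random(0)
scores = [rng.gauss(0, 1) for _ in range(5000)]   # stand-in raw scores

mean = sum(scores) / len(scores)
var = sum((s - mean) ** 2 for s in scores) / len(scores)
beta = math.sqrt(6.0 * var) / math.pi
mu = mean - 0.5772156649 * beta                   # Euler-Mascheroni gamma

def e_value(score, n_db):
    """Expected number of chance hits scoring >= score in n_db trials."""
    p = 1.0 - math.exp(-math.exp(-(score - mu) / beta))
    return n_db * p

e = e_value(4.0, n_db=100_000)
```

The paper's point is that reverse-null scores do not follow this Gumbel form, so the tail probability p must come from a different family (their theoretical distribution, or a stretched-exponential tail).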
Directory of Open Access Journals (Sweden)
Dominique Peeters
2017-06-01
Full Text Available Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on
A Decision Model for Evaluating Potential Change in Instructional Programs.
Amor, J. P.; Dyer, J. S.
A statistical model designed to assist elementary school principals in the process of selection educational areas which should receive additional emphasis is presented. For each educational area, the model produces an index number which represents the expected "value" per dollar spent on an instructional program appropriate for strengthening that…
On a Graphical Technique for Evaluating Some Rational Expectations Models
DEFF Research Database (Denmark)
Johansen, Søren; Swensen, Anders R.
2011-01-01
In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...
Evaluating Instructional Design Models: A Proposed Research Approach
Gropper, George L.
2015-01-01
Proliferation of prescriptive models in an "engineering" field is not a sign of its maturity. Quite the opposite. Materials engineering, for example, meets the criterion of parsimony. Sadly, the very large number of models in "instructional design," putatively an engineering field, raises questions about its status. Can the…
Hendrick, R Edward; Helvie, Mark A; Hardesty, Lara A
2014-12-01
In this article, we evaluate the implications of recent Cancer Intervention and Surveillance Modeling Network (CISNET) modeling of benefits and harms of screening to women 40-49 years old using annual digital mammography. We show that adding annual digital mammography of women 40-49 years old to biennial screening of women 50-74 years old increases lives saved by 27% and life-years gained by 47%. Annual digital mammography in women 40-49 years old saves 42% more lives and life-years than biennial digital mammography. The number needed to screen to save one life (NNS) with annual digital mammography in women 40-49 years old is 588.
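As a quick arithmetic check, the number needed to screen relates cohort size and lives saved by a simple ratio; the cohort size below is a hypothetical illustration, not a figure from the CISNET models.

```python
def number_needed_to_screen(women_screened, lives_saved):
    """NNS: women screened per life saved."""
    return women_screened / lives_saved

# With NNS = 588, a hypothetical cohort of 100,000 women screened
# annually corresponds to roughly 170 lives saved (100000 / 588).
expected_lives_saved = 100_000 / 588
```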
Directory of Open Access Journals (Sweden)
Nelson Maculan
2003-01-01
Full Text Available We present integer linear models with a polynomial number of variables and constraints for combinatorial optimization problems in graphs: optimum elementary cycles, optimum elementary paths and optimum tree problems.
International Nuclear Information System (INIS)
Paziresh, M.; Kingston, A. M.; Latham, S. J.; Fullagar, W. K.; Myers, G. M.
2016-01-01
Dual-energy computed tomography and the Alvarez and Macovski [Phys. Med. Biol. 21, 733 (1976)] transmitted intensity (AMTI) model were used in this study to estimate maps of the density (ρ) and atomic number (Z) of mineralogical samples. In this method, the attenuation coefficients are represented [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)] in terms of the two most important interactions of X-rays with atoms, that is, photoelectric absorption (PE) and Compton scattering (CS). This enables material discrimination, as PE and CS are, respectively, dependent on the atomic number (Z) and density (ρ) of materials [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976)]. Dual-energy imaging is able to identify sample materials even if the materials have similar attenuation coefficients in a single-energy spectrum. We use the full model rather than applying one of the several simplified forms [Alvarez and Macovski, Phys. Med. Biol. 21, 733 (1976); Siddiqui et al., SPE Annual Technical Conference and Exhibition (Society of Petroleum Engineers, 2004); Derzhi, U.S. patent application 13/527,660 (2012); Heismann et al., J. Appl. Phys. 94, 2073–2079 (2003); Park and Kim, J. Korean Phys. Soc. 59, 2709 (2011); Abudurexiti et al., Radiol. Phys. Technol. 3, 127–135 (2010); and Kaewkhao et al., J. Quant. Spectrosc. Radiat. Transfer 109, 1260–1265 (2008)]. This paper describes the tomographic reconstruction of ρ and Z maps of mineralogical samples using the AMTI model. The full model requires precise knowledge of the X-ray energy spectra and calibration of the PE and CS constants and exponents of atomic number and energy, which were estimated based on fits to simulations and calibration measurements. The estimated ρ and Z images of the samples used in this paper yield average relative errors of 2.62% and 1.19% and maximum relative errors of 2.64% and 7.85%, respectively. Furthermore, we demonstrate that the method accounts for the beam hardening effect in density (
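A minimal numerical sketch of the two-basis decomposition underlying the AMTI model: attenuation at two energies is written as a photoelectric term (proportional to E^-3) plus a Compton term with the Klein-Nishina energy dependence, and the two coefficients are recovered by solving a 2x2 linear system. The energies, constants, and function names here are illustrative assumptions; the paper calibrates its own constants and exponents against simulations and measurements.

```python
import numpy as np

def klein_nishina(energy_kev):
    """Klein-Nishina energy dependence of the total Compton cross
    section (dimensionless factor)."""
    a = energy_kev / 511.0  # photon energy in electron rest-mass units
    t1 = (1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
    t2 = np.log(1 + 2 * a) / (2 * a)
    t3 = -(1 + 3 * a) / (1 + 2 * a) ** 2
    return t1 + t2 + t3

def decompose(mu_low, mu_high, e_low, e_high):
    """Solve mu(E) = a_pe * E**-3 + a_cs * klein_nishina(E) at two
    energies for the PE and CS coefficients (a_pe, a_cs)."""
    A = np.array([[e_low ** -3, klein_nishina(e_low)],
                  [e_high ** -3, klein_nishina(e_high)]])
    return np.linalg.solve(A, np.array([mu_low, mu_high]))
```

Given the recovered coefficients, Z is then obtained from the calibrated power-law dependence of the PE term and ρ from the CS term.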
ECOPATH: Model description and evaluation of model performance
International Nuclear Information System (INIS)
Bergstroem, U.; Nordlinder, S.
1996-01-01
The model is based upon compartment theory and is run in combination with a statistical error propagation method (PRISM, Gardner et al. 1983). It is intended to be generic for application to other sites by simply changing parameter values. It was constructed especially for this scenario. However, it is based upon an earlier model designed for calculating relations between the released amount of radioactivity and doses to critical groups (used for Swedish regulations concerning annual reports of released radioactivity from routine operation of Swedish nuclear power plants (Bergstroem and Nordlinder, 1991)). The model handles exposure from deposition on terrestrial areas as well as deposition on lakes, starting with deposition values. 14 refs, 16 figs, 7 tabs
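A compartment model of the kind ECOPATH builds on reduces to linear transfers between inventories. The sketch below shows one explicit-Euler step of such a system; the rate values in the usage example are placeholders, not ECOPATH parameters.

```python
def step_compartments(inventory, rates, dt):
    """One explicit-Euler step of a linear compartment model:
    rates[i][j] is the fractional transfer rate from compartment i
    to compartment j per unit time. Mass is conserved by moving the
    same flux out of i and into j."""
    n = len(inventory)
    new = list(inventory)
    for i in range(n):
        for j in range(n):
            if i != j:
                flux = rates[i][j] * inventory[i] * dt  # i -> j transfer
                new[i] -= flux
                new[j] += flux
    return new
```

For example, two compartments with a 0.1 per unit time transfer from the first to the second move 1% of the first inventory in a step of dt = 0.1.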
A random walk model to evaluate autism
Moura, T. R. S.; Fulco, U. L.; Albuquerque, E. L.
2018-02-01
A common test administered during neurological examination in children is the analysis of their social communication and interaction across multiple contexts, including repetitive patterns of behavior. Poor performance may be associated with neurological conditions characterized by impairments in executive function, such as the so-called pervasive developmental disorders (PDDs), a particular condition of the autism spectrum disorders (ASDs). Inspired by these diagnostic tools, mainly those related to repetitive movements and behaviors, we studied here how the diffusion regimes of two discrete-time random walkers, mimicking the lack of social interaction and restricted interests developed by children with PDDs, are affected. Our model, which is based on the so-called elephant random walk (ERW) approach, considers that one of the random walkers can learn and imitate the microscopic behavior of the other with probability f (1 - f otherwise). The diffusion regimes, measured by the Hurst exponent (H), are then obtained, whose changes may indicate a different degree of autism.
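The underlying mechanics can be sketched for a single elephant random walker (the paper's two-walker imitation variant is not reproduced here): each step recalls a uniformly random previous step and repeats it with probability p, and the Hurst exponent is read off from the scaling of the mean squared displacement. Parameter choices below are illustrative assumptions.

```python
import math
import random

def elephant_walk(steps, p, rng):
    """Elephant random walk: at each step, recall a uniformly random
    previous step and repeat it with probability p (reverse it
    otherwise). Returns the position trajectory."""
    history = [1 if rng.random() < 0.5 else -1]  # symmetric first step
    pos = [0, history[0]]
    for _ in range(steps - 1):
        recalled = rng.choice(history)
        step = recalled if rng.random() < p else -recalled
        history.append(step)
        pos.append(pos[-1] + step)
    return pos

def hurst_from_msd(walks):
    """Estimate H from the scaling <x_t**2> ~ t**(2H), using the
    log-log slope between two times over an ensemble of walks."""
    t1, t2 = 10, len(walks[0]) - 1
    msd1 = sum(w[t1] ** 2 for w in walks) / len(walks)
    msd2 = sum(w[t2] ** 2 for w in walks) / len(walks)
    return 0.5 * (math.log(msd2) - math.log(msd1)) / (math.log(t2) - math.log(t1))
```

For p = 1/2 the recalled step is repeated or reversed with equal probability, so the walk is ordinarily diffusive and the estimate should cluster around H = 1/2; superdiffusion appears for p > 3/4.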
Evaluations of an Experiential Gaming Model
Directory of Open Access Journals (Sweden)
Kristian Kiili
2006-01-01
Full Text Available This paper examines the experiences of players of a problem-solving game. The main purpose of the paper is to validate the flow antecedents included in an experiential gaming model and to study their influence on the flow experience. Additionally, the study aims to operationalize the flow construct in a game context and to start a scale development process for assessing the experience of flow in game settings. Results indicated that the flow antecedents studied—challenges matched to a player’s skill level, clear goals, unambiguous feedback, a sense of control, and playability—should be considered in game design because they contribute to the flow experience. Furthermore, the indicators of the actual flow experience were distinguished.
Large scale Bayesian nuclear data evaluation with consistent model defects
International Nuclear Information System (INIS)
Schnabel, G
2015-01-01
The aim of nuclear data evaluation is the reliable determination of cross sections and related quantities of the atomic nuclei. To this end, evaluation methods are applied which combine the information of experiments with the results of model calculations. The evaluated observables with their associated uncertainties and correlations are assembled into data sets, which are required for the development of novel nuclear facilities, such as fusion reactors for energy supply, and accelerator driven systems for nuclear waste incineration. The efficiency and safety of such future facilities depend on the quality of these data sets and thus also on the reliability of the applied evaluation methods. This work investigated the performance of the majority of available evaluation methods in two scenarios. The study indicated the importance of an essential component in these methods, namely the frequently ignored deficiency of nuclear models. Usually, nuclear models are based on approximations and thus their predictions may deviate from reliable experimental data. As demonstrated in this thesis, neglecting this possibility in evaluation methods can lead to estimates of observables which are inconsistent with experimental data. Due to this finding, an extension of Bayesian evaluation methods is proposed to take into account the deficiency of the nuclear models. The deficiency is modeled as a random function in terms of a Gaussian process and combined with the model prediction. This novel formulation conserves sum rules and allows the magnitude of the model deficiency to be estimated explicitly. Both features are missing in available evaluation methods so far. Furthermore, two improvements of existing methods have been developed in the course of this thesis. The first improvement concerns methods relying on Monte Carlo sampling. A Metropolis-Hastings scheme with a specific proposal distribution is suggested, which proved to be more efficient in the studied scenarios than the
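The core idea of treating the model defect as a Gaussian process added to the model prediction can be sketched in a toy form: fit a GP to the residuals between experiment and model, and read off its posterior mean as the estimated defect. The RBF kernel and its hyperparameters are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def rbf_kernel(x1, x2, amp, length):
    """Squared-exponential covariance between two sets of points."""
    d = np.subtract.outer(x1, x2)
    return amp ** 2 * np.exp(-0.5 * (d / length) ** 2)

def gp_model_defect(x_obs, resid, noise_var, x_new, amp=1.0, length=1.0):
    """GP posterior mean fitted to (experiment - model) residuals:
    an explicit estimate of the model defect at new points."""
    K = rbf_kernel(x_obs, x_obs, amp, length) + noise_var * np.eye(len(x_obs))
    Ks = rbf_kernel(x_new, x_obs, amp, length)
    return Ks @ np.linalg.solve(K, resid)
```

With small measurement noise the posterior mean interpolates the residuals, so a systematic offset between model and data is recovered as a smooth defect function rather than absorbed into the observables.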
Evaluation of atmospheric dispersion/consequence models supporting safety analysis
International Nuclear Information System (INIS)
O'Kula, K.R.; Lazaro, M.A.; Woodard, K.
1996-01-01
Two DOE Working Groups have completed evaluation of accident phenomenology and consequence methodologies used to support DOE facility safety documentation. The independent evaluations each concluded that no one computer model adequately addresses all accident and atmospheric release conditions. MACCS2, MATHEW/ADPIC, TRAC RA/HA, and COSYMA are adequate for most radiological dispersion and consequence needs. ALOHA, DEGADIS, HGSYSTEM, TSCREEN, and SLAB are recommended for chemical dispersion and consequence applications. Additional work is suggested, principally in evaluation of new models, targeting certain models for continued development, training, and establishing a Web page for guidance to safety analysts
International Nuclear Information System (INIS)
Kerrouchi, S.; Allal, N.H.; Fellah, M.; Oudih, M.R.
2015-01-01
The particle number fluctuation effects, which are inherent to the Bardeen–Cooper–Schrieffer (BCS) theory, on the beta decay log ft values are studied in the isovector case. Expressions of the transition probabilities, of Fermi as well as Gamow–Teller types, which strictly conserve the particle number are established using a projection method. The probabilities are calculated for some transitions of isobars such that N ≃ Z. The obtained results are compared to values obtained before the projection. The nuclear deformation effect on the log ft values is also studied. (author)
FARMLAND: Model description and evaluation of model performance
International Nuclear Information System (INIS)
Attwood, C.; Fayers, C.; Mayall, A.; Brown, J.; Simmonds, J.R.
1996-01-01
The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs
Evaluation of Cost Models and Needs & Gaps Analysis
DEFF Research Database (Denmark)
Kejser, Ulla Bøgvad
2014-01-01
This report ’D3.1—Evaluation of Cost Models and Needs & Gaps Analysis’ provides an analysis of existing research related to the economics of digital curation and cost & benefit modelling. It reports upon the investigation of how well current models and tools meet stakeholders’ needs for calculating and comparing financial information. Based on this evaluation, it aims to point out gaps that need to be bridged in order to increase the uptake of cost & benefit modelling and good practices that will enable costing and comparison of the costs of alternative scenarios—which in turn provides a starting point for a more efficient use of resources for digital curation. To facilitate and clarify the model evaluation the report first outlines a basic terminology and a general description of the characteristics of cost and benefit models. The report then describes how the ten current and emerging cost and benefit...
Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar
2018-04-01
We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low and high-energy processes, and in particular, predicts a sizable rate for the neutron-antineutron (n - n bar) oscillation at low energy and the monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, ratio of dark matter and baryon abundances, n - n bar oscillation lifetime and the LHC monojet signal. There are regions in the parameter space where the n - n bar oscillation lifetime is found to be more constraining than the LHC constraints, which illustrates the importance of the next-generation n - n bar oscillation experiments.
International Nuclear Information System (INIS)
Bixler, N.E.; Schaperow, J.H.
1998-06-01
VICTORIA is a mechanistic computer code designed to analyze fission product behavior within a nuclear reactor coolant system (RCS) during a severe accident. It provides detailed predictions of the release of radioactive and nonradioactive materials from the reactor core and transport and deposition of these materials within the RCS. A recently completed independent peer review of VICTORIA, while confirming the overall adequacy of the code, recommended a number of modeling improvements. One of these recommendations, to model three rather than a single condensed phase, is the focus of the work reported here. The recommendation has been implemented as an option so that either a single or three condensed phases can be treated. Both options have been employed in the study of fission product behavior during an induced steam generator tube rupture sequence. Differences in deposition patterns and mechanisms predicted using these two options are discussed
International Nuclear Information System (INIS)
Sahara; Jean L Ndeugueu; Masaru Aniya
2010-01-01
The temperature dependence of the viscosity of the trehalose-water-lithium iodide system has been investigated by means of the Bond Strength Coordination Number Fluctuation (BSCNF) model. The result indicates that by increasing the trehalose content, maintaining the content of LiI constant, the fragility decreases due to the increase of the connectivity between the structural units. Our analysis also suggests that the fragility of the system is controlled by the amount of water in the composition. By increasing the water content, the total bond strength decreases and its fluctuation increases, resulting in an increase of the fragility. Based on the analysis of the obtained parameters of the BSCNF model, a physical interpretation of the VFT parameters reported in a previous study has been given. (author)
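The VFT parameters mentioned above enter the standard Vogel-Fulcher-Tammann viscosity law, from which a glass transition temperature and a fragility (steepness) index follow directly. The sketch below uses the common convention eta(Tg) = 10^12 Pa·s; the parameter values in the usage example are arbitrary illustrations, not fitted values for this system.

```python
import math

def vft_viscosity(T, eta0, B, T0):
    """Vogel-Fulcher-Tammann law: eta(T) = eta0 * exp(B / (T - T0))."""
    return eta0 * math.exp(B / (T - T0))

def glass_transition_T(eta0, B, T0, eta_g=1e12):
    """Tg defined by eta(Tg) = eta_g (commonly 10**12 Pa.s)."""
    return T0 + B / math.log(eta_g / eta0)

def fragility(eta0, B, T0):
    """Steepness index m = d log10(eta) / d(Tg/T) evaluated at T = Tg.
    Analytically, m = (B / ln 10) * Tg / (Tg - T0)**2."""
    Tg = glass_transition_T(eta0, B, T0)
    return (B / math.log(10.0)) * Tg / (Tg - T0) ** 2
```

A larger m (steeper rise of viscosity near Tg on a Tg/T plot) means a more fragile liquid, which is how statements like "the fragility decreases with trehalose content" are quantified.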
DEFF Research Database (Denmark)
Morales Rodriguez, Ricardo; Meyer, Anne S.; Gernaey, Krist
2011-01-01
An assessment of a number of different process flowsheets for bioethanol production was performed using dynamic model-based simulations. The evaluation employed diverse operational scenarios such as fed-batch, continuous and continuous with recycle configurations. Each configuration was evaluated against the following benchmark criteria: yield (kg ethanol/kg dry-biomass), final product concentration and number of unit operations required in the different process configurations. The results have shown the process configuration for simultaneous saccharification and co-fermentation (SSCF) operating...
Regime-based evaluation of cloudiness in CMIP5 models
Jin, Daeho; Oreopoulos, Lazaros; Lee, Dongmin
2017-01-01
The concept of cloud regimes (CRs) is used to develop a framework for evaluating the cloudiness of 12 models from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Reference CRs come from existing global International Satellite Cloud Climatology Project (ISCCP) weather states. The evaluation is made possible by the implementation in several CMIP5 models of the ISCCP simulator, which generates in each grid cell daily joint histograms of cloud optical thickness and cloud top pressure. Model performance is assessed with several metrics such as CR global cloud fraction (CF), CR relative frequency of occurrence (RFO), their product [long-term average total cloud amount (TCA)], cross-correlations of CR RFO maps, and a metric of resemblance between model and ISCCP CRs. In terms of CR global RFO, arguably the most fundamental metric, the models perform unsatisfactorily overall, except for CRs representing thick storm clouds. Because model CR CF is internally constrained by our method, RFO discrepancies also yield substantial TCA errors. Our results support previous findings that CMIP5 models underestimate cloudiness. The multi-model mean performs well in matching observed RFO maps for many CRs, but is still not the best for this or other metrics. When overall performance across all CRs is assessed, some models, despite shortcomings, apparently outperform Moderate Resolution Imaging Spectroradiometer cloud observations evaluated against ISCCP as if they were another model output. Lastly, contrasting cloud simulation performance against each model's equilibrium climate sensitivity, in order to gain insight into whether good cloud simulation pairs with particular values of this parameter, yields no clear conclusions.
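Two of the metrics named above are simple to state: TCA is the frequency-weighted sum of regime cloud fractions, and the RFO map comparison is a pattern cross-correlation. A minimal sketch (toy inputs, not CMIP5 data):

```python
import numpy as np

def total_cloud_amount(cf, rfo):
    """Long-term average total cloud amount: sum over cloud regimes of
    (regime cloud fraction) * (relative frequency of occurrence)."""
    cf, rfo = np.asarray(cf), np.asarray(rfo)
    return float(np.sum(cf * rfo))

def rfo_map_correlation(model_rfo, obs_rfo):
    """Pattern cross-correlation between a model RFO map and the
    observed (ISCCP) RFO map for one cloud regime."""
    m, o = np.ravel(model_rfo), np.ravel(obs_rfo)
    return float(np.corrcoef(m, o)[0, 1])
```

Because TCA is the product of CF and RFO summed over regimes, an RFO bias propagates directly into a TCA error even when each regime's CF is constrained, which is the mechanism the abstract points to.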
Evaluating and modelling constructs for e-government decision making
Sharif, AM; Irani, Z; Weerakkody, V
2010-01-01
It is now becoming increasingly well understood that the investment in and evaluation of electronic government projects depend on a number of organisational, policymaking and decision-making factors, which determine the success or failure of such endeavours. Given the increasing interest in the manner and methods by which public sector projects are implemented as well as evaluated, this paper attempts to synergise contemporary e-government project management (PM) components and syn...
Energy Technology Data Exchange (ETDEWEB)
O'Carroll, Michael [Departamento de Matematica Aplicada e Estatistica, ICMC-USP, C.P. 668, 13560-970 Sao Carlos, Sao Paulo (Brazil)]
2012-07-15
We consider the interaction of particles in weakly correlated lattice quantum field theories. In the imaginary time functional integral formulation of these theories there is a relative coordinate lattice Schroedinger operator H which approximately describes the interaction of these particles. Scalar and vector spin, QCD and Gross-Neveu models are included in these theories. In the weakly correlated regime H = H₀ + W, where H₀ = −γΔ_ℓ, 0 < γ ≪ 1, and Δ_ℓ is the d-dimensional lattice Laplacian; γ = β, the inverse temperature, for spin systems, and γ = κ³, where κ is the hopping parameter, for QCD. W is a self-adjoint potential operator which may have non-local contributions but obeys the bound |W(x, y)| ≤ c exp(−a(|x| + |y|)), with a large: exp(−a) = (β/β₀)^(1/2) for spin models and κ/κ₀ for QCD. H₀, W, and H act in ℓ₂(Zᵈ), d ≥ 1. The spectrum of H below zero is known to be discrete and we obtain bounds on the number of states below zero. This number depends on the short range properties of W, i.e., the long range tail does not increase the number of states.
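The operator just described can be explored numerically in the simplest setting d = 1: build H = −γΔ + W on a finite lattice and count eigenvalues below zero. The lattice size, boundary condition, and well depth below are arbitrary illustrative choices, not quantities from the paper.

```python
import numpy as np

def lattice_hamiltonian(n, gamma, w_diag):
    """H = -gamma * Delta + W on a 1D lattice of n sites with
    Dirichlet boundaries; Delta is the discrete Laplacian and W is
    taken diagonal (a strictly local potential) for simplicity."""
    lap = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return -gamma * lap + np.diag(w_diag)

def states_below_zero(H):
    """Number of discrete eigenvalues below zero."""
    return int(np.sum(np.linalg.eigvalsh(H) < 0.0))
```

With no potential the kinetic part is positive, so there are no states below zero; a single deep attractive site (a short-range W) binds exactly one state, illustrating that it is the short-range part of W that controls the count.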
Jiang, Yingni
2018-03-01
Due to the high energy consumption of communication, energy saving in data centers must be enforced. But the lack of evaluation mechanisms has restrained progress on the energy-saving construction of data centers. In this paper, an energy saving evaluation index system for data centers was constructed on the basis of clarifying the influence factors. Based on the evaluation index system, the analytic hierarchy process was used to determine the weights of the evaluation indexes. Subsequently, a three-grade fuzzy comprehensive evaluation model was constructed to evaluate the energy saving system of data centers.
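The two computational steps named above, AHP weighting and fuzzy comprehensive evaluation, can be sketched minimally: the weights are the normalized principal eigenvector of the pairwise comparison matrix, and the fuzzy composite is the weight vector multiplied by the membership matrix. The matrices in the usage example are toy inputs, not the paper's index system.

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP index weights: the normalized principal eigenvector of the
    pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum()

def fuzzy_evaluate(weights, membership):
    """Fuzzy comprehensive evaluation: composite grade vector
    B = w . R, normalized over the evaluation grades."""
    b = np.asarray(weights) @ np.asarray(membership)
    return b / b.sum()
```

For a consistent 2x2 comparison matrix [[1, 2], [1/2, 1]] the weights come out as [2/3, 1/3], i.e., the first index is judged twice as important as the second.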
Biology learning evaluation model in Senior High Schools
Directory of Open Access Journals (Sweden)
Sri Utari
2017-06-01
Full Text Available The study aimed to develop a Biology learning evaluation model in senior high schools that referred to the research and development model by Borg & Gall and the logic model. The evaluation model included the components of input, activities, output and outcomes. The development procedures involved a preliminary study in the form of observation and a theoretical review regarding Biology learning evaluation in senior high schools. The product development was carried out by designing an evaluation model, designing an instrument, trying out the instrument and performing the implementation. The instrument try-out involved teachers and students from Grade XII in senior high schools located in the City of Yogyakarta. For the data gathering technique and instrument, the researchers used an observation sheet, a questionnaire and a test. The questionnaire was applied in order to attain information regarding teacher performance, learning performance, classroom atmosphere and scientific attitude; on the other hand, the test was applied in order to attain information regarding Biology concept mastery. Then, for the analysis of the instrument construct, the researchers performed confirmatory factor analysis by means of Lisrel 0.80 software and the results of this analysis showed that the evaluation instrument was valid and reliable. The construct validity was between 0.43-0.79 while the reliability of the measurement model was between 0.88-0.94. Last but not least, the model feasibility test showed that the theoretical model was supported by the empirical data.
Csabai, Dávid; Wiborg, Ove; Czéh, Boldizsár
2018-01-01
Stressful experiences can induce structural changes in neurons of the limbic system. These cellular changes contribute to the development of stress-induced psychopathologies like depressive disorders. In the prefrontal cortex of chronically stressed animals, reduced dendritic length and spine loss have been reported. This loss of dendritic material should consequently result in synapse loss as well, because of the reduced dendritic surface. But so far, no one studied synapse numbers in the prefrontal cortex of chronically stressed animals. Here, we examined synaptic contacts in rats subjected to an animal model for depression, where animals are exposed to a chronic stress protocol. Our hypothesis was that long term stress should reduce the number of axo-spinous synapses in the medial prefrontal cortex. Adult male rats were exposed to daily stress for 9 weeks and afterward we did a post mortem quantitative electron microscopic analysis to quantify the number and morphology of synapses in the infralimbic cortex. We analyzed asymmetric (Type I) and symmetric (Type II) synapses in all cortical layers in control and stressed rats. We also quantified axon numbers and measured the volume of the infralimbic cortex. In our systematic unbiased analysis, we examined 21,000 axon terminals in total. We found the following numbers in the infralimbic cortex of control rats: 1.15 × 10⁹ asymmetric synapses, 1.06 × 10⁸ symmetric synapses and 1.00 × 10⁸ myelinated axons. The density of asymmetric synapses was 5.5/μm³ and the density of symmetric synapses was 0.5/μm³. Average synapse membrane length was 207 nm and the average axon terminal membrane length was 489 nm. Stress reduced the number of synapses and myelinated axons in the deeper cortical layers, while synapse membrane lengths were increased. These stress-induced ultrastructural changes indicate that neurons of the infralimbic cortex have reduced cortical network connectivity. Such reduced network connectivity is
Directory of Open Access Journals (Sweden)
Guénola Ricard
2010-11-01
Full Text Available A large fraction of genome variation between individuals is comprised of submicroscopic copy number variation of genomic DNA segments. We assessed the relative contribution of structural changes and gene dosage alterations on phenotypic outcomes with mouse models of Smith-Magenis and Potocki-Lupski syndromes. We phenotyped mice with 1n (Deletion/+), 2n (+/+), 3n (Duplication/+), and balanced 2n compound heterozygous (Deletion/Duplication) copies of the same region. Parallel to the observations made in humans, such variation in gene copy number was sufficient to generate phenotypic consequences: in a number of cases diametrically opposing phenotypes were associated with gain versus loss of gene content. Surprisingly, some neurobehavioral traits were not rescued by restoration of the normal gene copy number. Transcriptome profiling showed that a highly significant propensity of transcriptional changes map to the engineered interval in the five assessed tissues. A statistically significant overrepresentation of the genes mapping to the entire length of the engineered chromosome was also found in the top-ranked differentially expressed genes in the mice containing rearranged chromosomes, regardless of the nature of the rearrangement, an observation robust across different cell lineages of the central nervous system. Our data indicate that a structural change at a given position of the human genome may affect not only locus and adjacent gene expression but also "genome regulation." Furthermore, structural change can cause the same perturbation in particular pathways regardless of gene dosage. Thus, the presence of a genomic structural change, as well as gene dosage imbalance, contributes to the ultimate phenotype.
Applying the social relations model to self and peer evaluations
Greguras, G.J.; Robie, C.; Born, M.Ph.
2001-01-01
Peer evaluations of performance increasingly are being used to make organizational decisions and to provide individuals with performance related feedback. Using Kenny's social relations model (SRM), data from 14 teams of undergraduate students who completed performance ratings of themselves and
Industrial Waste Management Evaluation Model Version 3.1
IWEM is a screening-level ground-water model designed to simulate contaminant fate and transport. IWEM v3.1 is the latest version of the IWEM software, which includes additional tools to evaluate the beneficial use of industrial materials.
Using Models of Cognition in HRI Evaluation and Design
National Research Council Canada - National Science Library
Goodrich, Michael A
2004-01-01
...) guide the construction of experiments. In this paper, we present an information processing model of cognition that we have used extensively in designing and evaluating interfaces and autonomy modes...
Evaluation of global luminous efficacy models for Florianopolis, Brazil
Energy Technology Data Exchange (ETDEWEB)
De Souza, Roberta G.; Pereira, Fernando O.R. [Universidade Federal de Santa Catarina, Florianopolis (Brazil). Laboratorio de Conforto Ambiental, Dpto. de Arquitetura; Robledo, Luis [Universidad Politecnica de Madrid, Madrid (Spain). E.P.E.S. Ciencias Ambientales; Soler, Alfonso [Universidad Politecnica de Madrid, Madrid (Spain). E.P.E.S. Ciencias Ambientales and Dpto. de Fisica e Instalaciones Aplicadas, E.T.S. de Arquitectura
2006-10-15
Several global luminous efficacy models have been tested with measured daylight data obtained for Florianopolis, Southern Brazil. The models have been used with their original coefficients, as given by their authors, and also with local coefficients obtained when the models were optimized with the data measured in Florianopolis. The evaluation of the different models has been carried out for three sky categories, according to a higher or lower presence of clouds. For clear sky, the models tested have been compared with a proposed polynomial model in the solar altitude, obtained by the best fit of the experimental points for Florianopolis. It has been shown that the model coefficients have a local character. If those models are used with local coefficients, no single model works better than the others for all sky types; rather, a different model could be recommended for each sky category. (author)
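The local-coefficient fitting described above can be sketched as follows: a second-order polynomial luminous efficacy model in solar altitude, fitted by least squares. The altitude/efficacy values and the quadratic form are illustrative assumptions, not the measured data from the study.

```python
import numpy as np

# Hypothetical clear-sky observations: global luminous efficacy
# K = E_v / E_e [lm/W] versus solar altitude h [degrees].
h = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
K = np.array([95.0, 102.0, 107.0, 110.5, 112.5, 113.5, 114.0])

# Local fit: 2nd-order polynomial in solar altitude
coeffs = np.polyfit(h, K, deg=2)
K_model = np.polyval(coeffs, h)

# Root-mean-square deviation of the locally fitted model
rmsd = float(np.sqrt(np.mean((K_model - K) ** 2)))
```

Comparing the RMSD of such a local fit against each published model run with its original coefficients is the kind of evaluation the abstract reports.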
Model of service-oriented catering supply chain performance evaluation
Gou, Juanqiong; Shen, Guguan; Chai, Rui
2013-01-01
Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a logistics and information platform. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering ...
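The crisp eigenvector step that underlies fuzzy AHP can be sketched as follows. The pairwise comparison matrix is hypothetical, and a fuzzy AHP variant would replace its entries with fuzzy numbers before deriving weights:

```python
import numpy as np

# Hypothetical 3-criteria pairwise comparison matrix (Saaty scale);
# entry A[i, j] is the judged importance of criterion i over j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

# Priority weights = normalized principal eigenvector
eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, i].real)
weights = w / w.sum()

# Saaty consistency check; RI = 0.58 is the random index for n = 3
n = A.shape[0]
CI = (eigvals[i].real - n) / (n - 1)
CR = CI / 0.58   # judgments are acceptable when CR < 0.1
```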
A New Software Quality Model for Evaluating COTS Components
Adnan Rawashdeh; Bassem Matalkah
2006-01-01
Studies show that COTS-based (commercial off-the-shelf) systems built recently exceed 40% of all developed software systems. Therefore, a model that ensures the quality characteristics of such systems becomes a necessity. Among the most critical processes in COTS-based systems are the evaluation and selection of the COTS components. There are several existing quality models used to evaluate software systems in general; however, none of them is dedicated to COTS-based s...
Directory of Open Access Journals (Sweden)
L. S. Vidal
Full Text Available Abstract Captive animals exhibit stereotypic pacing in response to multiple causes, including the inability to escape from human contact. Environmental enrichment techniques can minimize pacing expression. By using an individual-based approach, we addressed whether the amount of time two males and a female jaguar (Panthera onca) devote to pacing varied with the number of visitors and tested the effectiveness of cinnamon and black pepper in reducing pacing. The amount of time that all jaguars engaged in pacing increased significantly with the number of visitors. Despite the difference between the males regarding age and housing conditions, both devoted significantly less time to pacing following the addition of both spices, which indicates their suitability as enrichment techniques. Mean time devoted to pacing among the treatments did not differ for the female. Our findings point to the validity of individual-based approaches, as they can reveal how suitable olfactory stimuli are for minimizing stereotypies irrespective of particular traits.
Directory of Open Access Journals (Sweden)
Thomas O Crawford
Full Text Available The universal presence of a gene (SMN2) nearly identical to the mutated SMN1 gene responsible for Spinal Muscular Atrophy (SMA) has proved an enticing incentive to therapeutics development. Early disappointments from putative SMN-enhancing agent clinical trials have increased interest in improving the assessment of SMN expression in blood as an early "biomarker" of treatment effect. A cross-sectional, single visit, multi-center design assessed SMN transcript and protein in 108 SMA and 22 age- and gender-matched healthy control subjects, while motor function was assessed by the Modified Hammersmith Functional Motor Scale (MHFMS). Enrollment selectively targeted a broad range of SMA subjects that would permit maximum power to distinguish the relative influence of SMN2 copy number, SMA type, present motor function, and age. SMN2 copy number and levels of full-length SMN2 transcripts correlated with SMA type, and like SMN protein levels, were lower in SMA subjects compared to controls. No measure of SMN expression correlated strongly with MHFMS. A key finding is that SMN2 copy number, levels of transcript and protein showed no correlation with each other. This is a prospective study that uses the most advanced techniques of SMN transcript and protein measurement in a large selectively-recruited cohort of individuals with SMA. There is a relationship between measures of SMN expression in blood and SMA type, but not a strong correlation to motor function as measured by the MHFMS. Low SMN transcript and protein levels in the SMA subjects relative to controls suggest that these measures of SMN in accessible tissues may be amenable to an "early look" for target engagement in clinical trials of putative SMN-enhancing agents. Full-length SMN transcript abundance may provide insight into the molecular mechanism of phenotypic variation as a function of SMN2 copy number. Clinicaltrials.gov NCT00756821.
DETRA: Model description and evaluation of model performance
International Nuclear Information System (INIS)
Suolanen, V.
1996-01-01
The computer code DETRA is a generic tool for environmental transfer analyses of radioactive or stable substances. The code has been applied for various purposes, mainly problems related to the biospheric transfer of radionuclides both in safety analyses of disposal of nuclear wastes and in consideration of foodchain exposure pathways in the analyses of off-site consequences of reactor accidents. For each specific application an individually tailored conceptual model can be developed. The biospheric transfer analyses performed by the code are typically carried out for terrestrial, aquatic and food chain applications. 21 refs, 35 figs, 15 tabs
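A biospheric transfer analysis of the kind DETRA performs reduces to a system of first-order compartment equations. The sketch below is a minimal stand-in, not the DETRA code: a three-compartment soil-vegetation-animal chain with hypothetical transfer and decay rates, integrated by forward Euler.

```python
import numpy as np

# Hypothetical first-order rate constants [1/day]
k_soil_veg = 0.01    # soil -> vegetation uptake
k_veg_animal = 0.05  # vegetation -> animal intake
lam = 0.002          # radioactive decay, acting on every compartment

# dy/dt = A @ y for y = [soil, vegetation, animal]
A = np.array([
    [-(k_soil_veg + lam), 0.0, 0.0],
    [k_soil_veg, -(k_veg_animal + lam), 0.0],
    [0.0, k_veg_animal, -lam],
])

# Forward-Euler integration of a unit deposition onto soil over 100 days
y = np.array([1.0, 0.0, 0.0])
dt = 0.1
for _ in range(int(100 / dt)):
    y = y + dt * (A @ y)

total = float(y.sum())   # only decay removes activity from the system
```

Because the transfer terms conserve activity, the total inventory decays at exactly the rate lam, which gives a simple correctness check on the integration.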
Chen, Yi; Pouillot, Régis; S Burall, Laurel; Strain, Errol A; Van Doren, Jane M; De Jesus, Antonio J; Laasri, Anna; Wang, Hua; Ali, Laila; Tatavarthy, Aparna; Zhang, Guodong; Hu, Lijun; Day, James; Sheth, Ishani; Kang, Jihun; Sahu, Surasri; Srinivasan, Devayani; Brown, Eric W; Parish, Mickey; Zink, Donald L; Datta, Atin R; Hammack, Thomas S; Macarisin, Dumitru
2017-01-16
A precise and accurate method for enumeration of low levels of Listeria monocytogenes in foods is critical to a variety of studies. In this study, paired comparison of most probable number (MPN) and direct plating enumeration of L. monocytogenes was conducted on a total of 1730 outbreak-associated ice cream samples that were naturally contaminated with low levels of L. monocytogenes. MPN was performed on all 1730 samples. Direct plating was performed on all samples using the RAPID'L.mono (RLM) agar (1600 samples) and agar Listeria Ottaviani and Agosti (ALOA; 130 samples). Probabilistic analysis with a Bayesian inference model was used to compare paired direct plating and MPN estimates of L. monocytogenes in ice cream samples because assumptions implicit in ordinary least squares (OLS) linear regression analyses were not met for such a comparison. The probabilistic analysis revealed good agreement between the MPN and direct plating estimates, and this agreement showed that the MPN schemes and direct plating schemes using ALOA or RLM evaluated in the present study were suitable for enumerating low levels of L. monocytogenes in these ice cream samples. The statistical analysis further revealed that OLS linear regression analyses of direct plating and MPN data did introduce bias that incorrectly characterized systematic differences between estimates from the two methods. Published by Elsevier B.V.
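The MPN side of the comparison rests on a simple likelihood: a tube is positive if it received at least one organism, so P(positive) = 1 - exp(-lam*v) for concentration lam and inoculum v. A minimal maximum-likelihood MPN sketch follows, with a hypothetical dilution series rather than the ice cream data:

```python
import numpy as np

# Hypothetical 3-tube dilution series: inoculum per tube [g],
# tubes inoculated, and tubes positive at each dilution.
volumes = np.array([10.0, 1.0, 0.1])
n_tubes = np.array([3, 3, 3])
positive = np.array([3, 1, 0])

# Binomial log-likelihood with Poisson tube-positivity probability,
# maximized over a dense grid of candidate concentrations lam [per g].
lam = np.exp(np.linspace(-6.0, 3.0, 4000))
p_pos = 1.0 - np.exp(-np.outer(lam, volumes))
p_pos = np.clip(p_pos, 1e-12, 1.0 - 1e-12)
log_lik = (positive * np.log(p_pos)
           + (n_tubes - positive) * np.log(1.0 - p_pos)).sum(axis=1)
mpn_per_g = float(lam[np.argmax(log_lik)])
```

For this 3-1-0 pattern the estimate lands near 0.43 per g, in line with standard three-tube MPN tables scaled to these volumes.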
Ground-water transport model selection and evaluation guidelines
International Nuclear Information System (INIS)
Simmons, C.S.; Cole, C.R.
1983-01-01
Guidelines are being developed to assist potential users with selecting appropriate computer codes for ground-water contaminant transport modeling. The guidelines are meant to assist managers with selecting appropriate predictive models for evaluating either arid or humid low-level radioactive waste burial sites. Evaluation test cases in the form of analytical solutions to fundamental equations and experimental data sets have been identified and recommended to ensure adequate code selection, based on accurate simulation of relevant physical processes. The recommended evaluation procedures will consider certain technical issues related to the present limitations in transport modeling capabilities. A code-selection plan will depend on identifying problem objectives, determining the extent of collectible site-specific data, and developing a site-specific conceptual model for the involved hydrology. Code selection will be predicated on steps for developing an appropriate systems model. This paper will review the progress in developing those guidelines. 12 references
Geerts, L; Adriaens, E; Alépée, N; Guest, R; Willoughby, J A; Kandarova, H; Drzewiecka, A; Fochtman, P; Verstraelen, S; Van Rompay, A R
2017-09-21
Assessment of ocular irritation is a regulatory requirement in safety evaluation of industrial and consumer products. Although a number of in vitro ocular irritation assays exist, none are capable of fully categorizing chemicals as stand-alone assays. Therefore, the CEFIC-LRI-AIMT6-VITO CON4EI (CONsortium for in vitro Eye Irritation testing strategy) project was developed to assess the reliability of eight in vitro test methods and computational models as well as establishing an optimal tiered-testing strategy. For three computational models (Toxtree, and Case Ultra EYE_DRAIZE and EYE_IRR) performance parameters were calculated. Coverage ranged from 15 to 58%. Coverage was 2 to 3.4 times higher for liquids than for solids. The lowest number of false positives (5%) was reached with EYE_IRR; this model however also gave a high number of false negatives (46%). The lowest number of false negatives (25%) was seen with Toxtree; for liquids Toxtree predicted the lowest number of false negatives (11%), for solids EYE_DRAIZE did (17%). It can be concluded that the training sets should be enlarged with high quality data. The tested models are not yet sufficiently powerful for stand-alone evaluations, but that they can surely become of value in an integrated weight-of-evidence approach in hazard assessment. Copyright © 2017 Elsevier Ltd. All rights reserved.
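The performance parameters quoted above (coverage, false positive and false negative rates) can be made concrete with a small helper; the counts in the example are hypothetical, not the CON4EI results:

```python
# Coverage = fraction of chemicals within the model's applicability
# domain; FP/FN rates are expressed relative to the predictions made.
def performance(n_chemicals, n_predicted, false_pos, false_neg):
    coverage = n_predicted / n_chemicals
    fp_rate = false_pos / n_predicted
    fn_rate = false_neg / n_predicted
    return coverage, fp_rate, fn_rate

# Hypothetical tally: 80 chemicals, 40 inside the domain,
# 2 false positives and 10 false negatives among those 40.
cov, fp, fn = performance(n_chemicals=80, n_predicted=40,
                          false_pos=2, false_neg=10)
```

A model with low coverage can still look accurate on the chemicals it does predict, which is why the abstract reports coverage separately from the error rates.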
Lei, Da; Lin, Mian; Li, Yun; Jiang, Wenbin
2018-03-01
An accurate model of the dynamic contact angle θ_d is critical for the calculation of capillary force in applications like enhanced oil recovery, where the capillary number Ca ranges from 10⁻¹⁰ to 10⁻⁵ and the Bond number Bo is less than 10⁻⁴. The rate-dependence of the dynamic contact angle under such conditions remains blurred, and is the main target of this study. Featuring pressure control and interface tracking, the innovative experimental system presented in this work achieves the desired ranges of Ca and Bo, and enables the direct optical measurement of dynamic contact angles in capillaries as tiny as 40 × 20 (width × height) μm and 80 × 20 μm. The advancing and receding processes of wetting and nonwetting liquids were tested. The dynamic contact angle was confirmed velocity-independent for 10⁻⁹ < Ca < 10⁻⁵ (contact line velocity V = 0.135-490 μm/s) and it can be described by a two-angle model with desirable accuracy. A modified two-angle model was developed and an empirical form was obtained from experiments. For different liquids contacting the same surface, the advancing angle θ_adv approximately equals the static contact angle θ_o. The receding angle θ_rec was found to be a linear function of θ_adv, in good agreement with our and other experiments from the literature. Copyright © 2018 Elsevier Inc. All rights reserved.
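The dimensionless groups and the two-angle model above can be sketched directly; the fluid properties below are generic water-like values, not the paper's measurements:

```python
def capillary_number(mu, v, sigma):
    """Ca = mu*V/sigma: viscous forces relative to capillary forces."""
    return mu * v / sigma

def bond_number(drho, g, L, sigma):
    """Bo = drho*g*L^2/sigma: gravity relative to capillary forces."""
    return drho * g * L ** 2 / sigma

def two_angle_model(direction, theta_adv, theta_rec):
    """Rate-independent model: the contact angle depends only on
    whether the contact line advances or recedes."""
    return theta_adv if direction == "advancing" else theta_rec

# Water in a 20 um channel at contact-line velocity V = 100 um/s
Ca = capillary_number(mu=1.0e-3, v=100e-6, sigma=0.072)
Bo = bond_number(drho=1000.0, g=9.81, L=20e-6, sigma=0.072)
```

These illustrative values fall inside the regimes quoted in the abstract (10⁻⁹ < Ca < 10⁻⁵, Bo < 10⁻⁴), which is the regime where the rate-independence of θ_d was confirmed.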
A Universal Model for the Normative Evaluation of Internet Information.
Spence, E.H.
2009-01-01
Beginning with the initial premise that the Internet has a global character, the paper will argue that the normative evaluation of digital information on the Internet necessitates an evaluative model that is itself universal and global in character (I agree, therefore, with Gorniak-Kocikowska’s
Synthesis, evaluation and molecular modelling studies of some ...
Indian Academy of Sciences (India)
Home; Journals; Journal of Chemical Sciences; Volume 122; Issue 2. Synthesis, evaluation and molecular modelling studies of some novel 3-(3 ... The compounds have been characterized on the basis of elemental analysis and spectral data. All the compounds were evaluated for their HIV-1 RT inhibitory activity. Among ...
LINDOZ model for Finland environment: Model description and evaluation of model performance
International Nuclear Information System (INIS)
Galeriu, D.; Apostoaie, A.I.; Mocanu, N.; Paunescu, N.
1996-01-01
LINDOZ model was developed as a realistic assessment tool for radioactive contamination of the environment. It was designed to produce estimates for the concentration of the pollutant in different compartments of the terrestrial ecosystem (soil, vegetation, animal tissue, and animal products), and to evaluate human exposure to the contaminant (concentration in whole human body, and dose to humans) from inhalation, ingestion and external irradiation. The user can apply LINDOZ for both routine and accidental type of releases. 2 figs, 2 tabs
Evaluating complex fusion systems based on causal probabilistic models
Mignet, F.; Pavlin, G.; de Oude, P.; da Costa, P.C.G.
2013-01-01
The paper evaluates a class of fusion systems that support interpretation of complex patterns consisting of large numbers of heterogeneous data obtained from distributed sources at different points in time. The fusion solutions in such domains must be able to process large quantities of
Strohl, Bonnie, Comp.
This bibliography contains annotations of 110 journal articles on topics related to library collection evaluation techniques, including academic library collections, access-vs-ownership, "Books for College Libraries," business collections, the OCLC/AMIGOS Collection Analysis CD, circulation data, citation-checking, collection bias,…
Davids, Mogamat Razeen; Harvey, Justin; Halperin, Mitchell L.; Chikte, Usuf M. E.
2015-01-01
The usability of computer interfaces has a major influence on learning. Optimising the usability of e-learning resources is therefore essential. However, this may be neglected because of time and monetary constraints. User testing is a common approach to usability evaluation and involves studying typical end-users interacting with the application…
Standardizing the performance evaluation of short-term wind prediction models
DEFF Research Database (Denmark)
Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.
2005-01-01
Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project), where the protocol has been used to evaluate more than 10 prediction systems.
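A standardized evaluation of this kind typically normalizes errors by installed capacity and reports skill against a reference model such as persistence. A toy sketch, with invented data and an assumed installed capacity:

```python
import numpy as np

Pinst = 10.0  # MW, installed capacity used for normalization
measured = np.array([4.0, 5.0, 6.5, 6.0, 5.5])
forecast = np.array([4.5, 5.2, 6.0, 6.3, 5.1])

def nmae(y, yhat, norm):
    """Normalized mean absolute error."""
    return float(np.mean(np.abs(yhat - y)) / norm)

def nrmse(y, yhat, norm):
    """Normalized root-mean-square error."""
    return float(np.sqrt(np.mean((yhat - y) ** 2)) / norm)

# Persistence reference: the forecast for t+1 is the last measured value
persist = measured[:-1]
skill = 1.0 - nmae(measured[1:], forecast[1:], Pinst) \
            / nmae(measured[1:], persist, Pinst)
```

A positive skill score means the forecast beats persistence; a standardized protocol fixes the normalization, the reference model, and the horizon so that different prediction systems can be compared fairly.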
Energy Technology Data Exchange (ETDEWEB)
Ramírez-Hernández, Abelardo, E-mail: abelardo@anl.gov; Pablo, Juan J. de, E-mail: depablo@uchicago.edu [Materials Science Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439 (United States); Institute for Molecular Engineering, The University of Chicago, Chicago, Illinois 60637 (United States); Peters, Brandon L.; Andreev, Marat; Schieber, Jay D., E-mail: schieber@iit.edu [Institute for Molecular Engineering, The University of Chicago, Chicago, Illinois 60637 (United States)
2015-12-28
A theoretically informed entangled polymer simulation approach is presented for description of the linear and non-linear rheology of entangled polymer melts. The approach relies on a many-chain representation and introduces the topological effects that arise from the non-crossability of molecules through effective fluctuating interactions, mediated by slip-springs, between neighboring pairs of macromolecules. The total number of slip-springs is not preserved but, instead, it is controlled through a chemical potential that determines the average molecular weight between entanglements. The behavior of the model is discussed in the context of a recent theory for description of homogeneous materials, and its relevance is established by comparing its predictions to experimental linear and non-linear rheology data for a series of well-characterized linear polyisoprene melts. The results are shown to be in quantitative agreement with experiment and suggest that the proposed formalism may also be used to describe the dynamics of inhomogeneous systems, such as composites and copolymers. Importantly, the fundamental connection made here between our many-chain model and the well-established, thermodynamically consistent single-chain mean-field models provides a path to systematic coarse-graining for prediction of polymer rheology in structurally homogeneous and heterogeneous materials.
Ramírez-Hernández, Abelardo; Peters, Brandon L.; Andreev, Marat; Schieber, Jay D.; de Pablo, Juan J.
2015-12-01
A theoretically informed entangled polymer simulation approach is presented for description of the linear and non-linear rheology of entangled polymer melts. The approach relies on a many-chain representation and introduces the topological effects that arise from the non-crossability of molecules through effective fluctuating interactions, mediated by slip-springs, between neighboring pairs of macromolecules. The total number of slip-springs is not preserved but, instead, it is controlled through a chemical potential that determines the average molecular weight between entanglements. The behavior of the model is discussed in the context of a recent theory for description of homogeneous materials, and its relevance is established by comparing its predictions to experimental linear and non-linear rheology data for a series of well-characterized linear polyisoprene melts. The results are shown to be in quantitative agreement with experiment and suggest that the proposed formalism may also be used to describe the dynamics of inhomogeneous systems, such as composites and copolymers. Importantly, the fundamental connection made here between our many-chain model and the well-established, thermodynamically consistent single-chain mean-field models provides a path to systematic coarse-graining for prediction of polymer rheology in structurally homogeneous and heterogeneous materials.
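The chemical-potential control of the slip-spring number can be illustrated with a toy grand-canonical Metropolis scheme. This sketch uses independent, zero-energy slip-spring sites, a drastic simplification of the actual many-chain model, where springs couple to chain conformations:

```python
import numpy as np

rng = np.random.default_rng(0)
beta_mu = -1.0       # hypothetical mu/kT; sets the mean slip-spring count
n_sites = 2000
occupied = np.zeros(n_sites, dtype=bool)

def sweep(occupied):
    for i in rng.integers(0, n_sites, size=n_sites):
        if occupied[i]:
            # destruction move: accept with min(1, e^{-beta*mu})
            if rng.random() < min(1.0, np.exp(-beta_mu)):
                occupied[i] = False
        else:
            # creation move: accept with min(1, e^{+beta*mu})
            if rng.random() < min(1.0, np.exp(beta_mu)):
                occupied[i] = True

for _ in range(100):   # equilibration
    sweep(occupied)
samples = []
for _ in range(100):   # production
    sweep(occupied)
    samples.append(occupied.mean())

mean_occupancy = float(np.mean(samples))
# Detailed balance gives P(occ)/P(emp) = e^{beta*mu} per site
target = 1.0 / (1.0 + np.exp(-beta_mu))
```

Raising the chemical potential raises the equilibrium number of slip-springs and thus lowers the average molecular weight between entanglements, which is the control knob the abstract describes.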
International Nuclear Information System (INIS)
Chagas Moura, Márcio das; Azevedo, Rafael Valença; Droguett, Enrique López; Chaves, Leandro Rego; Lins, Isis Didier
2016-01-01
Occupational accidents pose several negative consequences to employees, employers, the environment and people surrounding the locale where the accident takes place. Some types of accidents correspond to low frequency-high consequence (long sick leaves) events, so classical statistical approaches are ineffective in these cases because the available datasets are generally sparse and contain censored recordings. In this context, we propose a Bayesian population variability method for the estimation of the distributions of the rates of accident and recovery. Given these distributions, a Markov-based model is used to estimate the uncertainty over the expected number of accidents and the work time loss. Thus, the use of Bayesian analysis along with the Markov approach aims at investigating future trends regarding occupational accidents in a workplace as well as enabling a better management of the labor force and prevention efforts. One application example is presented in order to validate the proposed approach; this case uses available data gathered from a hydropower company in Brazil. - Highlights: • This paper proposes a Bayesian method to estimate rates of accident and recovery. • The model requires simple data likely to be available in the company database. • These results show the proposed model is not too sensitive to the prior estimates.
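The Markov layer of the approach can be sketched with a two-state (working / on sick leave) chain; the daily rates below are hypothetical point values, whereas the paper draws them from Bayesian population-variability distributions:

```python
import numpy as np

rate_accident = 0.001   # per worker-day (hypothetical)
rate_recovery = 0.05    # per day on leave (hypothetical)

# Daily transition matrix over states [working, on sick leave]
P = np.array([
    [1 - rate_accident, rate_accident],
    [rate_recovery, 1 - rate_recovery],
])

# Evolve 100 workers, all initially at work, over one year
state = np.array([100.0, 0.0])
lost_days = 0.0
for _ in range(365):
    state = state @ P
    lost_days += state[1]   # worker-days on leave accrued that day

# Stationary expected number on leave
steady_leave = 100 * rate_accident / (rate_accident + rate_recovery)
```

Propagating the Bayesian rate distributions through this chain (e.g. by Monte Carlo over rate samples) would turn the point estimate of lost worker-days into the uncertainty statement the abstract targets.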
Evaluating a novel resident role-modelling programme.
Sternszus, Robert; Steinert, Yvonne; Bhanji, Farhan; Andonian, Sero; Snell, Linda S
2017-05-09
Role modelling is a fundamental method by which students learn from residents. To our knowledge, however, resident-as-teacher curricula have not explicitly addressed resident role modelling. The purpose of this project was to design, implement and evaluate an innovative programme to teach residents about role modelling. The authors designed a resident role-modelling programme and incorporated it into the 2015 and 2016 McGill University resident-as-teacher curriculum. Influenced by experiential and social learning theories, the programme incorporated flipped-classroom and simulation approaches to teach residents to be aware and deliberate role models. Outcomes were assessed through a pre- and immediate post-programme questionnaire evaluating reaction and learning, a delayed post-programme questionnaire evaluating learning, and a retrospective pre-post questionnaire (1 month following the programme) evaluating self-reported behaviour changes. Thirty-three of 38 (87%) residents who participated in the programme completed the evaluation, with 25 residents (66%) completing all questionnaires. Participants rated the programme highly on a five-point Likert scale (where 1 = not helpful and 5 = very helpful; mean score, M = 4.57; standard deviation, SD = 0.50), and showed significant improvement in their perceptions of their importance as role models and their knowledge of deliberate role modelling. Residents also reported an increased use of deliberate role-modelling strategies 1 month after completing the programme. The incorporation of resident role modelling into our resident-as-teacher curriculum positively influenced the participants' perceptions of their role-modelling abilities. This programme responds to a gap in resident training and has the potential to guide further programme development in this important and often overlooked area. © 2017 John Wiley & Sons
Matsuyama, Yusuke; Fujiwara, Takeo; Aida, Jun; Watt, Richard G; Kondo, Naoki; Yamamoto, Tatsuo; Kondo, Katsunori; Osaka, Ken
2016-12-01
From a life-course perspective, adverse childhood experiences (ACEs) such as childhood abuse are known risk factors for adult diseases and death throughout life. ACEs could also cause poor dental health in later life because they could induce poor dental health in childhood, initiate unhealthy behaviors, and lower immune and physiological functions. However, it is not known whether ACEs have a longitudinal adverse effect on dental health in older age. This study aimed to investigate the association between experience of childhood abuse up until the age of 18 and current number of remaining teeth among a sample of older Japanese adults. A retrospective cohort study was conducted using data from the Japan Gerontological Evaluation Study (JAGES), a large-scale, self-reported survey in 2013 including 27 525 community-dwelling Japanese aged ≥65 years (response rate=71.1%). The outcome, current number of remaining teeth, was treated categorically: ≥20, 10-19, 5-9, 1-4, and no teeth. Childhood abuse was defined as any experience of physical abuse, psychological abuse, or psychological neglect up until the age of 18 years. Ordered logistic regression models were applied. Of the 25 189 respondents who indicated their number of remaining teeth (mean age: 73.9; male: 46.5%), 14.8% had experience of childhood abuse. Distributions of ≥20, 10-19, 5-9, 1-4, and no teeth were as follows: 46.6%, 22.0%, 11.4%, 8.2%, and 11.8% among respondents with childhood abuse, versus 52.3%, 21.3%, 10.3%, 6.6%, and 9.5% among respondents without childhood abuse. Childhood abuse was significantly associated with fewer remaining teeth after adjusting for covariates including socioeconomic status (odds ratio=1.14; 95% confidence interval: 1.06, 1.22). Childhood abuse could have a longitudinal adverse effect on later dental health in older age. This study emphasizes the importance of early life experiences on dental health throughout later life. © 2016 John Wiley & Sons A/S. Published by
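The ordered logistic regression used above has a simple cumulative-logit structure. The sketch below reproduces that structure with hypothetical cutpoints, taking only the odds ratio of 1.14 from the abstract; higher categories here represent fewer remaining teeth:

```python
import numpy as np

def category_probs(cutpoints, eta):
    """Proportional-odds model: P(Y = k) for ordered categories,
    given linear predictor eta and ordered cutpoints."""
    cdf = 1.0 / (1.0 + np.exp(-(np.array(cutpoints) - eta)))
    cdf = np.concatenate(([0.0], cdf, [1.0]))
    return np.diff(cdf)

cutpoints = [-0.1, 0.9, 1.5, 2.2]   # hypothetical: 5 categories -> 4 cutpoints
beta_abuse = np.log(1.14)           # odds ratio 1.14 from the abstract

p_no_abuse = category_probs(cutpoints, eta=0.0)
p_abuse = category_probs(cutpoints, eta=beta_abuse)
```

Under the proportional-odds assumption, the single coefficient shifts every cumulative logit by the same amount, moving probability mass toward the worse (fewer-teeth) categories for exposed respondents.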
Faculty Performance Evaluation: The CIPP-SAPS Model.
Mitcham, Maralynne
1981-01-01
The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)
Evaluation of forest snow processes models (SnowMKIP2)
Nick Rutter; Richard Essery; John Pomeroy; Nuria Altimir; Kostas Andreadis; Ian Baker; Alan Barr; Paul Bartlett; Aaron Boone; Huiping Deng; Herve Douville; Emanuel Dutra; Kelly Elder; others
2009-01-01
Thirty-three snowpack models of varying complexity and purpose were evaluated across a wide range of hydrometeorological and forest canopy conditions at five Northern Hemisphere locations, for up to two winter snow seasons. Modeled estimates of snow water equivalent (SWE) or depth were compared to observations at forest and open sites at each location. Precipitation...
Using an ecosystem model to evaluate fisheries management ...
African Journals Online (AJOL)
A coral reef ecosystem simulation model, CAFFEE, was developed to evaluate the effects of fisheries management measures on coral reef ecosystem services and functioning, independently or combined with climate change impacts. As an example of the types of simulations available, we present model outputs for ...
Evaluating Emulation-based Models of Distributed Computing Systems
Energy Technology Data Exchange (ETDEWEB)
Jones, Stephen T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Gabert, Kasimir G. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Cyber Initiatives; Tarman, Thomas D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Emulytics Initiatives
2017-08-01
Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The variety of uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.
The fence experiment - a first evaluation of shelter models
DEFF Research Database (Denmark)
Peña, Alfredo; Bechmann, Andreas; Conti, Davide
2016-01-01
We present a preliminary evaluation of shelter models of different degrees of complexity using full-scale lidar measurements of the shelter on a vertical plane behind and orthogonal to a fence. Model results accounting for the distribution of the relative wind direction within the observed direct...
Evaluation of Performance of Predictive Models for Deoxynivalenol in Wheat
Fels, van der H.J.
2014-01-01
The aim of this study was to evaluate the performance of two predictive models for deoxynivalenol contamination of wheat at harvest in the Netherlands, including the use of weather forecast data and external model validation. Data were collected in a different year and from different wheat fields
Boussinesq Modeling of Wave Propagation and Runup over Fringing Coral Reefs, Model Evaluation Report
National Research Council Canada - National Science Library
Demirbilek, Zeki; Nwogu, Okey G
2007-01-01
..., for waves propagating over fringing reefs. The model evaluation had two goals: (a) investigate differences between laboratory and field characteristics of wave transformation processes over reefs, and (b...
Garcia, Florine; Folton, Nathalie; Oudin, Ludovic; Arnaud, Patrick
2015-04-01
Issues with water resource management result from both increasing demand and climate change. Low flows, droughts and, more generally, shortages of water are critically scrutinized. In this context, there is a need for tools to assist water agencies in the prediction and management of reference low-flows at gauged and ungauged catchment locations. IRSTEA developed GR2M-LoiEau, a conceptual distributed rainfall-runoff model, which is combined with a regionalized model of snow storage and melt. GR2M-LoiEau relies on two parameters which are regionalized and mapped throughout France. This model allows annual and monthly reference low-flows to be mapped. The input meteorological data come from the distributed mesoscale atmospheric analysis system SAFRAN, which provides daily solid and liquid precipitation and temperature data for the whole French territory. In order to fully exploit these daily meteorological data to estimate daily statistics on low flows, a new version of GR2M-LoiEau is being developed at a daily time step, yet keeping only a few regionalized parameters. The aim of this study is to design a comprehensive set of tests for comparing the low-flows obtained with the different regionalization methods used to estimate low-flow model parameters. As the new version of GR2M-LoiEau is not yet operational, the tests are made with GR4J (Perrin, 2002), a conceptual rainfall-runoff model which already provides daily estimates but involves four parameters that cannot easily be regionalized. Many studies have shown the good predictive performance of this model. This work includes two parts. On the one hand, good criteria must be identified to evaluate and compare model results, with good predictions expected not only for low flows and reference low-flows but also for annual means and high flows. On the other hand, two methods of regionalization will have to be compared to estimate model parameters. The first one is rough, all the
Directory of Open Access Journals (Sweden)
Y. Narita
2009-10-01
Full Text Available We develop an estimator for the magnetic helicity density, a measure of the spiral geometry of magnetic field lines, in the wave number domain, as a wave diagnostic tool based on multi-point measurements in space. The estimator is numerically tested with a synthetic data set and then applied to an observation of magnetic field fluctuations in the Earth foreshock region provided by the four-point measurements of the Cluster spacecraft. The energy and the magnetic helicity density are determined in the frequency and wave number domain, which allows us to identify the wave properties in the plasma rest frame after correcting for the Doppler shift. In the analyzed time interval, the dominant wave components propagate parallel to the mean magnetic field, away from the shock, at about the Alfvén speed, and exhibit a left-hand spatial rotation sense of helicity with respect to the propagation direction, which corresponds to a right-hand temporal rotation sense of polarization. These wave properties are well explained by the right-hand resonant beam instability as the driving mechanism in the foreshock. Cluster observations therefore allow detailed comparisons with various theories of waves and instabilities.
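The abstract's estimator works in the wave number domain using four-point Cluster data; a full multi-spacecraft implementation is beyond an abstract. As background, the classic single-point analogue (the Matthaeus-Goldstein reduced estimator, an assumption here rather than the paper's method) computes a normalized magnetic helicity spectrum from the two field components transverse to the sampling direction, with the sign giving the rotation sense:

```python
import numpy as np

def normalized_magnetic_helicity(by, bz, dt=1.0):
    """Reduced normalized magnetic helicity spectrum sigma_m(f) from two
    transverse magnetic field components sampled at interval dt.
    sigma_m lies in [-1, 1]; its sign indicates the rotation sense of the
    fluctuations (circular polarization gives |sigma_m| near 1)."""
    By = np.fft.rfft(by)
    Bz = np.fft.rfft(bz)
    freqs = np.fft.rfftfreq(len(by), d=dt)
    num = 2.0 * np.imag(By * np.conj(Bz))     # cross-spectral imaginary part
    den = np.abs(By) ** 2 + np.abs(Bz) ** 2   # transverse trace power
    sigma = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return freqs, sigma
```

A purely circularly polarized test signal yields sigma_m of plus or minus one at the wave frequency, mirroring the synthetic-data test described in the abstract.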
Directory of Open Access Journals (Sweden)
Benedict Yan
Full Text Available ALK is an established causative oncogenic driver in neuroblastoma, and is likely to emerge as a routine biomarker in neuroblastoma diagnostics. At present, the optimal strategy for clinical diagnostic evaluation of ALK protein, genomic and hotspot mutation status is not well-studied. We evaluated ALK immunohistochemical (IHC) protein expression using three different antibodies (ALK1, 5A4 and D5F3 clones), ALK genomic status using single-color chromogenic in situ hybridization (CISH), and ALK hotspot mutation status using conventional Sanger sequencing and a next-generation sequencing platform, the Ion Torrent Personal Genome Machine (IT-PGM), in archival formalin-fixed, paraffin-embedded neuroblastoma samples. We found a significant difference in IHC results using the three different antibodies, with the highest percentage of positive cases seen on D5F3 immunohistochemistry. Correlation with ALK genomic and hotspot mutational status revealed that the majority of D5F3 ALK-positive cases did not possess either ALK genomic amplification or hotspot mutations. Comparison of sequencing platforms showed a perfect correlation between conventional Sanger and IT-PGM sequencing. Our findings suggest that D5F3 immunohistochemistry, single-color CISH and IT-PGM sequencing are suitable assays for evaluation of ALK status in future neuroblastoma clinical trials.
Ergonomic evaluation model of operational room based on team performance
Directory of Open Access Journals (Sweden)
YANG Zhiyi
2017-05-01
Full Text Available A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of layout design schemes for the action stations in a multitasking operational room. The model calculates and compares the theoretical team-performance value of multiple layout schemes by considering such substantial influencing factors as communication frequency, distance, angle, importance, human cognitive characteristics and so on. An experiment was conducted to verify the proposed model against the criteria of completion time and accuracy rating. As illustrated by the experimental results, the proposed approach is conducive to the prediction and ergonomic evaluation of layout design schemes for the action station during early design stages, and provides a new theoretical method for the ergonomic evaluation, selection and optimization of layout design schemes.
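The abstract lists the factors entering the team-performance calculation (communication frequency, distance, angle, importance) but not the formula itself. A minimal illustrative sketch of one plausible form, a distance-weighted communication cost in which every name and weighting choice is hypothetical rather than taken from the paper:

```python
import math

def layout_score(stations, links):
    """Hypothetical layout cost: for each communication link, multiply
    communication frequency and task importance by the Euclidean distance
    between the two stations, then sum. Lower totals suggest a layout that
    places frequently communicating, important station pairs closer together.

    stations: dict mapping station name -> (x, y) position
    links: list of (station_a, station_b, frequency, importance) tuples
    """
    total = 0.0
    for a, b, frequency, importance in links:
        xa, ya = stations[a]
        xb, yb = stations[b]
        distance = math.hypot(xb - xa, yb - ya)
        total += frequency * importance * distance
    return total
```

Comparing `layout_score` across candidate layouts mirrors the abstract's idea of ranking multiple layout schemes by a computed team-performance value before any physical mock-up exists; the paper's actual model additionally incorporates angle and human cognitive characteristics.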