Directory of Open Access Journals (Sweden)
Y. H. Lee
2014-09-01
The TwO-Moment Aerosol Sectional microphysics model (TOMAS) has been integrated into the state-of-the-art general circulation model GISS ModelE2. TOMAS has the flexibility to select a size resolution as well as the lower size cutoff. A computationally efficient version of TOMAS is used here, which has 15 size bins covering 3 nm to 10 μm aerosol dry diameter. For each bin, it simulates the total aerosol number concentration and the mass concentrations of sulphate, pure elemental carbon (hydrophobic), mixed elemental carbon (hydrophilic), hydrophobic organic matter, hydrophilic organic matter, sea salt, mineral dust, ammonium, and aerosol-associated water. This paper provides a detailed description of the ModelE2-TOMAS model and evaluates it against various observations, including aerosol precursor gas concentrations, aerosol mass and number concentrations, and aerosol optical depths. Additionally, global budgets in ModelE2-TOMAS are compared with those of other global aerosol models, and the TOMAS model is compared to the default aerosol model in ModelE2, which is a bulk aerosol model. Overall, the ModelE2-TOMAS predictions are within the range of other global aerosol model predictions, and the model shows reasonable agreement with observations of sulphur species and other aerosol components as well as aerosol optical depth. However, ModelE2-TOMAS (as well as the bulk aerosol model) cannot capture the observed vertical distribution of sulphur dioxide over the Pacific Ocean, possibly due to overly strong convective transport. The TOMAS model successfully captures observed aerosol number concentrations and cloud condensation nuclei concentrations. Anthropogenic aerosol burdens in the bulk aerosol model running in the same host model as TOMAS (ModelE2) differ by a few percent to a factor of 2 regionally, mainly due to differences in aerosol processes including deposition, cloud processing, and emission parameterizations. Larger differences are found
Evaluation and intercomparison of the aerosol number concentrations and CCNs in global models
Fanourgakis, Georgios; Myriokefalitakis, Stelios; Kanakidou, Maria; Makkonen, Risto; Grini, Alf; Stier, Philip; Watson-Parris, Duncan; Schutgens, Nick; Neubauer, David; Lohmann, Ulrike; Nenes, Athanasis
2017-04-01
In this work, preliminary results on the current status of BACCHUS global modeling of aerosol number concentrations and cloud condensation nuclei (CCN) are presented and compared to observations. So far, simulation results from the TM4-ECPL, ECHAM-HAM, ECHAM6-HAM2 and NorESM models have become available. Hourly model results for the aerosol number concentrations and CCN concentrations at various supersaturation ratios, as well as their corresponding daily and monthly averaged values, are compared to measurements from nine ACTRIS sites for the years 2010-2015. CCN concentration persistence, obtained from the auto-correlation function of observational and model data, is compared. Seasonal variations are also considered in the present analysis. In order to identify any common biases against observations, the model results are further analyzed in terms of the particles' chemical composition and the set of hygroscopicity parameters used for the calculation of CCN. Annual mean surface-level number concentrations of various particle sizes and CCN at 0.2% supersaturation predicted by the models, along with their corresponding chemical composition, are presented and discussed.
Martinez, L C; Calzado, A
2016-01-01
A parametric model is used to calculate the CT numbers (H_i) of selected human tissues of known composition in two hybrid systems, one SPECT-CT and one PET-CT. Only one well-characterized substance, not necessarily tissue-like, needs to be scanned with the protocol of interest. The linear attenuation coefficients of these tissues at some energies of interest (μ_i) have been calculated from their tabulated compositions and the NIST databases. These coefficients have been compared with those calculated from the CT number with the bilinear model (μ_i^B). No relevant differences have been found for bones and lung. In the soft tissue region, the differences can be up to 5%. These discrepancies are attributed to the different chemical compositions assumed for the tissues by the two methods.
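The bilinear CT-number-to-attenuation conversion referred to in this abstract can be sketched as follows. This is an illustrative implementation only: the function name, the water attenuation value, and the `bone_scale` slope are assumptions, not values from the paper.

```python
def mu_from_ct_number(hu, mu_water, mu_air=0.0, bone_scale=0.5):
    """Bilinear conversion from CT number (HU) to linear attenuation
    coefficient. Below 0 HU, mu is interpolated linearly between air
    and water; above 0 HU, a different (here assumed) slope accounts
    for bone-like compositions."""
    if hu <= 0:
        # Inverts HU = 1000 * (mu - mu_water) / (mu_water - mu_air)
        return mu_water + hu * (mu_water - mu_air) / 1000.0
    return mu_water * (1.0 + bone_scale * hu / 1000.0)

# Water (0 HU) maps back to mu_water by construction:
print(mu_from_ct_number(0, mu_water=0.096))  # prints 0.096
```

The up-to-5% soft-tissue discrepancies reported above arise because such a two-segment fit cannot reproduce every tabulated tissue composition exactly.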
A study on the minimum number of loci required for genetic evaluation using a finite locus model
Directory of Open Access Journals (Sweden)
Fernando Rohan L
2004-07-01
For a finite locus model, Markov chain Monte Carlo (MCMC) methods can be used to estimate the conditional mean of genotypic values given phenotypes, which is also known as the best predictor (BP). When computationally feasible, this type of genetic prediction provides an elegant solution to the problem of genetic evaluation under non-additive inheritance, especially for crossbred data. Successful application of MCMC methods for genetic evaluation using finite locus models depends, among other factors, on the number of loci assumed in the model. The effect of the assumed number of loci on evaluations obtained by BP was investigated using data simulated with about 100 loci. For several small pedigrees, genetic evaluations obtained by best linear prediction (BLP) were compared to genetic evaluations obtained by BP. For BLP evaluation, used here as the standard of comparison, only the first and second moments of the joint distribution of the genotypic and phenotypic values must be known. These moments were calculated from the gene frequencies and genotypic effects used in the simulation model. BP evaluation requires the complete distribution to be known. For each model used for BP evaluation, the gene frequencies and genotypic effects, which completely specify the required distribution, were derived such that the genotypic mean, the additive variance, and the dominance variance were the same as in the simulation model. For lowly heritable traits, evaluations obtained by BP under models with up to three loci closely matched the evaluations obtained by BLP for both purebred and crossbred data. For highly heritable traits, models with up to six loci were needed to match the evaluations obtained by BLP.
Re-evaluation of Predictive Models in Light of New Data: Sunspot Number Version 2.0
Gkana, A.; Zachilas, L.
2016-10-01
The original version of the Zürich sunspot number (Sunspot Number Version 1.0) has been revised by an entirely new series (Sunspot Number Version 2.0). We re-evaluate the performance of our previously proposed models for predicting solar activity in the light of the revised data. We perform new monthly and yearly predictions using the Sunspot Number Version 2.0 as input data and compare them with our original predictions (using the Sunspot Number Version 1.0 series as input data). We show that our previously proposed models are still able to produce quite accurate solar-activity predictions despite the full revision of the Zürich Sunspot Number, indicating that there is no significant degradation in their performance. Extending our new monthly predictions (July 2013 - August 2015) by 50 time-steps (months) ahead in time (from September 2015 to October 2019), we provide evidence that we are heading into a period of dramatically low solar activity. Finally, our new future long-term predictions endorse our previous claim that a prolonged solar activity minimum is expected to occur, lasting up to the year ≈ 2100.
Zhang, K.; Kazil, J.; Feichter, J.
2009-04-01
Since its first version developed by Stier et al. (2005), the global aerosol-climate model ECHAM5-HAM has gone through further development and updates. The changes in the model include (1) a new time integration scheme for the condensation of the sulfuric acid gas on existing particles, (2) a new aerosol nucleation scheme that takes into account the charged nucleation caused by cosmic rays, and (3) a parameterization scheme explicitly describing the conversion of aerosol particles to cloud nuclei. In this work, simulations performed with the old and new model versions are evaluated against some measurements reported in recent years. The focus is on the aerosol size distribution in the troposphere. Results show that modifications in the parameterizations have led to significant changes in the simulated aerosol concentrations. Vertical profiles of the total particle number concentration (diameter > 3 nm) compiled by Clarke et al. (2002) suggest that, over the Pacific in the upper free troposphere, the tropics are associated with much higher concentrations than the mid-latitude regions. This feature is more reasonably reproduced by the new model version, mainly due to the improved results of the nucleation mode aerosols. In the lower levels (2-5 km above the Earth's surface), the number concentrations of the Aitken mode particles are overestimated compared to both the Pacific data given in Clarke et al. (2002) and the vertical profiles over Europe reported by Petzold et al. (2007). The physical and chemical processes that have led to these changes are identified by sensitivity tests. References: Clarke and Kapustin: A Pacific aerosol survey - part 1: a decade of data on production, transport, evolution and mixing in the troposphere, J. Atmos. Sci., 59, 363-382, 2002. Petzold et al.: Perturbation of the European free troposphere aerosol by North American forest fire plumes during the ICARTT-ITOP experiment in summer 2004, Atmos. Chem. Phys., 7, 5105-5127, 2007
Grieco, Preston W; Frumberg, David B; Weinberg, Maxwell; Pivec, Robert; Naziri, Qais; Uribe, Jaime A
2015-04-01
Numerous suturing techniques have been described to treat Achilles tendon ruptures. No prior studies have evaluated the effect of frayed tendon ends on construct strength and whether repair in frayed tendon allows a less extensile exposure. Forty bovine Achilles tendons were divided into five groups: one control and four experimental. Tendons in the experimental groups were sectioned with their ends frayed longitudinally at 2 mm intervals over 2 cm; control tendons were not frayed. Four-strand Krackow sutures were used for repairs, with 3 loops in the control group, 2 loops in the frayed section for the experimental groups, and varying numbers of loops (1-4) in healthy tendon. Samples were tested in loading cells at 100 N and 190 N for 1000 cycles. Gap width and maximum load to failure were measured. Gapping was tendon (10.9-13.9 mm). Most early catastrophic failures (5/8) occurred in groups with 1-2 loops in healthy tendon. Two failures at 100 N occurred in 1-loop healthy tendons. The fewest failures occurred in the controls (2/8), at 190 N. Suture loops incorporated into frayed tendon portions predisposed repairs to significantly greater gapping and lower maximal failure forces than 4-strand Krackow repairs in unfrayed tendons. We cannot recommend attempting more limited exposures with sutures in frayed tendon, as this may lead to early repair failure. We provide a physiologic model utilizing frayed tendon ends that resembles in vivo Achilles tendon rupture. © The Author(s) 2014.
From Concurrency Models to Numbers
DEFF Research Database (Denmark)
Hermanns, Holger; Zhang, Lijun
2011-01-01
Discrete-state Markov processes are very common models used for performance and dependability evaluation of, for example, distributed information and communication systems. Over the last fifteen years, compositional model construction and model checking algorithms have been studied for these proc...
Directory of Open Access Journals (Sweden)
A. Karppinen
2007-08-01
This study presents an evaluation and modeling exercise of the size-fractionated aerosol particle number concentrations measured near a major road in Helsinki during 23 August–19 September 2003 and 14 January–11 February 2004. The available information also included electronic traffic counts, on-site meteorological measurements, and urban background particle number size distribution measurements. The ultrafine particle (UFP, diameter < 100 nm) number concentrations at the roadside site were approximately an order of magnitude higher than those at the urban background site during daytime and downwind conditions. Both the modal structure analysis of the particle number size distributions and the statistical correlation between the traffic density and the UFP number concentrations indicate that the UFP evidently originated from traffic-related emissions. The modeling exercise addressed the evolution of the particle number size distribution near the road during downwind conditions. The model simulation results revealed that emission factors of aerosol particles evaluated for a given site may not be valid for the same site during a different time period.
Tomas, Jose M.; Hontangas, Pedro M.; Oliver, Amparo
2000-01-01
Assessed two models for confirmatory factor analysis of multitrait-multimethod data through Monte Carlo simulation. The correlated traits-correlated methods (CTCM) and the correlated traits-correlated uniqueness (CTCU) models were compared. Results suggest that CTCU is a good alternative to CTCM in the typical multitrait-multimethod matrix, but…
Eckhard, Timo; Valero, Eva M; Hernández-Andrés, Javier; Heikkinen, Ville
2014-03-01
In this work, we evaluate the conditionally positive definite logarithmic kernel in kernel-based estimation of reflectance spectra. Reflectance spectra are estimated from the responses of a 12-channel multispectral imaging system. We demonstrate the performance of the logarithmic kernel in comparison with the linear and Gaussian kernels using simulated and measured camera responses for the Pantone and HKS color charts. In particular, we focus on estimation-model evaluations in which the selection of model parameters is optimized using a cross-validation technique. In the experiments, the Gaussian and logarithmic kernels outperformed the linear kernel in almost all evaluation cases (training set size, number of response channels) for both sets. Furthermore, the spectral and color estimation accuracies of the Gaussian and logarithmic kernels were similar in several evaluation cases for real and simulated responses. However, the results suggest that for a relatively small training set size, the accuracy of the logarithmic kernel can be markedly lower than that of the Gaussian kernel. Further, we found that the parameter of the logarithmic kernel could be fixed, which simplifies the use of this kernel compared with the Gaussian kernel.
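A minimal sketch of kernel-based spectral estimation with the two kernels compared above (kernel ridge regression on synthetic data). The kernel parameter values, and treating the conditionally positive definite logarithmic kernel with plain ridge regression, are simplifying assumptions, not the paper's exact method:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def log_kernel(A, B, beta=1.0):
    # One common form of the logarithmic kernel: k(x, y) = -log(||x - y||^beta + 1)
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
    return -np.log(d ** beta + 1.0)

def kernel_estimate(K_train, Y_train, K_test, lam=1e-3):
    # Ridge solution in kernel space: w = (K + lam*I)^-1 Y, Y_hat = K_test @ w
    n = K_train.shape[0]
    w = np.linalg.solve(K_train + lam * np.eye(n), Y_train)
    return K_test @ w

# Toy data: 12-channel "camera responses" -> 31-band "reflectance spectra"
rng = np.random.default_rng(0)
X = rng.random((40, 12))
Y = rng.random((40, 31))
Kg, Kl = gaussian_kernel(X, X), log_kernel(X, X)
Y_hat_gauss = kernel_estimate(Kg, Y, Kg)
Y_hat_log = kernel_estimate(Kl, Y, Kl)
print(Y_hat_gauss.shape)  # (40, 31)
```

In practice, sigma, beta, and lam would be chosen by the cross-validation procedure the abstract refers to.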
Modelling in environments without numbers
Grigoraş, D.R.; Hoede, C.
2008-01-01
In order to study how students are handling modelling situations, we address the type of tasks without an obvious mathematical character. The mathematical elements are somehow hidden and are to be elaborated by the students, if their solving strategy goes in that direction. The main reason why we el
Evaluating Number Sense in Workforce Students
Steinke, Dorothea A.
2015-01-01
Earlier institution-sponsored research revealed that about 20% of students in community college basic math and pre-algebra programs lacked a sense of part-whole relationships with whole numbers. Using the same tool with a group of 86 workforce students, about 75% placed five whole numbers on an empty number line in a way that indicated lack of…
Three probes for diagnosing photochemical dynamics are presented and applied to specialized ambient surface-level observations and to a numerical photochemical model to better understand rates of production and other process information in the atmosphere and in the model. Howeve...
Lepton number violation in 331 models
Fonseca, Renato M
2016-01-01
Different models based on the extended $SU(3)_{C}\times SU(3)_{L}\times U(1)_{X}$ (331) gauge group have been proposed over the past four decades. Yet, despite being an active research topic, the status of lepton number in 331 models has not been fully addressed in the literature, and furthermore many of the original proposals cannot explain the observed neutrino masses. In this paper we review the basic features of various 331 models, focusing on potential sources of lepton number violation. We then describe different modifications which can be made to the original models in order to accommodate neutrino (and charged lepton) masses.
Lepton number violation in 331 models
Fonseca, Renato M.; Hirsch, Martin
2016-12-01
Different models based on the extended SU(3)_C × SU(3)_L × U(1)_X (331) gauge group have been proposed over the past four decades. Yet, despite being an active research topic, the status of lepton number in 331 models has not been fully addressed in the literature, and furthermore many of the original proposals cannot explain the observed neutrino masses. In this paper we review the basic features of various 331 models, focusing on potential sources of lepton number violation. We then describe different modifications which can be made to the original models in order to accommodate neutrino (and charged lepton) masses.
Nowcasting sunshine number using logistic modeling
Brabec, Marek; Badescu, Viorel; Paulescu, Marius
2013-04-01
In this paper, we present a formalized approach to the statistical modeling of the sunshine number, a binary indicator of whether the Sun is covered by clouds, introduced previously by Badescu (Theor Appl Climatol 72:127-136, 2002). Our statistical approach is based on a Markov chain and logistic regression and yields fully specified probability models that are relatively easily identified (and their unknown parameters estimated) from a set of empirical data (observed sunshine number and sunshine stability number series). We discuss the general structure of the model and its advantages, demonstrate its performance on real data, and compare its results to a classical ARIMA approach as a competitor. Since the model parameters have a clear interpretation, we also illustrate how, e.g., their inter-seasonal stability can be tested. We conclude with an outlook on future developments oriented toward models allowing a practically desirable smooth transition between data observed at different frequencies, and with a short discussion of the technical problems that such a goal brings.
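The Markov chain backbone of the model described above can be sketched by estimating transition probabilities of the binary sunshine number via maximum likelihood (simple counting). The data, function name, and 0/1 coding are illustrative; the full model in the paper additionally uses logistic regression to let these probabilities depend on covariates:

```python
import numpy as np

def fit_markov_ssn(s):
    """MLE of first-order Markov transition probabilities for a binary
    sunshine-number series s (the 0/1 coding here is illustrative)."""
    s = np.asarray(s)
    prev, curr = s[:-1], s[1:]
    P = np.zeros((2, 2))
    for i in (0, 1):
        mask = prev == i
        # P[i, j] = Pr(s_t = j | s_{t-1} = i), estimated by counting
        P[i, 1] = curr[mask].mean() if mask.any() else 0.5
        P[i, 0] = 1.0 - P[i, 1]
    return P

s = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]   # made-up observations
P = fit_markov_ssn(s)
print(P[1, 1])  # persistence of state 1; prints 0.5 for this series
```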
QSPR Models for Octane Number Prediction
Directory of Open Access Journals (Sweden)
Jabir H. Al-Fahemi
2014-01-01
Quantitative structure-property relationship (QSPR) modeling is performed as a means to predict the octane number of hydrocarbons by correlating the property with parameters calculated from molecular structure; such parameters are the molecular mass M, hydration energy EH, boiling point BP, octanol/water distribution coefficient logP, molar refractivity MR, critical pressure CP, critical volume CV, and critical temperature CT. Principal component analysis (PCA) and the multiple linear regression technique (MLR) were performed to examine the relationship between these parameters and the octane number of hydrocarbons. The results of the PCA explain the interrelationships between the octane number and the different variables. Correlation coefficients were calculated using M.S. Excel to examine the relationship between the parameters and the octane number. The data set was split into a training set of 40 hydrocarbons and a validation set of 25 hydrocarbons. The linear relationship between the selected descriptors and the octane number has a coefficient of determination R² = 0.932, statistical significance F = 53.21, and standard error s = 7.7. The obtained QSPR model was applied to the validation set of octane numbers for hydrocarbons, giving R²_CV = 0.942 and s = 6.328.
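As an illustration of the MLR step of such a QSPR workflow, the following sketch fits an ordinary least squares model to a made-up descriptor matrix and computes R². The descriptor values are synthetic, not the hydrocarbon data used in the paper:

```python
import numpy as np

# Hypothetical descriptor matrix: columns stand in for M, EH, BP,
# logP, MR, CP, CV and CT (synthetic values, for illustration only)
rng = np.random.default_rng(1)
X = rng.random((40, 8))                            # "training set" of 40 compounds
w_true = rng.random(8)
y = X @ w_true + 0.01 * rng.standard_normal(40)    # synthetic "octane numbers"

# Multiple linear regression with intercept, via least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Coefficient of determination R^2, the statistic quoted in the abstract
resid = y - A @ coef
r2 = 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(r2 > 0.9)  # prints True for this low-noise synthetic data
```

A validation set would be scored with the same `coef` to obtain the cross-validated R² the abstract quotes.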
Stochastic modeling of sunshine number data
Energy Technology Data Exchange (ETDEWEB)
Brabec, Marek, E-mail: mbrabec@cs.cas.cz [Department of Nonlinear Modeling, Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodarenskou vezi 2, 182 07 Prague 8 (Czech Republic); Paulescu, Marius [Physics Department, West University of Timisoara, V. Parvan 4, 300223 Timisoara (Romania); Badescu, Viorel [Candida Oancea Institute, Polytechnic University of Bucharest, Spl. Independentei 313, 060042 Bucharest (Romania)
2013-11-13
In this paper, we present a unified statistical modeling framework for the estimation and forecasting of sunshine number (SSN) data. The sunshine number was proposed earlier to describe sunshine time series in qualitative terms (Theor Appl Climatol 72 (2002) 127-136) and has since been shown to be useful not only for theoretical purposes but also for practical considerations, e.g. those related to the development of photovoltaic energy production. Statistical modeling and prediction of SSN as a binary time series has been a challenging problem, however. Our statistical model for SSN time series is based on an underlying stochastic process formulation of Markov chain type. We show how its transition probabilities can be efficiently estimated within a logistic regression framework. In fact, our logistic Markovian model can be fitted relatively easily via a maximum likelihood approach. This is optimal in many respects, and it also enables us to use formalized statistical inference theory to obtain not only point estimates of the transition probabilities and their functions of interest, but also the related uncertainties, as well as to test various hypotheses of practical interest. It is straightforward to deal with non-homogeneous transition probabilities in this framework. Very importantly, from both physical and practical points of view, the logistic Markov model class allows us to test hypotheses about how SSN depends on various external covariates (e.g. elevation angle, solar time) and about details of the dynamic model (order and functional shape of the Markov kernel, etc.). Therefore, using the generalized additive model (GAM) approach, we can fit and compare models of various complexity while keeping the physical interpretation of the statistical model and its parts. After introducing the Markovian model and a general approach for the identification of its parameters, we illustrate its use and performance on high-resolution SSN data from the Solar
Transcriptional regulation by the numbers: models.
Bintu, Lacramioara; Buchler, Nicolas E; Garcia, Hernan G; Gerland, Ulrich; Hwa, Terence; Kondev, Jané; Phillips, Rob
2005-04-01
The expression of genes is regularly characterized with respect to how much, how fast, when and where. Such quantitative data demands quantitative models. Thermodynamic models are based on the assumption that the level of gene expression is proportional to the equilibrium probability that RNA polymerase (RNAP) is bound to the promoter of interest. Statistical mechanics provides a framework for computing these probabilities. Within this framework, interactions of activators, repressors, helper molecules and RNAP are described by a single function, the "regulation factor". This analysis culminates in an expression for the probability of RNA polymerase binding at the promoter of interest as a function of the number of regulatory proteins in the cell.
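The core quantity of these thermodynamic models can be written down in a few lines. The sketch below computes the RNAP promoter-occupancy probability with a regulation factor; the numeric inputs are illustrative order-of-magnitude values, not measurements from the paper:

```python
import math

def p_bound(P, N_NS, d_eps_kt, F_reg=1.0):
    """Probability that RNAP occupies the promoter in the simple
    thermodynamic model: p = x / (1 + x), where
    x = (P / N_NS) * exp(-d_eps_kt) * F_reg.
    P: number of polymerases, N_NS: non-specific genomic sites,
    d_eps_kt: specific-vs-nonspecific binding energy in kT units,
    F_reg: regulation factor (> 1 for activation, < 1 for repression)."""
    x = (P / N_NS) * math.exp(-d_eps_kt) * F_reg
    return x / (1.0 + x)

# Illustrative numbers: ~1000 polymerases, ~5e6 non-specific sites,
# promoter binding energy -5 kT
base = p_bound(1000, 5e6, -5.0)
activated = p_bound(1000, 5e6, -5.0, F_reg=10.0)
print(activated > base)  # prints True: activation raises occupancy
```

Activators, repressors, and helper molecules enter only through F_reg, which is the point of the "regulation factor" formulation described above.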
Microcomputers and Evaluation. Evaluation Guides: Guide Number 1.
Gray, Peter J.
The potential uses of microcomputers in evaluation research are discussed in this pamphlet. At the beginning, a matrix is provided showing the relationship between the steps in the evaluation research process and common types of computer software. Thereafter, the guide is organized sequentially around the evaluation research activities that are…
Microcomputers: Software Evaluation. Evaluation Guides. Guide Number 17.
Gray, Peter J.
This guide discusses three critical steps in selecting microcomputer software and hardware: setting the context, software evaluation, and managing microcomputer use. Specific topics addressed include: (1) conducting an informal task analysis to determine how the potential user's time is spent; (2) identifying tasks amenable to computerization and…
Evaluation of a number skills development programme | Pietersen ...
African Journals Online (AJOL)
Evaluation of a number skills development programme. ... It was concluded that the use of concrete educational material should be central in the ... in the use of educational equipment and also to create an optimal learning environment.
Modeling the number of car theft using Poisson regression
Zulkifli, Malina; Ling, Agnes Beh Yen; Kasim, Maznah Mat; Ismail, Noriszura
2016-10-01
Regression analysis is among the most popular statistical methods used to express the relationship between a response variable and covariates. The aim of this paper is to evaluate the factors that influence the number of car thefts using a Poisson regression model. The paper focuses on the number of car thefts that occurred in districts of Peninsular Malaysia. Two groups of factors have been considered, namely district descriptive factors and socio-demographic factors. The results of the study showed that Bumiputera composition, Chinese composition, other ethnic composition, foreign migration, number of residents aged 25 to 64, number of employed persons, and number of unemployed persons are the factors that most influence car theft cases. This information is very useful for law enforcement departments, insurance companies, and car owners seeking to reduce and limit car theft cases in Peninsular Malaysia.
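A Poisson regression of count data like the district-level theft counts above can be sketched with iteratively reweighted least squares. The single covariate and its coefficients are synthetic, not the study's data:

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    """Poisson regression with log link, fitted by iteratively
    reweighted least squares (Newton's method). X must include
    an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                        # Var(y_i) = mu_i under Poisson
        z = X @ beta + (y - mu) / mu  # working response
        XtW = X.T * W
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# Synthetic districts: intercept plus one scaled covariate
# (e.g. unemployment), true coefficients 0.5 and 1.2
rng = np.random.default_rng(2)
x = rng.random(200)
X = np.column_stack([np.ones(200), x])
y = rng.poisson(np.exp(0.5 + 1.2 * x))
beta = poisson_irls(X, y)
print(beta)  # should recover roughly the true coefficients
```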
Modeling the Concept of Number: What are the Alternatives?
Hernandez, Norma G.
1985-01-01
The use of a variety of models to develop number concepts is advocated. Four models are discussed, with illustrations: the cardinal number of a set, Cuisenaire rods, the number line, and the Papy Minicomputer. (MNS)
Pragmatic geometric model evaluation
Pamer, Robert
2015-04-01
Quantification of subsurface model reliability is mathematically and technically demanding, as there are many different sources of uncertainty and some of the factors can be assessed only subjectively. For many practical applications in industry or risk assessment (e.g. geothermal drilling), a quantitative estimation of possible geometric variations in depth units is preferred over relative numbers because of cost calculations for different scenarios. The talk gives an overview of several factors that affect the geometry of structural subsurface models that are based upon typical geological survey organization (GSO) data such as geological maps, borehole data, and the conceptually driven construction of subsurface elements (e.g. fault networks). Within the context of the trans-European project "GeoMol", uncertainty analysis has to be very pragmatic, also because of differences in data rights, data policies, and modelling software between the project partners. In a case study, a two-step evaluation methodology for geometric subsurface model uncertainty is being developed. In a first step, several models of the same volume of interest have been calculated by successively omitting more and more input data types (seismic constraints, fault network, outcrop data). The positions of the various horizon surfaces are then compared. The procedure is equivalent to comparing data of various levels of detail and therefore structural complexity. This gives a measure of the structural significance of each data set in space, and as a consequence areas of geometric complexity are identified. These areas are usually very data-sensitive, hence geometric variability between individual data points in these areas is higher than in areas of low structural complexity. Instead of calculating a multitude of different models by varying some input data or parameters, as is done in Monte Carlo simulations, the aim of the second step of the evaluation procedure (which is part of the ongoing work) is to
DEFF Research Database (Denmark)
Borlund, Pia
2003-01-01
An alternative approach to the evaluation of interactive information retrieval (IIR) systems, referred to as the IIR evaluation model, is proposed. The model provides a framework for the collection and analysis of IR interaction data. The aim of the model is two-fold: 1) to facilitate the evaluation of IIR systems as realistically as possible with reference to actual information searching and retrieval processes, though still in a relatively controlled evaluation environment; and 2) to calculate the IIR system performance taking into account the non-binary nature of the assigned relevance assessments. The IIR evaluation model is presented as an alternative to the system-driven Cranfield model (Cleverdon, Mills & Keen, 1966; Cleverdon & Keen, 1966), which is still the dominant approach to the evaluation of IR and IIR systems. Key elements of the IIR evaluation model are the use of realistic...
Evaluation of aerosol number concentrations in NorESM with improved nucleation parameterisation
Directory of Open Access Journals (Sweden)
R. Makkonen
2013-10-01
The Norwegian Earth System Model (NorESM) is evaluated against atmospheric observations of aerosol number concentrations. The model is extended to include an explicit mechanism for new particle formation, and the secondary organic aerosol (SOA) formation from biogenic precursors is revised. Several model experiments are conducted to study the sensitivity of simulated number concentrations to nucleation, SOA formation, the black carbon size distribution, and model meteorology. Comparison against 60 measurement sites reveals that the model with the improved nucleation and SOA scheme performs well, with a correlation coefficient R² = 0.41 calculated against monthly mean observed aerosol number concentrations and a number concentration bias of −6%. NorESM generally overestimates the amplitude of the seasonal cycle, possibly due to too high a sensitivity to biogenic precursors. Simulated vertical profiles are also evaluated against 12 flight campaigns.
Toward a Model Framework of Generalized Parallel Componential Processing of Multi-Symbol Numbers
Huber, Stefan; Cornelsen, Sonja; Moeller, Korbinian; Nuerk, Hans-Christoph
2015-01-01
In this article, we propose and evaluate a new model framework of parallel componential multi-symbol number processing, generalizing the idea of parallel componential processing of multi-digit numbers to the case of negative numbers by considering the polarity signs similar to single digits. In a first step, we evaluated this account by defining…
For prediction of elder survival by a Gompertz model, number dead is preferable to number alive.
Easton, Dexter M; Hirsch, Henry R
2008-12-01
The standard Gompertz equation for human survival fits very poorly the survival data of the very old (age 85 and above), who appear to survive better than predicted. An alternative Gompertz model based on the number of individuals who have died, rather than the number that are alive, at each age, tracks the data more accurately. The alternative model is based on the same differential equation as in the usual Gompertz model. The standard model describes the accelerated exponential decay of the number alive, whereas the alternative, heretofore unutilized model describes the decelerated exponential growth of the number dead. The alternative model is complementary to the standard and, together, the two Gompertz formulations allow accurate prediction of survival of the older as well as the younger mature members of the population.
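The two complementary Gompertz formulations contrasted above can be written out directly. The parameter values here are illustrative, not fitted to any survival data, and the paper's alternative model fits Gompertz growth to the dead count itself rather than deriving it from the alive curve:

```python
import numpy as np

def gompertz_alive(t, a, b):
    """Standard Gompertz survival: fraction alive at age t when the
    mortality rate grows as a * exp(b * t) -- accelerated exponential
    decay of the number alive."""
    return np.exp(-(a / b) * (np.exp(b * t) - 1.0))

def gompertz_dead(t, a, b):
    """Complementary view of the same dynamics: decelerated exponential
    growth of the fraction dead."""
    return 1.0 - gompertz_alive(t, a, b)

t = np.linspace(0.0, 100.0, 101)
alive = gompertz_alive(t, a=1e-4, b=0.085)   # made-up parameters
dead = gompertz_dead(t, a=1e-4, b=0.085)
# In this closed form the two curves sum to 1 exactly; the paper's point
# is that *fitting* the dead-count curve tracks data above age 85 better.
print(alive[0], dead[0])  # prints 1.0 0.0
```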
Introducing Program Evaluation Models
Directory of Open Access Journals (Sweden)
Raluca GÂRBOAN
2008-02-01
Full Text Available Program and project evaluation models can be extremely useful in project planning and management. The aim is to ask the right questions as early as possible, in order to detect in time, and deal with, unwanted program effects, as well as to encourage the positive elements of the project impact. In short, different evaluation models are used in order to minimize losses and maximize the benefits of interventions upon small or large social groups. This article introduces some of the most recently used evaluation models.
Numerical simulation of LBGK model for high Reynolds number flow
Institute of Scientific and Technical Information of China (English)
Zhou Xiao-Yang; Shi Bao-Chang; Wang Neng-Chao
2004-01-01
A principle for selecting the relaxation parameter was proposed in order to probe the limiting computational capability of the incompressible LBGK models developed by Guo ZL (Guo model) and He SY (He model) for high-Reynolds-number flow. For the two-dimensional driven cavity flow problem, the highest Reynolds numbers covered by the Guo and He models are in the ranges 58000-52900 and 28000-29000, respectively, at a Mach number of 0.3 and a lattice spacing of 1/256. The simulation results also show that the Guo model has stronger robustness due to its higher accuracy.
Evaluation of phase separator number in hydrodesulfurization (HDS) unit
Jayanti, A. D.; Indarto, A.
2016-11-01
The removal of acid gases such as H2S in the natural gas processing industry is required in order to meet sales gas specifications. Hydrodesulfurization (HDS) is one of the refinery processes dedicated to reducing sulphur. In an HDS unit, the phase separator plays an important role in removing H2S from hydrocarbons, operating at a certain pressure and temperature. The number of separators in the system is optimized and then evaluated for performance and economics. The evaluation shows that all configurations were able to meet the H2S specification of the desired product. However, the one-separator system resulted in the highest capital and operational costs. The two-separator system showed the best performance in terms of energy efficiency, with the lowest capital and operating costs, and is therefore recommended as a reference for H2S removal from natural gas in the HDS unit.
Prediction of cloud droplet number in a general circulation model
Energy Technology Data Exchange (ETDEWEB)
Ghan, S.J.; Leung, L.R. [Pacific Northwest National Lab., Richland, WA (United States)]
1996-04-01
We have applied the Colorado State University Regional Atmospheric Modeling System (RAMS) bulk cloud microphysics parameterization to the treatment of stratiform clouds in the National Center for Atmospheric Research Community Climate Model (CCM2). The RAMS predicts mass concentrations of cloud water, cloud ice, rain and snow, and the number concentration of ice. We have introduced the droplet number conservation equation to predict droplet number and its dependence on aerosols.
Dual Numbers Approach in Multiaxis Machines Error Modeling
Directory of Open Access Journals (Sweden)
Jaroslav Hrdina
2014-01-01
Full Text Available Multiaxis machines error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of such concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers and thus the calculus over the dual numbers is the proper tool for the methodology of multiaxis machines error modeling.
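The algebra of dual numbers underlying this error modeling is small enough to sketch directly: a dual number a + bε with ε² = 0 propagates an exact first derivative through ordinary arithmetic. The minimal class below is an illustrative sketch, not the matrix calculus over Weil algebras used in the paper.

```python
class Dual:
    """Dual number a + b*eps with eps**2 = 0; the eps part carries an
    exact first derivative through arithmetic (minimal sketch)."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.re + o.re, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.re * o.re, self.re * o.eps + self.eps * o.re)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2.0 * x   # f'(x) = 3x^2 + 2

y = f(Dual(2.0, 1.0))            # eps-part seeded with 1.0 yields df/dx
print(y.re, y.eps)               # 12.0 14.0
```

The same mechanism, applied entrywise to transformation matrices over dual numbers, is what lets geometric error terms be read off algebraically.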
Photon Number Conserving Models of H II Bubbles during Reionization
Paranjape, Aseem; Padmanabhan, Hamsa
2015-01-01
Traditional excursion set based models of H II bubble growth during the epoch of reionization are known to violate photon number conservation, in the sense that the mass fraction in ionized bubbles in these models does not equal the ratio of the number of ionizing photons produced by sources and the number of hydrogen atoms in the intergalactic medium. We demonstrate that this problem arises from a fundamental conceptual shortcoming of the excursion set approach (already recognised in the literature on this formalism) which only tracks average mass fractions instead of the exact, stochastic source counts. With this insight, we build an approximately photon number conserving Monte Carlo model of bubble growth based on partitioning regions of dark matter into halos. Our model, which is formally valid for white noise initial conditions (ICs), shows dramatic improvements in photon number conservation, as well as substantial differences in the bubble size distribution, as compared to traditional models. We explore...
CMAQ Model Evaluation Framework
CMAQ is tested to establish the modeling system’s credibility in predicting pollutants such as ozone and particulate matter. Evaluation of CMAQ has been designed to assess the model’s performance for specific time periods and for specific uses.
McCutcheon, Kimble D.; Gordon, Stephen S.; Thompson, Paul A.
1992-07-01
NASA uses the Variable Polarity Plasma Arc Welding (VPPAW) process extensively for fabrication of Space Shuttle External Tanks. This welding process has been in use at NASA since the late 1970s, but the physics of the process had never been satisfactorily modeled and understood. In an attempt to advance the level of understanding of VPPAW, Dr. Arthur C. Nunes, Jr. (NASA) developed a mathematical model of the process. The work described in this report evaluated and used two versions (level-0 and level-1) of Dr. Nunes' model, and a model derived by the University of Alabama at Huntsville (UAH) from Dr. Nunes' level-1 model. Two series of VPPAW experiments were done, using over 400 different combinations of welding parameters. Observations were made of VPPAW process behavior as a function of specific welding parameter changes. Data from these weld experiments were used to evaluate and suggest improvements to Dr. Nunes' model. Experimental data and correlations with the model were used to develop a multi-variable control algorithm for use with a future VPPAW controller. This algorithm is designed to control weld widths (both on the crown and root of the weld) based upon the weld parameters, base metal properties, and real-time observation of the crown width. The algorithm exhibited accuracy comparable to that of the weld width measurements for both aluminum and mild steel welds.
Evaluating Number Sense in Community College Developmental Math Students
Steinke, Dorothea A.
2017-01-01
Community college developmental math students (N = 657) from three math levels were asked to place five whole numbers on a line that had only endpoints 0 and 20 marked. How the students placed the numbers revealed the same three stages of behavior that Steffe and Cobb (1988) documented in determining young children's number sense. 23% of the…
Theoretical models in low-Reynolds-number locomotion
Pak, On Shun
2014-01-01
The locomotion of microorganisms in fluids is ubiquitous and plays an important role in numerous biological processes. In this chapter we present an overview of theoretical modeling for low-Reynolds-number locomotion.
The Influence of Investor Number on a Microscopic Market Model
Hellthaler, T.
The stock market model of Levy, Persky, Solomon is simulated for much larger numbers of investors. While small markets can lead to realistic-looking prices, the resulting prices of large markets oscillate smoothly in a semi-regular fashion.
Modelling the dispersion of particle numbers in five European cities
Kukkonen, J.; Karl, M.; Keuken, M.P.; Denier van der Gon, H.A.C.; Denby, B.R.; Singh, V.; Douros, J.; Manders, A.M.M.; Samaras, Z.; Moussiopoulos, N.; Jonkers, S.; Aarnio, M.; Karppinen, A.; Kangas, L.; Lutzenkirchen, S.; Petaja, T.; Vouitsis, I.; Sokhi, R.S.
2016-01-01
We present an overview of the modelling of particle number concentrations (PNCs) in five major European cities, namely Helsinki, Oslo, London, Rotterdam and Athens, in 2008. Novel emission inventories of particle numbers have been compiled on both urban and European scales. We used atmospheric dispersion…
Major revision of sunspot number: implication for the ionosphere models
Gulyaeva, Tamara
2016-07-01
Recently, on 1 July 2015, a major revision of the historical sunspot number series was carried out, as discussed in (Clette et al., Revisiting the Sunspot Number. A 400-Year Perspective on the Solar Cycle, Space Science Reviews, 186, Issue 1-4, pp. 35-103, 2014). The revised SSN2.0 dataset is provided along with the former SSN1.0 data at http://sidc.oma.be/silso/. The SSN2.0 values exceed the former conventional SSN1.0 data, so that the new SSNs are in many cases greater than the solar radio flux F10.7 values, which poses a problem for the implementation of SSN2.0 as a driver of the International Reference Ionosphere, IRI, its extension to the plasmasphere, IRI-Plas, the NeQuick model, and the Russian Standard Ionosphere, SMI. In particular, the monthly predictions of the F2 layer peak are based on input of the ITU-R (former CCIR) and URSI maps. The CCIR and URSI map coefficients are available for each month of the year, and for two levels of solar activity: low (SSN = 0) and high (SSN = 100). SSN is the monthly smoothed sunspot number from the SSN1.0 data set, used as an index of the level of solar activity. For every SSN different from 0 or 100, the critical frequency foF2 and the M3000F2 radio propagation factor used to produce the peak height hmF2 may be evaluated by interpolation. The ionospheric proxies of solar activity, the IG12 index and the Global Electron Content GEC12 index, which drive the ionospheric models, are also calibrated with the former SSN1.0 data. The paper presents a solar proxy intended to calibrate the SSN2.0 data set to fit the F10.7 solar radio flux and/or the SSN1.0 data series. This study is partly supported by TUBITAK EEEAG 115E915.
Econometric Model Evaluation: Implications for Program Evaluation.
Ridge, Richard S.; And Others
1990-01-01
The problem associated with evaluating an econometric model using values outside those used in the model estimation is illustrated in the evaluations of a residential load management program during each of two successive years. Analysis reveals that attention must be paid to this problem. (Author/TJH)
The Performance of Discrete Models of Low Reynolds Number Swimmers
Wang, Qixuan
2015-01-01
Swimming by shape changes at low Reynolds number is widely used in biology, and understanding how the efficiency of movement depends on the geometric pattern of shape changes is important both for understanding the swimming of microorganisms and for designing low-Reynolds-number swimming models. The simplest models of shape changes comprise a series of linked spheres that can change their separation and/or their size. Herein we compare the efficiency of three models in which these modes are used in different ways.
The Evaluation Exchange. Volume XV Number 1. Spring 2010
Coffman, Julia, Ed.; Harris, Erin, Ed.
2010-01-01
This issue of The Evaluation Exchange explores the promising practices and challenges associated with taking an enterprise to scale, along with the role that evaluation can and should play in that process. Surprisingly few examples exist of nonprofit efforts that have scaled up and achieved lasting success. A program or approach may be strong…
Numerical computations and mathematical modelling with infinite and infinitesimal numbers
Sergeyev, Yaroslav D
2012-01-01
Traditional computers work with finite numbers. Situations where the usage of infinite or infinitesimal quantities is required are studied mainly theoretically. In this paper, a recently introduced computational methodology (not related to non-standard analysis) is used to work with finite, infinite, and infinitesimal numbers numerically. This can be done on a new kind of computer, the Infinity Computer, able to work with all these types of numbers. The new computational tools both make it possible to execute computations of a new type and open new horizons for creating new mathematical models in which a computational usage of infinite and/or infinitesimal numbers can be useful. A number of numerical examples showing the potential of the new approach, dealing with divergent series, limits, probability theory, linear algebra, and the calculation of volumes of objects consisting of parts of different dimensions, are given.
Fuzzy Model for Trust Evaluation
Institute of Scientific and Technical Information of China (English)
Zhang Shibin; He Dake
2006-01-01
Based on fuzzy set theory, a fuzzy trust model is established by using membership functions to describe the fuzziness of trust. The trust vectors of subjective trust are obtained based on a mathematical model of fuzzy synthetic evaluation. Considering the complicated and changeable relationships between various subjects, the multi-level mathematical model of fuzzy synthetic evaluation is introduced. An example of a two-level fuzzy synthetic evaluation model confirms the feasibility of the multi-level fuzzy synthetic evaluation model. The proposed fuzzy model for trust evaluation may provide a promising method for research on trust models in open networks.
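A single level of fuzzy synthetic evaluation can be sketched as a weight vector composed with a membership matrix, B = W ∘ R, here using the common weighted-average operator; the trust factors, weights, and membership degrees below are hypothetical.

```python
# Single-level fuzzy synthetic evaluation, B = W . R, with the
# weighted-average operator. All numbers below are hypothetical.
weights = [0.5, 0.3, 0.2]          # importance of three trust factors
# rows: factors; columns: membership in {high, medium, low} trust
R = [[0.7, 0.2, 0.1],
     [0.4, 0.4, 0.2],
     [0.6, 0.3, 0.1]]

B = [sum(w * row[j] for w, row in zip(weights, R)) for j in range(3)]
verdict = ["high", "medium", "low"][B.index(max(B))]
print([round(b, 2) for b in B], verdict)   # [0.59, 0.28, 0.13] high
```

A two-level model applies the same composition first within each factor group and then across the groups' aggregate vectors.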
On Evaluation of Overlap Integrals with Noninteger Principal Quantum Numbers
Institute of Scientific and Technical Information of China (English)
I.I.Guseinov; B.A.Mamedov
2004-01-01
By use of complete orthonormal sets of ψα exponential-type orbitals (ψα-ETOs,α=1,0,-1,-2,...) the series expansion formulas for the noninteger n Slater-type orbitals (NISTOs) in terms of integer n Slater-type orbitals (ISTOs) are derived. These formulas enable us to express the overlap integrals with NISTOs through the overlap integrals over ISTOs with the same and different screening constants. By calculating concrete cases the convergence of the series for arbitrary values of noninteger principal quantum numbers and screening constants of NISTOs and internuclear distances is tested. The accuracy of the results is quite high for quantum numbers, screening constants and location of STOs.
Evaluation Theory, Models, and Applications
Stufflebeam, Daniel L.; Shinkfield, Anthony J.
2007-01-01
"Evaluation Theory, Models, and Applications" is designed for evaluators and students who need to develop a commanding knowledge of the evaluation field: its history, theory and standards, models and approaches, procedures, and inclusion of personnel as well as program evaluation. This important book shows how to choose from a growing…
Microcomputers: Instrument Generation Software. Evaluation Guides. Guide Number 11.
Gray, Peter J.
Designed to assist evaluators in selecting the appropriate software for the generation of various data collection instruments, this guide discusses such key program characteristics as text entry, item storage and manipulation, item retrieval, and printing. Some characteristics of a good instrument generation program are discussed; these include…
Microcomputers: Communication Software. Evaluation Guides. Guide Number 13.
Gray, Peter J.
This guide discusses four types of microcomputer-based communication programs that could prove useful to evaluators: (1) the direct communication of information generated by one computer to another computer; (2) using the microcomputer as a terminal to a mainframe computer to input, direct the analysis of, and/or output data using a statistical…
Evaluating Interagency Collaborations. State Series Paper (Number 2).
McLaughlin, John A.; Covert, Robert C.
A procedure is outlined for understanding and evaluating interagency collaboration. Five steps are addressed: (1) understanding the context of an interagency collaboration (stages of development, communication channels used); (2) verifying the need for interagency collaboration (needs for increased client access to services and reduced duplication…
Institute of Scientific and Technical Information of China (English)
李超; 潘琦; 徐锡武; 陈彤
2016-01-01
Objectives: To explore the use of the time-series functions in the Eviews software to construct a model for predicting the number of discharged hospital patients, and to select a method that accurately forecasts future discharge numbers. Methods: Eviews 6.0 was used to apply logarithmic and first-order difference transformations to the discharge-count time series, which exhibits both a trend and seasonal periodicity, so that the series became stationary. Time-series models were then constructed, and a set of evaluation indices together with residual white-noise tests was used to compare the models, select the optimal one, and assess its predictive value. Results: The seasonal multiplicative ARIMA model predicted discharge numbers more accurately than the other models; comparing the actual and predicted numbers of discharged patients for 2014 gave a mean prediction error of 11.3%. Conclusions: A comprehensive comparison of several forecasting methods shows that the ARIMA(p,d,q)(P,D,Q)S model predicts hospital discharge numbers more accurately, and helps hospital management shift from reactive, after-the-fact management toward a proactive, before-the-fact management model.
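The stationarity step described in this abstract (a logarithm followed by a first-order difference) can be sketched with the standard library alone; the monthly discharge counts below are hypothetical, and a real analysis would go on to fit the seasonal ARIMA model with a statistics package.

```python
import math

# Hypothetical monthly discharge counts with a growth trend; log plus
# first difference is the transformation used to make such a trending
# series stationary before ARIMA estimation.
counts = [1200, 1260, 1330, 1410, 1480, 1560,
          1650, 1730, 1820, 1920, 2010, 2120]

logs = [math.log(c) for c in counts]
diffs = [b - a for a, b in zip(logs, logs[1:])]   # d = 1 on the log scale

# After differencing, the series fluctuates around a roughly constant
# level (approximately the monthly growth rate) instead of trending up.
print(len(diffs), round(sum(diffs) / len(diffs), 4))
```

On the differenced log series, an ARIMA(p,d,q)(P,D,Q)S specification then only needs to model the remaining autocorrelation and seasonal pattern.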
Application of Z-Number Based Modeling in Psychological Research
Directory of Open Access Journals (Sweden)
Rafik Aliev
2015-01-01
Full Text Available Pilates exercises have been shown to have a beneficial impact on the physical, physiological, and mental characteristics of human beings. In this paper, a Z-number based fuzzy approach is applied to model the effect of Pilates exercises on motivation, attention, anxiety, and educational achievement. The psychological parameters are measured using internationally recognized instruments: the Academic Motivation Scale (AMS), the d2 Test of Attention, and Spielberger's Anxiety Test, completed by students. The GPA of students was used as the measure of educational achievement. Application of Z-information modeling allows us to increase the precision and reliability of data processing results in the presence of uncertainty in the input data created from the completed questionnaires. The basic steps of Z-number based modeling, with numerical solutions, are presented.
Baryon number fluctuations in quasi-particle model
Energy Technology Data Exchange (ETDEWEB)
Zhao, Ameng [Southeast University Chengxian College, Department of Foundation, Nanjing (China); Luo, Xiaofeng [Central China Normal University, Key Laboratory of Quark and Lepton Physics (MOE), Institute of Particle Physics, Wuhan (China); Zong, Hongshi [Nanjing University, Department of Physics, Nanjing (China); Joint Center for Particle, Nuclear Physics and Cosmology, Nanjing (China); Institute of Theoretical Physics, CAS, State Key Laboratory of Theoretical Physics, Beijing (China)
2017-04-15
Baryon number fluctuations are sensitive to the QCD phase transition and the QCD critical point. According to the Feynman rules of finite-temperature field theory, we calculated various order moments and cumulants of the baryon number distributions in the quasi-particle model of the quark-gluon plasma. Furthermore, we compared our results with the experimental data measured by the STAR experiment at RHIC. It is found that the experimental data can be well described by the model for colliding energies above 30 GeV but show large discrepancies at low energies. This puts a new constraint on the qQGP model and also provides a baseline for the QCD critical point search in heavy-ion collisions at low energies. (orig.)
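The cumulants compared with STAR data are simple functions of central moments (C₁ = mean, C₂ = μ₂, C₃ = μ₃, C₄ = μ₄ − 3μ₂²), and ratios such as C₃/C₂ and C₄/C₂ cancel volume effects. A sketch with a hypothetical event-by-event histogram:

```python
# Sample cumulants C1..C4 of an event-by-event (net-)baryon-number
# distribution; the event counts below are hypothetical.
counts = [0]*2 + [1]*8 + [2]*20 + [3]*30 + [4]*22 + [5]*12 + [6]*6

n = len(counts)
mean = sum(counts) / n
mu = lambda k: sum((x - mean) ** k for x in counts) / n   # central moments

C1 = mean
C2 = mu(2)
C3 = mu(3)
C4 = mu(4) - 3 * mu(2) ** 2
# C3/C2 and C4/C2 are the volume-independent ratios compared
# between model calculations and experiment.
print(round(C1, 3), round(C2, 3), round(C3 / C2, 3), round(C4 / C2, 3))
```

In the quasi-particle model the same cumulants are obtained analytically as derivatives of the pressure with respect to the baryon chemical potential rather than from samples.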
Baryon Number Fluctuations in Quasi-particle Model
Zhao, Ameng; Zong, Hongshi
2016-01-01
Baryon number fluctuations are sensitive to the QCD phase transition and the QCD critical point. According to the Feynman rules of finite-temperature field theory, we calculated various order moments and cumulants of the baryon number distributions in the quasi-particle model of the quark-gluon plasma. Furthermore, we compared our results with the experimental data measured by the STAR experiment at RHIC. It is found that the experimental data can be well described by the model for colliding energies above 30 GeV but show large discrepancies at low energies. This puts a new constraint on the qQGP model and also provides a baseline for the QCD critical point search in heavy-ion collisions at low energies.
Number of fermion generations from a novel grand unified model
Energy Technology Data Exchange (ETDEWEB)
Byakti, Pritibhajan; Mazumdar, Arindam; Pal, Palash B. [Saha Institute of Nuclear Physics, Kolkata (India); Emmanuel-Costa, David [Universidade de Lisboa, Departamento de Fisica and Centro de Fisica Teorica de Particulas (CFTP), Instituto Superior Tecnico (IST), Lisbon (Portugal)
2014-02-15
Electroweak interactions based on the gauge group SU(3)_L × U(1)_X, coupled to the QCD gauge group SU(3)_c, can predict the number of generations to be a multiple of three. We first try to unify these models within SU(N) groups, using antisymmetric tensor representations only. After examining why these attempts fail, we continue to search for an SU(N) GUT that can explain the number of fermion generations. We show that such a model can be found for N = 9, with fermions in antisymmetric rank-1 and rank-3 representations only, and we examine the constraints on various masses in the model coming from the requirement of unification. (orig.)
Different Higgs models and the number of Higgs particles
Energy Technology Data Exchange (ETDEWEB)
Marek-Crnjac, L. [University of Maribor, Faculty of Mechanical Engineering, Smetanova ulica 17, SI-2000 Maribor (Slovenia)] e-mail: fs.taj06@uni-mb.si
2006-02-01
In this short paper we discuss some interesting Higgs models. It is concluded that the most likely scheme for the Higgs particles consists of five physical Higgs particles: two charged, H⁺ and H⁻, and three neutral, h⁰, H⁰, and A⁰. Furthermore, the most probable total number of elementary particles for each model is calculated [El Naschie MS. Experimental and theoretical arguments for the number and the mass of the Higgs particles. Chaos, Solitons and Fractals 2005;23:1091-8; El Naschie MS. Determining the mass of the Higgs and the electroweak bosons. Chaos, Solitons and Fractals 2005;24:899-905; El Naschie MS. On 366 kissing spheres in 10 dimensions, 528 P-Brane states in 11 dimensions and the 60 elementary particles of the standard model. Chaos, Solitons and Fractals 2005;24:447-57].
Statistical evaluation of PACSTAT random number generation capabilities
Energy Technology Data Exchange (ETDEWEB)
Piepel, G.F.; Toland, M.R.; Harty, H.; Budden, M.J.; Bartley, C.L.
1988-05-01
This report summarizes the work performed in verifying the general purpose Monte Carlo driver-program PACSTAT. The main objective of the work was to verify the performance of PACSTAT's random number generation capabilities. Secondary objectives were to document (using controlled configuration management procedures) changes made in PACSTAT at Pacific Northwest Laboratory, and to assure that PACSTAT input and output files satisfy quality assurance traceability constraints. Upon receipt of the PRIME version of the PACSTAT code from the Basalt Waste Isolation Project, Pacific Northwest Laboratory staff converted the code to run on Digital Equipment Corporation (DEC) VAXs. The modifications to PACSTAT were implemented using the WITNESS configuration management system, with the modifications themselves intended to make the code as portable as possible. Certain modifications were made to make the PACSTAT input and output files conform to quality assurance traceability constraints. 10 refs., 17 figs., 6 tabs.
Supervision in Factor Models Using a Large Number of Predictors
DEFF Research Database (Denmark)
Boldrini, Lorenzo; Hillebrand, Eric Tobias
In this paper we investigate the forecasting performance of a particular factor model (FM) in which the factors are extracted from a large number of predictors. We use a semi-parametric state-space representation of the FM in which the forecast objective, as well as the factors, is included in the state vector. The factors are informed of the forecast target (supervised) through the state-equation dynamics. We propose a way to assess the contribution of the forecast objective to the extracted factors that exploits the Kalman filter recursions. We forecast one target at a time and compare against alternatives such as a standard dynamic factor model with separate forecast and state equations.
Parameterized reduced-order models using hyper-dual numbers.
Energy Technology Data Exchange (ETDEWEB)
Fike, Jeffrey A.; Brake, Matthew Robert
2013-10-01
The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
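The hyper-dual arithmetic used to obtain the parameterization derivatives can be sketched in a few lines: a number x₀ + x₁ε₁ + x₂ε₂ + x₁₂ε₁ε₂ with ε₁² = ε₂² = 0 carries exact first and second derivatives through the multiplication rule. The class below is a minimal illustrative sketch, not the authors' implementation.

```python
class HyperDual:
    """x0 + x1*e1 + x2*e2 + x12*e1*e2 with e1**2 = e2**2 = 0: the e1/e2
    parts carry first derivatives, the e1*e2 part an exact second
    derivative, with no truncation or cancellation error (sketch)."""
    def __init__(self, re, e1=0.0, e2=0.0, e12=0.0):
        self.re, self.e1, self.e2, self.e12 = re, e1, e2, e12
    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.re + o.re, self.e1 + o.e1,
                         self.e2 + o.e2, self.e12 + o.e12)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.re * o.re,
                         self.re * o.e1 + self.e1 * o.re,
                         self.re * o.e2 + self.e2 * o.re,
                         self.re * o.e12 + self.e1 * o.e2
                         + self.e2 * o.e1 + self.e12 * o.re)
    __rmul__ = __mul__

def f(x):
    return x * x * x          # f'(x) = 3x^2, f''(x) = 6x

y = f(HyperDual(2.0, 1.0, 1.0, 0.0))   # seed e1 = e2 = 1 at x = 2
print(y.re, y.e1, y.e12)               # 8.0 12.0 12.0
```

Evaluating the reduced-order model's matrices with hyper-dual inputs in place of plain floats is what yields the exact parameter derivatives needed for the parameterization.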
Low-Reynolds number modelling of flows with transpiration
Hwang, C. B.; Lin, C. A.
2000-03-01
An improved low-Reynolds-number model was adopted to predict the dynamic and thermal fields in flows with transpiration. The performance of the adopted model was first contrasted with direct numerical simulation (DNS) data of channel flow with uniform wall injection and suction. The validity of the present model applied to flows with a high level of transpiration was further examined. To explore the model's performance in complex environments, the model was applied to simulate a transpired developing channel flow. Contrasting the predictions with DNS data and measurements, the results indicated that the present model correctly reproduced the deceleration and acceleration of the flow caused by the injection and suction from the permeable part of the wall. The turbulence structure of transpired flows was also well captured, and the superior performance of the adopted model was reflected by the correctly predicted level of …, with the maximum being located at both the injection and the suction walls. The thermal field predicted by the present model also compared favourably with the DNS data and measurements.
A dynamical phyllotaxis model to determine floral organ number.
Directory of Open Access Journals (Sweden)
Miho S Kitazawa
2015-05-01
Full Text Available How organisms determine particular organ numbers is a fundamental key to the development of precise body structures; however, the developmental mechanisms underlying organ-number determination are unclear. In many eudicot plants, the primordia of sepals and petals (the floral organs) first arise sequentially at the edge of a circular, undifferentiated region called the floral meristem, and later transition into a concentric arrangement called a whorl, which includes four or five organs. The properties controlling the transition to whorls comprising particular numbers of organs are little explored. We propose a development-based model of floral organ-number determination, improving upon earlier models of plant phyllotaxis that assumed two developmental processes: the sequential initiation of primordia in the least crowded space around the meristem and the constant growth of the tip of the stem. By introducing mutual repulsion among primordia into the growth process, we numerically and analytically show that the whorled arrangement emerges spontaneously from the sequential initiation of primordia. Moreover, by allowing the strength of the inhibition exerted by each primordium to decrease as the primordium ages, we show that pentamerous whorls, in which the angular and radial positions of the primordia are consistent with those observed in sepal and petal primordia in Silene coeli-rosa, Caryophyllaceae, become the dominant arrangement. The organ number within the outermost whorl, corresponding to the sepals, takes a value of four or five in a much wider parameter space than that in which it takes a value of six or seven. These results suggest that mutual repulsion among primordia during growth and a temporal decrease in the strength of the inhibition during initiation are required for the development of the tetramerous and pentamerous whorls common in eudicots.
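The initiation rule (a new primordium appears where the summed inhibition from earlier primordia, decaying with age, is smallest) can be sketched as a toy one-dimensional model on the meristem rim; all parameter values here are illustrative, and the sketch omits the radial growth and mutual repulsion of the full model.

```python
import math

def next_angle(existing, decay=0.9, sigma=40.0):
    """Angle (degrees) at which the summed inhibition from earlier
    primordia is smallest; younger primordia inhibit more strongly
    (strength decay**age). All parameters are illustrative."""
    best, best_cost = 0.0, float("inf")
    for a in range(360):
        cost = 0.0
        for age, b in enumerate(reversed(existing)):
            d = min(abs(a - b), 360 - abs(a - b))     # angular distance
            cost += (decay ** age) * math.exp(-(d / sigma) ** 2)
        if cost < best_cost:
            best, best_cost = float(a), cost
    return best

angles = [0.0]
for _ in range(7):
    angles.append(next_angle(angles))
print(angles)   # sequential initiation in the least-inhibited positions
```

With age-dependent decay of the inhibition strength, sequences like this can settle into whorl-like groups; the paper analyses which parameter regions yield tetramerous versus pentamerous whorls.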
Evaluation Model of System Survivability
Institute of Scientific and Technical Information of China (English)
LIU Yuling; PAN Shiying; TIAN Junfeng
2006-01-01
This paper puts forward a survivability evaluation model, SQEM (Survivability Quantitative Evaluation Model), based on a study of the main existing methods. It then defines the measurement factors and analyses survivability mathematically, introducing state-change probability and the idea of setting the weights of the survivability factors dynamically into the evaluation process of SQEM, which improves the accuracy of evaluation. An example is presented to illustrate the way SQEM works, demonstrating the validity and feasibility of the method.
Binary tree models of high-Reynolds-number turbulence
Aurell, Erik; Dormy, Emmanuel; Frick, Peter
1997-08-01
We consider hierarchical models for turbulence that are simple generalizations of the standard Gledzer-Ohkitani-Yamada shell models (E. B. Gledzer, Dokl. Akad. Nauk SSSR 209, 5 (1973) [Sov. Phys. Dokl. 18, 216 (1973)]; M. Yamada and K. Ohkitani, J. Phys. Soc. Jpn. 56, 4210 (1987)). The density of degrees of freedom is constant in wave-number space. Looking only at this behavior and at the quadratic invariants in the inviscid unforced limit, the models can be thought of as systems living naturally in one spatial dimension, but being qualitatively similar to hydrodynamics in two (2D) and three dimensions. We investigated cascade phenomena and intermittency in the different cases. We observed and studied a forward cascade of enstrophy in the 2D case.
Modeling users' activity on twitter networks: validation of Dunbar's number.
Directory of Open Access Journals (Sweden)
Bruno Gonçalves
Full Text Available Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
Modeling users' activity on Twitter networks: validation of Dunbar's number
Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro
2012-02-01
Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the ``economy of attention'' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
Modeling users' activity on twitter networks: validation of Dunbar's number.
Gonçalves, Bruno; Perra, Nicola; Vespignani, Alessandro
2011-01-01
Microblogging and mobile devices appear to augment human social capabilities, which raises the question whether they remove cognitive or biological constraints on human communication. In this paper we analyze a dataset of Twitter conversations collected across six months involving 1.7 million individuals and test the theoretical cognitive limit on the number of stable social relationships known as Dunbar's number. We find that the data are in agreement with Dunbar's result; users can entertain a maximum of 100-200 stable relationships. Thus, the 'economy of attention' is limited in the online world by cognitive and biological constraints as predicted by Dunbar's theory. We propose a simple model for users' behavior that includes finite priority queuing and time resources that reproduces the observed social behavior.
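The "finite priority queuing" ingredient of the proposed user model can be illustrated with a bounded priority structure: an agent maintains at most a fixed number of contacts, and a new contact enters only by displacing the current lowest-priority one. This is a toy sketch, not the authors' implementation; all names are invented.

```python
import heapq

class BoundedContactQueue:
    """Keep at most `capacity` contacts, ranked by priority (min-heap on priority)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # entries are (priority, contact)

    def offer(self, priority, contact):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (priority, contact))
        elif priority > self.heap[0][0]:
            # displace the current lowest-priority contact
            heapq.heapreplace(self.heap, (priority, contact))

    def contacts(self):
        return {c for _, c in self.heap}
```

With a capacity on the order of Dunbar's number, such an agent can only entertain a bounded set of stable relationships no matter how many contacts are offered.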
Local models of stellar convection III: The Strouhal number
Käpylä, P J; Ossendrijver, M; Tuominen, I
2004-01-01
(Abbreviated) We determine the Strouhal number (St), a nondimensional measure of the correlation time, from numerical models of convection. The Strouhal number arises in the mean-field theories of angular momentum transport and dynamos, where its value determines the validity of certain widely used approximations, such as first-order smoothing (FOSA). More specifically, the relevant transport coefficients can be calculated by means of a cumulative series expansion if St < 1 (e.g. Knobloch 1978). We use two independent methods to estimate St. First, we apply the minimal tau-approximation (MTA) to the equation for the time derivative of the Reynolds stress. In this approach the time derivative is essentially replaced by a term containing a relaxation time, which can be interpreted as the correlation time of the turbulence; the turnover time is estimated simply from the energy-carrying scale of the convection and a typical velocity. In the second approach, we determine the correlation an...
DEFF Research Database (Denmark)
Olesen, H. R.
1998-01-01
Proceedings of the Twenty-Second NATO/CCMS International Technical Meeting on Air Pollution Modeling and Its Application, held June 6-10, 1997, in Clermont-Ferrand, France.
Risk Quantification and Evaluation Modelling
Directory of Open Access Journals (Sweden)
Manmohan Singh
2014-07-01
Full Text Available In this paper the authors discuss risk quantification methods and the evaluation of risks and decision parameters used to rank critical items for prioritization in condition-monitoring-based risk and reliability centred maintenance (CBRRCM). As time passes, any equipment or product degrades to lower effectiveness and its rate of failure or malfunctioning increases, thereby lowering reliability. Thus, with the passage of time, or after a number of active tests or periods of work, the reliability of the product or system may fall to a threshold value below which it should not be allowed to dip. Hence, it is necessary to fix a rational basis for determining the appropriate points in the product life cycle at which predictive preventive maintenance may be applied, so that the reliability (the probability of successful functioning) can be enhanced, preferably to its original value, by reducing the failure rate and increasing the mean time between failures. This is very important for defence applications, where reliability is a prime requirement. An attempt is made to develop a mathematical model for assessing risks and ranking them. Based on the likeliness coefficient β1 and risk coefficient β2, ranking of the sub-systems can be modelled and used for CBRRCM. Defence Science Journal, Vol. 64, No. 4, July 2014, pp. 378-384, DOI: http://dx.doi.org/10.14429/dsj.64.6366
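The ranking step can be illustrated with a toy composite score built from a likeliness coefficient β1 and a risk coefficient β2. The linear form, the weight values, and the subsystem data below are invented for illustration and are not the paper's exact model.

```python
def risk_score(likelihood, consequence, beta1=0.6, beta2=0.4):
    """Toy composite score; beta1/beta2 mirror the paper's likeliness/risk coefficients."""
    return beta1 * likelihood + beta2 * consequence

# hypothetical (likelihood, consequence) pairs for three subsystems
subsystems = {"pump": (0.7, 0.9), "valve": (0.2, 0.5), "sensor": (0.9, 0.3)}
ranking = sorted(subsystems, key=lambda s: risk_score(*subsystems[s]), reverse=True)
```

The sorted ranking then tells the maintenance planner which subsystem to monitor first.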
Evaluation of model fit in nonlinear multilevel structural equation modeling
Directory of Open Access Journals (Sweden)
Karin Schermelleh-Engel
2014-03-01
Full Text Available Evaluating model fit in nonlinear multilevel structural equation models (MSEM) presents a challenge as no adequate test statistic is available. Nevertheless, using a product indicator approach a likelihood ratio test for linear models is provided which may also be useful for nonlinear MSEM. The main problem with nonlinear models is that product variables are non-normally distributed. Although robust test statistics have been developed for linear SEM to ensure valid results under the condition of non-normality, they have not yet been investigated for nonlinear MSEM. In a Monte Carlo study, the performance of the robust likelihood ratio test was investigated for models with single-level latent interaction effects using the unconstrained product indicator approach. As overall model fit evaluation has a potential limitation in detecting the lack of fit at a single level even for linear models, level-specific model fit evaluation was also investigated using partially saturated models. Four population models were considered: a model with interaction effects at both levels, an interaction effect at the within-group level, an interaction effect at the between-group level, and a model with no interaction effects at either level. For these models the number of groups, predictor correlation, and model misspecification were varied. The results indicate that the robust test statistic performed sufficiently well. Advantages of level-specific model fit evaluation for the detection of model misfit are demonstrated.
Evaluation of model fit in nonlinear multilevel structural equation modeling.
Schermelleh-Engel, Karin; Kerwer, Martin; Klein, Andreas G
2014-01-01
Evaluating model fit in nonlinear multilevel structural equation models (MSEM) presents a challenge as no adequate test statistic is available. Nevertheless, using a product indicator approach a likelihood ratio test for linear models is provided which may also be useful for nonlinear MSEM. The main problem with nonlinear models is that product variables are non-normally distributed. Although robust test statistics have been developed for linear SEM to ensure valid results under the condition of non-normality, they have not yet been investigated for nonlinear MSEM. In a Monte Carlo study, the performance of the robust likelihood ratio test was investigated for models with single-level latent interaction effects using the unconstrained product indicator approach. As overall model fit evaluation has a potential limitation in detecting the lack of fit at a single level even for linear models, level-specific model fit evaluation was also investigated using partially saturated models. Four population models were considered: a model with interaction effects at both levels, an interaction effect at the within-group level, an interaction effect at the between-group level, and a model with no interaction effects at either level. For these models the number of groups, predictor correlation, and model misspecification were varied. The results indicate that the robust test statistic performed sufficiently well. Advantages of level-specific model fit evaluation for the detection of model misfit are demonstrated.
Deng, Xinyang; Jiang, Wen
2017-09-12
Failure mode and effect analysis (FMEA) is a useful tool to define, identify, and eliminate potential failures or errors so as to improve the reliability of systems, designs, and products. Risk evaluation is an important issue in FMEA to determine the risk priorities of failure modes. There are some shortcomings in the traditional risk priority number (RPN) approach for risk evaluation in FMEA, and fuzzy risk evaluation has become an important research direction that attracts increasing attention. In this paper, fuzzy risk evaluation in FMEA is studied from the perspective of multi-sensor information fusion. By considering the non-exclusiveness between the evaluations of fuzzy linguistic variables for failure modes, a novel model called D numbers is used to model the non-exclusive fuzzy evaluations. A D-numbers-based multi-sensor information fusion method is proposed to establish a new model for fuzzy risk evaluation in FMEA. An illustrative example is provided and examined using the proposed model and other existing methods to show the effectiveness of the proposed model.
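For context, the traditional RPN that the fuzzy approach improves upon is simply the product of the severity, occurrence, and detection ratings. A well-known shortcoming is that very different failure modes can receive identical RPNs, as the toy example below shows (ratings invented).

```python
def rpn(severity, occurrence, detection):
    """Traditional risk priority number: product of the three 1-10 ratings."""
    return severity * occurrence * detection

# two very different failure modes that receive the same priority
a = rpn(9, 2, 2)   # severe, but rare and easily detected
b = rpn(2, 9, 2)   # mild, but frequent
```

This ambiguity is one of the motivations for replacing crisp ratings with fuzzy linguistic evaluations.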
Improving CASINO performance for models with large number of electrons
Energy Technology Data Exchange (ETDEWEB)
Anton, L; Alfe, D; Hood, R Q; Tanqueray, D
2009-05-13
Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory for models with more than 1000 electrons because of the need to store a large amount of orbital-related data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be carried out faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data, implemented with MPI or Unix inter-process communication tools; (2) a second level of parallelism for the configuration computation.
High Reynolds number magnetohydrodynamic turbulence using a Lagrangian model.
Graham, J Pietarila; Mininni, P D; Pouquet, A
2011-07-01
With the help of a model of magnetohydrodynamic (MHD) turbulence tested previously, we explore high Reynolds number regimes up to equivalent resolutions of 6000^3 grid points in the absence of forcing and with no imposed uniform magnetic field. For the given initial condition chosen here, with equal kinetic and magnetic energy, the flow ends up being dominated by the magnetic field, and the dynamics leads to an isotropic Iroshnikov-Kraichnan energy spectrum. However, the locally anisotropic magnetic field fluctuations perpendicular to the local mean field follow a Kolmogorov law. We find that the ratio of the eddy turnover time to the Alfvén time increases with wave number, contrary to the so-called critical balance hypothesis. Residual energy and helicity spectra are also considered; the role played by the conservation of magnetic helicity is studied, and scaling laws are found for the magnetic helicity and residual helicity spectra. We put these results in the context of the dynamics of a globally isotropic MHD flow that is locally anisotropic because of the influence of the strong large-scale magnetic field, leading to a partial equilibration between kinetic and magnetic modes for the energy and the helicity.
Modelling the number of olive groves in Spanish municipalities
Directory of Open Access Journals (Sweden)
María-Dolores Huete
2016-03-01
Full Text Available The univariate generalized Waring distribution (UGWD) is presented as a new model to describe the goodness of fit, applicable in the context of agriculture. In this paper, it was used to model the number of olive groves recorded in Spain in the 8,091 municipalities recorded in the 2009 Agricultural Census, according to which the production of oil olives accounted for 94% of total output, while that of table olives represented 6% (with an average of 44.84 and 4.06 holdings per Spanish municipality, respectively). UGWD is suitable for fitting this type of discrete data, with strong left-sided asymmetry. This novel use of UGWD can provide the foundation for future research in agriculture, with the advantage over other discrete distributions that it enables the analyst to split the variance. After defining the distribution, we analysed various methods for fitting the parameters associated with it, namely estimation by maximum likelihood, estimation by the method of moments and a variant of the latter, estimation by the method of frequencies and moments. For oil olives, the chi-square goodness of fit test gives p-values of 0.9992, 0.9967 and 0.9977, respectively. However, a poor fit was obtained for the table olive distribution. Finally, the variance was split, following Irwin, into three components related to random factors, external factors and internal differences. For the distribution of the number of olive grove holdings, this splitting showed that random and external factors account for only about 0.22% and 0.05%, respectively. Therefore, internal differences within municipalities play an important role in determining total variability.
Modelling the number of olive groves in Spanish municipalities
Energy Technology Data Exchange (ETDEWEB)
Huete, M.D.; Marmolejo, J.A.
2016-11-01
The univariate generalized Waring distribution (UGWD) is presented as a new model to describe the goodness of fit, applicable in the context of agriculture. In this paper, it was used to model the number of olive groves recorded in Spain in the 8,091 municipalities recorded in the 2009 Agricultural Census, according to which the production of oil olives accounted for 94% of total output, while that of table olives represented 6% (with an average of 44.84 and 4.06 holdings per Spanish municipality, respectively). UGWD is suitable for fitting this type of discrete data, with strong left-sided asymmetry. This novel use of UGWD can provide the foundation for future research in agriculture, with the advantage over other discrete distributions that it enables the analyst to split the variance. After defining the distribution, we analysed various methods for fitting the parameters associated with it, namely estimation by maximum likelihood, estimation by the method of moments and a variant of the latter, estimation by the method of frequencies and moments. For oil olives, the chi-square goodness of fit test gives p-values of 0.9992, 0.9967 and 0.9977, respectively. However, a poor fit was obtained for the table olive distribution. Finally, the variance was split, following Irwin, into three components related to random factors, external factors and internal differences. For the distribution of the number of olive grove holdings, this splitting showed that random and external factors account for only about 0.22% and 0.05%, respectively. Therefore, internal differences within municipalities play an important role in determining total variability. (Author)
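The chi-square goodness-of-fit statistic used in the comparisons above is straightforward to compute directly; the bin counts below are invented for illustration, not the olive-grove data.

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic: sum of (O - E)^2 / E over the bins."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# toy observed counts vs counts expected under a fitted distribution
stat = chi_square([10, 20, 30], [12, 18, 30])
```

The statistic is then compared against a chi-square distribution with (bins − 1 − fitted parameters) degrees of freedom to obtain the p-values reported in the abstract.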
Gan, Luping; Li, Yan-Feng; Zhu, Shun-Peng; Yang, Yuan-Jian; Huang, Hong-Zhong
2014-06-01
Failure mode, effects and criticality analysis (FMECA) and fault tree analysis (FTA) are powerful tools for evaluating the reliability of systems. Although the single-failure-mode case can be efficiently addressed by traditional FMECA, multiple failure modes and component correlations in complex systems cannot be effectively evaluated. In addition, correlated variables and parameters are often assumed to be precisely known in quantitative analysis. In fact, due to the lack of information, epistemic uncertainty commonly exists in engineering design. To solve these problems, the advantages of FMECA, FTA, fuzzy theory, and Copula theory are integrated into a unified hybrid method called the fuzzy probability weighted geometric mean (FPWGM) risk priority number (RPN) method. The epistemic uncertainty of risk variables and parameters is characterized by fuzzy numbers to obtain the fuzzy weighted geometric mean (FWGM) RPN for a single failure mode. Multiple failure modes are connected using minimum cut sets (MCS), and Boolean logic is used to combine the fuzzy risk priority number (FRPN) of each MCS. Moreover, Copula theory is applied to analyze the correlation of multiple failure modes in order to derive the failure probabilities of each MCS. Compared to the case where dependency among multiple failure modes is not considered, the Copula modeling approach eliminates the error of reliability analysis. Furthermore, for the purpose of quantitative analysis, probability importance weights from the failure probabilities are assigned to the FWGM RPN to reassess the risk priority, which generalizes the definitions of probability weight and FRPN, resulting in a more accurate estimation than that of the traditional models. Finally, a basic fatigue analysis case drawn from turbine and compressor blades in aeroengines is used to demonstrate the effectiveness and robustness of the presented method. The result provides some important insights on fatigue reliability analysis and risk priority assessment of structural
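The weighted-geometric-mean core of the FWGM RPN can be sketched for positive triangular fuzzy numbers (l, m, u): because exponentiation and multiplication are monotone for positive arguments, the operation can be applied component-wise. The ratings and weights below are invented, and the full FPWGM method (cut sets, Copulas, probability importance weights) is beyond this sketch.

```python
def fuzzy_wgm(factors, weights):
    """Weighted geometric mean of positive triangular fuzzy numbers (l, m, u).

    For positive numbers the map x -> prod(x_i ** w_i) is monotone in each
    argument, so it can be evaluated component-wise on (l, m, u).
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to one"
    result = [1.0, 1.0, 1.0]
    for (l, m, u), w in zip(factors, weights):
        result[0] *= l ** w
        result[1] *= m ** w
        result[2] *= u ** w
    return tuple(result)

# severity, occurrence, detection as triangular fuzzy ratings (invented)
frpn = fuzzy_wgm([(6, 7, 8), (3, 4, 5), (2, 3, 4)], [1 / 3, 1 / 3, 1 / 3])
```

With equal weights and crisp (degenerate) ratings this reduces to the ordinary geometric mean of S, O, and D.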
Increased mast cell numbers in a calcaneal tendon overuse model.
Pingel, J; Wienecke, J; Kongsgaard, M; Behzad, H; Abraham, T; Langberg, H; Scott, A
2013-12-01
Tendinopathy is often discovered late because the initial development of tendon pathology is asymptomatic. The aim of this study was to examine the potential role of mast cell involvement in early tendinopathy using a high-intensity uphill running (HIUR) exercise model. Twenty-four male Wistar rats were divided into two groups: a running group (n = 12) and a sedentary control group (n = 12). The running group was exposed to the HIUR exercise protocol for 7 weeks. The calcaneal tendons of both hind limbs were dissected. The right tendon was used for histologic analysis using the Bonar score, immunohistochemistry, and second harmonic generation microscopy (SHGM). The left tendon was used for quantitative polymerase chain reaction (qPCR) analysis. An increased tendon cell density was observed in the runners compared to the controls (P = 0.05). Further, the intensity of immunostaining of protein kinase B was increased in the runners (P = 0.03; 2.75 ± 0.54 vs 1.17 ± 0.53). The Bonar score (P = 0.05) and the number of mast cells (P = 0.02) were significantly higher in the runners compared to the controls. Furthermore, SHGM showed focal collagen disorganization in the runners, and reduced collagen density (P = 0.03). IL-3 mRNA levels were correlated with mast cell number in sedentary animals. The qPCR analysis showed no significant differences between the groups in the other analyzed targets. The current study demonstrates that 7 weeks of HIUR causes structural changes in the calcaneal tendon, and further that these changes are associated with an increased mast cell density.
Dynamos at extreme magnetic Prandtl numbers: insights from shell models
Verma, Mahendra K.; Kumar, Rohit
2016-12-01
We present an MHD shell model suitable for computation of various energy fluxes of magnetohydrodynamic turbulence for very small and very large magnetic Prandtl numbers $\\mathrm{Pm}$; such computations are inaccessible to direct numerical simulations. For small $\\mathrm{Pm}$, we observe that both kinetic and magnetic energy spectra scale as $k^{-5/3}$ in the inertial range, but the dissipative magnetic energy scales as $k^{-11/3}\\exp(-k/k_\\eta)$. Here, the kinetic energy at large length scale feeds the large-scale magnetic field that cascades to small-scale magnetic field, which gets dissipated by Joule heating. The large-$\\mathrm{Pm}$ dynamo has a similar behaviour except that the dissipative kinetic energy scales as $k^{-13/3}$. For this case, the large-scale velocity field transfers energy to the large-scale magnetic field, which gets transferred to small-scale velocity and magnetic fields; the energy of the small-scale magnetic field also gets transferred to the small-scale velocity field, and the energy thus accumulated is dissipated by the viscous force.
Entransy dissipation number and its application to heat exchanger performance evaluation
Institute of Scientific and Technical Information of China (English)
GUO JiangFeng; CHENG Lin; XU MingTian
2009-01-01
Based on the concept of entransy, which characterizes heat transfer ability, a new heat exchanger performance evaluation criterion termed the entransy dissipation number is established. Our analysis shows that a decrease of the entransy dissipation number always increases the heat exchanger effectiveness for a fixed heat capacity rate ratio. Therefore, the smaller the entransy dissipation number, the better the heat exchanger performance is. The entransy dissipation number, expressed in terms of the number of heat transfer units or the heat capacity rate ratio, correctly exhibits the global performance of counter-, cross- and parallel-flow heat exchangers. In comparison with heat exchanger performance evaluation criteria based on entropy generation, the entransy dissipation number demonstrates some distinct advantages. Furthermore, the entransy dissipation number reflects the degree of irreversibility caused by flow imbalance.
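The effectiveness against which the entransy dissipation number is benchmarked follows the standard ε-NTU relations. A sketch for a counter-flow exchanger with heat capacity rate ratio Cr < 1 is given below; the entransy dissipation number itself requires the paper's definition, which is not reproduced here.

```python
import math

def effectiveness_counterflow(ntu, cr):
    """Standard epsilon-NTU relation for a counter-flow heat exchanger (Cr < 1)."""
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

eps = effectiveness_counterflow(2.0, 0.5)
```

Plotting a criterion such as the entransy dissipation number against NTU at fixed Cr, as the abstract describes, lets one check that smaller values of the criterion do correspond to larger effectiveness.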
A heuristic approach to determine an appropriate number of topics in topic modeling.
Zhao, Weizhong; Chen, James J; Perkins, Roger; Liu, Zhichao; Ge, Weigong; Ding, Yijun; Zou, Wen
2015-01-01
Topic modelling is an active research field in machine learning. While mainly used to build models from unstructured textual data, it offers an effective means of data mining where samples represent documents and different biological endpoints or omics data represent words. Latent Dirichlet Allocation (LDA) is the most commonly used topic modelling method across a wide range of technical fields. However, model development can be arduous and tedious, and requires burdensome and systematic sensitivity studies in order to find the best set of model parameters. Often, time-consuming subjective evaluations are needed to compare models. Currently, research has yielded no easy way to choose the proper number of topics in a model beyond a laborious iterative approach. Based on analysis of the variation of statistical perplexity during topic modelling, a heuristic approach is proposed in this study to estimate the most appropriate number of topics. Specifically, the rate of perplexity change (RPC) as a function of the number of topics is proposed as a suitable selector. We test the stability and effectiveness of the proposed method on three markedly different types of ground-truth datasets: Salmonella next-generation sequencing, pharmacological side effects, and textual abstracts on computational biology and bioinformatics (TCBB) from PubMed. The proposed RPC-based method is demonstrated to choose the best number of topics in three numerical experiments of widely different data types, and for databases of very different sizes. The work required was markedly less arduous than if full systematic sensitivity studies had been carried out with the number of topics as a parameter. We understand that additional investigation is needed to substantiate the method's theoretical basis, and to establish its generalizability in terms of dataset characteristics.
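The proposed selector, the rate of perplexity change, is easy to compute from a perplexity-versus-number-of-topics curve; the values below are invented for illustration, not taken from the paper's experiments.

```python
def rate_of_perplexity_change(topics, perplexities):
    """RPC(i) = |P_i - P_{i-1}| / (T_i - T_{i-1}), one value per interval."""
    pairs = list(zip(topics, perplexities))
    return [abs(p1 - p0) / (t1 - t0)
            for (t0, p0), (t1, p1) in zip(pairs, pairs[1:])]

# hypothetical perplexity curve over candidate topic numbers
topics = [5, 10, 20, 40]
perp = [900.0, 700.0, 650.0, 640.0]
rpc = rate_of_perplexity_change(topics, perp)
```

A change point in this sequence (here, the sharp drop after 10 topics) is the kind of signal the heuristic uses to pick the most appropriate number of topics.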
Numerical computations and mathematical modelling with infinite and infinitesimal numbers
Sergeyev, Yaroslav D.
2012-01-01
Traditional computers work with finite numbers. Situations where the usage of infinite or infinitesimal quantities is required are studied mainly theoretically. In this paper, a recently introduced computational methodology (not related to non-standard analysis) is used to work with finite, infinite, and infinitesimal numbers \textit{numerically}. This can be done on a new kind of computer, the Infinity Computer, which is able to work with all these types of numbers. The new computatio...
Long-Term Sunspot Number Prediction based on EMD Analysis and AR Model
Institute of Scientific and Technical Information of China (English)
Tong Xu; Jian Wu; Zhen-Sen Wu; Qiang Li
2008-01-01
The Empirical Mode Decomposition (EMD) and the Auto-Regressive (AR) model are applied to the long-term prediction of sunspot numbers. Using sunspot-number data from 1848 to 1992, the method is evaluated by comparing its prediction against the measured data of solar cycle 23: different time-scale components are obtained by the EMD method and multi-step predicted values are combined to reconstruct the sunspot number time series. The result is remarkably good in comparison to the predictions made by the solar dynamo and precursor approaches for cycle 23. Sunspot numbers of the coming solar cycle 24 are obtained with the data from 1848 to 2007; the maximum amplitude of the next solar cycle is predicted to be about 112 in 2011-2012.
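The AR stage of such a hybrid scheme can be sketched with a first-order autoregression fitted by the lag-one Yule-Walker estimate. The series below is synthetic, not sunspot data, and the paper's actual model may use a higher order chosen per EMD component.

```python
import numpy as np

def fit_ar1(x):
    """Yule-Walker estimate of the AR(1) coefficient: lag-1 autocovariance / variance."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# synthetic AR(1) series with true coefficient 0.8
rng = np.random.default_rng(42)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.8 * x[t - 1] + rng.normal()
phi = fit_ar1(x)
```

A one-step forecast is then simply `phi * x[-1]` (about the mean); applying such a fit to each EMD component and recombining the multi-step forecasts mirrors the reconstruction described above.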
Open intersection numbers, matrix models and MKP hierarchy
Alexandrov, A
2014-01-01
In this paper we claim that the generating function of the intersection numbers on the moduli spaces of Riemann surfaces with boundary, constructed recently by R. Pandharipande, J. Solomon and R. Tessler and extended by A. Buryak, is a tau-function of the KP integrable hierarchy. Moreover, it is given by a simple modification of the Kontsevich matrix integral so that the generating functions of open and closed intersection numbers are described by the MKP integrable hierarchy. Virasoro constraints for the open intersection numbers naturally follow from the matrix integral representation.
Open intersection numbers, matrix models and MKP hierarchy
Energy Technology Data Exchange (ETDEWEB)
Alexandrov, A. [Freiburg Institute for Advanced Studies (FRIAS), University of Freiburg,Albertstrasse 19, 79104 Freiburg (Germany); Mathematics Institute, University of Freiburg,Eckerstrasse 1, 79104 Freiburg (Germany); ITEP,Bolshaya Cheremushkinskaya 25, 117218 Moscow (Russian Federation)
2015-03-09
In this paper we conjecture that the generating function of the intersection numbers on the moduli spaces of Riemann surfaces with boundary, constructed recently by R. Pandharipande, J. Solomon and R. Tessler and extended by A. Buryak, is a tau-function of the KP integrable hierarchy. Moreover, it is given by a simple modification of the Kontsevich matrix integral so that the generating functions of open and closed intersection numbers are described by the MKP integrable hierarchy. Virasoro constraints for the open intersection numbers naturally follow from the matrix integral representation.
Directory of Open Access Journals (Sweden)
Yen-Jen Lin
Full Text Available Copy number variation (CNV) has been reported to be associated with disease and various cancers. Hence, identifying the accurate position and the type of CNV is currently a critical issue. There are many tools aimed at detecting CNV regions, constructing haplotype phases on CNV regions, or estimating the numerical copy numbers. However, none of them can do all three tasks at the same time. This paper presents a method based on a Hidden Markov Model to detect parent-specific copy number change on both chromosomes with signals from SNP arrays. A haplotype tree is constructed with dynamic branch merging to model the transition of the copy number status of the two alleles assessed at each SNP locus. The emission models are constructed for the genotypes formed with the two haplotypes. The proposed method can provide the segmentation points of the CNV regions as well as the haplotype phasing for the allelic status on each chromosome. The estimated copy numbers are provided as fractional numbers, which can accommodate the somatic mutation in cancer specimens that usually consist of heterogeneous cell populations. The algorithm is evaluated on simulated data and the previously published regions of CNV of the 270 HapMap individuals. The results were compared with five popular methods: PennCNV, genoCN, COKGEN, QuantiSNP and cnvHap. The application on oral cancer samples demonstrates how the proposed method can facilitate clinical association studies. The proposed algorithm exhibits comparable sensitivity of the CNV regions to the best algorithm in our genome-wide study and demonstrates the highest detection rate in SNP-dense regions. In addition, we provide better haplotype phasing accuracy than similar approaches. The clinical association carried out with our fractional estimate of copy numbers in the cancer samples provides better detection power than that with integer copy number states.
Lin, Yen-Jen; Chen, Yu-Tin; Hsu, Shu-Ni; Peng, Chien-Hua; Tang, Chuan-Yi; Yen, Tzu-Chen; Hsieh, Wen-Ping
2014-01-01
Copy number variation (CNV) has been reported to be associated with disease and various cancers. Hence, identifying the accurate position and the type of CNV is currently a critical issue. There are many tools aimed at detecting CNV regions, constructing haplotype phases on CNV regions, or estimating the numerical copy numbers. However, none of them can do all three tasks at the same time. This paper presents a method based on a Hidden Markov Model to detect parent-specific copy number change on both chromosomes with signals from SNP arrays. A haplotype tree is constructed with dynamic branch merging to model the transition of the copy number status of the two alleles assessed at each SNP locus. The emission models are constructed for the genotypes formed with the two haplotypes. The proposed method can provide the segmentation points of the CNV regions as well as the haplotype phasing for the allelic status on each chromosome. The estimated copy numbers are provided as fractional numbers, which can accommodate the somatic mutation in cancer specimens that usually consist of heterogeneous cell populations. The algorithm is evaluated on simulated data and the previously published regions of CNV of the 270 HapMap individuals. The results were compared with five popular methods: PennCNV, genoCN, COKGEN, QuantiSNP and cnvHap. The application on oral cancer samples demonstrates how the proposed method can facilitate clinical association studies. The proposed algorithm exhibits comparable sensitivity of the CNV regions to the best algorithm in our genome-wide study and demonstrates the highest detection rate in SNP-dense regions. In addition, we provide better haplotype phasing accuracy than similar approaches. The clinical association carried out with our fractional estimate of copy numbers in the cancer samples provides better detection power than that with integer copy number states. PMID:24849202
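The segmentation idea, stripped of the haplotype tree and fractional copy numbers, can be sketched as Viterbi decoding of a small HMM over integer copy-number states with Gaussian emissions on a log-intensity signal. All parameters below are invented for illustration and are not the paper's emission models.

```python
import math

def viterbi(obs, states, trans, means, sd):
    """Most likely copy-number path given Gaussian log-intensity emissions."""
    def logpdf(x, mu):
        return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

    v = [{s: logpdf(obs[0], means[s]) for s in states}]   # uniform prior
    back = []
    for x in obs[1:]:
        row, ptr = {}, {}
        for s in states:
            prev, score = max(((p, v[-1][p] + math.log(trans[p][s]))
                               for p in states), key=lambda t: t[1])
            row[s] = score + logpdf(x, means[s])
            ptr[s] = prev
        v.append(row)
        back.append(ptr)
    path = [max(states, key=lambda s: v[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

states = [1, 2, 3]                        # integer copy numbers
means = {1: -0.5, 2: 0.0, 3: 0.4}         # illustrative log R ratio centres
stay = 0.98                               # probability of keeping the same state
trans = {p: {s: stay if s == p else (1 - stay) / 2 for s in states}
         for p in states}
obs = [0.0] * 5 + [-0.5] * 5 + [0.0] * 5  # a clear one-copy deletion in the middle
path = viterbi(obs, states, trans, means, sd=0.1)
```

The change points of the decoded path are the segmentation points; the paper's method additionally phases the two parental haplotypes and reports fractional copy numbers for mixed cell populations.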
An EPQ model with imperfect items using interval grey numbers
Directory of Open Access Journals (Sweden)
Erdal Aydemir
2015-01-01
Full Text Available The classic economic production quantity (EPQ) model has been widely used to determine the optimal production quantity. However, the analysis behind the EPQ model has several weaknesses, which have led many researchers and practitioners to extend the original model in several respects. The basic assumption of the EPQ model is that 100% of manufactured products are non-defective, which is not valid for many production processes. The purpose of this paper is to develop an EPQ model with a grey demand rate and grey cost values, allowing a maximum backorder level in units of good-quality items under an imperfect production process. The imperfect items are considered to be low-quality items, which are sold to a particular purchaser at a lower price, while the others are reworked or scrapped. A mathematical model is developed, and an industrial example from the wooden chipboard production process is presented to illustrate the proposed model.
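For reference, the classic EPQ that the paper generalizes balances setup and holding costs under a finite production rate. Grey numbers and imperfect items are beyond this sketch, and all figures below are invented.

```python
import math

def epq(demand, setup_cost, holding_cost, production_rate):
    """Classic economic production quantity: sqrt(2*K*D / (h*(1 - D/P)))."""
    return math.sqrt(2 * setup_cost * demand /
                     (holding_cost * (1 - demand / production_rate)))

# hypothetical annual demand D, setup cost K, unit holding cost h, production rate P
q_star = epq(demand=1000, setup_cost=100, holding_cost=5, production_rate=4000)
```

Replacing the crisp demand rate and costs with interval grey numbers, as the paper does, turns `q_star` itself into an interval rather than a single value.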
Modeling Interventions in the Owned Cat Population to Decrease Numbers, Knox County, TN.
Lancaster, Evan P; Lenhart, Suzanne; Ojogbo, Ejebagom J; Rekant, Steven I; Scott, Janelle R; Weimer, Heidi; New, John C
2016-01-01
To find management strategies for controlling the owned cat population in Knox County, TN, the authors formulated a mathematical model using the biological properties of these animals and spay interventions on certain age classes. They constructed this discrete-time model to predict the future owned cat population in the county and to evaluate intervention strategies that surgically sterilize some proportion of the population. Using the predicted population size and the number of surgeries for specific scenarios, they showed that focusing on specific age classes can be an effective feature of spay programs.
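A hypothetical sketch of this kind of model is given below: a discrete-time, age-structured projection in which a proportion of a chosen age class is spayed each year. All rates and initial numbers are illustrative assumptions, not the paper's estimates:

```python
# Illustrative age-structured projection with spay interventions.
# Fecundity/survival values are assumptions, not the paper's estimates.

AGES = 3                       # age classes: kitten, young adult, adult
FECUNDITY = [0.0, 2.4, 1.8]    # female kittens per intact female per year
SURVIVAL = [0.5, 0.8, 0.7]     # annual survival per age class

def project(pop, spay_frac, years=10):
    """pop = [(intact, spayed), ...] per age class; returns total after `years`."""
    for _ in range(years):
        births = sum(f * intact for f, (intact, _) in zip(FECUNDITY, pop))
        new = [[births, 0.0]] + [[0.0, 0.0] for _ in range(AGES - 1)]
        for a in range(AGES):
            intact, spayed = pop[a]
            dest = min(a + 1, AGES - 1)          # last class absorbs survivors
            surv_i = SURVIVAL[a] * intact
            surv_s = SURVIVAL[a] * spayed
            newly_spayed = spay_frac[a] * surv_i  # spay a fraction of survivors
            new[dest][0] += surv_i - newly_spayed
            new[dest][1] += surv_s + newly_spayed
        pop = [tuple(x) for x in new]
    return sum(i + s for i, s in pop)

start = [(100.0, 0.0), (80.0, 0.0), (120.0, 0.0)]
no_spay = project(start, [0.0, 0.0, 0.0])
spay_young = project(start, [0.0, 0.6, 0.0])     # target young adults
print(round(no_spay), round(spay_young))
```

Comparing projected totals across spay scenarios, as above, is the kind of age-class comparison the abstract describes.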
Modeling of dynamically loaded hydrodynamic bearings at low Sommerfeld numbers
DEFF Research Database (Denmark)
Thomsen, Kim
The challenging operating conditions of the main bearing in a wind turbine pose a demanding development task for the design of a hydrodynamic bearing. In general, these conditions include operation at low Reynolds numbers with frequent starts and stops at high loads, as well as difficult operating conditions dictated...
Directory of Open Access Journals (Sweden)
Flávia Barbosa Abreu
2006-01-01
This study presents the minimum number and the best combination of tomato harvests needed to compare tomato accessions from germplasm banks. The number and weight of fruit in tomato plants are important auxiliary traits in the evaluation of germplasm banks and should be studied together with other desirable characteristics such as pest and disease resistance, improved flavor, and early production. Brazilian tomato breeding programs should consider not only the number of fruit but also fruit size, because Brazilian consumers value fruit that are homogeneous, large, and heavy. Our experiment was a randomized block design with three replicates of 32 tomato accessions from the Vegetable Germplasm Bank (Banco de Germoplasma de Hortaliças) at the Federal University of Viçosa, Minas Gerais, Brazil, plus two control cultivars (Debora Plus and Santa Clara). Nine harvests were evaluated for four production-related traits. The results indicate that six successive harvests are sufficient to compare tomato genotypes and germplasm bank accessions. Evaluation of genotypes according to the number of fruit requires analysis from the second to the seventh harvest. Evaluation of fruit weight by genotype requires analysis from the fourth to the ninth harvest. Evaluation of both number and weight of fruit requires analysis from the second to the ninth harvest.
Estimates of the Strouhal number from numerical models of convection
Käpylä, P J; Ossendrijver, M; Tuominen, I
2004-01-01
We determine the Strouhal number (hereafter St), which is essentially a nondimensional measure of the correlation time, from numerical calculations of convection. We use two independent methods to estimate St. First, we apply the minimal tau-approximation (MTA) to the equation for the time derivative of the Reynolds stress; a relaxation time is obtained, from which St can be estimated by normalising with a typical turnover time. Second, we calculate the correlation and turnover times separately, the former from the autocorrelation of velocity and the latter by following test particles embedded in the flow. We find that the Strouhal number is in general of the order of 0.1 to 1, i.e. rather large in comparison to the typical assumption in mean-field theories that St << 1. However, there is a clear decreasing trend as a function of the Rayleigh number and increasing rotation. Furthermore, for the present range of parameters the decrease of St does not show signs of saturation, indicating that in stell...
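The second method described above can be sketched as follows: estimate an integral correlation time from the velocity autocorrelation and divide by a turnover time. The "velocity" below is a synthetic AR(1) series with a known correlation time, and the turnover time is an assumed value, so this only illustrates the estimator, not convection data:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01
tau_true = 0.5                 # correlation time of the synthetic signal
n = 50_000
phi = np.exp(-dt / tau_true)
eps = rng.normal(size=n) * np.sqrt(1 - phi**2)
u = np.empty(n)
u[0] = 0.0
for i in range(1, n):          # AR(1) surrogate for a velocity time series
    u[i] = phi * u[i - 1] + eps[i]

# integral correlation time: integrate the autocorrelation to its first zero
u -= u.mean()
var = np.var(u)
maxlag = 1000
acf = np.array([np.mean(u[: n - k] * u[k:]) for k in range(maxlag)]) / var
zero_idx = int(np.argmax(acf <= 0)) or maxlag
tau_corr = acf[:zero_idx].sum() * dt      # rectangle-rule integral

tau_turnover = 2.0             # assumed convective turnover time
strouhal = tau_corr / tau_turnover
print(round(strouhal, 2))
```

With these assumed values the estimator recovers tau_corr close to 0.5, giving St of order 0.25, i.e. in the 0.1 to 1 range reported in the abstract.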
Barraclough, Terry
Administrators have always been evaluated in one way or another. Decisions on hiring, training, promotion, and firing of administrators have always been necessary; such decisions are based on some sort of evaluation, whether formal or informal, of administrative performance. The concept of accountability has also affected administrator evaluation.…
Özcan, Zeynep; Başkan, Oğuz; Düzgün, H Şebnem; Kentel, Elçin; Alp, Emre
2017-10-01
Fate and transport models are powerful tools that aid authorities in making unbiased decisions for developing sustainable management strategies. Applying pollution fate and transport models in semi-arid regions has been challenging because of unique hydrological characteristics and limited data availability. Significant temporal and spatial variability in rainfall events, complex interactions between soil, vegetation, and topography, and limited water quality and hydrological data due to an insufficient monitoring network make it difficult to develop reliable models in semi-arid regions. The performance of these models governs the final use of their outcomes, such as policy implementation, screening, and economic analysis. In this study, a deterministic distributed fate and transport model, SWAT, is applied in the Lake Mogan Watershed, a semi-arid region dominated by dry agricultural practices, to estimate nutrient loads and to develop the water budget of the watershed. To minimize the discrepancy due to the limited availability of historical water quality data, extensive efforts were made to collect site-specific data for model inputs such as soil properties, agricultural practice information, and land use. Moreover, calibration parameter ranges suggested in the literature are used during calibration in order to obtain a more realistic representation of the Lake Mogan Watershed in the model. Model performance is evaluated by comparing the measured data with the 95% CI for the simulated data and by comparing unit pollution load estimates with those reported in the literature for similar catchments, in addition to commonly used evaluation criteria such as Nash-Sutcliffe simulation efficiency, coefficient of determination, and percent bias. These evaluations demonstrated that even though the model prediction power is not high according to the commonly used model performance criteria, the calibrated model may provide useful information in the comparison of the
The baryon number two system in the Chiral Soliton Model
Sarti, Valentina Mantovani; Vento, Vicente; Park, Byung-Yoon
2012-01-01
We study the interaction between two B = 1 states in a Chiral Soliton Model where baryons are described as non-topological solitons. By using the hedgehog solution for the B = 1 states we construct three possible B = 2 configurations to analyze the role of the relative orientation of the hedgehog quills in the dynamics. The strong dependence of the intersoliton interaction on these relative orientations reveals that studies of dense hadronic matter using this model should take into account their implications.
DEFF Research Database (Denmark)
Christensen, Martin Gram; Adler-Nissen, Jens
2015-01-01
This paper presents a normalization of the Biot number, which enables the Fourier exponents to be fitted with a simple 3rd order polynomial (R2 > 0.9999). The method is validated for Biot numbers ranging from 0.02 to 8, and presented graphically for both the Fourier exponents and the lag factors...
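The idea can be sketched numerically: for a slab, the first Fourier exponent beta solves beta·tan(beta) = Bi, and a cubic polynomial is fitted against a normalized Biot number. The normalisation used below, b = Bi/(1+Bi), is an assumption standing in for the paper's own mapping, so the resulting fit quality only illustrates the approach:

```python
import numpy as np

def first_root(bi, lo=1e-9, hi=np.pi / 2 - 1e-9):
    """First root of beta * tan(beta) = Bi (slab geometry), by bisection."""
    f = lambda b: b * np.tan(b) - bi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

bi = np.linspace(0.02, 8.0, 200)        # the Biot range validated in the paper
beta = np.array([first_root(b) for b in bi])
x = bi / (1.0 + bi)                     # assumed normalisation of Bi
coef = np.polyfit(x, beta, 3)           # 3rd order polynomial fit
fit = np.polyval(coef, x)
r2 = 1 - np.sum((beta - fit) ** 2) / np.sum((beta - beta.mean()) ** 2)
print(round(r2, 5))
```

A suitable normalisation makes the exponent a smooth, nearly polynomial function of the transformed variable, which is what allows the simple cubic fit reported in the abstract.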
Postprocessing for quantum random number generators: entropy evaluation and randomness extraction
Ma, Xiongfeng; Xu, Feihu; Xu, He; Tan, Xiaoqing; Qi, Bing; Lo, Hoi-Kwong
2012-01-01
Quantum random-number generators (QRNGs) can offer a means to generate information-theoretically provable random numbers, in principle. In practice, unfortunately, the quantum randomness is inevitably mixed with classical randomness due to classical noises. To distill this quantum randomness, one needs to quantify the randomness of the source and apply a randomness extractor. Here, we propose a generic framework for evaluating quantum randomness of real-life QRNGs by min-entropy, and apply it...
Recommendations and illustrations for the evaluation of photonic random number generators
Hart, Joseph D.; Terashima, Yuta; Uchida, Atsushi; Baumgartner, Gerald B.; Murphy, Thomas E.; Roy, Rajarshi
2017-09-01
The never-ending quest to improve the security of digital information combined with recent improvements in hardware technology has caused the field of random number generation to undergo a fundamental shift from relying solely on pseudo-random algorithms to employing optical entropy sources. Despite these significant advances on the hardware side, commonly used statistical measures and evaluation practices remain ill-suited to understand or quantify the optical entropy that underlies physical random number generation. We review the state of the art in the evaluation of optical random number generation and recommend a new paradigm: quantifying entropy generation and understanding the physical limits of the optical sources of randomness. In order to do this, we advocate for the separation of the physical entropy source from deterministic post-processing in the evaluation of random number generators and for the explicit consideration of the impact of the measurement and digitization process on the rate of entropy production. We present the Cohen-Procaccia estimate of the entropy rate h(ε, τ) as one way to do this. In order to provide an illustration of our recommendations, we apply the Cohen-Procaccia estimate as well as the entropy estimates from the new NIST draft standards for physical random number generators to evaluate and compare three common optical entropy sources: single photon time-of-arrival detection, chaotic lasers, and amplified spontaneous emission.
Increased mast cell numbers in a calcaneal tendon overuse model
DEFF Research Database (Denmark)
Pingel, Jessica; Wienecke, Jacob; Kongsgaard Madsen, Mads
2013-01-01
Tendinopathy is often discovered late because the initial development of tendon pathology is asymptomatic. The aim of this study was to examine the potential role of mast cell involvement in early tendinopathy using a high-intensity uphill running (HIUR) exercise model. Twenty-four male Wistar ra...
Emergence of a 'visual number sense' in hierarchical generative models.
Stoianov, Ivilin; Zorzi, Marco
2012-01-08
Numerosity estimation is phylogenetically ancient and foundational to human mathematical learning, but its computational bases remain controversial. Here we show that visual numerosity emerges as a statistical property of images in 'deep networks' that learn a hierarchical generative model of the sensory input. Emergent numerosity detectors had response profiles resembling those of monkey parietal neurons and supported numerosity estimation with the same behavioral signature shown by humans and animals.
Number of Clusters and the Quality of Hybrid Predictive Models in Analytical CRM
Directory of Open Access Journals (Sweden)
Łapczyński Mariusz
2014-08-01
Making more accurate marketing decisions requires managers to build effective predictive models. Typically, these models specify the probability of a customer belonging to a particular category, group, or segment. The analytical CRM categories refer to customers interested in starting cooperation with the company (acquisition models), customers who purchase additional products (cross- and up-sell models), or customers intending to end the cooperation (churn models). When building predictive models, researchers use analytical tools from various disciplines with an emphasis on their best performance. This article attempts to build a hybrid predictive model combining decision trees (the C&RT algorithm) and cluster analysis (k-means). In the experiments, five different cluster validity indices and eight datasets were used. The performance of the models was evaluated using popular measures such as accuracy, precision, recall, G-mean, F-measure, and lift in the first and second deciles. The authors tried to find a connection between the number of clusters and model quality.
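The evaluation measures listed above can be computed for any binary classifier from its predictions and scores. The sketch below implements them directly; the labels and scores are toy values for illustration:

```python
# The evaluation measures named in the abstract, for a binary classifier.

def confusion(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def metrics(y_true, y_pred):
    tp, fp, fn, tn = confusion(y_true, y_pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # sensitivity
    specificity = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f_measure": 2 * precision * recall / (precision + recall),
        "g_mean": (recall * specificity) ** 0.5,
    }

def lift_in_decile(y_true, scores, decile=1):
    """Lift: response rate among the top `decile` tenths over the base rate."""
    ranked = [t for _, t in sorted(zip(scores, y_true), reverse=True)]
    cut = max(1, len(ranked) * decile // 10)
    return (sum(ranked[:cut]) / cut) / (sum(y_true) / len(y_true))

# toy data
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.7, 0.1, 0.2, 0.85, 0.05]
m = metrics(y_true, y_pred)
print(m["accuracy"], round(m["g_mean"], 3))
print(lift_in_decile(y_true, scores, decile=2))
```

In a hybrid model of the kind the article describes, these measures would be computed per cluster-specific tree and compared across different numbers of k-means clusters.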
Chemical Kinetics and Photochemical Data for Use in Atmospheric Studies: Evaluation Number 18
Burkholder, J. B.; Sander, S. P.; Abbatt, J. P. D.; Barker, J. R.; Huie, R. E.; Kolb, C. E.; Kurylo, M. J.; Orkin, V. L.; Wilmouth, D. M.; Wine, P. H.
2015-01-01
This is the eighteenth in a series of evaluated sets of rate constants, photochemical cross sections, heterogeneous parameters, and thermochemical parameters compiled by the NASA Panel for Data Evaluation. The data are used primarily to model stratospheric and upper tropospheric processes, with particular emphasis on the ozone layer and its possible perturbation by anthropogenic and natural phenomena. The evaluation is available in electronic form from the following Internet URL: http://jpldataeval.jpl.nasa.gov/
Metrics and Evaluation Models for Accessible Television
DEFF Research Database (Denmark)
Li, Dongxiao; Looms, Peter Olaf
2014-01-01
The adoption of the UN Convention on the Rights of Persons with Disabilities (UN CRPD) in 2006 has provided a global framework for work on accessibility, including information and communication technologies and audiovisual content. One of the challenges facing the application of the UN CRPD is the number of platforms on which audiovisual content needs to be distributed, requiring very clear multiplatform architectures to facilitate interworking and assure interoperability. As a consequence, the regular evaluations of progress being made by signatories to the UN CRPD protocol are difficult... Reviewing metrics and evaluation models for access service provision, the paper identifies options that could facilitate the evaluation of UN CRPD outcomes and suggests priorities for future research in this area.
Performability Modelling Tools, Evaluation Techniques and Applications
Haverkort, Boudewijn R.H.M.
1990-01-01
This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new
Parabolic Anderson model with a finite number of moving catalysts
Castell, Fabienne; Maillard, Grégory
2010-01-01
We consider the parabolic Anderson model (PAM), which is given by the equation $\partial u/\partial t = \kappa\Delta u + \xi u$ with $u\colon \Z^d\times [0,\infty)\to \R$, where $\kappa \in [0,\infty)$ is the diffusion constant, $\Delta$ is the discrete Laplacian, and $\xi\colon \Z^d\times [0,\infty)\to\R$ is a space-time random environment that drives the equation. The solution of this equation describes the evolution of a "reactant" $u$ under the influence of a "catalyst" $\xi$. In the present paper we focus on the case where $\xi$ is a system of $n$ independent simple random walks, each with step rate $2d\rho$ and starting from the origin. We study the annealed Lyapunov exponents, i.e., the exponential growth rates of the successive moments of $u$ w.r.t. $\xi$, and show that these exponents, as a function of the diffusion constant $\kappa$ and the rate constant $\rho$, behave differently depending on the dimension $d$. In particular, we give a description of the intermittent behavior of the sys...
Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin
2015-02-01
When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise.
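The effect described above can be reproduced in a toy simulation: propagate a bacterial load through strong inactivation followed by strong growth, either as a continuous concentration or as an integer number of cells, then apply a dose-response function. All scenario values (initial cells, log-kill, growth factor, dose-response parameter) are illustrative assumptions:

```python
import math
import random

random.seed(1)
N_UNITS = 20_000
INIT_CELLS = 50          # cells per contaminated unit before processing
LOG_KILL = 3.0           # 3-log inactivation step
GROWTH = 10_000.0        # subsequent growth factor
R = 0.001                # assumed exponential dose-response parameter

p_survive = 10.0 ** (-LOG_KILL)

def risk(dose):
    return 1.0 - math.exp(-R * dose)

# (a) concentration model: the fractional mean survives and regrows in
# every unit, so every unit carries the same intermediate dose
risk_conc = risk(INIT_CELLS * p_survive * GROWTH)

# (b) number model: each cell survives independently; a unit whose
# integer count hits zero can never regrow
total = 0.0
for _ in range(N_UNITS):
    survivors = sum(random.random() < p_survive for _ in range(INIT_CELLS))
    total += risk(survivors * GROWTH)
risk_num = total / N_UNITS

print(round(risk_conc, 3), round(risk_num, 3))
```

Under this drastic kill-then-grow scenario the concentration model overestimates the average risk severalfold, matching the >10-fold overestimation pattern the abstract reports for the most extreme scenarios.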
Evaluation Model for Sentient Cities
Directory of Open Access Journals (Sweden)
Mª Florencia Fergnani Brion
2016-11-01
In this article we present research on Sentient Cities and propose an assessment model for analysing whether a city is, or could potentially be, considered one. The model can be used to evaluate the current situation of a city before introducing urban policies based on citizen participation in hybrid (physical and digital) environments. To that effect, we have developed evaluation grids with the main elements that form a Sentient City and their measurement values. The Sentient City is a variation of the Smart City, also based on technological progress and innovation, but one in which citizens are the principal agent. In this model, governments aim for a participatory and sustainable system to achieve the Knowledge Society, the development of Collective Intelligence, and the city's efficiency. They also increase the communication channels between the administration and citizens. In this new context, citizens are empowered because they have the opportunity to create a Local Identity and transform their surroundings through open and horizontal initiatives.
Prediction model of interval grey number based on DGM(1,1)
Institute of Scientific and Technical Information of China (English)
Bo Zeng; Sifeng Liu; Naiming Xie
2010-01-01
In grey system theory, studies of grey prediction models have focused on real-number sequences rather than grey-number sequences. Here, a prediction model based on interval grey number sequences is proposed. By mining the geometric features of interval grey number sequences on a two-dimensional surface, all the interval grey numbers are converted into real numbers by means of a certain algorithm, and the prediction model is then established on those real-number sequences. The entire process avoids algebraic operations on grey numbers, and the prediction problem for interval grey numbers is thereby solved. Finally, the validity and practicability of this novel model are verified through the program simulation of an example.
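For context, the classic GM(1,1) grey prediction model that DGM(1,1)-based approaches build on can be sketched as below. The input sequence is illustrative, and the paper's conversion of interval grey numbers into real-number sequences is not reproduced here; this shows only the real-number prediction step:

```python
import numpy as np

def gm11(x0, steps=1):
    """Classic GM(1,1): fit on sequence x0, return fitted values + forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                     # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])          # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])

x0 = [2.87, 3.28, 3.34, 3.39, 3.68]        # illustrative data
pred = gm11(x0, steps=2)
print(np.round(pred, 3))
```

In the interval-grey-number setting of the paper, a model of this kind would be fitted to each real-number sequence derived from the interval bounds, yielding interval forecasts.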
Evaluating models of vowel perception
Molis, Michelle R.
2005-08-01
There is a long-standing debate concerning the efficacy of formant-based versus whole spectrum models of vowel perception. Categorization data for a set of synthetic steady-state vowels were used to evaluate both types of models. The models tested included various combinations of formant frequencies and amplitudes, principal components derived from excitation patterns, and perceptually scaled LPC cepstral coefficients. The stimuli were 54 five-formant synthesized vowels that had a common F1 frequency and varied orthogonally in F2 and F3 frequency. Twelve speakers of American English categorized the stimuli as the vowels /ɪ/, /ʊ/, or /ɝ/. Results indicate that formant frequencies provided the best account of the data only if nonlinear terms, in the form of squares and cross products of the formant values, were also included in the analysis. The excitation pattern principal components also produced reasonably accurate fits to the data. Although a wish to use the lowest-dimensional representation would dictate that formant frequencies are the most appropriate vowel description, the relative success of richer, more flexible, and more neurophysiologically plausible whole spectrum representations suggests that they may be preferred for understanding human vowel perception.
Evaluating the conceptual design schemes of complex products based on FAHP using fuzzy number
Institute of Scientific and Technical Information of China (English)
[No author listed]
2005-01-01
In the conceptual design stage of complex products, a CBR (Case-Based Reasoning) tool is useful for offering a feasible set of schemes. The most suitable scheme can then be generated through a procedure of comparison and evaluation. The procedure is essentially a multiple-criteria decision-making problem. Traditional multiple-criteria programming is not flexible enough in executing the system evaluation algorithm, due to both the limited experimental data and the lack of human experience. To make the CBR tool more efficient, a new method for choosing the best among the feasible schemes, based on the Fuzzy AHP using fuzzy numbers (FFAHP), is proposed. Since the final result becomes a problem of ranking the means of fuzzy numbers according to the decision-maker's optimism, the FFAHP is much more intuitive and effective to execute than the traditional method.
Postprocessing for quantum random-number generators: Entropy evaluation and randomness extraction
Ma, Xiongfeng; Xu, Feihu; Xu, He; Tan, Xiaoqing; Qi, Bing; Lo, Hoi-Kwong
2013-06-01
Quantum random-number generators (QRNGs) can offer a means to generate information-theoretically provable random numbers, in principle. In practice, unfortunately, the quantum randomness is inevitably mixed with classical randomness due to classical noises. To distill this quantum randomness, one needs to quantify the randomness of the source and apply a randomness extractor. Here, we propose a generic framework for evaluating quantum randomness of real-life QRNGs by min-entropy, and apply it to two different existing quantum random-number systems in the literature. Moreover, we provide a guideline of QRNG data postprocessing for which we implement two information-theoretically provable randomness extractors: Toeplitz-hashing extractor and Trevisan's extractor.
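The entropy-evaluation step described above can be illustrated with a minimal sketch: estimate the min-entropy of the raw source from its observed sample distribution (assuming i.i.d. samples, a simplification), then bound how many nearly uniform bits an extractor may distil. The raw data below are a toy biased source; the Toeplitz-hashing and Trevisan extractors themselves are not implemented here:

```python
import math
import random
from collections import Counter

def min_entropy_per_sample(samples):
    """H_min = -log2(max_i p_i), estimated from observed frequencies."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# toy raw source: 8-bit samples biased towards small values, standing in
# for digitized detector output that mixes quantum and classical noise
random.seed(42)
raw = [min(random.getrandbits(8), random.getrandbits(8)) for _ in range(100_000)]

h_min = min_entropy_per_sample(raw)   # bits of min-entropy per 8-bit sample
extractable = int(len(raw) * h_min)   # crude upper bound on extractor output
print(round(h_min, 2), extractable)
```

The point of the framework is exactly this gap: the source emits 8 raw bits per sample, but only about h_min of them are distillable as information-theoretically random output.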
Karimi, Nasim; Moghimbeigi, Abbas; Motamedzade, Majid; Roshanaei, Ghodratollah
2016-12-01
Musculoskeletal disorders (MSDs) are a common problem among carpet weavers. This study was undertaken to identify the personal and occupational factors affecting the number of MSDs among carpet weavers. A cross-sectional study was performed among 862 weavers in seven towns, with workshops located in urban or rural regions. Data were collected using questionnaires covering personal, workplace, and tool information, together with the modified Nordic MSDs questionnaire. Statistical analysis was performed by applying Poisson and negative binomial mixed models using a fully Bayesian hierarchical approach. The deviance information criterion was used for comparison between models and for model selection. The majority of weavers (72%) were female, and carpet weaving was the main job of 85.2% of the workers. The negative binomial mixed model, with the lowest deviance information criterion, was selected as the best model, and the diagnostics showed convergence of the chains. Based on 95% Bayesian credible intervals, the main-job and weaving-type variables statistically affected the number of MSDs, whereas age, sex, weaving comb, work experience, and carpet weaving looms did not. According to the results of this study, it can be concluded that occupational factors are associated with the number of MSDs developing among carpet weavers. Thus, using standard tools and decreasing hours of work per day may reduce the frequency of MSDs among carpet weavers.
Yu, Hye-Kyung; Kim, Na-Young; Kim, Sung Soon; Chu, Chaeshin; Kee, Mee-Kyung
2013-12-01
From the introduction of HIV into the Republic of Korea in 1985 through 2012, 9,410 HIV-infected Koreans have been identified. Since 2000, there has been a sharp increase in newly diagnosed HIV-infected Koreans. It is necessary to estimate the changes in HIV infection to plan budgets and to modify HIV/AIDS prevention policy. We constructed autoregressive integrated moving average (ARIMA) models to forecast the number of HIV infections from 2013 to 2017. HIV infection data from 1985 to 2012 were used to fit the ARIMA models. Akaike Information Criterion and Schwarz Bayesian Criterion statistics were used to evaluate the constructed models, with estimation via the maximum likelihood method. To assess the validity of the proposed models, the mean absolute percentage error (MAPE) between the numbers of observed and fitted HIV infections from 1985 to 2012 was calculated. Finally, the fitted ARIMA models were used to forecast the number of HIV infections from 2013 to 2017. The fitted number of HIV infections from 1985 to 2012 was calculated with the optimal ARIMA (2,2,1) model. The fitted number was similar to the observed number of HIV infections, with a MAPE of 13.7%. The forecasted number of new HIV infections in 2013 was 962 (95% confidence interval (CI): 889-1,036) and in 2017 was 1,111 (95% CI: 805-1,418). The forecasted cumulative number of HIV infections in 2013 was 10,372 (95% CI: 10,308-10,437) and in 2017 was 14,724 (95% CI: 13,893-15,555) by ARIMA (1,2,3). Based on the forecast of the number of newly diagnosed HIV infections and the current cumulative number of HIV infections, the cumulative number of HIV-infected Koreans in 2017 would reach about 15,000.
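A simplified sketch of the forecasting idea is given below: fit an autoregressive model on the twice-differenced series and invert the differencing to forecast on the original scale. This is an ARIMA(2,2,0) stand-in, since the paper's ARIMA(2,2,1) also includes a moving-average term, and the series below is synthetic, not the Korean surveillance data:

```python
import numpy as np

def fit_ar(y, p):
    """Least-squares AR(p) fit with intercept on a 1-D series."""
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    X = np.column_stack([X, np.ones(len(y) - p)])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

def forecast_ari(series, p=2, d=2, steps=5):
    w = np.diff(series, n=d)               # stationarise by differencing twice
    coef = fit_ar(w, p)
    hist = list(w)
    for _ in range(steps):
        x = np.r_[hist[-1:-p - 1:-1], 1.0]
        hist.append(float(x @ coef))
    # invert the two rounds of differencing step by step
    new_dw = hist[len(w):]
    level = series[-1]
    slope = series[-1] - series[-2]
    preds = []
    for dw in new_dw:
        slope += dw
        level += slope
        preds.append(level)
    return np.array(preds)

years = np.arange(1985, 2013)
series = 5 + 0.15 * (years - 1985) ** 2 + 2.0 * (years - 1985)  # synthetic growth
pred = forecast_ari(series, steps=5)
print(np.round(pred, 1))
```

On this synthetic accelerating series the d=2 differencing captures the trend exactly, so the five forecasts continue the curve, which is the mechanism that lets an ARIMA(p,2,q) model track a steadily accelerating epidemic count.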
Evaluation of Computational Method of High Reynolds Number Slurry Flow for Caverns Backfilling
Energy Technology Data Exchange (ETDEWEB)
Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-05-01
The abandonment of salt caverns used for brining or product storage poses a significant environmental and economic risk. Risk mitigation can in part be addressed by the process of backfilling, which can improve the cavern's geomechanical stability and reduce the risk of fluid loss to the environment. This study evaluates a currently available computational tool, Barracuda, to simulate such processes as slurry flow at high Reynolds number with high particle loading. Using the Barracuda software, a parametric sequence of simulations evaluated slurry flow at Reynolds numbers up to 15000 and loadings up to 25%. Limitations come from the long time required to run these simulations, due in particular to the mesh size requirement at the jet nozzle. This study has found that slurry-jet width and centerline velocities are functions of Reynolds number and volume fraction. The solid phase was found to spread less than the water phase, with a spreading rate smaller than 1, dependent on the volume fraction. Particle size distribution does seem to have a large influence on the jet flow development. This study constitutes a first step toward understanding the behavior of highly loaded slurries and their ultimate application to cavern backfilling.
Evaluation of CASP8 model quality predictions
Cozzetto, Domenico
2009-01-01
The model quality assessment problem consists in the a priori estimation of the overall and per-residue accuracy of protein structure predictions. Over the past years, a number of methods have been developed to address this issue and CASP established a prediction category to evaluate their performance in 2006. In 2008 the experiment was repeated and its results are reported here. Participants were invited to infer the correctness of the protein models submitted by the registered automatic servers. Estimates could apply to both whole models and individual amino acids. Groups involved in the tertiary structure prediction categories were also asked to assign local error estimates to each predicted residue in their own models and their results are also discussed here. The correlation between the predicted and observed correctness measures was the basis of the assessment of the results. We observe that consensus-based methods still perform significantly better than those accepting single models, similarly to what was concluded in the previous edition of the experiment. © 2009 WILEY-LISS, INC.
Critical evaluation of HPV16 gene copy number quantification by SYBR green PCR.
Roberts, Ian; Ng, Grace; Foster, Nicola; Stanley, Margaret; Herdman, Michael T; Pett, Mark R; Teschendorff, Andrew; Coleman, Nicholas
2008-07-24
Human papillomavirus (HPV) load and physical status are considered useful parameters for the clinical evaluation of cervical squamous cell neoplasia. However, the errors implicit in HPV gene quantification by PCR are not well documented. We have undertaken the first rigorous evaluation of the errors that can be expected when using SYBR green qPCR for quantification of HPV type 16 gene copy numbers. We assessed a modified method in which external calibration curves were generated from a single construct containing HPV16 E2, HPV16 E6, and the host gene hydroxymethylbilane synthase in a 1:1:1 ratio. When testing dilutions of mixed HPV/host DNA in replicate runs, we observed errors in quantifying E2 and E6 amplicons of 5-40%, with the greatest error at the lowest DNA template concentration (3 ng/μl). Errors in determining viral copy numbers per diploid genome were 13-53%. Nevertheless, in cervical keratinocyte cell lines we observed reasonable agreement between viral loads determined by qPCR and by Southern blotting. The mean E2/E6 ratio in episome-only cells was 1.04, but with a range of 0.76-1.32. In three integrant-only lines the mean E2/E6 ratios were 0.20, 0.72, and 2.61 (values confirmed by gene-specific Southern blotting). When E2/E6 ratios in fourteen HPV16-positive cervical carcinomas were analysed, conclusions regarding the viral physical state could only be made in three cases, where the E2/E6 ratio was informative. There are unavoidable inaccuracies that should be allowed for when quantifying HPV gene copy number. While E6 copy numbers can be considered to provide a usable indication of viral loads, the E2/E6 ratio is of limited value.
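The quantification arithmetic behind this kind of assay can be sketched as follows: Ct values are converted to copy numbers via an external standard curve, the host gene normalises to copies per diploid genome, and the E2/E6 ratio probes physical state. The slope, intercept, and Ct values below are illustrative assumptions, not the paper's calibration:

```python
# Illustrative qPCR quantification arithmetic (values are assumptions).

def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Standard curve: Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

ct_e2, ct_e6, ct_host = 27.5, 26.9, 28.1
e2 = copies_from_ct(ct_e2)
e6 = copies_from_ct(ct_e6)
host = copies_from_ct(ct_host)

viral_load = e6 / (host / 2)   # E6 copies per diploid genome (2 host copies)
ratio = e2 / e6                # ratios well below 1 suggest E2 disruption
print(round(viral_load, 2), round(ratio, 2))
```

The abstract's error figures (5-40% per amplicon) propagate into both quantities, which is why the derived E2/E6 ratio is the less reliable of the two.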
Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics.
Nguyen, THT; Mouksassi, M-S; Holford, N; Al-Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E; Mentré, F
2017-02-01
This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used.
Model Evaluation of Continuous Data Pharmacometric Models: Metrics and Graphics
Nguyen, THT; Mouksassi, M‐S; Holford, N; Al‐Huniti, N; Freedman, I; Hooker, AC; John, J; Karlsson, MO; Mould, DR; Pérez Ruixo, JJ; Plan, EL; Savic, R; van Hasselt, JGC; Weber, B; Zhou, C; Comets, E
2017-01-01
This article represents the first in a series of tutorials on model evaluation in nonlinear mixed effect models (NLMEMs), from the International Society of Pharmacometrics (ISoP) Model Evaluation Group. Numerous tools are available for evaluation of NLMEM, with a particular emphasis on visual assessment. This first basic tutorial focuses on presenting graphical evaluation tools of NLMEM for continuous data. It illustrates graphs for correct or misspecified models, discusses their pros and cons, and recalls the definition of metrics used. PMID:27884052
Area density of streptavidin can be evaluated by the number density of biotinylated microbubbles
Yokoi, Yasuhiro; Yoshida, Kenji; Otsuki, Yuta; Watanabe, Yoshiaki
2017-02-01
Targeted microbubbles (TMBs) that specifically accumulate on target sites via biochemical bonds have been studied for use in ultrasound diagnosis and therapy (e.g., ultrasound molecular imaging). To understand the specific interactions between TMBs and their target molecules, a biosensor system with a quartz crystal microbalance (QCM) was constructed. In this system, TMBs adsorb onto their target molecule, which is fixed to the QCM surface via a self-assembled monolayer. Our previous studies showed that the system allowed the evaluation of the interaction between biotinylated MBs and the target molecule, streptavidin, by monitoring changes in the resonant frequency of the QCM [Muramoto et al., Ultrasound Med. Biol., 40(5), 1027-1033 (2014)]. This paper investigates how the amount of streptavidin relates to the amount of adsorbed biotinylated MBs. The amount of streptavidin on the QCM surface was evaluated by measuring the difference in its resonant frequency before and after the fixation of streptavidin. The amount of adsorbed MBs was then evaluated by measuring the frequency shift during the interaction between MBs and the target molecule. Our results showed a weak correlation between the amount of bound MBs and the density of streptavidin (correlation coefficient, r = 0.44), suggesting that the area density of the target molecule can be evaluated by estimating the number density of TMBs.
Cox, M.; Shirono, K.
2017-10-01
A criticism levelled at the Guide to the Expression of Uncertainty in Measurement (GUM) is that it is based on a mixture of frequentist and Bayesian thinking. In particular, the GUM’s Type A (statistical) uncertainty evaluations are frequentist, whereas the Type B evaluations, using state-of-knowledge distributions, are Bayesian. In contrast, making the GUM fully Bayesian implies, among other things, that a conventional objective Bayesian approach to Type A uncertainty evaluation for a number n of observations leads to the impractical consequence that n must be at least equal to 4, thus presenting a difficulty for many metrologists. This paper presents a Bayesian analysis of Type A uncertainty evaluation that applies for all n ≥ 2, as in the frequentist analysis in the current GUM. The analysis is based on assuming that the observations are drawn from a normal distribution (as in the conventional objective Bayesian analysis), but uses an informative prior based on lower and upper bounds for the standard deviation of the sampling distribution for the quantity under consideration. The main outcome of the analysis is a closed-form mathematical expression for the factor by which the standard deviation of the mean observation should be multiplied to calculate the required standard uncertainty. Metrological examples are used to illustrate the approach, which is straightforward to apply using a formula or look-up table.
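The paper's closed-form Bayesian factor is not reproduced in the abstract, so as a minimal sketch, the following shows only the baseline frequentist Type A evaluation that such a factor would multiply: the standard uncertainty of the mean is the experimental standard deviation of the observations divided by the square root of n (the data values are illustrative).

```python
import math

def type_a_standard_uncertainty(observations):
    """Frequentist GUM Type A evaluation: the standard uncertainty of the
    mean is the experimental standard deviation divided by sqrt(n)."""
    n = len(observations)
    if n < 2:
        raise ValueError("Type A evaluation needs at least 2 observations")
    mean = sum(observations) / n
    s2 = sum((x - mean) ** 2 for x in observations) / (n - 1)  # sample variance
    return mean, math.sqrt(s2 / n)

mean, u = type_a_standard_uncertainty([9.9, 10.1, 10.0, 10.2])
```

In the Bayesian analysis of the paper, `u` would additionally be multiplied by a factor derived from the informative prior on the standard deviation.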
A European model for the number of end-of-life vehicles
DEFF Research Database (Denmark)
Møller Andersen, Frits; Larsen, Helge V.; Skovgaard, M.
2007-01-01
This paper describes a model for the projection of the number of End-of-Life Vehicles (ELVs) and presents a baseline projection. Historical data on population, the number of cars per capita (car density), GDP per capita, and the vintage distribution of cars are combined to model ELVs. The lifetime...
Using common random numbers in health care cost-effectiveness simulation modeling.
Murphy, Daniel R; Klein, Robert W; Smolen, Lee J; Klein, Timothy M; Roberts, Stephen D
2013-08-01
To identify the problem of separating statistical noise from treatment effects in health outcomes modeling and analysis. To demonstrate the implementation of one technique, common random numbers (CRNs), and to illustrate the value of CRNs to assess costs and outcomes under uncertainty. A microsimulation model was designed to evaluate osteoporosis treatment, estimating cost and utility measures for patient cohorts at high risk of osteoporosis-related fractures. Incremental cost-effectiveness ratios (ICERs) were estimated using a full implementation of CRNs, a partial implementation of CRNs, and no CRNs. A modification to traditional probabilistic sensitivity analysis (PSA) was used to determine how variance reduction can impact a decision maker's view of treatment efficacy and costs. The full use of CRNs provided a 93.6 percent reduction in variance compared to simulations not using the technique. The use of partial CRNs provided a 5.6 percent reduction. The PSA results using full CRNs demonstrated a substantially tighter range of cost-benefit outcomes for teriparatide usage than the cost-benefits generated without the technique. CRNs provide substantial variance reduction for cost-effectiveness studies. By reducing variability not associated with the treatment being evaluated, CRNs provide a better understanding of treatment effects and risks. © Health Research and Educational Trust.
Training evaluation models: Theory and applications
Carbone, V.; MORVILLO, A
2002-01-01
This chapter has the following aims: 1. Compare the various conceptual models for evaluation, identifying their strengths and weaknesses; 2. Define an evaluation model consistent with the aims and constraints of the fit project; 3. Describe, in critical fashion, operative tools for evaluating training which are reliable, flexible and analytical.
Gifford, Sue
2014-01-01
This article sets out to evaluate the English Early Years Foundation Stage Goal for Numbers, in relation to research evidence. The Goal, which sets out to provide "a good foundation in mathematics", has greater breadth of content and higher levels of difficulty than previous versions. Research suggests that the additional expectations…
Developing a Learning Progression for Number Sense Based on the Rule Space Model in China
Chen, Fu; Yan, Yue; Xin, Tao
2017-01-01
The current study focuses on developing the learning progression of number sense for primary school students, and it applies a cognitive diagnostic model, the rule space model, to data analysis. The rule space model analysis firstly extracted nine cognitive attributes and their hierarchy model from the analysis of previous research and the…
Challenges in simulation and modeling of heat transfer in low-Prandtl number fluids
Energy Technology Data Exchange (ETDEWEB)
Groetzbach, G., E-mail: groetzbach@kit.edu [Karlsruher Inst. fuer Technologie (KIT), Inst. fuer Kern-und Energietechnik, Karlsruhe (Germany)
2011-07-01
Nuclear heat transfer applications with low-Prandtl number fluids are often in the transition range between conduction- and convection-dominated regimes. Most flows in reactors also involve anisotropic turbulent fluxes and strong buoyancy influences. The relevance and complexity of the required heat flux modelling is discussed depending on engineering issues. The acceptable models range from turbulent Prandtl number concepts, through algebraic flux models, to full second-order models in RANS as well as in LES, all with special liquid metal extensions. Recommendations are deduced for the promising HYBRID models. The listed remaining challenges show the need for further development of models and instrumentation. (author)
Differential program evaluation model in child protection.
Lalayants, Marina
2012-01-01
Increasing attention has been focused on the degree to which social programs have effectively and efficiently delivered services. Using the differential program evaluation model by Tripodi, Fellin, and Epstein (1978) and by Bielawski and Epstein (1984), this paper describes the application of the model to evaluating a multidisciplinary clinical consultation practice in child protection. The uses of the model are demonstrated through the four stages of program initiation, contact, implementation, and stabilization. This organizational case study contributes to the model by introducing essential and interrelated elements of a "practical evaluation" methodology for evaluating social programs, such as a participatory evaluation approach; learning, empowerment and sustainability; and a flexible, individualized approach to evaluation. The study results demonstrated that by applying the program development model, child-protective administrators and practitioners were able to evaluate existing practices and recognize areas for program improvement.
Keeping the noise down: common random numbers for disease simulation modeling.
Stout, Natasha K; Goldie, Sue J
2008-12-01
Disease simulation models are used to conduct decision analyses of the comparative benefits and risks associated with preventive and treatment strategies. To address increasing model complexity and computational intensity, modelers use variance reduction techniques to reduce stochastic noise and improve computational efficiency. One technique, common random numbers, further allows modelers to conduct counterfactual-like analyses with direct computation of statistics at the individual level. This technique uses synchronized random numbers across model runs to induce correlation in model output thereby making differences easier to distinguish as well as simulating identical individuals across model runs. We provide a tutorial introduction and demonstrate the application of common random numbers in an individual-level simulation model of the epidemiology of breast cancer.
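A minimal sketch of the common-random-numbers idea described above, using a toy outcome model (the normal "patient outcome" and the 0.1 treatment effect are illustrative assumptions, not the breast-cancer model itself): synchronizing seeds across the two compared strategies simulates identical individuals under both, so the estimated difference is far less noisy than with independent sampling.

```python
import random
import statistics

def simulate(strategy_effect, rng):
    # toy patient outcome: baseline noise plus the strategy's treatment effect
    return rng.gauss(0, 1) + strategy_effect

def diff_estimates(n_reps, common):
    diffs = []
    for rep in range(n_reps):
        if common:
            # same seed for both strategies: the same "patient" is simulated
            rng_a = random.Random(rep)
            rng_b = random.Random(rep)
        else:
            rng_a = random.Random(2 * rep)
            rng_b = random.Random(2 * rep + 1)
        diffs.append(simulate(0.1, rng_a) - simulate(0.0, rng_b))
    return diffs

indep = diff_estimates(2000, common=False)
crn = diff_estimates(2000, common=True)
print(statistics.pvariance(indep), statistics.pvariance(crn))
```

In this perfectly synchronized toy, the noise cancels almost exactly, so the CRN variance collapses to essentially zero, while the independent-sampling variance stays near 2; real models retain some residual variance because random-number streams desynchronize as patient histories diverge.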
Directory of Open Access Journals (Sweden)
Prasenjit Chatterjee
2012-04-01
Full Text Available Evaluation of a proper supplier for manufacturing organizations is one of the most challenging problems in a real-time manufacturing environment due to a wide variety of customer demands. Meeting the challenges of international competitiveness has become more and more complicated, as decision makers need to assess a wide range of alternative suppliers based on a set of conflicting criteria. Thus, the main objective of supplier selection is to select a highly potential supplier through which all the set goals regarding the purchasing and manufacturing activity can be achieved. For these reasons, supplier selection has received considerable attention from academicians and researchers. This paper presents a combined multi-criteria decision making methodology for supplier evaluation for given industrial applications. The proposed methodology is based on a compromise ranking method combined with Grey Interval Numbers, considering different cardinal and ordinal criteria and their relative importance. A ‘supplier selection index’ is also proposed to help evaluate and rank the alternative suppliers. Two examples are illustrated to demonstrate the potentiality and applicability of the proposed method.
Critical evaluation of HPV16 gene copy number quantification by SYBR green PCR
Directory of Open Access Journals (Sweden)
Pett Mark R
2008-07-01
Full Text Available Abstract Background Human papilloma virus (HPV) load and physical status are considered useful parameters for clinical evaluation of cervical squamous cell neoplasia. However, the errors implicit in HPV gene quantification by PCR are not well documented. We have undertaken the first rigorous evaluation of the errors that can be expected when using SYBR green qPCR for quantification of HPV type 16 gene copy numbers. We assessed a modified method, in which external calibration curves were generated from a single construct containing HPV16 E2, HPV16 E6 and the host gene hydroxymethylbilane synthase in a 1:1:1 ratio. Results When testing dilutions of mixed HPV/host DNA in replicate runs, we observed errors in quantifying E2 and E6 amplicons of 5–40%, with greatest error at the lowest DNA template concentration (3 ng/μl). Errors in determining viral copy numbers per diploid genome were 13–53%. Nevertheless, in cervical keratinocyte cell lines we observed reasonable agreement between viral loads determined by qPCR and Southern blotting. The mean E2/E6 ratio in episome-only cells was 1.04, but with a range of 0.76–1.32. In three integrant-only lines the mean E2/E6 ratios were 0.20, 0.72 and 2.61 (values confirmed by gene-specific Southern blotting). When E2/E6 ratios in fourteen HPV16-positive cervical carcinomas were analysed, conclusions regarding viral physical state could only be made in three cases, where the E2/E6 ratio was ≤ 0.06. Conclusion Run-to-run variation in SYBR green qPCR produces unavoidable inaccuracies that should be allowed for when quantifying HPV gene copy number. While E6 copy numbers can be considered to provide a useable indication of viral loads, the E2/E6 ratio is of limited value. Previous studies may have overestimated the frequency of mixed episomal/integrant HPV infections.
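The quantities discussed above can be sketched as follows. All Cq values and the calibration slope/intercept below are hypothetical stand-ins (a perfect-efficiency assay has a slope near -3.32 cycles per decade); the sketch only illustrates how copies, viral load per diploid genome (via the two-copy host gene), and the E2/E6 ratio are derived.

```python
def copies_from_cq(cq, slope, intercept):
    """Interpolate absolute copy number from an external standard curve
    of the form Cq = slope * log10(copies) + intercept."""
    return 10 ** ((cq - intercept) / slope)

# hypothetical calibration constants for all three amplicons
SLOPE, INTERCEPT = -3.32, 38.0

e6 = copies_from_cq(24.7, SLOPE, INTERCEPT)    # HPV16 E6 copies
e2 = copies_from_cq(25.1, SLOPE, INTERCEPT)    # HPV16 E2 copies
hmbs = copies_from_cq(28.9, SLOPE, INTERCEPT)  # host gene, 2 copies/cell

viral_load = e6 / (hmbs / 2.0)  # viral copies per diploid genome
e2_e6 = e2 / e6                 # ~1 suggests episomal; << 1 suggests integration
```

As the abstract cautions, run-to-run variation means such derived ratios carry substantial error, so an E2/E6 value near 1 is not by itself conclusive.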
Evaluating uncertainty in simulation models
Energy Technology Data Exchange (ETDEWEB)
McKay, M.D.; Beckman, R.J.; Morrison, J.D.; Upton, S.C.
1998-12-01
The authors discussed some directions for research and development of methods for assessing simulation variability, input uncertainty, and structural model uncertainty. Variance-based measures of importance for input and simulation variables arise naturally when using the quadratic loss function of the difference between the full model prediction y and the restricted prediction ỹ. They concluded that generic methods for assessing structural model uncertainty do not now exist. However, methods to analyze structural uncertainty for particular classes of models, like discrete event simulation models, may be attainable.
Systems Evaluation Methods, Models, and Applications
Liu, Siefeng; Xie, Naiming; Yuan, Chaoqing
2011-01-01
A book in the Systems Evaluation, Prediction, and Decision-Making Series, Systems Evaluation: Methods, Models, and Applications covers the evolutionary course of systems evaluation methods, clearly and concisely. Outlining a wide range of methods and models, it begins by examining the method of qualitative assessment. Next, it describes the process and methods for building an index system of evaluation and considers the compared evaluation and the logical framework approach, analytic hierarchy process (AHP), and the data envelopment analysis (DEA) relative efficiency evaluation method. Unique
The Information Service Evaluation (ISE) Model
Directory of Open Access Journals (Sweden)
Laura Schumann
2014-06-01
Full Text Available Information services are an inherent part of our everyday life. Especially since ubiquitous cities are being developed all over the world, their number is increasing even faster. They aim at facilitating the production of information and the access to needed information, and are supposed to make life easier. Many different evaluation models (among others, TAM, TAM 2, TAM 3, UTAUT and MATH) have been developed to measure the quality and acceptance of these services. Still, they only consider subareas of the whole concept that represents an information service. As a holistic and comprehensive approach, the ISE Model studies five dimensions that influence adoption, use, impact and diffusion of the information service: information service quality, information user, information acceptance, information environment and time. All these aspects have a great impact on the final grading and on the success (or failure) of the service. Our model combines approaches which study subjective impressions of users (e.g., the perceived service quality) with user-independent, more objective approaches (e.g., the degree of gamification of a system). Furthermore, we adopt results of network economics, especially the "Success breeds success" principle.
Takaishi, Tetsuya; Chen, Ting Ting
2016-08-01
We examine the relationship between trading volumes, number of transactions, and volatility using daily stock data of the Tokyo Stock Exchange. Following the mixture of distributions hypothesis, we use trading volumes and the number of transactions as proxies for the rate of information arrivals affecting stock volatility. The impact of trading volumes or number of transactions on volatility is measured using the generalized autoregressive conditional heteroscedasticity (GARCH) model. We find that the GARCH effect, that is, persistence of volatility, is not always removed by adding trading volumes or number of transactions, indicating that trading volumes and number of transactions do not adequately represent the rate of information arrivals.
Estimate Total Number of the Earth Atmospheric Particle with Standard Atmosphere Model
Institute of Scientific and Technical Information of China (English)
GAO Chong-Yi
2001-01-01
The total number of atmospheric particles (AP) is an important datum for planetary science and geoscience. Estimating the entire AP number is also a familiar question in general physics. With the standard atmosphere model, and accounting for the number difference of AP caused by the rough and uneven earth surface below, the total number of dry clean atmosphere particles is 1.06962 × 10^44, and the whole number of AP including water vapor is 1.0740 × 10^44. A rough estimate of the total number of AP on other planets (or satellites) in the condensed state is also discussed on this basis.
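The order of magnitude quoted above can be checked with a back-of-envelope calculation, assuming standard reference values (a total atmospheric mass of about 5.15 × 10^18 kg and a mean molar mass of dry air of 28.96 g/mol); the refinements for surface roughness and water vapor in the abstract are not modelled here.

```python
# Back-of-envelope estimate of the total number of atmospheric particles:
# N = (atmospheric mass / molar mass of dry air) * Avogadro's number
AVOGADRO = 6.022e23           # particles per mole
M_ATM = 5.15e18               # kg, commonly quoted total mass of the atmosphere
MOLAR_MASS_DRY_AIR = 0.02896  # kg/mol, mean molar mass of dry air

n_particles = M_ATM / MOLAR_MASS_DRY_AIR * AVOGADRO
print(f"{n_particles:.3e}")
```

The result is about 1.07 × 10^44, in agreement with the figure given in the abstract.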
[Evaluation of variable number of tandem repeats (VNTR) isolates of Mycobacterium bovis in Algeria].
Sahraoui, Naima; Muller, Borna; Djamel, Yala; Fadéla, Boulahbal; Rachid, Ouzrout; Jakob, Zinsstag; Djamel, Guetarni
2010-01-01
The discriminatory potency of variable number of tandem repeats (VNTR) typing, based on 7 loci (MIRU 26, MIRU 27 and 5 ETRs A, B, C, D, E), was assayed on Mycobacterium bovis strains obtained from tuberculosis samples collected in two slaughterhouses in Algeria. The MIRU-VNTR technique was evaluated on 88 strains of M. bovis and one strain of M. caprae and showed 41 different profiles. Results showed that the VNTRs were highly discriminatory, with an allelic diversity of 0.930; four loci (ETR A, B, C and MIRU 27) were highly discriminatory (h>0.25) and three loci (ETR D, ETR E and MIRU 26) moderately discriminatory (0.11
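The allelic diversity h used above is conventionally computed per locus from the allele frequencies. A minimal sketch, using the bias-corrected (Hunter-Gaston-style) form and hypothetical allele counts (the Algerian dataset itself is not reproduced in the abstract):

```python
def allelic_diversity(allele_counts):
    """Allelic diversity h = (n/(n-1)) * (1 - sum(p_i^2)) for one locus,
    where p_i are allele frequencies among n isolates. Values near 1 mean
    the locus discriminates well; some authors omit the n/(n-1) correction."""
    n = sum(allele_counts)
    if n < 2:
        raise ValueError("need at least two isolates")
    h = 1.0 - sum((c / n) ** 2 for c in allele_counts)
    return h * n / (n - 1)

# hypothetical locus typed in 10 isolates: four alleles seen 4, 3, 2, 1 times
print(round(allelic_diversity([4, 3, 2, 1]), 3))  # → 0.778
```

Against the h>0.25 threshold quoted above, such a locus would count as highly discriminatory.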
Using multifractals to evaluate oceanographic model skill
Skákala, Jozef; Cazenave, Pierre W.; Smyth, Timothy J.; Torres, Ricardo
2016-08-01
We are in an era of unprecedented data volumes generated from observations and model simulations. This is particularly true of satellite Earth Observations (EO) and global scale oceanographic models. This presents us with an opportunity to evaluate large-scale oceanographic model outputs using EO data. Previous work on model skill evaluation has led to a plethora of metrics. The paper defines two new model skill evaluation metrics. The metrics are based on the theory of universal multifractals and their purpose is to measure the structural similarity between the model predictions and the EO data. The two metrics have the following advantages over the standard techniques: (a) they are scale-free and (b) they carry an important part of the information about how the model represents different oceanographic drivers. These two metrics are then used to evaluate the performance of the FVCOM model in the shelf seas around the south-west coast of the UK.
Evaluation of lymph node numbers for adequate staging of Stage II and III colon cancer
Directory of Open Access Journals (Sweden)
Bumpers Harvey L
2011-05-01
Full Text Available Abstract Background Although evaluation of at least 12 lymph nodes (LNs) is recommended as the minimum number of nodes required for accurate staging of colon cancer patients, there is disagreement on what constitutes an adequate identification of such LNs. Methods To evaluate the minimum number of LNs for adequate staging of Stage II and III colon cancer, 490 patients were categorized into groups based on 1-6, 7-11, 12-19, and ≥ 20 LNs collected. Results For patients with Stage II or III disease, examination of 12 LNs was not significantly associated with recurrence or mortality. For Stage II (HR = 0.33; 95% CI, 0.12-0.91), but not for Stage III patients (HR = 1.59; 95% CI, 0.54-4.64), examination of ≥20 LNs was associated with a reduced risk of recurrence within 2 years. However, examination of ≥20 LNs had a 55% (Stage II, HR = 0.45; 95% CI, 0.23-0.87) and a 31% (Stage III, HR = 0.69; 95% CI, 0.38-1.26) decreased risk of mortality, respectively. For each six additional LNs examined from Stage III patients, there was a 19% increased probability of finding a positive LN (parameter estimate = 0.18510, p Conclusions Thus, the 12 LN cut-off point cannot be supported as requisite in determining adequate staging of colon cancer based on current data. However, a minimum of 6 LNs should be examined for adequate staging of Stage II and III colon cancer patients.
Optimization model using Markowitz model approach for reducing the number of dengue cases in Bandung
Yong, Benny; Chin, Liem
2017-05-01
Dengue fever is one of the most serious diseases, and it can cause death. Currently, Indonesia has the highest number of dengue cases in Southeast Asia. Bandung is one of the cities in Indonesia that is vulnerable to dengue, and its sub-districts have different levels of relative risk of the disease. Dengue is transmitted to people by the bite of an Aedes aegypti mosquito infected with a dengue virus, so prevention centres on controlling the vector mosquito by various methods, one of which is fogging. The fogging efforts made by the Health Department of Bandung are constrained by limited funds, which forces the Health Department to be selective and treat only certain locations. As a result, many sub-districts are not handled properly because of the unequal distribution of prevention activities. Thus, a proper allocation of funds to each sub-district in Bandung is needed to prevent dengue transmission optimally. In this research, an optimization model using the Markowitz model approach is applied to determine the allocation of funds that should be given to each sub-district in Bandung. Some constraints are added to this model, and the numerical solution is obtained with the generalized reduced gradient method using Solver software. The expected result of this research is that the proportion of funds given to each sub-district corresponds to its level of dengue risk, so that the number of dengue cases in the city can be reduced significantly.
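A minimal sketch of the Markowitz-style budget split described above, assuming uncorrelated district "risks" (diagonal covariance) so that the mean-variance problem has a closed form via one Lagrange multiplier; all numbers are hypothetical, and the paper's actual model adds further constraints and is solved numerically with the generalized reduced gradient method.

```python
def markowitz_allocation(mu, sigma2, gamma):
    """Split a unit budget across sub-districts: maximize
    mu.w - (gamma/2) * sum(sigma2_i * w_i^2) subject to sum(w) = 1.
    Stationarity gives w_i = (mu_i - eta) / (gamma * sigma2_i), with the
    multiplier eta fixed by the budget constraint."""
    inv = [1.0 / (gamma * s2) for s2 in sigma2]
    eta = (sum(m * v for m, v in zip(mu, inv)) - 1.0) / sum(inv)
    return [(m - eta) * v for m, v in zip(mu, inv)]

# hypothetical expected benefit per unit of fogging budget and its variance
mu = [0.9, 0.5, 0.3]
sigma2 = [0.20, 0.10, 0.05]
w = markowitz_allocation(mu, sigma2, gamma=5.0)
```

Here the highest-benefit sub-district receives the largest share; note that for small risk aversion `gamma` the closed form can return negative weights, which is one reason a constrained numerical solver (as in the paper) is needed in practice.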
Evaluation of Marine Corps Manpower Computer Simulation Model
2016-12-01
Master’s thesis by Eric S. Anderson, December 2016. Thesis Advisor: Arnold Buss; Second Reader: Neil Rowe. …overall end strength are maintained. To assist their mission, an agent-based computer simulation model was developed in the Java computer language
An Instructional Model for Teaching Proof Writing in the Number Theory Classroom
Schabel, Carmen
2005-01-01
I discuss an instructional model that I have used in my number theory classes. Facets of the model include using small group work and whole class discussion, having students generate examples and counterexamples, and giving students the opportunity to write proofs and make conjectures in class. The model is designed to actively engage students in…
Refined open intersection numbers and the Kontsevich-Penner matrix model
Alexandrov, Alexander; Buryak, Alexandr; Tessler, Ran J.
2017-03-01
A study of the intersection theory on the moduli space of Riemann surfaces with boundary was recently initiated in a work of R. Pandharipande, J.P. Solomon and the third author, where they introduced open intersection numbers in genus 0. Their construction was later generalized to all genera by J.P. Solomon and the third author. In this paper we consider a refinement of the open intersection numbers by distinguishing contributions from surfaces with different numbers of boundary components, and we calculate all these numbers. We then construct a matrix model for the generating series of the refined open intersection numbers and conjecture that it is equivalent to the Kontsevich-Penner matrix model. Evidence for the conjecture is presented. Another refinement of the open intersection numbers, which describes the distribution of the boundary marked points on the boundary components, is also discussed.
Evaluation of the Design Metric to Reduce the Number of Defects in Software Development
Directory of Open Access Journals (Sweden)
M. Rizwan Jameel Qureshi
2012-04-01
Full Text Available Software design is one of the most important activities in the system development life cycle (SDLC), as it ensures the quality of software. Different key areas of design are vital to take into consideration while designing software. Software design describes how the software system is decomposed and managed in smaller components. The object-oriented (OO) paradigm has provided the software industry with more reliable and manageable software and designs. The quality of a software design can be measured through different metrics such as the Chidamber and Kemerer (CK) design metrics, MOOD metrics, and Lorenz and Kidd metrics. CK metrics is one of the oldest and most reliable metric suites available to the software industry for evaluating OO design. This paper presents an evaluation of the CK metrics and proposes improved CK design metric values to reduce defects during the software design phase. The paper also examines whether any CK design metric has a significant effect on the total number of defects per module. This is achieved by conducting a survey in two software development companies.
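To make the CK suite concrete, here is a minimal sketch of two of its six metrics, Depth of Inheritance Tree (DIT) and Number of Children (NOC), computed over a toy child-to-parent map; the class names are hypothetical, and the paper's proposed threshold values are not reproduced here.

```python
def dit(cls, parent):
    """Depth of Inheritance Tree: number of ancestor hops to the root
    (root classes have DIT 0 in this convention)."""
    depth = 0
    while cls in parent:
        cls = parent[cls]
        depth += 1
    return depth

def noc(cls, parent):
    """Number of Children: count of direct subclasses only."""
    return sum(1 for p in parent.values() if p == cls)

# hypothetical class hierarchy, stored as child -> parent
parent = {"Sedan": "Car", "Car": "Vehicle", "Truck": "Vehicle"}
print(dit("Sedan", parent), noc("Vehicle", parent))  # → 2 2
```

Deep trees (high DIT) and wide fan-out (high NOC) are exactly the design properties that CK-based studies correlate with defect counts.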
Application of Multiple Evaluation Models in Brazil
Directory of Open Access Journals (Sweden)
Rafael Victal Saliba
2008-07-01
Full Text Available Based on two different samples, this article tests the performance of a number of Value Drivers commonly used for evaluating companies by finance practitioners, through simple cross-section regression models which estimate the parameters associated with each Value Driver, denominated Market Multiples. We diagnose the behavior of several multiples over the period 1994-2004, with an outlook also on the particularities of the economic activities performed by the sample companies (and their impacts on performance) through a subsequent analysis with segregation of companies in the sample by sectors. Extrapolating simple multiples evaluation standards from analysts of the main financial institutions in Brazil, we find that adjusting the ratio formulation to allow for an intercept does not provide satisfactory results in terms of pricing error reduction. The results found, in spite of evidencing certain relative and absolute superiority among the multiples, may not be generically representative, given sample limitations.
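The comparison above, between a pure ratio multiple and a regression with an intercept, can be sketched as follows; the earnings and price figures are synthetic illustrations, not data from the Brazilian samples.

```python
def fit(x, y, intercept):
    """OLS of y on a single value driver x. With intercept=False this is
    the pure ratio model y = beta * x, i.e. the market multiple itself."""
    if intercept:
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((a - mx) ** 2 for a in x)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        beta = sxy / sxx
        alpha = my - beta * mx
    else:
        beta = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
        alpha = 0.0
    return alpha, beta

# hypothetical cross-section: earnings per share as driver, share price as target
eps = [1.0, 2.0, 3.0, 4.0]
price = [12.0, 19.0, 33.0, 41.0]
a0, b0 = fit(eps, price, intercept=False)  # implied P/E multiple
a1, b1 = fit(eps, price, intercept=True)
```

Comparing pricing errors of the two fits on held-out companies is the kind of diagnostic the article performs at scale.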
A data-driven model for estimating industry average numbers of hospital security staff.
Vellani, Karim H; Emery, Robert J; Reingle Gonzalez, Jennifer M
2015-01-01
In this article the authors report the results of an expanded survey, financed by the International Healthcare Security and Safety Foundation (IHSSF), applied to the development of a model for determining the number of security officers required by a hospital.
Model-based control of vortex shedding at low Reynolds numbers
Illingworth, Simon J.
2016-10-01
Model-based feedback control of vortex shedding at low Reynolds numbers is considered. The feedback signal is provided by velocity measurements in the wake, and actuation is achieved using blowing and suction on the cylinder's surface. Using two-dimensional direct numerical simulations and reduced-order modelling techniques, linear models of the wake are formed at Reynolds numbers between 45 and 110. These models are used to design feedback controllers using H∞ loop-shaping. Complete suppression of shedding is demonstrated up to Re = 110, both for a single-sensor arrangement and for a three-sensor arrangement. The robustness of the feedback controllers is also investigated by applying them over a range of off-design Reynolds numbers, and good robustness properties are seen. It is also observed that it becomes increasingly difficult to achieve acceptable control performance, measured in a suitable way, as Reynolds number increases.
Energy Technology Data Exchange (ETDEWEB)
Smorodinskiy, B.I.
1984-01-01
A theoretical justification is given for a method for solving one of the common problems in integer linear programming with Boolean (0-1) variables. It is shown that a number of optimization problems in the oil industry can be reduced to this model.
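The problem class referred to above can be stated as maximizing a linear objective over 0-1 variables under linear constraints. As a minimal sketch (not the paper's method, which is not described in the abstract), here is an exhaustive solver for a tiny instance; real oil-industry instances require branch-and-bound or cutting-plane techniques.

```python
from itertools import product

def solve_boolean_lp(values, weights, capacity):
    """Exhaustively solve a tiny 0-1 linear program:
    maximize values . x subject to weights . x <= capacity, x in {0,1}^n.
    Exponential in n, so usable only for small illustrative instances."""
    best, best_x = 0, None
    for x in product((0, 1), repeat=len(values)):
        weight = sum(w * xi for w, xi in zip(weights, x))
        if weight <= capacity:
            value = sum(v * xi for v, xi in zip(values, x))
            if best_x is None or value > best:
                best, best_x = value, x
    return best, best_x

print(solve_boolean_lp([10, 13, 8], [4, 6, 3], 9))  # → (21, (0, 1, 1))
```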
Bénet, L; Seligman, T H; Suárez-Moreno, A
1999-01-01
Quantum-classical correspondence for the shape of eigenfunctions, local spectral density of states and occupation number distribution is studied in a chaotic model of two coupled quartic oscillators. In particular, it is shown that both classical quantities and quantum spectra determine global properties of occupation numbers and inverse participation ratio.
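The inverse participation ratio mentioned above has a simple definition that is worth making explicit: for a normalized state it interpolates between 1/N for a fully delocalized state and 1 for a state living on a single basis component. A minimal sketch (the example vectors are illustrative, not eigenstates of the coupled-quartic-oscillator model):

```python
def ipr(psi):
    """Inverse participation ratio IPR = sum |psi_i|^4 of a state vector,
    normalized here so the answer is basis-vector-count independent of the
    input's overall scale."""
    norm = sum(abs(a) ** 2 for a in psi)
    return sum(abs(a) ** 4 for a in psi) / norm ** 2

n = 8
uniform = [1.0] * n          # fully delocalized over 8 components: IPR = 1/8
localized = [0.0] * n
localized[3] = 1.0           # localized on one component: IPR = 1
```

In chaotic regimes, eigenstates spread over many basis states, pushing the IPR toward its delocalized limit.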
Prediction and control of number of cells in microdroplets by stochastic modeling
Ceyhan, Elvan; Xu, Feng; Gürkan, Umut Atakan; Emre, Almet Emrehan; Turalı, Emine Sümeyra; El Assal, Rami; Açıkgenc, Ali; Wu, Chung-an Max; Demirci, Utkan
2012-01-01
Manipulation and encapsulation of cells in microdroplets has found many applications in various fields such as clinical diagnostics, pharmaceutical research, and regenerative medicine. The control over the number of cells in individual droplets is important especially for microfluidic and bioprinting applications. There is a growing need for modeling approaches that enable control over a number of cells within individual droplets. In this study, we developed statistical models based on negati...
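The classic baseline for cell encapsulation, against which statistical models like the one above are built, is random (Poisson) loading: the probability of a droplet holding exactly k cells depends only on the mean occupancy set by cell density and droplet volume. A minimal sketch (the mean occupancy of 0.5 is an illustrative assumption, and the study's own model extends beyond a plain Poisson):

```python
import math

def poisson_pmf(k, lam):
    """Probability that a droplet encapsulates exactly k cells under
    ideal random loading with mean occupancy lam cells per droplet."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 0.5                       # mean cells per droplet
p_empty = poisson_pmf(0, lam)   # ~61% of droplets empty
p_single = poisson_pmf(1, lam)  # ~30% contain exactly one cell
```

This trade-off (diluting to favour single-cell droplets produces many empties) is why controlling the per-droplet cell number matters for microfluidic and bioprinting applications.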
L. V. Nedorezov
2014-01-01
A stochastic model of migrations of individuals within the limits of a finite domain on a plane is considered. It is assumed that the population size scale is homogeneous and that there does not exist an interval of optimal values of population size (the Allee effect is not realized for the population). For every fixed value of population size, the number of interactions between individuals is calculated (as an average in space and time). Correspondence between several classic models and numbers of interactions between i...
Klewicki, J. C.; Chini, G. P.; Gibson, J. F.
2017-01-01
Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585
What properties of numbers are needed to model accelerated observers in relativity?
Székely, Gergely
2012-01-01
We investigate the possible structures of numbers (as physical quantities) over which accelerated observers can be modeled in special relativity. We present a general axiomatic theory of accelerated observers which has a model over every real closed field. We also show that, if we would like to model certain accelerated observers, then not every real closed field is suitable, e.g., uniformly accelerated observers cannot be modeled over the field of real algebraic numbers. Consequently, the class of fields over which uniform acceleration can be investigated is not axiomatizable in the language of ordered fields.
Lepton Number Violation and Neutrino Masses in 3-3-1 Models
Directory of Open Access Journals (Sweden)
Richard H. Benavides
2015-01-01
Full Text Available Lepton number violation and its relation to neutrino masses are investigated in several versions of the SU(3)c⊗SU(3)L⊗U(1)x model. Spontaneous and explicit violation and conservation of the lepton number are considered. In one of the models (the so-called economical one), the lepton number is spontaneously violated and it is found that the would-be Majoron is not present because it is gauged away, providing in this way the longitudinal polarization component to a now massive gauge field.
Lepton number violation and neutrino masses in 3-3-1 models
Benavides, Richard H; Fanchiotti, Huner; Canal, Carlos García; Ponce, William A
2015-01-01
Lepton number violation and its relation to neutrino masses are investigated in several versions of the $SU(3)_c\\otimes SU(3)_L\\otimes U(1)_x$ model. Spontaneous and explicit violation and conservation of the lepton number are considered. In one of the models (the so-called economical one), the lepton number is spontaneously violated and it is found that the would-be Majoron is not present because it is gauged away, providing in this way the longitudinal polarization component to a now massive gauge field.
Lal, Mohan; Mishra, S. K.; Pandey, Ashish; Pandey, R. P.; Meena, P. K.; Chaudhary, Anubhav; Jha, Ranjit Kumar; Shreevastava, Ajit Kumar; Kumar, Yogendra
2016-08-01
The Soil Conservation Service curve number (SCS-CN) method, also known as the Natural Resources Conservation Service curve number (NRCS-CN) method, is popular for computing the volume of direct surface runoff for a given rainfall event. The performance of the SCS-CN method, based on large rainfall (P) and runoff (Q) datasets of United States watersheds, is evaluated using a large dataset of natural storm events from 27 agricultural plots in India. On the whole, the CN estimates from the National Engineering Handbook (chapter 4) tables do not match those derived from the observed P and Q datasets. As a result, runoff prediction using the tabulated CNs was poor for the data of 22 (out of 24) plots. However, the match was slightly better for higher CN values, consistent with the general notion that the existing SCS-CN method performs better for high rainfall-runoff (high CN) events. Infiltration capacity (fc) was the main explanatory variable for runoff (or CN) production in the study plots, as it exhibited the expected inverse relationship between CN and fc. The plot-data optimization yielded initial abstraction coefficient (λ) values from 0 to 0.659 for the ordered dataset and 0 to 0.208 for the natural dataset (with 0 as the most frequent value). Mean and median λ values were, respectively, 0.030 and 0 for the natural rainfall-runoff dataset and 0.108 and 0 for the ordered rainfall-runoff dataset. Runoff estimation was very sensitive to λ and it improved consistently as λ changed from 0.2 to 0.03.
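The SCS-CN relation evaluated in the study above is compact enough to sketch directly. The following is a minimal illustration, not the authors' code; the function name is ours, while the metric retention formula S = 25400/CN − 254 (mm) and the runoff equation Q = (P − Ia)²/(P − Ia + S), with initial abstraction Ia = λS, are the standard handbook forms.

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Direct surface runoff Q (mm) from storm rainfall P (mm).

    S is the potential maximum retention derived from the curve
    number CN, and Ia = lam * S is the initial abstraction.
    """
    s = 25400.0 / cn - 254.0      # retention S in mm (metric form)
    ia = lam * s                  # initial abstraction
    if p_mm <= ia:                # all rainfall abstracted: no runoff
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)
```

Lowering λ from the conventional 0.2 toward the optimized 0.03 reported above increases the predicted runoff for the same storm, which illustrates why the study found runoff estimation to be very sensitive to λ.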
Low Mach and Peclet number limit for a model of stellar tachocline and upper radiative zones
Directory of Open Access Journals (Sweden)
Donatella Donatelli
2016-09-01
Full Text Available We study a hydrodynamical model describing the motion of internal stellar layers based on the compressible Navier-Stokes-Fourier-Poisson system. We suppose that the medium is electrically charged, we include energy exchanges through radiative transfer, and we assume that the system is rotating. We analyze the singular limit of this system when the Mach number, the Alfvén number, the Peclet number and the Froude number approach zero in a certain way and prove convergence to a 3D incompressible MHD system with a stationary linear transport equation for the transport of radiation intensity. Finally, we show that the energy equation reduces to a steady equation for the temperature corrector.
Prediction Model of Interval Grey Numbers with a Real Parameter and Its Application
Directory of Open Access Journals (Sweden)
Bo Zeng
2014-01-01
Full Text Available Grey prediction models have become common methods which are widely employed to solve problems with “small samples and poor information.” However, the modeling objects of existing grey prediction models are limited to homogeneous data sequences, which contain only a single data type. This paper studies the methodology of building prediction models for sequences of interval grey numbers (a grey heterogeneous data sequence) that contain a real parameter. Firstly, the position of the real parameter in an interval grey number sequence is discussed, and the real number is expanded into an interval grey number by adopting the method of grey generation. On this basis, a prediction model of interval grey numbers with a real parameter is deduced and built. Finally, this novel model is successfully applied to forecast the concentration of the organic pollutant DDT in the atmosphere. The analysis and research results in this paper extend the object of grey prediction from homogeneous data sequences to grey heterogeneous data sequences. These research findings are of positive significance in terms of enriching and improving the theoretical system of grey prediction models.
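The abstract does not give the model equations, but the standard GM(1,1) core on which interval grey number models are built can be sketched in a few lines. This is a hypothetical minimal version assuming the usual cumulative-sum formulation; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Classic GM(1,1) grey prediction for a short positive series.

    x1 is the cumulative sum of x0; the whitened equation
    dx1/dt + a*x1 = b is fitted by least squares using trapezoidal
    background values z1, then the solution is differenced back.
    Assumes the fitted development coefficient a is nonzero.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x0[0]], np.diff(x1_hat)])
```

On a short near-exponential series the fitted sequence tracks the data closely and extrapolates it, which is the "small samples" use case the abstract refers to.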
Further Evaluation of a Brief, Intensive Teacher-Training Model
Lerman, Dorothea C.; Tetreault, Allison; Hovanetz, Alyson; Strobel, Margaret; Garro, Joanie
2008-01-01
The purpose of this study was to further evaluate the outcomes of a model program that was designed to train current teachers of children with autism. Nine certified special education teachers participating in an intensive 5-day summer training program were taught a relatively large number of specific skills in two areas (preference assessment and…
Numerical modeling of the impact of regenerator housing on the determination of Nusselt numbers
DEFF Research Database (Denmark)
Nielsen, Kaspar Kirstein; Nellis, G.F.; Klein, S.A.
2013-01-01
It is suggested that the housing of regenerators may have a significant impact when experimentally determining Nusselt numbers at low Reynolds and large Prandtl numbers. In this paper, a numerical model is developed that takes the regenerator housing into account as a domain that is thermally coupled to the regenerator fluid. The model is applied to a range of cases and it is shown that at low Reynolds numbers (well below 100) and at Prandtl numbers appropriate to liquids (7 for water) the regenerator housing may influence the experimental determination of Nusselt numbers significantly. The impact of the housing on the performance during cyclic steady-state regenerator operation is quantified by comparing the regenerator effectiveness for cases where the wall is ignored with cases where it is included. It is shown that the effectiveness may be decreased by as much as 18% for the cases considered here.
Evaluating topic models with stability
CSIR Research Space (South Africa)
De Waal, A
2008-11-01
Full Text Available on unlabelled data, so that a ground truth does not exist and (b) "soft" (probabilistic) document clusters are created by state-of-the-art topic models, which complicates comparisons even when ground truth labels are available. Perplexity has often been used...
Inertia-less convectively-driven dynamo models in the limit of low Rossby number
Calkins, Michael A; Tobias, Steven M
2016-01-01
Compositional convection is thought to be an important energy source for magnetic field generation within planetary interiors. The Prandtl number, $Pr$, characterizing compositional convection is significantly larger than unity, suggesting that the inertial force may not be important on the small scales of convection. We develop asymptotic dynamo models for the case of small Rossby number and large Prandtl number in which inertia is absent on the convective scale. The relevant diffusivity parameter for this limit is the compositional Roberts number, $q = D/\eta$, which is the ratio of compositional and magnetic diffusivities. Dynamo models are developed for both order one $q$ and the more geophysically relevant low $q$ limit. For both cases the ratio of magnetic to kinetic energy densities, $M$, is asymptotically large and reflects the fact that Alfvén waves have been filtered from the dynamics. Taken together with previous investigations of asymptotic dynamo models for $Pr=O(1)$, our results show that the ...
Evaluation of animal models of neurobehavioral disorders
Directory of Open Access Journals (Sweden)
Nordquist Rebecca E
2009-02-01
Full Text Available Abstract Animal models play a central role in all areas of biomedical research. The process of animal model building, development and evaluation has rarely been addressed systematically, despite the long history of using animal models in the investigation of neuropsychiatric disorders and behavioral dysfunctions. An iterative, multi-stage trajectory for developing animal models and assessing their quality is proposed. The process starts with defining the purpose(s) of the model, preferentially based on hypotheses about brain-behavior relationships. Then, the model is developed and tested. The evaluation of the model takes scientific and ethical criteria into consideration. Model development requires a multidisciplinary approach. Preclinical and clinical experts should establish a set of scientific criteria, which a model must meet. The scientific evaluation consists of assessing the replicability/reliability, predictive, construct and external validity/generalizability, and relevance of the model. We emphasize the role of (systematic and extended) replications in the course of the validation process. One may apply a multiple-tiered 'replication battery' to estimate the reliability/replicability, validity, and generalizability of results. Compromised welfare is inherent in many deficiency models in animals. Unfortunately, 'animal welfare' is a vaguely defined concept, making it difficult to establish exact evaluation criteria. Weighing the animal's welfare and considerations as to whether action is indicated to reduce the discomfort must accompany the scientific evaluation at any stage of the model building and evaluation process. Animal model building should be discontinued if the model does not meet the preset scientific criteria, or when animal welfare is severely compromised. The application of the evaluation procedure is exemplified using the rat with neonatal hippocampal lesion as a proposed model of schizophrenia. In a manner congruent to
The 750 GeV LHC diphoton excess from a baryon number conserving string model
Kokorelis, Christos
2016-01-01
We propose an explanation of the LHC data excess resonance of 750 GeV in the diphoton distribution using D-brane models, with gauged baryon number, which accommodate the Standard Model together with vector-like exotics. We identify the 750 GeV scalar as either the sneutrino (${\\tilde \
A Percentile Regression Model for the Number of Errors in Group Conversation Tests.
Liski, Erkki P.; Puntanen, Simo
A statistical model is presented for analyzing the results of group conversation tests in English, developed in a Finnish university study from 1977 to 1981. The model is illustrated with the findings from the study. In this study, estimates of percentile curves for the number of errors are of greater interest than the mean regression line. It was…
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provides modelers with statistical goodness-of-fit m...
Institute of Scientific and Technical Information of China (English)
SUN Zai; HUANG Zhen; WANG JiaSong
2007-01-01
A size-specific aerosol dynamic model is set up to predict the evolution of particle number concentration within a chamber. Particle aggregation is based on the theory of Brownian coagulation, and the model not only comprises particle loss due to coagulation, but also considers the formation of large particles by collision. To validate the model, three different groups of chamber experiments with an SMPS (Scanning Mobility Particle Sizer) are conducted. The results indicate that the advantage of the model over past simple size-bin models is its provision of detailed information on size spectrum evolution, and the results can be used to analyze the variations of number concentration and CMD (Count Median Diameter). Furthermore, some aerosol dynamic mechanisms that cannot be measured by instruments can be analyzed by model simulation, which is significant for better understanding the removal and control mechanisms of ultrafine particles.
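The coagulation bookkeeping such sectional models perform can be illustrated with a deliberately simplified sketch. This is not the chamber model above: it assumes a constant coagulation kernel and monomer-resolved size classes, and simply integrates the discrete Smoluchowski equations with explicit Euler steps.

```python
import numpy as np

def coagulate(n, kernel, dt, steps):
    """Explicit-Euler integration of the discrete Smoluchowski
    coagulation equations. Index i holds the number concentration of
    particles made of i+1 monomers. Gains falling beyond the largest
    tracked class are dropped, so the grid must be wide enough for
    the simulated time span."""
    n = np.array(n, dtype=float)
    m = len(n)
    for _ in range(steps):
        dn = np.zeros(m)
        for i in range(m):
            for j in range(m):
                rate = kernel * n[i] * n[j]
                dn[i] -= rate                 # loss of size i+1 particles
                if i + j + 1 < m:             # sizes (i+1)+(j+1) -> index i+j+1
                    dn[i + j + 1] += 0.5 * rate  # gain, each pair counted twice
        n += dt * dn
    return n
```

For a constant kernel K the total number obeys dN/dt = −KN²/2, giving N(t) = N0/(1 + KN0t/2), which makes the sketch easy to sanity-check against the analytic decay while total mass stays conserved on the grid.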
Spherical collapse model and cluster number counts in power-law f(T) gravity
Malekjani, M.; Basilakos, S.; Heidari, N.
2017-04-01
We study the spherical collapse model in the framework of the spatially flat power-law f(T) ∝ (−T)^b gravity model. We find that the linear and non-linear growth of spherical overdensities in this particular f(T) model are affected by the power-law parameter b. Finally, we compute the predicted number counts of virialized haloes in order to distinguish the current f(T) model from the expectations of the concordance Λ cosmology. Specifically, the present analysis suggests that the f(T) gravity model with positive (negative) b predicts more (less) virialized objects with respect to those of Λ cold dark matter.
[Evaluation model for municipal health planning management].
Berretta, Isabel Quint; Lacerda, Josimari Telino de; Calvo, Maria Cristina Marino
2011-11-01
This article presents an evaluation model for municipal health planning management. The basis was a methodological study using the health planning theoretical framework to construct the evaluation matrix, in addition to an understanding of the organization and functioning designed by the Planning System of the Unified National Health System (PlanejaSUS) and definition of responsibilities for the municipal level under the Health Management Pact. The indicators and measures were validated using the consensus technique with specialists in planning and evaluation. The applicability was tested in 271 municipalities (counties) in the State of Santa Catarina, Brazil, based on population size. The proposed model features two evaluative dimensions which reflect the municipal health administrator's commitment to planning: the guarantee of resources and the internal and external relations needed for developing the activities. The data were analyzed using indicators, sub-dimensions, and dimensions. The study concludes that the model is feasible and appropriate for evaluating municipal performance in health planning management.
Buela-Casal, Gualberto; Zych, Izabela
2010-05-01
The study analyzes the relationship between the number of citations as calculated by the IN-RECS database and the quality evaluated by experts. The articles published in journals of the Spanish Psychological Association between 1996 and 2008 and selected by the Editorial Board of Psychology in Spain were the subject of the study. Psychology in Spain is a journal that includes the best papers published throughout the previous year, chosen by the Editorial Board made up of fifty specialists of acknowledged prestige within Spanish psychology and translated into English. The number of the citations of the 140 original articles republished in Psychology in Spain was compared to the number of the citations of the 140 randomly selected articles. Additionally, the study searched for a relationship between the number of the articles selected from each journal and their mean number of citations. The number of citations received by the best articles as evaluated by experts is significantly higher than the number of citations of the randomly selected articles. Also, the number of citations is higher in the articles from the most frequently selected journals. A statistically significant relation between the quality evaluated by experts and the number of the citations was found.
An interval number-based multiple attribute bid-decision making model for substation equipments
Directory of Open Access Journals (Sweden)
Zhu Lili
2016-01-01
Full Text Available By analyzing the characteristics of public bidding for substation equipment and combining them with research methods for multiple attribute decision-making problems, a multiple attribute bid-decision making model is presented. Firstly, the weights of the interval numbers are specified using interval number theory and entropy theory. Secondly, the deviation degree of the decision-making scheme is proposed. Then the schemes are sorted. A typical case is analyzed based on the above-mentioned model.
Statistical Modeling of the Trends Concerning the Number of Hospitals and Medical Centres in Romania
Directory of Open Access Journals (Sweden)
Gabriela OPAIT
2017-04-01
Full Text Available This study presents the technique for deriving the mathematical models that describe the distributions of the number of Hospitals and, respectively, Medical Centres in our country over the time horizon 2005-2014. At the same time, we present the algorithm applied to construct forecasts of the evolution of the number of Hospitals and Medical Centres in Romania.
The Influence of the Number of Different Stocks on the Levy-Levy-Solomon Model
Kohl, R.
The stock market model of Levy, Levy, and Solomon is simulated for more than one stock to analyze its behavior for a large number of investors. Small markets can lead to realistic-looking prices for one or more stocks. With one simulated stock, a large number of investors leads to semi-regular price behavior. With many stocks, three of the stocks are semi-regular and dominant, while the rest are chaotic. In addition, we changed the utility function and checked the results.
A Fokker-Planck Model of the Boltzmann Equation with Correct Prandtl Number for Polyatomic Gases
Mathiaud, J.; Mieussens, L.
2017-09-01
We propose an extension of the Fokker-Planck model of the Boltzmann equation to get a correct Prandtl number in the compressible Navier-Stokes asymptotics for polyatomic gases. This is obtained by replacing the diffusion coefficient (which is the equilibrium temperature) with a non-diagonal temperature tensor, in the same way as the Ellipsoidal-Statistical model is obtained from the Bhatnagar-Gross-Krook model of the Boltzmann equation, and by adding a diffusion term for the internal energy. Our model is proved to satisfy the conservation properties and an H-theorem. A Chapman-Enskog analysis shows how to compute the transport coefficients of our model. Some numerical tests are performed to illustrate that a correct Prandtl number can be obtained.
Two-dimensional lattice Boltzmann model for compressible flows with high Mach number
Gan, Yanbiao; Xu, Aiguo; Zhang, Guangcai; Yu, Xijun; Li, Yingjun
2008-03-01
In this paper we present an improved lattice Boltzmann model for compressible Navier-Stokes system with high Mach number. The model is composed of three components: (i) the discrete-velocity-model by M. Watari and M. Tsutahara [Phys. Rev. E 67 (2003) 036306], (ii) a modified Lax-Wendroff finite difference scheme where reasonable dissipation and dispersion are naturally included, (iii) artificial viscosity. The improved model is convenient to compromise the high accuracy and stability. The included dispersion term can effectively reduce the numerical oscillation at discontinuity. The added artificial viscosity helps the scheme to satisfy the von Neumann stability condition. Shock tubes and shock reflections are used to validate the new scheme. In our numerical tests the Mach numbers are successfully increased up to 20 or higher. The flexibility of the new model makes it suitable for tracking shock waves with high accuracy and for investigating nonlinear nonequilibrium complex systems.
Quality Evaluation Model for Map Labeling
Institute of Scientific and Technical Information of China (English)
FAN Hong; ZHANG Zuxun; DU Daosheng
2005-01-01
This paper discusses and sums up the basic criteria for guaranteeing labeling quality and abstracts four basic factors: the conflict of a label with another label, the overlap of a label with map features, the priority of a label's position, and the association of a label with its feature. By establishing a scoring system, a formalized four-factor quality evaluation model is constructed. Lastly, this paper presents the experimental results of applying the quality evaluation model to the automatic map labeling system MapLabel.
Finite Temperature Induced Fermion Number In The Nonlinear sigma Model In (2+1) Dimensions
Dunne, G V; Rao, K; Dunne, Gerald V.; Lopez-Sarrion, Justo; Rao, Kumar
2002-01-01
We compute the finite temperature induced fermion number for fermions coupled to a static nonlinear sigma model background in (2+1) dimensions, in the derivative expansion limit. While the zero temperature induced fermion number is well known to be topological (it is the winding number of the background), at finite temperature there is a temperature dependent correction that is nontopological -- this finite T correction is sensitive to the detailed shape of the background. At low temperature we resum the derivative expansion to all orders, and we consider explicit forms of the background as a CP^1 instanton or as a baby skyrmion.
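The zero-temperature statement above, that the induced fermion number equals the winding number of the background, can be illustrated numerically. The sketch below is our illustration, not the paper's computation: it integrates the topological charge density of the degree-one rational map w(z) = z into CP^1, for which the density takes the closed form |w′|²/(π(1+|w|²)²), and recovers unit winding.

```python
import numpy as np

def winding_number(half_width=50.0, n_pts=1001):
    """Topological charge of the CP^1 background w(z) = z.

    For a holomorphic map the charge density is
    |w'|^2 / (pi * (1 + |w|^2)^2); here |w'| = 1, and the density
    decays like 1/r^4, so a finite grid captures almost all charge.
    """
    x = np.linspace(-half_width, half_width, n_pts)
    h = x[1] - x[0]
    xx, yy = np.meshgrid(x, x)
    density = 1.0 / (np.pi * (1.0 + xx ** 2 + yy ** 2) ** 2)
    return float(np.sum(density) * h * h)
```

The finite-temperature correction discussed in the abstract is precisely what this topological integer does not capture: it depends on the detailed shape of the background, not only on its degree.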
Metrics for Evaluation of Student Models
Pelanek, Radek
2015-01-01
Researchers use many different metrics for evaluation of performance of student models. The aim of this paper is to provide an overview of commonly used metrics, to discuss properties, advantages, and disadvantages of different metrics, to summarize current practice in educational data mining, and to provide guidance for evaluation of student…
Mou, Yi; Berteletti, Ilaria; Hyde, Daniel C
2017-09-06
Preschool children vary tremendously in their numerical knowledge, and these individual differences strongly predict later mathematics achievement. To better understand the sources of these individual differences, we measured a variety of cognitive and linguistic abilities motivated by previous literature to be important and then analyzed which combination of these variables best explained individual differences in actual number knowledge. Through various data-driven Bayesian model comparison and selection strategies on competing multiple regression models, our analyses identified five variables of unique importance to explaining individual differences in preschool children's symbolic number knowledge: knowledge of the count list, nonverbal approximate numerical ability, working memory, executive conflict processing, and knowledge of letters and words. Furthermore, our analyses revealed that knowledge of the count list, likely a proxy for explicit practice or experience with numbers, and nonverbal approximate numerical ability were much more important to explaining individual differences in number knowledge than general cognitive and language abilities. These findings suggest that children use a diverse set of number-specific, general cognitive, and language abilities to learn about symbolic numbers, but the contribution of number-specific abilities may overshadow that of more general cognitive abilities in the learning process. Copyright © 2017 Elsevier Inc. All rights reserved.
High-order lattice Boltzmann models for wall-bounded flows at finite Knudsen numbers.
Feuchter, C; Schleifenbaum, W
2016-07-01
We analyze a large number of high-order discrete velocity models for solving the Boltzmann-Bhatnagar-Gross-Krook equation for finite Knudsen number flows. Using the Chapman-Enskog formalism, we prove for isothermal flows a relation identifying the resolved flow regimes for low Mach numbers. Although high-order lattice Boltzmann models recover flow regimes beyond the Navier-Stokes level, we observe for several models significant deviations from reference results. We found this to be caused by their inability to recover the Maxwell boundary condition exactly. By using supplementary conditions for the gas-surface interaction it is shown how to systematically generate discrete velocity models of any order with the inherent ability to fulfill the diffuse Maxwell boundary condition accurately. Both high-order quadratures and an exact representation of the boundary condition turn out to be crucial for achieving reliable results. For Poiseuille flow, we can reproduce the mass flow and slip velocity up to a Knudsen number of 1. Moreover, for small Knudsen numbers, the Knudsen layer behavior is recovered.
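The quadrature property underlying such high-order discrete velocity models can be checked in a few lines. This sketch (illustrative; the names are ours) builds discrete velocities and weights from a Gauss-Hermite rule and verifies that they reproduce the Maxwellian (standard normal) velocity moments exactly up to the quadrature order; with three nodes it recovers the familiar D1Q3 lattice with speeds 0, ±√3 and weights 2/3, 1/6, 1/6.

```python
import numpy as np

def hermite_lattice(n_nodes):
    """Discrete velocities and weights from Gauss-Hermite quadrature,
    rescaled from weight exp(-x^2) to a unit-variance Maxwellian."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    xi = x * np.sqrt(2.0)       # discrete velocities
    wi = w / np.sqrt(np.pi)     # lattice weights, summing to 1
    return xi, wi

def moment(xi, wi, p):
    """p-th velocity moment of the discrete equilibrium."""
    return float(np.sum(wi * xi ** p))
```

An n-node rule is exact for polynomial degree up to 2n−1, which is the precise sense in which "high-order quadratures" extend the set of Maxwellian moments, and hence the flow regimes, that a discrete velocity model can resolve.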
SAPHIRE models and software for ASP evaluations
Energy Technology Data Exchange (ETDEWEB)
Sattison, M.B.; Schroeder, J.A.; Russell, K.D. [Idaho National Engineering Lab., Idaho Falls, ID (United States)] [and others]
1995-04-01
The Idaho National Engineering Laboratory (INEL) over the past year has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of conditional core damage probability (CCDP) evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both NRR and AEOD. This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) user interface for streamlined evaluation of ASP events.
Simplified physical models of the flow around flexible insect wings at low Reynolds numbers
Harenberg, Steve; Reis, Johnny; Miller, Laura
2011-11-01
Some of the smallest insects fly at Reynolds numbers in the range of 5-100. We built a dynamically scaled physical model of a flexible insect wing and measured the resulting wing deformations and flow fields. The wing models were submerged in diluted corn syrup and rotated about the root of the wing for Reynolds numbers ranging from 1-100. Spatially resolved flow fields were obtained using particle image velocimetry (PIV). Deformations of the wing were tracked using DLTdv software to determine the motion and induced curvature of the wing.
Directory of Open Access Journals (Sweden)
L.V. Nedorezov
2014-09-01
Full Text Available A stochastic model of migrations of individuals within the limits of a finite domain on a plane is considered. It is assumed that the population size scale is homogeneous and that there is no interval of optimal population sizes (the Allee effect is not realized for the population). For every fixed value of the population size, the number of interactions between individuals is calculated (as an average in space and time). The correspondence between several classic models and the numbers of interactions between individuals is analyzed.
Low Reynolds number turbulence modeling of blood flow in arterial stenoses.
Ghalichi, F; Deng, X; De Champlain, A; Douville, Y; King, M; Guidoin, R
1998-01-01
Moderate and severe arterial stenoses can produce highly disturbed flow regions with transitional and/or turbulent flow characteristics. Neither laminar flow modeling nor standard two-equation models such as the kappa-epsilon turbulence model are suitable for this kind of blood flow. In order to analyze the transitional or turbulent flow distal to an arterial stenosis, the authors of this study have used the Wilcox low-Re turbulence model. Flow simulations were carried out on stenoses with 50, 75 and 86% reductions in cross-sectional area over a range of physiologically relevant Reynolds numbers. The results obtained with this low-Re turbulence model were compared with experimental measurements and with the results obtained by the standard kappa-epsilon model in terms of velocity profile, vortex length, wall shear stress, wall static pressure, and turbulence intensity. The comparisons show that results predicted by the low-Re model are in good agreement with the experimental measurements. This model accurately predicts the critical Reynolds number at which blood flow becomes transitional or turbulent distal to an arterial stenosis. Most interestingly, over the Re range of laminar flow, the vortex length calculated with the low-Re model also closely matches the vortex length predicted by laminar flow modeling. In conclusion, the study strongly suggests that the proposed model is suitable for blood flow studies in certain areas of the arterial tree where both laminar and transitional/turbulent flows coexist.
Communicating about quantity without a language model: number devices in homesign grammar.
Coppola, Marie; Spaepen, Elizabet; Goldin-Meadow, Susan
2013-01-01
All natural languages have formal devices for communicating about number, be they lexical (e.g., two, many) or grammatical (e.g., plural markings on nouns and/or verbs). Here we ask whether linguistic devices for number arise in communication systems that have not been handed down from generation to generation. We examined deaf individuals who had not been exposed to a usable model of conventional language (signed or spoken), but had nevertheless developed their own gestures, called homesigns, to communicate. Study 1 examined four adult homesigners and a hearing communication partner for each homesigner. The adult homesigners produced two main types of number gestures: gestures that enumerated sets (cardinal number marking), and gestures that signaled one vs. more than one (non-cardinal number marking). Both types of gestures resembled, in form and function, number signs in established sign languages and, as such, were fully integrated into each homesigner's gesture system and, in this sense, linguistic. The number gestures produced by the homesigners' hearing communication partners displayed some, but not all, of the homesigners' linguistic patterns. To better understand the origins of the patterns displayed by the adult homesigners, Study 2 examined a child homesigner and his hearing mother, and found that the child's number gestures displayed all of the properties found in the adult homesigners' gestures, but his mother's gestures did not. The findings suggest that number gestures and their linguistic use can appear relatively early in homesign development, and that hearing communication partners are not likely to be the source of homesigners' linguistic expressions of non-cardinal number. Linguistic devices for number thus appear to be so fundamental to language that they can arise in the absence of conventional linguistic input. Copyright © 2013 Elsevier Inc. All rights reserved.
A multilevel model to address batch effects in copy number estimation using SNP arrays.
Scharpf, Robert B; Ruczinski, Ingo; Carvalho, Benilton; Doan, Betty; Chakravarti, Aravinda; Irizarry, Rafael A
2011-01-01
Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs in the genome. Genomewide association studies (GWAS) may simultaneously screen for copy number-phenotype and single nucleotide polymorphism (SNP)-phenotype associations as part of the analytic strategy. However, genomewide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post hoc quality control procedures to exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of biallelic genotype calls from experimental data to estimate batch-specific and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in the quantile-normalized intensities, while the latter illustrates the robustness of our approach to a data set in which approximately 27% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R
Analysis and evaluation of collaborative modeling processes
Ssebuggwawo, D.
2012-01-01
Analysis and evaluation of collaborative modeling processes are confronted with many challenges. On the one hand, many systems design and re-engineering projects require collaborative modeling approaches that can enhance their productivity. But such collaborative efforts, which often consist of the
A method to evaluate response models
Bruijnes, Merijn; Wapperom, Sjoerd; op den Akker, Hendrikus J.A.; Heylen, Dirk K.J.; Bickmore, Timothy; Marcella, Stacy; Sidner, Candace
We are working towards computational models of the minds of virtual characters that act as suspects in interview (interrogation) training of police officers. We implemented a model that calculates the responses of the virtual suspect based on theory and observation. We evaluated it by means of our test,
Latent risk and trend models for the evolution of annual fatality numbers in 30 European countries.
Dupont, Emmanuelle; Commandeur, Jacques J F; Lassarre, Sylvain; Bijleveld, Frits; Martensen, Heike; Antoniou, Constantinos; Papadimitriou, Eleonora; Yannis, George; Hermans, Elke; Pérez, Katherine; Santamariña-Rubio, Elena; Usami, Davide Shingo; Giustiniani, Gabriele
2014-10-01
In this paper a unified methodology is presented for the modelling of the evolution of road safety in 30 European countries. For each country, annual data of the best available exposure indicator and of the number of fatalities were simultaneously analysed with the bivariate latent risk time series model. This model is based on the assumption that the amount of exposure and the number of fatalities are intrinsically related. It captures the dynamic evolution in the fatalities as the product of the dynamic evolution in two latent trends: the trend in the fatality risk and the trend in the exposure to that risk. Before applying the latent risk model to the different countries it was first investigated and tested whether the exposure indicator at hand and the fatalities in each country were in fact related at all. If they were, the latent risk model was applied to that country; if not, a univariate local linear trend model was applied to the fatalities series only, unless the latent risk time series model was found to yield better forecasts than the univariate local linear trend model. In either case, the temporal structure of the unobserved components of the optimal model was established, and structural breaks in the trends related to external events were identified and captured by adding intervention variables to the appropriate components of the model. As a final step, for each country the optimally modelled developments were projected into the future, thus yielding forecasts for the number of fatalities up to and including 2020. Copyright © 2014 Elsevier Ltd. All rights reserved.
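The core latent-risk decomposition (fatalities = exposure x risk) can be sketched deterministically; the numbers below are invented for illustration, and a straight-line fit to log risk stands in for the latent local linear trend:

```python
import numpy as np

# Invented annual data: exposure (billion vehicle-km) and road fatalities.
years = np.arange(2001, 2011)
exposure = np.array([100.0, 103.0, 106.0, 110.0, 113.0, 117.0,
                     120.0, 124.0, 127.0, 131.0])
fatalities = np.array([1500.0, 1440.0, 1390.0, 1330.0, 1290.0, 1240.0,
                       1200.0, 1150.0, 1120.0, 1080.0])

# Latent-risk idea: fatalities = exposure * risk, so on the log scale the
# fatality trend splits into a log-exposure trend plus a log-risk trend.
log_risk = np.log(fatalities) - np.log(exposure)

# Deterministic stand-in for the latent local linear trend in risk:
# a straight line in log space, extrapolated forward to 2020.
slope, intercept = np.polyfit(years, log_risk, 1)
exposure_2020 = 140.0  # assumed future exposure
forecast_2020 = np.exp(intercept + slope * 2020) * exposure_2020

print(f"annual risk change: {100.0 * (np.exp(slope) - 1.0):.1f}%")
print(f"forecast fatalities in 2020: {forecast_2020:.0f}")
```

The paper's bivariate state-space model additionally estimates the uncertainty of both latent trends and allows structural breaks; this sketch only shows the multiplicative decomposition behind it.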
Museum: Multidimensional web page segment evaluation model
Kuppusamy, K S
2012-01-01
The evaluation of a web page with respect to a query is a vital task in the web information retrieval domain. This paper proposes the evaluation of a web page as a bottom-up process from the segment level to the page level. A model for evaluating relevancy is proposed, incorporating six different dimensions. An algorithm for evaluating the segments of a web page using these six dimensions is proposed. The benefits of fine-graining the evaluation process to the segment level instead of the page level are explored. The proposed model can be incorporated into various tasks like web page personalization, result re-ranking, and mobile device page rendering.
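A minimal sketch of the bottom-up aggregation idea, with hypothetical dimension weights and segment sizes (the paper's six dimensions and their weighting scheme are not reproduced here):

```python
def segment_score(dimension_scores, weights):
    """Relevance of one segment: weighted mean over the six dimensions."""
    return sum(s * w for s, w in zip(dimension_scores, weights)) / sum(weights)

def page_score(segments, weights):
    """Bottom-up page relevance: size-weighted mean of segment scores."""
    total_size = sum(size for _, size in segments)
    return sum(segment_score(dims, weights) * size
               for dims, size in segments) / total_size

# Hypothetical weights for six dimensions, and two page segments, each
# given as (per-dimension scores, segment size in pixels).
weights = [0.3, 0.2, 0.15, 0.15, 0.1, 0.1]
segments = [([0.9, 0.8, 0.7, 0.6, 0.9, 0.5], 1200),
            ([0.2, 0.1, 0.3, 0.2, 0.1, 0.2], 800)]
print(f"page relevance: {page_score(segments, weights):.3f}")
```

Scoring segments first lets downstream tasks (personalization, re-ranking, mobile rendering) reuse the per-segment values rather than a single opaque page score.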
On Correlation Numbers in 2D Minimal Gravity and Matrix Models
Belavin, A A
2008-01-01
We test recent results for the four-point correlation numbers in Minimal Liouville Gravity against calculations in the one-Matrix Models, and find full agreement. In the process, we construct the resonance transformation which relates coupling parameters of the Liouville Gravity with the couplings of the Matrix Models, up to terms of order 4. We also conjecture the general form of this transformation.
Fox, Rodney O.; Vie, Aymeric; Laurent, Frederique; Chalons, Christophe; Massot, Marc
2012-11-01
Numerous applications involve a disperse phase carried by a gaseous flow. To simulate such flows, one can resort to a number density function (NDF) governed by a kinetic equation. Traditionally, Lagrangian Monte-Carlo methods are used to solve for the NDF, but they are expensive as the number of numerical particles needed must be large to control statistical errors. Moreover, such methods are not well adapted to high-performance computing because of the intrinsic inhomogeneity of the NDF. To overcome these issues, Eulerian methods can be used to solve for the moments of the NDF, resulting in an unclosed Eulerian system of hyperbolic conservation laws. To obtain closure, in this work a multivariate bi-Gaussian quadrature is used, which can account for particle trajectory crossing (PTC) over a large range of Stokes numbers. This closure uses up to four quadrature points in 2-D velocity phase space to capture large-scale PTC, and an anisotropic Gaussian distribution around each quadrature point to model small-scale PTC. Simulations of 2-D particle-laden isotropic turbulence at different Stokes numbers are employed to validate the Eulerian models against results from the Lagrangian approach. Good agreement is found for the number density fields over the entire range of Stokes numbers tested. Research carried out at the Center for Turbulence Research 2012 Summer Program.
Serfling, Robert; Ogola, Gerald
2016-02-10
Among men, prostate cancer (CaP) is the most common newly diagnosed cancer and the second leading cause of death from cancer. A major issue of very large scale is avoiding both over-treatment and under-treatment of CaP cases. The central challenge is deciding clinical significance or insignificance when the CaP biopsy results are positive but only marginally so. A related concern is deciding how to increase the number of biopsy cores for larger prostates. As a foundation for improved choice of number of cores and improved interpretation of biopsy results, we develop a probability model for the number of positive cores found in a biopsy, given the total number of cores, the volumes of the tumor nodules, and - very importantly - the prostate volume. Also, three applications are carried out: guidelines for the number of cores as a function of prostate volume, decision rules for insignificant versus significant CaP using number of positive cores, and, using prior distributions on total tumor size, Bayesian posterior probabilities for insignificant CaP and posterior median CaP. The model-based results have generality of application, take prostate volume into account, and provide attractive tradeoffs of specificity versus sensitivity. Copyright © 2015 John Wiley & Sons, Ltd.
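The dependence on prostate volume can be illustrated with a toy binomial stand-in for the paper's model, in which each core independently hits tumor with probability equal to the tumor volume fraction (the actual model is more refined; this only shows why the same biopsy result means different things in different prostate volumes):

```python
import math

def p_positive_cores(n_cores, k, tumor_vol, prostate_vol):
    """P(exactly k of n cores positive) under a toy binomial model:
    each core hits tumor independently with probability equal to the
    tumor volume fraction. Volumes in mL; an illustrative assumption,
    not the paper's probability model."""
    p = tumor_vol / prostate_vol
    return math.comb(n_cores, k) * p**k * (1.0 - p)**(n_cores - k)

# A 2 mL tumor nodule and a 12-core biopsy: chance of exactly 2 positive
# cores in a small (30 mL) versus a large (60 mL) prostate.
for volume in (30.0, 60.0):
    print(f"{volume:.0f} mL prostate: {p_positive_cores(12, 2, 2.0, volume):.3f}")
```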
Winahju, W. S.; Mukarromah, A.; Putri, S.
2015-03-01
Leprosy is a chronic infectious disease caused by the leprosy bacterium (Mycobacterium leprae). Leprosy has become an important issue in Indonesia because its morbidity is quite high. Based on WHO data from 2014, in 2012 Indonesia had the highest number of new leprosy patients after India and Brazil, contributing 18,994 cases (8.7% of the world total). This places Indonesia as the country with the highest leprosy morbidity among ASEAN countries. The province that contributes most to the number of leprosy patients in Indonesia is East Java. There are two kinds of leprosy: paucibacillary and multibacillary. The morbidity of multibacillary leprosy is higher than that of paucibacillary leprosy. This paper discusses modeling the numbers of multibacillary and paucibacillary leprosy patients as response variables. These responses are count variables, so modeling is conducted using the bivariate Poisson regression method. The experimental units are in East Java, and the predictors involved are environment, demography, and poverty. The model uses data from 2012, and the result indicates that all predictors have a significant influence.
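A common construction of a correlated bivariate Poisson pair, the trivariate reduction used in many bivariate Poisson regression treatments, can be sketched as follows (the rates are illustrative, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

def bivariate_poisson(lam1, lam2, lam0, size):
    """Trivariate reduction: X1 = Y1 + Z, X2 = Y2 + Z with independent
    Poisson Y1 ~ Pois(lam1), Y2 ~ Pois(lam2), Z ~ Pois(lam0), so the
    shared shock Z induces Cov(X1, X2) = lam0."""
    z = rng.poisson(lam0, size)
    return rng.poisson(lam1, size) + z, rng.poisson(lam2, size) + z

# Illustrative rates: multibacillary counts higher than paucibacillary,
# with a shared component linking the two responses.
mb, pb = bivariate_poisson(lam1=5.0, lam2=2.0, lam0=1.5, size=200_000)
print(mb.mean(), pb.mean())   # means ~ lam1 + lam0 and lam2 + lam0
print(np.cov(mb, pb)[0, 1])   # covariance ~ lam0
```

In the regression setting, lam1 and lam2 are each modeled as exp(x'beta) in the district-level predictors (environment, demography, poverty).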
A Comparison of Three Random Number Generators for Aircraft Dynamic Modeling Applications
Grauer, Jared A.
2017-01-01
Three random number generators, which produce Gaussian white noise sequences, were compared to assess their suitability in aircraft dynamic modeling applications. The first generator considered was the MATLAB (registered) implementation of the Mersenne-Twister algorithm. The second generator was a website called Random.org, which processes atmospheric noise measured using radios to create the random numbers. The third generator was based on synthesis of the Fourier series, where the random number sequences are constructed from prescribed amplitude and phase spectra. A total of 200 sequences, each having 601 random numbers, for each generator were collected and analyzed in terms of the mean, variance, normality, autocorrelation, and power spectral density. These sequences were then applied to two problems in aircraft dynamic modeling, namely estimating stability and control derivatives from simulated onboard sensor data, and simulating flight in atmospheric turbulence. In general, each random number generator had good performance and is well-suited for aircraft dynamic modeling applications. Specific strengths and weaknesses of each generator are discussed. For Monte Carlo simulation, the Fourier synthesis method is recommended because it most accurately and consistently approximated Gaussian white noise and can be implemented with reasonable computational effort.
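The third generator can be sketched as follows: a flat amplitude spectrum with uniformly random phases is inverted to the time domain. This is one common recipe; the paper's exact amplitude and phase prescription may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def fourier_white_noise(n, sigma=1.0):
    """Synthesize an approximately Gaussian white noise sequence from a
    prescribed flat amplitude spectrum and uniformly random phases."""
    n_bins = n // 2  # positive-frequency bins; the DC bin is kept at zero
    phases = rng.uniform(0.0, 2.0 * np.pi, n_bins)
    spectrum = np.concatenate(([0.0], np.exp(1j * phases)))
    x = np.fft.irfft(spectrum, n)    # real sequence carrying these phases
    return sigma * x / np.std(x)     # rescale to the requested std

x = fourier_white_noise(601)         # same sequence length as in the study
print(x.mean(), x.std())
```

Because the sequence is a sum of many equal-amplitude sinusoids with independent random phases, its marginal distribution is close to Gaussian by the central limit theorem, and zeroing the DC bin makes the mean essentially exact.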
Modeling Size-number Distributions of Seeds for Use in Soil Bank Studies
Institute of Scientific and Technical Information of China (English)
Hugo Casco; Alexandra Soveral Dias; Luís Silva Dias
2008-01-01
Knowledge of soil seed banks is essential to understand the dynamics of plant populations and communities and would greatly benefit from the integration of existing knowledge on ecological correlations of seed size and shape. The present study aims to establish a feasible and meaningful method to describe size-number distributions of seeds in multi-species situations. For that purpose, size-number distributions of seeds with known length, width and thickness were determined by sequential sieving. The most appropriate combination of sieves and seed dimensions was established, and the adequacy of the power function and the Weibull model to describe size-number distributions of spherical, non-spherical, and all seeds was investigated. We found that the geometric mean of seed length, width and thickness was the most adequate size estimator, providing shape-independent measures of seed volume directly related to sieve mesh side, and that both the power function and the Weibull model provide high-quality descriptions of size-number distributions of spherical, non-spherical, and all seeds. We also found that, in spite of its slightly lower accuracy, the power function is, at this stage, a more trustworthy model to characterize size-number distributions of seeds in soil banks because in some Weibull equations the estimates of the scale parameter were not acceptable.
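The size estimator and the power-function description can be sketched with invented sieving data:

```python
import numpy as np

def geometric_mean_size(length, width, thickness):
    """Shape-independent seed size estimator: the geometric mean of the
    three linear dimensions (cube root of their product)."""
    return (length * width * thickness) ** (1.0 / 3.0)

# Invented sieving results: mesh side (mm) and seeds retained per class.
mesh = np.array([0.2, 0.4, 0.8, 1.6, 3.2])
counts = np.array([920.0, 410.0, 160.0, 70.0, 25.0])

# Power-function description N = a * s**b, fitted in log-log space;
# b < 0 reflects the usual excess of small seeds in soil banks.
b, log_a = np.polyfit(np.log(mesh), np.log(counts), 1)
print(f"exponent b = {b:.2f}, prefactor a = {np.exp(log_a):.1f}")
```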
Directory of Open Access Journals (Sweden)
R. H. Moore
2013-04-01
Full Text Available We use the Global Modelling Initiative (GMI) chemical transport model with a cloud droplet parameterisation adjoint to quantify the sensitivity of cloud droplet number concentration to uncertainties in predicting CCN concentrations. Published CCN closure uncertainties for six different sets of simplifying compositional and mixing state assumptions are used as proxies for modelled CCN uncertainty arising from application of those scenarios. It is found that cloud droplet number concentrations (Nd) are fairly insensitive to the number concentration (Na) of aerosol which act as CCN over the continents (∂lnNd/∂lnNa ~ 10–30%), but the sensitivities exceed 70% in pristine regions such as the Alaskan Arctic and remote oceans. This means that CCN concentration uncertainties of 4–71% translate into only 1–23% uncertainty in cloud droplet number, on average. Since most of the anthropogenic indirect forcing is concentrated over the continents, this work shows that the application of Köhler theory and attendant simplifying assumptions in models is not a major source of uncertainty in predicting cloud droplet number or anthropogenic aerosol indirect forcing for the liquid, stratiform clouds simulated in these models. However, it does highlight the sensitivity of some remote areas to pollution brought into the region via long-range transport (e.g., biomass burning) or from seasonal biogenic sources (e.g., phytoplankton as a source of dimethylsulfide in the southern oceans). Since these transient processes are not captured well by the climatological emissions inventories employed by current large-scale models, the uncertainties in aerosol-cloud interactions during these events could be much larger than those uncovered here. This finding motivates additional measurements in these pristine regions, for which few observations exist, to quantify the impact (and associated uncertainty) of transient aerosol processes on cloud properties.
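The reported sensitivity ∂lnNd/∂lnNa can be approximated numerically for any activation parameterisation; the sketch below uses a toy saturating activation curve (the paper computes this quantity with a parameterisation adjoint, not finite differences):

```python
import numpy as np

def log_log_sensitivity(activation, na, rel_step=0.01):
    """Central finite-difference estimate of d(ln Nd)/d(ln Na) for a
    droplet activation function `activation(Na) -> Nd`."""
    nd_lo = activation(na * (1.0 - rel_step))
    nd_hi = activation(na * (1.0 + rel_step))
    return (np.log(nd_hi) - np.log(nd_lo)) / (
        np.log(1.0 + rel_step) - np.log(1.0 - rel_step))

# Toy activation curve that saturates at high aerosol number, mimicking
# the pristine-ocean (sensitive) vs polluted-continent (insensitive) contrast.
def toy_activation(na):
    return na / (1.0 + na / 500.0)

print(log_log_sensitivity(toy_activation, 50.0))    # pristine: near 1
print(log_log_sensitivity(toy_activation, 5000.0))  # polluted: near 0.1
```

The saturation of droplet activation at high aerosol loading is exactly why large CCN uncertainties over the continents translate into much smaller Nd uncertainties.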
Modeling for Green Supply Chain Evaluation
Directory of Open Access Journals (Sweden)
Elham Falatoonitoosi
2013-01-01
Full Text Available Green supply chain management (GSCM) has become a practical approach to developing environmental performance. Under strict regulations and stakeholder pressures, enterprises need to enhance and improve GSCM practices, which are influenced by both traditional and green factors. This study developed a causal evaluation model to guide the selection of qualified suppliers by prioritizing various criteria and mapping causal relationships to find effective criteria for improving the green supply chain. The aim of the case study was to model and examine the influential and important main GSCM practices, namely green logistics, organizational performance, green organizational activities, environmental protection, and green supplier evaluation. In the case study, the decision-making trial and evaluation laboratory (DEMATEL) technique is applied to test the developed model. The result of the case study shows that only the “green supplier evaluation” and “green organizational activities” criteria of the model are in the cause group, while the other criteria are in the effect group.
Directory of Open Access Journals (Sweden)
K. J. Pringle
2009-01-01
Full Text Available Empirical relationships that link cloud droplet number (CDN) to aerosol number or mass are commonly used to calculate global fields of CDN for climate forcing assessments. In this work we use a sectional global model of sulfate and sea-salt aerosol coupled to a mechanistic aerosol activation scheme to explore the limitations of this approach. We find that a given aerosol number concentration produces a wide range of CDN concentrations due to variations in the shape of the aerosol size distribution. On a global scale, the dependence of CDN on the size distribution results in regional biases in predicted CDN (for a given aerosol number). Empirical relationships between aerosol number and CDN are often derived from regional data but applied to the entire globe. In an analogous process, we derive regional "correlation-relations" between aerosol number and CDN and apply these regional relations to calculations of CDN on the global scale. The global mean percentage error in CDN caused by using regionally derived CDN-aerosol relations is 20 to 26%, which is about half the global mean percentage change in CDN caused by doubling the updraft velocity. However, the error is as much as 25–75% in the Southern Ocean, the Arctic and regions of persistent stratocumulus when an aerosol-CDN correlation relation from the North Atlantic is used. These regions produce much higher CDN concentrations (for a given aerosol number) than predicted by the globally uniform empirical relations. CDN-aerosol number relations from different regions also show very different sensitivity to changing aerosol. The magnitude of the rate of change of CDN with particle number, a measure of the aerosol efficacy, varies by a factor of 4. CDN in cloud-processed regions of persistent stratocumulus is particularly sensitive to changing aerosol number. It is therefore likely that the indirect effect will be underestimated in these important regions.
Fuzzy Based Evaluation of Software Quality Using Quality Models and Goal Models
Directory of Open Access Journals (Sweden)
Arfan Mansoor
2015-09-01
Full Text Available Software quality requirements are an essential part of the success of software development. Defined and guaranteed quality in software development requires identifying, refining, and predicting quality properties by appropriate means. Goal models of goal-oriented requirements engineering (GORE) and quality models are useful for modelling functional goals as well as quality goals. Once the goal models representing the functional requirements and integrated quality goals are obtained, each functional requirement arising from the functional goals and each quality requirement arising from the quality goals must be evaluated. The process consists of two main parts. In the first part, the goal models are used to evaluate functional goals. The leaf-level goals are used to establish the evaluation criteria. Stakeholders are also involved to contribute their opinions about the importance of each goal (functional and/or quality goal). Stakeholder opinions are then converted into quantifiable numbers using triangular fuzzy numbers (TFN). After applying the defuzzification process to the TFN, the scores (weights) are obtained for each goal. In the second part, specific quality goals are identified and refined/tailored based on existing quality models, and their evaluation is performed similarly using TFN and by applying the defuzzification process. The two-step process helps to evaluate each goal based on stakeholder opinions, to evaluate the impact of quality requirements, and to evaluate the relationships among functional goals and quality goals. The process is described and applied to the 'cyclecomputer' case study.
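The TFN aggregation and defuzzification steps can be sketched as follows; the aggregation rule shown (min of lower bounds, mean of modes, max of upper bounds) is one common choice and an assumption here, not necessarily the paper's:

```python
def tfn_from_opinions(opinions):
    """Aggregate stakeholder opinions, each a (l, m, u) triangular fuzzy
    number, into a single TFN: min of lowers, mean of modes, max of
    uppers. Other aggregation rules exist."""
    lowers, modes, uppers = zip(*opinions)
    return (min(lowers), sum(modes) / len(modes), max(uppers))

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Three stakeholders rate the importance of one goal on a 0-10 fuzzy scale.
goal_tfn = tfn_from_opinions([(3, 5, 7), (4, 6, 8), (5, 7, 9)])
print(defuzzify(goal_tfn))  # → 6.0
```

The resulting crisp score is the weight assigned to the goal; repeating this for every leaf-level goal yields the ranking used in the evaluation.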
Open intersection numbers, Kontsevich--Penner model and cut-and-join operators
Alexandrov, Alexander
2014-01-01
We continue our investigation of the Kontsevich--Penner model, which describes intersection theory on moduli spaces both for open and closed curves. In particular, we show how Buryak's residue formula, which connects two generating functions of intersection numbers, appears in the general context of matrix models and tau-functions. This allows us to prove that the Kontsevich--Penner matrix integral indeed describes open intersection numbers. For arbitrary $N$ we show that the string and dilaton equations completely specify the solution of the KP hierarchy. We derive a complete family of the Virasoro and W-constraints and, using these constraints, we construct the cut-and-join operators. The case $N=1$, corresponding to open intersection numbers, is particularly interesting: for this case we obtain two different families of the Virasoro constraints, so that the difference between them describes the dependence of the tau-function on even times.
Modeling the Aerodynamic Lift Produced by Oscillating Airfoils at Low Reynolds Number
Khalid, Muhammad Saif Ullah
2015-01-01
For the present study, setting the Strouhal number (St) as the control parameter, numerical simulations of flow past an oscillating NACA-0012 airfoil at a Reynolds number (Re) of 1,000 are performed. Temporal profiles of the unsteady forces (lift and thrust) and their spectral analysis clearly indicate the solution to be a period-1 attractor at low Strouhal numbers. This study reveals that the aerodynamic forces produced by the plunging airfoil are independent of the initial kinematic conditions of the airfoil, which proves the existence of a limit cycle. At higher Strouhal numbers, the frequencies present in the oscillating lift force comprise the fundamental (fs) and its even and odd harmonics (e.g., 3fs). Using numerical simulations, the shedding frequencies (fs) were observed to be nearly equal to the excitation frequencies in all the cases. The unsteady lift force generated by the plunging airfoil is modeled by a modified van der Pol oscillator. Using the method of multiple scales and spectral analysis of steady-state CFD solutions, frequencies and damping terms in th...
High Reynolds Number Studies in the Wake of a Submarine Model
Jimenez, Juan; Reynolds, Ryan; Smits, Alexander
2005-11-01
Results are presented from submarine wake studies conducted in Princeton University's High Reynolds Number Test Facility (HRTF). Compressed air is used as a working fluid enabling Reynolds numbers based on length of up to 10^8, about 1/5 of full scale. Measurements at Reynolds numbers up to 3 x10^6 have been completed, and show that, for the model condition without fins, the wake mean velocity was self-similar at locations 6 and 9 diameters downstream. Also, PIV at Reynolds numbers near 10^4 showed that when the yaw angle was varied the sail-tip and sail-hull junction vortices increased in magnitude emphasizing the importance of fully understanding the flow characteristics of a maneuvering submarine.
Deliyianni, Eleni; Gagatsis, Athanasios; Elia, Iliada; Panaoura, Areti
2016-01-01
The aim of this study was to propose and validate a structural model in fraction and decimal number addition, which is founded primarily on a synthesis of major theoretical approaches in the field of representations in Mathematics and also on previous research on the learning of fractions and decimals. The study was conducted among 1,701 primary…
The type-reproduction number T in models for infectious disease control
Heesterbeek, J.A.P.; Roberts, M.G.
2007-01-01
A ubiquitous quantity in epidemic modelling is the basic reproduction number R0. This became so popular in the 1990s that ‘All you need know is R0!’ became a familiar catch-phrase. The value of R0 defines, among other things, the control effort needed to eliminate the infection from a homogeneous ho
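For a next-generation matrix K, the type-reproduction number for control targeted at a single host type can be computed with the Roberts-Heesterbeek formula; the matrix below is illustrative:

```python
import numpy as np

def type_reproduction_number(K, host=0):
    """Type-reproduction number T for control aimed at one host type:
    T = e' K (I - (I - P) K)^{-1} e, with P the projection onto that
    type (valid when the spectral radius of (I - P) K is below one)."""
    n = K.shape[0]
    e = np.zeros(n)
    e[host] = 1.0
    P = np.outer(e, e)
    M = np.eye(n) - (np.eye(n) - P) @ K
    return e @ K @ np.linalg.solve(M, e)

# Two-type example: infection can only be eliminated by control aimed
# solely at type 0 if that control can bring T below 1.
K = np.array([[1.2, 0.4],
              [0.5, 0.3]])
print(type_reproduction_number(K, host=0))  # ≈ 1.486
```

For the 2x2 case this reduces to the familiar closed form T = k11 + k12*k21/(1 - k22), which the code reproduces.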
Positive expectations feedback experiments and number guessing games as models of financial markets
Sonnemans, J.; Tuinstra, J.
2010-01-01
In repeated number guessing games choices typically converge quickly to the Nash equilibrium. In positive expectations feedback experiments, however, convergence to the equilibrium price tends to be very slow, if it occurs at all. Both types of experimental designs have been suggested as modeling es
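The fast convergence typical of repeated number guessing games can be illustrated with naive best-response dynamics in the p-beauty-contest game (p = 2/3 is the classic choice):

```python
def guessing_game_dynamics(initial_mean, p=2/3, rounds=10):
    """Naive best-response dynamics in the p-beauty-contest game: each
    round every player guesses p times the previous round's mean, so the
    mean contracts geometrically toward the Nash equilibrium at 0."""
    means = [initial_mean]
    for _ in range(rounds):
        means.append(p * means[-1])
    return means

means = guessing_game_dynamics(50.0)
print([round(m, 2) for m in means])  # after 10 rounds the mean is below 1
```

In positive expectations feedback markets the analogous map has a coefficient near (or above) one, which is one intuition for why convergence there is slow or absent.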
Modeling of low-capillary number segmented flows in microchannels using OpenFOAM
Hoang, D.A.; Van Steijn V.; Portela, L.M.; Kreutzer, M.T.; Kleijn, C.R.
2012-01-01
Modeling of low-Capillary number segmented flows in microchannels is important for the design of microfluidic devices. We present numerical validations of microfluidic flow simulations using the volume-of-fluid (VOF) method as implemented in OpenFOAM. Two benchmark cases were investigated to ensure
Comparison of criteria for choosing the number of classes in Bayesian finite mixture models
K. Nasserinejad (Kazem); J.M. van Rosmalen (Joost); W. de Kort (Wim); E.M.E.H. Lesaffre (Emmanuel)
2017-01-01
textabstractIdentifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown th
Undetected Higgs decays and neutrino masses in gauge mediated, lepton number violating models
Banks, Tom; Fortin, Jean-François
2008-01-01
We discuss SUSY models in which renormalizable lepton number violating couplings hide the decay of the Higgs through h -> \\chi_1^0 + \\chi_1^0 followed by \\chi_1^0 -> \\tau + 2 jets or \\chi_1^0 -> \
Economic Evaluation in Medical Information Technology: Why the Numbers Don’t Add Up
Eisenstein, Eric L.; Ortiz, Maqui; Anstrom, Kevin J.; Crosslin, David R.; Lobach, David F.
2006-01-01
Standards for the economic evaluation of medical technologies were instituted in the mid-1990s, yet little is known about their application in medical information technology studies. In a review of evaluation studies published between 1982 and 2002, we found that the volume and variety of economic evaluations had increased. However, investigators routinely omitted key cost or effectiveness elements in their designs, resulting in publications with incomplete, and potentially biased, economic findings. PMID:17238533
Directory of Open Access Journals (Sweden)
Juwita Rini
2017-07-01
ABSTRACT Basically, education is a process that helps humans develop themselves to face any changes that occur. Within the whole process of education, teaching and learning activities are the most basic ones, which implies that the success or failure of educational goals depends on how the teaching and learning process is designed and implemented. This is closely related to the learning model applied in the teaching and learning process. One learning model that can be applied is the cooperative learning model, which is believed to improve students' cognitive and affective skills. Cooperative learning has several types or variations, one of which is Numbered Heads Together (NHT). However, a problem was found that can hamper the achievement of an optimal teaching and learning process during the implementation of NHT; an appropriate solution is therefore needed so that NHT can be implemented more effectively. Keywords: cooperative learning model, NHT, implementation of NHT.
Hansson-Sandsten, Maria
2010-05-01
The purpose of this paper is to present the optimal number of windows and window lengths when using the multiple-window spectrogram for estimation of non-stationary processes of shorter or longer duration. Such processes could start in the EEG as a result of a stimulus, e.g., steady-state visual evoked potentials (SSVEP). In many applications, the Welch method is used with standard set-ups for window lengths and the number of averaged spectra/spectrograms. This paper optimizes the window lengths and number of windows of the Welch method and of other, more recent, so-called multiple-window or multitaper methods, and compares the mean squared errors of these methods. Approximate formulas for the choice of the optimal number of windows and window lengths are also given. Examples of spectrogram estimation of SSVEP are shown.
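The window-length trade-off discussed above can be sketched with SciPy's stock Welch estimator (an illustration of the standard method the paper starts from, not of its optimal multitaper formulas; the sampling rate and test signal are made-up):

```python
import numpy as np
from scipy import signal

# Illustrative sketch: estimate the spectrum of a noisy sinusoid while
# varying the segment length, i.e. the number of averaged windows.
# Longer segments give finer frequency resolution but fewer averages
# (higher variance); shorter segments smooth more at lower resolution.
rng = np.random.default_rng(0)
fs = 256.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)         # 4 s of data
x = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)

peaks = {}
for nperseg in (512, 256, 128):     # fewer vs. more averaged windows
    f, pxx = signal.welch(x, fs=fs, nperseg=nperseg)
    peaks[nperseg] = f[np.argmax(pxx)]
    print(f"nperseg={nperseg:3d}: spectral peak at {peaks[nperseg]:.1f} Hz")
```

All set-ups locate the 12 Hz component here; the paper's point is that the mean squared error of the surrounding spectral estimate depends strongly on this choice.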
CSIR Research Space (South Africa)
Dorasamy, K
2015-09-01
Full Text Available Directional Patterns, which are formed by grouping regions of orientation fields falling within a specific range, vary under rotation and with the number of regions. For fingerprint classification schemes, this can result in misclassification due...
Vayenas, Constantinos G; Grigoriou, Dimitrios P
2016-01-01
We discuss the common features between the Standard Model taxonomy of particles, based on electric charge, strangeness and isospin, and the taxonomy emerging from the key structural elements of the rotating neutrino model, which describes baryons as bound states formed by three highly relativistic electrically polarized neutrinos forming a symmetric ring rotating around a central electrically charged or polarized lepton. It is shown that the two taxonomies are fully compatible with each other.
Evaluation of a lake whitefish bioenergetics model
Madenjian, Charles P.; O'Connor, Daniel V.; Pothoven, Steven A.; Schneeberger, Philip J.; Rediske, Richard R.; O'Keefe, James P.; Bergstedt, Roger A.; Argyle, Ray L.; Brandt, Stephen B.
2006-01-01
We evaluated the Wisconsin bioenergetics model for lake whitefish Coregonus clupeaformis in the laboratory and in the field. For the laboratory evaluation, lake whitefish were fed rainbow smelt Osmerus mordax in four laboratory tanks during a 133-d experiment. Based on a comparison of bioenergetics model predictions of lake whitefish food consumption and growth with observed consumption and growth, we concluded that the bioenergetics model furnished significantly biased estimates of both food consumption and growth. On average, the model overestimated consumption by 61% and underestimated growth by 16%. The source of the bias was probably an overestimation of the respiration rate. We therefore adjusted the respiration component of the bioenergetics model to obtain a good fit of the model to the observed consumption and growth in our laboratory tanks. Based on the adjusted model, predictions of food consumption over the 133-d period fell within 5% of observed consumption in three of the four tanks and within 9% of observed consumption in the remaining tank. We used polychlorinated biphenyls (PCBs) as a tracer to evaluate model performance in the field. Based on our laboratory experiment, the efficiency with which lake whitefish retained PCBs from their food (ρ) was estimated at 0.45. We applied the bioenergetics model to Lake Michigan lake whitefish and then used PCB determinations of both lake whitefish and their prey from Lake Michigan to estimate ρ in the field. Application of the original model to Lake Michigan lake whitefish yielded a field ρ estimate of 0.28, implying that the original formulation of the model overestimated consumption in Lake Michigan by 61%. Application of the bioenergetics model with the adjusted respiration component resulted in a field ρ estimate of 0.56, implying that this revised model underestimated consumption by 20%.
Institute of Scientific and Technical Information of China (English)
Zheng-yan Lin; Yu-ze Yuan
2012-01-01
Semiparametric models with a diverging number of predictors arise in many contemporary scientific areas. Variable selection for these models consists of two components: model selection for the nonparametric components and selection of significant variables for the parametric portion. In this paper, we consider a variable selection procedure that combines basis function approximation with the SCAD penalty. The proposed procedure simultaneously selects significant variables in the parametric components and the nonparametric components. With an appropriate selection of tuning parameters, we establish the consistency and sparseness of this procedure.
GENESYS 1990-91: Selected Program Evaluations. Publication Number 90.39.
Wilkinson, David; Spano, Sedra G.
GENESYS is a GENeric Evaluation SYStem for data collection and evaluation through computer technology. GENESYS gathers and reports the standard information (student characteristics, achievement, attendance, discipline, grades/credits, dropouts, and retainees) for specific groups of students. In the Austin (Texas) Independent School District's…
Comparison of Criteria for Choosing the Number of Classes in Bayesian Finite Mixture Models.
Nasserinejad, Kazem; van Rosmalen, Joost; de Kort, Wim; Lesaffre, Emmanuel
2017-01-01
Identifying the number of classes in Bayesian finite mixture models is a challenging problem. Several criteria have been proposed, such as adaptations of the deviance information criterion, marginal likelihoods, Bayes factors, and reversible jump MCMC techniques. It was recently shown that in overfitted mixture models, the overfitted latent classes will asymptotically become empty under specific conditions for the prior of the class proportions. This result may be used to construct a criterion for finding the true number of latent classes, based on the removal of latent classes that have negligible proportions. Unlike some alternative criteria, this criterion can easily be implemented in complex statistical models such as latent class mixed-effects models and multivariate mixture models using standard Bayesian software. We performed an extensive simulation study to develop practical guidelines to determine the appropriate number of latent classes based on the posterior distribution of the class proportions, and to compare this criterion with alternative criteria. The performance of the proposed criterion is illustrated using a data set of repeatedly measured hemoglobin values of blood donors.
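The empty-overfitted-class idea above can be illustrated with variational inference from scikit-learn rather than the MCMC samplers the paper discusses (the component count, the sparse Dirichlet prior value, and the 0.05 negligibility threshold are all assumptions of this sketch):

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Deliberately overfit a mixture with more components than the data
# support; a sparse prior on the class proportions pushes the surplus
# components toward zero weight, and the "true" number of classes is
# read off as the number of non-negligible proportions.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-4, 1, 300),
                    rng.normal(4, 1, 300)]).reshape(-1, 1)

gm = BayesianGaussianMixture(
    n_components=8,                    # deliberately too many
    weight_concentration_prior=0.01,   # sparse prior empties extras
    max_iter=500, random_state=0).fit(x)

n_effective = int(np.sum(gm.weights_ > 0.05))  # threshold is a choice
print("effective classes:", n_effective)
```

With two well-separated clusters, most of the eight fitted proportions collapse toward zero, leaving only a couple of non-negligible classes.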
Working with Teaching Assistants: Three Models Evaluated
Cremin, Hilary; Thomas, Gary; Vincett, Karen
2005-01-01
Questions about how best to deploy teaching assistants (TAs) are particularly apposite given the greatly increasing numbers of TAs in British schools and given findings about the difficulty of effecting adult teamwork in classrooms. In six classrooms, three models of team organisation and planning for the work of teaching assistants -- "room…
Evaluating the Pedagogical Potential of Hybrid Models
Levin, Tzur; Levin, Ilya
2013-01-01
The paper examines how the use of hybrid models--consisting of interacting continuous and discrete processes--may assist in teaching system thinking. We report an experiment in which undergraduate students were asked to choose between a hybrid and a continuous solution for a number of control problems. A correlation has been found between…
Saphire models and software for ASP evaluations
Energy Technology Data Exchange (ETDEWEB)
Sattison, M.B. [Idaho National Engineering Lab., Idaho Falls, ID (United States)
1997-02-01
Over the past three years, the Idaho National Engineering Laboratory (INEL) has created 75 plant-specific Accident Sequence Precursor (ASP) models using the SAPHIRE suite of PRA codes. Along with the new models, the INEL has also developed a new module for SAPHIRE which is tailored specifically to the unique needs of ASP evaluations. These models and software will be the next generation of risk tools for the evaluation of accident precursors by both the U.S. Nuclear Regulatory Commission's (NRC's) Office of Nuclear Reactor Regulation (NRR) and the Office for Analysis and Evaluation of Operational Data (AEOD). This paper presents an overview of the models and software. Key characteristics include: (1) classification of the plant models according to plant response, with a unique set of event trees for each plant class, (2) plant-specific fault trees using supercomponents, (3) generation and retention of all system and sequence cutsets, (4) full flexibility in modifying logic, regenerating cutsets, and requantifying results, and (5) a user interface for streamlined evaluation of ASP events. Future plans for the ASP models are also presented.
Directory of Open Access Journals (Sweden)
R. H. Moore
2012-08-01
Full Text Available We use the Global Modeling Initiative (GMI) chemical transport model with a cloud droplet parameterization adjoint to quantify the sensitivity of cloud droplet number concentration to uncertainties in predicting CCN concentrations. Published CCN closure prediction uncertainties for six different sets of simplifying compositional and mixing state assumptions are used as proxies for modeled CCN uncertainty arising from application of those scenarios. It is found that cloud droplet number concentrations are fairly insensitive to CCN-active aerosol number concentrations over the continents (∂N_d/∂N_a ~ 10–30%), but the sensitivities exceed 70% in pristine regions such as the Alaskan Arctic and remote oceans. Since most of the anthropogenic indirect forcing is concentrated over the continents, this work shows that the application of Köhler theory and attendant simplifying assumptions in models is not a major source of uncertainty in predicting cloud droplet number or anthropogenic aerosol indirect forcing for the liquid, stratiform clouds simulated in these models. However, it does highlight the sensitivity of some remote areas to pollution brought into the region via long-range transport (e.g. biomass burning) or from seasonal biogenic sources (e.g. phytoplankton as a source of dimethylsulfide in the southern oceans). Since these transient processes are not captured well by the climatological emissions inventories employed by current large-scale models, the uncertainties in aerosol-cloud interactions during these events could be much larger than those uncovered here. This finding motivates additional measurements in these pristine regions, which have received little attention to date, in order to quantify the impact of, and uncertainty associated with, transient processes in effecting changes in cloud properties.
Institute of Scientific and Technical Information of China (English)
ZHANG Ling; ZHOU Jun-li; CHEN Xiao-chun; LAN Li; ZHANG Nan
2008-01-01
The ABE-KONDOH-NAGANO, ABID, YANG-SHIH, and LAUNDER-SHARMA low-Reynolds-number turbulence models were applied to simulate unsteady turbulent flow around a square cylinder, covering both the phase-resolved and the time-averaged unsteady flow fields. Meanwhile, the drag and lift coefficients predicted by the four low-Reynolds-number turbulence models were analyzed. The simulated results of the YANG-SHIH model are close to the large eddy simulation results and experimental results, and they are significantly better than those of the ABE-KONDOH-NAGANO, ABID, and LAUNDER-SHARMA models. The modification of the generation of turbulence kinetic energy is the key factor in a successful simulation with the YANG-SHIH model, while the correction of the turbulence near the wall has a minor influence on the simulation results. For the ABE-KONDOH-NAGANO, ABID, and LAUNDER-SHARMA models, satisfactory simulation results cannot be obtained owing to the lack of a modification of the generation of turbulence kinetic energy. With the joint use of wall functions and the turbulence models with the adoption of a corrected swirl stream, the flow around a square cylinder can be fully simulated with fewer near-wall grids.
Pole tide Love number - an important parameter for polar motion modeling
Kirschner, S.; Schmidt, M. G.; Seitz, F.
2013-12-01
The Euler-Liouville equation is the basic physical model used to describe Earth rotation. It is based on the balance of angular momentum in the Earth system. The pole tide Love number is needed to characterize the rotational deformation effect, which depends on the internal structure and rheology of the Earth. There is a direct dependency between the pole tide Love number and the period and damping of the Chandler oscillation. Here we estimate the pole tide Love number on the basis of an inversion of the Euler-Liouville equation. The Earth orientation parameters are used as input; they have been observed precisely over several decades by geodetic methods (C01 and C04 time series). It will be shown that the estimated pole tide Love number leads to significantly better results for polar motion compared to the original value taken from the Conventions of the International Earth Rotation and Reference Systems Service (IERS). Nevertheless, the estimation depends on the input models for the subsystems (e.g. atmosphere and ocean models), the applied estimation approach, and the time frame. These aspects are analyzed and discussed in detail.
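The dependency between the pole tide Love number and the Chandler period can be sketched with the textbook elastic-Earth relation (a strong simplification of the Euler-Liouville inversion in the abstract; the numerical values below are illustrative assumptions, not the paper's estimates):

```python
# Simplified elastic-Earth relation between the pole tide Love number k2
# and the Chandler period: T_chandler ≈ T_euler / (1 - k2/k_s), where
# T_euler is the rigid-Earth Euler period and k_s the secular
# (fluid-limit) Love number.
T_euler = 304.5   # days, rigid-Earth Euler period
k_s = 0.942       # secular Love number
k2 = 0.30         # pole tide Love number (assumed value)

T_chandler = T_euler / (1 - k2 / k_s)
print(f"Chandler period ≈ {T_chandler:.0f} days")
```

The observed period of about 433 days differs from this elastic estimate because the ocean pole tide and the fluid core also shift the period, which is one reason an effective Love number is estimated from observations rather than fixed at its elastic value.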
Directory of Open Access Journals (Sweden)
Ardisa U. Pradita
2014-04-01
Full Text Available Green tea leaf (Camellia sinensis) is one of the herbal plants used in traditional medicine. Epigallocatechin gallate (EGCG) is the most potent polyphenol component of green tea and has the strongest biological activity. EGCG is known to have a potential effect on wound healing. Objective: This study aimed to determine the effect of adding green tea EGCG to a periodontal dressing on the number of fibroblasts after an artificial gingival wound in an animal model. Methods: A gingival artificial wound was created using a 2 mm punch biopsy on 24 rabbits (Oryctolagus cuniculus). The animals were divided into two groups: periodontal dressing with EGCG was applied to the experimental group and without EGCG to the control group. Decapitation was scheduled at days 3, 5, and 7 after treatment, and histological analysis was performed to count the number of fibroblasts. Results: The number of fibroblasts increased significantly over time in the experimental group treated with the EGCG periodontal dressing compared to the control (p<0.05). Conclusion: EGCG periodontal dressing can increase the number of fibroblasts, and therefore plays a role in wound healing after periodontal surgery in an animal model. DOI: 10.14693/jdi.v20i3.197
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children's strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders' NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children's NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders' NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children's benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that
Evaluation of trends in wheat yield models
Ferguson, M. C.
1982-01-01
Trend terms in models for wheat yield in the U.S. Great Plains for the years 1932 to 1976 are evaluated. The subset of meteorological variables yielding the largest adjusted R² is selected using the method of leaps and bounds. Latent root regression is used to eliminate multicollinearities, and generalized ridge regression is used to introduce bias to provide stability in the data matrix. The regression model used provides for two trends in each of two models: a dependent model in which the trend line is piece-wise continuous, and an independent model in which the trend line is discontinuous at the year of the slope change. It was found that the trend lines best describing the wheat yields consisted of combinations of increasing, decreasing, and constant trend: four combinations for the dependent model and seven for the independent model.
Institute of Scientific and Technical Information of China (English)
WANG Hui; LI Ting-ju; JIN Jun-ze
2005-01-01
In order to estimate the feasibility of electromagnetic casting (EMC) for different metals, a mathematical model named the electromagnetic dimensionless number (EMDN) was presented, and its validity was proved by experiments with aluminum and Sn-3%Pb alloy. From the experiments and the analysis of the EMDN it can be concluded that EMC of steel can be attained only when the magnetic flux density is larger than 0.09 T, while that required for aluminum is only 0.04 T. The mathematical expression of the electromagnetic dimensionless number is given.
Directory of Open Access Journals (Sweden)
Tan Rodney H. G.
2016-01-01
Full Text Available This paper presents an evaluation of horizontal-axis wind turbine torque and mechanical power generation and their relation to the number of blades at a given wind speed. The relationships of wind turbine rotational frequency, tip speed, minimum wind speed, mechanical power, and torque to the number of blades are derived. The purpose of this study is to determine the wind energy extraction efficiency achieved for every increment in blade number. An effective factor is introduced to interpret the effectiveness of the wind turbine in extracting wind energy below and above the minimum wind speed for a given number of blades. An improve factor is introduced to indicate the improvement achieved for every increment in blades. The evaluation was performed with wind turbines of 1 to 6 blades. The results show that the higher the number of blades, the lower the minimum wind speed needed to achieve a unity effective factor. High improve factors are achieved for the 1-to-2 and 2-to-3 blade increments. This contributes to a better understanding and determination of the choice of the number of blades in wind turbine design.
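The torque-power-tip-speed relations mentioned above can be sketched from the standard actuator-disc formulas (textbook background rather than the paper's effective or improve factors; the wind speed, rotor radius, power coefficient, and tip-speed ratios are all assumed values):

```python
import math

# Mechanical power and shaft torque of a horizontal-axis turbine from
# wind speed v, rotor radius, power coefficient Cp, and tip-speed ratio.
# Rotors with more blades typically run at a lower optimal tip-speed
# ratio, which raises shaft torque at the same extracted power.
def turbine_power_torque(v, radius, cp, tip_speed_ratio, rho=1.225):
    area = math.pi * radius ** 2
    power = 0.5 * rho * area * v ** 3 * cp     # W
    omega = tip_speed_ratio * v / radius       # rad/s
    torque = power / omega                     # N*m
    return power, torque

# Same extracted power at an assumed lambda ~7 (3-blade-like rotor)
# versus lambda ~10 (2-blade-like rotor)
p3, t3 = turbine_power_torque(v=10.0, radius=5.0, cp=0.40, tip_speed_ratio=7)
p2, t2 = turbine_power_torque(v=10.0, radius=5.0, cp=0.40, tip_speed_ratio=10)
print(f"3-blade-like: P={p3/1e3:.1f} kW, T={t3:.0f} N*m")
print(f"2-blade-like: P={p2/1e3:.1f} kW, T={t2:.0f} N*m")
```

Both rotors extract the same power here; the slower-turning, higher-solidity rotor delivers it at higher torque.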
CRITICAL ANALYSIS OF EVALUATION MODEL LOMCE
Directory of Open Access Journals (Sweden)
José Luis Bernal Agudo
2015-06-01
Full Text Available The evaluation model that the LOMCE puts forward is rooted in neoliberal beliefs, reflecting a specific way of understanding the world. What matters is not the process but the results, with evaluation at the center of the teaching-learning processes. The planning is poor, since the theory that justifies the model is not developed into coherent proposals; there is an excessive concern for excellence, and diversity is left out. A comprehensive way of understanding education should be recovered.
A model-based evaluation system of enterprise
Institute of Scientific and Technical Information of China (English)
Yan Junwei; Ye Yang; Wang Jian
2005-01-01
This paper analyses the architecture of enterprise modeling, proposes indicator selection principles and indicator decomposition methods, examines the approaches to the evaluation of enterprise modeling, and designs an AHP evaluation model. Then a model-based evaluation system of enterprise is presented to effectively evaluate the business model in the framework of enterprise modeling.
The power of sensitivity analysis and thoughts on models with large numbers of parameters
Energy Technology Data Exchange (ETDEWEB)
Havlacek, William [Los Alamos National Laboratory
2008-01-01
The regulatory systems that allow cells to adapt to their environments are exceedingly complex, and although we know a great deal about the intricate mechanistic details of many of these systems, our ability to make accurate predictions about their system-level behaviors is severely limited. We would like to make such predictions for a number of reasons. How can we reverse dysfunctional molecular changes of these systems that cause disease? More generally, how can we harness and direct cellular activities for beneficial purposes? Our ability to make accurate predictions about a system is also a measure of our fundamental understanding of that system. As evidenced by our mastery of technological systems, a useful understanding of a complex system can often be obtained through the development and analysis of a mathematical model, but predictive modeling of cellular regulatory systems, which necessarily relies on quantitative experimentation, is still in its infancy. There is much that we need to learn before modeling for practical applications becomes routine. In particular, we need to address a number of issues surrounding the large number of parameters that are typically found in a model for a cellular regulatory system.
Institute of Scientific and Technical Information of China (English)
Xiao-bin ZHANG; Wei ZHANG; Xue-jun ZHANG
2012-01-01
The volume of fluid (VOF) formulation is applied to model the combustion process of a single droplet in a high-temperature convective free-stream air environment. The calculations solve the flow field for both phases and consider the droplet deformation based on an axisymmetric model. The chemical reaction is modeled with a one-step finite-rate mechanism, and the thermo-physical properties of the gas mixture are species- and temperature-dependent. A mass transfer model applicable to the VOF calculations, accounting for vaporization of the liquid phase, is developed in consideration of the fluctuation of the liquid surface. The model is validated by examining the burning rate constants at different convective air temperatures, which accord well with experimental data from previous studies. Other phenomena from the simulations, such as the transient history of droplet deformation and the flame structure, are also qualitatively consistent with the descriptions of other numerical results. However, a different droplet deformation mechanism is explained for low Reynolds numbers compared with high Reynolds numbers. The calculations verified the feasibility of the VOF computational fluid dynamics (CFD) formulation as well as the mass transfer model due to vaporization.
Modeling Energy and Development : An Evaluation of Models and Concepts
Ruijven, Bas van; Urban, Frauke; Benders, René M.J.; Moll, Henri C.; Sluijs, Jeroen P. van der; Vries, Bert de; Vuuren, Detlef P. van
2008-01-01
Most global energy models are developed by institutes from developed countries, focusing primarily on issues that are important in industrialized countries. Evaluation of the results for Asia of the IPCC/SRES models shows that broad concepts of energy and development, the energy ladder and the envir
Computational modelling of Yorùbá numerals in a number-to-text conversion system
Directory of Open Access Journals (Sweden)
Olúgbénga O. Akinadé
2014-08-01
Full Text Available In this paper, we examine the processes underlying the Yorùbá numeral system and describe a computational system that is capable of converting cardinal numbers to their equivalent Standard Yorùbá number names. First, we studied the mathematical and linguistic basis of the Yorùbá numeral system so as to formalise its arithmetic and syntactic procedures. Next, the process involved in formulating a Context-Free Grammar (CFG) to capture the structure of the Yorùbá numeral system was highlighted. Thereafter, the model was reduced to a set of computer programs to implement the numerical-to-lexical conversion process. System evaluation was done by ranking the output from the software and comparing the output with the representations given by a group of Yorùbá native speakers. The results showed that the system gave correct representations for numbers and produced a recall of 100% with respect to the collected corpus. Our future study is focused on developing a text normalisation system that will produce number names for other numerical expressions such as ordinal numbers, dates, times, money, ratios, etc. in Yorùbá text.
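As a toy analogue of the numerical-to-lexical conversion step (in English rather than Yorùbá, whose vigesimal and partly subtractive system needs a much richer grammar), a recursive decomposition mirrors what a CFG-driven converter does when it rewrites a cardinal into its name:

```python
# Recursively decompose a cardinal number into named units, the same
# divide-and-name pattern a grammar-driven number-to-text system applies
# (here with English rules as a stand-in for the Yorùbá grammar).
UNITS = ("zero one two three four five six seven eight nine ten eleven "
         "twelve thirteen fourteen fifteen sixteen seventeen eighteen "
         "nineteen").split()
TENS = "twenty thirty forty fifty sixty seventy eighty ninety".split()

def to_words(n: int) -> str:
    if n < 20:
        return UNITS[n]
    if n < 100:
        tens, rest = divmod(n, 10)
        return TENS[tens - 2] + ("" if rest == 0 else "-" + to_words(rest))
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        head = to_words(hundreds) + " hundred"
        return head if rest == 0 else head + " and " + to_words(rest)
    thousands, rest = divmod(n, 1000)
    head = to_words(thousands) + " thousand"
    return head if rest == 0 else head + " " + to_words(rest)

print(to_words(245))
```

Each base of the numeral system becomes one recursive rule; the Yorùbá converter additionally needs subtractive rules (names built from "twenty minus five"-style forms), which is why a formal grammar pays off there.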
Directory of Open Access Journals (Sweden)
Nasim Karimi
2016-12-01
Conclusion: According to the results of this study, occupational factors are associated with the number of MSDs developing among carpet weavers. Thus, using standard tools and decreasing work hours per day can reduce the frequency of MSDs among carpet weavers.
Ehret, Phillip J; Monroe, Brian M; Read, Stephen J
2015-05-01
We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory.
A Generic Evaluation Model for Semantic Web Services
Shafiq, Omair
Semantic Web Services research has gained momentum over the last few years, and by now several realizations exist. They are being used in a number of industrial use cases. Soon software developers will be expected to use this infrastructure to build their B2B applications requiring dynamic integration. However, there is still a lack of guidelines for the evaluation of tools developed to realize Semantic Web Services and of applications built on top of them. In normal software engineering practice such guidelines can already be found for traditional component-based systems. Also, some efforts are being made to build performance models for service-based systems. Drawing on these related efforts in component-oriented and service-based systems, we identified the need for a generic evaluation model for Semantic Web Services applicable to any realization. The generic evaluation model will help users and customers orient their systems and solutions towards using Semantic Web Services. In this chapter, we present the requirements for the generic evaluation model for Semantic Web Services and discuss the initial steps that we took to sketch such a model. Finally, we discuss related activities for evaluating semantic technologies.
Energy Technology Data Exchange (ETDEWEB)
Sari, Salih [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey); Erguen, Sule [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey)], E-mail: se@nuke.hacettepe.edu.tr; Barik, Muhammet; Kocar, Cemil; Soekmen, Cemal Niyazi [Hacettepe University, Department of Nuclear Engineering, Beytepe, 06800 Ankara (Turkey)
2009-03-15
In this study, isothermal turbulent bubbly flow is mechanistically modeled. For the modeling, Fluent version 6.3.26 is used as the computational fluid dynamics solver. First, the mechanistic models that simulate the interphase momentum transfer between the gas (bubbles) and liquid (continuous) phases are investigated, and proper models for the known flow conditions are selected. Second, an interfacial area transport equation (IATE) solution is added to Fluent's solution scheme in order to model the interphase momentum transfer mechanisms. In addition to solving the IATE, a bubble number density (BND) approach is also added to Fluent and used in the simulations. Different source/sink models derived for the IATE and BND models are also investigated. Simulations of experiments based on the available data in the literature are performed using the IATE and BND models in two and three dimensions. The results show that the simulations performed with the IATE and BND models agree with each other and with the experimental data. The simulations performed in three dimensions give better agreement with the experimental data.
Evaluating the AS-level Internet models: beyond topological characteristics
Institute of Scientific and Technical Information of China (English)
Fan Zheng-Ping
2012-01-01
A large number of models have been proposed to model the Internet in the past decades. However, the issue of which models better describe the Internet remains open. By analysing the evolving dynamics of the Internet, we suggest that at the autonomous system (AS) level a suitable Internet model should at least be heterogeneous and have a linearly growing mechanism. More importantly, we show that the roles of topological characteristics in evaluating and differentiating Internet models are apparently over-estimated from an engineering perspective. Also, we find that an assortative network is not necessarily more robust than a disassortative network and that a smaller average shortest path length does not necessarily mean higher robustness, which differs from previous observations. Our analytic results are helpful not only for the Internet, but also for other general complex networks.
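The two ingredients called for above, linear growth and heterogeneity, can be sketched with a preferential-attachment growth process (the Barabási-Albert mechanism is one concrete choice for producing heterogeneity; the abstract does not prescribe this exact mechanism, and the sizes below are arbitrary):

```python
import random
from collections import Counter

# Linear growth: exactly one node joins per step. Heterogeneity: each
# newcomer wires m edges to existing nodes chosen with probability
# proportional to their current degree, so a few hubs accumulate many
# links while most nodes stay low-degree.
def grow_network(n, m, seed=0):
    rng = random.Random(seed)
    edges = []
    weighted = list(range(m))        # node ids repeated by degree
    for new in range(m, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(weighted))
        for t in targets:
            edges.append((new, t))
            weighted += [new, t]
    return edges

edges = grow_network(n=2000, m=2)
deg = Counter(v for e in edges for v in e)
avg = 2 * len(edges) / 2000
print(f"average degree {avg:.1f}, maximum degree {max(deg.values())}")
```

The maximum degree ends up far above the average, the heavy-tailed signature that homogeneous random-graph models of the AS topology lack.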
Evaluation of Workflow Management Systems - A Meta Model Approach
Directory of Open Access Journals (Sweden)
Michael Rosemann
1998-11-01
Full Text Available The automated enactment of processes through the use of workflow management systems enables the outsourcing of the control flow from application systems. By now, a large number of systems that follow different workflow paradigms are available. This leads to the problem of selecting the appropriate workflow management system for a given situation. In this paper we outline the benefits of a meta model approach for the evaluation and comparison of different workflow management systems. After a general introduction to meta modeling, the meta models of the workflow management systems WorkParty (Siemens Nixdorf) and FlowMark (IBM) are compared as an example. These product-specific meta models can be generalized to meta reference models, which helps to specify a workflow methodology. As an example, an organisational reference meta model is presented, which helps users specify their requirements for a workflow management system.
Linear programming models and methods of matrix games with payoffs of triangular fuzzy numbers
Li, Deng-Feng
2016-01-01
This book addresses two-person zero-sum finite games in which the payoffs in any situation are expressed with fuzzy numbers. The purpose of this book is to develop a suite of effective and efficient linear programming models and methods for solving matrix games with payoffs in fuzzy numbers. Divided into six chapters, it discusses the concepts of solutions of matrix games with payoffs of intervals, along with their linear programming models and methods. Furthermore, it is directly relevant to the research field of matrix games under uncertain economic management. The book offers a valuable resource for readers involved in theoretical research and practical applications from a range of different fields including game theory, operational research, management science, fuzzy mathematical programming, fuzzy mathematics, industrial engineering, business and social economics.
Model-integrated estimation of normal tissue contamination for cancer SNP allelic copy number data.
Stjernqvist, Susann; Rydén, Tobias; Greenman, Chris D
2011-01-01
SNP allelic copy number data provides intensity measurements for the two different alleles separately. We present a method that estimates the number of copies of each allele at each SNP position, using a continuous-index hidden Markov model. The method is especially suited for cancer data, since it includes the fraction of normal tissue contamination, often present when studying data from cancer tumors, into the model. The continuous-index structure takes into account the distances between the SNPs, and is thereby appropriate also when SNPs are unequally spaced. In a simulation study we show that the method performs favorably compared to previous methods even with as much as 70% normal contamination. We also provide results from applications to clinical data produced using the Affymetrix genome-wide SNP 6.0 platform.
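The contamination effect the model corrects for can be sketched in a few lines. This is an illustrative computation of the expected mixed-sample signal, not the paper's hidden Markov model; the function name and the assumption that normal tissue is diploid heterozygous (1+1 copies) at the SNP are ours.

```python
# Sketch (hypothetical function name): expected signal at a SNP when a
# fraction `p_normal` of the sample is normal tissue (genotype AB, 1+1
# copies) and the rest is tumor with allele-specific copies (c_a, c_b).
def expected_signal(c_a, c_b, p_normal):
    """Return (total copy number, B-allele fraction) of the mixed sample."""
    total = p_normal * 2.0 + (1.0 - p_normal) * (c_a + c_b)
    b_allele = (p_normal * 1.0 + (1.0 - p_normal) * c_b) / total
    return total, b_allele

# A single-copy deletion of the B allele (c_a=1, c_b=0) with 70% normal
# contamination is pulled most of the way back toward the normal (2.0, 0.5),
# which is why estimation without a contamination term is difficult.
print(expected_signal(1, 0, 0.7))  # approximately (1.7, 0.412)
```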
The Estimate of the Optimum Number of Retail Stores of Small Market Areas Using Agent Model
Tajima, Takuya; Hibino, Takayuki; Abe, Takehiko; Kimura, Haruhiko
At present, the location conditions and optimal arrangements for retail stores of small market areas are examined through several surveys. The surveys are important because proceeds are largely influenced by the choice of location. However, surveys require money, time, and expertise. For this reason, this research targets the retail stores of small market areas that spend a great deal of money on such surveys; the retail stores of small market areas considered in this paper are convenience stores. The purpose of this paper is to estimate the optimum number of convenience stores by computer simulation. We adopted an agent model, constructing one with a customer agent, a shop agent, and a landscape using the minimum necessary parameters, and thereby built a simulation environment that reflects the real world. As a result, we could estimate the optimum number of convenience stores by simulation.
Evaluation of a Mysis bioenergetics model
Chipps, S.R.; Bennett, D.H.
2002-01-01
Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10 °C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
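The mean-square-error decomposition mentioned above can be sketched with a standard partition into bias, variance-difference, and lack-of-correlation (random) components. The abstract does not name the exact decomposition used, so the Kobayashi–Salas form below is an assumption; the 70%-random-error figure would be the third component divided by the total MSE.

```python
import numpy as np

def mse_decomposition(pred, obs):
    """Partition mean square error into bias, variance-difference, and
    random (lack-of-correlation) parts; the three terms sum to the MSE."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    bias = (pred.mean() - obs.mean()) ** 2          # systematic offset
    sdp, sdo = pred.std(), obs.std()
    var_diff = (sdp - sdo) ** 2                     # amplitude mismatch
    r = np.corrcoef(pred, obs)[0, 1]
    random_err = 2.0 * sdp * sdo * (1.0 - r)        # unexplained scatter
    return bias, var_diff, random_err
```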
On the solutions of the $Z_n$-Belavin model with arbitrary number of sites
Hao, Kun; Li, Guang-Liang; Yang, Wen-Li; Shi, Kangjie; Wang, Yupeng
2016-01-01
The periodic $Z_n$-Belavin model on a lattice with an arbitrary number of sites $N$ is studied via the off-diagonal Bethe Ansatz method (ODBA). The eigenvalues of the corresponding transfer matrix are given in terms of a unified inhomogeneous $T-Q$ relation. In the special case of $N=nl$ with $l$ being also a positive integer, the resulting $T-Q$ relation recovers the homogeneous one previously obtained via algebraic Bethe Ansatz.
Assessing Model Assumptions for Turbulent Premixed Combustion at High Karlovitz Number
2015-09-03
Flames in the high-Karlovitz regime are characterized and modeled using Direct Numerical Simulations (DNS) with detailed chemistry.
Influence of Turbulence Model for Wind Turbine Simulation in Low Reynolds Number
Directory of Open Access Journals (Sweden)
Masami Suzuki
2016-01-01
Full Text Available In designing a wind turbine, the validation of the mathematical model’s result is normally carried out by comparison with wind tunnel experiment data. However, the Reynolds number of the wind tunnel experiment is low, and the flow does not match fully developed turbulence on the leading edge of a wind turbine blade. Therefore, the transition area from laminar to turbulent flow becomes wide under these conditions, and the separation point is difficult to predict using turbulence models. The prediction precision decreases dramatically when working with tip speed ratios less than the maximum power point. This study carries out a steady calculation with a turbulence model and an unsteady calculation with a laminar model for a three-blade horizontal-axis wind turbine. The calculations are validated by comparison with experimental results. The power coefficients calculated without turbulence models are in agreement with the experimental data for a tip speed ratio greater than 5.
Link between reduced nephron number and hypertension: studies in a mutant mouse model.
Poladia, Deepali Pitre; Kish, Kayle; Kutay, Benjamin; Bauer, John; Baum, Michel; Bates, Carlton M
2006-04-01
Low birth weight (LBW) infants with reduced nephron numbers have significantly increased risk for hypertension later in life, which is a devastating health problem. The risk from a reduction in nephron number alone is not clear. Recently, using a conditional knock-out approach, we have developed a mutant mouse with reduced nephron number in utero and no change in birth weight, by deleting fibroblast growth factor receptor 2 (fgfr2) in the ureteric bud. Our purpose was to investigate the role of in utero reduced nephron number alone, in the absence of LBW, as a risk for developing hypertension in adulthood. Using tail cuff blood pressure measurements we observed significant increases in systolic blood pressure in one-year-old mutant mice versus controls. We also detected cardiac end-organ injury from hypertension, as shown by significant increases in normalized heart weights, left ventricular (LV) wall thickness, and LV tissue area. Two-dimensional echocardiography revealed no changes in cardiac output and therefore significant increases in systemic vascular resistance in mutants versus controls. We also observed increases in serum blood urea nitrogen (BUN) levels and histologic evidence of glomerular and renal tubular injury in mutant mice versus controls. Thus, these studies suggest that our mutant mice may serve as a relevant model to study the link between reduction of nephron number in utero and the risk of hypertension and chronic renal failure in adulthood.
A top-down model to generate ensembles of runoff from a large number of hillslopes
Directory of Open Access Journals (Sweden)
P. R. Furey
2013-09-01
Full Text Available We hypothesize that total hillslope water loss for a rainfall–runoff event is inversely related to a function of a lognormal random variable, based on basin- and point-scale observations taken from the 21 km2 Goodwin Creek Experimental Watershed (GCEW in Mississippi, USA. A top-down approach is used to develop a new runoff generation model both to test our physical-statistical hypothesis and to provide a method of generating ensembles of runoff from a large number of hillslopes in a basin. The model is based on the assumption that the probability distributions of a runoff/loss ratio have a space–time rescaling property. We test this assumption using streamflow and rainfall data from GCEW. For over 100 rainfall–runoff events, we find that the spatial probability distributions of a runoff/loss ratio can be rescaled to a new distribution that is common to all events. We interpret random within-event differences in runoff/loss ratios in the model to arise from soil moisture spatial variability. Observations of water loss during events in GCEW support this interpretation. Our model preserves water balance in a mean statistical sense and supports our hypothesis. As an example, we use the model to generate ensembles of runoff at a large number of hillslopes for a rainfall–runoff event in GCEW.
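The ensemble-generation idea above can be sketched compactly: draw a lognormal runoff/loss ratio per hillslope and map rainfall to runoff. All parameter values below are invented for illustration; they are not the GCEW estimates, and the mapping runoff = rainfall · ratio/(1 + ratio) is simply the water-balance identity runoff + loss = rainfall written in terms of the ratio.

```python
import numpy as np

# Illustrative sketch (parameter values hypothetical, not from GCEW):
# a lognormal runoff/loss ratio per hillslope turns event rainfall into
# an ensemble of hillslope runoff depths while preserving water balance.
rng = np.random.default_rng(42)

def runoff_ensemble(rainfall_mm, n_hillslopes, mu, sigma):
    """Runoff = rainfall * ratio / (1 + ratio), ratio ~ lognormal(mu, sigma)."""
    ratio = rng.lognormal(mean=mu, sigma=sigma, size=n_hillslopes)
    return rainfall_mm * ratio / (1.0 + ratio)

runoff = runoff_ensemble(50.0, 1000, mu=-1.0, sigma=0.8)
# every hillslope yields between 0 and the full 50 mm of event rainfall
```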
Evaluation of Usability Utilizing Markov Models
Penedo, Janaina Rodrigues; Diniz, Morganna; Ferreira, Simone Bacellar Leal; Silveira, Denis S.; Capra, Eliane
2012-01-01
Purpose: The purpose of this paper is to analyze the usability of a remote learning system in its initial development phase, using a quantitative usability evaluation method through Markov models. Design/methodology/approach: The paper opted for an exploratory study. The data of interest of the research correspond to the possible accesses of users…
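A minimal sketch of the kind of Markov-model analysis the abstract describes: model user navigation between interface states as a Markov chain and examine the long-run occupancy of each state. The states, transition probabilities, and the use of power iteration are our invention, not the paper's actual chain.

```python
import numpy as np

# Toy sketch (states and probabilities invented): user navigation in a
# remote learning system as a Markov chain over interface states.
P = np.array([          # rows: from-state, columns: to-state
    [0.1, 0.6, 0.3],    # Home   -> Home / Course / Help
    [0.2, 0.7, 0.1],    # Course
    [0.5, 0.4, 0.1],    # Help
])

def stationary(P, iters=200):
    """Long-run state occupancy, found by repeated multiplication."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

pi = stationary(P)
# high long-run mass on 'Help' relative to 'Course' would flag a usability
# problem: users spend time looking for help instead of using the content
```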
Near-wall variable-Prandtl-number turbulence model for compressible flows
Sommer, T. P.; So, R. M. C.; Zhang, H. S.
1993-01-01
A near-wall four-equation turbulence model is developed for the calculation of high-speed compressible turbulent boundary layers. The four equations used are the k-ε equations and the θ²-ε_θ equations. These equations are used to define the turbulent diffusivities for momentum and heat fluxes, thus allowing the assumption of dynamic similarity between momentum and heat transport to be relaxed. The Favre-averaged equations of motion are solved in conjunction with the four transport equations. Calculations are compared with measurements and with the predictions of another model in which a constant turbulent Prandtl number is assumed. Compressible flat plate turbulent boundary layers with both adiabatic and constant temperature wall boundary conditions are considered. Results for the range of low Mach numbers and temperature ratios investigated are essentially the same as those obtained using an identical near-wall k-ε model. In general, there are significant improvements in the predictions of mean flow properties at high Mach numbers.
Evaluating spatial patterns in hydrological modelling
DEFF Research Database (Denmark)
Koch, Julian
… of environmental science, such as meteorology, geostatistics or geography. In total, seven metrics are evaluated with respect to their capability to quantitatively compare spatial patterns. Human visual perception is often considered superior to computer-based measures, because it integrates various dimensions of spatial information in a holistic assessment; statistical measures, in contrast, typically address only a limited amount of spatial information. A web-based survey and a citizen science project are employed to quantify the collective perceptive skills of humans, aiming at benchmarking spatial metrics with respect to their capability to mimic human evaluations. This PhD thesis aims at expanding the standard toolbox of spatial model evaluation with innovative metrics that adequately compare spatial patterns. Driven by the rise of more complex model structures and the increase of suitable remote sensing …
Evaluation of help model replacement codes
Energy Technology Data Exchange (ETDEWEB)
Whiteside, Tad [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hang, Thong [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, Gregory [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2009-07-01
This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at the Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that use unsaturated flow (Richards’ equation). It provides a literature review of the HELP model and the proposed codes, which results in two codes recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing simulations on a simple model and comparing the results of those simulations with those obtained with the HELP code and with field data. From the results of this work, we conclude that the new codes perform nearly the same; moving forward, we recommend HYDRUS-2D3D.
Broadcast Evaluation Report Number Four: Industrial Chemistry Component 524: TV7 R3.
Gallagher, Margaret
The Institute of Educational Technology of the British Open University evaluated an Open University broadcast course in the chemistry of carbon compounds. Industrial chemistry was a separate but parallel component of the course which was presented by television and radio broadcast. Questionnaires, telephone interviews, and group discussions were…
Bechtold, Peter; Hohenstein, Ralph; Schmidt, Michael
2013-08-15
We introduce a method to objectively evaluate systems of differing beam deflection technologies that commonly are described by disparate technical specifications. Using our new approach based on resolvable spots we will compare commercially available random-access beam deflection technologies, namely galvanometer scanners, piezo scanners, MEMS scanners, acousto-optic deflectors, and electro-optic deflectors.
Increasing the number of single nucleotide polymorphisms used in genomic evaluation of dairy cattle
GeneSeek designed a new version of the GeneSeek Genomic Profiler HD BeadChip for Dairy Cattle, which had >77,000 single nucleotide polymorphisms (SNPs). A set of >140,000 SNPs was selected that included all SNPs on the existing GeneSeek chip, all SNPs used in U.S. national genomic evaluations, SNPs ...
Evaluating the TD model of classical conditioning.
Ludvig, Elliot A; Sutton, Richard S; Kehoe, E James
2012-09-01
The temporal-difference (TD) algorithm from reinforcement learning provides a simple method for incrementally learning predictions of upcoming events. Applied to classical conditioning, TD models suppose that animals learn a real-time prediction of the unconditioned stimulus (US) on the basis of all available conditioned stimuli (CSs). In the TD model, similar to other error-correction models, learning is driven by prediction errors--the difference between the change in US prediction and the actual US. With the TD model, however, learning occurs continuously from moment to moment and is not artificially constrained to occur in trials. Accordingly, a key feature of any TD model is the assumption about the representation of a CS on a moment-to-moment basis. Here, we evaluate the performance of the TD model with a heretofore unexplored range of classical conditioning tasks. To do so, we consider three stimulus representations that vary in their degree of temporal generalization and evaluate how the representation influences the performance of the TD model on these conditioning tasks.
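The moment-to-moment learning rule described above can be sketched minimally. This is a generic TD(0) prediction sketch using a complete-serial-compound style representation (one binary feature per time step of the CS, the US arriving at CS offset); it is not the paper's exact implementation, and all parameter values are invented.

```python
# Minimal TD(0) sketch of US prediction with a complete-serial-compound
# (CSC) stimulus representation: one weight per time step of the CS.
def td_conditioning(n_trials=100, cs_len=5, alpha=0.1, gamma=0.95):
    w = [0.0] * cs_len                  # prediction weight per CSC feature
    for _ in range(n_trials):
        for t in range(cs_len):
            v_t = w[t]                                  # prediction now
            v_next = w[t + 1] if t + 1 < cs_len else 0.0
            us = 1.0 if t == cs_len - 1 else 0.0        # US at CS offset
            delta = us + gamma * v_next - v_t           # TD error
            w[t] += alpha * delta                       # error correction
    return w

w = td_conditioning()
# the prediction grows across the CS, peaking just before the US,
# mirroring the real-time prediction curve the TD model produces
```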
Wang, Wentao
2012-03-01
Both theoretical analysis and nonlinear 2D numerical simulations are used to study the concentration difference and Peclet number effect on the measurement error of electroosmotic mobility in microchannels. We propose a compact analytical model for this error as a function of normalized concentration difference and Peclet number in micro electroosmotic flow. The analytical predictions of the errors are consistent with the numerical simulations. © 2012 IEEE.
Evaluating computational models of cholesterol metabolism.
Paalvast, Yared; Kuivenhoven, Jan Albert; Groen, Albert K
2015-10-01
Regulation of cholesterol homeostasis has been studied extensively during the last decades. Many of the metabolic pathways involved have been discovered, yet important gaps in our knowledge remain. For example, knowledge of intracellular cholesterol traffic and its relation to the regulation of cholesterol synthesis and plasma cholesterol levels is incomplete. One way of addressing the remaining questions is by making use of computational models. Here, we critically evaluate existing computational models of cholesterol metabolism based on ordinary differential equations, and assess whether their assumptions and predictions are in line with current knowledge of cholesterol homeostasis. Having studied the results described by the authors, we also tested their models, primarily by simulating the effect of statin treatment in each model. Ten out of eleven models tested made assumptions in line with current knowledge of cholesterol metabolism. Three out of the ten remaining models made correct predictions, i.e. predicting a decrease in plasma total and LDL cholesterol, or increased uptake of LDL, upon statin treatment. In conclusion, few models of cholesterol metabolism are able to pass a functional test. Apparently most models have not undergone the critical iterative systems biology cycle of validation. We expect modeling of cholesterol metabolism to go through many more model topologies and iterative cycles, and welcome the increased understanding of cholesterol metabolism these are likely to bring.
Whitmore, Stephen A.; Petersen, Brian J.; Scott, David D.
1996-01-01
This paper develops a dynamic model for pressure sensors in continuum and rarefied flows with longitudinal temperature gradients. The model was developed from the unsteady Navier-Stokes momentum, energy, and continuity equations and was linearized using small perturbations. The energy equation was decoupled from momentum and continuity assuming a polytropic flow process. Rarefied flow conditions were accounted for using a slip flow boundary condition at the tubing wall. The equations were radially averaged and solved assuming gas properties remain constant along a small tubing element. This fundamental solution was used as a building block for arbitrary geometries where fluid properties may also vary longitudinally in the tube. The problem was solved recursively, starting at the transducer and working upstream in the tube. Dynamic frequency response tests were performed for continuum flow conditions in the presence of temperature gradients. These tests validated the recursive formulation of the model. Model steady-state behavior was analyzed using the final value theorem. Tests were performed for rarefied flow conditions and compared to the model steady-state response to evaluate the regime of applicability. Model comparisons were excellent for Knudsen numbers up to 0.6. Beyond this point, molecular effects caused model analyses to become inaccurate.
Examining the Impact of Prandtl Number and Surface Convection Models on Deep Solar Convection
O'Mara, B. D.; Augustson, K.; Featherstone, N. A.; Miesch, M. S.
2015-12-01
Turbulent motions within the solar convection zone play a central role in the generation and maintenance of the Sun's magnetic field. This magnetic field reverses its polarity every 11 years and serves as the source of powerful space weather events, such as solar flares and coronal mass ejections, which can affect artificial satellites and power grids. The structure and inductive properties are linked to the amplitude (i.e. speed) of convective motion. Using the NASA Pleiades supercomputer, a 3D fluids code simulates these processes by evolving the Navier-Stokes equations in time and under an anelastic constraint. This code simulates the fluxes describing heat transport in the sun in a global spherical-shell geometry. Such global models can explicitly capture the large-scale motions in the deep convection zone but heat transport from unresolved small-scale convection in the surface layers must be parameterized. Here we consider two models for heat transport by surface convection, including a conventional turbulent thermal diffusion as well as an imposed flux that carries heat through the surface in a manner that is independent of the deep convection and the entropy stratification it establishes. For both models, we investigate the scaling of convective amplitude with decreasing diffusion (increasing Rayleigh number). If the Prandtl number is fixed, we find that the amplitude of convective motions increases with decreasing diffusion, possibly reaching an asymptotic value in the low diffusion limit. However, if only the thermal diffusion is decreased (keeping the viscosity fixed), we find that the amplitude of convection decreases with decreasing diffusion. Such a high-Prandtl-number, high-Peclet-number limit may be relevant for the Sun if magnetic fields mix momentum, effectively acting as an enhanced viscosity. In this case, our results suggest that the amplitude of large-scale convection in the Sun may be substantially less than in current models that employ an
Evaluation of the Design Metric to Reduce the Number of Defects in Software Development
Qureshi, M Rizwan Jameel; 10.5815/ijitcs.2012.04.02
2012-01-01
Software design is one of the most important and key activities in the system development life cycle (SDLC) that ensures the quality of software. Different key areas of design are vital to take into consideration while designing software. Software design describes how the software system is decomposed and managed in smaller components. The object-oriented (OO) paradigm has provided the software industry with more reliable and manageable software and design. The quality of a software design can be measured through different metrics such as the Chidamber and Kemerer (CK) design metrics, the MOOD metrics, and the Lorenz and Kidd metrics. The CK suite is one of the oldest and most reliable metrics available to the software industry for evaluating OO design. This paper presents an evaluation of the CK metrics and proposes improved CK design metric values to reduce defects during the software design phase. This paper will also describe whether a significant effect of any CK design metri...
[IMSS in numbers. Evaluation of the performance of health institutions in Mexico, 2004].
2006-01-01
The evaluation of health institutions performance in Mexico during 2004 was done using 29 indicators that describe intra-hospital mortality rates, productivity of health services, availability of health resources, quality of care, security, investment and costs of health care and the satisfaction level by users of health services. This exercise describes the efficiency and organization of health services provided by the different health institutions and allows comparing and balancing the performance of each institution. Results indicate the differences in availability of resources, inequity in the financing health care services, and inefficiency in the use of resources but also describe the level of efficacy of certain institutions and the satisfaction level that different users have of health services. The evaluation of the performance of the entire health institutions should provide the means to improve all the process of health care and to increase the quality of care in all health institutions in the country.
Model evaluation methodology applicable to environmental assessment models
Energy Technology Data Exchange (ETDEWEB)
Shaeffer, D.L.
1979-08-01
A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers
Kawai, Soshi; Larsson, Johan
2013-01-01
A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.
Modeling of Sunspot Numbers by a Modified Binary Mixture of Laplace Distribution Functions
Sabarinath, A.; Anilkumar, A. K.
2008-07-01
This paper presents a new approach for describing the shape of 11-year sunspot cycles by considering the monthly averaged values. This paper also brings out a prediction model based on the analysis of 22 sunspot cycles from the year 1749 onward. It is found that the shape of the sunspot cycles with monthly averaged values can be described by a functional form of modified binary mixture of Laplace density functions, modified suitably by introducing two additional parameters in the standard functional form. The six parameters, namely two locations, two scales, and two area parameters, characterize this model. The nature of the estimated parameters for the sunspot cycles from 1749 onward has been analyzed and finally we arrived at a sufficient set of the parameters for the proposed model. It is seen that this model picks up the sunspot peaks more closely than any other model without losing the match at other places at the same time. The goodness of fit for the proposed model is also computed with the Hathaway-Wilson-Reichmann χ̄ measure, which shows, on average, that the fitted model passes within 0.47 standard deviations of the actual averaged monthly sunspot numbers.
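The six-parameter functional form described above (two locations, two scales, two areas) can be sketched as a sum of two Laplace-shaped terms. The parameter values below are illustrative, not the paper's fitted estimates, and the sketch omits the paper's two additional shape-modifying parameters.

```python
import math

# Sketch of a binary mixture of Laplace density shapes:
#   f(t) = a1*exp(-|t - m1|/s1) + a2*exp(-|t - m2|/s2)
# with locations m, scales s, and area (amplitude) parameters a.
def cycle_shape(t, a1, m1, s1, a2, m2, s2):
    return (a1 * math.exp(-abs(t - m1) / s1)
            + a2 * math.exp(-abs(t - m2) / s2))

# Monthly curve of a hypothetical 11-year cycle peaking near month 45,
# with a smaller secondary bump on the declining phase near month 80:
curve = [cycle_shape(t, 100, 45, 20, 40, 80, 15) for t in range(132)]
```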
AN INTEGRATED FUZZY AHP AND TOPSIS MODEL FOR SUPPLIER EVALUATION
Directory of Open Access Journals (Sweden)
Željko Stević
2016-05-01
Full Text Available In today’s modern supply chains, the adequate choice of suppliers has strategic meaning for a company’s entire business. The aim of this paper is to evaluate different suppliers using an integrated model that combines fuzzy AHP (Analytical Hierarchy Process) and the TOPSIS method. An expert team was formed to compare six criteria, and the significance of the criteria is determined with the fuzzy AHP method. The expert team also compares the suppliers according to each criterion on the basis of triangular fuzzy numbers. Based on these inputs, the TOPSIS method is used to rank the potential solutions. The suggested model offers certain advantages in comparison with the traditional models previously used for decisions about the evaluation and choice of suppliers.
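The two stages can be sketched compactly: defuzzify triangular fuzzy weights from the AHP stage, then rank suppliers by TOPSIS closeness. This is a generic sketch, not the paper's model: the centroid defuzzification, the treatment of all criteria as benefit criteria, and every number below are our assumptions.

```python
import numpy as np

def defuzzify(tri):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    l, m, u = tri
    return (l + m + u) / 3.0

def topsis(scores, weights):
    """scores: suppliers x criteria (benefit criteria); returns closeness
    of each supplier to the ideal solution, in [0, 1]."""
    norm = scores / np.linalg.norm(scores, axis=0)  # vector normalization
    v = norm * weights                              # weighted matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)      # ideal / anti-ideal
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Invented example: fuzzy AHP weights for three criteria, three suppliers.
weights = np.array([defuzzify(w) for w in [(0.2, 0.3, 0.4),
                                           (0.1, 0.2, 0.3),
                                           (0.3, 0.5, 0.7)]])
scores = np.array([[7.0, 5.0, 8.0],
                   [6.0, 9.0, 4.0],
                   [8.0, 7.0, 6.0]])
ranking = topsis(scores, weights / weights.sum())  # higher = better supplier
```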
Lifetime-Aware Cloud Data Centers: Models and Performance Evaluation
Directory of Open Access Journals (Sweden)
Luca Chiaraviglio
2016-06-01
Full Text Available We present a model to evaluate the server lifetime in cloud data centers (DCs. In particular, when the server power level is decreased, the failure rate tends to be reduced as a consequence of the limited number of components powered on. However, the variation between the different power states triggers a failure rate increase. We therefore consider these two effects in a server lifetime model, subject to an energy-aware management policy. We then evaluate our model in a realistic case study. Our results show that the impact on the server lifetime is far from negligible. As a consequence, we argue that a lifetime-aware approach should be pursued to decide how and when to apply a power state change to a server.
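The trade-off the abstract describes, a lower failure rate in low-power states versus extra wear from power-state transitions, can be illustrated with a deliberately simplified rate model. This is our own sketch of the idea, not the paper's equations; all rates and penalties are invented.

```python
def mean_lifetime_years(rate_on, rate_sleep, frac_sleep,
                        transitions_per_day, wear_per_transition):
    """Illustrative server lifetime model (not the paper's exact model).

    The base yearly failure rate is the time-weighted mix of the
    full-power and low-power rates; every power-state change adds a
    fixed wear-out penalty to the yearly rate. Lifetime is taken as
    the reciprocal of the total rate.
    """
    base = (1.0 - frac_sleep) * rate_on + frac_sleep * rate_sleep
    wear = transitions_per_day * 365.0 * wear_per_transition
    return 1.0 / (base + wear)

# Three hypothetical management policies for the same server:
always_on = mean_lifetime_years(0.20, 0.05, 0.0, 0.0, 0.0)
gentle = mean_lifetime_years(0.20, 0.05, 0.5, 2.0, 1e-5)     # few switches
churny = mean_lifetime_years(0.20, 0.05, 0.5, 200.0, 1e-5)   # many switches
```

With these toy numbers, sleeping half the time with a couple of transitions per day extends lifetime, while aggressive switching shortens it below the always-on baseline, which is the qualitative point the paper's lifetime-aware policy addresses.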
Helical turbulent Prandtl number in the A model of passive vector advection
Hnatič, M.; Zalom, P.
2016-11-01
Using the field theoretic renormalization group technique in the two-loop approximation, turbulent Prandtl numbers are obtained in the general A model of a passive vector advected by a fully developed turbulent velocity field, with violation of spatial parity introduced via the continuous parameter ρ ranging from ρ = 0 (no violation of spatial parity) to |ρ| = 1 (maximum violation of spatial parity). The value of A is a continuously adjustable parameter which governs the interaction structure of the model. In nonhelical environments, we demonstrate that A is restricted to the interval -1.723 ≤ A ≤ 2.800 (rounded to 3 decimal places) in the two-loop order of the field theoretic model. However, when ρ > 0.749 (rounded to 3 decimal places), the restrictions may be removed, which means that the presence of helicity exerts a stabilizing effect on the possible stationary regimes of the system. Furthermore, the three physically important cases A ∈ {-1, 0, 1} are shown to lie deep within the allowed interval of A for all values of ρ. For the model of the linearized Navier-Stokes equations (A = -1), the previously unknown helical values of the turbulent Prandtl number are shown to equal 1 regardless of parity violation. Furthermore, we show that the interaction parameter A exerts strong influence on advection-diffusion processes in turbulent environments with broken spatial parity. By varying A continuously, we explain the high stability of the kinematic MHD model (A = 1) against helical effects as a result of its proximity to the A = 0.912 (rounded to 3 decimal places) case, where helical effects are completely suppressed. In contrast, for the physically important A = 0 model, we show that it lies deep within the interval of models where helical effects cause the turbulent Prandtl number to decrease with |ρ|. We thus identify the internal structure of interactions given by the parameter A, and not the vector character of the admixture itself, as the dominant factor influencing diffusion.
Resonance decay effect on conserved number fluctuations in a hadron resonance gas model
Mishra, D K; Netrakanti, P K; Mohanty, A K
2016-01-01
We study the effect of charged secondaries coming from resonance decay on the net-baryon, net-charge and net-strangeness fluctuations in high energy heavy-ion collisions within the hadron resonance gas (HRG) model. We emphasize the importance of including weak decays along with other resonance decays in the HRG, while comparing with the experimental observables. The effect of kinematic cuts on resonances and primordial particles on the conserved number fluctuations is also studied. The HRG model calculations with the inclusion of resonance decays and kinematical cuts are compared with the recent experimental data from STAR and PHENIX experiments. We find good agreement between our model calculations and the experimental measurements for both net-proton and net-charge distributions.
Effect of resonance decay on conserved number fluctuations in a hadron resonance gas model
Mishra, D. K.; Garg, P.; Netrakanti, P. K.; Mohanty, A. K.
2016-07-01
We study the effect of charged secondaries coming from resonance decay on the net-baryon, net-charge, and net-strangeness fluctuations in high-energy heavy-ion collisions within the hadron resonance gas (HRG) model. We emphasize the importance of including weak decays along with other resonance decays in the HRG, while comparing with the experimental observables. The effect of kinematic cuts on resonances and primordial particles on the conserved number fluctuations is also studied. The HRG model calculations with the inclusion of resonance decays and kinematical cuts are compared with the recent experimental data from STAR and PHENIX experiments. We find good agreement between our model calculations and the experimental measurements for both net-proton and net-charge distributions.
Dadzie, S Kokou; Reese, Jason M
2012-04-01
There are some hydrodynamic equations that, while their parent kinetic equation satisfies fundamental mechanical properties, appear themselves to violate mechanical or thermodynamic properties. This paper aims to shed some light on the source of this problem. Starting with diffusive volume hydrodynamic models, the microscopic temporal and spatial scales are first separated at the kinetic level from the macroscopic scales at the hydrodynamic level. Then, we consider Klimontovich's spatial stochastic version of the Boltzmann kinetic equation and show that, for small local Knudsen numbers, the stochastic term vanishes and the kinetic equation becomes the Boltzmann equation. The collision integral dominates in the small local Knudsen number regime, which is associated with the exact traditional continuum limit. We find a subdomain of the continuum range, which the conventional Knudsen number classification does not account for appropriately. In this subdomain, it is possible to obtain a fully mechanically consistent volume (or mass) diffusion model that satisfies the second law of thermodynamics on the grounds of extended non-local-equilibrium thermodynamics.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stability of the estimates was checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards.
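The empirical likelihood ratio idea can be sketched for categorical predictors: estimate, per predictor class, the ratio of its frequency among landslide cells to its frequency among stable cells, then multiply the ratios across predictors under the conditional independence assumption. The tiny data set below is invented for illustration.

```python
import numpy as np

def likelihood_ratios(values, labels):
    """Per-class empirical likelihood ratio P(v | slide) / P(v | stable)
    for one categorical predictor (e.g. binned slope angle or lithology)."""
    values = np.asarray(values)
    labels = np.asarray(labels, dtype=bool)
    lr = {}
    for v in np.unique(values):
        p_slide = float(np.mean(values[labels] == v))
        p_stable = float(np.mean(values[~labels] == v))
        lr[v] = p_slide / p_stable if p_stable > 0 else float("inf")
    return lr

def hazard_score(cell, tables):
    """Combine predictors by multiplying their ratios -- the conditional
    independence assumption on which the empirical model rests."""
    score = 1.0
    for v, table in zip(cell, tables):
        score *= table[v]
    return score

# Six hypothetical map cells, two of which slid.
slope = ["steep", "steep", "flat", "steep", "flat", "flat"]
rock = ["shale", "shale", "limestone", "limestone", "shale", "limestone"]
slid = [True, True, False, False, False, False]

lr_slope = likelihood_ratios(slope, slid)
lr_rock = likelihood_ratios(rock, slid)
score = hazard_score(("steep", "shale"), [lr_slope, lr_rock])
```

Scores produced this way are relative: ranking all cells by score and reporting the proportion of map area above a threshold mirrors how the paper presents its hazard estimates.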
Principal component and factor analytic models in international sire evaluation
Directory of Open Access Journals (Sweden)
Jakobsen Jette
2011-09-01
Full Text Available Abstract Background Interbull is a non-profit organization that provides internationally comparable breeding values for globalized dairy cattle breeding programmes. Due to different trait definitions and models for genetic evaluation between countries, each biological trait is treated as a different trait in each of the participating countries. This yields a genetic covariance matrix of dimension equal to the number of countries which typically involves high genetic correlations between countries. This gives rise to several problems such as over-parameterized models and increased sampling variances, if genetic (co)variance matrices are considered to be unstructured. Methods Principal component (PC) and factor analytic (FA) models allow highly parsimonious representations of the (co)variance matrix compared to the standard multi-trait model and have, therefore, attracted considerable interest for their potential to ease the burden of the estimation process for multiple-trait across country evaluation (MACE). This study evaluated the utility of PC and FA models to estimate variance components and to predict breeding values for MACE for protein yield. This was tested using a dataset comprising Holstein bull evaluations obtained in 2007 from 25 countries. Results In total, 19 principal components or nine factors were needed to explain the genetic variation in the test dataset. Estimates of the genetic parameters under the optimal fit were almost identical for the two approaches. Furthermore, the results were in a good agreement with those obtained from the full rank model and with those provided by Interbull. The estimation time was shortest for models fitting the optimal number of parameters and prolonged when under- or over-parameterized models were applied. Correlations between estimated breeding values (EBV) from the PC19 and PC25 were unity. With few exceptions, correlations between EBV obtained using FA and PC approaches under the optimal fit were
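The rank-reduction idea behind the PC model can be sketched with a plain eigendecomposition: keep only the leading principal components of a highly correlated (co)variance matrix. The 25x25 matrix below is synthetic, generated from three latent factors, and stands in for the between-country genetic covariance matrix; the 99% variance threshold is likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 25x25 covariance with strong cross-"country" correlations:
# three latent factors plus a small diagonal residual.
loadings = rng.normal(size=(25, 3))
cov = loadings @ loadings.T + 0.05 * np.eye(25)

# Eigendecomposition, sorted into descending order.
eigval, eigvec = np.linalg.eigh(cov)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

# Number of principal components needed to explain 99% of the variance.
explained = np.cumsum(eigval) / eigval.sum()
k = int(np.searchsorted(explained, 0.99)) + 1

# Reduced-rank reconstruction from the k leading components.
cov_reduced = (eigvec[:, :k] * eigval[:k]) @ eigvec[:, :k].T
rel_err = np.linalg.norm(cov - cov_reduced) / np.linalg.norm(cov)
```

Far fewer than 25 components reproduce the matrix almost exactly, which is the parsimony that makes the PC and FA parameterizations attractive for MACE.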
Supply Chain Collaboration Risk Evaluation Based on Trapezoidal Fuzzy Numbers Similarity
Directory of Open Access Journals (Sweden)
Lei Wen
2013-02-01
Full Text Available Supply chains are confronted with more complicated risks in the current financial crisis, which makes risk control in supply chain management more urgent. Supply chain collaboration, as an important part of supply chain management, is a key method of improving supply chain profits. This study introduces risk management into the mechanism of supply chain collaboration. In supply chain risk management, collaboration risk is recognized as an important component that can make SCM more efficient. This study presents a collaboration risk analysis method based on the similarity of trapezoidal fuzzy numbers. Using two linguistic terms, probability of failure and severity of loss, supply chain collaboration risk can be calculated and expressed as a linguistic term. Finally, an example is used to demonstrate the efficiency of this method.
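A common way to compare trapezoidal fuzzy numbers is Chen's similarity measure: one minus the mean absolute difference of the four defining points. The sketch below uses that measure to map an assessed risk back to the nearest linguistic term; the numeric supports of the terms are illustrative, not taken from the paper.

```python
def trapezoid_similarity(a, b):
    """Chen-style similarity between trapezoidal fuzzy numbers
    a = (a1, a2, a3, a4) and b = (b1, b2, b3, b4) on [0, 1]:
    S = 1 - mean absolute difference of the four defining points."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / 4.0

# Illustrative linguistic terms for "probability of failure".
LOW = (0.0, 0.1, 0.2, 0.3)
HIGH = (0.6, 0.7, 0.8, 0.9)

# A risk assessed as a trapezoidal fuzzy number is expressed back as
# the linguistic term it is most similar to.
assessed = (0.55, 0.7, 0.75, 0.9)
closest = max([("low", LOW), ("high", HIGH)],
              key=lambda term: trapezoid_similarity(assessed, term[1]))[0]
```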
Number of Children and Telomere Length in Women: A Prospective, Longitudinal Evaluation
Barha, Cindy K.; Hanna, Courtney W.; Salvante, Katrina G.; Wilson, Samantha L.; Robinson, Wendy P.; Altman, Rachel M.; Nepomnaschy, Pablo A.
2016-01-01
Life history theory (LHT) predicts a trade-off between reproductive effort and the pace of biological aging. Energy invested in reproduction is not available for tissue maintenance, thus having more offspring is expected to lead to accelerated senescence. Studies conducted in a variety of non-human species are consistent with this LHT prediction. Here we investigate the relationship between the number of surviving children born to a woman and telomere length (TL, a marker of cellular aging) over 13 years in a group of 75 Kaqchikel Mayan women. Contrary to LHT’s prediction, women who had fewer children exhibited shorter TLs than those who had more children (p = 0.045) after controlling for TL at the onset of the 13-year study period. An “ultimate” explanation for this apparently protective effect of having more children may lie with humans’ cooperative-breeding strategy. In a number of socio-economic and cultural contexts, having more children appears to be linked to an increase in social support for mothers (e.g., allomaternal care). Higher social support has been argued to reduce the costs of further reproduction. Lower reproductive costs may make more metabolic energy available for tissue maintenance, resulting in a slower pace of cellular aging. At a “proximate” level, the mechanisms involved may include the actions of the gonadal steroid estradiol, which increases dramatically during pregnancy. Estradiol is known to protect TL from the effects of oxidative stress as well as to increase telomerase activity, an enzyme that maintains TL. Future research should explore the potential role of social support, as well as that of estradiol and other potential biological pathways, in the trade-offs between reproductive effort and the pace of cellular aging within and among human as well as non-human populations. PMID:26731744
Determining the Quantum Numbers of Simplified Models in $t\\bar{t}X$ production at the LHC
Dolan, Matthew J; Wang, Qi; Yu, Zhao-Huan
2016-01-01
Simplified models provide an avenue for characterising and exploring New Physics for large classes of UV theories. In this article we study the ability of the LHC to probe the spin and parity quantum numbers of a new light resonance $X$ which couples predominantly to the third generation quarks in a variety of simplified models through the $t\bar t X$ channel. After evaluating the LHC discovery potential for $X$, we suggest several kinematic variables sensitive to the spin and CP properties of the new resonance. We show how an analysis exploiting differential distributions in the semi-leptonic channel can discriminate among various possibilities. We find the potential to discriminate a scalar from a pseudoscalar or (axial) vector to be particularly promising.
Joshipura, Anjan S
2010-01-01
The charged fermion mass matrices are always invariant under a $U(1)^3$ symmetry linked to the fermion number transformation. A class of two Higgs doublet models (2HDM) can be identified by requiring that the definition of this symmetry in an arbitrary weak basis be independent of Higgs parameters such as the ratio of the Higgs vacuum expectation values. The tree level flavour changing neutral currents normally present in 2HDM are absent in this class of models but, unlike in the type I or type II Higgs doublet models, the charged Higgs couplings in these models contain additional flavour dependent CP violating phases. These phases can account for the recent hints of beyond-Standard-Model CP violation in $B_d$ and $B_s$ mixing. In particular, there is a range of parameters in which the new phases do not contribute to $K$ meson CP violation but give identical new physics contributions to $B_d$ and $B_s$ meson mixing. Specific model realizations of the above scenario are briefly discussed.
The increase in the number of astrocytes in the total cerebral ischemia model in rats
Kudabayeva, M.; Kisel, A.; Chernysheva, G.; Smol'yakova, V.; Plotnikov, M.; Khodanovich, M.
2017-08-01
Astrocytes are the most abundant cell class in the CNS, and astrocytic therapies have a huge potential for neuronal repair after stroke. The majority of brain stroke studies address the damage to neurons, but modern studies turn to the use of morphological and functional changes in astroglial cells after stroke in regenerative medicine. Our study is focused on the changes in the number of astrocytes in the hippocampus (where new glial cells divide) after brain ischemia. Ischemia was modeled by occlusion of tr. brachiocephalicus, a. subclavia sin., and a. carotis communis sin. Astrocytes were identified using immunohistochemical labeling with an anti-GFAP antibody. We found that the number of astrocytes increased on the 10th and 30th days after stroke in the CA1 and CA2 fields, the granular layer of the dentate gyrus (GrDG) and the hilus, and the morphology of astrocytes in these regions became reactive. Our results therefore reveal long-term reactive astrogliosis in the hippocampus after total ischemia in rats.
Directory of Open Access Journals (Sweden)
Matsyura A.V.
2011-12-01
Full Text Available The problem of mathematical analysis of the number dynamics of nesting waterbirds on the islands of southern Ukraine is examined. An algorithm for evaluating changes in the numbers of island birds is proposed, and long-term monitoring data on bird numbers were analyzed according to it. The necessity of combining statistical indices with a graphic representation of island bird turnover is demonstrated, and trends of population dynamics are determined for the key species. The discussed procedure of complex evaluation is proposed for management planning for island bird species and their habitats. The analysis of the number dynamics of the keystone breeding island birds showed that, with the exception of the little tern, the population status and number prognosis are sufficiently favorable. From the long-term monitoring data we concluded that island habitats exist with the carrying capacity to maintain additional numbers of breeding birds. Under unfavorable conditions, such as increased anthropogenic pressure, competitive interactions, deficiency of food resources or drastic reduction of breeding biotopes, the birds, owing to turnover, are capable of responding successfully without reducing their numbers or breeding success. The extinction rate of breeding bird species from island sites correlates directly with the number of breeding species. For species of equal abundance, the extinction probability is higher for birds whose numbers are unstable and characterized by significant fluctuations. This testifies to the urgency of constant monitoring and analysis of the number dynamics of breeding bird species in the region. The suggested procedure of analysis is recommended for drawing up management plans and making prognoses of the numbers of breeding island bird species. More detail analysis with use of
Percolation Model of Insider Threats to Assess the Optimum Number of Rules
Kepner, Jeremy; Michaleas, Pete
2014-01-01
Rules, regulations, and policies are the basis of civilized society and are used to coordinate the activities of individuals who have a variety of goals and purposes. History has taught that over-regulation (too many rules) makes it difficult to compete and under-regulation (too few rules) can lead to crisis. This implies an optimal number of rules that avoids these two extremes. Rules create boundaries that define the latitude an individual has to perform their activities. This paper creates a Toy Model of a work environment and examines it with respect to the latitude provided to a normal individual and the latitude provided to an insider threat. Simulations with the Toy Model illustrate four regimes with respect to an insider threat: under-regulated, possibly optimal, tipping-point, and over-regulated. These regimes depend upon the number of rules (N) and the minimum latitude (Lmin) required by a normal individual to carry out their activities. The Toy Model is then mapped onto the standard 1D Percolation Model...
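The over-/under-regulation trade-off can be sketched with a toy calculation in the spirit of the paper's Toy Model, though this is our own simplification, not its actual equations: let each rule multiplicatively remove a random slice of an individual's latitude, and find the largest rule count that still leaves the minimum latitude Lmin.

```python
import random

def max_rules(l_min, restriction=0.05, seed=0, cap=500):
    """Toy sketch: starting from full latitude 1.0, each new rule removes
    a uniform random fraction (up to `restriction`) of what remains.
    Returns the largest number of rules that still leaves a normal
    individual at least l_min latitude."""
    rng = random.Random(seed)
    latitude, n = 1.0, 0
    while n < cap:
        cut = 1.0 - rng.uniform(0.0, restriction)
        if latitude * cut < l_min:
            break
        latitude *= cut
        n += 1
    return n
```

Raising Lmin shrinks the number of rules the environment tolerates before it becomes over-regulated for normal work, which is the qualitative boundary the paper's four regimes turn on; an insider, who needs less legitimate latitude, is constrained only by a much larger rule count.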
A NOVEL SLIGHTLY COMPRESSIBLE MODEL FOR LOW MACH NUMBER PERFECT GAS FLOW CALCULATION
Institute of Scientific and Technical Information of China (English)
邓小刚; 庄逢甘
2002-01-01
By analyzing the characteristics of low Mach number perfect gas flows, a novel Slightly Compressible Model (SCM) for low Mach number perfect gas flows is derived. For numerical calculations this model proves very efficient, for it is kept within the p-v frame but does not have to satisfy the time-consuming divergence-free condition in order to recover the incompressible Navier-Stokes equation solutions. Writing the equations in the form of conservation laws, we derive the characteristic systems necessary for numerical calculations. A cell-centered finite-volume method with flux-difference upwind-biased schemes is used for the equation solutions, and a new Exact Newton Relaxation (ENR) implicit method is developed. Various computed results are presented to validate the present model. Laminar flow solutions over a circular cylinder with wake development and vortex shedding are presented. Results for inviscid flow over a sphere are in excellent agreement with the exact analytic incompressible solution. Three-dimensional viscous flow solutions over a sphere and a prolate spheroid are also calculated and compare well with experiments and other incompressible solutions. Finally, good convergence performance is shown for sphere viscous flows.
Airfoil Aeroelastic Flutter Analysis Based on Modified Leishman-Beddoes Model at Low Mach Number
Institute of Scientific and Technical Information of China (English)
SHAO Song; ZHU Qinghua; ZHANG Chenglin; NI Xianping
2011-01-01
Based on a modified Leishman-Beddoes (L-B) state space model at low Mach number (lower than 0.3), the airfoil aeroelastic system is presented in this paper. The main modifications to the L-B model include a new dynamic stall criterion and revisions of the normal force and pitching moment coefficients. Bifurcation diagrams, limit cycle oscillation (LCO) phase plane plots and time domain response figures are applied to investigating the stall flutter bifurcation behavior of airfoil aeroelastic systems with symmetry or asymmetry. It is shown that symmetric periodical oscillation happens after subcritical bifurcation caused by dynamic stall, and that asymmetric periodical oscillation, which is caused by the interaction of dynamic stall and static divergence, only happens in the airfoil aeroelastic system with asymmetry. Validations of the modified L-B model and the airfoil aeroelastic system are presented with the experimental airload data of NACA0012 and OA207 and experimental stall flutter data of NACA0012, respectively. Results demonstrate that the airfoil aeroelastic system presented in this paper is effective and accurate, and can be applied to the investigation of airfoil stall flutter at low Mach number.
Data and simulation modeling to determine the estimated number of primary care beds in Oklahoma.
Pulat, P S; Foote, B L; Kasap, S; Splinter, G L; Lucas, M K
1999-05-01
Growth in managed care has resulted in an increased need for studies in health care planning. The article focuses on the estimated number of primary care beds needed in the state to provide quality services to all Oklahoma residents. Based on the 1996 population estimations for Oklahoma and its distribution to the 77 counties, a space-filling approach is used to determine 46 service areas, each of which will provide primary care service to its area population. Service areas are constructed such that residents of an area are within an acceptable distance to a primary care provider in the area. A simulation model is developed to determine the number of primary care beds needed in each service area. Statistical analysis on actual hospital data is used to determine the distributions of inpatient flow and length of stay. The simulation model is validated for acute care hospitals before application to the service areas. Sensitivity analysis on model input parameter values is performed to determine their effect on primary care bed calculations. The effect of age distribution on the bed requirement is also studied. The results of this study will assist the Oklahoma Health Care Authority in the development of sound health care policy decisions.
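The bed-sizing idea can be sketched as a small Monte Carlo census simulation: Poisson daily admissions, geometric lengths of stay, and a bed count chosen to cover the daily census a target fraction of the time. All parameters here are illustrative; the study fitted inpatient flow and length-of-stay distributions to actual Oklahoma hospital data.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for sampling a Poisson count; fine for small means."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def beds_needed(mean_daily_admissions, mean_los_days,
                coverage=0.95, days=4000, seed=0):
    """Simulate a daily inpatient census and return the bed count that
    covers the census `coverage` of the time (illustrative sketch, not
    the study's simulation model)."""
    rng = random.Random(seed)
    census, history = 0, []
    for day in range(days):
        # Each occupant is discharged with probability 1/mean LOS per day
        # (geometric length of stay with the right mean).
        census = sum(1 for _ in range(census)
                     if rng.random() > 1.0 / mean_los_days)
        census += poisson(rng, mean_daily_admissions)
        if day >= 500:                      # discard warm-up period
            history.append(census)
    history.sort()
    return history[int(coverage * (len(history) - 1))]

# Hypothetical service area: 4 admissions/day, 5-day average stay.
beds = beds_needed(4.0, 5.0)
```

The long-run mean census is admissions times average stay (here 20), but sizing to the mean would leave patients without beds half the time; the percentile-based count captures the safety margin a planner actually needs.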
CTBT integrated verification system evaluation model supplement
Energy Technology Data Exchange (ETDEWEB)
EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.
2000-03-02
Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, "top-level" modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.
Evaluating spatial patterns in hydrological modelling
DEFF Research Database (Denmark)
Koch, Julian
is not fully exploited by current modelling frameworks due to the lack of suitable spatial performance metrics. Furthermore, the traditional model evaluation using discharge is found unsuitable to lay confidence on the predicted catchment-inherent spatial variability of hydrological processes in a fully ... the contiguous United States (10^6 km2). To this end, the thesis at hand applies a set of spatial performance metrics to various hydrological variables, namely land-surface temperature (LST), evapotranspiration (ET) and soil moisture. The inspiration for the applied metrics is found in related fields ...
Comparative analysis of used car price evaluation models
Chen, Chuancan; Hao, Lulu; Xu, Cong
2017-05-01
An accurate used car price evaluation is a catalyst for the healthy development of used car market. Data mining has been applied to predict used car price in several articles. However, little is studied on the comparison of using different algorithms in used car price estimation. This paper collects more than 100,000 used car dealing records throughout China to do empirical analysis on a thorough comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car price in three different models: model for a certain car make, model for a certain car series and universal model. Results show that random forest has a stable but not ideal effect in price evaluation model for a certain car make, but it shows great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with less variables.
Evaluation of CNN as anthropomorphic model observer
Massanes, Francesc; Brankov, Jovan G.
2017-03-01
Model observers (MO) are widely used in medical imaging to act as surrogates of human observers in task-based image quality evaluation, frequently towards optimization of reconstruction algorithms. In this paper, we explore the use of convolutional neural networks (CNN) to be used as MO. We will compare CNN MO to alternative MO currently being proposed and used such as the relevance vector machine based MO and channelized Hotelling observer (CHO). As the success of the CNN, and other deep learning approaches, is rooted in large data sets availability, which is rarely the case in medical imaging systems task-performance evaluation, we will evaluate CNN performance on both large and small training data sets.
Huber, Stefan; Klein, Elise; Willmes, Klaus; Nuerk, Hans-Christoph; Moeller, Korbinian
2014-01-01
Decimal fractions comply with the base-10 notational system of natural Arabic numbers. Nevertheless, recent research suggested that decimal fractions may be represented differently than natural numbers because two number processing effects (i.e., semantic interference and compatibility effects) differed in their size between decimal fractions and natural numbers. In the present study, we examined whether these differences indeed indicate that decimal fractions are represented differently from natural numbers. Therefore, we provided an alternative explanation for the semantic congruity effect, namely a string length congruity effect. Moreover, we suggest that the smaller compatibility effect for decimal fractions compared to natural numbers was driven by differences in processing strategy (sequential vs. parallel). To evaluate this claim, we manipulated the tenth and hundredth digits in a magnitude comparison task with participants' eye movements recorded, while the unit digits remained identical. In addition, we evaluated whether our empirical findings could be simulated by an extended version of our computational model originally developed to simulate magnitude comparisons of two-digit natural numbers. In the eye-tracking study, we found evidence that participants processed decimal fractions more sequentially than natural numbers because of the identical leading digit. Importantly, our model was able to account for the smaller compatibility effect found for decimal fractions. Moreover, string length congruity was an alternative account for the prolonged reaction times for incongruent decimal pairs. Consequently, we suggest that representations of natural numbers and decimal fractions do not differ.
Directory of Open Access Journals (Sweden)
Stefan eHuber
2014-04-01
Full Text Available Decimal fractions comply with the base-10 notational system of natural Arabic numbers. Nevertheless, recent research suggested that decimal fractions may be represented differently than natural numbers because two number processing effects (i.e., semantic interference and compatibility effects differed in their size between decimal fractions and natural numbers. In the present study, we examined whether these differences indeed indicate that decimal fractions are represented differently from natural numbers. Therefore, we provided an alternative explanation for the semantic congruity effect, namely a string length congruity effect. Moreover, we suggest that the smaller compatibility effect for decimal fractions compared to natural numbers was driven by differences in processing strategy (sequential vs. parallel.To evaluate this claim, we manipulated the tenth and hundredth digits in a magnitude comparison task with participants' eye movements recorded, while the unit digits remained identical. In addition, we evaluated whether our empirical findings could be simulated by an extended version of our computational model originally developed to simulate magnitude comparisons of two-digit natural numbers. In the eye-tracking study, we found evidence that participants processed decimal fractions more sequentially than natural numbers because of the identical leading digit. Importantly, our model was able to account for the smaller compatibility effect found for decimal fractions. Moreover, string length congruity was an alternative account for the prolonged reaction times for incongruent decimal pairs. Consequently, we suggest that representations of natural numbers and decimal fractions do not differ.
Complex Evaluation Model of Corporate Energy Management
Ágnes Kádár Horváth
2014-01-01
With the ever increasing energy problems at the doorstep alongside with political, economic, social and environmental challenges, conscious energy management has become of increasing importance in corporate resource management. Rising energy costs, stricter environmental and climate regulations as well as considerable changes in the energy market require companies to rationalise their energy consumption and cut energy costs. This study presents a complex evaluation model of corporate energy m...
Phoenix Metropolitan Model Deployment Initiative Evaluation Report
Zimmerman, C; Marks, J.; Jenq, J.; Cluett, Chris; DeBlasio, Allan; Lappin, Jane; Rakha, Hesham A.; Wunderlich, K
2000-01-01
This report presents the evaluation results of the Phoenix, Arizona Metropolitan Model Deployment Initiative (MMDI). The MMDI was a three-year program of the Intelligent Transportation Systems (ITS) Joint Program Office of the U.S. Department of Transportation. It focused on aggressive deployment of ITS at four sites across the United States, including the metropolitan areas of San Antonio, Seattle, NY/NJ/Connecticut as well as Phoenix. The focus of the deployments was on integration of exist...
Sander, S. P.; Friedl, R. R.; Barker, J. R.; Golden, D. M.; Kurylo, M. J.; Wine, P. H.; Abbatt, J.; Burkholder, J. B.; Kolb, C. E.; Moortgat, G. K.; Huie, R. E.; Orkin, V. L.
2009-01-01
This is the supplement to the fifteenth in a series of evaluated sets of rate constants and photochemical cross sections compiled by the NASA Panel for Data Evaluation. The data are used primarily to model stratospheric and upper tropospheric processes, with particular emphasis on the ozone layer and its possible perturbation by anthropogenic and natural phenomena. Copies of this evaluation are available in electronic form and may be printed from the following Internet URL: http://jpldataeval.jpl.nasa.gov/.
Implicit moral evaluations: A multinomial modeling approach.
Cameron, C Daryl; Payne, B Keith; Sinnott-Armstrong, Walter; Scheffer, Julian A; Inzlicht, Michael
2017-01-01
Implicit moral evaluations-i.e., immediate, unintentional assessments of the wrongness of actions or persons-play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure-the Moral Categorization Task-and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
A number-projected model with generalized pairing interaction in application to rotating nuclei
Energy Technology Data Exchange (ETDEWEB)
Satula, W. [Warsaw Univ. (Poland)]|[Joint Institute for Heavy Ion Research, Oak Ridge, TN (United States)]|[Univ. of Tennessee, Knoxville, TN (United States)]|[Royal Institute of Technology, Stockholm (Sweden); Wyss, R. [Royal Institute of Technology, Stockholm (Sweden)
1996-12-31
A cranked mean-field model that takes into account both T=1 and T=0 pairing interactions is presented. The like-particle pairing interaction is described by means of a standard seniority force. The neutron-proton channel includes simultaneously correlations among particles moving in time reversed orbits (T=1) and identical orbits (T=0). The coupling between different pairing channels and nuclear rotation is taken into account selfconsistently. Approximate number-projection is included by means of the Lipkin-Nogami method. The transitions between different pairing phases are discussed as a function of neutron/proton excess, T{sub z}, and rotational frequency, {Dirac_h}{omega}.
Equivalent Alkane Carbon Number of Live Crude Oil: A Predictive Model Based on Thermodynamics
Directory of Open Access Journals (Sweden)
Creton Benoit
2016-09-01
Full Text Available We took advantage of recently published works and new experimental data to propose a model for the prediction of the Equivalent Alkane Carbon Number of live crude oil (EACNlo for EOR processes. The model necessitates the a priori knowledge of reservoir pressure and temperature conditions as well as the initial gas to oil ratio. Additionally, some required volumetric properties for hydrocarbons were predicted using an equation of state. The model has been validated both on our own experimental data and data from the literature. These various case studies cover broad ranges of conditions in terms of API gravity index, gas to oil ratio, reservoir pressure and temperature, and composition of representative gas. The predicted EACNlo values reasonably agree with experimental EACN values, i.e. determined by comparison with salinity scans for a series of n-alkanes from nC8 to nC18. The model has been used to generate high pressure high temperature data, showing competing effects of the gas to oil ratio, pressure and temperature. The proposed model allows to strongly narrow down the spectrum of possibilities in terms of EACNlo values, and thus a more rational use of equipments.
Acceptance criteria for urban dispersion model evaluation
Hanna, Steven; Chang, Joseph
2012-05-01
The authors suggested acceptance criteria for rural dispersion models' performance measures in this journal in 2004. The current paper suggests modified values of acceptance criteria for urban applications and tests them with tracer data from four urban field experiments. For the arc-maximum concentrations, the fractional bias should have a magnitude 0.3. For all data paired in space, for which a threshold concentration must always be defined, the normalized absolute difference should be SCIPUFF dispersion model with the urban canopy option and the urban dispersion model (UDM) option. In each set of evaluations, three or four likely options are tested for meteorological inputs (e.g., a local building top wind speed, the closest National Weather Service airport observations, or outputs from numerical weather prediction models). It is found that, due to large natural variability in the urban data, there is not a large difference between the performance measures for the two model options and the three or four meteorological input options. The more detailed UDM and the state-of-the-art numerical weather models do provide a slight improvement over the other options. The proposed urban dispersion model acceptance criteria are satisfied at over half of the field experiments.
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-12-01
This annual report of the US Nuclear Regulatory Commission`s Office for Analysis and Evaluation of Operational Data (AEOD) describes activities conducted during 1996. The report is published in three parts. NUREG-1272, Vol. 10, No. 1, covers power reactors and presents an overview of the operating experience of the nuclear power industry from the NRC perspective, including comments about trends of some key performance measures. The report also includes the principal findings and issues identified in AEOD studies over the past year and summarizes information from such sources as licensee event reports and reports to the NRC`s Operations Center. NUREG-1272, Vol. 10, No. 2, covers nuclear materials and presents a review of the events and concerns during 1996 associated with the use of licensed material in nonreactor applications, such as personnel overexposures and medical misadministrations. Both reports also contain a discussion of the Incident Investigation Team program and summarize both the Incident Investigation Team and Augmented Inspection Team reports. Each volume contains a list of the AEOD reports issued from CY 1980 through 1996. NUREG-1272, Vol. 10, No. 3, covers technical training and presents the activities of the Technical Training Center in support of the NRC`s mission in 1996.
Improved pump turbine transient behaviour prediction using a Thoma number-dependent hillchart model
Manderla, M.; Kiniger, K.; Koutnik, J.
2014-03-01
Water hammer phenomena are important issues for high head hydro power plants. Especially, if several reversible pump-turbines are connected to the same waterways there may be strong interactions between the hydraulic machines. The prediction and coverage of all relevant load cases is challenging and difficult using classical simulation models. On the basis of a recent pump-storage project, dynamic measurements motivate an improved modeling approach making use of the Thoma number dependency of the actual turbine behaviour. The proposed approach is validated for several transient scenarios and turns out to increase correlation between measurement and simulation results significantly. By applying a fully automated simulation procedure broad operating ranges can be covered which provides a consistent insight into critical load case scenarios. This finally allows the optimization of the closing strategy and hence the overall power plant performance.
Blocking probability in the hose-model optical VPN with different number of wavelengths
Roslyakov, Alexander V.
2017-04-01
Connection setup with guaranteed quality of service (QoS) in the optical virtual private network (OVPN) is a major goal for the network providers. In order to support this we propose a QoS based OVPN connection set up mechanism over WDM network to the end customer. The proposed WDM network model can be specified in terms of QoS parameter such as blocking probability. We estimated this QoS parameter based on the hose-model OVPN. In this mechanism the OVPN connections also can be created or deleted according to the availability of the wavelengths in the optical path. In this paper we have considered the impact of the number of wavelengths on the computation of blocking probability. The goal of the work is to dynamically provide a best OVPN connection during frequent arrival of connection requests with QoS requirements.
Localized Majorana-Like Modes in a Number-Conserving Setting: An Exactly Solvable Model.
Iemini, Fernando; Mazza, Leonardo; Rossini, Davide; Fazio, Rosario; Diehl, Sebastian
2015-10-09
In this Letter we present, in a number conserving framework, a model of interacting fermions in a two-wire geometry supporting nonlocal zero-energy Majorana-like edge excitations. The model has an exactly solvable line, on varying the density of fermions, described by a topologically nontrivial ground state wave function. Away from the exactly solvable line we study the system by means of the numerical density matrix renormalization group. We characterize its topological properties through the explicit calculation of a degenerate entanglement spectrum and of the braiding operators which are exponentially localized at the edges. Furthermore, we establish the presence of a gap in its single particle spectrum while the Hamiltonian is gapless, and compute the correlations between the edge modes as well as the superfluid correlations. The topological phase covers a sizable portion of the phase diagram, the solvable line being one of its boundaries.
Directory of Open Access Journals (Sweden)
O. Thouron
2011-12-01
Full Text Available A new parameterization scheme is described for calculation of supersaturation in LES models that specifically aims at the simulation of cloud condensation nuclei (CCN activation and prediction of the droplet number concentration. The scheme is tested against current parameterizations in the framework of the Meso-NH LES model. It is shown that the saturation adjustment scheme based on parameterizations of CCN activation in a convective updraft over estimates the droplet concentration in the cloud core while it cannot simulate cloud top supersaturation production due to mixing between cloudy and clear air. A supersaturation diagnostic scheme mitigates these artefacts by accounting for the presence of already condensed water in the cloud core but it is too sensitive to supersaturation fluctuations at cloud top and produces spurious CCN activation during cloud top mixing. The proposed pseudo-prognostic scheme shows performance similar to the diagnostic one in the cloud core but significantly mitigates CCN activation at cloud top.
Model for modulated and chaotic waves in zero-Prandtl-number rotating convection
Indian Academy of Sciences (India)
Alaka Das; Krishna Kumar
2008-09-01
The effects of time-periodic forcing in a few-mode model for zero-Prandtl-number convection with rigid body rotation is investigated. The time-periodic modulation of the rotation rate about the vertical axis and gravity modulation are considered separately. In the presence of periodic variation of the rotation rate, the model shows modulated waves with a band of frequencies. The increase in the external forcing amplitude widens the frequency band of the modulated waves, which ultimately leads to temporally chaotic waves. The gravity modulation, on the other hand, with small frequencies, destroys the quasiperiodic waves at the onset and leads to chaos through intermittency. The spectral power density shows more power to a band of frequencies in the case of periodic modulation of the rotation rate. In the case of externally imposed vertical vibration, the spectral density has more power at lower frequencies. The two types of forcing show different routes to chaos.
Transport properties site descriptive model. Guidelines for evaluation and modelling
Energy Technology Data Exchange (ETDEWEB)
Berglund, Sten [WSP Environmental, Stockholm (Sweden); Selroos, Jan-Olof [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)
2004-04-01
This report describes a strategy for the development of Transport Properties Site Descriptive Models within the SKB Site Investigation programme. Similar reports have been produced for the other disciplines in the site descriptive modelling (Geology, Hydrogeology, Hydrogeochemistry, Rock mechanics, Thermal properties, and Surface ecosystems). These reports are intended to guide the site descriptive modelling, but also to provide the authorities with an overview of modelling work that will be performed. The site descriptive modelling of transport properties is presented in this report and in the associated 'Strategy for the use of laboratory methods in the site investigations programme for the transport properties of the rock', which describes laboratory measurements and data evaluations. Specifically, the objectives of the present report are to: Present a description that gives an overview of the strategy for developing Site Descriptive Models, and which sets the transport modelling into this general context. Provide a structure for developing Transport Properties Site Descriptive Models that facilitates efficient modelling and comparisons between different sites. Provide guidelines on specific modelling issues where methodological consistency is judged to be of special importance, or where there is no general consensus on the modelling approach. The objectives of the site descriptive modelling process and the resulting Transport Properties Site Descriptive Models are to: Provide transport parameters for Safety Assessment. Describe the geoscientific basis for the transport model, including the qualitative and quantitative data that are of importance for the assessment of uncertainties and confidence in the transport description, and for the understanding of the processes at the sites. Provide transport parameters for use within other discipline-specific programmes. Contribute to the integrated evaluation of the investigated sites. The site descriptive
Directory of Open Access Journals (Sweden)
Jichul Ryu
2016-04-01
Full Text Available In this study, 52 asymptotic Curve Number (CN regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA model when it is applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN regression equation model (watershed-scale L-THIA ACN model was developed by integrating the asymptotic CN regressions and various modules for direct runoff/baseflow/channel routing. The watershed-scale L-THIA ACN model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow prediction. The coefficient of determination (R2 and Nash–Sutcliffe Efficiency (NSE values for observed versus simulated streamflows over intervals of eight days were greater than 0.6 for all four of the watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters that have been added for the characterization of streamflow.
Hlokwe, T M; van Helden, P; Michel, A
2013-11-01
The usefulness of variable number tandem repeat (VNTR) typing based on limited numbers of loci has previously proven inferior compared to IS6110-RFLP typing when applied to the study of the molecular epidemiology of bovine tuberculosis (BTB) in both livestock and wildlife in southern Africa. In this study, the discriminatory power of 29 published VNTR loci in the characterization of 131 Mycobacterium bovis strains isolated predominantly from wildlife and a smaller number from livestock in southern Africa was assessed. Allelic diversities calculated when loci were evaluated on a selected panel of 23 M. bovis isolates with identified varying degrees of genetic relatedness from different geographic origins as well as M. bovis BCG ranged from 0.00 to 0.63. Of the 29 loci tested, 13 were polymorphic (QUB 11a, QUB 11b, QUB 18, ETR-B and -C, Mtub 21, MIRU 16 and 26, ETR-E, QUB 26, MIRU 23, ETR-A, and Mtub 12). In addition, a comparative evaluation of the 13 loci on a panel of 65 isolates previously characterized by IS6110 restriction fragment length polymorphism (RFLP) typing and further evaluation on 41 isolates with no typing history from Kruger National Park (KNP) highlighted that M. bovis from epidemiologically unrelated cases of BTB in different geographic regions can be adequately distinguished. However, there is a need for improvement of the method to fully discriminate between the parental KNP strain and its clones to allow the detection of evolutionary events causing transmission between and within wildlife species.
Directory of Open Access Journals (Sweden)
Mohsen Sayyah Markabi
2014-10-01
Full Text Available Purpose: Evaluation and selection of efficient suppliers is one of the key issues in supply chain management which depends on wide range of qualitative and quantitative criteria. The aim of this research is to develop a mathematical model for evaluating and selecting efficient suppliers when faced with supply and demand uncertainties.Design/methodology/approach: In this research Grey Relational Analysis (GRA and Data Envelopment Analysis (DEA are used to evaluate and select efficient suppliers under uncertainties. Furthermore, a novel ranking method is introduced for the units that their efficiencies are obtained in the form of interval grey numbers.Findings: The study indicates that the proposed model in addition to providing satisfactory and acceptable results avoids time-consuming computations and consequently reduces the solution time. To name another advantage of the proposed model, we can point out that it enables us to make decision based on different levels of risk.Originality/value: The paper presents a mathematical model for evaluating and selecting efficient suppliers in a stochastic environment so that companies can use in order to make better decisions.
An evaluation of mathematical models for predicting skin permeability.
Lian, Guoping; Chen, Longjian; Han, Lujia
2008-01-01
A number of mathematical models have been proposed for predicting skin permeability, mostly empirical and very few are deterministic. Early empirical models use simple lipophilicity parameters. The recent trend is to use more complicated molecular structure descriptors. There has been much debate on which models best predict skin permeability. This article evaluates various mathematical models using a comprehensive experimental dataset of skin permeability for 124 chemical compounds compiled from various sources. Of the seven models compared, the deterministic model of Mitragotri gives the best prediction. The simple quantitative structure permeability relationships (QSPR) model of Potts and Guy gives the second best prediction. The two models have many features in common. Both assume the lipid matrix as the pathway of transdermal permeation. Both use octanol-water partition coefficient and molecular size. Even the mathematical formulae are similar. All other empirical QSPR models that use more complicated molecular structure descriptors fail to provide satisfactory prediction. The molecular structure descriptors in the more complicated QSPR models are empirically related to skin permeation. The mechanism on how these descriptors affect transdermal permeation is not clear. Mathematically it is an ill-defined approach to use many colinearly related parameters rather than fewer independent parameters in multi-linear regression.
Network modeling of the transcriptional effects of copy number aberrations in glioblastoma
Jörnsten, Rebecka; Abenius, Tobias; Kling, Teresia; Schmidt, Linnéa; Johansson, Erik; Nordling, Torbjörn E M; Nordlander, Bodil; Sander, Chris; Gennemark, Peter; Funa, Keiko; Nilsson, Björn; Lindahl, Linda; Nelander, Sven
2011-01-01
DNA copy number aberrations (CNAs) are a hallmark of cancer genomes. However, little is known about how such changes affect global gene expression. We develop a modeling framework, EPoC (Endogenous Perturbation analysis of Cancer), to (1) detect disease-driving CNAs and their effect on target mRNA expression, and to (2) stratify cancer patients into long- and short-term survivors. Our method constructs causal network models of gene expression by combining genome-wide DNA- and RNA-level data. Prognostic scores are obtained from a singular value decomposition of the networks. By applying EPoC to glioblastoma data from The Cancer Genome Atlas consortium, we demonstrate that the resulting network models contain known disease-relevant hub genes, reveal interesting candidate hubs, and uncover predictors of patient survival. Targeted validations in four glioblastoma cell lines support selected predictions, and implicate the p53-interacting protein Necdin in suppressing glioblastoma cell growth. We conclude that large-scale network modeling of the effects of CNAs on gene expression may provide insights into the biology of human cancer. Free software in MATLAB and R is provided. PMID:21525872
Hadron Resonance Gas Model for An Arbitrarily Large Number of Different Hard-Core Radii
Oliinychenko, D R; Sagun, V V; Ivanytskyi, A I; Yakimenko, I P; Nikonov, E G; Taranenko, A V; Zinovjev, G M
2016-01-01
We develop a novel formulation of the hadron-resonance gas model which, besides a hard-core repulsion, explicitly accounts for the surface tension induced by the interaction between the particles. Such an equation of state allows us to go beyond the Van der Waals approximation for any number of different hard-core radii. A comparison with the Carnahan-Starling equation of state shows that the new model is valid for packing fractions 0.2-0.22, while the usual Van der Waals model is inapplicable at packing fractions above 0.11-0.12. Moreover, it is shown that the equation of state with induced surface tension is softer than the one of hard spheres and remains causal at higher particle densities. The great advantage of our model is that there are only two equations to be solved and it does not depend on the various values of the hard-core radii used for different hadronic resonances. Using this novel equation of state we obtain a high-quality fit of the ALICE hadron multiplicities measured at center-of-mass ener...
Implementing the Serial Number Tracking model in telecommunications: a case study of Croatia
Directory of Open Access Journals (Sweden)
Neven Polovina
2012-01-01
Full Text Available Background: The case study describes the implementation of the SNT (Serial Number Tracking model in an integrated information system, as a means of business support in a Croatian mobile telecommunications company. Objectives: The goal was to show how to make the best practice of the SNT implementation in the telecommunication industry, with referencing to problems which have arisen during the implementation. Methods/Approach: the case study approach was used based on the documentation about the SNT model and the business intelligence system in the Croatian mobile telecommunications company. Results: Economic aspects of the effectiveness of the SNT model are described and confirmed based on actual tangible and predominantly on intangible benefits. Conclusions: Advantages of the SNT model are multiple: operating costs for storage and transit of goods were reduced, accuracy of deliveries and physical inventory was improved; a new source of information for the business intelligence system was obtained; operating processes in the distribution of goods were advanced; transit insurance costs decreased and there were fewer cases of fraudulent behaviour.
Animal Models of Psychiatric Disorders That Reflect Human Copy Number Variation
Directory of Open Access Journals (Sweden)
Jun Nomura
2012-01-01
The development of genetic technologies has led to the identification of several copy number variations (CNVs) in the human genome. Genome rearrangements affect dosage-sensitive gene expression in normal brain development. There is strong evidence associating human psychiatric disorders, especially autism spectrum disorders (ASDs) and schizophrenia, with genetic risk factors and accumulated CNV risk loci. Deletions in 1q21, 3q29, 15q13, 17p12, and 22q11, as well as duplications in 16p11, 16p13, and 15q11-13, have been reported as recurrent CNVs in ASD and/or schizophrenia. Chromosome engineering, based on the Cre/loxP strategy, is a useful technology for modelling human diseases, especially CNV-based psychiatric disorders, in animals: it enables large chromosomal rearrangements such as deletions, duplications, inversions, and translocations. Although it is hard to reproduce human pathophysiology fully in animal models, some aspects of molecular pathways, brain anatomy, and cognitive and behavioral phenotypes can be addressed. Several groups have created animal models of psychiatric disorders, ASD, and schizophrenia based on human CNVs. These mouse models display some brain anatomical and behavioral abnormalities, providing insight into human neuropsychiatric disorders that will contribute to novel drug screening for these devastating disorders.
A multi-model assessment of the impact of sea spray geoengineering on cloud droplet number
Directory of Open Access Journals (Sweden)
K. J. Pringle
2012-12-01
Artificially increasing the albedo of marine boundary layer clouds by the mechanical emission of sea spray aerosol has been proposed as a geoengineering technique to slow the warming caused by anthropogenic greenhouse gases. A previous global model study (Korhonen et al., 2010) found that only modest increases (< 20%) and sometimes even decreases in cloud drop number (CDN) concentrations would result from emission scenarios calculated using a wind-speed-dependent geoengineering flux parameterisation. Here we extend that work to examine the conditions under which decreases in CDN can occur, and use three independent global models to quantify maximum achievable CDN changes. We find that decreases in CDN can occur when at least three of the following conditions are met: the injected particle number is < 100 cm^{−3}, the injected diameter is > 250–300 nm, the background aerosol loading is large (≥ 150 cm^{−3}), and the in-cloud updraught velocity is low (< 0.2 m s^{−1}). With lower background loadings and/or increased updraught velocity, significant increases in CDN can be achieved. None of the global models predict a decrease in CDN as a result of geoengineering, although there is considerable diversity in the calculated efficiency of geoengineering, which arises from the diversity in the simulated marine aerosol distributions. All three models show a small dependence of geoengineering efficiency on the injected particle size and the geometric standard deviation of the injected mode. However, the achievability of significant cloud drop enhancements is strongly dependent on the cloud updraught speed. With an updraught speed of 0.1 m s^{−1}, a global mean CDN of 375 cm^{−3} (previously estimated to cancel the forcing caused by CO_{2} doubling) is achievable in only about 50% of grid boxes which have > 50% cloud cover, irrespective of the amount of aerosol injected. But at stronger updraught speeds (0
Głowacka, Katarzyna; Kromdijk, Johannes; Leonelli, Lauriebeth; Niyogi, Krishna K; Clemente, Tom E; Long, Stephen P
2016-04-01
Stable transformation of plants is a powerful tool for hypothesis testing. A rapid and reliable evaluation method of the transgenic allele for copy number and homozygosity is vital in analysing these transformations. Here the suitability of Southern blot analysis, thermal asymmetric interlaced (TAIL-)PCR, quantitative (q)PCR and digital droplet (dd)PCR for estimating T-DNA copy number, locus complexity and homozygosity was compared in transgenic tobacco. Southern blot analysis and ddPCR on three generations of transgenic offspring with contrasting zygosity and copy number were entirely consistent, whereas TAIL-PCR often underestimated copy number. qPCR deviated considerably from the Southern blot results and had lower precision and higher variability than ddPCR. Comparison of segregation analyses and ddPCR of T1 progeny from 26 T0 plants showed that at least 19% of the lines carried multiple T-DNA insertions per locus, which can lead to unstable transgene expression. Segregation analyses failed to detect these multiple copies, presumably because of their close linkage. This shows the importance of routine T-DNA copy number estimation. Based on our results, ddPCR is the most suitable method, because it is as reliable as Southern blot analysis yet much faster. A protocol for this application of ddPCR to large plant genomes is provided.
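As a sketch of how ddPCR yields absolute copy-number estimates: the fraction of negative droplets gives the mean copies per droplet via a Poisson correction, and the target concentration is then ratioed against a single-copy reference gene. The droplet counts and the diploid-reference assumption below are hypothetical, not values from the study:

```python
import math

def copies_per_droplet(n_positive, n_total):
    # Poisson correction: lambda = -ln(fraction of negative droplets).
    return -math.log((n_total - n_positive) / n_total)

def tdna_copy_number(pos_target, pos_ref, n_total, ref_copies=2):
    # T-DNA copies per genome, relative to a reference gene assumed to be
    # present at ref_copies per genome (hypothetical counts, for illustration).
    return ref_copies * copies_per_droplet(pos_target, n_total) \
        / copies_per_droplet(pos_ref, n_total)

# Equal positive-droplet counts for target and reference imply equal
# concentrations, i.e. two T-DNA copies under a diploid reference.
print(tdna_copy_number(3000, 3000, 20000))
```

The Poisson correction matters because a droplet with two or more template molecules still reads as a single positive droplet.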
Rinaldo, A.; Gatto, M.; Mari, L.; Casagrandi, R.; Righetto, L.; Bertuzzo, E.; Rodriguez-Iturbe, I.
2012-12-01
still lacking. Here, we show that the requirement that all the local reproduction numbers R0 be larger than unity is neither necessary nor sufficient for outbreaks to occur when local settlements are connected by networks of primary and secondary infection mechanisms. To determine onset conditions, we derive general analytical expressions for a reproduction matrix G0 explicitly accounting for spatial distributions of human settlements and pathogen transmission via hydrological and human mobility networks. At disease onset, a generalized reproduction number Λ0 (the dominant eigenvalue of G0) must be larger than unity. We also show that geographical outbreak patterns in complex environments are linked to the dominant eigenvector and to spectral properties of G0. Tests against data and computations for the 2010 Haiti and 2000 KwaZulu-Natal cholera outbreaks, as well as against computations for metapopulation networks, demonstrate that eigenvectors of G0 provide a synthetic and effective tool for predicting the disease course in space and time. Networked connectivity models, describing the interplay between hydrology, epidemiology and social behavior sustaining human mobility, thus prove to be key tools for emergency management of waterborne infections.
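The onset condition described above can be sketched numerically: build a reproduction matrix G0, take its dominant eigenvalue as the generalized reproduction number, and read the geographical onset pattern off the dominant eigenvector. The 3-node matrix below is illustrative only (not from the paper); note that every local (diagonal) entry is below unity, yet the network coupling pushes the generalized reproduction number above 1:

```python
import numpy as np

# Hypothetical reproduction matrix for 3 connected settlements: entry (i, j)
# counts secondary infections in node i caused by an infective in node j.
G0 = np.array([[0.8, 0.4, 0.0],
               [0.3, 0.7, 0.2],
               [0.0, 0.5, 0.6]])

eigvals, eigvecs = np.linalg.eig(G0)
k = np.argmax(eigvals.real)
lambda0 = eigvals.real[k]          # generalized reproduction number
print("Lambda0 =", round(lambda0, 3), "-> outbreak:", bool(lambda0 > 1))

# The (normalised) dominant eigenvector predicts where the epidemic
# concentrates at onset.
v = np.abs(eigvecs[:, k].real)
print("onset pattern:", np.round(v / v.sum(), 3))
```

By the Perron-Frobenius bounds, the dominant eigenvalue of this non-negative matrix lies between the smallest and largest row sums (1.1 and 1.2), so an outbreak occurs even though every local R0 is below unity.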
Evaluating face trustworthiness: a model based approach.
Todorov, Alexander; Baron, Sean G; Oosterhof, Nikolaas N
2008-06-01
Judgments of trustworthiness from faces determine basic approach/avoidance responses and approximate the valence evaluation of faces that runs across multiple person judgments. Here, based on trustworthiness judgments and using a computer model for face representation, we built a model for representing face trustworthiness (study 1). Using this model, we generated novel faces with an increased range of trustworthiness and used these faces as stimuli in a functional magnetic resonance imaging study (study 2). Although participants did not engage in explicit evaluation of the faces, the amygdala response changed as a function of face trustworthiness. An area in the right amygdala showed a negative linear response: as the untrustworthiness of faces increased, so did the amygdala response. Areas in the left and right putamen, the latter extending into the anterior insula, showed a similar negative linear response. The response in the left amygdala was quadratic: strongest for faces at both extremes of the trustworthiness dimension. The medial prefrontal cortex and precuneus also showed a quadratic response, but their response was strongest to faces in the middle range of the trustworthiness dimension.
Evaluation of onset of nucleate boiling models
Energy Technology Data Exchange (ETDEWEB)
Huang, LiDong [Heat Transfer Research, Inc., College Station, TX (United States)], e-mail: lh@htri.net
2009-07-01
This article discusses available models and correlations for predicting the required heat flux or wall superheat for the Onset of Nucleate Boiling (ONB) on plain surfaces. It reviews ONB data in the open literature and discusses the continuing efforts of Heat Transfer Research, Inc. in this area. Our ONB database contains ten individual sources for ten test fluids and a wide range of operating conditions for different geometries, e.g., tube-side and shell-side flow boiling and falling film evaporation. The article also evaluates literature models and correlations against the data: no single model in the open literature predicts all the data well, and the prediction uncertainty is especially high under vacuum conditions. Surface roughness is another critical criterion in determining which model should be used; however, most models do not directly account for surface roughness, and most investigators do not report it in their published findings. Additional experimental research is needed to improve confidence in predicting the required wall superheats for nucleate boiling for engineering design purposes. (author)
Data Assimilation and Model Evaluation Experiment Datasets.
Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.
1994-05-01
The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggested DAMEE data usages include 1) ocean modeling and data assimilation studies, 2) diagnostic and theoretical studies, and 3) comparisons with locally detailed observations.
European Cohesion Policy: A Proposed Evaluation Model
Directory of Open Access Journals (Sweden)
Alina Bouroşu (Costăchescu)
2012-06-01
The current approach of European Cohesion Policy (ECP) is intended to be a bridge between different fields of study, emphasizing the intersection between "the public policy cycle, theories of new institutionalism and the new public management". ECP can be viewed as a focal point between putting into practice the principles of the new governance theory, theories of economic convergence and divergence, and the governance of common goods. After a short introduction defining the concepts used, the author discusses the image of ECP created by applying three different theories, focusing on the structural funds implementation system (SFIS), and directs the discussion to the evaluation part of this policy by proposing a model of performance evaluation of the system, in order to outline key principles for creating effective management mechanisms of ECP.
Teymuri, Ghulam Heidar; Sadeghian, Marzieh; Kangavari, Mehdi; Asghari, Mehdi; Madrese, Elham; Abbasinia, Marzieh; Ahmadnezhad, Iman; Gholizadeh, Yavar
2013-01-01
Background: One of the significant dangers that threaten people's lives is the increased risk of accidents. Annually, more than 1.3 million people die around the world as a result of accidents, and it has been estimated that approximately 300 deaths occur daily due to traffic accidents in the world, with more than 50% of the victims not even being occupants of the vehicles. The aim of this study was to examine traffic accidents in Tehran and forecast the number of future accidents using a time-series model. Methods: The study was a cross-sectional study conducted in 2011. The sample population was all traffic accidents that caused death and physical injuries in Tehran in 2010 and 2011, as registered in the Tehran Emergency ward. The present study used Minitab 15 software to provide a description of accidents in Tehran for the specified time period as well as to forecast those occurring during April 2012. Results: The results indicated that the average number of daily traffic accidents in Tehran in 2010 was 187, with a standard deviation of 83.6. In 2011, there was an average of 180 daily traffic accidents, with a standard deviation of 39.5. One-way analysis of variance indicated that the average number of accidents in the city differed across months of the year (P < 0.05). Most of the accidents occurred in March, July, August, and September; thus, more accidents occurred in the summer than in the other seasons. The number of accidents for April 2012 was predicted based on an auto-regressive moving average (ARMA) model. The number of accidents displayed a seasonal trend. The prediction for the city during April of 2012 indicated that a total of 4,459 accidents would occur, with a mean of 149 accidents per day, during that month. Conclusion: The number of accidents in Tehran displayed a seasonal trend, and the number of accidents differed across seasons of the year. PMID:26120405
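A seasonal pattern like the one reported can be exposed, and a naive forecast made, by grouping daily counts by month. The synthetic counts below (summer peak, random noise) are illustrative only, not the Tehran data; a full ARMA fit would normally be done with a statistics package:

```python
import random

random.seed(1)
# Two years of synthetic daily accident counts with elevated levels in
# March, July, August and September (hypothetical numbers).
base = {m: 150 + (40 if m in (3, 7, 8, 9) else 0) for m in range(1, 13)}
series = [(m, base[m] + random.randint(-20, 20))
          for _ in range(2) for m in range(1, 13) for _ in range(30)]

# Monthly means expose the seasonality that the study's ANOVA detects.
monthly = {m: [] for m in range(1, 13)}
for m, y in series:
    monthly[m].append(y)
means = {m: sum(v) / len(v) for m, v in monthly.items()}

# Seasonal-mean forecast for a future April: the historical April average.
print("April forecast:", round(means[4], 1))
print("August mean:", round(means[8], 1))
```

Comparing the peak-month means with the off-season ones mirrors the ANOVA result that monthly averages differ significantly.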
Use of an operational model evaluation system for model intercomparison
Energy Technology Data Exchange (ETDEWEB)
Foster, K. T., LLNL
1998-03-01
The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response system used to assess the impact from atmospheric releases of hazardous materials. As part of an on-going development program, new three-dimensional diagnostic windfield and Lagrangian particle dispersion models will soon replace ARAC's current operational windfield and dispersion codes. A prototype model performance evaluation system has been implemented to facilitate the study of the capabilities and performance of early development versions of these new models relative to ARAC's current operational codes. This system provides tools both for objective statistical analysis using common performance measures and for more subjective visualization of the temporal and spatial relationships of model results relative to field measurements. Supporting this system is a database of processed field experiment data (source terms and meteorological and tracer measurements) from over 100 individual tracer releases.
Advances in Application of Models in Soil Quality Evaluation
Institute of Scientific and Technical Information of China (English)
SI Zhi-guo; WANG Ji-jie; YU Yuan-chun; LIANG Guan-feng; CHEN Chang-ren; SHU Hong-lan
2012-01-01
Soil quality is a comprehensive reflection of soil properties. Since the soil quality concept was put forward in the 1970s, the quality of different types of soil in different regions has been evaluated through a variety of evaluation methods, but universal soil quality evaluation models and methods are still lacking. In this paper, the applications and prospects of the grey relevancy comprehensive evaluation model, attribute hierarchical model, fuzzy comprehensive evaluation model, matter-element model, RAGA-based PPC/PPE model, and GIS model in soil quality evaluation are reviewed.
Wall-modeled large-eddy simulation of transonic airfoil buffet at high Reynolds number
Fukushima, Yuma; Kawai, Soshi
2016-11-01
In this study, we conduct wall-modeled large-eddy simulation (LES) of transonic buffet phenomena over the OAT15A supercritical airfoil at high Reynolds number. Transonic airfoil buffet involves shock-turbulent boundary layer interactions and shock vibration associated with the flow separation downstream of the shock wave. The wall-modeled LES developed by Kawai and Larsson (Phys. Fluids, 2012) is tuned on the K supercomputer for high-fidelity simulation. We first show the capability of the present wall-modeled LES on the transonic airfoil buffet phenomena and then investigate the detailed flow physics of the unsteadiness of shock waves and separated boundary layer interaction phenomena. We also focus on the sustaining mechanism of the buffet phenomena, including the source of the pressure waves propagated from the trailing edge and the interactions between the shock wave and the generated sound waves. This work was supported in part by MEXT as a social and scientific priority issue to be tackled by using the post-K computer. Computer resources of the K computer were provided by the RIKEN Advanced Institute for Computational Science (Project ID: hp150254).
A model for evaluating the ballistic resistance of stratified packs
Pirvu, C.; Georgescu, C.; Badea, S.; Deleanu, L.
2016-08-01
Models for evaluating the ballistic performance of stratified packs are useful in reducing the time for laboratory tests, understanding the failure process, and identifying key factors to improve the architecture of the packs. The authors present the results of simulating bullet impact on a pack made of 24 layers, taking into consideration the friction between layers (μ = 0.4) and between bullet and layers (μ = 0.3). The aim of this study is to determine a number of layers that allows for bullet arrest in the pack while leaving several layers undamaged, in order to offer a high level of safety for packs of this kind that could be included in individual armors. The model takes into account the yield and fracture limits of the two materials the bullet is made of and those of one layer, here considered an orthotropic material with a maximum equivalent plastic strain of 0.06. All materials are considered to have bilinear isotropic hardening behavior. After documentation, the model was designed as isothermal because the thermal influence of the impact is considered low at these impact velocities. The model was developed with the help of Ansys 14.5. Each layer measures 200 mm × 200 mm × 0.35 mm. The bullet velocity just before impact was 400 m/s, a velocity characterizing the average values obtained at close range with a ballistic barrel, and the bullet model follows the shape and dimensions of the 9 mm FMJ (full metal jacket). The model and the results concerning the number of broken layers were validated by experiments: the number of broken layers for the actual pack (made of 24 layers of LFT SB1) was also seven to eight. Models for ballistic impact are useful when they are formulated to resemble the actual projectile-target system.
Machrafi, H.; Rednikov, A.; Colinet, P.; Dauby, P. C.
2015-05-01
A one-sided model of the thermal Marangoni instability owing to evaporation into an inert gas is developed. Two configurations are studied in parallel: a horizontal liquid layer and a spherical droplet. With the dynamic gas properties being admittedly negligible, one-sided approaches typically hinge upon quantifying heat and mass transfer through the gas phase by means of transfer coefficients (like in the Newton's cooling law), which in dimensionless terms eventually corresponds to using Biot numbers. Quite a typical arrangement encountered in the literature is a constant Biot number, the same for perturbations of different wavelengths and maybe even the same as for the reference state. In the present work, we underscore the relevance of accounting for its wave-number dependence, which is especially the case in the evaporative context with relatively large values of the resulting effective Biot number. We illustrate the effect in the framework of the Marangoni instability thresholds. As a concrete example, we consider HFE-7100 (a standard refrigerant) for the liquid and air for the inert gas.
Directory of Open Access Journals (Sweden)
Xiaoyi Wang
2015-01-01
In wastewater treatment plants (WWTPs), dissolved oxygen is the key variable to be controlled in bioreactors. In this paper, linear active disturbance rejection control (LADRC) is utilized to track the dissolved oxygen concentration based on benchmark simulation model number 1 (BSM1). An optimal LADRC parameter tuning approach for wastewater treatment processes is obtained through analysis of, and simulations on, BSM1. Moreover, by analyzing the estimation capacity of the linear extended state observer (LESO) in the control of dissolved oxygen, the parameter range of LESO is acquired, which is valuable guidance for parameter tuning in simulation and even in practice. The simulation results show that LADRC can overcome the disturbances present in wastewater control and improve the tracking accuracy of dissolved oxygen. LADRC provides another practical solution to the control of WWTPs.
On the proper Mach number and ratio of specific heats for modeling the Venus bow shock
Tatrallyay, M.; Russell, C. T.; Luhmann, J. G.; Barnes, A.; Mihalov, J. D.
1984-01-01
Observational data from the Pioneer Venus Orbiter are used to investigate the physical characteristics of the Venus bow shock and to explore some general issues in the numerical simulation of collisionless shocks. It is found that, since equations from gas-dynamic (GD) models of the Venus shock cannot in general replace MHD equations, it is not immediately obvious what the optimum way is to describe the desired MHD situation with a GD code. Test case analysis shows that for quasi-perpendicular shocks it is safest to use the magnetosonic Mach number as an input to the GD code. It is also shown that when comparing GD-predicted temperatures with MHD-predicted temperatures, the total energy should be compared, since the magnetic energy density provides a significant fraction of the internal energy of the MHD fluid for typical solar wind parameters. Some conclusions are also offered on the properties of the terrestrial shock.
A Forecasting Model of Required Number of Wheat Bulk Carriers for Africa
Institute of Scientific and Technical Information of China (English)
Masayoshi Kubo
2008-01-01
The ocean transportation of grain by bulk carriers is promoted by the development of the ocean economy. With the development of coastal regions, cargo transportation will become more and more important, especially for resources such as grain, oil and coal. In this study, a model is built to estimate the number of grain bulk carriers needed for wheat, based upon analyzing the relationships between tons and ton-miles of African wheat transportation. We find that agricultural policies greatly affect wheat transportation to Africa. Then, using two scenarios, we predict how many ships will be necessary for the maritime transportation of wheat from other places to Africa in the future. We believe that this research is extremely useful for the maritime transportation of wheat to Africa.
Segregation process and phase transition in cyclic predator-prey models with even number of species
Szabo, Gyorgy; Sznaider, Gustavo Ariel
2007-01-01
We study a spatial cyclic predator-prey model with an even number of species (n = 4, 6, and 8) that allows the formation of two defensive alliances, consisting of the even- and odd-label species. The species are distributed on the sites of a square lattice. The evolution of the spatial distribution is governed by iteration of two elementary processes on randomly chosen neighboring sites: if the sites are occupied by a predator-prey pair, the predator invades the prey's site; otherwise the species exchange sites with a probability X. For low X values a self-organizing pattern is maintained by cyclic invasions. If X exceeds a threshold value, two types of domains grow up, formed by the odd- and even-label species, respectively. Monte Carlo simulations indicate the blocking of this segregation process within a range of X for n = 8.
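The two elementary processes above can be sketched as a Monte Carlo update on a small lattice. The sketch below (n = 4 species, a 20 x 20 lattice, parameters far smaller than production runs) assumes a simple cyclic food web in which species s preys on species (s + 1) mod n:

```python
import random

random.seed(0)
N_SPECIES, L, X = 4, 20, 0.05   # even species number, lattice size, exchange prob.
lattice = [[random.randrange(N_SPECIES) for _ in range(L)] for _ in range(L)]

def step():
    # Choose a random site and one of its four neighbours (periodic boundaries).
    i, j = random.randrange(L), random.randrange(L)
    di, dj = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    k, l = (i + di) % L, (j + dj) % L
    a, b = lattice[i][j], lattice[k][l]
    if b == (a + 1) % N_SPECIES:        # a preys on b: invasion
        lattice[k][l] = a
    elif a == (b + 1) % N_SPECIES:      # b preys on a: invasion
        lattice[i][j] = b
    elif random.random() < X:           # neutral pair: site exchange
        lattice[i][j], lattice[k][l] = b, a

for _ in range(100000):
    step()

counts = [sum(row.count(s) for row in lattice) for s in range(N_SPECIES)]
print("species counts:", counts)
```

Raising X above the threshold and watching the even-label and odd-label species coarsen into separate domains reproduces the segregation transition qualitatively; quantitative thresholds require much larger lattices and longer runs.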
Chattopadhyay, Goutami; 10.1140/epjp/i2012-12043-9
2012-01-01
This study reports a statistical analysis of a monthly sunspot number time series and observes non-homogeneity and asymmetry within it. Using the Mann-Kendall test, a linear trend is revealed. After identifying stationarity within the time series, we generate autoregressive AR(p) and autoregressive moving average ARMA(p,q) models. Based on minimization of the AIC, we find 3 and 1 as the best values of p and q, respectively. In the next phase, an autoregressive neural network (AR-NN(3)) is generated by training a generalized feedforward neural network (GFNN). Assessing the model performances by means of Willmott's index of second order and the coefficient of determination, the performance of AR-NN(3) is identified to be better than that of AR(3) and ARMA(3,1).
Automated expert modeling for automated student evaluation.
Energy Technology Data Exchange (ETDEWEB)
Abbott, Robert G.
2006-01-01
The 8th International Conference on Intelligent Tutoring Systems provides a leading international forum for the dissemination of original results in the design, implementation, and evaluation of intelligent tutoring systems and related areas. The conference draws researchers from a broad spectrum of disciplines ranging from artificial intelligence and cognitive science to pedagogy and educational psychology. The conference explores intelligent tutoring systems' increasing real-world impact on an increasingly global scale. Improved authoring tools and learning object standards enable fielding systems and curricula in real-world settings on an unprecedented scale. Researchers deploy ITSs in ever larger studies and increasingly use data from real students, tasks, and settings to guide new research. With high volumes of student interaction data, data mining, and machine learning, tutoring systems can learn from experience and improve their teaching performance. The increasing number of realistic evaluation studies also broadens researchers' knowledge about the educational contexts for which ITSs are best suited. At the same time, researchers explore how to expand and improve ITS-student communication: for example, how to achieve more flexible and responsive discourse with students, help students integrate Web resources into learning, use mobile technologies and games to enhance student motivation and learning, and address multicultural perspectives.
Two criteria for evaluating risk prediction models.
Pfeiffer, R M; Gail, M H
2011-09-01
We propose and study two criteria to assess the usefulness of models that predict risk of disease incidence for screening and prevention, or the usefulness of prognostic models for management following disease diagnosis. The first criterion, the proportion of cases followed, PCF(q), is the proportion of individuals who will develop disease who are included in the proportion q of individuals in the population at highest risk. The second criterion is the proportion needed to follow-up, PNF(p), namely the proportion of the general population at highest risk that one needs to follow in order that a proportion p of those destined to become cases will be followed. PCF(q) assesses the effectiveness of a program that follows 100q% of the population at highest risk. PNF(p) assesses the feasibility of covering 100p% of cases by indicating how much of the population at highest risk must be followed. We show the relationship of these two criteria to the Lorenz curve and its inverse, and present distribution theory for estimates of PCF and PNF. We develop new methods, based on influence functions, for inference for a single risk model, and also for comparing the PCFs and PNFs of two risk models, both of which were evaluated in the same validation data.
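Empirical versions of the two criteria are straightforward to compute from predicted risks and case indicators. The sketch below uses hypothetical data and the simple plug-in (empirical) definitions, not the influence-function inference developed in the paper:

```python
def pcf(risks, cases, q):
    # Proportion of cases captured among the top-q fraction of the
    # population ranked by predicted risk.
    order = sorted(range(len(risks)), key=lambda i: -risks[i])
    n_top = round(q * len(risks))
    captured = sum(cases[i] for i in order[:n_top])
    return captured / sum(cases)

def pnf(risks, cases, p):
    # Smallest population fraction to follow so that a proportion p of
    # cases is covered.
    order = sorted(range(len(risks)), key=lambda i: -risks[i])
    total, captured = sum(cases), 0
    for n_top, i in enumerate(order, start=1):
        captured += cases[i]
        if captured / total >= p:
            return n_top / len(risks)
    return 1.0

# Hypothetical data: 10 people, 3 of whom become cases.
risks = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
cases = [1,   0,   1,   0,   0,   1,   0,   0,   0,   0]
print(pcf(risks, cases, 0.3))   # cases among the 30% highest-risk
print(pnf(risks, cases, 1.0))   # fraction to follow to cover all cases
```

These empirical quantities are exactly the points the paper relates to the Lorenz curve and its inverse.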
Finds in Testing Experiments for Model Evaluation
Institute of Scientific and Technical Information of China (English)
WU Ji; JIA Xiaoxia; LIU Chang; YANG Haiyan; LIU Chao
2005-01-01
To evaluate the fault location and the failure prediction models, simulation-based and code-based experiments were conducted to collect the required failure data. The PIE model was applied to simulate failures in the simulation-based experiment. Based on syntax and semantic level fault injections, a hybrid fault injection model is presented. To analyze the injected faults, the difficulty to inject (DTI) and difficulty to detect (DTD) are introduced and are measured from the programs used in the code-based experiment. Three interesting results were obtained from the experiments: 1) Failures simulated by the PIE model without consideration of the program and testing features are unreliably predicted; 2) There is no obvious correlation between the DTI and DTD parameters; 3) The DTD for syntax level faults changes in a different pattern to that for semantic level faults when the DTI increases. The results show that the parameters have a strong effect on the failures simulated, and the measurement of DTD is not strict.
CTBT Integrated Verification System Evaluation Model
Energy Technology Data Exchange (ETDEWEB)
Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.
1997-10-01
Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia`s Monitoring Systems and Technology Center and has been funded by the US Department of Energy`s Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level, modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM`s unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system`s performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
Sepúlveda, Nuno
2013-02-26
Background: The advent of next generation sequencing technology has accelerated efforts to map and catalogue copy number variation (CNV) in genomes of micro-organisms important for public health. A typical analysis of the sequence data involves mapping reads onto a reference genome, calculating the respective coverage, and detecting regions with too-low or too-high coverage (deletions and amplifications, respectively). Current CNV detection methods rely on statistical assumptions (e.g., a Poisson model) that may not hold in general, or require fine-tuning the underlying algorithms to detect known hits. We propose a new CNV detection methodology based on two Poisson hierarchical models, the Poisson-Gamma and Poisson-Lognormal, with the advantage of being sufficiently flexible to describe different data patterns, whilst robust against deviations from the often assumed Poisson model. Results: Using sequence coverage data of 7 Plasmodium falciparum malaria genomes (3D7 reference strain, HB3, DD2, 7G8, GB4, OX005, and OX006), we showed that empirical coverage distributions are intrinsically asymmetric and overdispersed in relation to the Poisson model. We also demonstrated a low baseline false positive rate for the proposed methodology using 3D7 resequencing data and simulation. When applied to the non-reference isolate data, our approach detected known CNV hits, including an amplification of the PfMDR1 locus in DD2 and a large deletion in the CLAG3.2 gene in GB4, and putative novel CNV regions. When compared to the recently available FREEC and cn.MOPS approaches, our findings were more concordant with putative hits from the highest quality array data for the 7G8 and GB4 isolates. Conclusions: In summary, the proposed methodology brings an increase in flexibility, robustness, accuracy and statistical rigour to CNV detection using sequence coverage data. 2013 Sepúlveda et al.; licensee BioMed Central Ltd.
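The overdispersion that motivates the Poisson-Gamma model can be checked directly: simulate coverage from a Gamma-distributed rate followed by Poisson sampling and compare the variance-to-mean ratio with the value of 1 expected under a pure Poisson model. The parameters below are illustrative, not fitted to the P. falciparum data:

```python
import math
import random

def poisson(rate, rng):
    # Knuth's method; adequate for moderate rates like read coverage here.
    limit, k, p = math.exp(-rate), 0, 1.0
    while p > limit:
        p *= rng.random()
        k += 1
    return k - 1

rng = random.Random(42)
mean_cov, shape = 50.0, 5.0        # Poisson-Gamma (negative binomial) params
coverage = [poisson(rng.gammavariate(shape, mean_cov / shape), rng)
            for _ in range(5000)]

m = sum(coverage) / len(coverage)
var = sum((x - m) ** 2 for x in coverage) / (len(coverage) - 1)
# A pure Poisson model would give var/m close to 1; the Gamma-mixed rate
# inflates it to roughly 1 + mean_cov/shape.
print("mean:", round(m, 1), "dispersion index:", round(var / m, 2))
```

A dispersion index far above 1, as here, is exactly the pattern that breaks Poisson-based CNV callers and motivates the hierarchical models.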
An evaluation framework for participatory modelling
Krueger, T.; Inman, A.; Chilvers, J.
2012-04-01
Strong arguments for participatory modelling in hydrology can be made on substantive, instrumental and normative grounds. These arguments have led to increasingly diverse groups of stakeholders (here anyone affecting or affected by an issue) getting involved in hydrological research and the management of water resources. In fact, participation has become a requirement of many research grants, programs, plans and policies. However, evidence of the beneficial outcomes of participation suggested by these arguments is difficult to generate and therefore rare. This is because outcomes are diverse, distributed, often tacit, and take time to emerge. In this paper we develop an evaluation framework for participatory modelling focussed on learning outcomes. Learning encompasses many of the potential benefits of participation, such as better models through diversity of knowledge and scrutiny, stakeholder empowerment, greater trust in models and ownership of subsequent decisions, individual moral development, reflexivity, relationships, social capital, institutional change, resilience and sustainability. Based on the theories of experiential, transformative and social learning, complemented by practitioner experience, our framework examines if, when and how learning has occurred. Special emphasis is placed on the role of models as learning catalysts. We map the distribution of learning between stakeholders, scientists (as a subgroup of stakeholders) and models, and we analyse what type of learning has occurred: instrumental learning (broadly cognitive enhancement) and/or communicative learning (change in interpreting meanings, intentions and values associated with actions and activities; group dynamics). We demonstrate how our framework can be translated into a questionnaire-based survey conducted with stakeholders and scientists at key stages of the participatory process, and show preliminary insights from applying the framework within a rural pollution management situation in
Akdogan, Ilgaz; Adiguzel, Esat; Yilmaz, Ismail; Ozdemir, M Bulent; Sahiner, Melike; Tufan, A Cevik
2008-10-22
This study was designed to evaluate the penicillin-induced epilepsy model in terms of the dose-response relationship between the penicillin dose used to induce seizures and hippocampal neuron number and hippocampal volume in Sprague-Dawley rats. Seizures were induced with 300, 500, 1500 and 2000 IU of penicillin-G injected intracortically in rats divided into four experimental groups, respectively. The control group was injected intracortically with saline. Animals were decapitated on day 7 of treatment and brains were removed. The total neuron number of the pyramidal cell layer of the rat hippocampus was estimated using the optical fractionator method. The volume of the same hippocampal areas was estimated using the Cavalieri method. A dose-dependent decrease in hippocampal neuron number was observed in three experimental groups (300, 500 and 1500 IU of penicillin-G), and the effects were statistically significant when compared to the control group (P<0.009). A dose-dependent decrease in hippocampal volume, on the other hand, was observed in all three of these groups; however, the difference compared to the control group was only statistically significant in the 1500 IU penicillin-G group (P<0.009). At the dose of 2000 IU penicillin-G, all animals died due to status seizures. These results suggest that the appropriate dose of penicillin has to be selected for a given experimental epilepsy study in order to produce the relevant epileptic seizures and their effects. The intracortical 1500 IU penicillin-induced epilepsy model may be a good choice for studies that investigate neuroprotective mechanisms of anti-epileptic drugs.
Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches
Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward
2015-01-01
As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log K_M values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.
Designing and evaluating representations to model pedagogy
Directory of Open Access Journals (Sweden)
Elizabeth Masterman
2013-08-01
Full Text Available This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit that attend to: (1) the underlying ontology of the domain, (2) the purpose of the task that the representation is intended to facilitate, (3) how best to support the cognitive processes of the users of the representations, (4) users’ differing needs and preferences, and (5) the tool and environment in which the representations are constructed and manipulated. Through showing how epistemic efficacy can be applied to the design and evaluation of representations, the article presents the Learning Designer, a constructionist microworld in which teachers can both assemble their learning designs and model their pedagogy in terms of students’ potential learning experience. Although the activity of modelling may add to the cognitive task of design, the article suggests that the insights thereby gained can additionally help a lecturer who wishes to reuse a particular learning design to make informed decisions about its value to their practice.
Göbel, U; Niem, V
2012-01-01
The impact factor is a purely bibliometric parameter based on the number of publications and their citations within clearly defined periods. Appropriate interpretation of the impact factor is important as it is also used worldwide for the evaluation of research performance. It is assumed that the number of medical journals reflects the extent of the diseases and patient populations involved and that this number is correlated with the level of the impact factor. 174 category lists (Subject Categories) are included in the area Health Sciences of the ISI Web of Knowledge of Thomson Reuters, 71 of which belong to the field of medicine and 50 of which have a clinical and/or application-oriented focus. These alphabetically arranged 50 category lists were consecutively numbered, randomized by odd and even numbers into 2 equal-sized groups, and then grouped according to organ specialities, sub-specialities and cross-disciplinary fields. A coin toss decided which group should be evaluated first. Only then were the category lists downloaded, and the number of journals was compared, together with the impact factors of the journals ranked 1 and 2 and of the journals at the end of the first third and at the end of the first half of each category list. The number of journals per category list varies considerably, between 5 and 252. The lists of organ specialties and cross-disciplinary fields include more than three times as many journals as those of the sub-specialities; the highest numbers of journals are listed for the cross-disciplinary fields. The impact factor of journals that rank number 1 in the lists varies considerably and ranges from 3.058 to 94.333; a similar variability exists for the journals at rank 2. On the other hand, the impact factor of journals at the end of the first third of the lists varies from 1.214 to 3.953, and for those journals at the end of the first half of a respective category
Comprehensive Evaluation Cloud Model for Ship Navigation Adaptability
Man Zhu; Y.Q. Wen; Zhou, C. H.; C.S. Xiao
2014-01-01
In this paper, using cloud model and Delphi, we build a comprehensive evaluation cloud model to solve the problems of qualitative description and quantitative transformation in ship navigation adaptability comprehensive evaluation. In the model, the normal cloud generator is used to find optimal cloud models of reviews and evaluation factors. The weight of each evaluation factor is determined by cloud model and Delphi. The floating cloud algorithm is applied to aggregate the bottom level’s ev...
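The normal cloud generator mentioned above can be sketched in a few lines, assuming the standard (Ex, En, He) parameterization of cloud models (expectation, entropy, hyper-entropy); the grade values used in the example are invented for illustration, not taken from the ship-navigation study.

```python
import math
import random

def normal_cloud(Ex, En, He, n, seed=0):
    """Forward normal cloud generator: for a qualitative concept described by
    (Ex, En, He), produce n cloud drops (x_i, mu_i), where mu_i is the
    certainty degree with which x_i belongs to the concept."""
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_i = rng.gauss(En, He)                 # second-order randomness
        x = rng.gauss(Ex, abs(En_i))             # drop position
        mu = math.exp(-((x - Ex) ** 2) / (2 * En_i ** 2))
        drops.append((x, mu))
    return drops

# Example: an evaluation grade "good" modeled as (Ex, En, He) = (80, 5, 0.5)
# on a 0-100 score scale (hypothetical values).
drops = normal_cloud(80.0, 5.0, 0.5, 2000)
```

Aggregation of bottom-level evaluations (the floating cloud algorithm of the paper) would operate on such (Ex, En, He) triples rather than on individual drops.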
Energy Technology Data Exchange (ETDEWEB)
Yamamoto, Yoshinobu, E-mail: yamamotoy@yamanashi.ac.jp [Division of Mechanical Engineering, University of Yamanashi, 4-3-11 Takeda, Kofu 400-8511 (Japan); Kunugi, Tomoaki, E-mail: kunugi@nucleng.kyoto-u.ac.jp [Department of Nuclear Engineering, Kyoto University, C3-d2S06, Kyoto-Daigaku Katsura, Nishikyo-Ku 615-8540, Kyoto (Japan)
2016-11-01
Highlights: • We show the applicability of a zero-equation heat transfer model to predicting heat transfer under a uniform wall-normal magnetic field. • Quasi-theoretical turbulent Prandtl numbers for fluids of various molecular Prandtl numbers were obtained. • Improvements in the prediction accuracy of turbulent kinetic energy and turbulent dissipation rate under magnetic fields were accomplished. - Abstract: Zero-equation heat transfer models based on a constant turbulent Prandtl number are evaluated using direct numerical simulation (DNS) data for fully developed channel flows under a uniform wall-normal magnetic field. Quasi-theoretical turbulent Prandtl numbers are estimated from DNS data for fluids of various molecular Prandtl numbers. From the viewpoint of highly accurate magneto-hydrodynamic (MHD) heat transfer prediction, the parameters of the turbulent eddy viscosity of the k–ε model are optimized under the magnetic fields. Consequently, we use the zero-equation model based on a constant turbulent Prandtl number to demonstrate MHD heat transfer, and show the applicability of this model to heat transfer prediction.
Marchesini, Ivan; Mergili, Martin; Schneider-Muntau, Barbara; Alvioli, Massimiliano; Rossi, Mauro; Guzzetti, Fausto
2015-04-01
We used the software r.slope.stability for physically-based landslide susceptibility modelling in the 90 km² Collazzone area, Central Italy, exploiting a comprehensive set of lithological, geotechnical, and landslide inventory data. The model results were evaluated against the inventory. r.slope.stability is a GIS-supported tool for modelling shallow and deep-seated slope stability and slope failure probability at comparatively broad scales. Developed as a raster module of the GRASS GIS software, r.slope.stability evaluates the slope stability for a large number of randomly selected ellipsoidal potential sliding surfaces. The bottom of the soil (for shallow slope stability) or the bedding planes of lithological layers (for deep-seated slope stability) are taken as potential sliding surfaces by truncating the ellipsoids, allowing for the analysis of relatively complex geological structures. To account for the uncertain geotechnical and geometric parameters, r.slope.stability computes the slope failure probability by testing multiple parameter combinations sampled deterministically or stochastically, and evaluating the ratio between the number of parameter combinations yielding a factor of safety below 1 and the total number of tested combinations. Any single raster cell may be intersected by multiple sliding surfaces, each associated with a slope failure probability; the most critical sliding surface is retained for each pixel. Intensive use of r.slope.stability in the Collazzone area has opened up two questions elaborated in the present work: (i) To what extent does a larger number of geotechnical tests help to better constrain the geotechnical characteristics of the study area and, consequently, to improve the model results? The ranges of values of cohesion and angle of internal friction obtained through 13 direct shear tests correspond remarkably well to the range of values suggested by a geotechnical textbook. We elaborate how far an increased number of
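The failure-probability computation described above — the fraction of sampled parameter combinations whose factor of safety falls below 1 — can be sketched as follows. For brevity an infinite-slope factor of safety stands in for the ellipsoidal sliding surfaces, and the parameter ranges are invented, not the Collazzone values.

```python
import math
import random

def slope_failure_probability(slope_deg, n=20_000, seed=1):
    """Monte Carlo slope failure probability in the spirit of
    r.slope.stability: sample uncertain geotechnical parameters, evaluate a
    factor of safety (FoS) for each combination, and return the fraction of
    combinations with FoS < 1."""
    rng = random.Random(seed)
    beta = math.radians(slope_deg)
    gamma, depth = 19.0, 2.0   # unit weight (kN/m3), slide depth (m) - assumed
    failures = 0
    for _ in range(n):
        c = rng.uniform(2.0, 10.0)                   # cohesion (kPa), assumed
        phi = math.radians(rng.uniform(20.0, 35.0))  # friction angle, assumed
        resisting = c + gamma * depth * math.cos(beta) ** 2 * math.tan(phi)
        driving = gamma * depth * math.sin(beta) * math.cos(beta)
        failures += (resisting / driving) < 1.0
    return failures / n
```

Steeper cells should yield larger failure probabilities, which is the behaviour the susceptibility map ultimately encodes.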
He, Yan-Zhang; Liu, Yi-Min; Bao, Cheng-Guang
2017-08-01
The coupled Gross-Pitaevskii equations for a two-species BEC have been solved analytically under the Thomas-Fermi approximation (TFA). Based on the analytical solution, two formulae are derived to relate the particle numbers N_A and N_B to the root mean square radii of the two kinds of atoms. Only the case in which both kinds of atoms have a nonzero distribution at the center of an isotropic trap is considered. In this case the TFA has been found to work nicely. Thus, the two formulae are applicable and are useful for the evaluation of N_A and N_B.
Lepton number violating processes in the minimal 3-3-1 model with sterile neutrinos
Machado, A C B; Pleitez, V
2016-01-01
We consider a given numerical solution for the real part of the unitary matrices which diagonalize the charged lepton mass matrices in the minimal 3-3-1 model with sterile neutrinos, and we study its phenomenological consequences in various processes involving flavor number violating interactions. In the allowed leptonic tree level decays $l_i\to l_jl_kl_k$, where $l_i=\mu,\tau$, $l_{j,k}=e,\mu$, we find that in particular the channel $\mu\to eee$ imposes a lower mass limit of $4.58$ TeV on the vector doubly charged bilepton and that the scalar contributions are negligible in this kind of process. We also test the matrix solution in the tree level Higgs reaction $h^0\to l_il_j$ and in the one-loop decays $l_i\to l_j\gamma$, and find that in the loop processes the virtual interactions of the exotic particles with leptons provide signals much larger than in the standard model, but still well below the experimental upper limits.
An Optimization Model for Design of Asphalt Pavements Based on IHAP Code Number 234
Directory of Open Access Journals (Sweden)
Ali Reza Ghanizadeh
2016-01-01
Full Text Available Pavement construction is one of the most costly parts of transportation infrastructure. Improper design and construction of pavements, in addition to the loss of the initial investment, would impose indirect costs on road users and reduce road safety. This paper proposes an optimization model to determine the optimal configuration as well as the optimum thickness of the different pavement layers based on the Iran Highway Asphalt Paving Code Number 234 (IHAP Code 234). After developing the optimization model, the optimum thickness of pavement layers for secondary rural roads, major rural roads, and freeways was determined based on the recommended prices in the “Basic Price List for Road, Runway and Railway” of Iran in 2015, and several charts were developed to determine the optimum thickness of pavement layers including asphalt concrete, granular base, and granular subbase with respect to road classification, design traffic, and resilient modulus of subgrade. The design charts confirm that in the current situation (material prices in 2015) the application of an asphalt-treated layer in the pavement structure is not cost effective. It was also shown that, as the strength of the subgrade soil increases, the subbase layer may be removed from the optimum pavement structure.
Energy Technology Data Exchange (ETDEWEB)
Lim, Kyo-Sun Sunny [Pacific Northwest National Laboratory, Richland Washington USA; Riihimaki, Laura [Pacific Northwest National Laboratory, Richland Washington USA; Comstock, Jennifer M. [Pacific Northwest National Laboratory, Richland Washington USA; Schmid, Beat [Pacific Northwest National Laboratory, Richland Washington USA; Sivaraman, Chitra [Pacific Northwest National Laboratory, Richland Washington USA; Shi, Yan [Pacific Northwest National Laboratory, Richland Washington USA; McFarquhar, Greg M. [Department of Atmospheric Sciences, University of Illinois at Urbana-Champaign, Urbana Illinois USA
2016-03-06
A new cloud-droplet number concentration (NDROP) value-added product (VAP) has been produced at the Atmospheric Radiation Measurement (ARM) Southern Great Plains (SGP) site for the 13 years from January 1998 to January 2011. The retrieval is based on surface radiometer measurements of cloud optical depth from the multi-filter rotating shadowband radiometer (MFRSR) and liquid water path from the microwave radiometer (MWR). It is only applicable to single-layered warm clouds. Validation with in situ aircraft measurements during the extended-term aircraft field campaign, Routine ARM Aerial Facility (AAF) CLOWD Optical Radiative Observations (RACORO), shows that the NDROP VAP robustly reproduces the primary mode of the in situ measured probability density function (PDF), but produces too wide a distribution, primarily caused by frequent high cloud-droplet number concentrations. Our analysis shows that error in the MWR retrievals at low liquid water paths is one possible reason for this deficiency. Modification through the liquid water path diagnosed from the coordinate solution improves not only the PDF of the NDROP VAP but also the relationship between cloud-droplet number concentration and cloud-droplet effective radius. Considering entrainment effects rather than assuming an adiabatic cloud improves the values of the NDROP retrieval by reducing the magnitude of the cloud-droplet number concentration. Aircraft measurements and retrieval comparisons suggest that retrieving the vertical distribution of cloud-droplet number concentration and effective radius is feasible with an improved representation of the mixing between environment and cloud and a better understanding of the effect of the degree of mixing on cloud properties.
Directory of Open Access Journals (Sweden)
Lluïsa Jordi Nebot
2013-03-01
Full Text Available This article examines new tutoring evaluation methods to be adopted in the course, Machine Theory, in the Escola Tècnica Superior d’Enginyeria Industrial de Barcelona (ETSEIB, Universitat Politècnica de Catalunya. These new methods have been developed in order to facilitate teaching staff work and include students in the evaluation process. Machine Theory is a required course with a large number of students. These students are divided into groups of three, and required to carry out a supervised work constituting 20% of their final mark. These new evaluation methods were proposed in response to the significant increase of students in spring semester of 2010-2011, and were pilot tested during fall semester of academic year 2011-2012, in the previous Industrial Engineering degree program. Pilot test results were highly satisfactory for students and teachers, alike, and met proposed educational objectives. For this reason, the new evaluation methodology was adopted in spring semester of 2011-2012, in the current bachelor’s degree program in Industrial Technology (Grau en Enginyeria en Tecnologies Industrials, GETI, where it has also achieved highly satisfactory results.
Human Modeling Evaluations in Microgravity Workstation and Restraint Development
Whitmore, Mihriban; Chmielewski, Cynthia; Wheaton, Aneice; Hancock, Lorraine; Beierle, Jason; Bond, Robert L. (Technical Monitor)
1999-01-01
The International Space Station (ISS) will provide long-term missions which will enable the astronauts to live, work, and conduct research in a microgravity environment. The dominant factor in space affecting the crew is "weightlessness", which creates a challenge for establishing workstation microgravity design requirements. The crewmembers will work at various workstations such as the Human Research Facility (HRF), Microgravity Sciences Glovebox (MSG) and Life Sciences Glovebox (LSG). Since the crew will spend a considerable amount of time at these workstations, it is critical that ergonomic design requirements are an integral part of the design and development effort. In order to achieve this goal, the Space Human Factors Laboratory in the Johnson Space Center Flight Crew Support Division has been tasked to conduct integrated evaluations of workstations and associated crew restraints. Thus, a two-phase approach was used: 1) ground and microgravity evaluations of the physical dimensions and layout of the workstation components, and 2) human modeling analyses of the user interface. Computer-based human modeling evaluations were an important part of the approach throughout the design and development process. Human modeling during the conceptual design phase included crew reach and accessibility of individual equipment, as well as crew restraint needs. During later design phases, human modeling has been used in conjunction with ground reviews and microgravity evaluations of the mock-ups in order to verify the human factors requirements. (Specific examples will be discussed.) This two-phase approach was the most efficient method to determine ergonomic design characteristics for workstations and restraints. The real-time evaluations provided hands-on implementation in a microgravity environment; on the other hand, only a limited number of participants could be tested. The human modeling evaluations provided a more detailed analysis of the setup. The issues identified
Cherry, S.; White, G.C.; Keating, K.A.; Haroldson, Mark A.; Schwartz, Charles C.
2007-01-01
Current management of the grizzly bear (Ursus arctos) population in Yellowstone National Park and surrounding areas requires annual estimation of the number of adult female bears with cubs-of-the-year. We examined the performance of nine estimators of population size via simulation. Data were simulated using two methods for different combinations of population size, sample size, and coefficient of variation of individual sighting probabilities. We show that the coefficient of variation does not, by itself, adequately describe the effects of capture heterogeneity, because two different distributions of capture probabilities can have the same coefficient of variation. All estimators produced biased estimates of population size, with bias decreasing as effort increased. Based on the simulation results we recommend the Chao estimator for model M_h be used to estimate the number of female bears with cubs-of-the-year; however, the estimator of Chao and Shen may also be useful depending on the goals of the research.
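The recommended Chao estimator for model M_h has a compact closed form; a minimal sketch using the bias-corrected version N = S + f1(f1-1)/(2(f2+1)) is given below, where the bear IDs in the example are hypothetical.

```python
from collections import Counter

def chao_mh(sightings):
    """Bias-corrected Chao estimator under model M_h (heterogeneous sighting
    probabilities): N = S + f1*(f1 - 1) / (2*(f2 + 1)), where S is the number
    of distinct individuals observed, f1 the number seen exactly once, and f2
    the number seen exactly twice."""
    per_individual = Counter(sightings)     # sightings per individual ID
    fk = Counter(per_individual.values())   # f_k: individuals seen exactly k times
    S = len(per_individual)
    f1, f2 = fk.get(1, 0), fk.get(2, 0)
    return S + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical season of sightings of females with cubs-of-the-year:
# F101 seen once, F102 twice, F103 once, F104 three times, F105 once
# => S = 5, f1 = 3, f2 = 1 => estimate 5 + 3*2/4 = 6.5
est = chao_mh(["F101", "F102", "F102", "F103", "F104", "F104", "F104", "F105"])
```

The estimator corrects upward for individuals never sighted, with the correction driven by how many individuals were seen only once or twice.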
Moisture evaluation by dynamic thermography data modeling
Bison, Paolo G.; Grinzato, Ermanno G.; Marinetti, Sergio
1994-03-01
This paper discusses the design of a nondestructive method for in situ detection of moistened areas in buildings and the evaluation of the water content of porous materials by thermographic analysis. The use of a heat transfer model to interpret the data makes it possible to improve measurement accuracy by taking into account the actual boundary conditions. The relative increase in computation time is balanced by the additional advantage of optimizing the testing procedure of different objects by simulating the heat transfer. Experimental results on bricks used in building restoration activities are discussed. The water content measured in different hygrometric conditions is compared with known values. A correction of the absorptivity coefficient dependent on water content is introduced.
Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
2014-12-01
Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
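The brute-force Monte Carlo reference method used above can be sketched in a few lines: BME is the prior expectation of the likelihood, approximated by averaging over prior draws (in log space for numerical stability). The conjugate toy model below is an invented example with a known analytical evidence, not one of the study's hydrological models.

```python
import math
import random

def log_bme_monte_carlo(log_likelihood, prior_sampler, n=50_000, seed=0):
    """Brute-force Monte Carlo estimate of log Bayesian model evidence:
    BME = E_prior[p(D | theta)], approximated by averaging the likelihood
    over n prior draws, using log-sum-exp for stability."""
    rng = random.Random(seed)
    logs = [log_likelihood(prior_sampler(rng)) for _ in range(n)]
    m = max(logs)
    return m + math.log(sum(math.exp(v - m) for v in logs) / n)

# Toy conjugate example (numbers assumed): one observation y = 1.2 with
# likelihood y ~ N(theta, 1) and prior theta ~ N(0, 1), so the evidence is
# the marginal density N(y; 0, 2), known in closed form.
y = 1.2
log_lik = lambda theta: -0.5 * math.log(2 * math.pi) - 0.5 * (y - theta) ** 2
estimate = log_bme_monte_carlo(log_lik, lambda rng: rng.gauss(0.0, 1.0))
exact = -0.5 * math.log(2 * math.pi * 2.0) - y ** 2 / 4.0
```

For expensive hydrological models this direct averaging becomes infeasible, which is precisely the trade-off against information criteria that the study examines.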
ZATPAC: a model consortium evaluates teen programs.
Owen, Kathryn; Murphy, Dana; Parsons, Chris
2009-09-01
How do we advance the environmental literacy of young people, support the next generation of environmental stewards and increase the diversity of the leadership of zoos and aquariums? We believe it is through ongoing evaluation of zoo and aquarium teen programming and have founded a consortium to pursue those goals. The Zoo and Aquarium Teen Program Assessment Consortium (ZATPAC) is an initiative by six of the nation's leading zoos and aquariums to strengthen institutional evaluation capacity, model a collaborative approach toward assessing the impact of youth programs, and bring additional rigor to evaluation efforts within the field of informal science education. Since its beginning in 2004, ZATPAC has researched, developed, pilot-tested and implemented a pre-post program survey instrument designed to assess teens' knowledge of environmental issues, skills and abilities to take conservation actions, self-efficacy in environmental actions, and engagement in environmentally responsible behaviors. Findings from this survey indicate that teens who join zoo/aquarium programs are already actively engaged in many conservation behaviors. After participating in the programs, teens showed a statistically significant increase in their reported knowledge of conservation and environmental issues and their abilities to research, explain, and find resources to take action on conservation issues of personal concern. Teens also showed statistically significant increases pre-program to post-program for various conservation behaviors, including "I talk with my family and/or friends about things they can do to help the animals or the environment," "I save water...," "I save energy...," "When I am shopping I look for recycled products," and "I help with projects that restore wildlife habitat."
Galbraith, Mary J.
1974-01-01
Examination of models for representing integers demonstrates that formal operational thought is required for establishing the operations on integers. Advocated is the use of many models for introducing negative numbers but, apart from addition, it is recommended that operations on integers be delayed until the formal operations stage. (JP)
RTMOD: Real-Time MODel evaluation
Energy Technology Data Exchange (ETDEWEB)
Graziani, G; Galmarini, S. [Joint Research centre, Ispra (Italy); Mikkelsen, T. [Risoe National Lab., Wind Energy and Atmospheric Physics Dept. (Denmark)
2000-01-01
The 1998-1999 RTMOD project developed a system based on automated statistical evaluation for the inter-comparison of real-time forecasts produced by long-range atmospheric dispersion models for national nuclear emergency predictions of cross-boundary consequences. The background of RTMOD was the 1994 ETEX project, which involved about 50 models run in several institutes around the world to simulate two real tracer releases covering a large part of the European territory. In the preliminary phase of ETEX, three dry runs (i.e. real-time simulations of fictitious releases) were carried out. At that time, the World Wide Web was not available to all the exercise participants, and plume predictions were therefore submitted to JRC-Ispra by fax and regular mail for subsequent processing. The rapid development of the World Wide Web in the second half of the nineties, together with the experience gained during the ETEX exercises, suggested the development of this project. RTMOD featured a web-based user-friendly interface for data submission and an interactive program module for display, intercomparison and analysis of the forecasts. RTMOD focussed on model intercomparison of concentration predictions at the nodes of a regular grid with 0.5 degrees of resolution both in latitude and in longitude, the domain grid extending from 5W to 40E and 40N to 65N. Hypothetical releases were notified around the world to the 28 model forecasters via the web with one day's advance warning. They then accessed the RTMOD web page for detailed information on the actual release, uploaded their predictions to the RTMOD server as soon as possible, and could soon after start their inter-comparison analysis with other modelers. When additional forecast data arrived, already existing statistical results would be recalculated to include the influence of all available predictions. The new web-based RTMOD concept has proven useful as a practical decision-making tool for realtime
Directory of Open Access Journals (Sweden)
S. Sulaiman
2017-06-01
Full Text Available An important element in the electric power distribution system is the underground cable. However, continuous application of high voltage to the cable may lead to insulation degradation and subsequent cable failure. Since any disruption to the electricity supply may lead to economic losses as well as lower customer satisfaction, the maintenance of cables is very important to an electrical utility company. Thus, a reliable diagnostic technique that can accurately assess the condition of in-service cable insulation is critical for deciding when a cable replacement exercise should be done. One such diagnostic technique for assessing the level of degradation within cable insulation is Polarization/Depolarization Current (PDC) analysis. This research work investigates PDC behaviour for medium voltage (MV) cross-linked polyethylene (XLPE) insulated cables, via baseline PDC measurements, and uses the measured data to simulate PDC analysis. Once PDC simulations have been achieved, the conductivity of the XLPE cable insulation can be approximated; cable conductivity serves as an indicator of the level of degradation within XLPE cable insulation. It was found that for new and unused XLPE cables, the polarization and depolarization currents have almost overlapping trendlines, as the cable insulation's conduction current is negligible. Using a linear dielectric equivalent-circuit model of the XLPE cable insulation and its governing equations, it is possible to optimize the number of parallel RC branches to simulate PDC analysis with a very high degree of accuracy. The PDC simulation model has been validated against the baseline PDC measurements.
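The linear dielectric equivalent circuit mentioned in the abstract — a leakage resistance in parallel with several series-RC (Debye) branches — can be sketched in a few lines. The branch values used in the example are illustrative assumptions, not measured cable parameters:

```python
import math

def pdc_currents(t, u0, r_branches, c_branches, r_leak, t_charge):
    """Polarization and depolarization currents of a linear dielectric modelled
    as a leakage resistance in parallel with Debye (series-RC) branches."""
    taus = [r * c for r, c in zip(r_branches, c_branches)]
    # polarization current: conduction term plus relaxation of each branch
    i_pol = u0 / r_leak + sum(u0 / r * math.exp(-t / tau)
                              for r, tau in zip(r_branches, taus))
    # depolarization current after charging for t_charge seconds
    i_dep = -sum(u0 / r * (1.0 - math.exp(-t_charge / tau)) * math.exp(-t / tau)
                 for r, tau in zip(r_branches, taus))
    return i_pol, i_dep
```

For a new cable the leakage (conduction) term is negligible, so the two currents nearly mirror each other, matching the abstract's observation of overlapping trendlines.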
Detailed modeling of sloshing in satellites tank at low Bond numbers
Lepilliez, Mathieu; Tanguy, Sebastien; Interface Team
2015-11-01
Propellant consumption is a critical issue over the whole lifetime of a satellite. During maneuvers in mission phases, the helium bubble used to pressurize the tank can move freely inside it, generating movement of the center of mass and sloshing that can disrupt the control of the satellite. In this study we present numerical results obtained from CFD computations, using an Immersed Interface Method to model the tank, with a level-set approach for both the liquid-gas interface and the solid-fluid interface. A parametric study is proposed to observe the influence of the Bond number on the resulting forces and torques generated on the tank. One can observe different steps during maneuvers under microgravity: the first part is dominated by accelerations and volume forces, which flatten the bubble against the hydrophilic tank wall. When the forcing stops, the bubble bounces back, generating sloshing by moving under the influence of inertia and capillary effects. Finally, viscous effects damp the sloshing by dissipating the kinetic energy of the bubble. These results are compared to actual in-flight force and torque data for different typical maneuvers, allowing us to characterize the period and damping of the sloshing. CNES/Airbus Defence & Space funding.
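The Bond number varied in the parametric study compares body (acceleration) forces to capillary forces; a minimal helper, with generic fluid values in the example rather than the actual tank parameters:

```python
def bond_number(rho, accel, length, sigma):
    """Bond number Bo = rho * a * L^2 / sigma: ratio of acceleration (body)
    forces to capillary forces; Bo << 1 means surface tension dominates,
    as in the low-Bond microgravity phases described above."""
    return rho * accel * length ** 2 / sigma
```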
Extending the restricted nonlinear model for wall-turbulence to high Reynolds numbers
Bretheim, Joel; Meneveau, Charles; Gayme, Dennice
2016-11-01
The restricted nonlinear (RNL) model for wall-turbulence is motivated by the long-observed streamwise-coherent structures that play an important role in these flows. The RNL equations, derived by restricting the convective term in the Navier-Stokes equations, provide a computationally efficient approach due to fewer degrees of freedom in the underlying dynamics. Recent simulations of the RNL system have been conducted for turbulent channel flows at low Reynolds numbers (Re), yielding insights into the dynamical mechanisms and statistics of wall-turbulence. Despite the computational advantages of the RNL system, simulations at high Re remain out of reach. We present a new Large Eddy Simulation (LES) framework for the RNL system, enabling its use in engineering applications at high Re such as turbulent flows through wind farms. Initial results demonstrate that, as observed at moderate Re, restricting the range of streamwise varying structures present in the simulation (i.e., limiting the band of x Fourier components or kx modes) significantly affects the accuracy of the statistics. Our results show that only a few well-chosen kx modes lead to RNL turbulence with accurate statistics, including the mean profile and the well-known inner and outer peaks in energy spectra. This work is supported by NSF (WindInspire OISE-1243482).
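The idea of limiting the band of streamwise (kx) Fourier components can be illustrated with a naive DFT filter. This is only a post-hoc projection sketch: the actual RNL dynamics restrict the nonlinear term during time integration, not the final field:

```python
import cmath
import math

def restrict_kx(signal, keep_kx):
    """Project a periodic streamwise signal onto a restricted set of Fourier
    modes; kx = 0 (the streamwise-mean component) is always retained, as the
    RNL mean flow would be."""
    n = len(signal)
    # keep kx = 0 plus the requested modes and their conjugate partners
    keep = {0} | {k % n for k in keep_kx} | {(-k) % n for k in keep_kx}
    coeffs = [sum(signal[j] * cmath.exp(-2j * math.pi * k * j / n)
                  for j in range(n)) / n for k in range(n)]
    return [sum(coeffs[k] * cmath.exp(2j * math.pi * k * j / n)
                for k in keep).real for j in range(n)]
```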
An Enhanced Fuzzy Multi-Criteria Decision Making Model with a Proposed Polygon Fuzzy Number
Directory of Open Access Journals (Sweden)
Samah Bekheet
2014-06-01
Full Text Available Decisions in real-world applications are often made in the presence of conflicting, uncertain, incomplete and imprecise information. Fuzzy Multi-Criteria Decision Making (FMCDM) provides a powerful approach for drawing rational decisions under uncertainty given in the form of linguistic values. Linguistic values are usually represented as fuzzy numbers, and most researchers adopt either triangular or trapezoidal fuzzy numbers. Since triangles, intervals, and even singletons are special cases of trapezoidal fuzzy numbers, trapezoidal fuzzy numbers are commonly regarded as generalized fuzzy numbers (GFN). In this paper, we introduce the polygon fuzzy number (PFN) as a more general form of GFN. The proposed form of PFN gives decision makers higher flexibility to express their linguistic values than other forms of fuzzy numbers. The given illustrative example demonstrates this ability for better handling of FMCDM problems.
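A polygon fuzzy number can be represented as a sorted list of (x, membership) vertices, with trapezoidal, triangular, interval and singleton numbers as special cases; a hedged sketch of the membership evaluation (the vertex lists in the example are illustrative):

```python
def pfn_membership(x, vertices):
    """Membership of x in a polygon fuzzy number given as a list of (x_i, mu_i)
    vertices sorted by x_i; piecewise linear between vertices, 0 outside."""
    if x < vertices[0][0] or x > vertices[-1][0]:
        return 0.0
    for (x0, m0), (x1, m1) in zip(vertices, vertices[1:]):
        if x0 <= x <= x1:
            if x1 == x0:           # vertical edge (e.g. singleton)
                return max(m0, m1)
            return m0 + (m1 - m0) * (x - x0) / (x1 - x0)
    return 0.0
```

A trapezoid is just the four-vertex special case, which is how PFN generalizes GFN.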
Rainwater harvesting: model-based design evaluation.
Ward, S; Memon, F A; Butler, D
2010-01-01
The rate of uptake of rainwater harvesting (RWH) in the UK has been slow to date, but is expected to gain momentum in the near future. The designs of two different new-build rainwater harvesting systems, based on simple methods, are evaluated using three different design methods, including a continuous simulation modelling approach. The RWH systems are shown to fulfill 36% and 46% of WC demand. Financial analyses reveal that RWH systems within large commercial buildings may be more financially viable than smaller domestic systems. It is identified that design methods based on simple approaches generate tank sizes substantially larger than those from continuous simulation. Comparison of the actual tank sizes and those calculated using continuous simulation established that the tanks installed are oversized for their associated demand level and catchment size. Oversizing tanks can lead to excessive system capital costs, which currently hinders the uptake of systems. Furthermore, it is demonstrated that the catchment area size is often overlooked when designing UK-based RWH systems. With respect to these findings, a recommendation for a transition from the use of simple tools to continuous simulation models is made.
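A continuous-simulation (behavioural) tank model of the kind recommended is essentially a daily mass balance; the yield-after-spillage operating rule and runoff coefficient below are common illustrative assumptions, not the paper's calibrated values:

```python
def simulate_tank(rainfall_mm, roof_area_m2, tank_m3, demand_m3_day, runoff_coeff=0.85):
    """Daily behavioural (continuous simulation) model of an RWH tank using a
    yield-after-spillage rule; returns the fraction of WC demand met."""
    store = supplied = demanded = 0.0
    for rain in rainfall_mm:
        inflow = rain / 1000.0 * roof_area_m2 * runoff_coeff
        store = min(store + inflow, tank_m3)      # spill anything above capacity
        yield_today = min(demand_m3_day, store)   # then draw today's demand
        store -= yield_today
        supplied += yield_today
        demanded += demand_m3_day
    return supplied / demanded
```

Running such a model over a long rainfall record shows when a smaller tank already saturates the achievable demand fraction, which is why it yields smaller tanks than the simple sizing rules.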
Evaluation of clinical information modeling tools.
Moreno-Conde, Alberto; Austin, Tony; Moreno-Conde, Jesús; Parra-Calderón, Carlos L; Kalra, Dipak
2016-11-01
Clinical information models are formal specifications for representing the structure and semantics of the clinical content within electronic health record systems. This research aims to define, test, and validate evaluation metrics for software tools designed to support the processes associated with the definition, management, and implementation of these models. The proposed framework builds on previous research that focused on obtaining agreement on the essential requirements in this area. A set of 50 conformance criteria were defined based on the 20 functional requirements agreed by that consensus and applied to evaluate the currently available tools. Of the 11 initiatives identified as developing tools for clinical information modeling, 9 were evaluated according to their performance on the evaluation metrics. Results show that functionalities related to management of data types, specifications, metadata, and terminology or ontology bindings have a good level of adoption. Improvements can be made in other areas focused on information modeling and associated processes. Other criteria related to displaying semantic relationships between concepts and communication with terminology servers had low levels of adoption. The proposed evaluation metrics were successfully tested and validated against a representative sample of existing tools. The results identify the need to improve tool support for information modeling and software development processes, especially in those areas related to governance, clinician involvement, and optimizing the technical validation of testing processes. This research confirmed the potential of these evaluation metrics to support decision makers in identifying the most appropriate tool for their organization.
Sadi, M; Dabir, B
2003-01-01
The Monte Carlo method is one of the most powerful techniques for modelling different processes, such as polymerization reactions. With this method, very detailed information on the structure and properties of polymers is obtained without any need to solve moment equations. In the Monte Carlo method, calculations are based on random number generation and reaction probability determination, so the number of algorithm repetitions (the selected reactor volume for modelling, which determines the number of initial molecules) is very important. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not large enough, because in that case the selected volume would not be representative of the whole system.
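The abstract's point that too few initial molecules give unrepresentative results can be illustrated with a one-step decomposition experiment; the decomposition probability and sample counts are arbitrary illustrative values:

```python
import random

def simulate_initiation(n_molecules, p_decompose, rng):
    """Fraction of initiator molecules that decompose in one time step;
    each molecule decomposes independently with probability p_decompose."""
    hits = sum(1 for _ in range(n_molecules) if rng.random() < p_decompose)
    return hits / n_molecules

rng = random.Random(42)
small = [simulate_initiation(100, 0.3, rng) for _ in range(200)]
large = [simulate_initiation(10000, 0.3, rng) for _ in range(200)]
spread = lambda xs: max(xs) - min(xs)
# the scatter shrinks roughly as 1/sqrt(n_molecules): small runs are far noisier
```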
COMPUTER MODEL FOR ORGANIC FERTILIZER EVALUATION
Directory of Open Access Journals (Sweden)
Zdenko Lončarić
2009-12-01
Full Text Available Evaluation of manure, compost and growing media quality should include enough properties to enable optimal use from both productivity and environmental points of view. The aim of this paper is to describe the basic structure of an organic fertilizer (and growing media) evaluation model, to present a model example comparing different manures, and to give an example of using a plant growth experiment to calculate the impact of growing-media pH and EC on lettuce growth. The basic structure of the model includes selection of quality indicators, interpretation of indicator values, and integration of the interpreted values into new indexes. The first step includes data input and selection of available data as basic or additional indicators, depending on possible use as fertilizer or growing media. The second part of the model uses the inputs to calculate derived quality indicators. The third step integrates values into three new indexes: a fertilizer index, a growing media index, and an environmental index. All three indexes are calculated on the basis of three different groups of indicators: basic value indicators, additional value indicators and limiting factors. The possible range of index values is 0-10, where 0-3 means low, 3-7 medium and 7-10 high quality. Comparing fresh and composted manures, higher fertilizer and environmental indexes were determined for composted manures; the highest fertilizer index was determined for composted pig manure (9.6), whereas the lowest was for fresh cattle manure (3.2). Composted manures had a high environmental index (6.0-10) for conventional agriculture, but some had no value (environmental index = 0) for organic agriculture because of too-high zinc, copper or cadmium concentrations. Growing media indexes were determined according to their impact on lettuce growth. Growing media with different pH and EC had very significant impacts on the height, dry matter mass and leaf area of lettuce seedlings. The highest lettuce
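The integration step could look like the following sketch; the weights and the rule that the worst limiting factor caps the index are assumptions made for illustration, since the paper's exact aggregation is not reproduced here:

```python
def quality_index(basic, additional, limiting):
    """Combine interpreted indicator scores (each on the 0-10 scale) into one
    index: a weighted mean of basic and additional indicators, capped by the
    worst limiting factor (a limiting factor of 0 zeroes the index, as an
    excessive heavy-metal concentration does for the environmental index)."""
    core = 0.7 * sum(basic) / len(basic) + 0.3 * sum(additional) / len(additional)
    return min(core, min(limiting)) if limiting else core
```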
Evaluation Model of Life Loss Due to Dam Failure
Huang, Dongjing
2016-04-01
Dam failure poses a serious threat to human life; however, there is still a lack of systematic research in China on the life loss it causes. From the perspective of protecting human life, an evaluation model for life loss caused by dam failure is put forward. The model is built in three progressive steps. Twenty dam failure cases in China were chosen as the basic data, considering the geographical location and construction time of the dams as well as various dam failure conditions. Twelve impact factors of life loss were then selected: severity of the flood, population at risk, understanding of dam failure, warning time, evacuation conditions, number of damaged buildings, water temperature, reservoir storage, dam height, dam type, break time, and distance from the flooded area to the dam. Through principal component analysis, four principal components were obtained: a flood-character component, a warning-system component, a human-character component, and a space-time component. After combining multivariate nonlinear regression with ten-fold validation, the evaluation model for life loss was established. The result of the proposed model is closer to the true value and fits better than the results of the RESCDAM method and M. Peng's method. The proposed model can not only be applied to evaluate life loss and its rate under various dam failure conditions in China, but also provides a reliable approach for cause analysis and prediction to reduce the risk of life loss.
DEFF Research Database (Denmark)
Morales Rodriguez, Ricardo; Meyer, Anne S.; Gernaey, Krist
2011-01-01
An assessment of a number of different process flowsheets for bioethanol production was performed using dynamic model-based simulations. The evaluation employed diverse operational scenarios such as, fed-batch, continuous and continuous with recycle configurations. Each configuration was evaluate...
Model evaluation of RIMPUFF within complex terrain using an 41Ar radiological dataset
DEFF Research Database (Denmark)
Dyer, Leisa L.; Astrup, Poul
2012-01-01
The newly updated atmospheric dispersion model RIMPUFF is evaluated using routine releases of 41Ar from the former HIFAR research reactor located in Sydney, Australia. A large number of 41Ar measurements from a network of environmental gamma detectors are used to evaluate the model under a range of atmospheric stability conditions within the complex terrain area. Model sensitivity to input data is analysed, including meteorological station data, land use maps, surface roughness and wind interpolation schemes. Various model evaluation tools are used, such as gamma dose rate plots, exploratory data analyses and relevant statistical performance measures. Copyright © 2012 Inderscience Enterprises Ltd.
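Typical statistical performance measures for such dispersion-model evaluations include the fraction of predictions within a factor of two of the observations (FAC2) and the fractional bias (FB); hedged helpers in the spirit of those measures, not RIMPUFF's own evaluation code:

```python
def fac2(pred, obs):
    """Fraction of paired predictions within a factor of two of observations
    (only strictly positive pairs are comparable on a ratio scale)."""
    pairs = [(p, o) for p, o in zip(pred, obs) if p > 0 and o > 0]
    return sum(1 for p, o in pairs if 0.5 <= p / o <= 2.0) / len(pairs)

def fractional_bias(pred, obs):
    """FB = 2*(mean_obs - mean_pred)/(mean_obs + mean_pred); 0 is unbiased,
    negative values indicate overprediction."""
    mp, mo = sum(pred) / len(pred), sum(obs) / len(obs)
    return 2.0 * (mo - mp) / (mo + mp)
```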
M. Boumans
2013-01-01
This article proposes a more objective Type B evaluation. This can be achieved when Type B uncertainty evaluations are model-based. This implies, however, grey-box modelling and validation instead of white-box modelling and validation which are appropriate for Type A evaluation.
The multi-scale aerosol-climate model PNNL-MMF: model description and evaluation
Directory of Open Access Journals (Sweden)
M. Wang
2011-03-01
Full Text Available Anthropogenic aerosol effects on climate produce one of the largest uncertainties in estimates of radiative forcing of past and future climate change. Much of this uncertainty arises from the multi-scale nature of the interactions between aerosols, clouds and large-scale dynamics, which are difficult to represent in conventional general circulation models (GCMs). In this study, we develop a multi-scale aerosol-climate model that treats aerosols and clouds across different scales, and evaluate the model performance, with a focus on aerosol treatment. This new model is an extension of a multi-scale modeling framework (MMF) model that embeds a cloud-resolving model (CRM) within each grid column of a GCM. In this extension, the effects of clouds on aerosols are treated by using an explicit-cloud parameterized-pollutant (ECPP) approach that links aerosol and chemical processes on the large-scale grid with statistics of cloud properties and processes resolved by the CRM. A two-moment cloud microphysics scheme replaces the simple bulk microphysics scheme in the CRM, and a modal aerosol treatment is included in the GCM. With these extensions, this multi-scale aerosol-climate model allows the explicit simulation of aerosol and chemical processes in both stratiform and convective clouds on a global scale.
Simulated aerosol budgets in this new model are in the range of other model studies. Simulated gas and aerosol concentrations are in reasonable agreement with observations (within a factor of 2 in most cases), although the model underestimates black carbon concentrations at the surface by a factor of 2–4. Simulated aerosol size distributions are in reasonable agreement with observations in the marine boundary layer and in the free troposphere, while the model underestimates the accumulation mode number concentrations near the surface, and overestimates the accumulation mode number concentrations in the middle and upper free troposphere by a factor
The multi-scale aerosol-climate model PNNL-MMF: model description and evaluation
Wang, M.; Ghan, S.; Easter, R.; Ovchinnikov, M.; Liu, X.; Kassianov, E.; Qian, Y.; Gustafson, W. I., Jr.; Larson, V. E.; Schanen, D. P.; Khairoutdinov, M.; Morrison, H.
2011-03-01
Anthropogenic aerosol effects on climate produce one of the largest uncertainties in estimates of radiative forcing of past and future climate change. Much of this uncertainty arises from the multi-scale nature of the interactions between aerosols, clouds and large-scale dynamics, which are difficult to represent in conventional general circulation models (GCMs). In this study, we develop a multi-scale aerosol-climate model that treats aerosols and clouds across different scales, and evaluate the model performance, with a focus on aerosol treatment. This new model is an extension of a multi-scale modeling framework (MMF) model that embeds a cloud-resolving model (CRM) within each grid column of a GCM. In this extension, the effects of clouds on aerosols are treated by using an explicit-cloud parameterized-pollutant (ECPP) approach that links aerosol and chemical processes on the large-scale grid with statistics of cloud properties and processes resolved by the CRM. A two-moment cloud microphysics scheme replaces the simple bulk microphysics scheme in the CRM, and a modal aerosol treatment is included in the GCM. With these extensions, this multi-scale aerosol-climate model allows the explicit simulation of aerosol and chemical processes in both stratiform and convective clouds on a global scale. Simulated aerosol budgets in this new model are in the ranges of other model studies. Simulated gas and aerosol concentrations are in reasonable agreement with observations (within a factor of 2 in most cases), although the model underestimates black carbon concentrations at the surface by a factor of 2-4. Simulated aerosol size distributions are in reasonable agreement with observations in the marine boundary layer and in the free troposphere, while the model underestimates the accumulation mode number concentrations near the surface, and overestimates the accumulation mode number concentrations in the middle and upper free troposphere by a factor of about 2. The
Directory of Open Access Journals (Sweden)
Tetsuya Oda
2012-01-01
Full Text Available Node placement problems have long been investigated in the optimization field due to numerous applications in location science and classification. Recently, facility location problems have also shown their usefulness to communication networks, and especially to Wireless Mesh Networks (WMNs), where facilities could be servers or routers offering connectivity services to clients. In this paper, we deal with the effect of mutation and crossover operators in a GA for the node placement problem. We evaluate the performance of the proposed system using different selection operators and different distributions of router nodes, considering the number of covered users as a parameter. The simulation results show that for Linear and Exponential ranking methods, the system has good performance for all rates of crossover and mutation.
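Linear ranking selection, one of the methods evaluated, assigns selection probabilities by rank rather than raw fitness; a sketch following the standard Baker formulation, where the selection pressure sp is a free parameter not taken from the paper:

```python
def linear_ranking_probs(fitnesses, sp=1.5):
    """Linear ranking selection (Baker): sort by fitness and give the
    individual of rank r (worst r = 0, best r = N-1) probability
    (2-sp)/N + 2*r*(sp-1)/(N*(N-1)); sp must lie in [1, 2]."""
    n = len(fitnesses)
    order = sorted(range(n), key=lambda i: fitnesses[i])
    probs = [0.0] * n
    for rank, i in enumerate(order):
        probs[i] = (2.0 - sp) / n + 2.0 * rank * (sp - 1.0) / (n * (n - 1))
    return probs
```

Because only ranks matter, the scheme keeps selection pressure constant even when raw fitness values (e.g. numbers of covered users) differ by orders of magnitude.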
Directory of Open Access Journals (Sweden)
Margareth Regina Dibo
2013-07-01
Full Text Available Introduction: Here, we evaluated sweeping methods used to estimate the number of immature Aedes aegypti in large containers. Methods: Third/fourth instars and pupae at a 9:1 ratio were placed in three types of containers, each with three different water levels. Two sweeping methods were tested: water-surface sweeping and five-sweep netting. The data were analyzed using linear regression. Results: The five-sweep netting technique was more suitable for drums and water tanks, while the water-surface sweeping method provided the best results for swimming pools. Conclusions: Both sweeping methods are useful tools in epidemiological surveillance programs for the control of Aedes aegypti.
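The linear-regression step relating sweep counts to the true number of immatures can be done with a plain least-squares fit; a minimal helper (the data points in the test are synthetic, not the study's counts):

```python
def linfit(x, y):
    """Ordinary least-squares fit y = a + b*x, returning (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope
```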
Estimating quasi-loglinear models for a Rasch table if the number of items is large
Kelderman, Henk
1987-01-01
The Rasch Model and various extensions of this model can be formulated as a quasi loglinear model for the incomplete subgroup x score x item response 1 x ... x item response k contingency table. By comparing various loglinear models, specific deviations of the Rasch model can be tested. Parameter es
Yamamoto, Takehisa; Hayama, Yoko; Hidano, Arata; Kobayashi, Sota; Muroga, Norihiko; Ishikawa, Kiyoyasu; Ogura, Aki; Tsutsui, Toshiyuki
2014-01-01
Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
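The two-stage bootstrap described (farms, then animals, resampled with replacement at a fixed total sample size) can be sketched as follows; the toy data in the test are fully farm-clustered to make the clustering effect visible:

```python
import random
from statistics import pstdev

def bootstrap_prevalence(data, animals_per_farm, n_farms, reps, rng):
    """data: {farm_id: [0/1 resistance flags per animal]}.  Resample farms,
    then animals within each farm, with replacement, keeping the total sample
    size fixed at n_farms * animals_per_farm; return the spread (sd) of the
    prevalence estimate across replications."""
    farms = list(data)
    estimates = []
    for _ in range(reps):
        isolates = []
        for _ in range(n_farms):
            farm = rng.choice(farms)
            isolates.extend(rng.choices(data[farm], k=animals_per_farm))
        estimates.append(sum(isolates) / len(isolates))
    return pstdev(estimates)
```

With resistance clustered by farm, spreading the fixed budget over more farms (1 animal each) samples more independent clusters and gives the tighter estimate, as the study concluded.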
Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.
2013-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10⁶). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
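The bias, variance and MSE image-quality metrics used for the comparison decompose naturally per voxel (MSE = variance + bias²); a hedged sketch over repeated reconstructions of the same phantom, not the project's actual evaluation code:

```python
from statistics import mean

def image_metrics(recons, truth):
    """Average per-voxel bias, variance and MSE over repeated reconstructions.
    recons: list of reconstructed images (equal-length value lists);
    truth: the reference (ground-truth) image."""
    bias, var, mse = [], [], []
    for v, t in enumerate(truth):
        vals = [r[v] for r in recons]
        m = mean(vals)
        b = m - t
        s2 = mean((x - m) ** 2 for x in vals)
        bias.append(b)
        var.append(s2)
        mse.append(s2 + b * b)  # MSE = variance + bias^2
    return mean(bias), mean(var), mean(mse)
```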
Shields, Matt
The development of Micro Aerial Vehicles has been hindered by the poor understanding of the aerodynamic loading and stability and control properties of the low Reynolds number regime in which the inherent low aspect ratio (LAR) wings operate. This thesis experimentally evaluates the static and damping aerodynamic stability derivatives to provide a complete aerodynamic model for canonical flat plate wings of aspect ratios near unity at Reynolds numbers under 1 × 10⁵. This permits the complete functionality of the aerodynamic forces and moments to be expressed and the equations of motion to be solved, thereby identifying the inherent stability properties of the wing and providing a basis for characterizing the stability of full vehicles. The influence of the tip vortices during sideslip perturbations is found to induce a loading condition referred to as roll stall: a significant roll moment created by the spanwise induced-velocity asymmetry related to the displacement of the vortex cores relative to the wing. Roll stall is manifested by a roll moment that increases linearly at low to moderate angles of attack, followed by a stall event similar to a lift polar; this behavior is not experienced by conventional (high aspect ratio) wings. The resulting large magnitude of the roll stability derivative, Cl,beta, and the lack of roll damping, Cl,p, create significant modal responses of the lateral state variables; a linear model used to evaluate these modes is shown to accurately reflect the solution obtained by numerically integrating the nonlinear equations. An unstable Dutch roll mode dominates the behavior of the wing for small perturbations from equilibrium, and in the presence of angle of attack oscillations a previously unconsidered coupled mode, referred to as roll resonance, is seen to develop and drive the bank angle away from equilibrium. Roll resonance requires a linear time variant (LTV) model to capture the behavior of the bank angle, which is attributed to the
Evaluation of video quality models for multimedia
Brunnström, Kjell; Hands, David; Speranza, Filippo; Webster, Arthur
2008-02-01
The Video Quality Experts Group (VQEG) is a group of experts from industry, academia, government and standards organizations working in the field of video quality assessment. Over the last 10 years, VQEG has focused its efforts on the evaluation of objective video quality metrics for digital video. Objective video metrics are mathematical models that predict the picture quality as perceived by an average observer. VQEG has completed validation tests for full reference objective metrics for the Standard Definition Television (SDTV) format. From this testing, two ITU Recommendations were produced. This standardization effort is of great relevance to the video industries because objective metrics can be used for quality control of the video at various stages of the delivery chain. Currently, VQEG is undertaking several projects in parallel. The most mature project is concerned with objective measurement of multimedia content. This project is probably the largest coordinated set of video quality testing ever embarked upon. The project will involve the collection of a very large database of subjective quality data. About 40 subjective assessment experiments and more than 160,000 opinion scores will be collected. These will be used to validate the proposed objective metrics. This paper describes the test plan for the project, its current status, and one of the multimedia subjective tests.
Quantum distance and the Euler number index of the Bloch band in a one-dimensional spin model.
Ma, Yu-Quan
2014-10-01
We study the Riemannian metric and the Euler characteristic number of the Bloch band in a one-dimensional spin model with multisite spin exchange interactions. The Euler number of the Bloch band originates from the Gauss-Bonnet theorem on the topological characterization of the closed Bloch states manifold in the first Brillouin zone. We study this approach analytically in a transverse field XY spin chain with three-site spin coupled interactions. We define a class of cyclic quantum distance on the Bloch band and on the ground state, respectively, as a local characterization for quantum phase transitions. Specifically, we give a general formula for the Euler number by means of the Berry curvature in the case of two-band models, which reveals its essential relation to the first Chern number of the band insulators. Finally, we show that the ferromagnetic-paramagnetic phase transition at zero temperature can be distinguished by the Euler number of the Bloch band.
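The Gauss-Bonnet route from the quantum metric to the Euler number, and the Chern number it relates to for two-band models, can be stated compactly; the final two-band relation is a hedged reading of the abstract's "essential relation", not the paper's exact formula:

```latex
% Gauss--Bonnet on the closed Bloch-state manifold M, with K the Gaussian
% curvature of the Riemannian (quantum) metric g_{\mu\nu}:
\chi(M) = \frac{1}{2\pi} \int_{M} K \,\mathrm{d}A .
% First Chern number from the Berry curvature F over the Brillouin-zone torus:
C = \frac{1}{2\pi} \int_{\mathrm{BZ}} F \,\mathrm{d}^{2}k .
% For a two-band model the Bloch states define a map to the Bloch sphere, so
% the Euler number is tied to the Chern number, e.g. \chi = 2\,|C| .
```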
Directory of Open Access Journals (Sweden)
Fateme Tajik
2013-01-01
Full Text Available Introduction: Lack of knowledge about root canal anatomy can cause mistakes in diagnosis and treatment planning, and failure of treatment. The mandibular canine is usually single-rooted, but it may have two roots or more than one root canal. The purpose of this study was to evaluate the number of roots and root canals of the mandibular canine using digital radiography at different angles and to compare the results with the clearing method. Materials & Methods: This study was a diagnostic test. Two hundred human mandibular canine teeth were studied. Digital radiographs of the teeth were prepared from mesiodistal, buccolingual and 20° mesial views. Radiographic evaluation was done by two observers (an oral radiologist and an endodontist) separately. Then dental clearing was performed. Data analysis was done using SPSS ver. 17 software and the McNemar test. Findings of digital radiography in the mesiodistal view showed that 180 teeth (90%) were single-canal and 20 teeth (10%) had two canals, which was not different from the clearing method (P=0.25). In the 20° mesial view, 192 teeth (96%) were single-canal and 8 teeth (4%) had two canals, which was different from the clearing method (P=0.012). Conclusion: Despite the low prevalence of anatomical variations in the mandibular canine in this in vitro study, and although the mesiodistal radiographic view did not differ significantly from the clearing technique, CBCT is recommended for fast and complete diagnosis of unusual root canals.
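The comparison of radiographic readings against the clearing gold standard rests on a paired test of discordant counts. A minimal sketch of the exact McNemar test; the discordant counts 2 and 5 below are hypothetical illustrations, not the study's data:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test for paired binary classifications.
    b, c: discordant counts (method A positive / B negative, and vice versa).
    Returns the two-sided p-value under H0: the methods disagree symmetrically."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)      # two-sided: double the smaller tail, cap at 1

# hypothetical discordant counts for radiography vs clearing on 200 teeth
p = mcnemar_exact(2, 5)
```

With only 7 discordant pairs the exact binomial form is preferable to the chi-square approximation, which is why it suits small in vitro samples like this one.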
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha
2017-06-22
To predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State, we built a computational model using fuzzy logic based on Mamdani's inference method. For the fuzzification of the input variables particulate matter, ozone, sulfur dioxide and apparent temperature, we considered two membership functions for each variable, with the linguistic labels good and bad. For the output variable, the number of hospitalizations for asthma and pneumonia, we considered five membership functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in the year 2007, and the output of the model was correlated with the actual hospitalization data at lags from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant at those lags. In 2007, 1,710 hospitalizations for pneumonia and asthma were recorded in São José dos Campos, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output showed a positive and significant correlation (r = 0.38) with the actual data; the accuracies were higher for sulfur dioxide at lags 0 and 2 and for particulate matter at lag 1. Fuzzy modeling proved accurate for relating pollutant exposure to hospitalizations for pneumonia and asthma.
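A Mamdani pipeline of this kind (fuzzify, fire rules with min, aggregate with max, defuzzify by centroid) can be sketched in a few lines. This is a toy with two inputs and three output sets; the membership ranges and rule base are invented for illustration and are not the paper's calibrated model:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani(pm, temp):
    """Toy Mamdani inference on an illustrative 0-100 severity scale."""
    # fuzzification: two linguistic sets ('good', 'bad') per input
    pm_bad, pm_good = tri(pm, 30, 100, 170), tri(pm, -70, 0, 70)
    t_bad, t_good = tri(temp, 30, 100, 170), tri(temp, -70, 0, 70)
    # rule base (AND = min): both bad -> high, mixed -> medium, both good -> low
    w_high = min(pm_bad, t_bad)
    w_med = max(min(pm_bad, t_good), min(pm_good, t_bad))
    w_low = min(pm_good, t_good)
    # aggregate the clipped output sets (max) and take the centroid
    num = den = 0.0
    for y in range(101):
        mu = max(min(w_low, tri(y, -50, 0, 50)),
                 min(w_med, tri(y, 0, 50, 100)),
                 min(w_high, tri(y, 50, 100, 150)))
        num += y * mu
        den += mu
    return num / den if den else 0.0

hospitalizations_index = mamdani(90, 90)   # both inputs 'bad'
```

The paper's model has four inputs, two sets each, and five output sets, but the mechanics are the same: rule strengths clip the output sets, and the centroid of the aggregated shape is the crisp prediction.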
Evaluating MJO Event Initiation and Decay in the Skeleton Model using an RMM-like Index
2015-11-25
The decrease in the number of separate MJO events in the skeleton model while maintaining a nearly equal or slightly... Cites: Thual, S., and A. J. Majda (2015), A skeleton model for the MJO with refined vertical structure, Climate Dynamics. Authors: Justin P. Stachnik, Duane E. ...
Performance evaluation of quality monitor models in spot welding
Institute of Scientific and Technical Information of China (English)
Zhang Zhongdian; Li Dongqing; Wang Kai
2005-01-01
The performance of quality monitor models in spot welding directly determines monitoring precision, so it is crucial to evaluate it. Previously, the mean square error (MSE) was often used to evaluate model performance, but it only reflects the total error over a finite set of specimens and cannot show whether the quality information inferred from a model is sufficiently accurate and reliable. For this reason, drawing on measurement error theory, a new way to evaluate model performance from the error distribution is developed: the quality information inferred from a model is accurate and reliable only if the model's error distribution is sufficiently correct and precise.
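The abstract's point, that MSE alone cannot reveal whether inferred quality information is reliable, can be illustrated with two hypothetical error samples that share the same MSE but have very different error distributions:

```python
import statistics as st

# two hypothetical monitor models whose finite-sample errors have equal MSE
errors_a = [0.5] * 8 + [-0.5] * 8   # centred on zero: unbiased
errors_b = [0.5] * 16               # systematically offset: biased

def mse(e):
    return sum(x * x for x in e) / len(e)

bias_a, bias_b = st.mean(errors_a), st.mean(errors_b)
# MSE cannot distinguish the two models, but the error distributions can:
# model A is accurate on average, model B consistently overestimates
```

Examining the distribution (here just its mean, but equally its spread and shape) is exactly the extra information the error-distribution approach provides over a single MSE figure.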
Modelling for optimal number of line storage reservoirs in a water ...
African Journals Online (AJOL)
user
the reservoir system location be integrated into the ... systems experience (figure 1), storage facilities within the system .... Therefore, a computer programme for designing cost effective water distribution networks called Economic Number of.
Model Experiments with Low Reynolds Number Effects in a Ventilated Room
DEFF Research Database (Denmark)
Nielsen, Peter V.; Filholm, Claus; Topp, Claus;
The flow in a ventilated room will not always be a fully developed turbulent flow. Reduced air change rates owing to energy considerations and the application of natural ventilation with openings in the outer wall will give room air movements with low turbulence effects. This paper discusses the isothermal low Reynolds number flow from a slot inlet in the end wall of the room. The experiments are made at a scale of 1 to 5. Measurements indicate a low Reynolds number effect in the wall jet flow. The virtual origin of the wall jet moves forward in front of the opening at small Reynolds numbers, an effect that is also known from measurements on free jets. The growth rate of the jet, or the length scale, increases and the velocity decay factor decreases at small Reynolds numbers.
Baldoví, José J; Gaita-Ariño, Alejandro; Coronado, Eugenio
2015-07-28
In a previous study, we introduced the Radial Effective Charge (REC) model to study the magnetic properties of lanthanide single ion magnets. Now, we perform an empirical determination of the effective charges (Zi) and radial displacements (Dr) of this model using spectroscopic data. This systematic study allows us to relate Dr and Zi with chemical factors such as the coordination number and the electronegativities of the metal and the donor atoms. This strategy is being used to drastically reduce the number of free parameters in the modeling of the magnetic and spectroscopic properties of f-element complexes.
McLerran, Larry; Skokov, Vladimir V.
2017-01-01
We modify the McLerran-Venugopalan model to include only a finite number of sources of color charge. In the effective action for such a system of a finite number of sources, there is a point-like interaction and a Coulombic interaction. The point interaction generates the standard fluctuation term in the McLerran-Venugopalan model. The Coulomb interaction generates the charge screening originating from well known evolution in x. Such a model may be useful for computing angular harmonics of flow measured in high energy hadron collisions for small systems. In this paper we provide a basic formulation of the problem on a lattice.
Definition of Magnetic Monopole Numbers for SU(N) Lattice Gauge-Higgs Models
Hollands, S
2001-01-01
A geometric definition for a magnetic charge of Abelian monopoles in SU(N) lattice gauge theories with Higgs fields is presented. The corresponding local monopole number defined for almost all field configurations does not require gauge fixing and is stable against small perturbations. Its topological content is that of a 3-cochain. A detailed prescription for calculating the local monopole number is worked out. Our method generalizes a magnetic charge definition previously invented by Phillips and Stone for SU(2).
Global daily reference evapotranspiration modeling and evaluation
Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.
2008-01-01
Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration's Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used in a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ~100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis to more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world.
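The effect reported above, correlations rising from about 0.97 daily to above 0.99 at 10-day aggregation, is what one expects when aggregation averages out uncorrelated day-to-day noise. A synthetic sketch (the seasonal series and noise level are invented stand-ins, not CIMIS or GDAS data):

```python
import math
import random

random.seed(42)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def block_mean(v, n=10):
    """Aggregate a daily series into n-day means."""
    return [sum(v[i:i + n]) / len(v[i:i + n]) for i in range(0, len(v), n)]

# synthetic daily reference ET: a seasonal cycle, plus noise standing in
# for the point-vs-grid-cell mismatch between station and GDAS estimates
days = 1825                                   # five years, as in the comparison
station = [5 + 3 * math.sin(2 * math.pi * d / 365) for d in range(days)]
grid = [s + random.gauss(0, 1.0) for s in station]

r_daily = pearson(station, grid)
r_10day = pearson(block_mean(station), block_mean(grid))   # noise averages out
```

Averaging over 10 days cuts the noise variance by roughly a factor of 10 while leaving the slow seasonal signal almost untouched, so the aggregated correlation rises, mirroring the daily-versus-10-day pattern in the study.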
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.
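An elitist GA on a multi-modal "hills" landscape can be sketched briefly. The fitness function, operators and parameters below are illustrative stand-ins, not the paper's model problem:

```python
import math
import random

random.seed(1)
GENES, POP, GENS = 4, 40, 60

def fitness(x):
    # multi-modal 'hills' landscape: each gene contributes a row of sine hills
    return sum(math.sin(5 * math.pi * xi) ** 2 for xi in x)

def mutate(x, rate=0.1, step=0.05):
    return [min(1.0, max(0.0, xi + random.gauss(0, step))) if random.random() < rate else xi
            for xi in x]

def crossover(a, b):
    cut = random.randrange(1, GENES)         # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
history = []                                  # best fitness per generation
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    history.append(fitness(pop[0]))
    elite = pop[:4]                           # elitism: best-so-far always survives
    parents = pop[:POP // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(history)
```

Because the elite individuals are copied unchanged into each new generation, the best fitness in `history` is non-decreasing, which matches the reliability the abstract reports for the GA approach.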
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso
2016-01-01
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process. IMPORTANCE We developed a model to enable the quantitative assessment of bacterial survivors of inactivation procedures.
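The key property at work here, that Poisson-distributed initial counts thinned by an independent per-cell survival probability remain Poisson with mean λp, is easy to check by simulation; the survival probability below is an arbitrary illustrative value, not an experimentally fitted one:

```python
import math
import random

random.seed(7)
LAM, P_SURVIVE, TRIALS = 2.0, 0.3, 20000   # P_SURVIVE is illustrative only

def poisson(lam):
    """Knuth's method, adequate for small lambda."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

survivors = []
for _ in range(TRIALS):
    n0 = poisson(LAM)                                   # initial cells in a well
    survivors.append(sum(random.random() < P_SURVIVE for _ in range(n0)))

mean = sum(survivors) / TRIALS
var = sum((s - mean) ** 2 for s in survivors) / TRIALS
# thinning a Poisson(lam) count keeps it Poisson with mean lam * p, so the
# sample mean and variance should both be close to 0.6
```

The mean ≈ variance check is a quick diagnostic for Poisson behaviour, the same property the authors test for in their simulated distributions of survivors.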
Directory of Open Access Journals (Sweden)
Fernando Augusto de Souza
2014-07-01
Full Text Available The aim of this research was to evaluate the influence of the number and position of nutrient levels used in dose-response trials on the estimation of the optimal level (OL) and on the goodness of fit of the models: quadratic polynomial (QP), exponential (EXP), linear response plateau (LRP) and quadratic response plateau (QRP). Data from dose-response trials conducted at FCAV-Unesp Jaboticabal were used, considering homogeneity of variances and normal distribution. The fit of the models was evaluated with the following statistics: adjusted coefficient of determination (R²adj), coefficient of variation (CV) and the sum of squared deviations (SSD). In the QP and EXP models, small changes in the placement and distribution of the levels caused large changes in the estimation of the OL. The LRP model was deeply influenced by the absence or presence of a level between the response and stabilization phases (the change from the straight line to the plateau). The QRP needed more levels in the response phase, and the last level in the stabilization phase, to estimate the plateau correctly. It was concluded that the OL and the fit of the models depend on the positioning and number of the levels and on the specific characteristics of each model; levels placed near the true requirement and not too widely spaced are better for estimating the OL.
Dynamic Ambient Noise Model (DANM) Evaluation Using Port Everglades Data
2006-05-31
Uses either the Thompson ASTRAL model (Version 5.0) or the Parabolic Equation (PE, Version 5.1) model for calculating the TL grid. PE 5.1 is configured to use the Range... the hour (less than 0.76 dB for all frequencies; see Table 7). The PE model is expected to be more accurate (but slower) than ASTRAL for highly...
Evaluation of the perceptual grouping parameter in the CTVA model
Directory of Open Access Journals (Sweden)
Manuel Cortijo
2005-01-01
Full Text Available The CODE Theory of Visual Attention (CTVA) is a mathematical model explaining the effects of grouping by proximity and distance upon reaction times and accuracy of response with regard to elements in the visual display. The predictions of the theory agree quite acceptably, in one and two dimensions (CTVA-2D), with the experimental results (reaction times and accuracy of response). The difference between reaction times for the compatible and incompatible responses, known as the response-compatibility effect, is also acceptably predicted, except at small distances and high numbers of distractors. Further results using the same paradigm at even smaller distances have now been obtained, showing greater discrepancies. We therefore introduce a method to evaluate the strength of sensory evidence (the eta parameter), which takes grouping by similarity into account and minimizes these discrepancies.
Rotating Square-Ended U-Bend Using Low-Reynolds-Number Models
Directory of Open Access Journals (Sweden)
Konstantinos-Stephen P. Nikas
2005-01-01
bend is better reproduced by the low-Re models. Turbulence levels within the rotating U-bend are underpredicted, but DSM models produce a more realistic distribution. Along the leading side, all models overpredict heat transfer levels just after the bend. Along the trailing side, the heat transfer predictions of the low-Re DSM with the NYap are close to the measurements.
Minimum required number of specimen records to develop accurate species distribution models
Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, J.J.; Raes, N.
2016-01-01
Species distribution models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being t
Minimum required number of specimen records to develop accurate species distribution models
Proosdij, van A.S.J.; Sosef, M.S.M.; Wieringa, Jan; Raes, N.
2015-01-01
Species Distribution Models (SDMs) are widely used to predict the occurrence of species. Because SDMs generally use presence-only data, validation of the predicted distribution and assessing model accuracy is challenging. Model performance depends on both sample size and species’ prevalence, being
Rhode Island Model Evaluation & Support System: Teacher. Edition III
Rhode Island Department of Education, 2015
2015-01-01
Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching and learning. The primary purpose of the Rhode Island Model Teacher Evaluation and Support System (Rhode Island Model) is to help all teachers improve. Through the Model, the goal is to help create a…
Evaluation of Fabric Hand with Grey Element Model
Institute of Scientific and Technical Information of China (English)
CHEN Dong-sheng; GAN Ying-jin; BAI Yue
2004-01-01
A premium composite grey element model is established and used for objective evaluation of fabric hand. Fabric hand is regarded as a grey system and the model is composed of fabric mechanical properties, which are primary hand attributes. Based on comparison with a standard model, fabric hand can be objectively evaluated.
An Evaluation of Muzzle Flash Prediction Models
1983-11-01
Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu
2017-02-15
Despite effective inactivation procedures, small numbers of bacterial cells may still remain in food samples. The risk that bacteria will survive these procedures has not been estimated precisely because deterministic models cannot be used to describe the uncertain behavior of bacterial populations. We used the Poisson distribution as a representative probability distribution to estimate the variability in bacterial numbers during the inactivation process. Strains of four serotypes of Salmonella enterica, three serotypes of enterohemorrhagic Escherichia coli, and one serotype of Listeria monocytogenes were evaluated for survival. We prepared bacterial cell numbers following a Poisson distribution (indicated by the parameter λ, which was equal to 2) and plated the cells in 96-well microplates, which were stored in a desiccated environment at 10% to 20% relative humidity and at 5, 15, and 25°C. The survival or death of the bacterial cells in each well was confirmed by adding tryptic soy broth as an enrichment culture. Changes in the Poisson distribution parameter during the inactivation process, which represent the variability in the numbers of surviving bacteria, were described by nonlinear regression with an exponential function based on a Weibull distribution. We also examined random changes in the number of surviving bacteria using a random number generator and computer simulations to determine whether the number of surviving bacteria followed a Poisson distribution during the bacterial death process by use of the Poisson process. For small initial cell numbers, more than 80% of the simulated distributions (λ = 2 or 10) followed a Poisson distribution. The results demonstrate that variability in the number of surviving bacteria can be described as a Poisson distribution by use of the model developed by use of the Poisson process.
¿Evaluating or patchworking? An Evaluand-oriented Responsive Evaluation Model
Directory of Open Access Journals (Sweden)
Iván Jorrín Abellán
2009-12-01
Full Text Available This article presents the CSCL Evaluand-Oriented Responsive Evaluation Model, an evolving evaluation model, conceived as a “boundary object”, to be used in the evaluation of a wide range of CSCL systems. The model relies on a responsive evaluation approach and tries to provide potential evaluators with a practical tool to evaluate CSCL systems. The article is driven by a needlework metaphor that tries to illustrate the complexity of the traditions, perspectives and practical issues that converge in this proposal.
Evaluation of black carbon estimations in global aerosol models
Koch, D.; Schulz, M.; McNaughton, C.; Spackman, J.R.; Balkanski, Y.; Bauer, S.; Krol, M.C.
2009-01-01
We evaluate black carbon (BC) model predictions from the AeroCom model intercomparison project by considering the diversity among year 2000 model simulations and comparing model predictions with available measurements. These model-measurement intercomparisons include BC surface and aircraft concentrations.
Issues in Value-at-Risk Modeling and Evaluation
J. Danielsson; C.G. de Vries (Casper); B.N. Jorgensen (Bjørn); P.F. Christoffersen (Peter); F.X. Diebold (Francis); T. Schuermann (Til); J.A. Lopez (Jose); B. Hirtle (Beverly)
1998-01-01
Discusses the issues in value-at-risk modeling and evaluation: the value of value at risk; horizon problems and extreme events in financial risk management; methods of evaluating value-at-risk estimates.
Lepton number, black hole entropy and 10 to the 32 copies of the Standard Model
Kovalenko, Sergey; Schmidt, Ivan
2010-01-01
Lepton number violating processes are a typical problem in theories with a low quantum gravity scale. In this paper we examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. Naively one would expect black holes to introduce TeV scale LNV operators, thus generating unacceptably large rates of LNV processes. We show, however, that this does not happen in this scenario due to a complicated compensation mechanism between contributions of different Majorana neutrino states to these processes. As a result rates of LNV processes are extremely small and far beyond experimental reach, at least for the left-handed neutrino states.
Evaluating Vocational Programs: A Three Dimensional Model.
Rehman, Sharaf N.; Nejad, Mahmoud
The traditional methods of assessing the academic programs in the liberal arts are inappropriate for evaluating vocational and technical programs. In traditional academic disciplines, assessment of instruction is conducted in two fashions: student evaluation at the end of a course and institutional assessment of its goals and mission. Because of…
Modeling and Evaluating Emotions Impact on Cognition
2013-07-01
• International Conference on Automatic Face and Gesture Recognition, Shanghai, China, April 2013. • Wenji Mao and Jonathan Gratch, Modeling Social... Modeling, Lorentz Center, Leiden, August 2011. • Keynote speaker, IEEE International Conference on Automatic Face and Gesture Recognition, Santa...
Statistical models of shape optimisation and evaluation
Davies, Rhodri; Taylor, Chris
2014-01-01
Deformable shape models have wide application in computer vision and biomedical image analysis. This book addresses a key issue in shape modelling: establishment of a meaningful correspondence between a set of shapes. Full implementation details are provided.
Feoktistova, V S; Vavilkova, T V; Sirotkina, O V; Boldueva, S A; Gaikovaia, L B; Leonova, I A; Laskovets, A B; Ermakov, A I
2015-04-01
Endothelial dysfunction plays a leading role in the pathogenesis of cardiovascular diseases. Circulating endothelial cells in peripheral blood can serve as a direct cellular marker of endothelial damage and remodeling. The study was carried out to develop a new approach to diagnosing endothelial dysfunction by determining the number of circulating endothelial cells using flow cytometry, and to apply this determination to evaluating the risk of ischemic heart disease in women of young and middle age. The study included 62 female patients with angiographically confirmed ischemic heart disease and exertional angina pectoris of functional class I-II (mean age 51 ± 6 years) and 49 women without a history of ischemic heart disease (mean age 52 ± 9 years). The occurrence of more than three circulating endothelial cells per 3 × 10^5 leukocytes in peripheral blood increases the relative risk of ischemic heart disease up to 4 times in women of young and middle age, and the risk of acute myocardial infarction up to 8 times in women with ischemic heart disease. The study demonstrated the possibility of using flow cytometry to quantify circulating endothelial cells in peripheral blood and to forecast the risk of ischemic heart disease in women of young and middle age depending on the level of circulating endothelial cells.
SIMPLEBOX: a generic multimedia fate evaluation model
Meent D van de
1993-01-01
This document describes the technical details of the multimedia fate model SimpleBox, version 1.0 (930801). SimpleBox is a multimedia box model of what is commonly referred to as a "Mackay-type" model ; it assumes spatially homogeneous environmental compartments (air, water, suspended matter, aquati
SIMPLEBOX: a generic multimedia fate evaluation model
van de Meent D
1993-01-01
This document describes the technical details of the multimedia fate model SimpleBox, version 1.0 (930801). SimpleBox is a multimedia box model of what is commonly referred to as a "Mackay-type" model ; it assumes spatially homogeneous environmental compartments (air, water, suspended m
Directory of Open Access Journals (Sweden)
S. Zengah
2013-06-01
Full Text Available Fatigue damage increases with applied load cycles in a cumulative manner. Fatigue damage models play a key role in the life prediction of components and structures subjected to random loading. The aim of this paper is to examine the performance of the previously proposed and validated "Damaged Stress Model" against other fatigue models under random loading, before and after reconstruction of the load histories. To this end, several linear and nonlinear fatigue-life models were considered, and a batch of specimens made of 6082-T6 aluminum alloy was subjected to random loading. Damage was accumulated by Miner's rule, the Damaged Stress Model (DSM), the Henry model and the Unified Theory (UT), and random cycles were counted with a rain-flow algorithm. Experimental high-cycle fatigue data from complex loading histories with different mean and amplitude stress values are analyzed for life calculation, and model predictions are compared.
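Miner's linear damage accumulation over rain-flow-counted cycles can be sketched as follows; the Basquin S-N constants and the counted spectrum are illustrative assumptions, not the 6082-T6 values from the paper:

```python
def cycles_to_failure(stress_amp, C=1e12, m=3.0):
    """Basquin-type S-N curve: N = C * S^(-m).  Constants are illustrative."""
    return C * stress_amp ** (-m)

def miner_damage(counted_cycles):
    """counted_cycles: (stress_amplitude, n_cycles) pairs, e.g. from rain-flow counting.
    Miner's rule: damage D = sum of n_i / N_i; failure is predicted when D reaches 1."""
    return sum(n / cycles_to_failure(s) for s, n in counted_cycles)

# hypothetical rain-flow output: (amplitude in MPa, number of cycles)
spectrum = [(100.0, 5000), (150.0, 1000), (200.0, 200)]
D = miner_damage(spectrum)
```

The nonlinear models in the paper (DSM, Henry, UT) replace this simple linear sum with load-sequence-dependent accumulation, which is precisely why reconstructing the order of the load history matters in their comparison.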
Bifurcations and complex dynamics of an SIR model with the impact of the number of hospital beds
Shan, Chunhua; Zhu, Huaiping
2014-09-01
In this paper we establish an SIR model with a standard incidence rate and a nonlinear recovery rate, formulated to consider the impact of available resources of the public health system, especially the number of hospital beds. For the three-dimensional model with total population regulated by both demographics and disease incidence, we prove that the model can undergo backward bifurcation, saddle-node bifurcation, Hopf bifurcation and a cusp type of Bogdanov-Takens bifurcation of codimension 3. We present the bifurcation diagram near the cusp-type Bogdanov-Takens bifurcation point of codimension 3 and give an epidemiological interpretation of the complex dynamical behaviors of the endemic disease due to the variation of the number of hospital beds. This study suggests that maintaining a sufficient number of hospital beds is crucial for the control of infectious diseases.
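A minimal numerical sketch of a bed-dependent recovery rate: the form mu0 + (mu1 - mu0) * b / (I + b) below interpolates between a minimum rate mu0 (beds overwhelmed) and a maximum rate mu1 (beds plentiful); this functional form and all parameter values are assumptions for illustration, not necessarily the paper's exact formulation:

```python
def peak_infected(beds, days=400, dt=0.01):
    """Forward-Euler SIR with a bed-dependent recovery rate (closed population, N = 1)."""
    S, I, R = 0.99, 0.01, 0.0
    beta, mu0, mu1 = 0.5, 0.1, 0.3           # transmission; min/max recovery rates
    peak = I
    for _ in range(int(days / dt)):
        mu = mu0 + (mu1 - mu0) * beds / (I + beds)   # more beds -> faster recovery
        new_inf = beta * S * I
        S += -new_inf * dt
        I += (new_inf - mu * I) * dt
        R += mu * I * dt
        peak = max(peak, I)
    return peak

peak_few_beds = peak_infected(beds=0.001)    # beds scarce: recovery near mu0
peak_many_beds = peak_infected(beds=0.5)     # beds plentiful: recovery near mu1
```

Even this crude simulation reproduces the qualitative conclusion: scarce beds lower the effective recovery rate while infections are high, producing a substantially larger epidemic peak.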
Directory of Open Access Journals (Sweden)
António J.A. Santos
2014-09-01
Full Text Available A total of 120 Acacia melanoxylon R. Br. (Australian blackwood) stem discs, belonging to 20 trees from four sites in Portugal, were used in this study. The samples were kraft pulped under standard identical conditions targeted to a Kappa number of 15. A near-infrared (NIR) partial least squares regression (PLSR) model was developed for Kappa number prediction using 75 pulp samples with a narrow Kappa number variation range of 10 to 17. Very good correlations between NIR spectra of A. melanoxylon pulps and Kappa numbers were obtained. Besides the raw spectra, spectra pre-processed with ten methods were used for PLS analysis (cross-validation with 48 samples), and a test set validation was made with 27 samples. The first-derivative spectra in the wavenumber range from 6110 to 5440 cm-1 yielded the best model, with a root mean square error of prediction of 0.4 units of Kappa number, a coefficient of determination of 92.1%, two PLS components, a ratio of performance to deviation (RPD) of 3.6, and zero outliers. The obtained NIR-PLSR model for Kappa number determination is sufficiently accurate to be used in screening programs and in quality control.
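PLS regression of the kind used here can be sketched with the NIPALS PLS1 algorithm. The rank-2 synthetic "spectra" below are invented so that the example is exact and self-checking; they are not NIR data:

```python
import random

def pls1_fit(X, y, ncomp):
    """NIPALS PLS1 on mean-centred data.  X: list of rows, y: list of targets."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
    yc = [v - ym for v in y]
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]                          # weight vector
        t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
        tt = sum(v * v for v in t)
        pl = [sum(t[i] * Xc[i][j] for i in range(n)) / tt for j in range(p)]
        q = sum(t[i] * yc[i] for i in range(n)) / tt
        for i in range(n):                                 # deflate X and y
            for j in range(p):
                Xc[i][j] -= t[i] * pl[j]
            yc[i] -= q * t[i]
        W.append(w), P.append(pl), Q.append(q)
    return xm, ym, W, P, Q

def pls1_predict(model, x):
    xm, ym, W, P, Q = model
    xc = [x[j] - xm[j] for j in range(len(x))]
    yhat = ym
    for w, pl, q in zip(W, P, Q):
        t = sum(xc[j] * w[j] for j in range(len(xc)))
        xc = [xc[j] - t * pl[j] for j in range(len(xc))]
        yhat += q * t
    return yhat

# rank-2 synthetic 'spectra': y is an exact linear function of two latent scores
random.seed(3)
p1, p2 = [1.0, 0.0, 0.5, 0.2], [0.0, 1.0, 0.3, -0.4]
t1 = [random.random() for _ in range(20)]
t2 = [random.random() for _ in range(20)]
X = [[t1[i] * p1[j] + t2[i] * p2[j] for j in range(4)] for i in range(20)]
y = [2 * t1[i] - t2[i] + 5 for i in range(20)]

model = pls1_fit(X, y, ncomp=2)
yhat = [pls1_predict(model, row) for row in X]
```

Because the synthetic data have exactly two latent components, a two-component PLS fit recovers the target exactly; on real spectra the number of components is chosen by cross-validation, as the study does.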
Modelling Problem-Solving Situations into Number Theory Tasks: The Route towards Generalisation
Papadopoulos, Ioannis; Iatridou, Maria
2010-01-01
This paper examines the way two 10th graders cope with a non-standard generalisation problem that involves elementary concepts of number theory (more specifically linear Diophantine equations) in the geometrical context of a rectangle's area. Emphasis is given on how the students' past experience of problem solving (expressed through interplay…
STUDY ON NEW PASSIVE SCALAR FLUX MODEL WITH DIFFUSIVITY OF COMPLEX NUMBER
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
The turbulent passive scalar fluxes were studied by separately considering the contributions of small-eddy and large-eddy motions. An explicit algebraic approximation was achieved for both small-eddy and large-eddy scalar fluxes. In particular, the large-eddy scalar flux was modelled with a complex diffusivity. The singular difficulties of usual algebraic scalar models do not occur in this model. In addition, the new model provides a way to reasonably describe the negative transport phenomena appearing in asymmetric turbulent flows.
Evaluation of long-range transport models in NOVANA; Evaluering af langtransportmodeller i NOVANA
Energy Technology Data Exchange (ETDEWEB)
Frohn, L.M.; Brandt, J.; Christensen, J.H.; Geels, C.; Hertel, O.; Skjoeth, C.A.; Ellemann, T.
2007-06-15
The Lagrangian model ACDEP, which was applied in BOP/NOVA/NOVANA during the period 1995-2004, has been replaced by the more modern Eulerian model DEHM. The new model has a number of advantages, such as a better description of the three-dimensional atmospheric transport, a larger domain, the possibility of high spatial resolution in the calculations, and a more detailed description of photochemical processes and dry deposition. Before the replacement, the results of the two models were compared and evaluated using European and Danish measurements. Calculations were performed with both models applying the same meteorological and emission input, for Europe for the year 2000 as well as for Denmark for the period 2000-2003. The European measurements applied in the present evaluation were obtained through EMEP. Using these measurements, DEHM and ACDEP were compared with respect to daily and yearly mean concentrations of ammonia (NH₃), ammonium (NH₄⁺), the sum of NH₃ and NH₄⁺ (SNH), nitric acid (HNO₃), nitrate (NO₃⁻), the sum of HNO₃ and NO₃⁻ (SNO₃), nitrogen dioxide (NO₂), ozone (O₃), sulphur dioxide (SO₂) and sulphate (SO₄²⁻), as well as the hourly mean and daily maximum concentrations of O₃. Furthermore, the daily and yearly total values of precipitation and wet deposition of NH₄⁺, NO₃⁻ and SO₄²⁻ were compared for the two models. The statistical parameters applied in the comparison are correlation, bias and fractional bias. The result of the comparison with the EMEP data is that DEHM achieves better correlation coefficients for all chemical parameters (16 parameters in total) when daily values are analysed, and for 15 out of 16 parameters when yearly values are taken into account. With respect to the fractional bias, the results obtained with DEHM are better than the corresponding results
QUALITY OF AN ACADEMIC STUDY PROGRAMME - EVALUATION MODEL
Directory of Open Access Journals (Sweden)
Mirna Macur
2016-01-01
Full Text Available The quality of an academic study programme is evaluated by many: by employees (internal evaluation) and by external evaluators: experts, agencies and organisations. Internal and external evaluation of an academic programme follow a written structure that resembles one of the quality models. We believe that quality models (mostly derived from the EFQM excellence model) do not fit non-profit activities, policies and programmes very well, because these are much more complex than the environments from which the quality models derive (for example, the assembly line). The quality of an academic study programme is very complex and is understood differently by various stakeholders, so we present dimensional evaluation in this article. Dimensional evaluation, as opposed to component and holistic evaluation, is a form of analytical evaluation in which the quality or value of the evaluand is determined by looking at its performance on multiple dimensions of merit or evaluation criteria. First, the stakeholders of a study programme and their views, expectations and interests are presented, followed by the evaluation criteria. Both are then joined into an evaluation model revealing which evaluation criteria can and should be evaluated by which stakeholder. Main research questions are posed and the research method for each dimension is listed.
Vigren, E.; Altwegg, K.; Edberg, N. J. T.; Eriksson, A. I.; Galand, M.; Henri, P.; Johansson, F.; Odelstad, E.; Tzou, C.-Y.; Vallières, X.
2016-09-01
During 2015 January 9-11, at a heliocentric distance of ˜2.58-2.57 au, the ESA Rosetta spacecraft resided at a cometocentric distance of ˜28 km from the nucleus of comet 67P/Churyumov-Gerasimenko, sweeping the terminator at northern latitudes of 43°N-58°N. Measurements by the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis/Comet Pressure Sensor (ROSINA/COPS) provided neutral number densities. We have computed modeled electron number densities using the neutral number densities as input into a Field Free Chemistry Free model, assuming H2O dominance and ion-electron pair formation by photoionization only. A good agreement (typically within 25%) is found between the modeled electron number densities and those observed from measurements by the Mutual Impedance Probe (RPC/MIP) and the Langmuir Probe (RPC/LAP), both being subsystems of the Rosetta Plasma Consortium. This indicates that ions along the nucleus-spacecraft line were strongly coupled to the neutrals, moving radially outward with about the same speed. Such a statement, we propose, can be further tested by observations of H3O+/H2O+ number density ratios and associated comparisons with model results.
Peñagaricano, F; Urioste, J I; Naya, H; de los Campos, G; Gianola, D
2011-04-01
Black skin spots are associated with pigmented fibres in wool, an important quality fault. Our objective was to assess alternative models for genetic analysis of presence (BINBS) and number (NUMBS) of black spots in Corriedale sheep. During 2002-08, 5624 records from 2839 animals in two flocks, aged 1 through 6 years, were taken at shearing. Four models were considered: linear and probit for BINBS and linear and Poisson for NUMBS. All models included flock-year and age as fixed effects and animal and permanent environmental effects as random effects. Models were fitted to the whole data set and were also compared based on their predictive ability in cross-validation. Estimates of heritability ranged from 0.154 to 0.230 for BINBS and 0.269 to 0.474 for NUMBS. For BINBS, the probit model fitted slightly better to the data than the linear model. Predictions of random effects from these models were highly correlated, and both models exhibited similar predictive ability. For NUMBS, the Poisson model, with a residual term to account for overdispersion, performed better than the linear model in goodness of fit and predictive ability. Predictions of random effects from the Poisson model were more strongly correlated with those from BINBS models than those from the linear model. Overall, the use of probit or linear models for BINBS and of a Poisson model with a residual for NUMBS seems a reasonable choice for genetic selection purposes in Corriedale sheep.
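The finding that the Poisson model needed an extra residual term points to overdispersion in the spot counts. A quick moment-based diagnostic on simulated data (hypothetical per-animal rates, not the Corriedale records) illustrates the check:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gamma-mixed Poisson counts (negative binomial marginally) as a stand-in
# for spot counts with between-animal heterogeneity in the underlying rate.
lam = rng.gamma(shape=2.0, scale=1.5, size=5000)   # per-animal rates, mean 3
counts = rng.poisson(lam)

mean, var = counts.mean(), counts.var()
dispersion = var / mean   # ~1 for pure Poisson; >1 signals overdispersion
```

For a pure Poisson model the variance equals the mean; a dispersion ratio well above 1, as here, is exactly the situation where a Poisson model with an added residual (or a negative binomial) outperforms the plain Poisson.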
Institute of Scientific and Technical Information of China (English)
甘才俊; 吴子牛
2003-01-01
The slip-flow model is usually used to study microflows when the Knudsen number lies between 0.01 and 0.1. The instability due to microscale effects seems never to have been studied before. In this paper we present preliminary results for the instability (not physical instability) of this model when applied to microchannel flow with a vanishing Reynolds number. The present paper is restricted to the symmetrical mode. Both first-order and second-order slip boundary conditions are considered.
DEFF Research Database (Denmark)
Kumar, Prashant; Garmory, Andrew; Ketzel, Matthias
2009-01-01
Pollution Model (OSPM) and Computational Fluid Dynamics (CFD) code FLUENT. All models disregarded any particle dynamics. CFD simulations have been carried out in a simplified geometry of the selected street canyon. Four different sizes of emission sources have been used in the CFD simulations to assess...
Optimal Number of States in Hidden Markov Models and its ...
African Journals Online (AJOL)
(Al-Ani, et al., 2007) or Artificial Neural Networks (Zheng & Koenig, n.d.) can ... A Hidden Markov Model (R.Rabiner, 1989) is a stochastic finite state machine ..... likelihood of other models (i.e. for different states), the learning procedure is.
Robust Medical Test Evaluation Using Flexible Bayesian Semiparametric Regression Models
Directory of Open Access Journals (Sweden)
Adam J. Branscum
2013-01-01
Full Text Available The application of Bayesian methods is increasing in modern epidemiology. Although parametric Bayesian analysis has penetrated the population health sciences, flexible nonparametric Bayesian methods have received less attention. A goal in nonparametric Bayesian analysis is to estimate unknown functions (e.g., density or distribution functions) rather than scalar parameters (e.g., means or proportions). For instance, ROC curves are obtained from the distribution functions corresponding to continuous biomarker data taken from healthy and diseased populations. Standard parametric approaches to Bayesian analysis involve distributions with a small number of parameters, where the prior specification is relatively straightforward. In the nonparametric Bayesian case, the prior is placed on an infinite dimensional space of all distributions, which requires special methods. A popular approach to nonparametric Bayesian analysis that involves Polya tree prior distributions is described. We provide example code to illustrate how models that contain Polya tree priors can be fit using SAS software. The methods are used to evaluate the covariate-specific accuracy of the biomarker, soluble epidermal growth factor receptor, for discerning lung cancer cases from controls using a flexible ROC regression modeling framework. The application highlights the usefulness of flexible models over a standard parametric method for estimating ROC curves.
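As background to the ROC analysis described above: the empirical (nonparametric) AUC can be computed directly from two biomarker samples via the Mann-Whitney rank statistic. This is a generic sketch on simulated marker values, not the Polya tree method or the sEGFR data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical biomarker values (not the sEGFR data): cases shifted upward.
controls = rng.normal(0.0, 1.0, 300)
cases = rng.normal(1.2, 1.0, 200)

def empirical_auc(neg, pos):
    """AUC = P(case value > control value), via the Mann-Whitney rank sum."""
    x = np.concatenate([neg, pos])
    ranks = x.argsort().argsort() + 1.0      # ranks 1..n (no ties expected)
    r_pos = ranks[len(neg):].sum()
    n0, n1 = len(neg), len(pos)
    return (r_pos - n1 * (n1 + 1) / 2.0) / (n0 * n1)

auc = empirical_auc(controls, cases)
```

The AUC equals the area under the ROC curve traced out by the two empirical distribution functions, which is the quantity the flexible Bayesian models above estimate with smoothing and covariate adjustment.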
Standardizing the performance evaluation of short-term wind prediction models
DEFF Research Database (Denmark)
Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.
2005-01-01
Short-term wind power prediction is a primary requirement for efficient large-scale integration of wind generation in power systems and electricity markets. The choice of an appropriate prediction model among the numerous available models is not trivial, and has to be based on an objective...... evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results...
Energy Technology Data Exchange (ETDEWEB)
Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group
1997-12-01
Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.
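A Scheffé-type mixture polynomial of the kind described above (first-order terms in the component proportions plus a cross-product term, no intercept) can be fitted by ordinary least squares. The proportions and coefficients below are synthetic illustrations, not the waste-glass PCT data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 4-component mixtures (rows sum to 1); coefficients are
# hypothetical, standing in for the waste-glass sodium-release response.
n = 40
x = rng.dirichlet(np.ones(4), size=n)
beta = np.array([2.0, 0.5, 1.5, 3.0])   # first-order Scheffe coefficients
gamma = 4.0                             # one A*B second-order cross term
y = x @ beta + gamma * x[:, 0] * x[:, 1] + rng.normal(0.0, 0.02, n)

# Scheffe mixture polynomial: no intercept (proportions sum to 1),
# four first-order terms plus the single cross-product term.
design = np.column_stack([x, x[:, 0] * x[:, 1]])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
rmse = np.sqrt(np.mean((y - design @ coef) ** 2))
```

Because the proportions sum to one, the intercept is absorbed into the first-order terms; this is why Scheffé mixture models omit it, as in the four-plus-one-or-two-term models of the abstract.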
Models of economic geography: dynamics, estimation and policy evaluation
Knaap, Thijs
2004-01-01
In this thesis we look at economic geography models from a number of angles. We started by placing the theory in a context of preceding theories, both earlier work on spatial economics and other children of the monopolistic competition ‘revolution.’ Next, we looked at the theoretical properties of these models, especially when we allow firms to have different demand functions for intermediate goods. We estimated the model using a dataset on US states, and computed a number of counterfactuals....
Nearfield Unsteady Pressures at Cruise Mach Numbers for a Model Scale Counter-Rotation Open Rotor
Stephens, David B.
2012-01-01
An open rotor experiment was conducted at cruise Mach numbers and the unsteady pressure in the nearfield was measured. The system included extensive performance measurements, which can help provide insight into the noise generating mechanisms in the absence of flow measurements. A set of data acquired at a constant blade pitch angle but various rotor speeds was examined. The tone levels generated by the front and rear rotor were found to be nearly equal when the thrust was evenly balanced between rotors.
Directory of Open Access Journals (Sweden)
Z. Jurányi
2010-08-01
Full Text Available Atmospheric aerosol particles are able to act as cloud condensation nuclei (CCN) and are therefore important for the climate and the hydrological cycle, but their properties are not fully understood. Total CCN number concentrations at 10 different supersaturations in the range of SS=0.12–1.18% were measured in May 2008 at the remote high alpine research station, Jungfraujoch, Switzerland (3580 m a.s.l.). In this paper, we present a closure study between measured and predicted CCN number concentrations. CCN predictions were done using dry number size distribution (scanning particle mobility sizer, SMPS) and bulk chemical composition data (aerosol mass spectrometer, AMS, and multi-angle absorption photometer, MAAP) in a simplified Köhler theory. The predicted and the measured CCN number concentrations agree very well and are highly correlated. A sensitivity study showed that the temporal variability of the chemical composition at the Jungfraujoch can be neglected for a reliable CCN prediction, whereas it is important to know the mean chemical composition. The exact bias introduced by using a too low or too high hygroscopicity parameter for CCN prediction was further quantified and shown to be substantial for the lowest supersaturation.
Despite the high average organic mass fraction (~45%) in the fine mode, there was no indication that the surface tension was substantially reduced at the point of CCN activation. A comparison between hygroscopicity tandem differential mobility analyzer (HTDMA), AMS/MAAP, and CCN derived κ values showed that HTDMA measurements can be used to determine particle hygroscopicity required for CCN predictions if no suitable chemical composition data are available.
Jurányi, Z.; Gysel, M.; Weingartner, E.; Decarlo, P. F.; Kammermann, L.; Baltensperger, U.
2010-08-01
Atmospheric aerosol particles are able to act as cloud condensation nuclei (CCN) and are therefore important for the climate and the hydrological cycle, but their properties are not fully understood. Total CCN number concentrations at 10 different supersaturations in the range of SS=0.12-1.18% were measured in May 2008 at the remote high alpine research station, Jungfraujoch, Switzerland (3580 m a.s.l.). In this paper, we present a closure study between measured and predicted CCN number concentrations. CCN predictions were done using dry number size distribution (scanning particle mobility sizer, SMPS) and bulk chemical composition data (aerosol mass spectrometer, AMS, and multi-angle absorption photometer, MAAP) in a simplified Köhler theory. The predicted and the measured CCN number concentrations agree very well and are highly correlated. A sensitivity study showed that the temporal variability of the chemical composition at the Jungfraujoch can be neglected for a reliable CCN prediction, whereas it is important to know the mean chemical composition. The exact bias introduced by using a too low or too high hygroscopicity parameter for CCN prediction was further quantified and shown to be substantial for the lowest supersaturation. Despite the high average organic mass fraction (~45%) in the fine mode, there was no indication that the surface tension was substantially reduced at the point of CCN activation. A comparison between hygroscopicity tandem differential mobility analyzer (HTDMA), AMS/MAAP, and CCN derived κ values showed that HTDMA measurements can be used to determine particle hygroscopicity required for CCN predictions if no suitable chemical composition data are available.
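The simplified Köhler theory used in such closure studies can be sketched with κ-Köhler relations (in the style of Petters and Kreidenweis): a supersaturation and a hygroscopicity parameter κ fix a critical dry diameter, and integrating the number size distribution above it gives the predicted CCN concentration. The size distribution and κ value below are illustrative assumptions, not the Jungfraujoch measurements.

```python
from math import erf, log, sqrt

# Kelvin term A = 4*sigma*Mw/(R*T*rho_w), water properties near 273 K.
sigma, Mw, Rgas, T, rho_w = 0.072, 0.018, 8.314, 273.15, 1000.0
A = 4.0 * sigma * Mw / (Rgas * T * rho_w)    # ~2.3e-9 m

def critical_dry_diameter(ss_percent, kappa):
    """Smallest dry diameter activated at supersaturation ss (%) for
    hygroscopicity kappa (kappa-Koehler approximation)."""
    s = log(1.0 + ss_percent / 100.0)
    return (4.0 * A**3 / (27.0 * kappa * s**2)) ** (1.0 / 3.0)

# Hypothetical lognormal number size distribution (not the Jungfraujoch data):
N, Dg, sg = 500.0, 80e-9, 1.8   # cm^-3, median diameter, geometric std. dev.

def ccn_concentration(ss_percent, kappa):
    """Number of particles with dry diameter above the critical diameter."""
    Dc = critical_dry_diameter(ss_percent, kappa)
    z = log(Dc / Dg) / (sqrt(2.0) * log(sg))
    return N * 0.5 * (1.0 - erf(z))

n_lo = ccn_concentration(0.12, 0.3)   # lowest measured supersaturation
n_hi = ccn_concentration(1.18, 0.3)   # highest
```

The steep dependence of the critical diameter on supersaturation (D_c ∝ s^(-2/3)) is why the κ-related bias discussed above is largest at the lowest supersaturation, where the critical diameter sits near the mode of the distribution.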
Peltier Thermoelectric Modules Modeling and Evaluation
Chakib Alaoui
2011-01-01
The purpose of this work is to develop and experimentally test a model for the Peltier effect heat pump for the transient simulation in Spice software. The proposed model uses controlled sources and lumped components and its parameters can be directly calculated from the manufacturer’s data-sheets. In order to validate this model, a refrigeration chamber was designed and fabricated by using the Peltier modules. The overall system was experimentally tested and simulated with Spice. The simulat...
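The lumped steady-state relations behind such Peltier macromodels combine the Seebeck (Peltier) pumping term, half the Joule heat, and back-conduction from the hot side. The module parameters below are illustrative, not taken from any specific datasheet.

```python
# Lumped steady-state Peltier (TEC) model of the kind such Spice macromodels
# implement with controlled sources; parameter values are illustrative.
alpha = 0.053      # module Seebeck coefficient, V/K
R = 2.0            # electrical resistance, ohm
K = 0.6            # thermal conductance hot -> cold, W/K

def cooling_power(I, Tc, Th):
    """Heat pumped from the cold side: Peltier term minus half the
    Joule heat minus conduction back from the hot side."""
    return alpha * Tc * I - 0.5 * I**2 * R - K * (Th - Tc)

def module_voltage(I, Tc, Th):
    """Terminal voltage: Seebeck back-emf plus resistive drop."""
    return alpha * (Th - Tc) + I * R

I, Tc, Th = 3.0, 290.0, 310.0
Qc = cooling_power(I, Tc, Th)
cop = Qc / (module_voltage(I, Tc, Th) * I)   # coefficient of performance
```

In a Spice macromodel the same three terms are realized as controlled current and voltage sources plus lumped R and K elements, which is what allows transient co-simulation with the drive electronics.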
Directory of Open Access Journals (Sweden)
Ilse Storch
2002-06-01
Full Text Available This paper explores the effects of spatial resolution on the performance and applicability of habitat models in wildlife management and conservation. A Habitat Suitability Index (HSI) model for the Capercaillie (Tetrao urogallus) in the Bavarian Alps, Germany, is presented. The model was exclusively built on non-spatial, small-scale variables of forest structure and without any consideration of landscape patterns. The main goal was to assess whether a HSI model developed from small-scale habitat preferences can explain differences in population abundance at larger scales. To validate the model, habitat variables and indirect sign of Capercaillie use (such as feathers or feces) were mapped in six study areas based on a total of 2901 sample plots of 20 m radius (for habitat variables) and 5 m radius (for Capercaillie sign). First, the model's representation of Capercaillie habitat preferences was assessed. Habitat selection, as expressed by Ivlev's electivity index, was closely related to HSI scores, increased from poor to excellent habitat suitability, and was consistent across all study areas. Then, habitat use was related to HSI scores at different spatial scales. Capercaillie use was best predicted from HSI scores at the small scale. Lowering the spatial resolution of the model stepwise to 36-ha, 100-ha, 400-ha, and 2000-ha areas and relating Capercaillie use to aggregated HSI scores resulted in a deterioration of fit at larger scales. Most importantly, there were pronounced differences in Capercaillie abundance at the scale of study areas, which could not be explained by the HSI model. The results illustrate that even if a habitat model correctly reflects a species' smaller scale habitat preferences, its potential to predict population abundance at larger scales may remain limited.
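Ivlev's electivity index, used above to relate Capercaillie use to HSI scores, compares the proportion of use against the proportion of availability per suitability class. The class proportions below are hypothetical, for illustration only.

```python
# Ivlev's electivity index: E = (u - a) / (u + a), with u the proportion of
# use (sign plots) and a the proportion of availability in an HSI class.
# E ranges from -1 (complete avoidance) to +1 (complete selection).
def ivlev(u, a):
    return (u - a) / (u + a)

# Hypothetical proportions for poor/fair/good/excellent suitability classes
use = [0.05, 0.15, 0.30, 0.50]
available = [0.40, 0.30, 0.20, 0.10]
electivity = [ivlev(u, a) for u, a in zip(use, available)]
```

A monotone increase of E from poor to excellent classes, as in this toy example, is the pattern the paper reports as evidence that the HSI model captures small-scale preferences.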
Estimating the annual number of breeding attempts from breeding dates using mixture models.
Cornulier, Thomas; Elston, David A; Arcese, Peter; Benton, Tim G; Douglas, David J T; Lambin, Xavier; Reid, Jane; Robinson, Robert A; Sutherland, William J
2009-11-01
Well-established statistical methods exist to estimate variation in a number of key demographic rates from field data, including life-history transition probabilities and reproductive success per attempt. However, our understanding of the processes underlying population change remains incomplete without knowing the number of reproductive attempts individuals make annually; this is a key demographic rate for which no satisfactory estimation method exists. Estimating this parameter from census data requires disaggregating the overlying temporal distributions of first and subsequent breeding attempts. We describe a Bayesian mixture method to estimate the annual number of reproductive attempts from field data, providing a new tool for demographic inference. We validate our method using comprehensive data on individually marked song sparrows Melospiza melodia, and then apply it to more typical nest record data collected over 45 years on yellowhammers Emberiza citrinella. We illustrate the utility of our method by testing, and rejecting, the hypothesis that declines in UK yellowhammer populations have occurred concurrently with declines in annual breeding frequency.
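The disaggregation idea, separating the overlapping date distributions of first and later breeding attempts, can be illustrated with a plain two-component Gaussian mixture fitted by EM. This is a simplified frequentist stand-in for the paper's Bayesian method, run on simulated laying dates.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical laying dates (day of year): first attempts around day 120,
# later attempts around day 165; 40% of records are later attempts.
dates = np.concatenate([rng.normal(120, 8, 600), rng.normal(165, 8, 400)])

# EM for a two-component Gaussian mixture over breeding dates.
w = np.array([0.5, 0.5])
mu = np.array([110.0, 170.0])
sd = np.array([10.0, 10.0])
for _ in range(200):
    # E-step: responsibility of each component for each date
    dens = w * np.exp(-0.5 * ((dates[:, None] - mu) / sd) ** 2) / sd
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and spreads
    nk = resp.sum(axis=0)
    w = nk / len(dates)
    mu = (resp * dates[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (dates[:, None] - mu) ** 2).sum(axis=0) / nk)

# If every female makes a first attempt, the fitted weight of the later
# component gives a crude mean number of annual attempts per female.
mean_attempts = 1.0 + w[1]
```

The Bayesian version in the paper additionally propagates uncertainty in the component shapes and handles renesting after failure, which this sketch ignores.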
A Regional Climate Model Evaluation System Project
National Aeronautics and Space Administration — Develop a packaged data management infrastructure for the comparison of generated climate model output to existing observational datasets that includes capabilities...
Evaluation of Turbulence Models in Gas Dispersion
Moen, Alexander
2016-01-01
Several earlier model validation studies for predicting gas dispersion scenarios have been conducted for the three RANS two-equation eddy viscosity turbulence models: the standard k-ε (SKE), Re-Normalisation Group k-ε (RNG) and Realizable k-ε (Realizable). However, these studies have mainly validated one or two of the models, and have mostly used one simulation case as a basis for determining which model is best suited for predicting such scenarios. In addition, the studies have shown co...
On the paradoxical evolution of the number of photons in a new model of interpolating Hamiltonians
Valverde, C
2016-01-01
We introduce a new Hamiltonian model which interpolates between the Jaynes-Cummings model and other types of such Hamiltonians. It works with two interpolating parameters, rather than one as is traditional. Taking advantage of this greater degree of freedom, we can perform continuous interpolation between the various types of these Hamiltonians. As applications, we discuss a paradox raised in the literature and compare the time evolution of photon statistics obtained in the various interpolating models. The role played by the average excitation in these comparisons is also highlighted.
Prandtl number effects in MRT lattice Boltzmann models for shocked and unshocked compressible fluids
Institute of Scientific and Technical Information of China (English)
[No author listed]
2011-01-01
This paper constructs a new multiple-relaxation-time lattice Boltzmann model which works not only for shocked compressible fluids, but also for unshocked compressible fluids. To make the model work for unshocked compressible fluids, a key step is to modify the collision operators of the energy flux so that the viscous coefficient in the momentum equation is consistent with that in the energy equation even in the unshocked system. The unnecessity of the modification for systems under strong shock is analyzed. The model ...
Supersymmetric Froggatt-Nielsen Models with Baryon- and Lepton-Number Violation
Dreiner, H K; Thormeier, Marc
2004-01-01
We systematically investigate the embedding of U(1)_X Froggatt-Nielsen models in local supersymmetry. We restrict ourselves to models with a single flavon field. We do not impose a discrete symmetry by hand, e.g. R-parity, baryon-parity or lepton-parity. Thus we determine the order of magnitude of the baryon- and/or lepton-number violating coupling constants through the Froggatt-Nielsen mechanism. We then scrutinize whether the predicted coupling constants are in accord with weak- or GUT-scale constraints. Many models turn out to be incompatible.
The multi-scale aerosol-climate model PNNL-MMF: model description and evaluation
Directory of Open Access Journals (Sweden)
M. Wang
2010-10-01
Full Text Available Anthropogenic aerosol effects on climate produce one of the largest uncertainties in estimates of radiative forcing of past and future climate change. Much of this uncertainty arises from the multi-scale nature of the interactions between aerosols, clouds and large-scale dynamics, which are difficult to represent in conventional global climate models (GCMs). In this study, we develop a multi-scale aerosol climate model that treats aerosols and clouds across different scales, and evaluate the model performance, with a focus on aerosol treatment. This new model is an extension of a multi-scale modeling framework (MMF) model that embeds a cloud-resolving model (CRM) within each grid column of a GCM. In this extension, the effects of clouds on aerosols are treated by using an explicit-cloud parameterized-pollutant (ECPP) approach that links aerosol and chemical processes on the large-scale grid with statistics of cloud properties and processes resolved by the CRM. A two-moment cloud microphysics scheme replaces the simple bulk microphysics scheme in the CRM, and a modal aerosol treatment is included in the GCM. With these extensions, this multi-scale aerosol-climate model allows the explicit simulation of aerosol and chemical processes in both stratiform and convective clouds on a global scale.
Simulated aerosol budgets in this new model are in the ranges of other model studies. Simulated gas and aerosol concentrations are in reasonable agreement with observations, although the model underestimates black carbon concentrations at the surface. Simulated aerosol size distributions are in reasonable agreement with observations in the marine boundary layer and in the free troposphere, while the model underestimates the accumulation-mode number concentrations near the surface and overestimates the accumulation-mode number concentrations in the free troposphere. Simulated cloud condensation nuclei (CCN) concentrations are within the observational
Optimization and evaluation of probabilistic-logic sequence models
DEFF Research Database (Denmark)
Christiansen, Henning; Lassen, Ole Torp
Analysis of biological sequence data demands more and more sophisticated and fine-grained models, but these in turn introduce hard computational problems. A class of probabilistic-logic models is considered, which increases the expressibility from HMM's and SCFG's regular and context-free languages...... for preprocessing or splitting them into submodels. An evaluation method for approximating models is suggested based on automatic generation of samples. These models and evaluation processes are illustrated in the PRISM system developed by other authors....
Evaluation of Fast-Time Wake Vortex Prediction Models
Proctor, Fred H.; Hamilton, David W.
2009-01-01
Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
Evaluating Energy Efficiency Policies with Energy-Economy Models
Energy Technology Data Exchange (ETDEWEB)
Mundaca, Luis; Neij, Lena; Worrell, Ernst; McNeil, Michael A.
2010-08-01
The growing complexities of energy systems, environmental problems and technology markets are driving and testing most energy-economy models to their limits. To further advance bottom-up models from a multidisciplinary energy efficiency policy evaluation perspective, we review and critically analyse bottom-up energy-economy models and corresponding evaluation studies on energy efficiency policies to induce technological change. We use the household sector as a case study. Our analysis focuses on decision frameworks for technology choice, type of evaluation being carried out, treatment of market and behavioural failures, evaluated policy instruments, and key determinants used to mimic policy instruments. Although the review confirms criticism related to energy-economy models (e.g. unrealistic representation of decision-making by consumers when choosing technologies), they provide valuable guidance for policy evaluation related to energy efficiency. Different areas to further advance models remain open, particularly related to modelling issues, techno-economic and environmental aspects, behavioural determinants, and policy considerations.
Evaluation of EOR Processes Using Network Models
DEFF Research Database (Denmark)
Larsen, Jens Kjell; Krogsbøll, Anette
1998-01-01
The report consists of the following parts: 1) Studies of wetting properties of model fluids and fluid mixtures aimed at an optimal selection of candidates for micromodel experiments. 2) Experimental studies of multiphase transport properties using physical models of porous networks (micromodels)...
Evaluation of spinal cord injury animal models
Institute of Scientific and Technical Information of China (English)
Ning Zhang; Marong Fang; Haohao Chen; Fangming Gou; Mingxing Ding
2014-01-01
Because there is no curative treatment for spinal cord injury, establishing an ideal animal model is important to identify injury mechanisms and develop therapies for individuals suffering from spinal cord injuries. In this article, we systematically review and analyze various kinds of animal models of spinal cord injury and assess their advantages and disadvantages for further studies.
Evaluating Econometric Models and Expert Intuition
R. Legerstee (Rianne)
2012-01-01
This thesis is about forecasting situations which involve econometric models and expert intuition. The first three chapters are about what it is that experts do when they adjust statistical model forecasts and what might improve that adjustment behavior. It is investigated how expert for
Mambrini, Y.; Moultaka, G.
2001-01-01
We reconsider the Infrared Quasi Fixed Points which were studied recently in the literature in the context of the Baryon and Lepton number violating Minimal Supersymmetric Standard Model (hep-ph/0011274). The complete analysis requires further care and reveals more structure than what was previously shown. The formalism we develop here is quite general, and can be readily applied to a large class of models.
Bauer, Christopher
1993-11-01
Stirling engine heat exchangers are of the shell-and-tube type with oscillatory flow (zero mean velocity) for the inner fluid. This heat transfer process involves laminar, transitional and turbulent flow motions under oscillatory flow conditions. A low-Reynolds-number k-ε model (Lam-Bremhorst form) was utilized in the present study to simulate fluid flow and heat transfer in a circular tube. An empirical transition model was used to activate the low-Reynolds-number k-ε model at the appropriate time within the cycle for a given axial location within the tube. The computational results were compared with experimental flow and heat transfer data for: (1) velocity profiles, (2) kinetic energy of turbulence, (3) skin friction factor, (4) temperature profiles, and (5) wall heat flux. The experimental data were obtained for flow in a tube (38 mm diameter and 60 diameters long), with the maximum Reynolds number based on velocity being Re_max = 11840 and a dimensionless frequency (Valensi number) of Va = 80.2, at three axial locations X/D = 16, 30 and 44. The agreement between the computations and the experiment is excellent in the laminar portion of the cycle and good in the turbulent portion. Moreover, the location of transition was predicted accurately. The low-Reynolds-number k-ε model, together with an empirical transition model, is proposed herein to generate wall heat flux values at operating parameters different from the experimental conditions. These computational data can be used for testing the much simpler and less accurate one-dimensional models utilized in 1-D Stirling engine design codes.
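The Lam-Bremhorst variant damps the standard k-ε eddy viscosity near walls through functions of the turbulence Reynolds number Re_t = k²/(νε) and the wall-distance Reynolds number Re_y = √k·y/ν. The forms below are the commonly quoted ones; they should be checked against the original Lam-Bremhorst paper before use.

```python
import math

# Commonly quoted Lam-Bremhorst low-Reynolds-number damping functions.
# Re_t = k^2 / (nu * eps), Re_y = sqrt(k) * y / nu.
def f_mu(re_y, re_t):
    """Damping of the eddy viscosity nu_t = C_mu * f_mu * k^2 / eps."""
    return (1.0 - math.exp(-0.0165 * re_y)) ** 2 * (1.0 + 20.5 / re_t)

def f_1(re_y, re_t):
    """Multiplier on the production term of the eps equation."""
    return 1.0 + (0.05 / f_mu(re_y, re_t)) ** 3

def f_2(re_t):
    """Multiplier on the destruction term of the eps equation."""
    return 1.0 - math.exp(-re_t ** 2)

near_wall = f_mu(5.0, 10.0)      # strong damping of the turbulent viscosity
far_field = f_mu(500.0, 1000.0)  # approaches the high-Re limit of 1
```

During the laminar part of the oscillatory cycle these functions stay small, which is consistent with the empirical transition model above only activating the k-ε terms once transition is detected.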
Haghani, Shima; Sedehi, Morteza; Kheiri, Soleiman
2017-09-02
Traditional statistical models are often based on presuppositions and limitations that may not be present in actual data and that lead to instability in estimation or prediction. In such situations, artificial neural networks (ANNs) can be a suitable alternative to classical statistical methods. A prospective cohort study was conducted in the Shahrekord Blood Transfusion Center, Shahrekord, central Iran, on blood donors from 2008-2009. The accuracy of the proposed model in predicting the number of returns for blood donation was compared with that of classical statistical models. A total of 864 donors who had a first-time successful donation were followed for five years. The number of returns for blood donation was considered the response variable. Poisson regression (PR), negative binomial regression (NBR), zero-inflated Poisson regression (ZIPR) and zero-inflated negative binomial regression (ZINBR), as well as an ANN model, were fitted to the data. The MSE criterion was used to compare the models. To fit the models, STATISTICA 10 and R 3.2.2 were used. The MSE of the PR, NBR, ZIPR, ZINBR and ANN models was 2.71, 1.01, 1.54, 0.094 and 0.056 for the training data and 4.05, 9.89, 3.99, 2.53 and 0.27 for the test data, respectively. The ANN model had the least MSE in both the training and test data sets and performed better than the classic models. ANNs could be a suitable alternative for modeling such data because of fewer restrictions.
Moral development: a differential evaluation of dominant models.
Omery, A
1983-10-01
This article examines and evaluates the supporting evidence from the prevailing models of moral development. Using the criteria of empirical relevance, intersubjectivity, and usefulness, the classical model from psychoanalytic theory, Kohlberg's and Gilligan's models from cognitive developmental theory, and the social learning theory model are reviewed. Additional considerations such as the theoretical congruency and sex role bias of certain models are briefly discussed before concluding with the current use of the models by nursing.
Modeling and designing of variable-period and variable-pole-number undulator
Directory of Open Access Journals (Sweden)
I. Davidyuk
2016-02-01
Full Text Available The concept of permanent-magnet variable-period undulator (VPU was proposed several years ago and has found few implementations so far. The VPUs have some advantages as compared with conventional undulators, e.g., a wider range of radiation wavelength tuning and the option to increase the number of poles for shorter periods. Both these advantages will be realized in the VPU under development now at Budker INP. In this paper, we present the results of 2D and 3D magnetic field simulations and discuss some design features of this VPU.
van Dam, Herman T; Seifert, Stefan; Schaart, Dennis R
2012-08-07
In the design and application of scintillation detectors based on silicon photomultipliers (SiPMs), e.g. in positron emission tomography imaging, it is important to understand and quantify the non-proportionality of the SiPM response due to saturation, crosstalk and dark counts. A new type of SiPM, the so-called digital silicon photomultiplier (dSiPM), has recently been introduced. Here, we develop a model of the probability distribution of the number of fired microcells, i.e. the number of counted scintillation photons, in response to a given amount of energy deposited in a scintillator optically coupled to a dSiPM. Based on physical and functional principles, the model elucidates the statistical behavior of dSiPMs. The model takes into account the photon detection efficiency of the detector; the light yield, excess variance and time profile of the scintillator; and the crosstalk probability, dark count rate, integration time and the number of microcells of the dSiPM. Furthermore, relations for the expectation value and the variance of the number of fired cells are deduced. These relations are applied in the experimental validation of the model using a dSiPM coupled to a LSO:Ce,Ca scintillator. Finally, we propose an accurate method for the correction of energy spectra measured with dSiPM-based scintillation detectors.
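The saturation part of such a response model is often written with the textbook cell-occupancy relation: if on average λ = N·PDE detected photons land uniformly and independently on M microcells, each cell fires with probability 1 − exp(−λ/M). The sketch below implements only that relation, ignoring the crosstalk, dark counts, scintillator time profile, and excess variance that the paper's full model includes.

```python
import math

def mean_fired_cells(n_photons, pde, n_cells):
    """Expected number of fired microcells for n_photons incident photons.

    Assumes detected photons (mean lam = n_photons*pde) land uniformly and
    independently on n_cells microcells; each cell fires at most once.
    """
    lam = n_photons * pde
    p_cell = 1.0 - math.exp(-lam / n_cells)
    return n_cells * p_cell

def var_fired_cells(n_photons, pde, n_cells):
    """Binomial approximation to the variance of the number of fired cells."""
    lam = n_photons * pde
    p_cell = 1.0 - math.exp(-lam / n_cells)
    return n_cells * p_cell * (1.0 - p_cell)

# Small signals are nearly linear; large signals saturate near n_cells.
small = mean_fired_cells(10, 0.3, 6400)    # ~3 detected photons
sat = mean_fired_cells(10**6, 0.3, 6400)   # deep saturation
```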
Coastline modelling for nourishment strategy evaluation
Huisman, B.J.A.; Wang, Z.B.; De Ronde, J.G.; Stronkhorst, J.; Sprengers, C.J.
2013-01-01
Coastal zone managers in the Netherlands require new dedicated tools for the assessment of the long-term impacts of coastal maintenance policies. The policies need to be evaluated on the impacts on multiple coastal functions in order to be able to optimize the performance of such strategies. This pa
Designing and Evaluating Representations to Model Pedagogy
Masterman, Elizabeth; Craft, Brock
2013-01-01
This article presents the case for a theory-informed approach to designing and evaluating representations for implementation in digital tools to support Learning Design, using the framework of epistemic efficacy as an example. This framework, which is rooted in the literature of cognitive psychology, is operationalised through dimensions of fit…
MIRAGE: Model description and evaluation of aerosols and trace gases
Easter, Richard C.; Ghan, Steven J.; Zhang, Yang; Saylor, Rick D.; Chapman, Elaine G.; Laulainen, Nels S.; Abdul-Razzak, Hayder; Leung, L. Ruby; Bian, Xindi; Zaveri, Rahul A.
2004-10-01
The Model for Integrated Research on Atmospheric Global Exchanges (MIRAGE) modeling system, designed to study the impacts of anthropogenic aerosols on the global environment, is described. MIRAGE consists of a chemical transport model coupled online with a global climate model. The chemical transport model simulates trace gases, aerosol number, and aerosol chemical component mass (sulfate, methane sulfonic acid (MSA), organic matter, black carbon (BC), sea salt, and mineral dust) for four aerosol modes (Aitken, accumulation, coarse sea salt, and coarse mineral dust) using the modal aerosol dynamics approach. Cloud-phase and interstitial aerosol are predicted separately. The climate model, based on Community Climate Model, Version 2 (CCM2), has physically based treatments of aerosol direct and indirect forcing. Stratiform cloud water and droplet number are simulated using a bulk microphysics parameterization that includes aerosol activation. Aerosol and trace gas species simulated by MIRAGE are presented and evaluated using surface and aircraft measurements. Surface-level SO2 in North American and European source regions is higher than observed. SO2 above the boundary layer is in better agreement with observations, and surface-level SO2 at marine locations is somewhat lower than observed. Comparison with other models suggests insufficient SO2 dry deposition; increasing the deposition velocity improves simulated SO2. Surface-level sulfate in North American and European source regions is in good agreement with observations, although the seasonal cycle in Europe is stronger than observed. Surface-level sulfate at high-latitude and marine locations, and sulfate above the boundary layer, are higher than observed. This is attributed primarily to insufficient wet removal; increasing the wet removal improves simulated sulfate at remote locations and aloft. Because of the high sulfate bias, radiative forcing estimates for anthropogenic sulfur given in 2001 by S. J. Ghan and
Energy Technology Data Exchange (ETDEWEB)
Alvarez, Gabriel, E-mail: galvarez@fis.ucm.e [Departamento de Fisica Teorica II, Facultad de Ciencias Fisicas, Universidad Complutense, 28040 Madrid (Spain); Martinez Alonso, Luis, E-mail: luism@fis.ucm.e [Departamento de Fisica Teorica II, Facultad de Ciencias Fisicas, Universidad Complutense, 28040 Madrid (Spain); Medina, Elena, E-mail: elena.medina@uca.e [Departamento de Matematicas, Facultad de Ciencias, Universidad de Cadiz, 11510 Puerto Real, Cadiz (Spain)
2011-07-11
We present a method to compute the genus expansion of the free energy of Hermitian matrix models from the large N expansion of the recurrence coefficients of the associated family of orthogonal polynomials. The method is based on the Bleher-Its deformation of the model, on its associated integral representation of the free energy, and on a method for solving the string equation which uses the resolvent of the Lax operator of the underlying Toda hierarchy. As a byproduct we obtain an efficient algorithm to compute generating functions for the enumeration of labeled k-maps which does not require the explicit expressions of the coefficients of the topological expansion. Finally we discuss the regularization of singular one-cut models within this approach.
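The genus expansion referred to here has the standard 't Hooft form; as a notational sketch (not the paper's specific derivation),

\[
F(N) = \log Z_N \sim \sum_{g \ge 0} N^{2-2g} F_g ,
\]

where each coefficient $F_g$ collects the contributions of maps (ribbon graphs) of genus $g$; this is why the $F_g$ serve as generating functions for the enumeration of labeled maps.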
A DEA model with a non-discretionary variable for Olympic evaluation
Directory of Open Access Journals (Sweden)
João Carlos C. B. Soares de Mello
2012-04-01
Full Text Available In recent years, much work has been done on alternative performance rankings for the Olympic Games. Almost all of these works use Data Envelopment Analysis (DEA). Generally speaking, they can be divided into two categories: pure rankings with unitary-input models, and relative rankings with classical DEA models, both output oriented. In this paper we introduce an approach that takes the number of athletes as a proxy for a country's investment in sports. This number is one input for a DEA model; the other input is the population of the country. We have three outputs: the number of gold, silver and bronze medals earned by each country. Contrary to the usual approach in the literature, our model is not output oriented. It is a non-radial DEA model oriented to the "number of athletes" input, as our goal is not a ranking of countries. We intend to analyse whether the number of athletes competing for each country accords with the number of medals won. For this analysis, we compare each country with its benchmarks. The Decision Making Units (DMUs) are all the countries participating in the Beijing Olympic Games, including those that did not earn a single medal. We use a BCC model, and we compare each DMU's target with the number of athletes who have won at least one medal.
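For readers unfamiliar with the linear program behind such models, the sketch below solves a standard input-oriented BCC (variable returns to scale) model with scipy. It uses tiny hypothetical data and a single input, not the paper's non-radial, two-input formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: one input (athletes) and one output (medals) for 4 DMUs.
x = np.array([10.0, 20.0, 30.0, 40.0])   # input: number of athletes
y = np.array([5.0, 10.0, 12.0, 12.0])    # output: medals
k = 3                                     # evaluate the fourth DMU

n = len(x)
# Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
c = np.r_[1.0, np.zeros(n)]
A_ub = np.vstack([
    np.r_[-x[k], x],    # sum_j lambda_j * x_j <= theta * x_k
    np.r_[0.0, -y],     # sum_j lambda_j * y_j >= y_k
])
b_ub = np.array([0.0, -y[k]])
A_eq = np.r_[0.0, np.ones(n)][None, :]   # sum_j lambda_j = 1 (VRS / BCC)
b_eq = np.array([1.0])
bounds = [(None, None)] + [(0.0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
theta = res.x[0]   # input-oriented BCC efficiency of DMU k (here 0.75)
```

The benchmark for the evaluated DMU is read off the positive lambdas: here DMU 3 (input 30, output 12) dominates, so the target input is 0.75 × 40 = 30 athletes.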
A Validated All-Pressure Fluid Drop Model and Lewis Number Effects for a Binary Mixture
Harstad, K.; Bellan, J.
1999-01-01
The differences between subcritical liquid drop and supercritical fluid drop behavior are discussed. Under subcritical, evaporative, high-emission-rate conditions, a film layer is present in the inner part of the drop surface which contributes to the unique determination of the boundary conditions; it is this film layer which gives the solution its convective-diffusive character. In contrast, under supercritical conditions the boundary conditions contain a degree of arbitrariness due to the absence of a surface, and the solution then has a purely diffusive character. Results from simulations of a free fluid drop under no-gravity conditions are compared to microgravity experimental data from suspended, large-drop experiments at high, low and intermediate temperatures and in a range of pressures encompassing the sub- and supercritical regimes. Despite the difference between the conditions of the simulations and experiments (suspension vs. free floating), the time rate of variation of the square of the drop diameter is remarkably well predicted in the linear regime. The drop diameter, determined in the simulations from the location of the maximum density gradient, agrees well with the data. It is also shown that the classical calculation of the Lewis number gives qualitatively erroneous results at supercritical conditions, but that a previously defined effective Lewis number gives qualitatively correct estimates of the length scales for heat and mass transfer at all pressures.
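The classical Lewis number mentioned here is the ratio of thermal to mass diffusivity, Le = α/D = k/(ρ·c_p·D). A minimal sketch with illustrative gas-like property values, not the paper's binary-mixture data:

```python
def lewis_number(k_thermal, rho, cp, d_mass):
    """Classical Lewis number Le = alpha / D = k/(rho*cp*D)."""
    alpha = k_thermal / (rho * cp)   # thermal diffusivity, m^2/s
    return alpha / d_mass

# Illustrative gas-like property values (W/m/K, kg/m^3, J/kg/K, m^2/s).
le = lewis_number(0.026, 1.2, 1005.0, 2.0e-5)
```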
Temporal stability of magic-number metal clusters: beyond the shell closing model
Desireddy, Anil; Kumar, Santosh; Guo, Jingshu; Bolan, Michael D.; Griffith, Wendell P.; Bigioni, Terry P.
2013-02-01
The anomalous stability of magic-number metal clusters has been associated with closed geometric and electronic shells and the opening of HOMO-LUMO gaps. Despite this enhanced stability, magic-number clusters are known to decay and react in the condensed phase to form other products. Improving our understanding of their decay mechanisms and developing strategies to control or eliminate cluster instability is a priority, to develop a more complete theory of their stability, to avoid studying mixtures of clusters produced by the decay of purified materials, and to enable technology development. Silver clusters are sufficiently reactive to facilitate the study of the ambient temporal stability of magic-number metal clusters and to begin to understand their decay mechanisms. Here, the solution phase stability of a series of silver:glutathione (Ag:SG) clusters was studied as a function of size, pH and chemical environment. Cluster stability was found to be a non-monotonic function of size. Electrophoretic separations showed that the dominant mechanism involved the redistribution of mass toward smaller sizes, where the products were almost exclusively previously known cluster sizes. Optical absorption spectra showed that the smaller clusters evolved toward the two most stable cluster sizes. The net surface charge was found to play an important role in cluster stabilization although charge screening had no effect on stability, contrary to DLVO theory. The decay mechanism was found to involve the loss of Ag+ ions and silver glutathionates. Clusters could be stabilized by the addition of Ag+ ions and destabilized by either the addition of glutathione or the removal of Ag+ ions. Clusters were also found to be most stable in near neutral pH, where they had a net negative surface charge. These results provide new mechanistic insights into the control of post-synthesis stability and chemical decay of magic-number metal clusters, which could be used to develop design principles
Rhode Island Model Evaluation & Support System: Support Professional. Edition II
Rhode Island Department of Education, 2015
2015-01-01
Rhode Island educators believe that implementing a fair, accurate, and meaningful evaluation and support system for support professionals will help improve student outcomes. The primary purpose of the Rhode Island Model Support Professional Evaluation and Support System (Rhode Island Model) is to help all support professionals do their best work…
Rhode Island Model Evaluation & Support System: Building Administrator. Edition III
Rhode Island Department of Education, 2015
2015-01-01
Rhode Island educators believe that implementing a fair, accurate, and meaningful educator evaluation and support system will help improve teaching, learning, and school leadership. The primary purpose of the Rhode Island Model Building Administrator Evaluation and Support System (Rhode Island Model) is to help all building administrators improve.…
Modelling in Evaluating a Working Life Project in Higher Education
Sarja, Anneli; Janhonen, Sirpa; Havukainen, Pirjo; Vesterinen, Anne
2012-01-01
This article describes an evaluation method based on collaboration between the higher education, a care home and university, in a R&D project. The aim of the project was to elaborate modelling as a tool of developmental evaluation for innovation and competence in project cooperation. The approach was based on activity theory. Modelling enabled a…
The Development of Educational Evaluation Models in Indonesia.
Nasoetion, N.; And Others
The primary purpose of this project was to develop model evaluation procedures that could be applied to large educational undertakings in Indonesia. Three programs underway in Indonesia were selected for the development of evaluation models: the Textbook-Teacher Upgrading Project, the Development School Project, and the Examinations (Item Bank)…
Evaluating a Community-School Model of Social Work Practice
Diehl, Daniel; Frey, Andy
2008-01-01
While research has shown that social workers can have positive impacts on students' school adjustment, evaluations of overall practice models continue to be limited. This article evaluates a model of community-school social work practice by examining its effect on problem behaviors and concerns identified by teachers and parents at referral. As…
Pilot evaluation in TENCompetence: a theory-driven model
J. Schoonenboom; H. Sligte; A. Moghnieh; M. Specht; C. Glahn; K. Stefanov
2008-01-01
This paper describes a theory-driven evaluation model that is used in evaluating four pilots in which an infrastructure for lifelong competence development, which is currently being developed, is validated. The model makes visible the separate implementation steps that connect the envisaged infrastr
Evaluating a Training Using the "Four Levels Model"
Steensma, Herman; Groeneveld, Karin
2010-01-01
Purpose: The aims of this study are: to present a training evaluation based on the "four levels model"; to demonstrate the value of experimental designs in evaluation studies; and to take a first step in the development of an evidence-based training program. Design/methodology/approach: The Kirkpatrick four levels model was used to…
Increased numbers of orexin/hypocretin neurons in a genetic rat depression model
DEFF Research Database (Denmark)
Mikrouli, Elli; Wörtwein, Gitta; Soylu, Rana
2011-01-01
The Flinders Sensitive Line (FSL) rat is a genetic animal model of depression that displays characteristics similar to those of depressed patients, including lower body weight, decreased appetite and reduced REM sleep latency. Hypothalamic neuropeptides such as orexin/hypocretin, melanin-concentra…
Astrophysical aspects of fermion number violation in the supersymmetric standard model
Manka, R
1993-01-01
A model of a supersymmetric ball in the supersymmetric standard model with an additional global U(1) fermion symmetry is presented. We show that the supersymmetry breaking (R-parity) scale, the global U(1) fermion symmetry scale and the electroweak symmetry breaking scale are strictly connected to each other. A realistic ball with $M \sim 10^5 - 10^9 M_{\odot}$ and radius $R \sim 10^{12} - 10^{14}\,\mathrm{cm}$ is obtained. Inside the ball the full symmetries are restored. The ball is stabilized by superpartners and right-handed neutrinos, which are massless inside.
Hydrologic Evaluation of Landfill Performance (HELP) Model
The program models rainfall, runoff, infiltration, and other water pathways to estimate how much water builds up above each landfill liner. It can incorporate data on vegetation, soil types, geosynthetic materials, initial moisture conditions, slopes, etc.
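As an illustration of the kind of component such a water-balance model chains together, the sketch below implements the SCS curve-number rainfall-runoff relation, Q = (P − 0.2S)²/(P + 0.8S) with S = 1000/CN − 10 (inches). The curve number and rainfall depth are hypothetical, and this is a textbook relation, not HELP's actual code.

```python
def scs_runoff(p_in, cn):
    """SCS curve-number runoff depth (inches) for storm rainfall p_in."""
    s = 1000.0 / cn - 10.0   # potential maximum retention, inches
    ia = 0.2 * s             # initial abstraction
    if p_in <= ia:
        return 0.0           # all rainfall absorbed before runoff begins
    return (p_in - ia) ** 2 / (p_in + 0.8 * s)

q = scs_runoff(4.0, 75)      # runoff for a 4-inch storm on CN = 75 soil
```

The remainder P − Q then feeds the infiltration and percolation terms of the water balance above each liner.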
TMDL MODEL EVALUATION AND RESEARCH NEEDS
This review examines the modeling research needs to support environmental decision-making for the 303(d) requirements for development of total maximum daily loads (TMDLs) and related programs such as 319 Nonpoint Source Program activities, watershed management, stormwater permits...
A Context-Adaptive Model for Program Evaluation.
Lynch, Brian K.
1990-01-01
Presents an adaptable, context-sensitive model for ESL/EFL program evaluation, consisting of seven steps that guide an evaluator through consideration of relevant issues, information, and design elements. Examples from an evaluation of the Reading for Science and Technology Project at the University of Guadalajara, Mexico are given. (31…
Directory of Open Access Journals (Sweden)
Mathukumalli Srinivasa Rao
Full Text Available The present study estimates the number of generations of the tobacco caterpillar, Spodoptera litura Fab., on the peanut crop at six locations in India using MarkSim, which provides General Circulation Model (GCM) projections of future daily maximum (T.max) and minimum (T.min) air temperatures from six models, viz. BCCR-BCM2.0, CNRM-CM3, CSIRO-Mk3.5, ECHams5, INCM-CM3.0 and MIROC3.2, along with an ensemble of the six, for three emission scenarios (A2, A1B and B1). These data were used to predict future pest scenarios following the growing-degree-days approach in four climate periods, viz. baseline (1975), near future (NF, 2020), distant future (DF, 2050) and very distant future (VDF, 2080). More generations are predicted to occur during the three future climate periods, with significant variation among scenarios and models. Among the seven models, 1-2 additional generations were predicted during DF and VDF due to higher future temperatures in the CNRM-CM3, ECHams5 and CSIRO-Mk3.5 models. The temperature projections of these models indicated that the generation time would decrease by 18-22% relative to the baseline. Analysis of variance (ANOVA) was used to partition the variation in the predicted number of generations and generation time of S. litura on peanut during the crop season. Geographical location explained 34% of the total variation in the number of generations, followed by time period (26%), model (1.74%) and scenario (0.74%); the remaining 14% of the variation was explained by interactions. The increased number of generations and reduced generation time across the six peanut-growing locations of India suggest that the incidence of S. litura may increase with the projected rise in temperatures in future climate change periods.
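The growing-degree-days approach mentioned above accumulates daily heat units above a base temperature and divides by a thermal constant per generation. A minimal sketch; the base temperature and thermal constant K below are illustrative assumptions, not the study's calibrated values for S. litura.

```python
def degree_days(t_max, t_min, t_base):
    """Daily growing degree-days by the simple averaging method."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

# Illustrative assumptions, not the study's calibrated values.
T_BASE = 10.0   # developmental threshold, deg C
K = 500.0       # thermal constant per generation, degree-days

season = [(35.0, 25.0)] * 100   # 100 hypothetical crop-season days
total = sum(degree_days(tmax, tmin, T_BASE) for tmax, tmin in season)
generations = total / K
```

Warmer projections raise the daily increment, so the same crop season accumulates more degree-days and hence more generations, which is the mechanism behind the predicted 1-2 extra generations.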
Recommendations concerning energy information model documentation, public access, and evaluation
Energy Technology Data Exchange (ETDEWEB)
Wood, D.O.; Mason, M.J.
1979-10-01
A review is presented of the Energy Information Administration (EIA) response to Congressional and management concerns, relating specifically to energy information system documentation, public access to EIA systems, and scientific/peer evaluation. The relevant organizational and policy responses of EIA are discussed. An analysis of the model development process and approaches to, and organization of, model evaluation is presented. Included is a survey of model evaluation studies. A more detailed analysis of the origins of the legislated documentation and public access requirements is presented in Appendix A, and the results of an informal survey of other agency approaches to public access and evaluation is presented in Appendix B. Appendix C provides a survey of non-EIA activities relating to model documentation and evaluation. Twelve recommendations to improve EIA's procedures for energy information system documentation, evaluation activities, and public access are determined. These are discussed in detail. (MCW)
Directory of Open Access Journals (Sweden)
Okan Yaşar
2010-03-01
Full Text Available This study focuses on the status of geography teaching in Turkish higher education. The aim of the study is to identify the teaching methods and materials used by academics and to evaluate the factors affecting the choice of these methods and materials. The scope of the study included 23 departments, 16 of which were geography and 7 of which were geography teaching departments, from 21 universities in Turkey. Based on the survey model, questionnaires were administered to both students and academic staff. The sample consisted of 957 students and 120 academic staff. The data and various variables (university, faculty, title, area of expertise, teaching experience, and class) were analyzed by different statistical methods. The findings showed that academic staff most frequently used the lecturing method, followed by field work applications. Moreover, visual and technology-aided materials were used with high frequency. Significant differences were also seen in the comparison of the views of students and academic staff. In addition, the factors that influenced the academic staff's selection of methods and materials were the course objectives and subject characteristics.
Center for Integrated Nanotechnologies (CINT) Chemical Release Modeling Evaluation
Energy Technology Data Exchange (ETDEWEB)
Stirrup, Timothy Scott [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-12-20
This evaluation documents the methodology and results of chemical release modeling for operations at Building 518, Center for Integrated Nanotechnologies (CINT) Core Facility. This evaluation is intended to supplement an update to the CINT [Standalone] Hazards Analysis (SHA). This evaluation also updates the original [Design] Hazards Analysis (DHA) completed in 2003 during the design and construction of the facility; since the original DHA, additional toxic materials have been evaluated and modeled to confirm the continued low hazard classification of the CINT facility and operations. This evaluation addresses the potential catastrophic release of the current inventory of toxic chemicals at Building 518 based on a standard query in the Chemical Information System (CIS).
Statistical modeling for visualization evaluation through data fusion.
Chen, Xiaoyu; Jin, Ran
2017-11-01
There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, advances in interactive and sensing technologies have made electroencephalogram (EEG) signals, eye movements, and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and an application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, based on three different visualization designs. The results provide a regularized regression model that can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and to the evaluation of other user-centered designs and data analysis in human factors and ergonomics.
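As a sketch of what a regularized regression on fused sensing features might look like, the snippet below fits ridge regression in closed form, w = (XᵀX + αI)⁻¹Xᵀy, on simulated data. The features standing in for EEG/eye-movement/log measures, the coefficients, and the penalty α are all hypothetical; the paper's actual regularizer is not specified here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 60, 5   # 5 hypothetical fused features (e.g. EEG band power, fixations)
X = rng.standard_normal((n, p))
w_true = np.array([1.0, -0.5, 0.0, 2.0, 0.0])   # hypothetical ground truth
y = X @ w_true + 0.1 * rng.standard_normal(n)   # "task complexity" ratings

alpha = 1.0                                      # ridge penalty (assumed)
w = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)
```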
Standardizing the performance evaluation of short-term wind prediction models
DEFF Research Database (Denmark)
Madsen, Henrik; Pinson, Pierre; Kariniotakis, G.;
2005-01-01
evaluation of model performance. This paper proposes a standardized protocol for the evaluation of short-term wind-power prediction systems. A number of reference prediction models are also described, and their use for performance comparison is analysed. The use of the protocol is demonstrated using results...... from both on-shore and off-shore wind farms. The work was developed in the frame of the Anemos project (EU R&D project), where the protocol has been used to evaluate more than 10 prediction systems....
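Protocols of this kind typically report prediction errors normalized by the installed capacity of the wind farm. A sketch of two such metrics (NMAE and NRMSE), under the assumption of capacity normalization; the exact metric set of the Anemos protocol is not reproduced here.

```python
import numpy as np

def nmae(pred, obs, capacity):
    """Mean absolute error normalized by installed capacity."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    return np.mean(np.abs(pred - obs)) / capacity

def nrmse(pred, obs, capacity):
    """Root-mean-square error normalized by installed capacity."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    return np.sqrt(np.mean((pred - obs) ** 2)) / capacity

# Power expressed as a fraction of capacity (capacity = 1.0).
pred = np.array([0.5, 0.7, 0.2])
obs = np.array([0.4, 0.9, 0.2])
err_nmae = nmae(pred, obs, 1.0)
err_nrmse = nrmse(pred, obs, 1.0)
```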
Systematic evaluation of atmospheric chemistry-transport model CHIMERE
Khvorostyanov, Dmitry; Menut, Laurent; Mailler, Sylvain; Siour, Guillaume; Couvidat, Florian; Bessagnet, Bertrand; Turquety, Solene
2017-04-01
Regional-scale atmospheric chemistry-transport models (CTMs) are used to develop air quality regulatory measures, to support environmentally sensitive decisions in industry, and to address a variety of scientific questions involving atmospheric composition. Model performance evaluation against measurement data is critical to understand model limits and the degree of confidence in model results. The CHIMERE CTM (http://www.lmd.polytechnique.fr/chimere/) is a French national tool for operational forecasting and decision support and is widely used in the international research community in various areas of atmospheric chemistry and physics, climate, and environment (http://www.lmd.polytechnique.fr/chimere/CW-articles.php). This work presents the model evaluation framework applied systematically to new CHIMERE versions in the course of continuous model development. The framework uses three of the four CTM evaluation types identified by the Environmental Protection Agency (EPA) and the American Meteorological Society (AMS): operational, diagnostic, and dynamic. It makes it possible to compare overall model performance across subsequent model versions (operational evaluation), to identify specific processes and/or model inputs that could be improved (diagnostic evaluation), and to test model sensitivity to changes such as emission reductions and meteorological events (dynamic evaluation). The observation datasets currently used for the evaluation are EMEP (surface concentrations), AERONET (optical depths), and WOUDC (ozone sounding profiles). The framework is implemented as an automated processing chain and allows interactive exploration of the results via a web interface.
A Model To Address Design Constraints of Training Delivered via Satellite. Study Number Eight.
Montler, Joseph; Geroy, Gary D.
This document: summarizes how some companies are addressing the design constraints involved in using satellite technology to deliver training, presents a model aimed at examining cost effectiveness of the satellite option, and includes a guide to designing instructional materials for delivery by satellite. A survey of 39 organizations, 12…
Stable direction recovery in single-index models with a diverging number of predictors
Institute of Scientific and Technical Information of China (English)
无
2010-01-01
Large dimensional predictors are often introduced in regressions to attenuate possible modeling bias. We consider stable direction recovery in single-index models, in which we solely assume that the response Y is independent of the diverging-dimensional predictors X given β_0^T X, where β_0 is a p_n × 1 vector and p_n → ∞ as the sample size n → ∞. We first explore sufficient conditions under which the least squares estimator β̂_n recovers the direction β_0 consistently even when p_n = o(√n). To enhance model interpretability by excluding irrelevant predictors, we suggest an ℓ1-regularization algorithm with a quadratic constraint on the magnitude of the least squares residuals to search for a sparse estimate of β_0. Not only does the ℓ1-regularized solution β̂_n recover β_0 consistently, it also produces sufficiently sparse estimators, enabling us to select "important" predictors to facilitate model interpretation while maintaining prediction accuracy. Further analysis by simulations and an application to car price data suggest that the proposed estimation procedures have good finite-sample performance and are computationally efficient.
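As an illustration of ℓ1-based sparse direction recovery, the sketch below solves a plain lasso by iterative soft-thresholding (ISTA) on simulated linear data; this is a generic substitute, not the authors' quadratically constrained formulation or their single-index setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
beta0 = np.zeros(p)
beta0[0], beta0[1] = 3.0, -2.0                 # sparse true direction
X = rng.standard_normal((n, p))
y = X @ beta0 + 0.1 * rng.standard_normal(n)

lam = 20.0                                      # l1 penalty strength (assumed)
L = np.linalg.eigvalsh(X.T @ X).max()           # step-size (Lipschitz) constant
b = np.zeros(p)
for _ in range(2000):
    g = b + X.T @ (y - X @ b) / L               # gradient step
    b = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft-threshold

# Alignment of the estimate with the true direction.
cos = b @ beta0 / (np.linalg.norm(b) * np.linalg.norm(beta0))
```

The soft-threshold step zeroes out the irrelevant coordinates exactly, which is the "sparse estimators enable predictor selection" point made in the abstract.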
The Caribbean News Agency: Third World Model. Journalism Monographs Number 71.
Cuthbert, Marlene
This monograph is a history of the Caribbean News Agency (CANA), which is jointly owned by private and public mass media of its region and independent of both governments and foreign news agencies. It is proposed that CANA may provide a unique model of an independent, regional third-world news agency. Sections of the monograph examine (1) CANA's…
Modelling the number density of Halpha emitters for future spectroscopic near-IR space missions
Pozzetti, L; Geach, J E; Cimatti, A; Baugh, C; Cucciati, O; Merson, A; Norberg, P; Shi, D
2016-01-01
The future space missions Euclid and WFIRST-AFTA will use the Halpha emission line to measure the redshifts of tens of millions of galaxies. The Halpha luminosity function at z>0.7 is one of the major sources of uncertainty in forecasting cosmological constraints from these missions. We construct unified empirical models of the Halpha luminosity function spanning the range of redshifts and line luminosities relevant to the redshift surveys proposed with Euclid and WFIRST-AFTA. By fitting to observed luminosity functions from Halpha surveys, we build three models for its evolution. Different fitting methodologies, functional forms for the luminosity function, subsets of the empirical input data, and treatment of systematic errors are considered to explore the robustness of the results. Functional forms and model parameters are provided for all three models, along with the counts and redshift distributions up to z~2.5 for a range of limiting fluxes (F_Halpha>0.5 - 3 x 10^-16 erg cm^-2 s^-1) that are relevant fo...
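Counts above a flux or luminosity limit follow from integrating the assumed luminosity function. As a sketch using a Schechter form, one of the functional forms commonly fitted to the Halpha luminosity function; the parameter values below are illustrative, not the paper's fits.

```python
import numpy as np

def schechter(L, phi_star, L_star, alpha):
    """Schechter luminosity function, per unit luminosity."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * np.exp(-x)

def n_above(L_min, phi_star=10**-2.8, L_star=10**42.5, alpha=-1.35):
    """Number density of emitters brighter than L_min (trapezoidal rule)."""
    L = np.logspace(np.log10(L_min), np.log10(L_star) + 3.0, 4000)
    phi = schechter(L, phi_star, L_star, alpha)
    return np.sum(0.5 * (phi[1:] + phi[:-1]) * np.diff(L))

n_faint = n_above(1e41)    # deeper limit: more emitters per unit volume
n_bright = n_above(1e42)   # shallower limit: fewer emitters
```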
Equipartitions and a Distribution for Numbers: A Statistical Model for Benford's Law
Iafrate, Joseph R; Strauch, Frederick W
2015-01-01
A statistical model for the fragmentation of a conserved quantity is analyzed, using the principle of maximum entropy and the theory of partitions. Upper and lower bounds for the restricted partitioning problem are derived and applied to the distribution of fragments. The resulting power law directly leads to Benford's law for the first digits of the parts.
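Benford's law for the leading digit d is P(d) = log10(1 + 1/d), which is the distribution the fragmentation argument above recovers. A quick check that these probabilities form a valid, monotonically decreasing distribution:

```python
import math

# Benford's law: probability that the leading significant digit equals d.
benford = [math.log10(1.0 + 1.0 / d) for d in range(1, 10)]
```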
Directory of Open Access Journals (Sweden)
Morteza Khodabin
2013-06-01
Full Text Available In this paper, the confidence interval for the solution of a stochastic exponential population growth model, in which the population growth rate parameter is not completely determined and depends on random environmental effects, is obtained. We use Iran's population data for the period 1921-2006 as an example.
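For the stochastic exponential (geometric Brownian motion) growth model dN = rN dt + σN dW, the solution N(t) = N₀ exp((r − σ²/2)t + σW_t) is lognormal, so a confidence interval follows directly from normal quantiles of log N(t). A sketch with illustrative parameter values, not the fitted values for the Iranian population data:

```python
import math

def growth_ci(n0, r, sigma, t, z=1.96):
    """Two-sided CI for N(t) = n0*exp((r - sigma**2/2)*t + sigma*W_t)."""
    m = math.log(n0) + (r - 0.5 * sigma**2) * t   # mean of log N(t)
    s = sigma * math.sqrt(t)                      # std dev of log N(t)
    return math.exp(m - z * s), math.exp(m + z * s)

# Illustrative parameters: initial size 100, growth rate 2%/yr, sigma 5%.
lo, hi = growth_ci(100.0, 0.02, 0.05, 10.0)
median = 100.0 * math.exp((0.02 - 0.5 * 0.05**2) * 10.0)
```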