A Range-Based Multivariate Model for Exchange Rate Volatility
Tims, Ben; Mahieu, Ronald
2003-01-01
In this paper we present a parsimonious multivariate model for exchange rate volatilities based on logarithmic high-low ranges of daily exchange rates. The multivariate stochastic volatility model divides the log range of each exchange rate into two independent latent factors, which are interpreted as the underlying currency-specific components. Due to the normality of logarithmic volatilities the model can be estimated conveniently with standard Kalman filter techniques. Our resu...
Prediction of pipeline corrosion rate based on grey Markov models
International Nuclear Information System (INIS)
Chen Yonghong; Zhang Dafa; Peng Guichu; Wang Yuemin
2009-01-01
Based on a model combining the grey model with the Markov model, the prediction of the corrosion rate of nuclear power pipelines was studied. The grey model was improved, yielding an optimized unbiased grey model. This new model was used to predict the trend of the corrosion rate, and the Markov model was used to predict the residual errors. To improve the prediction precision, a rolling operation method was used in these prediction processes. The results indicate that the improvement to the grey model is effective, that the prediction precision of the new model combining the optimized unbiased grey model with the Markov model is better, and that the rolling operation method may improve the prediction precision further. (authors)
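The GM(1,1) grey model at the core of such hybrid grey-Markov schemes can be sketched as below. This is a generic textbook GM(1,1), not the paper's optimized unbiased variant, and the series passed in the usage note is illustrative, not the pipeline corrosion data.

```python
import math

def gm11_forecast(x, steps=1):
    """Fit a GM(1,1) grey model to series x and forecast `steps` values ahead."""
    n = len(x)
    x1 = [sum(x[:i + 1]) for i in range(n)]                # 1-AGO accumulation
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]  # background values
    # Least-squares fit of x0(k) + a*z(k) = b via the 2x2 normal equations
    m = n - 1
    szz = sum(zi * zi for zi in z)
    sz = sum(z)
    sy = sum(x[1:])
    szy = sum(zi * yi for zi, yi in zip(z, x[1:]))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # accumulated-series prediction at 0-based index k
        return (x[0] - b / a) * math.exp(-a * k) + b / a

    # Restore forecasts of the original series by differencing (inverse AGO)
    return [x1_hat(n + j) - x1_hat(n + j - 1) for j in range(steps)]
```

On a short geometric series such as `[2.0, 2.2, 2.42, 2.662]` the one-step forecast comes out close to the 10%-growth continuation, which is the behavior the rolling grey-Markov scheme relies on.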
A Model of Exchange-Rate-Based Stabilization for Turkey
Ozlem Aytac
2008-01-01
The literature on exchange-rate-based stabilization has focused almost exclusively on Latin America. Many other countries, however, such as Egypt, Lebanon and Turkey, have undertaken this sort of program in the last 10-15 years. I depart from the existing literature by developing a model specifically for the 2000-2001 heterodox exchange-rate-based stabilization program in Turkey: When the government lowers the rate of crawl, the rate of domestic credit creation is set equal to the lower r...
Improved air ventilation rate estimation based on a statistical model
International Nuclear Information System (INIS)
Brabec, M.; Jilek, K.
2004-01-01
A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. Underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle a time-varying ventilation rate. This is a major advantage compared to some of the methods currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when a time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements.
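A scalar Kalman filter of the kind the abstract alludes to can be sketched as follows. Under well-mixed tracer decay, ln C falls at the air exchange rate, so successive log-concentration differences observe the rate directly; treating the rate as a slow random walk gives a one-state filter. The noise variances, initial state, and simulated data here are illustrative assumptions, not values from the paper.

```python
import math

def kalman_ventilation(concentrations, dt=1.0, q=1e-4, r=0.01):
    """Track a slowly varying air exchange rate (1/h) from tracer gas decay.
    Observation model: y_t = (ln C_{t-1} - ln C_t)/dt ~ lambda_t + noise."""
    lam, p = 0.5, 1.0               # assumed initial estimate and variance
    estimates = []
    for c_prev, c in zip(concentrations, concentrations[1:]):
        y = (math.log(c_prev) - math.log(c)) / dt
        p += q                      # predict: random-walk drift of lambda
        k = p / (p + r)             # Kalman gain
        lam += k * (y - lam)        # update with the decay-slope observation
        p *= (1.0 - k)
        estimates.append(lam)
    return estimates
```

Fed a clean exponential decay at 0.6 h⁻¹, the filter converges from its 0.5 prior to the true rate within a few samples.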
Rate-Based Model Predictive Control of Turbofan Engine Clearance
DeCastro, Jonathan A.
2006-01-01
An innovative model predictive control strategy is developed for control of nonlinear aircraft propulsion systems and sub-systems. At the heart of the controller is a rate-based linear parameter-varying model that propagates the state derivatives across the prediction horizon, extending prediction fidelity to transient regimes where conventional models begin to lose validity. The new control law is applied to a demanding active clearance control application, where the objectives are to tightly regulate blade tip clearances and also anticipate and avoid detrimental blade-shroud rub occurrences by optimally maintaining a predefined minimum clearance. Simulation results verify that the rate-based controller is capable of satisfying the objectives during realistic flight scenarios where both a conventional Jacobian-based model predictive control law and an unconstrained linear-quadratic optimal controller are incapable of doing so. The controller is evaluated using a variety of different actuators, illustrating the efficacy and versatility of the control approach. It is concluded that the new strategy has promise for this and other nonlinear aerospace applications that place high importance on the attainment of control objectives during transient regimes.
A Logistic Regression Based Auto Insurance Rate-Making Model Designed for the Insurance Rate Reform
Directory of Open Access Journals (Sweden)
Zhengmin Duan
2018-02-01
Using a generalized linear model to determine the claim frequency of auto insurance is a key ingredient in non-life insurance research. Among auto insurance rate-making models, very few consider auto types. Therefore, in this paper we propose a model that takes auto types into account by making an innovative use of the auto burden index. Based on this model and data from a Chinese insurance company, we built a clustering model that classifies auto insurance rates into three risk levels. The claim frequency and the claim costs are fitted to select a better loss distribution. Then the logistic regression model is employed to fit the claim frequency, with the auto burden index considered. Three key findings can be concluded from our study. First, more than 80% of the autos with an auto burden index of 20 or higher belong to the highest risk level. Secondly, the claim frequency is better fitted using the Poisson distribution, whereas the claim cost is better fitted using the Gamma distribution. Lastly, based on the AIC criterion, the claim frequency is more adequately represented by models that consider the auto burden index than by those that do not. It is believed that insurance policy recommendations based on generalized linear models (GLMs) can benefit from our findings.
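The Poisson claim-frequency fit that the abstract selects is a standard log-link GLM, which can be sketched with a small iteratively reweighted least squares (IRLS) routine. The design matrix and coefficients below are synthetic stand-ins, not the insurer's data or the paper's fitted model.

```python
import numpy as np

def poisson_irls(X, y, iters=25):
    """Fit a Poisson GLM with log link by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)             # mean under the log link
        w = mu                            # Poisson working weights
        z = X @ beta + (y - mu) / mu      # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta
```

When the responses equal their model means exactly, the score equations are satisfied at the true coefficients, so IRLS recovers them, which makes a convenient self-check.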
Directory of Open Access Journals (Sweden)
Hae Kyung Im
2012-02-01
The International HapMap project has made publicly available extensive genotypic data on a number of lymphoblastoid cell lines (LCLs). Building on this resource, many research groups have generated a large amount of phenotypic data on these cell lines to facilitate genetic studies of disease risk or drug response. However, one problem that may reduce the usefulness of these resources is the biological noise inherent to cellular phenotypes. We developed a novel method, termed Mixed Effects Model Averaging (MEM), which pools data from multiple sources and generates an intrinsic cellular growth rate phenotype. This intrinsic growth rate was estimated for each of over 500 HapMap cell lines. We then examined the association of this intrinsic growth rate with gene expression levels and found that almost 30% (2,967 out of 10,748) of the genes tested were significant with FDR less than 10%. We probed further to demonstrate evidence of a genetic effect on intrinsic growth rate by determining a significant enrichment in growth-associated genes among genes targeted by top growth-associated SNPs (as eQTLs). The estimated intrinsic growth rate as well as the strength of the association with genetic variants and gene expression traits are made publicly available through a cell-based pharmacogenomics database, PACdb. This resource should enable researchers to explore the mediating effects of proliferation rate on other phenotypes.
U.S. Environmental Protection Agency — This dataset provides the city-specific air exchange rate measurements, modeled, literature-based as well as housing characteristics. This dataset is associated with...
[Prediction of schistosomiasis infection rates of population based on ARIMA-NARNN model].
Ke-Wei, Wang; Yu, Wu; Jin-Ping, Li; Yu-Yu, Jiang
2016-07-12
To explore the effect of the autoregressive integrated moving average model-nonlinear auto-regressive neural network (ARIMA-NARNN) model on predicting schistosomiasis infection rates of the population. The ARIMA model, NARNN model and ARIMA-NARNN model were established based on monthly schistosomiasis infection rates from January 2005 to February 2015 in Jiangsu Province, China. The fitting and prediction performances of the three models were compared. The ARIMA-NARNN model yielded the smallest mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE), with values of 0.0111, 0.0900 and 0.2824, respectively, compared with the ARIMA and NARNN models. The ARIMA-NARNN model could effectively fit and predict schistosomiasis infection rates of the population, which might have a great application value for the prevention and control of schistosomiasis.
DEFF Research Database (Denmark)
Guiastrennec, B; Sonne, David Peick; Hansen, M
2016-01-01
Bile acids released postprandially modify the rate and extent of absorption of lipophilic compounds. The present study aimed to predict gastric emptying (GE) rate and gallbladder emptying (GBE) patterns in response to caloric intake. A mechanism-based model for GE, cholecystokinin plasma concentrations, and GBE was developed on data from 33 patients with type 2 diabetes and 33 matched nondiabetic individuals who were administered various test drinks. A feedback action of the caloric content entering the proximal small intestine was identified for the rate of GE. The cholecystokinin...
Directory of Open Access Journals (Sweden)
Dilek Teker
2013-01-01
The aim of this research is to compose a new rating methodology and provide credit notches to 23 countries, of which 13 are developed and 10 are emerging. A varied literature explains the determinants of credit ratings. Following the literature, we select 11 variables for our model, of which 5 are eliminated by factor analysis. We use specific dummies to investigate structural breaks in time and cross section, such as pre-crisis, post-crisis, BRIC membership, EU membership, OPEC membership, shipbuilder country and platinum-reserve country. Then we run an ordered probit model and assign credit notches to the countries. We use FITCH ratings as a benchmark. Thus, at the end we compare the notches of FITCH with the ones we derive from our estimated model.
Mathematical modeling of high-rate Anammox UASB reactor based on granular packing patterns
International Nuclear Information System (INIS)
Tang, Chong-Jian; He, Rui; Zheng, Ping; Chai, Li-Yuan; Min, Xiao-Bo
2013-01-01
Highlights: ► A novel model was developed to estimate volumetric nitrogen conversion rates. ► The packing patterns of the granules in the Anammox reactor are investigated. ► The simple cubic packing pattern was simulated in a high-rate Anammox UASB reactor. ► Operational strategies concerning sludge concentration were proposed by the modeling. -- Abstract: A novel mathematical model was developed to estimate the volumetric nitrogen conversion rates of a high-rate Anammox UASB reactor based on the packing patterns of granular sludge. A series of relationships among granular packing density, sludge concentration, hydraulic retention time and volumetric conversion rate were constructed to correlate Anammox reactor performance with granular packing patterns. It was suggested that the Anammox granules packed in the equivalent simple cubic pattern in the high-rate UASB reactor, with a packing density of 50–55%, which not only accommodated a high concentration of sludge inside the reactor but also provided a large pore volume, thus prolonging the actual substrate conversion time. Results also indicated that it was necessary to improve Anammox reactor performance by enhancing substrate loading when the sludge concentration was higher than 37.8 gVSS/L. The established model was carefully calibrated and verified, and it simulated the performance of the granule-based high-rate Anammox UASB reactor well.
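The 50–55% packing density quoted for the simple cubic pattern matches the theoretical packing fraction of equal spheres on a simple cubic lattice, π/6 ≈ 52.4%, which a quick check confirms. The comparison with denser lattices is added here for context only and is not part of the paper.

```python
import math

# One sphere of radius r per cubic cell of side 2r:
# volume fraction = (4/3)*pi*r**3 / (2*r)**3 = pi/6
simple_cubic = math.pi / 6                   # sphere fraction, simple cubic
body_centered = math.sqrt(3) * math.pi / 8   # BCC lattice, for contrast
face_centered = math.pi / math.sqrt(18)      # FCC, densest regular packing

print(f"simple cubic packing fraction: {simple_cubic:.1%}")
```

So the empirically inferred 50–55% granule packing density brackets the ideal simple cubic value, supporting the packing-pattern interpretation in the abstract.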
State-space dynamic model for estimation of radon entry rate, based on Kalman filtering
International Nuclear Information System (INIS)
Brabec, Marek; Jilek, Karel
2007-01-01
To predict the radon concentration in a house environment and to understand the role of all factors affecting its behavior, it is necessary to recognize the time variation in both the air exchange rate and the radon entry rate into a house. This paper describes a new approach to the separation of their effects, which effectively allows continuous estimation of both the radon entry rate and the air exchange rate from simultaneous tracer gas (carbon monoxide) and radon gas measurement data. It is based on a state-space statistical model which permits quick and efficient calculations. Underlying computations are based on (extended) Kalman filtering, whose practical software implementation is easy. A key property is the model's flexibility, so that it can be easily adjusted to handle various artificial regimens of both radon gas and CO gas level manipulation. After introducing the statistical model formally, its performance is demonstrated on real data from measurements conducted in our experimental, naturally ventilated and unoccupied room. To verify our method, the radon entry rate calculated via the proposed statistical model was compared with its known reference value. The results from several days of measurement indicated fairly good agreement (on average, the continuously calculated radon entry rate was within 5% of the reference value). The measured radon concentration was around 600 Bq·m⁻³, while the air exchange rate ranged from 0.3 to 0.8 h⁻¹.
An Empirical Rate Constant Based Model to Study Capacity Fading in Lithium Ion Batteries
Directory of Open Access Journals (Sweden)
Srivatsan Ramesh
2015-01-01
A one-dimensional model based on solvent diffusion and kinetics is developed to study the formation of the SEI (solid electrolyte interphase) layer and its impact on the capacity of a lithium ion battery. The model builds on earlier work on silicon oxidation but studies the kinetic limitations of the SEI growth process. The rate constant of the SEI formation reaction at the anode is seen to play a major role in film formation. The kinetics of the reactions for capacity fading for various battery systems are studied and the rate constants are evaluated. The model is used to fit the capacity fade in different battery systems.
A Numerical Study of Water Loss Rate Distributions in MDCT-based Human Airway Models
Wu, Dan; Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long
2015-01-01
Both three-dimensional (3D) and one-dimensional (1D) computational fluid dynamics (CFD) methods are applied to study regional water loss in three multi-detector row computed-tomography (MDCT)-based human airway models at the minute ventilations of 6, 15 and 30 L/min. The overall water losses predicted by both 3D and 1D models in the entire respiratory tract agree with available experimental measurements. However, 3D and 1D models reveal different regional water loss rate distributions due to the 3D secondary flows formed at bifurcations. The secondary flows cause local skewed temperature and humidity distributions on inspiration acting to elevate the local water loss rate; and the secondary flow at the carina tends to distribute more cold air to the lower lobes. As a result, the 3D model predicts that the water loss rate first increases with increasing airway generation, and then decreases as the air approaches saturation, while the 1D model predicts a monotonic decrease of water loss rate with increasing airway generation. Moreover, the 3D (or 1D) model predicts relatively higher water loss rates in lower (or upper) lobes. The regional water loss rate can be related to the non-dimensional wall shear stress (τ*) by the non-dimensional mass transfer coefficient (h0*) as h0* = 1.15·τ*^0.272 (R = 0.842). PMID:25869455
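The reported power-law fit, h0* = 1.15·τ*^0.272 (R = 0.842), is easy to evaluate directly; the shear-stress values in the usage note are illustrative inputs, not data from the study.

```python
def mass_transfer_coeff(tau_star):
    """Non-dimensional mass transfer coefficient from non-dimensional wall
    shear stress, per the power-law fit quoted in the abstract."""
    return 1.15 * tau_star ** 0.272

# The weak 0.272 exponent means doubling the wall shear stress raises the
# predicted local water loss rate by only about 21%.
ratio = mass_transfer_coeff(2.0) / mass_transfer_coeff(1.0)
```

This weak sensitivity is consistent with the abstract's picture: secondary flows shift where water loss happens far more than they change its overall magnitude.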
Modelling of tomato stem diameter growth rate based on physiological responses
International Nuclear Information System (INIS)
Li, L.; Tan, J.; Lv, T.
2017-01-01
The stem diameter is an important parameter describing the growth of the tomato plant during the vegetative growth stage. A stem diameter growth model was developed to predict the response of plant growth under different conditions. By analyzing the diurnal variations of stem diameter in tomato (Solanum lycopersicum L.), it was found that the stem diameter measured at 3:00 am was the representative daily value of tomato stem diameter. Based on the responses of the stem diameter growth rate to light and temperature, a linear regression relationship was applied to establish a stem diameter growth rate prediction model for the vegetative growth stage in tomato, which was further validated by experiment. The root mean square error (RMSE) and relative error (RE) were used to test the correlation between measured and modeled stem diameter variations. Results showed that the model can be used to predict the stem diameter growth rate at the vegetative growth stage in tomato. (author)
Coast-down model based on rated parameters of reactor coolant pump
International Nuclear Information System (INIS)
Jiang Maohua; Zou Zhichao; Wang Pengfei; Ruan Xiaodong
2014-01-01
For a sudden loss of power in a reactor coolant pump (RCP), a calculation model of rotor speed and flow characteristics based on rated parameters was studied. The derived model was verified by comparison with power-off experimental data for the 100D RCP. The results indicate that it can be used in preliminary design calculation and verification analysis. A design criterion for RCPs was then described based on the calculation model, and the moment of inertia of the AP1000 RCP was verified against this criterion. (authors)
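A common closed form underlying such rated-parameter coast-down models: after power loss the rotor decelerates under hydraulic braking torque roughly proportional to speed squared, I·dω/dt = -M0·(ω/ω0)², which integrates to ω(t) = ω0/(1 + t/t_h) with half-speed time t_h = I·ω0/M0. This is a textbook simplification, not the paper's derived model, and the numbers below are illustrative.

```python
def coastdown_speed(t, omega0, inertia, rated_torque):
    """Rotor speed after power loss, assuming hydraulic braking torque
    proportional to speed squared: omega(t) = omega0 / (1 + t / t_h),
    where t_h = inertia * omega0 / rated_torque is the half-speed time."""
    t_half = inertia * omega0 / rated_torque
    return omega0 / (1.0 + t / t_half)
```

The hyperbolic decay shows directly why the flywheel moment of inertia is the design lever the abstract checks: t_h scales linearly with it.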
Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER meas...
Multi-Frame Rate Based Multiple-Model Training for Robust Speaker Identification of Disguised Voice
DEFF Research Database (Denmark)
Prasad, Swati; Tan, Zheng-Hua; Prasad, Ramjee
2013-01-01
Speaker identification systems are prone to attack when voice disguise is adopted by the user. To address this issue, our paper studies the effect of using different frame rates on the accuracy of the speaker identification system for disguised voice. In addition, a multi-frame rate based multiple-model training method is proposed. The experimental results show the superior performance of the proposed method compared to the commonly used single frame rate method for three types of disguised voice taken from the CHAINS corpus.
Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium
International Nuclear Information System (INIS)
Crist, K.C.
1984-10-01
An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables
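Both lung clearance models are compartmental, and a standard simplification conveys the role the dissolution rate plays in them: when first-order dissolution to blood competes with first-order mechanical clearance for the same pulmonary deposit, the fraction ultimately absorbed into blood is k_d/(k_d + k_m). This competing-rates sketch is illustrative background, not either model from the report, and the rate values are assumed.

```python
def fraction_to_blood(k_dissolution, k_mechanical):
    """Fraction of a pulmonary deposit ultimately absorbed into blood when
    first-order dissolution (rate k_d) competes with first-order mechanical
    clearance (rate k_m): k_d / (k_d + k_m)."""
    return k_dissolution / (k_dissolution + k_mechanical)
```

For a slowly dissolving oxide (say k_d = 0.001/d against k_m = 0.005/d, both assumed), only about one-sixth of the deposit reaches blood, which is why the two models' different treatments of pulmonary clearance produce the divergent predictions the study reports.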
International Nuclear Information System (INIS)
Xu, Zejian; Huang, Fenglei
2012-01-01
Both the descriptive and predictive capabilities of five physically based constitutive models (PB, NNL, ZA, VA, and RK) are investigated and compared systematically in characterizing the plastic behavior of 603 steel at temperatures ranging from 288 to 873 K and strain rates ranging from 0.001 to 4500 s⁻¹. Determination of the constitutive parameters is introduced in detail for each model. The validity of the established models is checked by strain rate jump tests performed under different loading conditions. The results show that the RK and NNL models perform better in the description of material behavior, especially the work-hardening effect, while the PB and VA models predict better. The inconsistency observed between the descriptive and predictive capabilities of the models indicates the existence of a minimum number of required fitting data, reflecting the degree of a model's requirement for basic data in parameter calibration. It is also found that the descriptive capability of a model depends to a large extent on both its form and the number of its constitutive parameters, while the precision of prediction relies largely on the performance of description. In the selection of constitutive models, the experimental data and the constitutive models should be considered together to obtain better efficiency in characterizing material behavior.
A Multiagent Cooperation Model Based on Trust Rating in Dynamic Infinite Interaction Environment
Directory of Open Access Journals (Sweden)
Sixia Fan
2018-01-01
To improve the liveness of agents and to enhance trust and collaboration in multiagent systems, a new cooperation model based on trust rating in a dynamic infinite interaction environment (TR-DII) is proposed. The TR-DII model is used to control an agent's autonomy and selfishness and to help agents make rational decisions. It rests on two components: a dynamic repeated interaction structure and a trust rating. The dynamic repeated interaction structure is formed by multistage inviting and evaluating actions. It transforms agents' interactions into an infinite task-allocation environment with a controlled, renewable cycle, a component most agent models ignore. It also shapes expectations and behaviors of agents that may not appear in one-shot interactions but emerge in long-term cooperation. Moreover, combined with a rewards-and-punishments mechanism (RPM), the trust rating (TR) is proposed to control agent blindness in the selection phase. In this model it is the RPM that directly influences decisions, not reputation as other models suggest. Meanwhile, the TR can monitor whether agents are trustworthy or untrustworthy, and it refines an agent's disrepute in a way ignored by other models. Finally, a grid-puzzle experiment is used to test the TR-DII model against five comparison models. The results show that the TR-DII model can effectively adjust the trust level between agents and makes solvers more trustworthy and their choices more rational. Moreover, through feedback of interaction results, the TR-DII model can adjust the income function to control cooperation reputation and achieve closed-loop control.
DEFF Research Database (Denmark)
Mogensen, Christian Backer; Ankersen, Ejnar Skytte; Lindberg, Mats J
2018-01-01
BACKGROUND: Hospital at home (HaH) is an alternative to acute admission for elderly patients. It is unclear whether they should be cared for primarily by a hospital intern specialist or by the patient's own general practitioner (GP). The study assessed whether a GP based model was more effective than ... Denmark, including + 65 years old patients with an acute medical condition that required acute hospital in-patient care. The patients were randomly assigned to the hospital specialist based model or the GP model of HaH care. Five physical and cognitive performance tests were performed at inclusion and after 7 ... CONCLUSIONS: The GP based HaH model was more effective than the hospital specialist model in avoiding hospital admissions within 7 days among elderly patients with an acute medical condition, with no differences in mental or physical recovery rates or deaths between the two models. REGISTRATION: No. NCT...
Zheng, Dandan; Hou, Huirang; Zhang, Tao
2016-04-01
For ultrasonic gas flow rate measurement based on an ultrasonic exponential model, when the noise frequency is close to that of the desired signal (similar-frequency noise), or when the received signal amplitude is small and unstable at high flow rates, the genetic-ant colony optimization-3cycles algorithm may converge to a local optimum, degrading measurement accuracy. Therefore, an improved method, energy genetic-ant colony optimization-3cycles (EGACO-3cycles), is proposed to solve this problem. By locating the maximum-energy position of the signal, the initial parameter range of the exponential model can be narrowed and local convergence avoided. Moreover, a DN100 flow rate measurement system using the EGACO-3cycles method is established based on an NI PCI-6110 board and a personal computer. A series of experiments is carried out to test the new method and the measurement system. It is shown that local convergence does not appear with the EGACO-3cycles method when similar-frequency noise is present and the flow rate is high, so the correct time of flight can be obtained. Furthermore, through flow calibration on this system, a measurement range ratio of 500:1 is achieved, with a measurement accuracy of 0.5% at a low transition velocity of 0.3 m/s.
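Once the correct times of flight are recovered, they feed the standard transit-time relation for axial flow velocity. The relation below is the generic textbook formula, not the paper's system model, and the path geometry in the self-check is an assumed DN100-like example.

```python
import math

def axial_velocity(t_down, t_up, path_len, angle_deg):
    """Axial flow velocity from ultrasonic transit times.
    t_down / t_up: downstream / upstream times of flight (s);
    path_len: acoustic path length (m); angle_deg: beam angle to pipe axis.
    Derived from t = L / (c ± v*cos(angle)); sound speed c cancels out."""
    cos_a = math.cos(math.radians(angle_deg))
    return path_len * (t_up - t_down) / (2.0 * cos_a * t_down * t_up)
```

Because the sound speed cancels, the estimate is insensitive to gas composition and temperature, which is why getting the time of flight right (the hard part the abstract addresses) is the whole game.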
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
Directory of Open Access Journals (Sweden)
Wei He
Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments, and a model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ error/particle/cm², while the MTTF is approximately 110.7 h.
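The order of magnitude of such figures follows from the usual fluence normalization: an error rate per unit fluence from ground testing, and an MTTF from the expected on-orbit flux assuming Poisson error arrivals. The counts and flux below are invented for illustration, not the paper's measurements.

```python
def sfer(error_count, fluence):
    """System functional error rate: observed errors divided by particle
    fluence, in errors per (particle/cm^2)."""
    return error_count / fluence

def mttf_hours(sfer_value, orbit_flux):
    """Mean time to failure for an on-orbit flux (particles/cm^2/h),
    assuming functional errors arrive as a Poisson process."""
    return 1.0 / (sfer_value * orbit_flux)
```

For instance, 50 functional errors over a fluence of 5×10⁴ particles/cm² gives an SFER of 10⁻³, and an assumed flux of 9 particles/cm²/h then yields an MTTF near 111 h, the same scale as the abstract's reported values.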
Directory of Open Access Journals (Sweden)
Xia Liang
2018-05-01
With the remarkable growth of e-commerce platforms, consumers increasingly prefer to purchase products online, and online ratings help them choose among products. Thus, to help consumers select products effectively, it is necessary to provide decision support methods for online purchasing. Considering that decision makers are boundedly rational, this paper proposes a novel decision support model for product selection based on online ratings, in which the regret-aversion behavior of consumers is formulated. Massive online ratings provided by experienced consumers for alternative products, associated with several evaluation attributes, are obtained by software finders. Then, evaluations of the alternative products in the form of stochastic variables are conducted. To select a desirable alternative product, a novel method is introduced to calculate the gain and loss degrees of each alternative over the others. Considering the regret behavior of consumers in the product selection process, the regret and rejoice values of the alternative products are computed to obtain their perceived utility values. According to the priority order of the evaluation attributes provided by the consumer, the priority weights of the attributes are determined based on the perceived utility values of the alternative products. Furthermore, the overall perceived utility values of the alternative products are obtained to generate a ranking result. Finally, a practical example from Zol.com.cn for tablet computer selection is used to demonstrate the feasibility and practicality of the proposed model.
Reliability prediction system based on the failure rate model for electronic components
International Nuclear Information System (INIS)
Lee, Seung Woo; Lee, Hwa Ki
2008-01-01
Although many methodologies for predicting the reliability of electronic components have been developed, their results can be subjective under a particular set of circumstances, and therefore the reliability is not easy to quantify. Among the reliability prediction methods are the statistical analysis based method, the similarity analysis method based on an external failure rate database, and the method based on the physics-of-failure model. In this study, we developed a system by which the reliability of electronic components can be predicted, built around the statistical analysis method as the most readily applied approach. The failure rate models applied are MIL-HDBK-217F N2, PRISM, and Telcordia (Bellcore), and these were compared with a general purpose system in order to validate the effectiveness of the developed system. Being able to predict the reliability of electronic components from the design stage, the developed system is expected to contribute to enhancing the reliability of electronic components.
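A parts-count style prediction in the spirit of MIL-HDBK-217F sums the scaled failure rates of all components; every base rate and pi-factor below is an invented placeholder, not a handbook value:

```python
# Parts-count sketch: system failure rate = sum over parts of
# (base rate x environment factor x quality factor x count).
# All numbers are illustrative, not MIL-HDBK-217F values.
parts = [
    # (name, base_rate per 1e6 h, pi_env, pi_quality, count)
    ("resistor",  0.002, 2.0, 1.0, 40),
    ("capacitor", 0.010, 2.0, 1.0, 15),
    ("ic",        0.050, 4.0, 2.0,  6),
]

# Total failure rate in failures per 1e6 hours, then MTBF in hours.
lambda_total = sum(b * pe * pq * n for _, b, pe, pq, n in parts)
mtbf_hours = 1e6 / lambda_total
```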
Experimental validation of a rate-based model for CO2 capture using an AMP solution
DEFF Research Database (Denmark)
Gabrielsen, Jostein; Svendsen, H. F.; Michelsen, Michael Locht
2007-01-01
Detailed experimental data, including temperature profiles over the absorber, for a carbon dioxide (CO2) absorber with structured packing in an integrated laboratory pilot plant using an aqueous 2-amino-2-methyl-1-propanol (AMP) solution are presented. The experimental gas-liquid material balance...... was within an average of 3.5% for the experimental conditions presented. A predictive rate-based steady-state model for CO2 absorption into an AMP solution, using an implicit expression for the enhancement factor, has been validated against the presented pilot plant data. Furthermore, a parameter...
DEFF Research Database (Denmark)
De Giovanni, Domenico
2010-01-01
prepayment models for mortgage backed securities, this paper builds a Rational Expectation (RE) model describing the policyholders' behavior in lapsing the contract. A market model with stochastic interest rates is considered, and the pricing is carried out through numerical approximation...
Learning to maximize reward rate: a model based on semi-Markov decision processes.
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R
2014-01-01
When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually yields more outcome, but less time remains for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate decision threshold for each condition. We propose a model of learning the optimal decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP, with each "condition" being a "state" and the values of the decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the decision thresholds until it finally finds the optimal values.
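The reward-rate objective can be illustrated with a toy stand-in for a sequential sampling model: accuracy rises and decision time grows with the threshold, and the best threshold differs by condition. The accuracy and timing functions here are invented for illustration, not the paper's model:

```python
import math

# Toy reward-rate objective: pick, per condition, the decision
# threshold that maximizes expected reward per unit time.
def accuracy(theta, difficulty):
    """P(correct) rises with threshold, more slowly on hard trials."""
    return 1.0 - 0.5 * math.exp(-theta / difficulty)

def mean_time(theta):
    """Deliberation cost grows with the threshold."""
    return 0.3 + 0.1 * theta

def reward_rate(theta, difficulty, reward=1.0, penalty=0.5):
    p = accuracy(theta, difficulty)
    return (reward * p - penalty * (1.0 - p)) / mean_time(theta)

thresholds = [0.5 * k for k in range(1, 21)]
best = {d: max(thresholds, key=lambda t: reward_rate(t, d))
        for d in (1.0, 2.0)}          # easy vs. hard condition
```

As expected, the harder condition warrants a higher threshold: spending more time per decision pays off when errors are more likely.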
Directory of Open Access Journals (Sweden)
Leng Fei
2008-09-01
Full Text Available This paper discusses the seismic analysis of concrete dams with consideration of material nonlinearity. Based on a consistent rate-dependent model and two thermodynamics-based models, two thermodynamics-based rate-dependent constitutive models were developed that account for the influence of the strain rate. They can describe the dynamic behavior of concrete and be applied to nonlinear seismic analysis of concrete dams that takes into account the rate sensitivity of concrete. With the two models, a nonlinear analysis of the seismic response of the Koyna Gravity Dam and the Dagangshan Arch Dam was conducted. The results were compared with those of a linear elastic model and two rate-independent thermodynamics-based constitutive models, and the influences of the constitutive models and the strain rate on the seismic response of concrete dams were discussed. It can be concluded from the analysis that, during the seismic response, the tensile stress is the controlling stress in the design and seismic safety evaluation of concrete dams. In the different models, the plastic strain and plastic strain rate of the dams show a similar distribution. When the influence of the strain rate is considered, the maximum plastic strain and plastic strain rate decrease.
Murphy, Gregory J.
2012-01-01
This quantitative study explores the 2010 recommendation of the Educational Funding Advisory Board to consider the Evidence-Based Adequacy model of school funding in Illinois. This school funding model identifies and costs research based practices necessary in a prototypical school and sets funding levels based upon those practices. This study…
Modeling low-dose-rate effects in irradiated bipolar-base oxides
International Nuclear Information System (INIS)
Graves, R.J.; Cirba, C.R.; Schrimpf, R.D.; Milanowski, R.J.; Saigne, F.; Michez, A.; Fleetwood, D.M.; Witczak, S.C.
1997-02-01
A physical model is developed to quantify the contribution of oxide-trapped charge to enhanced low-dose-rate gain degradation in BJTs. Simulations show that space-charge-limited transport is partially responsible for the low-dose-rate enhancement.
Stage-discharge rating curves based on satellite altimetry and modeled discharge in the Amazon basin
Paris, Adrien; Dias de Paiva, Rodrigo; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stephane; Garambois, Pierre-André; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frederique
2016-05-01
In this study, rating curves (RCs) were determined by applying satellite altimetry to a poorly gauged basin. This study demonstrates the synergistic application of remote sensing and watershed modeling to capture the dynamics and quantity of flow in the Amazon River Basin, respectively. Three major advancements for estimating basin-scale patterns in river discharge are described. The first advancement is the preservation of the hydrological meanings of the parameters expressed by Manning's equation to obtain a data set containing the elevations of the river beds throughout the basin. The second advancement is the provision of parameter uncertainties and, therefore, the uncertainties in the rated discharge. The third advancement concerns estimating the discharge while considering backwater effects. We analyzed the Amazon Basin using nearly one thousand series that were obtained from ENVISAT and Jason-2 altimetry for more than 100 tributaries. Discharge values and related uncertainties were obtained from the rain-discharge MGB-IPH model. We used a global optimization algorithm based on Markov chain Monte Carlo within a Bayesian framework to determine the rating curves. The data were randomly allocated into 80% calibration and 20% validation subsets. A comparison with the validation samples produced a Nash-Sutcliffe efficiency (Ens) of 0.68. When the MGB discharge uncertainties were less than 5%, the Ens value increased to 0.81 (mean). A comparison with the in situ discharge resulted in an Ens value of 0.71 for the validation samples (and 0.77 for calibration). The Ens values at the mouths of the rivers that experienced backwater effects improved significantly when the mean monthly slope was included in the RC. Our RCs were not mission-dependent, and the Ens value was preserved when applying ENVISAT rating curves to Jason-2 altimetry at crossovers. The cease-to-flow parameter of our RCs provided a good proxy for determining river bed elevation. This proxy was validated
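The Bayesian rating-curve idea can be sketched with a random-walk Metropolis sampler fitting Q = a(h - z0)^b to synthetic stage-discharge pairs; the parameter values, flat-prior bounds, and proposal scales below are illustrative, not those of the study:

```python
import math, random

# Hedged sketch: random-walk Metropolis for a stage-discharge rating
# curve Q = a * (h - z0)**b with Gaussian errors. The synthetic data
# stand in for altimetric stages and modeled discharge.
random.seed(42)
a_true, z0_true, b_true, sigma = 50.0, 2.0, 1.8, 20.0
stages = [3.0 + 0.2 * i for i in range(30)]
flows = [a_true * (h - z0_true) ** b_true + random.gauss(0, sigma)
         for h in stages]

def log_post(theta):
    """Log posterior with flat priors inside physical bounds."""
    a, z0, b = theta
    if a <= 0 or b <= 0 or z0 >= min(stages):
        return -math.inf
    sse = sum((q - a * (h - z0) ** b) ** 2 for h, q in zip(stages, flows))
    return -sse / (2.0 * sigma ** 2)

theta = [30.0, 1.0, 1.5]                  # crude starting guess
best, best_lp = list(theta), log_post(theta)
lp = best_lp
for _ in range(20000):
    prop = [t + random.gauss(0, s) for t, s in zip(theta, (2.0, 0.05, 0.05))]
    lp_prop = log_post(prop)
    if lp_prop - lp > math.log(random.random()):   # Metropolis acceptance
        theta, lp = prop, lp_prop
        if lp > best_lp:
            best, best_lp = list(theta), lp
```

The posterior samples (here only the best visited point is kept) carry the parameter uncertainty that propagates into rated-discharge uncertainty.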
International Nuclear Information System (INIS)
Rabi, Jose A.; Mohamad, Abdulmajeed A.
2004-01-01
Radon-222 is a radionuclide exhaled from phosphogypsum, a by-product of the phosphate fertilizer industry. Alternative large-scale application of this waste may provide a substitute material for civil engineering, provided that environmental issues concerning its disposal and management are overcome. The first part of this paper outlines a steady-state two-dimensional model for 222Rn transport through porous media, inside which emanation (source term) and decay (sink term) exist. The Boussinesq approach is invoked for the laminar buoyancy-driven interstitial air flow, which is also modeled according to the Darcy-Brinkman formulation. In order to account for the simultaneous effects of the physical parameters involved, the governing equations are cast into dimensionless form. Apart from the usual controlling parameters such as the Reynolds, Prandtl, Schmidt, Grashof and Darcy numbers, three unconventional dimensionless groups are put forward. With 222Rn transport in phosphogypsum-bearing porous media in mind, the physical meaning of these newly introduced parameters and representative values for the physical parameters involved are presented. A limiting diffusion-dominated scenario is addressed, for which an analytical solution is deduced for boundary conditions comprising an impermeable phosphogypsum stack base and a non-zero fixed activity concentration at the stack top. Accordingly, an expression for the average Sherwood number corresponding to the normalized 222Rn exhalation rate is presented.
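For a diffusion-dominated limiting case, a textbook slab result gives the exhalation rate in closed form, E = S·L·tanh(d/L) with diffusion length L = sqrt(D/λ), under analogous boundary conditions (impermeable base, free top surface). The parameter values below are representative assumptions, not the paper's:

```python
import math

# Steady-state 1-D diffusion with generation S and decay lam in a slab
# of thickness d: textbook exhalation rate E = S * L * tanh(d / L).
# All parameter values are representative assumptions.
lam = 2.1e-6          # 222Rn decay constant, 1/s
D = 1.0e-6            # effective diffusion coefficient, m^2/s
S = 5.0e-3            # emanation (generation) rate, Bq/(m^3 s)
d = 2.0               # stack thickness, m

L = math.sqrt(D / lam)                # diffusion length, m
E = S * L * math.tanh(d / L)          # surface exhalation, Bq/(m^2 s)
```

For stacks much thicker than L, tanh(d/L) → 1 and the exhalation saturates at S·L, which is why stack height stops mattering beyond a few diffusion lengths.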
Wang, Yunong; Cheng, Rongjun; Ge, Hongxia
2017-08-01
In this paper, a lattice hydrodynamic model is derived considering not only the effect of the flow rate difference but also a delayed feedback control signal that includes more comprehensive information. The control method is used to analyze the stability of the model. Furthermore, the critical condition for linearly stable traffic flow is deduced, and numerical simulation is carried out to investigate the advantage of the proposed model with and without the effect of the flow rate difference and the control signal. The results are consistent with the theoretical analysis.
State-Space Dynamic Model for Estimation of Radon Entry Rate, based on Kalman Filtering
Czech Academy of Sciences Publication Activity Database
Brabec, Marek; Jílek, K.
2007-01-01
Roč. 98, - (2007), s. 285-297 ISSN 0265-931X Grant - others:GA SÚJB JC_11/2006 Institutional research plan: CEZ:AV0Z10300504 Keywords : air ventilation rate * radon entry rate * state-space modeling * extended Kalman filter * maximum likelihood estimation * prediction error decomposition Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.963, year: 2007
Probabilistic Modeling of the Fatigue Crack Growth Rate for Ni-base Alloy X-750
International Nuclear Information System (INIS)
Yoon, Jae Young; Nam, Hyo On; Hwang, Il Soon; Tae Hyun Lee
2012-01-01
Bayesian inference was employed to reduce the uncertainties in EAC modeling parameters that had been established from experiments with Alloy X-750. A corrosion fatigue crack growth rate (FCGR) model was developed by fitting measured data from several fatigue tests, conducted in either constant-load or constant-ΔK mode, to Paris' law. From this fitting, the parameters C and m of the Paris' law model were assumed to obey Gaussian distributions. These parameters, characterizing the corrosion fatigue crack growth behavior of X-750, were updated using the Bayesian inference method to reduce the uncertainty in the model. (author)
International Nuclear Information System (INIS)
Woods, T.
1991-02-01
The Hydrocarbon Supply Model is used to develop long-term trends in Lower-48 gas production and costs. The model utilizes historical find-rate patterns to predict the discovery rate and size distribution of future oil and gas field discoveries. The report documents the methodologies used to quantify historical oil and gas field find-rates and to project those discovery patterns for future drilling. It also explains the theoretical foundations for the find-rate approach. The new field and reserve growth resource base is documented and compared to other published estimates. The report has six sections. Section 1 provides background information and an overview of the model. Sections 2, 3, and 4 describe the theoretical foundations of the model, the databases, and specific techniques used. Section 5 presents the new field resource base by region and depth. Section 6 documents the reserve growth model components
Petersen, Mark D.; Zeng, Yuehua; Haller, Kathleen M.; McCaffrey, Robert; Hammond, William C.; Bird, Peter; Moschetti, Morgan; Shen, Zhengkang; Bormann, Jayne; Thatcher, Wayne
2014-01-01
The 2014 National Seismic Hazard Maps for the conterminous United States incorporate additional uncertainty in the fault slip-rate parameters that control earthquake-activity rates, beyond what was applied in previous versions of the hazard maps. This additional uncertainty is accounted for by new geodesy- and geology-based slip-rate models for the Western United States. Models that were considered include an updated geologic model based on expert opinion and four combined inversion models informed by both geologic and geodetic input. The two block models considered indicate significantly higher slip rates than the expert-opinion model and the two fault-based combined inversion models. For the hazard maps, we apply 20 percent weight, with equal weighting, to the two fault-based models. Off-fault geodetic-based models were not considered in this version of the maps. Resulting changes to the hazard maps are generally less than 0.05 g (acceleration of gravity). Future research will improve the maps and interpret differences between the new models.
International Nuclear Information System (INIS)
Yang, F.Q.; Xue, H.; Zhao, L.Y.; Fang, X.R.
2014-01-01
Highlights: • Creep is considered to be the primary mechanical factor in crack tip film degradation. • The SCC rate prediction model is based on the crack tip creep strain rate. • The SCC rate calculated at the secondary stage of creep is recommended. • The effect of the stress intensity factor on the SCC growth rate is discussed. - Abstract: The quantitative prediction of stress corrosion cracking (SCC) in structural materials is essential in the safety assessment of nuclear power plants. A new quantitative prediction model is proposed by combining the Ford–Andresen model, a crack tip creep model and an elastic–plastic finite element method. Creep at the crack tip is considered to be the primary mechanical factor in protective film degradation, and the creep strain rate at the crack tip is suggested as the primary mechanical factor in predicting the SCC rate. SCC rates at the secondary stage of creep are recommended when using the approach introduced in this study to predict the SCC rates of materials in high-temperature water. The proposed approach can be used to understand SCC crack growth in the structural materials of light water reactors.
Evaluating crown fire rate of spread predictions from physics-based models
C. M. Hoffman; J. Ziegler; J. Canfield; R. R. Linn; W. Mell; C. H. Sieg; F. Pimont
2015-01-01
Modeling the behavior of crown fires is challenging due to the complex set of coupled processes that drive the characteristics of a spreading wildfire and the large range of spatial and temporal scales over which these processes occur. Detailed physics-based modeling approaches such as FIRETEC and the Wildland Urban Interface Fire Dynamics Simulator (WFDS) simulate...
Aandahl, R. Zachariah; Reyes, Josephine F.; Sisson, Scott A.; Tanaka, Mark M.
2012-01-01
Variable numbers of tandem repeats (VNTR) typing is widely used for studying the bacterial cause of tuberculosis. Knowledge of the rate of mutation of VNTR loci facilitates the study of the evolution and epidemiology of Mycobacterium tuberculosis. Previous studies have applied population genetic models to estimate the mutation rate, leading to estimates varying widely from around to per locus per year. Resolving this issue using more detailed models and statistical methods would lead to improved inference in the molecular epidemiology of tuberculosis. Here, we use a model-based approach that incorporates two alternative forms of a stepwise mutation process for VNTR evolution within an epidemiological model of disease transmission. Using this model in a Bayesian framework we estimate the mutation rate of VNTR in M. tuberculosis from four published data sets of VNTR profiles from Albania, Iran, Morocco and Venezuela. In the first variant, the mutation rate increases linearly with respect to repeat numbers (linear model); in the second, the mutation rate is constant across repeat numbers (constant model). We find that under the constant model, the mean mutation rate per locus is (95% CI: , ) and under the linear model, the mean mutation rate per locus per repeat unit is (95% CI: , ). These new estimates represent a high rate of mutation at VNTR loci compared to previous estimates. To compare the two models we use posterior predictive checks to ascertain which of the two models is better able to reproduce the observed data. From this procedure we find that the linear model performs better than the constant model. The general framework we use allows the possibility of extending the analysis to more complex models in the future. PMID:22761563
International Nuclear Information System (INIS)
Lee, Gyeong Geun; Lee, Yong Bok; Kim, Min Chul; Kwon, Junh Yun
2012-01-01
Neutron irradiation of reactor pressure vessel (RPV) steels causes a decrease in fracture toughness and an increase in yield strength while in service. It is generally accepted that the growth of point defect clusters (PDC) and copper-rich precipitates (CRP) drives the radiation hardening of RPV steels. A number of models have been proposed to account for the embrittlement of RPV steels. Rate-theory-based modeling mathematically describes the evolution of the radiation-induced microstructure of ferritic steels under neutron irradiation. In this work, we compared rate-theory-based model calculations with the surveillance test results of Korean Light Water Reactors (LWRs).
DEFF Research Database (Denmark)
Gaspar, Jozsef; Gladis, Arne; Woodley, John
2017-01-01
solvent-regeneration energy demand. The focus of this work is to develop a rate-based model for CO2 absorption using MDEA enhanced with CA and to validate it against pilot-scale absorption experiments. In this work, we compare model predictions to measured temperature and CO2 concentration profiles...
International Nuclear Information System (INIS)
Berger, G.
1997-01-01
Most mineral reactions in natural water-rock systems progress at conditions close to chemical equilibrium. The kinetics of these reactions, in particular the dissolution rate of the primary minerals, is a major constraint on the numerical modelling of diagenetic and hydrothermal processes. In the case of silicates, recent experimental studies have pointed out the necessity of better understanding the elementary reactions which control the dissolution process. This article presents several models that have been proposed to account for the observed dissolution rate/chemical affinity relationships. The cases of glasses (R7T7), feldspars and clays, in water, in near-neutral-pH aqueous solutions and in acidic/basic media, are reviewed. (A.C.)
Noise Reduction of MEMS Gyroscope Based on Direct Modeling for an Angular Rate Signal
Directory of Open Access Journals (Sweden)
Liang Xue
2015-02-01
Full Text Available In this paper, a novel approach for processing the output signals of microelectromechanical systems (MEMS) gyroscopes is presented to reduce the bias drift and noise. The principle of the noise reduction is presented, and an optimal Kalman filter (KF) is designed with a steady-state filter gain obtained from an analysis of KF observability. In particular, the true angular rate signal is directly modeled to obtain an optimal estimate and to make a self-compensation for the gyroscope without needing information from other sensors, whether in static or dynamic conditions. A linear fit equation that describes the relationship between the KF bandwidth and the modeling parameter of the true angular rate is derived from an analysis of the KF frequency response. The test results indicate that an ARW noise of 4.87°/h^0.5 and a bias instability of 44.41°/h for the MEMS gyroscope were reduced to 0.4°/h^0.5 and 4.13°/h, respectively, by the KF under a given bandwidth (10 Hz). The 1σ estimation error was reduced from 1.9°/s to 0.14°/s in the constant rate test and from 1.7°/s to 0.5°/s in the swing rate test. The filtered angular rate signal also reflects the dynamic characteristics of the input rate signal well in dynamic conditions. The presented algorithm is proved to be effective at improving the measurement precision of the MEMS gyroscope.
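The constant-gain idea can be sketched for a scalar case: model the true rate as a random walk, solve the steady-state Riccati equation for the gain in closed form, and run the fixed-gain filter. The noise variances and the constant-rate test value below are invented, not the paper's:

```python
import math, random

# Sketch: scalar random-walk model of the true angular rate (process
# noise q) observed with measurement noise r. The steady-state
# posterior variance P solves P^2 + q*P - q*r = 0, giving a constant
# Kalman gain. Noise levels are illustrative assumptions.
q, r = 1e-4, 1.0
P = (-q + math.sqrt(q * q + 4.0 * q * r)) / 2.0   # steady-state Riccati root
K = (P + q) / (P + q + r)                         # constant Kalman gain

random.seed(1)
true_rate = 5.0                      # deg/s, constant-rate test (assumed)
x = 0.0                              # filter state
errs_raw, errs_filt = [], []
for _ in range(2000):
    z = true_rate + random.gauss(0.0, math.sqrt(r))   # noisy gyro sample
    x = x + K * (z - x)              # random-walk predict + update
    errs_raw.append(abs(z - true_rate))
    errs_filt.append(abs(x - true_rate))
```

After the transient, the fixed-gain filter's error sits far below the raw sensor noise, at the cost of bandwidth set by K.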
Development of a QTL-environment-based predictive model for node addition rate in common bean.
Zhang, Li; Gezan, Salvador A; Eduardo Vallejos, C; Jones, James W; Boote, Kenneth J; Clavijo-Michelangeli, Jose A; Bhakta, Mehul; Osorno, Juan M; Rao, Idupulapati; Beebe, Stephen; Roman-Paoli, Elvin; Gonzalez, Abiezer; Beaver, James; Ricaurte, Jaumer; Colbert, Raphael; Correll, Melanie J
2017-05-01
This work reports the effects of the genetic makeup, the environment and the genotype-by-environment interactions on node addition rate in an RIL population of common bean. This information was used to build a predictive model for node addition rate. To select a plant genotype that will thrive in targeted environments, it is critical to understand the genotype-by-environment interaction (GEI). In this study, multi-environment QTL analysis was used to characterize the node addition rate (NAR, node day^-1) on the main stem of the common bean (Phaseolus vulgaris L.). This analysis was carried out with field data from 171 recombinant inbred lines that were grown at five sites (Florida, Puerto Rico, two sites in Colombia, and North Dakota). Four QTLs (Nar1, Nar2, Nar3 and Nar4) were identified, one of which, Nar2, had significant QTL-by-environment interactions (QEI) with temperature. Temperature was identified as the main environmental factor affecting NAR, while day length and solar radiation played a minor role. Integration of sites as covariates into a QTL mixed site-effect model, and further replacing the site component with explanatory environmental covariates (i.e., temperature, day length and solar radiation), yielded a model that explained 73% of the phenotypic variation for NAR with a root mean square error of 16.25% of the mean. The QTL consistency and stability were examined through a tenfold cross-validation with different sets of genotypes, and these four QTLs were always detected with 50-90% probability. The final model was evaluated using the leave-one-site-out method to assess the influence of site on node addition rate. These analyses provided a quantitative measure of the effects on NAR of common beans exerted by the genetic makeup, the environment and their interactions.
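The leave-one-site-out evaluation can be sketched on synthetic data with a single temperature covariate (the real model also carried QTL terms, day length, and solar radiation; all numbers below are invented):

```python
import random

# Illustrative leave-one-site-out check of a temperature-driven node
# addition rate model, NAR ~ b0 + b1 * T. Data are synthetic.
random.seed(0)
sites = {s: 18.0 + 3.0 * i for i, s in enumerate("ABCDE")}  # mean temp per site

def simulate(temp):
    """30 synthetic NAR observations at a given temperature."""
    return [0.02 * temp + 0.1 + random.gauss(0, 0.02) for _ in range(30)]

data = {s: simulate(t) for s, t in sites.items()}

def fit(pairs):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b1 = (sum((x - mx) * (y - my) for x, y in pairs)
          / sum((x - mx) ** 2 for x, _ in pairs))
    return my - b1 * mx, b1

rmse = {}
for held_out in sites:
    train = [(sites[s], y) for s in sites if s != held_out for y in data[s]]
    b0, b1 = fit(train)
    sq = [(b0 + b1 * sites[held_out] - y) ** 2 for y in data[held_out]]
    rmse[held_out] = (sum(sq) / len(sq)) ** 0.5
```

Per-site RMSE close to the simulated noise level indicates the covariate model transfers across sites, which is the point of the leave-one-site-out check.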
Prediction of PWSCC in nickel base alloys using crack growth rate models
International Nuclear Information System (INIS)
Thompson, C.D.
1995-01-01
The Ford/Andresen slip dissolution SCC model, originally developed for stainless steel components in BWR environments, has been applied to Alloy 600 and Alloy X-750 tested in deaerated pure water chemistry. A method is described whereby the crack growth rates measured in compact tension specimens can be used to estimate crack growth in a component. Good agreement was found between model predictions and measured SCC in X-750 threaded fasteners over a wide range of temperatures, stresses, and material conditions. Most data support the basic assumption of this model that cracks initiate early in life. The evidence supporting a particular SCC mechanism is mixed. Electrochemical repassivation data and estimates of oxide fracture strain indicate that the slip dissolution model can account for the observed crack growth rates, provided primary rather than secondary creep rates are used. However, approximately 100 cross-sectional TEM foils of SCC cracks, including crack tips, reveal no evidence of enhanced plasticity or unique dislocation patterns at the crack tip or along the crack to support a classic slip dissolution mechanism. No voids, hydrides, or microcracks are found in the vicinity of the crack tips, creating doubt about classic hydrogen-related mechanisms. The bulk oxide films exhibit a surface oxide which is often different from the oxide found within a crack. Although the bulk chromium concentration affects the rate of SCC, analytical data indicate that the mechanism does not result from chromium depletion at the grain boundaries. The overall findings support a corrosion/dissolution mechanism, but not one necessarily related to slip at the crack tip. (author). 12 refs, 27 figs
Modelling on optimal portfolio with exchange rate based on discontinuous stochastic process
Yan, Wei; Chang, Yuwen
2016-12-01
Considering a stochastic exchange rate, this paper is concerned with dynamic portfolio selection in a financial market. The optimal investment problem is formulated as a continuous-time mathematical model under the mean-variance criterion, with the underlying processes following jump-diffusion dynamics (Wiener and Poisson processes). The corresponding Hamilton-Jacobi-Bellman (HJB) equation of the problem is then presented and its efficient frontier is obtained. Moreover, the optimal strategy is also derived under the safety-first criterion.
Johnson, Brett Emery Trabun; Gamage, Thoshitha Thanushka; Bakken, David Edward
2018-01-02
This disclosure describes, in part, a system management component and a failure detection component for use in a power grid data network to identify anomalies within the network and systematically adjust the quality of service of data published by publishers and subscribed to by subscribers within the network. In one implementation, subscribers may identify a desired data rate, a minimum acceptable data rate, a desired latency, a minimum acceptable latency and a priority for each subscription. The failure detection component may identify an anomaly within the network and the source of the anomaly. Based on the identified anomaly, data rates and/or data paths may be adjusted in real time to ensure that the power grid data network does not become overloaded and/or fail.
Probabilistic Modeling of the Fatigue Crack Growth Rate for Ni-base Alloy X-750
International Nuclear Information System (INIS)
Yoon, J.Y.; Nam, H.O.; Hwang, I.S.; Lee, T.H.
2012-01-01
Extending the operating life of existing nuclear power plants (NPPs) beyond 60 years raises many aging problems of passive components, such as PWSCC, IASCC, FAC and corrosion fatigue. Safety analysis combines deterministic and probabilistic analysis, but general probabilistic analyses such as probabilistic safety assessment (PSA) involve many uncertainties in parameters and relationships. Bayesian inference decreases these uncertainties by updating unknown parameters. To ensure the reliability of passive components (e.g., pipes) as well as active components (e.g., valves, pumps) in NPPs, a probabilistic model for failures is developed and the fatigue crack growth rate (FCGR) is updated.
Innovative model-based flow rate optimization for vanadium redox flow batteries
König, S.; Suriyah, M. R.; Leibfried, T.
2016-11-01
In this paper, an innovative approach is presented to optimize the flow rate of a 6-kW vanadium redox flow battery with realistic stack dimensions. Efficiency is derived using a multi-physics battery model and a newly proposed instantaneous efficiency determination technique. An optimization algorithm is applied to identify optimal flow rates for operation points defined by state of charge (SoC) and current. The proposed method is evaluated against the conventional approach of applying Faraday's first law of electrolysis, scaled by the so-called flow factor. To make a fair comparison, the flow factor is also optimized by simulating cycles with different charging/discharging currents. The results show that efficiency is increased by up to 1.2 percentage points, and discharge capacity is also increased by up to 1.0 kWh, or 5.4%. A detailed loss analysis is carried out for the cycles with the maximum and minimum charging/discharging currents. It is shown that the proposed method minimizes the sum of the losses caused by concentration over-potential, pumping and diffusion. Furthermore, for the deployed Nafion 115 membrane, it is observed that diffusion losses increase with stack SoC. Therefore, to decrease the stack SoC and lower diffusion losses, a higher flow rate during charging than during discharging is reasonable.
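The underlying trade-off (concentration over-potential losses fall with flow rate while pumping losses rise) means an intermediate flow rate minimizes the total loss. A toy sketch with invented loss coefficients:

```python
# Toy trade-off behind flow-rate optimization: a diffusion/concentration
# term falls with flow while the pumping term rises, so their sum has an
# interior minimum. Coefficients are invented placeholders.
def losses(flow, conc_coeff=4.0, pump_coeff=0.25):
    """Total loss (W, say) at a given flow rate."""
    return conc_coeff / flow + pump_coeff * flow

flows = [0.1 * k for k in range(1, 101)]     # candidate flow rates
best_flow = min(flows, key=losses)
# analytic optimum: sqrt(conc_coeff / pump_coeff) = 4.0
```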
Shale gas technology innovation rate impact on economic Base Case – Scenario model benchmarks
International Nuclear Information System (INIS)
Weijermars, Ruud
2015-01-01
Highlights: • Cash flow models control which technology is affordable in emerging shale gas plays. • The impact of technology innovation on IRR can be as important as wellhead price hikes. • Cash flow models are useful for technology decisions that make shale gas plays economic. • The economic gap can be closed by appropriate technology innovation. - Abstract: Low gas wellhead prices in North America have put its shale gas industry under high competitive pressure. Rapid technology innovation can help companies to improve the economic performance of shale gas fields. Cash flow models are paramount for setting effective production and technology innovation targets to achieve positive returns on investment in all global shale gas plays. The future cash flow of a well (or cluster of wells) may either improve or deteriorate, depending on: (1) the regional volatility in gas prices at the wellhead, which must pay for the gas resource extraction, and (2) the cost and effectiveness of the well technology used. Gas price is an externality and cannot be controlled by individual companies, but well technology cost can be reduced while improving production output. We assume two plausible scenarios for well technology innovation and model the return on investment while checking sensitivity to gas price volatility. It appears that well technology innovation, if paced fast enough, can fully offset the negative impact of gas price decline on shale well profits; the required innovation rates are quantified in our sensitivity analysis.
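The role of the cash flow model can be sketched with a toy single-well NPV under exponential production decline, comparing a base-case well cost with a technology-reduced cost; all figures are invented round numbers, not the paper's scenarios:

```python
# Toy cash-flow sketch: NPV of one shale well with exponential decline,
# showing how a technology-driven cut in well cost can flip the well
# from uneconomic to economic at the same gas price. Numbers invented.
def npv(well_cost, gas_price, rate0=3.0e6, decline=0.35, years=15, disc=0.10):
    """Upfront well cost, then discounted annual gas revenue."""
    value = -well_cost
    rate = rate0                          # first-year production
    for t in range(1, years + 1):
        value += gas_price * rate / (1.0 + disc) ** t
        rate *= (1.0 - decline)           # year-on-year decline
    return value

base = npv(well_cost=7.0e6, gas_price=0.9)       # negative at this price
innovated = npv(well_cost=5.0e6, gas_price=0.9)  # cheaper well, same output
```

With identical production and price assumptions, the two cases differ only by the well cost, so the NPV gap equals the cost reduction; the sign flip is the "economic gap closed by technology" point of the abstract.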
Directory of Open Access Journals (Sweden)
Yolande Jordaan
2015-08-01
This paper is primarily concerned with the revenue and tax efficiency effects of adjustments to marginal tax rates on individual income as an instrument of possible tax reform. The hypothesis is that changes to marginal rates affect not only the revenue base, but also tax efficiency and the optimum level of taxes that supports economic growth. Using an optimal revenue-maximising rate (based on Laffer analysis), the elasticity of taxable income is derived with respect to marginal tax rates for each taxable-income category. These elasticities are then used to quantify the impact of changes in marginal rates on the revenue base and tax efficiency using a microsimulation (MS) tax model. In this first paper on the research results, much attention is paid to the structure of the model and the way in which the database has been compiled. The model allows for the breakdown of individual taxpayers by income group, gender, educational level, age group, etc. Simulations include a scenario with higher marginal rates that is also more progressive (as in the 1998/1999 fiscal year), in which case tax revenue increases but the increase is overshadowed by a more than proportional decrease in tax efficiency as measured by its deadweight loss. On the other hand, a lowering of marginal rates (to bring South Africa's marginal rates more in line with those of its peers) improves tax efficiency but also results in a substantial revenue loss. The estimated optimal individual tax to gross domestic product (GDP) ratio for maximising economic growth (6.7 per cent) shows a strong response to changes in marginal rates, and the results from this research indicate that a lowering of marginal rates would also move the actual ratio closer to its optimum level. Thus, the trade-off between revenue collected and tax efficiency should be carefully monitored when personal income tax reform is being considered.
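A hedged illustration of the Laffer-type logic used above: assume a constant elasticity of taxable income e, so that reported income reacts to the marginal rate t as z(t) = z0 · (1 − t)^e; revenue R(t) = t · z(t) then peaks at t* = 1/(1 + e). The elasticity value below is an illustrative assumption, not one of the paper's estimates.

```python
# Constant-elasticity Laffer-curve sketch. The elasticity e = 0.4 and the
# income scale z0 are illustrative assumptions.

def taxable_income(t, z0=100.0, e=0.4):
    """Reported taxable income as a function of the marginal rate t."""
    return z0 * (1.0 - t) ** e


def revenue(t, z0=100.0, e=0.4):
    """Tax revenue R(t) = t * z(t)."""
    return t * taxable_income(t, z0, e)


def revenue_maximising_rate(e):
    """Rate at which R(t) peaks: t* = 1 / (1 + e)."""
    return 1.0 / (1.0 + e)
```

A higher behavioural elasticity lowers the revenue-maximising rate, which is the mechanism behind the paper's trade-off between revenue collected and tax efficiency.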
Pulungan, Ditho Ardiansyah
2018-02-24
Polymers in general exhibit pressure- and rate-dependent behavior. Modeling such behavior requires extensive, costly and time-consuming experimental work. Common simplifications may lead to severe inaccuracy when using the model for predicting the failure of structures. Here, we propose a viscoelastic viscoplastic damage model for polypropylene-based polymers. Such a set of constitutive equations can be used to describe the response of polypropylene under various strain rates and stress-triaxiality conditions. Our model can also be applied to a broad range of thermoplastic polymers. We detail the experimental campaign needed to best identify every parameter of the model. We validated the proposed model by performing 3-point bending tests at different loading speeds, where the load-displacement response of a polypropylene beam up to failure was accurately predicted.
Narooei, K; Arman, M
2018-03-01
In this research, the exponential stretch-based hyperelastic strain energy was generalized to a hyper-viscoelastic model using the hereditary integral of deformation history, in order to take the strain-rate effects on the mechanical behavior of materials into account. The hereditary integral was approximated by the approach of Goh et al. to determine the model parameters, and the same approximation was used for constitutive modeling. To demonstrate the ability of the proposed hyper-viscoelastic model, the stress-strain response of thermoplastic elastomer gel tissue at strain rates from 0.001 to 100/s was studied. In addition to better agreement between the current model and experimental data in comparison to the extended Mooney-Rivlin hyper-viscoelastic model, a stable material behavior was predicted for the pure shear and balanced biaxial deformation modes. To illustrate an engineering application of the current model, the Kolsky bar impact test of gel tissue was simulated and the effects of specimen size and inertia on the uniformity of deformation were investigated. As the mechanical response of polyurea is available over a wide range of strain rates (0.0016-6500/s), the current model was also fitted to these experimental data. The results showed that more accuracy can be expected from the current model than from the extended Ogden hyper-viscoelastic model. In the final verification example, pig skin experimental data were used to determine the parameters of the hyper-viscoelastic model. Subsequently, a pig skin specimen was loaded at different strain rates to a fixed strain and the change of stress with time (stress relaxation) was obtained. The stress relaxation results revealed that the peak stress increases with the applied strain rate up to a saturation loading rate, and that an equilibrium stress with magnitude of 0.281 MPa is reached. Copyright © 2017 Elsevier Ltd. All rights reserved.
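The hereditary-integral idea referenced above can be sketched in its simplest small-strain linear form with a one-term Prony kernel; the paper's hyper-viscoelastic formulation generalizes this to large strains. The kernel parameters and the discretization below are illustrative assumptions.

```python
import math

# Linear hereditary (convolution) integral of deformation history with a
# one-term Prony relaxation kernel G(t) = g_inf + g1 * exp(-t / tau):
#   sigma(t) = integral_0^t G(t - s) * (d eps / d s) ds
# Parameters are illustrative, not identified from any experiment.

def hereditary_stress(times, strains, g_inf=1.0, g1=2.0, tau=0.5):
    """Rectangular discretization of the convolution over strain increments."""
    sigma = []
    for i, t in enumerate(times):
        s = 0.0
        for j in range(1, i + 1):
            deps = strains[j] - strains[j - 1]
            s += (g_inf + g1 * math.exp(-(t - times[j]) / tau)) * deps
        sigma.append(s)
    return sigma
```

For a step strain the formula reduces to the relaxation function itself: the stress jumps to (g_inf + g1) times the step and relaxes to g_inf times the step, the classic stress-relaxation behaviour the abstract discusses for gel tissue and pig skin.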
Evolution of the rate of biological aging using a phenotype based computational model.
Kittas, Aristotelis
2010-10-07
In this work I introduce a simple model to study how natural selection acts upon aging, which focuses on the viability of each individual. It is able to reproduce the Gompertz law of mortality and can make predictions about the relation between the level of mutation rates (beneficial/deleterious/neutral), age at reproductive maturity and the degree of biological aging. With no mutations, a population with low age at reproductive maturity R stabilizes at higher density values, while with mutations it reaches its maximum density, because even for large pre-reproductive periods each individual evolves to survive to maturity. Species with very short pre-reproductive periods can only tolerate a small number of detrimental mutations. The probabilities of detrimental (P(d)) or beneficial (P(b)) mutations are demonstrated to greatly affect the process. High absolute values produce peaks in the viability of the population over time. Mutations combined with low selection pressure move the system towards weaker phenotypes. For low values of the ratio P(d)/P(b), the speed at which aging occurs is almost independent of R, while higher values significantly favor species with high R. The value of R is critical to whether the population survives or dies out. The aging rate is controlled by P(d) and P(b) and by the amount by which the viability of each individual is modified, with neutral mutations allowing the system more "room" to evolve. The process of aging in this simple model is revealed to be fairly complex, yielding a rich variety of results. 2010 Elsevier Ltd. All rights reserved.
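The Gompertz law of mortality that the model reproduces can be written down directly: the mortality rate rises exponentially with age t, μ(t) = A·exp(b·t), which gives the survival function S(t) = exp(−(A/b)·(exp(b·t) − 1)). The values of A and b below are illustrative, not outputs of the model.

```python
import math

# Gompertz mortality law and the survival curve it implies.
# A (baseline hazard) and b (rate of aging) are illustrative values.

def gompertz_mortality(t, A=1e-4, b=0.085):
    """Mortality rate (hazard) at age t: mu(t) = A * exp(b * t)."""
    return A * math.exp(b * t)


def gompertz_survival(t, A=1e-4, b=0.085):
    """Probability of surviving to age t: S(t) = exp(-(A/b) * (exp(b*t) - 1))."""
    return math.exp(-(A / b) * (math.exp(b * t) - 1.0))
```

On a log scale the mortality rate is linear in age with slope b, which is the usual empirical signature of the law.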
Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model
Anderson, K. R.
2016-12-01
Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics-based
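A sketch of the stochastic component of the forecast described above: during a deflation event the eruption rate is taken to decay quasi-exponentially, while the event duration is drawn from a sample of past events. The rate, time constant and durations below are invented for illustration and are not Kīlauea data.

```python
import math
import random

# Monte Carlo forecast of erupted volume during one deflation event:
# deterministic exponential rate decay + empirically sampled durations.
# All numbers are illustrative assumptions.

def simulate_event_volumes(q0, tau, durations, n=10000, seed=1):
    """Distribution of volume erupted during one event.
    q0: pre-event eruption rate, tau: decay time constant,
    durations: sample of past event durations (same time unit)."""
    rng = random.Random(seed)
    vols = []
    for _ in range(n):
        d = rng.choice(durations)
        # closed-form integral of q0 * exp(-t / tau) from 0 to d
        vols.append(q0 * tau * (1.0 - math.exp(-d / tau)))
    return vols


vols = simulate_event_volumes(q0=4.0, tau=12.0, durations=[6.0, 10.0, 18.0, 30.0])
```

Summarizing `vols` as quantiles yields exactly the kind of probabilistic forecast the abstract describes: a distribution for cumulative erupted volume at a future time rather than a single deterministic number.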
Lum, Kristian; Swarup, Samarth; Eubank, Stephen; Hawdon, James
2014-09-06
We build an agent-based model of incarceration based on the susceptible-infected-susceptible (SIS) model of infectious disease propagation. Our central hypothesis is that the observed racial disparities in incarceration rates between Black and White Americans can be explained as the result of differential sentencing between the two demographic groups. We demonstrate that if incarceration can be spread through a social influence network, then even relatively small differences in sentencing can result in large disparities in incarceration rates. Controlling for effects of transmissibility, susceptibility and influence network structure, our model reproduces the observed large disparities in incarceration rates given the differences in sentence lengths for White and Black drug offenders in the USA without extensive parameter tuning. We further establish the suitability of the SIS model as applied to incarceration by demonstrating that the observed structural patterns of recidivism are an emergent property of the model. In fact, our model shows a remarkably close correspondence with California incarceration data. This work advances efforts to combine the theories and methods of epidemiology and criminology.
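A minimal SIS-style sketch of the mechanism described above: "infection" (incarceration) spreads over a contact network, and longer sentences (a lower per-step release probability) raise steady-state prevalence. The network and all rates below are toy assumptions, not the paper's calibration.

```python
import random

# Discrete-time SIS dynamics on a random contact network. Longer sentences
# are modeled as a lower release probability. All parameters are invented.

def simulate_sis(n, neighbors, beta, release_prob, steps, seed=0):
    rng = random.Random(seed)
    state = [rng.random() < 0.05 for _ in range(n)]  # ~5% initially incarcerated
    for _ in range(steps):
        nxt = list(state)
        for i in range(n):
            if state[i]:
                if rng.random() < release_prob:
                    nxt[i] = False  # sentence ends
            else:
                # each incarcerated contact independently "transmits" w.p. beta
                for j in neighbors[i]:
                    if state[j] and rng.random() < beta:
                        nxt[i] = True
                        break
        state = nxt
    return sum(state) / n


n = 400
g = random.Random(123)
neighbors = [g.sample([j for j in range(n) if j != i], 6) for i in range(n)]
short_sentences = simulate_sis(n, neighbors, beta=0.1, release_prob=0.30, steps=200)
long_sentences = simulate_sis(n, neighbors, beta=0.1, release_prob=0.15, steps=200)
```

Even though the transmission probability is identical in both runs, halving the release probability roughly doubles the effective reproduction number, producing a large prevalence gap, the paper's core point about differential sentencing.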
Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation
International Nuclear Information System (INIS)
Schranz, C; Möller, K; Becher, T; Schädler, D; Weiler, N
2014-01-01
Mechanical ventilation carries the risk of ventilator-induced lung injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI) and inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first-order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate with the physician's ventilation settings, with r = 0.975 for the inspiration pressure and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
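A hedged sketch of the type of relation described above, using a first-order (resistance R, compliance C) model of respiratory mechanics under PCV. The model form is standard, but the symbols, units and parameter values here are illustrative assumptions, not the paper's algorithm.

```python
import math

# First-order lung model under pressure-controlled ventilation.
# Units assumed: pressure in mbar, volume in L, R in mbar*s/L, C in L/mbar,
# breathing rate f in breaths/min, ventilation in L/min. Values illustrative.

def tidal_volume(p_i, t_i, R, C):
    """Tidal volume delivered by driving pressure p_i held for t_i seconds:
    VT = C * p_i * (1 - exp(-t_i / (R * C)))."""
    return C * p_i * (1.0 - math.exp(-t_i / (R * C)))


def required_pressure(va_target, f, t_i, R, C, dead_space):
    """Minimal driving pressure achieving the target alveolar minute
    ventilation va_target = f * (VT - dead_space)."""
    vt_needed = va_target / f + dead_space
    return vt_needed / (C * (1.0 - math.exp(-t_i / (R * C))))


p = required_pressure(va_target=5.0, f=15.0, t_i=1.2, R=10.0, C=0.05, dead_space=0.15)
```

The nonlinearity is visible in the exponential term: lengthening the inspiration time lowers the pressure needed for the same alveolar ventilation, which is the trade-off the algorithm visualizes for the physician.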
Energy Technology Data Exchange (ETDEWEB)
Avanzo, Michele; Stancanello, Joseph; Franchin, Giovanni; Sartor, Giovanna; Jena, Rajesh; Drigo, Annalisa; Dassie, Andrea; Gigante, Marco; Capra, Elvira [Department of Medical Physics, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Research and Clinical Collaborations, Siemens Healthcare, Erlangen 91052 (Germany); Department of Radiation Oncology, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Department of Medical Physics, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Oncology Centre, Cambridge University Hospitals NHS Foundation Trust, Cambridge CB2 0QQ (United Kingdom); Department of Medical Physics, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Department of Radiation Oncology, Centro di Riferimento Oncologico, Aviano 33081 (Italy); Department of Medical Physics, Centro di Riferimento Oncologico, Aviano 33081 (Italy)
2010-04-15
Purpose: To extend the application of current radiation therapy (RT) based tumor control probability (TCP) models of nasopharyngeal carcinoma (NPC) to include the effects of hypoxia and chemoradiotherapy (CRT). Methods: A TCP model is described based on the linear-quadratic model modified to account for repopulation, chemotherapy, heterogeneity of dose to the tumor, and hypoxia. Sensitivity analysis was performed to determine which parameters exert the greatest influence on the uncertainty of modeled TCP. On the basis of the sensitivity analysis, the values of specific radiobiological parameters were set to nominal values reported in the literature for NPC or head and neck tumors. The remaining radiobiological parameters were determined by fitting TCP to clinical local control data from published randomized studies using both RT and CRT. Validation of the model was performed by comparison of estimated TCP and average overall local control rate (LCR) for 45 patients treated at the institution with conventional linear-accelerator-based or helical tomotherapy based intensity-modulated RT and neoadjuvant chemotherapy. Results: Sensitivity analysis demonstrates that the model is most sensitive to the radiosensitivity term α and the dose per fraction. The estimated values of α and OER from data fitting were 0.396 Gy⁻¹ and 1.417. The model estimate of TCP (average 90.9%, range 26.9%-99.2%) showed good correlation with the LCR (86.7%). Conclusions: The model implemented in this work provides clinicians with a useful tool to predict the success rate of treatment, optimize treatment plans, and compare the effects of multimodality therapy.
van den Ende, M. P. A.; Chen, J.; Ampuero, J.-P.; Niemeijer, A. R.
2018-05-01
Rate-and-state friction (RSF) is commonly used for the characterisation of laboratory friction experiments, such as velocity-step tests. However, the RSF framework provides little physical basis for the extrapolation of these results to the scales and conditions of natural fault systems, and so open questions remain regarding the applicability of the experimentally obtained RSF parameters for predicting seismic cycle transients. As an alternative to classical RSF, microphysics-based models offer means for interpreting laboratory and field observations, but are generally over-simplified with respect to heterogeneous natural systems. In order to bridge the temporal and spatial gap between the laboratory and nature, we have implemented existing microphysical model formulations into an earthquake cycle simulator. Through this numerical framework, we make a direct comparison between simulations exhibiting RSF-controlled fault rheology, and simulations in which the fault rheology is dictated by the microphysical model. Even though the input parameters for the RSF simulation are directly derived from the microphysical model, the microphysics-based simulations produce significantly smaller seismic event sizes than the RSF-based simulation, and suggest a more stable fault slip behaviour. Our results reveal fundamental limitations in using classical rate-and-state friction for the extrapolation of laboratory results. The microphysics-based approach offers a more complete framework in this respect, and may be used for a more detailed study of the seismic cycle in relation to material properties and fault zone pressure-temperature conditions.
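A hedged sketch of the classical rate-and-state friction (RSF) relation against which the microphysical simulations are compared. At steady state, μ_ss(V) = μ0 + (a − b)·ln(V/V0); (a − b) < 0 (velocity weakening) permits stick-slip, while (a − b) > 0 promotes stable sliding. The parameter values below are illustrative laboratory-scale numbers, not the paper's derived parameters.

```python
import math

# Steady-state rate-and-state friction. mu0, a, b and the reference
# velocity v0 are illustrative laboratory-scale values.

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """mu_ss(V) = mu0 + (a - b) * ln(V / V0); v and v0 in m/s."""
    return mu0 + (a - b) * math.log(v / v0)
```

The sign of (a − b) is exactly the quantity that velocity-step experiments estimate, and whose extrapolation to natural conditions the abstract argues is problematic.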
Luque-Fernandez, Miguel Angel; Belot, Aurélien; Quaresma, Manuela; Maringe, Camille; Coleman, Michel P; Rachet, Bernard
2016-10-01
In population-based cancer research, piecewise exponential regression models are used to derive adjusted estimates of excess mortality due to cancer using the Poisson generalized linear modelling framework. However, the assumption that the conditional mean and variance of the rate parameter given the set of covariates x i are equal is strong and may fail to account for overdispersion given the variability of the rate parameter (the variance exceeds the mean). Using an empirical example, we aimed to describe simple methods to test and correct for overdispersion. We used a regression-based score test for overdispersion under the relative survival framework and proposed different approaches to correct for overdispersion including a quasi-likelihood, robust standard errors estimation, negative binomial regression and flexible piecewise modelling. All piecewise exponential regression models showed the presence of significant inherent overdispersion (p-value regression modelling, with either a quasi-likelihood or robust standard errors, was the best approach as it deals with both, overdispersion due to model misspecification and true or inherent overdispersion.
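A hedged sketch of a regression-based score test for overdispersion, in the spirit of Cameron and Trivedi's auxiliary regression (the paper's exact test statistic may differ): after a Poisson fit with fitted means mu_i, regress z_i = ((y_i − mu_i)² − y_i)/mu_i on mu_i without an intercept; a significantly positive slope signals overdispersion. The synthetic data are invented for illustration.

```python
import math
import random

# Auxiliary-regression score test for overdispersion after a Poisson fit.

def overdispersion_score_test(y, mu):
    """Return (slope, t-statistic) of the no-intercept regression of
    z = ((y - mu)**2 - y) / mu on mu; positive slope => overdispersion."""
    z = [((yi - mi) ** 2 - yi) / mi for yi, mi in zip(y, mu)]
    sxx = sum(mi * mi for mi in mu)
    slope = sum(mi * zi for mi, zi in zip(mu, z)) / sxx
    resid = [zi - slope * mi for zi, mi in zip(z, mu)]
    s2 = sum(r * r for r in resid) / (len(y) - 1)
    return slope, slope / math.sqrt(s2 / sxx)


# Demo on synthetic data with known means: equidispersed Poisson counts vs.
# a Poisson-gamma mixture with variance mu + mu**2 (overdispersed).
rng = random.Random(7)


def rpois(lam):  # Knuth's Poisson sampler
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1


mu = [2.0 + (i % 10) for i in range(2000)]
y_pois = [rpois(m) for m in mu]
y_over = [rpois(m * rng.expovariate(1.0)) for m in mu]
slope_pois, t_pois = overdispersion_score_test(y_pois, mu)
slope_over, t_over = overdispersion_score_test(y_over, mu)
```

When the test rejects equidispersion, the quasi-likelihood or robust-variance corrections the abstract recommends inflate the standard errors accordingly.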
Inflation Rate Modelling in Indonesia
Directory of Open Access Journals (Sweden)
Rezzy Eko Caraka
2016-10-01
The purposes of this research were to analyse: (i) modelling the inflation rate in Indonesia with parametric regression; (ii) modelling the inflation rate in Indonesia using non-parametric multivariable spline regression; (iii) determining the best model of the inflation rate in Indonesia; and (iv) explaining the relationship between the parametric and the non-parametric multivariable spline regression models of inflation. Based on the analysis using the two methods mentioned, the coefficient of determination (R2) is 65.1% for the parametric regression, while for the non-parametric regression it amounts to 99.39%. The money supply (money stock), crude oil prices and the rupiah exchange rate against the dollar are significant determinants of the rate of inflation. The stability of inflation is essential to support sustainable economic development and improve people's welfare. In conclusion, unstable inflation complicates the planning of business activities, both in production and investment activities as well as in the pricing of goods and services produced. DOI: 10.15408/etk.v15i2.3260
Energy Technology Data Exchange (ETDEWEB)
Pirsing, A. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Wiesmann, U. [Technische Univ. Berlin (Germany). Inst. fuer Verfahrenstechnik; Kelterbach, G. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Schaffranietz, U. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Roeck, H. [Technische Univ. Berlin (Germany). Inst. fuer Mess- und Regelungstechnik; Eichner, B. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie; Szukal, S. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie; Schulze, G. [Technische Univ. Berlin (Germany). Inst. fuer Anorganische und Analytische Chemie
1996-09-01
This paper presents a new concept for the control of nitrification in highly polluted waste waters. The approach is based on mathematical modelling. To determine the substrate degradation rates of the microorganisms involved, a mathematical model using gas measurement is used. A fuzzy-controller maximises the capacity utilisation efficiencies. The experiments carried out in a lab-scale reactor demonstrate that even with highly varying ammonia concentrations in the influent, the nitrogen concentrations in the effluent can be kept within legal limits. (orig.). With 11 figs.
Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen
2018-02-07
The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small
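As a toy illustration of the distinction drawn above, the snippet below enumerates the vertices (elementary flux vectors) of an invented two-dimensional flux polytope: both the rate optimum (a linear objective) and the yield optimum (a ratio of two linear objectives, solvable as a linear-fractional program, e.g. via the Charnes-Cooper transformation) are attained at vertices, but not necessarily at the same one. The numbers are made up.

```python
from fractions import Fraction

# Vertices of a tiny invented flux polytope, each given as a
# (substrate uptake rate, product synthesis rate) pair.
vertices = [
    (Fraction(10), Fraction(3)),
    (Fraction(4), Fraction(2)),
    (Fraction(2), Fraction(3, 2)),
]

# Rate optimization: maximize the product rate (linear objective).
rate_optimal = max(vertices, key=lambda v: v[1])

# Yield optimization: maximize product rate / substrate uptake (a ratio).
yield_optimal = max(vertices, key=lambda v: v[1] / v[0])
```

Here the fastest-producing vertex has yield 3/10, while a slower vertex achieves yield 3/4, mirroring the abstract's conclusion that optimal yields are not necessarily obtained at solutions with optimal rates.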
Modeling ventilation rates in bedrooms based on building characteristics and occupant behavior
DEFF Research Database (Denmark)
Bekö, Gabriel; Toftum, Jørn; Clausen, Geo
2011-01-01
Air change rate (ACR) data obtained from the bedrooms of 500 Danish children and presented in an earlier paper were analyzed in more detail. Questionnaires distributed to the families, home inspections and interviews with the parents provided information about a broad range of residential charact...
Dispersal distances for airborne spores based on deposition rates and stochastic modeling
DEFF Research Database (Denmark)
Stockmarr, Anders; Andreasen, Viggo; Østergård, Hanne
2007-01-01
in terms of time to deposition, and show how this concept is equivalent to the deposition rate for fungal spores. Special cases where parameter values for wind and gravitation lead to exponentially or polynomially decreasing densities are discussed, and formulas for one- and two-dimensional densities...
International Nuclear Information System (INIS)
Yang Mingyi; Liu Puling; Li Liqing
2004-01-01
The soil samples were collected from 6 cultivated runoff plots with a grid sampling method, and the soil erosion rates derived from 137Cs measurements were calculated. The precision of the models of Zhang Xinbao, Zhou Weizhi, Yang Hao and Walling was compared with predictions based on the empirical relationship. The data showed that the precision of the 4 models is high within a 50 m slope length, except for slopes with a low slope angle and short length. Relatively, the precision of Walling's model is better than that of Zhang Xinbao, Zhou Weizhi and Yang Hao. In addition, the relationship between the parameter Γ in Walling's improved model and the slope angle was analyzed; the relation is Y = 0.0109X^1.0072. (authors)
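A sketch of how a power-law relation such as the one reported above (Y = 0.0109·X^1.0072) can be recovered by ordinary least squares on log-transformed data. The data points below are synthetic, generated from that relation, not the paper's measurements.

```python
import math

# Fit Y = a * X**b by linear regression of log(Y) on log(X).

def fit_power_law(xs, ys):
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b  # Y = a * X**b


# Synthetic data on the reported curve (slope angles as X are assumed).
xs = [5.0, 10.0, 15.0, 20.0, 25.0]
ys = [0.0109 * x ** 1.0072 for x in xs]
a, b = fit_power_law(xs, ys)
```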
Czech Academy of Sciences Publication Activity Database
Pozníková, Gabriela; Fischer, Milan; Pohanková, Eva; Trnka, Miroslav
2014-01-01
Roč. 62, č. 5 (2014), s. 1079-1086 ISSN 1211-8516 R&D Projects: GA MŠk LH12037; GA MŠk(CZ) EE2.3.20.0248 Institutional support: RVO:67179843 Keywords : evapotranspiration * dual crop coefficient model * Bowen ratio/energy balance method * transpiration * soil evaporation * spring barley Subject RIV: EH - Ecology, Behaviour OBOR OECD: Environmental sciences (social aspects to be 5.7)
Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe
2017-09-01
We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded in the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of their distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely in the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded in information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrease of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
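A hedged sketch of the pole-based index described above. For an AR model, a pole pair generates a spectral component at frequency f = |angle(pole)|·fs/(2π); the closer the pole modulus is to one, the narrower (more regular) that component, so the distance 1 − |pole| of the poles falling inside a band (e.g. LF: 0.04-0.15 Hz) can serve as an irregularity index for that band. The AR(2) example and the details of pooling poles are illustrative assumptions, not the authors' exact estimator.

```python
import cmath
import math

# Pole-based band irregularity for a low-order AR model.

def ar2_poles(a1, a2):
    """Poles of an AR(2) process y[t] = a1*y[t-1] + a2*y[t-2] + e[t]:
    the roots of z**2 - a1*z - a2."""
    disc = cmath.sqrt(a1 * a1 + 4.0 * a2)
    return ((a1 + disc) / 2.0, (a1 - disc) / 2.0)


def band_irregularity(poles, fs, f_lo, f_hi):
    """Mean distance from the unit circle of poles whose frequency lies in
    [f_lo, f_hi]; NaN if the band contains no pole."""
    dists = [1.0 - abs(p) for p in poles
             if f_lo <= abs(cmath.phase(p)) * fs / (2.0 * math.pi) <= f_hi]
    return sum(dists) / len(dists) if dists else float("nan")
```

Constructing an AR(2) with a complex pole pair of modulus r at frequency f0 (via a1 = 2r·cos(2πf0/fs), a2 = −r²) recovers exactly 1 − r as the band index, which is the sanity check used below.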
Bieliński, Henryk
2016-09-01
The current paper presents the experimental validation of the generalized model of the two-phase thermosyphon loop. The generalized model is based on mass, momentum, and energy balances in the evaporators, rising tube, condensers and the falling tube. The theoretical analysis and the experimental data have been obtained for a newly designed variant. The variant refers to a thermosyphon loop with both minichannels and conventional tubes. The thermosyphon loop consists of an evaporator on the lower vertical section and a condenser on the upper vertical section. The one-dimensional homogeneous and separated two-phase flow models were used in the calculations. The latest minichannel heat transfer correlations available in the literature were applied. A numerical analysis of the volumetric flow rate in the steady state has been carried out. The experiment was conducted on a specially designed test apparatus. Ultrapure water was used as the working fluid. The results show that the theoretical predictions are in good agreement with the volumetric flow rate measured at steady state.
International Nuclear Information System (INIS)
Brogan, J.D.; Cashwell, J.W.
1992-01-01
This paper presents an overview of techniques for merging highway accident record and roadway inventory files and employing the combined data set to identify spots or sections on highway facilities in urban and suburban areas with unusually high large truck accident rates. A statistical technique, the rate/quality control method, is used to calculate a critical rate for each location of interest. This critical rate may then be compared to the location's actual accident rate to identify locations for further study. Model enhancements and modifications are described to enable the technique to be employed in the evaluation of routing alternatives for the transport of radioactive material.
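The rate/quality control comparison can be sketched as follows. The formula shown is one common textbook formulation (systemwide average rate plus a Poisson allowance and a continuity correction); the paper's exact variant may differ, and all numbers below are hypothetical.

```python
import math

def critical_rate(avg_rate, exposure, k=1.645):
    """Rate/quality control critical rate (one common formulation):
    systemwide average rate + k-sigma Poisson allowance + continuity
    correction. Rates are per unit exposure, e.g. accidents per
    million vehicle-miles (MVM)."""
    return avg_rate + k * math.sqrt(avg_rate / exposure) + 1.0 / (2.0 * exposure)

def flag_location(site_rate, avg_rate, exposure, k=1.645):
    """Flag a site for further study if its observed accident rate
    exceeds the critical rate for its exposure."""
    return site_rate > critical_rate(avg_rate, exposure, k)

# Hypothetical site: 4.0 accidents/MVM observed vs. a 2.0 systemwide
# average over 3.0 MVM of exposure.
print(flag_location(4.0, 2.0, exposure=3.0))  # → True
```

Sites whose rates exceed the critical rate are unlikely to be high by chance alone, which is what makes the comparison useful for screening truck routes.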
Vali, Leila; Mastaneh, Zahra; Mouseli, Ali; Kardanmoghadam, Vida; Kamali, Sodabeh
2017-07-01
One of the ways to improve the quality of services in the health system is through clinical governance. This method aims to create a framework in which clinical service providers are accountable for continuously improving quality and maintaining standards of services. To evaluate the success rate of clinical governance implementation in Kerman teaching hospitals based on the 9 steps of Karsh's Model. This cross-sectional study was conducted in 2015 on 94 people including chief executive officers (CEOs), nursing managers, clinical governance managers and experts, head nurses and nurses. The required data were collected through a researcher-made questionnaire containing 38 questions with a three-point Likert scale (good, moderate, and weak). Karsh's Model consists of nine steps: top management commitment to change, accountability for change, creating a structured approach for change, training, pilot implementation, communication, feedback, simulation, and end-user participation. Data analysis using descriptive statistics and the Mann-Whitney-Wilcoxon test was done with SPSS software version 16. About 81.9% of respondents were female and 74.5% had a Bachelor of Nursing (BN) degree. In general, the status of clinical governance implementation in the studied hospitals based on the 9 steps of the model was 44% (moderate). A significant relationship was observed between accountability and organizational position (p=0.0012) and field of study (p=0.000). Also, there were significant relationships between the structured approach and organizational position (p=0.007), communication and demographic characteristics (p=0.000), and end-user participation and organizational position (p=0.03). Clinical governance should be implemented with correct needs assessment and the participation of all stakeholders, to ensure its enforcement in practice and to enhance the quality of services.
Badve, Mandar P; Alpar, Tibor; Pandit, Aniruddha B; Gogate, Parag R; Csoka, Levente
2015-01-01
A mathematical model describing the shear rate and pressure variation in a complex flow field created in a hydrodynamic cavitation reactor (stator and rotor assembly) is presented in this study. The design of the reactor is such that the rotor is provided with surface indentations, and cavitational events are expected to occur on the surface of the rotor as well as within the indentations. The flow characteristics of the fluid have been investigated on the basis of high-accuracy compact difference schemes and the Navier-Stokes method. The evolution of streamlining structures during rotation, and the pressure field and shear rate of a Newtonian fluid flow, have been numerically established. The simulation results suggest that the characteristics of the shear rate and pressure area are quite different depending on the magnitude of the rotation velocity of the rotor. It was observed that the area of the high-shear zone at the indentation leading edge shrinks with an increase in the rotational speed of the rotor, although the magnitude of the shear rate increases linearly. It is therefore concluded that higher rotational speeds of the rotor tend to stabilize the flow, which in turn results in less cavitational activity than that observed around 2200-2500 RPM. Experiments were carried out with an initial KI concentration of 2000 ppm. A maximum of 50 ppm of iodine liberation was observed at 2200 RPM. Experimental as well as simulation results indicate that the maximum cavitational activity can be seen when the rotation speed is around 2200-2500 RPM. Copyright © 2014 Elsevier B.V. All rights reserved.
Qin, Shunda; Ge, Hongxia; Cheng, Rongjun
2018-02-01
In this paper, a new lattice hydrodynamic model is proposed by taking the delay feedback and flux change rate effect into account in a single lane. The linear stability condition of the new model is derived by control theory. By using the nonlinear analysis method, the mKdV equation near the critical point is deduced to describe the traffic congestion. Numerical simulations are carried out to demonstrate the advantage of the new model in suppressing traffic jams with the consideration of the flux change rate effect in the delay feedback model.
Directory of Open Access Journals (Sweden)
Xiaobo Luo
2017-04-01
Full Text Available Carbon capture and storage (CCS) technology will play a critical role in reducing anthropogenic carbon dioxide (CO2) emissions from fossil-fired power plants and other energy-intensive processes. However, the increase in energy cost caused by equipping a carbon capture process is the main barrier to its commercial deployment. To reduce the capital and operating costs of carbon capture, great efforts have been made to achieve optimal design and operation through process modeling, simulation, and optimization. Accurate models form an essential foundation for this purpose. This paper presents a study on developing a more accurate rate-based model in Aspen Plus® for the monoethanolamine (MEA)-based carbon capture process by multistage model validation. The modeling framework for this process was established first. The steady-state process model was then developed and validated at three stages, which included a thermodynamic model, physical property calculations, and a process model at the pilot plant scale, covering a wide range of pressures, temperatures, and CO2 loadings. The calculation correlations of liquid density and interfacial area were updated by coding Fortran subroutines in Aspen Plus®. The validation results show that the correlation combination for the thermodynamic model used in this study has higher accuracy than those of three other key publications, and the predictions of the process model are in good agreement with the pilot plant experimental data. A case study was carried out for carbon capture from a 250 MWe combined cycle gas turbine (CCGT) power plant. Shorter packing height and lower specific duty were achieved using this accurate model.
Relaxed Poisson cure rate models.
Rodrigues, Josemar; Cordeiro, Gauss M; Cancho, Vicente G; Balakrishnan, N
2016-03-01
The purpose of this article is to make the standard promotion cure rate model (Yakovlev and Tsodikov, ) more flexible by assuming that the number of lesions or altered cells after a treatment follows a fractional Poisson distribution (Laskin, ). It is proved that the well-known Mittag-Leffler relaxation function (Berberan-Santos, ) is a simple way to obtain a new cure rate model that is a compromise between the promotion and geometric cure rate models allowing for superdispersion. So, the relaxed cure rate model developed here can be considered as a natural and less restrictive extension of the popular Poisson cure rate model at the cost of an additional parameter, but a competitor to negative-binomial cure rate models (Rodrigues et al., ). Some mathematical properties of a proper relaxed Poisson density are explored. A simulation study and an illustration of the proposed cure rate model from the Bayesian point of view are finally presented. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
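A minimal sketch of the idea, under the assumption that the relaxed model replaces the exponential of the promotion cure rate model with a one-parameter Mittag-Leffler function, giving population survival S_p(t) = E_α(−η F(t)), with α = 1 recovering the classical Poisson (promotion) model. The series implementation and all parameter values are illustrative only.

```python
import math

def mittag_leffler(alpha, z, n_terms=100):
    """One-parameter Mittag-Leffler function E_alpha(z) via its power
    series sum z^k / Gamma(alpha*k + 1); adequate for moderate |z|
    (illustration only, not a production evaluator)."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(n_terms))

def relaxed_survival(t, eta, alpha, cdf):
    """Sketch of a relaxed (fractional-Poisson) cure rate model:
    S_p(t) = E_alpha(-eta * F(t)). With alpha = 1 this reduces to the
    promotion model exp(-eta * F(t))."""
    return mittag_leffler(alpha, -eta * cdf(t))

exp_cdf = lambda t: 1.0 - math.exp(-t)   # F(t): illustrative promotion-time CDF
s_poisson = relaxed_survival(2.0, 1.5, 1.0, exp_cdf)
print(abs(s_poisson - math.exp(-1.5 * exp_cdf(2.0))) < 1e-9)   # alpha = 1 check
# Long-run cured fraction is E_alpha(-eta); with superdispersion (alpha < 1):
print(0.0 < mittag_leffler(0.7, -1.5) < 1.0)
```

The extra parameter α interpolates between the promotion and heavier-tailed cure models, which is the "compromise" the abstract describes.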
Sugiyanto; Zukhronah, Etik; Nur Aini, Anis
2017-12-01
Indonesia has faced financial crises several times, but the crisis that occurred in 1997 had a tremendous impact on the economy and national stability. The crisis caused the exchange rate of the rupiah against the dollar to fall, so a financial crisis detection system is needed. Data on bank deposits, the real exchange rate and terms of trade indicators are used in this paper. Data taken from January 1990 until December 2016 are used to form models with three states. A combination of volatility and Markov switching models is used to model the data. The result suggests that the appropriate model for bank deposits and terms of trade is SWARCH(3,1), and for the real exchange rate is SWARCH(3,2).
Hou, Huirang; Zheng, Dandan; Nie, Laixiao
2015-04-01
For gas ultrasonic flowmeters, the signals received by ultrasonic sensors are susceptible to noise interference. If signals are mingled with noise, a large error in flow measurement can be caused by mistaken triggering when using the traditional double-threshold method. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received-signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, the improvement of processing only the first three cycles of the received signals rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, double-threshold and enveloped zero-crossing methods. Local convergence doesn't appear with the GACO algorithm until -10 dB. For the GACO algorithm, the convergence accuracy, convergence speed and amount of computation are further improved when using the first three cycles (called GACO-3cycles). Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.
International Nuclear Information System (INIS)
Li, Kangkang; Yu, Hai; Qi, Guojie; Feron, Paul; Tade, Moses; Yu, Jingwen; Wang, Shujuan
2015-01-01
Highlights: • A rigorous, rate-based model for an NH3–CO2–SO2–H2O system was developed. • Model predictions are in good agreement with pilot plant results. • >99.9% of SO2 was captured and >99.9% of slipped ammonia was reused. • The process is highly adaptable to the variations of SO2/NH3 level, temperatures. - Abstract: To reduce the costs of controlling emissions from coal-fired power stations, we propose an advanced and effective process of combined SO2 removal and NH3 recycling, which can be integrated with the aqueous NH3-based CO2 capture process to simultaneously achieve SO2 and CO2 removal, NH3 recycling and flue gas cooling in one process. A rigorous, rate-based model for an NH3–CO2–SO2–H2O system was developed and used to simulate the proposed process. The model was thermodynamically and kinetically validated by experimental results from the open literature and pilot-plant trials, respectively. Under typical flue gas conditions, the proposed process has SO2 removal and NH3 reuse efficiencies of >99.9%. The process is strongly adaptable to different scenarios such as high SO2 levels in flue gas, high NH3 levels from the CO2 absorber and high flue gas temperatures, and has a low energy requirement. Because the process simplifies flue gas desulphurisation and resolves the problems of NH3 loss and SO2 removal, it could significantly reduce the cost of CO2 and SO2 capture by aqueous NH3.
Killeen, G F; McKenzie, F E; Foy, B D; Schieffelin, C; Billingsley, P F; Beier, J C
2000-05-01
Malaria transmission intensity is modeled from the starting perspective of individual vector mosquitoes and is expressed directly as the entomologic inoculation rate (EIR). The potential of individual mosquitoes to transmit malaria during their lifetime is presented graphically as a function of their feeding cycle length and survival, human biting preferences, and the parasite sporogonic incubation period. The EIR is then calculated as the product of 1) the potential of individual vectors to transmit malaria during their lifetime, 2) vector emergence rate relative to human population size, and 3) the infectiousness of the human population to vectors. Thus, impacts on more than one of these parameters will amplify each other's effects. The EIRs transmitted by the dominant vector species at four malaria-endemic sites from Papua New Guinea, Tanzania, and Nigeria were predicted using field measurements of these characteristics together with human biting rate and human reservoir infectiousness. This model predicted EIRs (± SD) that are 1.13 ± 0.37 (range = 0.84-1.59) times those measured in the field. For these four sites, mosquito emergence rate and lifetime transmission potential were more important determinants of the EIR than human reservoir infectiousness. This model and the input parameters from the four sites allow the potential impacts of various control measures on malaria transmission intensity to be tested under a range of endemic conditions. The model has potential applications for the development and implementation of transmission control measures and for public health education.
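The three-factor product described above can be sketched directly. All parameter values below are hypothetical illustrations, not the field estimates from the four sites.

```python
def annual_eir(lifetime_potential, emergence_rate, human_infectiousness):
    """Sketch of the abstract's decomposition: EIR is the product of
    (1) the lifetime transmission potential of an individual vector
    (infectious bites on humans per emerging mosquito, per unit human
    infectiousness), (2) the vector emergence rate relative to human
    population size, and (3) the infectiousness of the human reservoir
    to vectors. All figures used here are hypothetical."""
    return lifetime_potential * emergence_rate * human_infectiousness

# Hypothetical site: lifetime potential 0.1, 5000 emerging vectors per
# person per year, and 4% of vector blood meals infecting the vector.
print(annual_eir(0.1, 5000, 0.04))  # → 20.0 infectious bites/person/year
```

Because the EIR is a product, halving any one factor (e.g. emergence rate via larval control) halves the predicted transmission intensity, which is why the abstract notes that impacts on several parameters amplify each other.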
Ghaffarzadegan, Navid; Stewart, Thomas R.
2011-01-01
Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the 0.5 base rate condition. This commentary argues that…
Hu, Jing-Xiao; Ran, Jia-Bing; Chen, Si; Shen, Xin-Yu; Tong, Hua
2015-12-01
In order to prepare sophisticated biomaterials using a biomimetic approach, a deeper understanding of biomineralization is needed. Of particular importance is the control and regulation of the mineralization process. In this study, a novel bilayer rate-controlling model was designed to investigate the factors potentially influencing mineralization. In the absence of a rate-controlling layer, nano-scale hydroxyapatite (HA) crystallites exhibited a spherical morphology, whereas, in the presence of a rate-controlling layer, HA crystallites were homogeneously dispersed and spindle-like in structure. The mineralization rate had a significant effect on controlling the morphology of crystals. Furthermore, in vitro tests demonstrated that the reaction layer containing spindle-like HA crystallites possessed superior biological properties. These results suggest that a slow mineralization rate is required for controlling the morphology of inorganic crystallites, and consumption by the rate-controlling layer ensured that the ammonia concentration remained low. This study demonstrates that a biomimetic approach can be used to prepare novel biomaterials containing HA crystallites that have different morphologies and biological properties. Copyright © 2015 Elsevier B.V. All rights reserved.
DEFF Research Database (Denmark)
Yebra, Diego Meseguer; Kiil, Søren; Erik Weinell, Claus
2006-01-01
The antifouling (AF) paint model of Kiil et al. [S. Kiil, C.E. Weinell, M.S. Pedersen, K. Dam-Johansen, Analysis of self-polishing antifouling paints using rotary experiments and mathematical modelling, Ind. Eng. Chem. Res. 40 (2001) 3906-3920] and the simplified biofilm growth model of Gujer...... and Wanner [W. Gujer, O. Wanner, Modeling mixed population biofilms, in: W.G. Characklis, K.C. Marshall (Eds.), Biofilms, Wiley-Interscience, New York, 1990] are used to provide a reaction engineering-based insight into the effects of marine microbial slimes on biocide leaching and, to a minor extent...
Intuitive Understanding of Base Rates
DEFF Research Database (Denmark)
Austin, Laurel
Purpose: This study examines whether physicians and other adults intuitively understand that the probability a positive test result is a true positive (positive predictive value, PPV) depends on the base rate of disease in the population tested. In particular, this research seeks to examine perce...
Wang, Yan; Tian, Qing-Jiu; Huang, Yan; Wei, Hong-Wei
2013-04-01
The present paper takes Chuzhou in Anhui Province as the research area, and deciduous broad-leaved forest as the research object. A recognition model for deciduous broad-leaved forest was constructed using the NDVI difference rate between the leaf-expansion and the flowering and fruit-bearing stages, and the model was applied to HJ-CCD remote sensing images from April 1, 2012 and May 4, 2012. Finally, the spatial distribution map of deciduous broad-leaved forest was extracted effectively, and the extraction results were verified and evaluated. The results show the validity of the NDVI difference rate extraction method proposed in this paper and also verify the applicability of HJ-CCD data for vegetation classification and recognition.
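A hedged sketch of an NDVI-difference-rate classifier: the exact formulation and threshold used by the authors are not given in the abstract, so both the relative-change formula and the 0.5 cutoff below are illustrative assumptions, as are the toy scenes.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def ndvi_difference_rate(ndvi_t1, ndvi_t2):
    """Relative NDVI change between two phenological stages
    (here: leaf expansion -> flowering/fruit-bearing)."""
    return (ndvi_t2 - ndvi_t1) / ndvi_t1

# Hypothetical 2x2 NDVI scenes for the two acquisition dates; deciduous
# broad-leaved forest shows a strong NDVI increase after leaf-out, while
# evergreen or non-forest cover stays nearly constant.
ndvi_april = np.array([[0.30, 0.55], [0.28, 0.60]])
ndvi_may   = np.array([[0.62, 0.58], [0.60, 0.61]])
rate = ndvi_difference_rate(ndvi_april, ndvi_may)
forest_mask = rate > 0.5   # illustrative threshold, not the paper's
print(forest_mask)
```

Pixels whose NDVI rises sharply between the two dates are flagged as deciduous broad-leaved forest; stable pixels are not.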
Tatiana Danescu; Ovidiu Spatacean; Paula Nistor; Andrea Cristina Danescu
2010-01-01
Designing and performing analytical procedures aimed at assessing the rating of the Financial Investment Companies are essential activities, both in the phase of planning a financial audit mission and in the phase of issuing conclusions regarding the suitability of the going concern assumption used by the management and other persons responsible for governance as the basis for the preparation and disclosure of financial statements. The paper aims to examine the usefulness of recognized models used in the practice o...
International Nuclear Information System (INIS)
Thompson, C.D.; Krasodomski, H.T.; Lewis, N.; Makar, G.L.
1995-01-01
The Ford/Andresen slip dissolution SCC model, originally developed for stainless steel components in BWR environments, has been applied to Alloy 600 and Alloy X-750 tested in deaerated pure water chemistry. A method is described whereby the crack growth rates measured in compact tension specimens can be used to estimate crack growth in a component. Good agreement was found between model predictions and measured SCC in X-750 threaded fasteners over a wide range of temperatures, stresses, and material conditions. Most data support the basic assumption of this model that cracks initiate early in life. The evidence supporting a particular SCC mechanism is mixed. Electrochemical repassivation data and estimates of oxide fracture strain indicate that the slip dissolution model can account for the observed crack growth rates, provided primary rather than secondary creep rates are used. However, approximately 100 cross-sectional TEM foils of SCC cracks including crack tips reveal no evidence of enhanced plasticity or unique dislocation patterns at the crack tip or along the crack to support a classic slip dissolution mechanism. No voids, hydrides, or microcracks are found in the vicinity of the crack tips, creating doubt about classic hydrogen-related mechanisms. The bulk oxide films exhibit a surface oxide which is often different from the oxides found within a crack. Although bulk chromium concentration affects the rate of SCC, analytical data indicate the mechanism does not result from chromium depletion at the grain boundaries. The overall findings support a corrosion/dissolution mechanism, but not one necessarily related to slip at the crack tip.
International Nuclear Information System (INIS)
Webb, J F; Yong, K S C; Haldar, M K
2015-01-01
Using results that come out of a simplified rate equation model, the suppression of residual amplitude modulation in injection locked quantum cascade lasers with the master laser modulated by its drive current is investigated. Quasi-static and dynamic expressions for intensity modulation are used. The suppression peaks at a specific value of the injection ratio for a given detuning and linewidth enhancement factor. The intensity modulation suppression remains constant over a range of frequencies. The effects of injection ratio, detuning, coupling efficiency and linewidth enhancement factor are considered. (paper)
International Nuclear Information System (INIS)
Zheng, X.J.; Metzger, D.R.; Sauve, R.G.
1995-01-01
A fracture criterion based on energy balance is proposed for elasto-plastic cracking at hydrides in zirconium, assuming a finite length of crack advance. The proposed elasto-plastic energy release rate is applied to the crack initiation at hydrides in smooth and notched surfaces, as well as the subsequent delayed hydride cracking (DHC) considering limited crack-tip plasticity. For a smooth or notched surface of an elastic body, the fracture parameter is related to the stress intensity factor for the initiated crack. For DHC, a unique curve relates the non-dimensionalized elasto-plastic energy release rate with the length of crack extension relative to the plastic zone size. This fracture criterion explains experimental observations concerning DHC in a qualitative manner. Quantitative comparison with experiments is made for fracture toughness and DHC tests on specimens containing certain hydride structures; very good agreement is obtained. ((orig.))
Multistate cohort models with proportional transfer rates
DEFF Research Database (Denmark)
Schoen, Robert; Canudas-Romo, Vladimir
2006-01-01
We present a new, broadly applicable approach to summarizing the behavior of a cohort as it moves through a variety of statuses (or states). The approach is based on the assumption that all rates of transfer maintain a constant ratio to one another over age. We present closed-form expressions… of transfer rates. The two living state case and hierarchical multistate models with any number of living states are analyzed in detail. Applying our approach to 1997 U.S. fertility data, we find that observed rates of parity progression are roughly proportional over age. Our proportional transfer rate… approach provides trajectories by parity state and facilitates analyses of the implications of changes in parity rate levels and patterns. More women complete childbearing at parity 2 than at any other parity, and parity 2 would be the modal parity in models with total fertility rates (TFRs) of 1.40 to 2…
Modeling Real Exchange Rate Persistence in Chile
Directory of Open Access Journals (Sweden)
Leonardo Salazar
2017-07-01
Full Text Available The long and persistent swings in the real exchange rate have for a long time puzzled economists. Recent models built on imperfect knowledge economics seem to provide a theoretical explanation for this persistence. Empirical results, based on a cointegrated vector autoregressive (CVAR model, provide evidence of error-increasing behavior in prices and interest rates, which is consistent with the persistence observed in the data. The movements in the real exchange rate are compensated by movements in the interest rate spread, which restores the equilibrium in the product market when the real exchange rate moves away from its long-run benchmark value. Fluctuations in the copper price also explain the deviations of the real exchange rate from its long-run equilibrium value.
Penloglou, Giannis; Vasileiadou, Athina; Chatzidoukas, Christos; Kiparissides, Costas
2017-08-01
An integrated metabolic-polymerization-macroscopic model, describing the microbial production of polyhydroxybutyrate (PHB) in Azohydromonas lata bacteria, was developed and validated using a comprehensive series of experimental measurements. The model accounted for biomass growth, biopolymer accumulation, carbon and nitrogen sources utilization, oxygen mass transfer and uptake rates and average molecular weights of the accumulated PHB, produced under batch and fed-batch cultivation conditions. Model predictions were in excellent agreement with experimental measurements. The validated model was subsequently utilized to calculate optimal operating conditions and feeding policies for maximizing PHB productivity for desired PHB molecular properties. More specifically, two optimal fed-batch strategies were calculated and experimentally tested: (1) a nitrogen-limited fed-batch policy and (2) a nitrogen sufficient one. The calculated optimal operating policies resulted in a maximum PHB content (94% g/g) in the cultivated bacteria and a biopolymer productivity of 4.2 g/(l h), respectively. Moreover, it was demonstrated that different PHB grades with weight average molecular weights of up to 1513 kg/mol could be produced via the optimal selection of bioprocess operating conditions.
International Nuclear Information System (INIS)
Turkdogan-Aydinol, F. Ilter; Yetilmezsoy, Kaan
2010-01-01
A MIMO (multiple inputs and multiple outputs) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables, namely volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (R_V), influent alkalinity, influent pH and effluent pH, were fuzzified by the use of an artificial intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and defuzzification methods, respectively. Fuzzy-logic predicted results were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed a remarkable performance on the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (±3)% and an average volumetric TCOD removal rate of 6.87 (±3.93) kg TCOD removed/m³·day, respectively. Findings of this study clearly indicated that, compared to non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited a superior predictive performance on forecasting of both biogas and methane production rates, with satisfactory determination coefficients over 0.98.
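Two of the building blocks named in the abstract, trapezoidal membership functions and centre-of-gravity (centroid) defuzzification, can be sketched as follows. The output universe and the aggregated fuzzy set are illustrative assumptions, not the paper's 134-rule system.

```python
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function with feet a, d and shoulders b, c."""
    x = np.asarray(x, float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

def centroid(x, mu):
    """Centre-of-gravity (COG) defuzzification of membership values mu over x."""
    return float(np.sum(x * mu) / np.sum(mu))

# Hypothetical output universe: biogas production rate, L/day.
x = np.linspace(0, 100, 1001)
# Aggregated output set after rule firing (illustrative): a "medium"
# set clipped at 0.3 and a "high" set clipped at 0.8.
mu = np.maximum(0.3 * trapmf(x, 20, 30, 40, 50),
                0.8 * trapmf(x, 45, 55, 65, 75))
crisp = centroid(x, mu)   # crisp predicted rate, roughly between the two sets
print(40.0 < crisp < 70.0)
```

The centroid pulls the crisp output toward the more strongly fired set, which is how a Mamdani system turns overlapping rule conclusions into a single predicted rate.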
Sudhakaran, Sairam; Amy, Gary L.
2013-01-01
In this study, quantitative structure-activity relationship (QSAR) models for O3 and AOP processes were developed, and rate constants, kOH and kO3, were predicted based on target compound properties. The kO3 and kOH values ranged from 5 × 10⁻⁴ to 10⁵ M⁻¹ s⁻¹.
Directory of Open Access Journals (Sweden)
Ming-wei Li
2015-01-01
Full Text Available Recent years have witnessed the rapid development of intelligent transportation systems around the world, which help to relieve urban traffic congestion problems. For instance, many mega-cities in China have devoted a large amount of money and resources to the development of intelligent transportation systems. This poses an intriguing and important issue: how to measure and quantify the contribution of an intelligent transportation system to the urban city, which is still a puzzle. This paper proposes a matching difference-in-difference model to calculate the contribution rate of an intelligent transportation system to traffic smoothness. Within the model, the main effect indicators of traffic smoothness are first identified, then the evaluation index system is built, and finally the idea of the matching pool is introduced. The proposed model is illustrated in Guangzhou, China (capital city of Guangdong province). The results show that the introduction of ITS contributes 9.25% to the improvement of traffic smoothness in Guangzhou. Also, the research explains the working mechanism of how ITS improves urban traffic smoothness. Eventually, some strategy recommendations are put forward to improve urban traffic smoothness.
DEFF Research Database (Denmark)
such that conventional LDF (linear driving force) type models are extended to inactive zones without losing their generality. Based on a limiting component constraint, an exchange probability kernel is developed for multi-component systems. The LDF-type model with the kernel is continuous with time and axial direction… Two tuning parameters such as concentration layer thickness and function change rate at the threshold point are needed for the probability kernels, which are not sensitive to problems considered.
Chávez Muñoz, Pablo; Fernandes da Silva, Marcus; Vivas Miranda, José; Claro, Francisco; Gomez Diniz, Raimundo
2007-12-01
We have studied the behavior of the Hurst index associated with the currency exchange rate in Brazil and Chile. It is shown that this index maps the degree of government control over the exchange rate. A supply-and-demand model based on autonomous agents is proposed, which simulates a virtual market of sales and purchases where buyers or sellers are forced to negotiate through an intermediary. According to this model, the average price of daily transactions corresponds to the theoretical equilibrium proposed by the law of supply and demand. The influence of an added tendency factor is also analyzed.
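Since the abstract centers on the Hurst index, here is a rough rescaled-range (R/S) sketch of how such an index is commonly estimated. It is an illustration of the standard technique on synthetic data, not the authors' procedure; window sizes and series lengths are assumptions.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rough rescaled-range (R/S) estimate of the Hurst exponent:
    average R/S over non-overlapping windows of doubling sizes, then
    take the slope of log(R/S) vs. log(size). Illustration only."""
    x = np.asarray(x, float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())    # cumulative deviations
            r = dev.max() - dev.min()            # range
            s = seg.std()                        # standard deviation
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    return float(np.polyfit(np.log(sizes), np.log(rs), 1)[0])

rng = np.random.default_rng(1)
h_noise = hurst_rs(rng.standard_normal(4096))             # i.i.d. noise: H near 0.5
h_walk = hurst_rs(np.cumsum(rng.standard_normal(4096)))   # random walk: H near 1
print(h_noise < h_walk)
```

Values near 0.5 indicate an uncorrelated market, while persistent (trending) series push the index toward 1, which is what makes it a proxy for the degree of intervention in an exchange rate.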
2010-10-01
47 CFR 65.800 (2010-10-01): Rate base. Telecommunication, Federal Communications Commission (continued), Common Carrier Services (continued), Interstate Rate of Return Prescription Procedures and Methodologies, Rate Base. § 65.800 Rate base. The rate base shall...
International Nuclear Information System (INIS)
Beetz, Ivo; Schilstra, Cornelis; Luijk, Peter van; Christianen, Miranda E.M.C.; Doornaert, Patricia; Bijl, Henk P.; Chouvalova, Olga; Heuvel, Edwin R. van den; Steenbakkers, Roel J.H.M.; Langendijk, Johannes A.
2012-01-01
Purpose: The purpose of this study was to investigate the ability of predictive models for patient-rated xerostomia (XER6M) and sticky saliva (STIC6M) at 6 months after completion of primary (chemo)radiation, developed in head and neck cancer patients treated with 3D-conformal radiotherapy (3D-CRT), to predict outcome in patients treated with intensity-modulated radiotherapy (IMRT). Methods and materials: Recently, we published the results of a prospective study on predictive models for patient-rated xerostomia and sticky saliva in head and neck cancer patients treated with 3D-CRT (3D-CRT-based NTCP models). The 3D-CRT-based model for XER6M consisted of three factors, including the mean parotid dose, age, and baseline xerostomia (none versus a bit). The 3D-CRT-based model for STIC6M consisted of the mean submandibular dose, age, the mean sublingual dose, and baseline sticky saliva (none versus a bit). In the current study, a population consisting of 162 patients treated with IMRT was used to test the external validity of these 3D-CRT-based models. External validity was described by the explained variation (Nagelkerke R²) and the Brier score. The discriminative abilities of the models were calculated using the area under the receiver operating curve (AUC), and calibration (i.e. the agreement between predicted and observed outcome) was assessed with the Hosmer–Lemeshow "goodness-of-fit" test. Results: Overall model performance of the 3D-CRT-based predictive models for XER6M and STIC6M was significantly worse in terms of the Brier score and Nagelkerke R² among patients treated with IMRT. Moreover, the AUCs for both 3D-CRT-based models in the IMRT-treated patients were markedly lower. The Hosmer–Lemeshow test showed a significant disagreement for both models between predicted risk and observed outcome. Conclusion: 3D-CRT-based models for patient-rated xerostomia and sticky saliva among head and neck cancer patients treated with primary radiotherapy or
Vitale, M.; Matteucci, G.; Fares, S.; Davison, B.
2009-02-01
This paper concerns the application of a process-based model (MOCA, Modelling of Carbon Assessment) as a useful tool for estimating gas exchange, integrating empirical algorithms for the calculation of monoterpene fluxes, in a Mediterranean maquis of central Italy (Castelporziano, Rome). Simulations were carried out for a range of hypothetical but realistic canopies of the evergreen Quercus ilex (holm oak), Arbutus unedo (strawberry tree) and Phillyrea latifolia. In addition, the dependence of monoterpene fluxes at the canopy scale on total leaf area and leaf distribution was incorporated into the algorithms. Simulation of the gas exchange rates showed higher values for P. latifolia and A. unedo (2.39±0.30 and 3.12±0.27 gC m-2 d-1, respectively) than for Q. ilex (1.67±0.08 gC m-2 d-1) during the measuring campaign (May-June). The average Gross Primary Production (GPP) values agreed well with those measured by eddy covariance (7.98±0.20 and 6.00±1.46 gC m-2 d-1, respectively, in May-June), although differences of about 30% were evident in a point-to-point comparison. These differences may be explained by the non-uniformity of the measuring site, where diurnal winds blowing from the S-SW affected the calculation of CO2 and water fluxes. The introduction of structural parameters into the algorithms for monoterpene calculation made it possible to simulate monoterpene emission rates and fluxes in accord with those measured (6.50±2.25 vs. 9.39±4.5 μg g-1DW h-1 for Q. ilex, and 0.63±0.207 vs. 0.98±0.30 μg g-1DW h-1 for P. latifolia). Some constraints of the MOCA model are discussed, but it is shown to be a useful tool for simulating physiological processes and BVOC fluxes under complicated plant distributions and environmental conditions, while requiring only a small number of input data.
Decay rates of quarkonia and potential models
International Nuclear Information System (INIS)
Rai, Ajay Kumar; Pandya, J N; Vinodkumar, P C
2005-01-01
The decay rates of cc-bar and bb-bar mesons have been studied with contributions from different correction terms. The corrections, based on hard processes involved in the decays, are quantitatively studied in the framework of different phenomenological potential models
Weil, Joyce; Hutchinson, Susan R; Traxler, Karen
2014-11-01
Data from the Women's Health and Aging Study were used to test a model of factors explaining depressive symptomology. The primary purpose of the study was to explore the association between performance-based measures of functional ability and depression and to examine the role of self-rated physical difficulties and perceived instrumental support in mediating the relationship between performance-based functioning and depression. The inclusion of performance-based measures allows for the testing of functional ability as a clinical precursor to disability and depression: a critical, but rarely examined, association in the disablement process. Structural equation modeling supported the overall fit of the model and found an indirect relationship between performance-based functioning and depression, with perceived physical difficulties serving as a significant mediator. Our results highlight the complementary nature of performance-based and self-rated measures and the importance of including perception of self-rated physical difficulties when examining depression in older persons. © The Author(s) 2014.
Li, Mingchao; Han, Shuai; Zhou, Sibao; Zhang, Ye
2018-06-01
Based on a 3D model of a discrete fracture network (DFN) in a rock mass, an improved projective method for computing the 3D mechanical connectivity rate was proposed. The Monte Carlo simulation method, 2D Poisson process and 3D geological modeling technique were integrated into a polyhedral DFN modeling approach, and the simulation results were verified by numerical tests and graphical inspection. Next, the traditional projective approach for calculating the rock mass connectivity rate was improved using the 3D DFN models by (1) using the polyhedral model to replace the Baecher disk model; (2) taking the real cross section of the rock mass, rather than a part of the cross section, as the test plane; and (3) dynamically searching the joint connectivity rates using different dip directions and dip angles at different elevations to calculate the maximum, minimum and average values of the joint connectivity at each elevation. In a case study, the improved method and traditional method were used to compute the mechanical connectivity rate of the slope of a dam abutment. The results of the two methods were further used to compute the cohesive force of the rock masses. Finally, a comparison showed that the cohesive force derived from the traditional method had a higher error, whereas the cohesive force derived from the improved method was consistent with the suggested values. According to the comparison, the effectiveness and validity of the improved method were verified indirectly.
Base Rates: Both Neglected and Intuitive
Pennycook, Gordon; Trippas, Dries; Handley, Simon J.; Thompson, Valerie A.
2014-01-01
Base-rate neglect refers to the tendency for people to underweight base-rate probabilities in favor of diagnostic information. It is commonly held that base-rate neglect occurs because effortful (Type 2) reasoning is required to process base-rate information, whereas diagnostic information is accessible to fast, intuitive (Type 1) processing…
Wen, Yu-Wen; Wu, Hsin; Chang, Chee-Jen
2015-05-01
Vaccination can reduce the incidence and mortality of an infectious disease and thus increase the years of life and productivity for the entire society. But when determining the vaccination coverage rate, its economic burden is usually not taken into account. This article aimed to use a dynamic transmission model (DTM), which is based on a susceptible-infectious-recovered model and is a system of differential equations, to find the optimal vaccination coverage rate based on the economic burden of an infectious disease. Vaccination for pneumococcal diseases was used as an example to demonstrate the main purpose. 23-valent pneumococcal polysaccharide vaccines (PPV23) and 13-valent pneumococcal conjugate vaccines (PCV13) have shown their cost-effectiveness in elderly and children, respectively. Scenario analysis of PPV23 for the elderly aged 65+ years and of PCV13 for children aged 0 to 4 years was applied to assess the optimal vaccination coverage rate based on the 5-year economic burden. Model parameters were derived from Taiwan's National Health Insurance Research Database, government data, and published literature. Various vaccination coverage rates, the vaccine efficacy, and all epidemiologic parameters were substituted into the DTM, and all differential equations were solved in R Statistical Software. If the coverage rate of PPV23 for the elderly and of PCV13 for the children both reach 50%, the economic burden due to pneumococcal disease will be acceptable. This article provided an alternative perspective from the economic burden of diseases to obtain a vaccination coverage rate using the DTM. This will provide valuable information for vaccination policy decision makers. Copyright © 2015 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
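The dynamic transmission model is a system of differential equations of susceptible-infectious-recovered type; a minimal forward-Euler sketch with a vaccinated-at-start fraction (all parameter values are illustrative placeholders, not the study's Taiwan-calibrated inputs):

```python
def sir_with_vaccination(beta, gamma, coverage, efficacy, days, dt=0.1):
    """Forward-Euler SIR: a fraction coverage*efficacy starts immune.
    beta = transmission rate /day, gamma = recovery rate /day."""
    v = coverage * efficacy            # effectively immunized fraction
    i = 1e-4                           # small seeded infectious fraction
    s, r = 1.0 - v - i, v
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return s, i, r

# Hypothetical parameters: R0 = beta/gamma = 2, 50% coverage, 70% efficacy
s, i, r = sir_with_vaccination(beta=0.4, gamma=0.2, coverage=0.5,
                               efficacy=0.7, days=365)
print(f"still susceptible: {s:.3f}, ever infected: {r - 0.35:.3f}")
```

Sweeping `coverage` and multiplying the resulting infections by per-case costs is the basic mechanism by which such a model links coverage rates to economic burden.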
Energy Technology Data Exchange (ETDEWEB)
Turkdogan-Aydinol, F. Ilter, E-mail: aydin@yildiz.edu.tr [Yildiz Technical University, Faculty of Civil Engineering, Department of Environmental Engineering, 34220 Davutpasa, Esenler, Istanbul (Turkey); Yetilmezsoy, Kaan, E-mail: yetilmez@yildiz.edu.tr [Yildiz Technical University, Faculty of Civil Engineering, Department of Environmental Engineering, 34220 Davutpasa, Esenler, Istanbul (Turkey)
2010-10-15
A MIMO (multiple-input, multiple-output) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables (volumetric organic loading rate (OLR), volumetric total chemical oxygen demand (TCOD) removal rate (RV), influent alkalinity, influent pH and effluent pH) were fuzzified by the use of an artificial-intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and the defuzzification method, respectively. Fuzzy-logic predicted results were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed a remarkable performance on the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (±3)% and an average volumetric TCOD removal rate of 6.87 (±3.93) kg TCODremoved/m³-day. Findings of this study clearly indicated that, compared to non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited a superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients over 0.98.
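The Mamdani machinery described (trapezoidal membership, prod inference, centroid defuzzification) can be sketched in a few lines; the rule, universes, and breakpoints below are invented for illustration, not the study's 134-rule base:

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership: rises a->b, flat b->c, falls c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def centroid(xs, mus):
    """Centre-of-gravity (COG) defuzzification over a sampled output universe."""
    num = sum(x * m for x, m in zip(xs, mus))
    den = sum(mus)
    return num / den if den else 0.0

# One illustrative rule: IF OLR is 'high' THEN biogas rate is 'high'.
olr = 7.5                               # kg COD/m3-day, hypothetical input
fire = trapmf(olr, 5, 8, 12, 15)        # rule firing strength
xs = [i * 0.1 for i in range(0, 201)]   # biogas-rate universe, 0-20 m3/day
# Mamdani 'prod' inference: scale the output fuzzy set by the firing strength
mus = [fire * trapmf(x, 10, 14, 18, 20) for x in xs]
print(centroid(xs, mus))                # crisp predicted biogas rate
```

A full MIMO system would aggregate the scaled output sets of all fired rules (e.g., by max) before defuzzifying.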
Affinity functions for modeling glass dissolution rates
Energy Technology Data Exchange (ETDEWEB)
Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)
1997-07-01
Glass dissolution rates decrease dramatically as glasses approach "saturation" with respect to the leachate solution. Most repository sites are chosen where water fluxes are minimal, and therefore the waste glass is most likely to dissolve under conditions close to saturation. The key term in the rate expression used to predict glass dissolution rates close to saturation is the affinity term, which accounts for saturation effects on dissolution rates. Interpretations of recent experimental data on the dissolution behaviour of silicate glasses and silicate minerals indicate the following: 1) simple affinity control does not explain the observed dissolution rates for silicate minerals or glasses; 2) dissolution rates can be significantly modified by dissolved cations even under conditions far from saturation, where the affinity term is near unity; 3) the effects of dissolved species such as Al and Si on the dissolution rate vary with pH, temperature, and saturation state; and 4) as temperature is increased, the effects of both pH and temperature on glass and mineral dissolution rates decrease, which strongly suggests a switch in rate control from surface-reaction-based to diffusion control. Borosilicate glass dissolution models need to be upgraded to account for these recent experimental observations. (A.C.)
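The affinity term discussed here is commonly written in the transition-state-theory form r = k₊(1 − Q/K); a sketch of that generic textbook form (not Bourcier's specific fitted expression):

```python
def dissolution_rate(k_plus, Q, K):
    """TST-style rate law: r = k+ * (1 - Q/K), where Q/K is the
    saturation state of the leachate with respect to the glass."""
    return k_plus * (1.0 - Q / K)

k = 1e-6                                   # forward rate constant, hypothetical
print(dissolution_rate(k, Q=0.0, K=1.0))   # far from saturation: full forward rate
print(dissolution_rate(k, Q=0.99, K=1.0))  # near saturation: rate approaches zero
```

The experimental findings summarized above are precisely the cases where this simple form fails, e.g. rates suppressed by dissolved Al even when Q/K is near zero.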
Modeling the Volatility of Exchange Rates: GARCH Models
Directory of Open Access Journals (Sweden)
Fahima Charef
2017-03-01
The modeling of exchange rate dynamics has long remained a center of financial and economic research. In our research we tried to study the relationship between the evolution of exchange rates and macroeconomic fundamentals. Our empirical study is based on monthly series of exchange rates for the Tunisian dinar against three currencies of major trading partners (dollar, euro, yen) and fundamentals (the terms of trade, the inflation rate, the interest rate differential), from January 2000 to December 2014, for the case of Tunisia. We have adopted models of conditional heteroscedasticity (ARCH, GARCH, EGARCH, TGARCH). The results indicate that there is a partial relationship between the evolution of the Tunisian dinar exchange rates and macroeconomic variables.
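For reference, the conditional-variance recursion of a GARCH(1,1), the simplest member of the model family above, can be sketched as follows (parameter values and the return series are illustrative, not the Tunisian-dinar estimates):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance: sigma2[t] = omega + alpha*r[t-1]^2 + beta*sigma2[t-1]."""
    r = np.asarray(returns, float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r.var()  # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, 500)              # placeholder daily return series
s2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
print(s2[-1] ** 0.5)                      # latest conditional volatility
```

EGARCH and TGARCH extend this recursion with asymmetry terms so that negative and positive shocks can move volatility differently.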
Konrad, Paul Markus
2014-01-01
All across Europe, a drama of historical proportions is unfolding as the debt crisis continues to rock the worldwide financial landscape. Whilst insecurity rises, the general public, policy makers, scientists and academics are searching high and low for independent and objective analyses that may help to assess this unusual situation. For more than a century, rating agencies had developed methods and standards to evaluate and analyze companies, projects or even sovereign countries. However, due to their dated internal processes, the independence of these rating agencies is being questioned, ra
International Nuclear Information System (INIS)
Hallam, Brett; Abbott, Malcolm; Nampalli, Nitin; Hamer, Phill; Wenham, Stuart
2016-01-01
A three-state model is used to explore the influence of defect formation- and passivation rates of carrier-induced degradation related to boron-oxygen complexes in boron-doped p-type silicon solar cells within a hydrogen-based model. The model highlights that the inability to effectively mitigate carrier-induced degradation at elevated temperatures in previous studies is due to the limited availability of defects for hydrogen passivation, rather than being limited by the defect passivation rate. An acceleration of the defect formation rate is also observed to increase both the effectiveness and speed of carrier-induced degradation mitigation, whereas increases in the passivation rate do not lead to a substantial acceleration of the hydrogen passivation process. For high-throughput mitigation of such carrier-induced degradation on finished solar cell devices, two key factors were found to be required, high-injection conditions (such as by using high intensity illumination) to enable an acceleration of defect formation whilst simultaneously enabling a rapid passivation of the formed defects, and a high temperature to accelerate both defect formation and defect passivation whilst still ensuring an effective mitigation of carrier-induced degradation
Gao, Min-Jie; Zheng, Zhi-Yong; Wu, Jian-Rong; Dong, Shi-Juan; Li, Zhen; Jin, Hu; Zhan, Xiao-Bei; Lin, Chi-Chung
2012-02-01
Effective expression of porcine interferon-α (pIFN-α) with recombinant Pichia pastoris was conducted in a bench-scale fermentor. The influence of the glycerol feeding strategy on the specific growth rate and protein production was investigated. The traditional DO-stat feeding strategy led to a very low cell growth rate, resulting in a low dry cell weight (DCW) of about 90 g/L during the subsequent induction phase. The previously reported Artificial Neural Network Pattern Recognition (ANNPR) model-based glycerol feeding strategy improved the cell density to 120 g DCW/L, but the specific growth rate decreased from 0.15-0.18 h(-1) to 0.03-0.08 h(-1) during the last 10 h of the glycerol feeding stage, leading to a variation in porcine interferon-α production, as the glycerol feeding scheme had a significant effect on the induction phase. This problem was resolved by an improved ANNPR model-based feeding strategy that maintained the specific growth rate above 0.11 h(-1). With this feeding strategy, the pIFN-α concentration reached a level of 1.43 g/L, more than 1.5-fold higher than that obtained with the previously adopted feeding strategy. Our results showed that increasing the specific growth rate favored target protein production and that the glycerol feeding methods directly influenced the induction stage. Consequently, higher cell density and specific growth rate as well as effective porcine interferon-α production have been achieved by our novel glycerol feeding strategy.
Gaussian Mixture Model of Heart Rate Variability
Costa, Tommaso; Boccignone, Giuseppe; Ferraro, Mario
2012-01-01
Heart rate variability (HRV) is an important measure of sympathetic and parasympathetic functions of the autonomic nervous system and a key indicator of cardiovascular condition. This paper proposes a novel method to investigate HRV, namely by modelling it as a linear combination of Gaussians. Results show that three Gaussians are enough to describe the stationary statistics of heart variability and to provide a straightforward interpretation of the HRV power spectrum. Comparisons have been made also with synthetic data generated from different physiologically based models showing the plausibility of the Gaussian mixture parameters. PMID:22666386
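Fitting a three-component Gaussian mixture of the kind described can be done with a short expectation-maximization loop; the data below are synthetic stand-ins, not actual HRV recordings:

```python
import numpy as np

def em_gmm_1d(x, k=3, iters=200):
    """Fit a k-component 1-D Gaussian mixture by expectation-maximization."""
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # spread-out initial means
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weight, mean, and variance updates
        n = resp.sum(axis=0)
        w, mu = n / len(x), (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

# Synthetic stand-in for RR-interval data (ms): three overlapping regimes
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(700, 20, 300),
                    rng.normal(800, 30, 300),
                    rng.normal(950, 40, 300)])
w, mu, var = em_gmm_1d(x)
print(np.sort(mu).round(1))   # recovered component means
```

The fitted weights, means, and variances are the "Gaussian mixture parameters" whose physiological plausibility the paper assesses.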
Sudhakaran, Sairam
2013-03-01
Ozonation is an oxidation process for the removal of organic micropollutants (OMPs) from water and the chemical reaction is governed by second-order kinetics. An advanced oxidation process (AOP), wherein hydroxyl radicals (OH radicals) are generated, is more effective in removing a wider range of OMPs from water than direct ozonation. Second-order rate constants (kOH and kO3) are good indices to estimate the oxidation efficiency, where higher rate constants indicate more rapid oxidation. In this study, quantitative structure activity relationship (QSAR) models for O3 and AOP processes were developed, and rate constants, kOH and kO3, were predicted based on target compound properties. The kO3 and kOH values ranged from 5 × 10⁻⁴ to 10⁵ M⁻¹ s⁻¹ and from 0.04 × 10⁹ to 18 × 10⁹ M⁻¹ s⁻¹, respectively. Several molecular descriptors which potentially influence O3 and OH radical oxidation were identified and studied. The QSAR-defining descriptors were double bond equivalence (DBE), ionisation potential (IP), electron affinity (EA) and the weakly-polar component of solvent accessible surface area (WPSA), and the chemical and statistical significance of these descriptors was discussed. Multiple linear regression was used to build the QSAR models, resulting in high goodness-of-fit, r² (>0.75). The models were validated by internal and external validation along with residual plots. © 2012 Elsevier Ltd.
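The multiple-linear-regression step of such a QSAR can be sketched with ordinary least squares; the descriptor values and rate constants below are invented for illustration (the real models used the study's measured compound set):

```python
import numpy as np

def fit_qsar(descriptors, log_k):
    """Ordinary least squares: log k ~ intercept + linear descriptor terms."""
    X = np.column_stack([np.ones(len(descriptors)), descriptors])
    coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
    pred = X @ coef
    ss_res = ((log_k - pred) ** 2).sum()
    ss_tot = ((log_k - log_k.mean()) ** 2).sum()
    return coef, 1 - ss_res / ss_tot      # coefficients and r^2

# Hypothetical descriptor matrix: columns DBE, IP, EA, WPSA (scaled units)
X = np.array([[1.0, 9.1, 0.5, 0.2], [2.0, 8.7, 0.9, 0.4],
              [0.0, 9.8, 0.1, 0.1], [3.0, 8.2, 1.2, 0.6],
              [1.5, 9.0, 0.7, 0.3], [2.5, 8.5, 1.0, 0.5]])
logk = np.array([8.9, 9.4, 8.1, 9.8, 9.1, 9.6])  # illustrative log10 kOH values
coef, r2 = fit_qsar(X, logk)
print(coef, r2)
```

Rate constants spanning orders of magnitude are regressed on a log scale, which is why the goodness-of-fit threshold (r² > 0.75) is quoted for log k rather than k itself.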
2015-11-05
This final rule will update Home Health Prospective Payment System (HH PPS) rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor under the Medicare prospective payment system for home health agencies (HHAs), effective for episodes ending on or after January 1, 2016. As required by the Affordable Care Act, this rule implements the 3rd year of the 4-year phase-in of the rebasing adjustments to the HH PPS payment rates. This rule updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking and provides a clarification regarding the use of the "initial encounter'' seventh character applicable to certain ICD-10-CM code categories. This final rule will also finalize reductions to the national, standardized 60-day episode payment rate in CY 2016, CY 2017, and CY 2018 of 0.97 percent in each year to account for estimated case-mix growth unrelated to increases in patient acuity (nominal case-mix growth) between CY 2012 and CY 2014. In addition, this rule implements a HH value-based purchasing (HHVBP) model, beginning January 1, 2016, in which all Medicare-certified HHAs in selected states will be required to participate. Finally, this rule finalizes minor changes to the home health quality reporting program and minor technical regulations text changes.
International Nuclear Information System (INIS)
Steigelmann, W.H.
1986-01-01
In most states, all costs associated with building new power plants and other facilities are borne by the owning utility until the facility becomes useful to customers. For a nuclear plant, this means that a utility must raise several billion dollars in capital over a period of 10 to 20 years, often under growing pressure to cancel the project. None of the possible ways of mitigating rate shock is free of controversy, but the objective should be to do that which is most equitable and has the least adverse effects. The basic options are discussed; all but one involve phasing in over time the full economic effects of the new plant. The impacts on average electricity price of three of the seven approaches are illustrated
2016-11-03
This final rule updates the Home Health Prospective Payment System (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor; effective for home health episodes of care ending on or after January 1, 2017. This rule also: Implements the last year of the 4-year phase-in of the rebasing adjustments to the HH PPS payment rates; updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the 2nd-year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between CY 2012 and CY 2014; finalizes changes to the methodology used to calculate payments made under the HH PPS for high-cost "outlier" episodes of care; implements changes in payment for furnishing Negative Pressure Wound Therapy (NPWT) using a disposable device for patients under a home health plan of care; discusses our efforts to monitor the potential impacts of the rebasing adjustments; includes an update on subsequent research and analysis as a result of the findings from the home health study; and finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model, which was implemented on January 1, 2016; and updates to the Home Health Quality Reporting Program (HH QRP).
Stochastic interest rates model in compounding | Galadima ...
African Journals Online (AJOL)
Stochastic interest rates model in compounding. ... in finance, real estate, insurance, accounting and other areas of business administration. The assumption that future rates are fixed and known with certainty at the beginning of an investment, ...
Model Uncertainty and Exchange Rate Forecasting
Kouwenberg, R.; Markiewicz, A.; Verhoeks, R.; Zwinkels, R.C.J.
2017-01-01
Exchange rate models with uncertain and incomplete information predict that investors focus on a small set of fundamentals that changes frequently over time. We design a model selection rule that captures the current set of fundamentals that best predicts the exchange rate. Out-of-sample tests show
Directory of Open Access Journals (Sweden)
Maja Stikic
2014-11-01
The objective of this study was to investigate the feasibility of physiological metrics such as ECG-derived heart rate and EEG-derived cognitive workload and engagement as potential predictors of performance on different training tasks. An unsupervised approach based on a self-organizing neural network (NN) was utilized to model cognitive state changes over time. The feature vector comprised EEG-engagement, EEG-workload, and heart rate metrics, all self-normalized to account for individual differences. During the competitive training process, a linear topology was developed where feature vectors similar to each other activated the same NN nodes. The NN model was trained and auto-validated on combat marksmanship training data from 51 participants who were required to make deadly force decisions in challenging combat scenarios. The trained NN model was cross-validated using 10-fold cross-validation. It was also validated on a golf study in which an additional 22 participants were asked to complete 10 sessions of 10 putts each. Temporal sequences of the activated nodes for both studies followed the same pattern of changes, demonstrating the generalization capabilities of the approach. Most node transitions were local, but important events typically caused significant changes in the physiological metrics, as evidenced by larger state changes. This was investigated by calculating a transition score as the sum of subsequent state transitions between the activated NN nodes. Correlation analysis demonstrated statistically significant correlations between the transition scores and subjects' performances in both studies. This paper explored the hypothesis that temporal sequences of physiological changes comprise the discriminative patterns for performance prediction. These physiological markers could be utilized in future training improvement systems (e.g., through neurofeedback) and applied across a variety of training environments.
Dengfeng Yan; Jaideep Sengupta
2013-01-01
This research examines how consumers use base rate (e.g., disease prevalence in a population) and case information (e.g., an individual's disease symptoms) to estimate health risks. Drawing on construal level theory, we propose that consumers' reliance on base rate (case information) will be enhanced (weakened) by psychological distance. A corollary of this premise is that self-positivity (i.e., underestimating self-risk vs. other-risk) is likely when the disease base rate is high but the cas...
Modeling inflation rates and exchange rates in Ghana: application of multivariate GARCH models.
Nortey, Ezekiel Nn; Ngoh, Delali D; Doku-Amponsah, Kwabena; Ofori-Boateng, Kenneth
2015-01-01
This paper was aimed at investigating the volatility and conditional relationships among inflation rates, exchange rates and interest rates, and at constructing a model using multivariate GARCH DCC and BEKK models, using Ghana data from January 1990 to December 2013. The study revealed that the cumulative depreciation of the cedi to the US dollar from 1990 to 2013 is 7,010.2% and the yearly weighted depreciation of the cedi to the US dollar for the period is 20.4%. There was evidence that a stable inflation rate does not mean that exchange rates and interest rates are expected to be stable. Rather, when the cedi performs well on the forex market, inflation rates and interest rates react positively and become stable in the long run. The BEKK model is robust for modelling and forecasting the volatility of inflation rates, exchange rates and interest rates. The DCC model is robust for modelling the conditional and unconditional correlations among inflation rates, exchange rates and interest rates. The BEKK model, which forecasted high exchange rate volatility for the year 2014, is very robust for modelling the exchange rates in Ghana. The mean equation of the DCC model is also robust for forecasting inflation rates in Ghana.
Verner, Marc-André; Loccisano, Anne E; Morken, Nils-Halvdan; Yoon, Miyoung; Wu, Huali; McDougall, Robin; Maisonet, Mildred; Marcus, Michele; Kishi, Reiko; Miyashita, Chihiro; Chen, Mei-Huei; Hsieh, Wu-Shiun; Andersen, Melvin E; Clewell, Harvey J; Longnecker, Matthew P
2015-12-01
Prenatal exposure to perfluoroalkyl substances (PFAS) has been associated with lower birth weight in epidemiologic studies. This association could be attributable to glomerular filtration rate (GFR), which is related to PFAS concentration and birth weight. We used a physiologically based pharmacokinetic (PBPK) model of pregnancy to assess how much of the PFAS-birth weight association observed in epidemiologic studies might be attributable to GFR. We modified a PBPK model to reflect the association of GFR with birth weight (estimated from three studies of GFR and birth weight) and used it to simulate PFAS concentrations in maternal and cord plasma. The model was run 250,000 times, with variation in parameters, to simulate a population. Simulated data were analyzed to evaluate the association between PFAS levels and birth weight due to GFR. We compared simulated estimates with those from a meta-analysis of epidemiologic data. The reduction in birth weight for each 1-ng/mL increase in simulated cord plasma for perfluorooctane sulfonate (PFOS) was 2.72 g (95% CI: -3.40, -2.04), and for perfluorooctanoic acid (PFOA) was 7.13 g (95% CI: -8.46, -5.80); results based on maternal plasma at term were similar. Results were sensitive to variations in PFAS level distributions and the strength of the GFR-birth weight association. In comparison, our meta-analysis of epidemiologic studies suggested that each 1-ng/mL increase in prenatal PFOS and PFOA levels was associated with 5.00 g (95% CI: -8.92, -1.09) and 14.72 g (95% CI: -21.66, -7.78) reductions in birth weight, respectively. Results of our simulations suggest that a substantial proportion of the association between prenatal PFAS and birth weight may be attributable to confounding by GFR and that confounding by GFR may be more important in studies with sample collection later in pregnancy.
International Nuclear Information System (INIS)
Leborgne, Felix; Fowler, Jack F.; Leborgne, Jose H.; Zubizarreta, Eduardo; Curochquin, Rene
1999-01-01
Purpose: To compare results and complications of our previous low-dose-rate (LDR) brachytherapy schedule for early-stage cancer of the cervix with a prospectively designed medium-dose-rate (MDR) schedule based on the linear-quadratic (LQ) model. Methods and Materials: A combination of brachytherapy, external beam pelvic and parametrial irradiation was used in 102 consecutive Stage Ib-IIb LDR-treated patients (1986-1990) and 42 equally staged MDR-treated patients (1994-1996). The planned MDR schedule consisted of three insertions on three treatment days with six 8-Gy brachytherapy fractions to Point A, two on each treatment day with an interfraction interval of 6 hours, plus an 18-Gy external whole-pelvic dose, followed by additional parametrial irradiation. The calculated biologically effective dose (BED) for tumor was 90 Gy₁₀ and for rectum below 125 Gy₃. Results: In practice the MDR brachytherapy schedule achieved a tumor BED of 86 Gy₁₀ and a rectal BED of 101 Gy₃. The latter was better than originally planned due to a reduction from 85% to 77% in the percentage of the mean dose to the rectum in relation to Point A. The mean overall treatment time was 10 days shorter for MDR in comparison with LDR. The 3-year actuarial central control for LDR and MDR was 97% and 98% (p = NS), respectively. The Grade 2 and 3 late complication rates (scale 0 to 3) were 1% and 2.4%, respectively, for LDR (3-year) and MDR (2-year). Conclusions: LQ is a reliable tool for designing new schedules with altered fractionation and dose rates. The MDR schedule has proven to be an equivalent treatment schedule compared with LDR, with the additional advantage of a shorter overall treatment time. The mean rectal BED (Gy₃) was lower than expected
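The BED figures quoted follow the standard linear-quadratic formula BED = n·d·(1 + d/(α/β)); a sketch of the acute-dose form (without the dose-rate and repair-time corrections a full MDR calculation would include):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Linear-quadratic biologically effective dose, in Gy."""
    d = dose_per_fraction
    return n_fractions * d * (1 + d / alpha_beta)

# Six 8-Gy brachytherapy fractions, tumor alpha/beta = 10 Gy
print(bed(6, 8, 10))         # 86.4 Gy10 -- close to the 86 Gy10 reported
# Same schedule seen by late-responding rectal tissue (alpha/beta = 3 Gy),
# with the rectum receiving about 77% of the Point A dose per fraction;
# repair-time corrections, omitted here, would lower this figure further
print(bed(6, 8 * 0.77, 3))
```

The larger per-fraction dose penalty for low-α/β tissue is exactly why the rectal BED is tracked in Gy₃ while the tumor BED is tracked in Gy₁₀.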
Collazo, Andrés A.
2018-01-01
A model derived from the theory of planned behavior was empirically assessed for understanding faculty intention to use student ratings for teaching improvement. A sample of 175 professors participated in the study. The model was statistically significant and had a very large explanatory power. Instrumental attitude, affective attitude, perceived…
International Nuclear Information System (INIS)
Rohay, A.C.
1991-01-01
Gable Mountain is a segment of the Umtanum Ridge-Gable Mountain structural trend, an east-west trending series of anticlines and one of the major geologic structures on the Hanford Site. A probabilistic seismic exposure model indicates that Gable Mountain and two adjacent segments contribute significantly to the seismic hazard at the Hanford Site. Geologic measurements of the uplift of initially horizontal (11-12 Ma) basalt flows indicate that a broad, continuous, primary anticline grew at an average rate of 0.009-0.011 mm/a, and narrow, segmented, secondary anticlines grew at rates of 0.009 mm/a at Gable Butte and 0.018 mm/a at Gable Mountain. The buried Southeast Anticline appears to have a different geometry, consisting of a single, intermediate-width anticline with an estimated growth rate of 0.007 mm/a. The recurrence rate and maximum magnitude of earthquakes for the fault models were used to estimate the fault slip rate for each of the fault models and to determine the implied structural growth rate of the segments. The current model for Gable Mountain-Gable Butte predicts 0.004 mm/a of vertical uplift due to primary faulting and 0.008 mm/a due to secondary faulting. These rates are roughly half the structurally estimated rates for Gable Mountain, but the model does not account for the smaller secondary fold at Gable Butte. The model predicted an uplift rate for the Southeast Anticline of 0.006 mm/a, caused by the low "fault capability" weighting rather than a different fault geometry. The effects of previous modifications to the fault models are examined and potential future modifications are suggested. For example, the earthquake recurrence relationship used in the current exposure model has a b-value of 1.15, compared to a previous value of 0.85. This increases the implied deformation rates due to secondary fault models, and therefore supports the application of this regionally determined b-value to this fault/fold system.
Nicholas A. Povak; Paul F. Hessburg; Todd C. McDonnell; Keith M. Reynolds; Timothy J. Sullivan; R. Brion Salter; Bernard J. Crosby
2014-01-01
Accurate estimates of soil mineral weathering are required for regional critical load (CL) modeling to identify ecosystems at risk of the deleterious effects from acidification. Within a correlative modeling framework, we used modeled catchment-level base cation weathering (BCw) as the response variable to identify key environmental correlates and predict a continuous...
Du, E.; Cai, X.; Minsker, B. S.; Sun, Z.
2017-12-01
Flood warnings from various information sources are important for individuals making evacuation decisions during a flood event. In this study, we develop a general opinion dynamics model to simulate how individuals update their flood hazard awareness when exposed to multiple information sources, including global broadcast, social media, and observations of neighbors' actions. The opinion dynamics model is coupled with a traffic model to simulate the evacuation processes of a residential community with a given transportation network. Through various scenarios, we investigate how social media affect the opinion dynamics and evacuation processes. We find that stronger social media can make evacuation processes more sensitive to changes in global broadcast and neighbor observations and thus impose larger uncertainty on evacuation rates (i.e., a larger range of evacuation rates across sources of information). For instance, evacuation rates are lower when social media become more influential and individuals have less trust in global broadcast. Stubborn individuals can significantly affect the opinion dynamics and reduce evacuation rates. In addition, evacuation rates respond to the percentage of stubborn agents in a non-linear manner, i.e., above a threshold, the impact of stubborn agents is intensified by stronger social media. These results highlight the role of social media in flood evacuation processes and the need to monitor social media so that misinformation can be corrected in a timely manner. The joint impacts of social media, quality of flood warnings, and transportation capacity on evacuation rates are also discussed.
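The awareness-update mechanism described in this abstract can be sketched as a simple DeGroot-style rule with stubborn agents (the weighting scheme, parameter values, and three-agent network below are illustrative assumptions, not the authors' exact model):

```python
def update_awareness(awareness, neighbors, stubborn, w_broadcast, w_social, broadcast):
    """One synchronous update of flood-hazard awareness (values in [0, 1])."""
    new = list(awareness)
    for i, a in enumerate(awareness):
        if stubborn[i]:
            continue  # stubborn agents never revise their opinion
        nbr_mean = sum(awareness[j] for j in neighbors[i]) / len(neighbors[i])
        w_self = 1.0 - w_broadcast - w_social
        # Weighted average of own opinion, global broadcast, and neighbors.
        new[i] = w_self * a + w_broadcast * broadcast + w_social * nbr_mean
    return new

# Three agents on a line; agent 2 is stubborn with low awareness.
awareness = [0.2, 0.2, 0.1]
neighbors = [[1], [0, 2], [1]]
stubborn = [False, False, True]
for _ in range(50):
    awareness = update_awareness(awareness, neighbors, stubborn, 0.3, 0.3, 1.0)
```

Because agent 2 never revises its opinion, agents 0 and 1 settle below the broadcast value of 1.0, mirroring the finding that stubborn individuals depress overall awareness and hence evacuation rates.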
Exchange rate predictability and state-of-the-art models
Yeșin, Pınar
2016-01-01
This paper empirically evaluates the predictive performance of the International Monetary Fund's (IMF) exchange rate assessments with respect to future exchange rate movements. The assessments of real trade-weighted exchange rates were conducted from 2006 to 2011, and were based on three state-of-the-art exchange rate models with a medium-term focus which were developed by the IMF. The empirical analysis using 26 advanced and emerging market economy currencies reveals that the "diagnosis" of ...
Solid formation in piperazine rate-based simulation
DEFF Research Database (Denmark)
Gaspar, Jozsef; Thomsen, Kaj; von Solms, Nicolas
2014-01-01
of view but also from a modeling perspective. The present work develops a rate-based model for CO2 absorption and desorption in gas-liquid-solid systems, demonstrated for the piperazine CO2 capture process. This model is an extension of the DTU CAPCO2 model to precipitating systems. ... It uses the extended UNIQUAC thermodynamic model for phase equilibria and thermal property estimation. The mass and heat transfer phenomena are implemented in a film-model approach, based on second-order reaction kinetics. The transfer fluxes are calculated using the concentration of the dissolved...
Directory of Open Access Journals (Sweden)
Rocha-Leão M.H.M.
2003-01-01
In S. cerevisiae, catabolite repression controls glycogen accumulation and glucose consumption. Glycogen is responsible for stress resistance, and its accumulation under derepression conditions results in yeast of good quality. In yeast cells, catabolite repression, also named the glucose effect, acts at the transcriptional level, decreasing respiratory enzyme expression and causing the cells to enter a fermentative metabolism with low cell mass yield and yeast of poor quality. Since glucose is always present in molasses, the glucose effect occurs in industrial media. A quantitative characterization of cell growth, substrate consumption and glycogen formation was undertaken based on an unstructured macrokinetic model for a reg1/hex2 mutant, capable of respiration while growing on glucose, and its isogenic repressible strain (REG1/HEX2). The results show that the estimated value of the maximum specific glycogen accumulation rate (μG,MAX) is eight times greater in the reg1/hex2 mutant than in its isogenic strain, and the glucose affinity constant (K_SS) is five times greater in the reg1/hex2 mutant than in its isogenic strain, with less glucose uptake by the former channeling glucose into cell mass growth and glycogen accumulation simultaneously. This approach may be one more tool to improve glucose removal in yeast production. Thus, disruption of the REG1/HEX2 gene may constitute an important strategy for producing commercial yeast.
Energy Technology Data Exchange (ETDEWEB)
Ballester, Facundo, E-mail: Facundo.Ballester@uv.es [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Carlsson Tedgren, Åsa [Department of Medical and Health Sciences (IMH), Radiation Physics, Faculty of Health Sciences, Linköping University, Linköping SE-581 85, Sweden and Department of Medical Physics, Karolinska University Hospital, Stockholm SE-171 76 (Sweden); Granero, Domingo [Department of Radiation Physics, ERESA, Hospital General Universitario, Valencia E-46014 (Spain); Haworth, Annette [Department of Physical Sciences, Peter MacCallum Cancer Centre and Royal Melbourne Institute of Technology, Melbourne, Victoria 3000 (Australia); Mourtada, Firas [Department of Radiation Oncology, Helen F. Graham Cancer Center, Christiana Care Health System, Newark, Delaware 19713 (United States); Fonseca, Gabriel Paiva [Instituto de Pesquisas Energéticas e Nucleares – IPEN-CNEN/SP, São Paulo 05508-000, Brazil and Department of Radiation Oncology (MAASTRO), GROW, School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht 6201 BN (Netherlands); Zourari, Kyveli; Papagiannis, Panagiotis [Medical Physics Laboratory, Medical School, University of Athens, 75 MikrasAsias, Athens 115 27 (Greece); Rivard, Mark J. [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Siebert, Frank-André [Clinic of Radiotherapy, University Hospital of Schleswig-Holstein, Campus Kiel, Kiel 24105 (Germany); Sloboda, Ron S. [Department of Medical Physics, Cross Cancer Institute, Edmonton, Alberta T6G 1Z2, Canada and Department of Oncology, University of Alberta, Edmonton, Alberta T6G 2R3 (Canada); and others
2015-06-15
Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) ¹⁹²Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR ¹⁹²Ir source was designed based on commercially available sources as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic ¹⁹²Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR ¹⁹²Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by
Rantala, Olavi
1992-01-01
The paper presents a model of exchange rate movements within a specified exchange rate band enforced by central bank interventions. The model is based on the empirical observation that the exchange rate has usually been strictly inside the band, at least in Finland. In this model the distribution of the exchange rate is truncated lognormal from the edges towards the center of the band and hence quite different from the bimodal distribution of the standard target zone model. The model is estima...
Rain-rate data base development and rain-rate climate analysis
Crane, Robert K.
1993-01-01
The single-year rain-rate distribution data available within the archives of Consultative Committee for International Radio (CCIR) Study Group 5 were compiled into a data base for use in rain-rate climate modeling and for the preparation of predictions of attenuation statistics. The four-year set of tip-time sequences provided by J. Goldhirsh for locations near Wallops Island was processed to compile monthly and annual distributions of rain rate and of event durations for intervals above and below preset thresholds. A four-year data set of tropical rain-rate tip-time sequences was acquired from the NASA TRMM program for 30 gauges near Darwin, Australia. They were also processed for inclusion in the CCIR data base and the expanded data base for monthly observations at the University of Oklahoma. The empirical rain-rate distribution functions (EDFs) accepted for inclusion in the CCIR data base were used to estimate parameters for several rain-rate distribution models: the lognormal model, the Crane two-component model, and the three-parameter model proposed by Moupfuma. The intent of this segment of the study is to obtain a limited set of parameters that can be mapped globally for use in rain attenuation predictions. If the form of the distribution can be established, then perhaps available climatological data can be used to estimate the parameters rather than requiring years of rain-rate observations to set them. The two-component model provided the best fit to the Wallops Island data, but the Moupfuma model provided the best fit to the Darwin data.
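As a sketch of the simplest of the three candidate distributions, the lognormal model can be fitted to an empirical rain-rate sample by estimating the mean and standard deviation of the log rates (a minimal method-of-moments fit in log space; the sample values below are synthetic, not CCIR data):

```python
import math
import statistics

def fit_lognormal(rates):
    """Fit a lognormal rain-rate model: return (mu, sigma) of ln(rate)."""
    logs = [math.log(r) for r in rates if r > 0]
    return statistics.mean(logs), statistics.pstdev(logs)

def exceedance_prob(rate, mu, sigma):
    """P(R > rate) under the fitted lognormal, via the Gaussian tail."""
    z = (math.log(rate) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Synthetic sample whose log-rates are -1, 0, 1 (units mm/h, illustrative).
sample = [math.exp(-1.0), 1.0, math.exp(1.0)]
mu, sigma = fit_lognormal(sample)
p_exceed_median = exceedance_prob(1.0, mu, sigma)  # at the median this is 0.5
```

Fitting only two parameters per site is what makes the global-mapping goal in the abstract plausible: the parameters can be regressed against climatological covariates instead of requiring multi-year gauge records.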
Martingale Regressions for a Continuous Time Model of Exchange Rates
Guo, Zi-Yi
2017-01-01
One of the daunting problems in international finance is the weak explanatory power of existing theories of the nominal exchange rates, the so-called “foreign exchange rate determination puzzle”. We propose a continuous-time model to study the impact of order flow on foreign exchange rates. The model is estimated by a newly developed econometric tool based on a time-change sampling from calendar to volatility time. The estimation results indicate that the effect of order flow on exchange rate...
A MODEL OF RATING FOR BANKS IN ROMANIA
Directory of Open Access Journals (Sweden)
POPA ANAMARIA
2012-07-01
In this paper the authors present a rating model for the banking system. We took into account the records of 11 banks in Romania, based on annual financial reports. The model classified the banks into seven categories corresponding to the notes used by the Standard & Poor's and Moody's rating agencies.
MONETARY MODELS AND EXCHANGE RATE DETERMINATION ...
African Journals Online (AJOL)
Purchasing Power Parity [PPP], based on the law of one price, asserts that the change in the exchange rate between .... exchange in international economic transactions has made it vitally evident that the management of ... One lesson from this episode is to ...
Marchenko, S. S.; Genet, H.; Euskirchen, E. S.; Breen, A. L.; McGuire, A. D.; Rupp, S. T.; Romanovsky, V. E.; Bolton, W. R.; Walsh, J. E.
2016-12-01
The impact of climate warming on permafrost and the potential for climate feedbacks resulting from permafrost thawing have recently received a great deal of attention. Permafrost temperature has increased in most locations in the Arctic and Sub-Arctic during the past 30-40 years, with a typical increase of 1-3°C. The process-based permafrost dynamics model GIPL, developed in the Geophysical Institute Permafrost Lab and serving as the permafrost module of the Integrated Ecosystem Model (IEM), has been used to quantify the nature and rate of permafrost degradation and its impact on ecosystems, infrastructure, CO2 and CH4 fluxes, and net C storage following permafrost thaw across Alaska and Northwest Canada. The IEM project is a multi-institutional and multi-disciplinary effort aimed at understanding potential landscape, habitat and ecosystem change across the IEM domain. The IEM project also aims to tie three scientific models together, the Terrestrial Ecosystem Model (TEM), ALFRESCO (ALaska FRame-based EcoSystem Code) and GIPL, so that they exchange data at run-time. The models produce forecasts of future fire, vegetation, organic matter, permafrost and hydrology regimes. The climate forcing data are based on the historical CRU3.1 data set for the retrospective analysis period (1901-2009) and the CMIP3 CCCMA-CGCM3.1 and MPI-ECHAM5/MPI-OM climate models for the future period (2009-2100). All data sets were downscaled to a 1 km resolution using a differencing methodology (i.e., a delta method) and the Parameter-elevation Regressions on Independent Slopes Model (PRISM) climatology. We estimated the dynamics of permafrost temperature, active layer thickness, area occupied by permafrost, and volume of thawed soils across the IEM domain. The modeling results indicate how different types of ecosystems affect the thermal state of permafrost and its stability. Although the rate of soil warming and permafrost degradation in peatland areas are slower than
Micromechanical modeling of rate-dependent behavior of Connective tissues.
Fallah, A; Ahmadian, M T; Firozbakhsh, K; Aghdam, M M
2017-03-07
In this paper, a constitutive and micromechanical model for prediction of the rate-dependent behavior of connective tissues (CTs) is presented. Connective tissues are considered as a nonlinear viscoelastic material. The rate-dependent behavior of CTs is incorporated into the model using the well-known quasi-linear viscoelasticity (QLV) theory. A planar wavy representative volume element (RVE) is considered based on histological evidence of the tissue microstructure. The model parameters are identified from experiments available in the literature. The constitutive model is introduced into ABAQUS by means of a UMAT subroutine. Results show that monotonic uniaxial test predictions of the presented model at different strain rates for rat tail tendon (RTT) and human patellar tendon (HPT) are in good agreement with experimental data. Results of incremental stress-relaxation tests are also presented to investigate both the instantaneous and viscoelastic behavior of connective tissues. Copyright © 2017 Elsevier Ltd. All rights reserved.
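The QLV framework expresses stress as a hereditary integral of the elastic response against a reduced relaxation function. A discrete sketch is given below, using an exponential elastic law and a one-term Prony-series relaxation function (Fung's original reduced relaxation function uses exponential integrals; the Prony form and all parameter values here are simplifying assumptions):

```python
import math

def elastic_stress(strain, A=1.0, B=10.0):
    """Instantaneous elastic response, sigma_e = A * (exp(B * eps) - 1)."""
    return A * (math.exp(B * strain) - 1.0)

def qlv_stress(strain_history, dt, g_inf=0.6, g1=0.4, tau=1.0):
    """Discrete QLV hereditary integral:
    sigma(t_n) = sum_k G(t_n - t_k) * [sigma_e(eps_k) - sigma_e(eps_{k-1})],
    with reduced relaxation G(t) = g_inf + g1 * exp(-t / tau)."""
    sig_e = [elastic_stress(e) for e in strain_history]
    stress = []
    for n in range(len(strain_history)):
        s = 0.0
        for k in range(1, n + 1):
            g = g_inf + g1 * math.exp(-(n - k) * dt / tau)
            s += g * (sig_e[k] - sig_e[k - 1])
        stress.append(s)
    return stress

# Ramp to 10% strain in 1 s, then hold: stress peaks, then relaxes.
dt = 0.1
history = [0.01 * i for i in range(11)] + [0.1] * 100
stress = qlv_stress(history, dt)
```

Running a strain ramp followed by a hold reproduces the two behaviors probed by incremental stress-relaxation tests: an instantaneous peak at the end of the ramp, then relaxation toward the long-term fraction g_inf of the elastic stress.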
Leak rate models and leak detection
International Nuclear Information System (INIS)
1992-01-01
Leak detection may be carried out by a number of detection systems, but selection of the systems must be carefully adapted to the fluid state and the location of the leak in the reactor coolant system. Computer programs for the calculation of leak rates contain different models to take into account the fluid state before its entrance into the crack, and they have to be verified by experiments; agreement between experiments and calculations is generally not satisfactory for very small leak rates resulting from narrow cracks or from a closing bending moment
Zhao, Qingying; Li, Min; Luo, Jun
2017-12-04
In nanomachine applications for targeted drug delivery, drug molecules released by nanomachines propagate and chemically react with tumor cells in an aqueous environment. If the nanomachines release drug molecules faster than the tumor cells can react with them, drug molecules are lost and wasted. This raises a potential issue concerning the relationship among reaction rate, release rate and efficiency. This paper aims to investigate that relationship based on two drug reception models, with the goal of paving the way for designing a control method for drug release. We adopted two analytical methods: one models the drug reception process based on collision with tumor cells, and the other is based on Michaelis-Menten enzymatic kinetics. To evaluate the analytical formulations, we established simulations using the well-known simulation framework N3Sim. The analytical results for the relationship among reaction rate, release rate and efficiency match well with the numerical simulation results in a 3-D environment. Based upon the two drug reception models, the results of this paper would be beneficial for designing a control method for nanomachine-based drug release.
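The saturation logic behind the release-rate/reaction-rate trade-off can be illustrated with the Michaelis-Menten reception model (function names and numbers are illustrative; the collision-based model and the N3Sim simulations are not reproduced here):

```python
def mm_reaction_rate(conc, v_max, k_m):
    """Michaelis-Menten rate at which tumor cells take up drug molecules:
    v = v_max * C / (k_m + C), saturating at v_max for large C."""
    return v_max * conc / (k_m + conc)

def release_efficiency(release_rate, v_max):
    """Steady-state fraction of released molecules that react: reception
    capacity saturates at v_max, so release beyond it is lost and wasted."""
    return min(1.0, v_max / release_rate)

# At C = k_m the uptake rate is half of v_max.
half_rate = mm_reaction_rate(2.0, v_max=10.0, k_m=2.0)
# Releasing twice as fast as the cells can react wastes half the drug.
eff = release_efficiency(release_rate=20.0, v_max=10.0)
```

The design implication matches the abstract: a release controller should keep the release rate at or below the saturated reaction capacity to avoid wasting drug molecules.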
Directory of Open Access Journals (Sweden)
B. Orcutt
2008-11-01
Anaerobic oxidation of methane (AOM) is the main process responsible for the removal of methane generated in Earth's marine subsurface environments. However, the biochemical mechanism of AOM remains elusive. By explicitly resolving the observed spatial arrangement of methanotrophic archaea and sulfate-reducing bacteria found in consortia mediating AOM, potential intermediates involved in the electron transfer between the methane-oxidizing and sulfate-reducing partners were investigated via a consortium-scale reaction transport model that integrates the effect of diffusional transport with thermodynamic and kinetic controls on microbial activity. Model simulations were used to assess the impact of poorly constrained microbial characteristics such as minimum energy requirements to sustain metabolism and cell-specific rates. The role of environmental conditions such as the influence of methane levels on the feasibility of H₂, formate and acetate as intermediate species, and the impact of the abundance of intermediate species on pathway reversal, were examined. The results show that higher production rates of intermediates via AOM lead to increased diffusive fluxes from the methane-oxidizing archaea to the sulfate-reducing bacteria, but the build-up of the exchangeable species can cause the energy yield of AOM to drop below that required for ATP production. Comparison to data from laboratory experiments shows that under the experimental conditions of Nauhaus et al. (2007), none of the potential intermediates considered here is able to support metabolic activity matching the measured rates.
Empirical Model for Predicting Rate of Biogas Production | Adamu ...
African Journals Online (AJOL)
Rate of biogas production using cow manure as substrate was monitored in two laboratory scale batch reactors (13 liter and 108 liter capacities). Two empirical models based on the Gompertz and the modified logistic equations were used to fit the experimental data based on non-linear regression analysis using Solver tool ...
Hirata, Akimasa; Laakso, Ilkka; Oizumi, Takuya; Hanatani, Ryuto; Chan, Kwok Hung; Wiart, Joe
2013-02-21
According to the international safety guidelines/standard, the whole-body-averaged specific absorption rate (SAR) (Poljak et al 2003 IEEE Trans. Electromagn. Compat. 45 141-5) and the peak spatial-average SAR are used as metrics for human protection from whole-body and localized exposures, respectively. The IEEE standard (IEEE 2006 IEEE C95.1) indicates that the upper boundary frequency, over which the whole-body-averaged SAR is deemed to be the basic restriction, has been reduced from 6 to 3 GHz, because radio-wave energy is absorbed around the body surface as the frequency increases. However, no quantitative discussion has been provided to support this description, especially from the standpoint of temperature elevation. It is of interest to investigate the maximum temperature elevation in addition to the core temperature even for a whole-body exposure. In the present study, using anatomically based human models, we computed the SAR and the temperature elevation for a plane-wave exposure from 30 MHz to 6 GHz, taking into account the thermoregulatory response. As the primary result, we found that the ratio of the core temperature elevation to the whole-body-averaged SAR is almost frequency independent for frequencies below a few gigahertz; the ratio decreases above this frequency. At frequencies higher than a few gigahertz, core temperature elevation for the same whole-body-averaged SAR becomes lower due to heat convection from the skin to air. This lower core temperature elevation is attributable to skin temperature elevation caused by the power absorption around the body surface. Then, core temperature elevation even for a whole-body-averaged SAR of 4 W kg⁻¹ with a duration of 1 h was at most 0.8 °C, which is smaller than the threshold considered in the safety guidelines/standard. Further, the peak 10 g averaged SAR is correlated with the maximum body temperature elevations without extremities and pinna over the frequencies considered. These findings
Li, Jian; Fei, Ze-yuan; Xu, Yi-feng; Wang, Jie; Fan, Bing-feng; Ma, Xue-jin; Wang, Gang
2018-02-01
Metal-organic chemical vapour deposition (MOCVD) is a key technique for fabricating GaN thin-film structures for light-emitting diodes and semiconductor laser diodes. Film uniformity is an important index of equipment performance and chip processes. This paper introduces a method to improve the quality of thin films by optimizing the rotation speeds of the different substrates in a planetary GaN-MOCVD model consisting of seven 6-inch wafers. A numerical solution of the transient state at low pressure is obtained using computational fluid dynamics. To evaluate the role of the different zone speeds on growth uniformity, single-factor analysis is introduced. The results show that the growth rate and uniformity are strongly related to the rotational speed. Next, a response surface model was constructed using the variables and the corresponding simulation results. An optimized combination of the different speeds, balancing growth rate against growth uniformity, is then obtained from the response surface model and a genetic algorithm, and is proposed as a useful reference for industrial applications. This method saves time while achieving the most uniform and highest-quality thin films.
Predicting extinction rates in stochastic epidemic models
International Nuclear Information System (INIS)
Schwartz, Ira B; Billings, Lora; Dykman, Mark; Landsman, Alexandra
2009-01-01
We investigate the stochastic extinction processes in a class of epidemic models. Motivated by the process of natural disease extinction in epidemics, we examine the rate of extinction as a function of disease spread. We show that the effective entropic barrier for extinction in a susceptible–infected–susceptible epidemic model displays scaling with the distance to the bifurcation point, with an unusual critical exponent. We make a direct comparison between predictions and numerical simulations. We also consider the effect of non-Gaussian vaccine schedules, and show numerically how the extinction process may be enhanced when the vaccine schedules are Poisson distributed
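A minimal stochastic-simulation counterpart of the SIS extinction process discussed above can be written with the Gillespie algorithm (rates and population size below are arbitrary illustrative choices; the entropic-barrier scaling analysis itself is analytical and not reproduced here):

```python
import random

def sis_extinction_time(n, beta, gamma, i0, rng):
    """Gillespie simulation of an SIS epidemic until extinction (I = 0).

    Infection event rate: beta * S * I / N; recovery event rate: gamma * I.
    Returns the (random) extinction time.
    """
    t, i = 0.0, i0
    while i > 0:
        s = n - i
        rate_inf = beta * s * i / n
        rate_rec = gamma * i
        total = rate_inf + rate_rec
        t += rng.expovariate(total)       # waiting time to the next event
        if rng.random() < rate_inf / total:
            i += 1                        # infection
        else:
            i -= 1                        # recovery
    return t

rng = random.Random(1)
# Subcritical regime (R0 = beta/gamma = 0.5): extinction is fast.
times = [sis_extinction_time(100, 0.5, 1.0, 5, rng) for _ in range(200)]
mean_t = sum(times) / len(times)
```

Near the bifurcation point (R0 approaching 1 from above) the mean extinction time grows sharply with the distance to threshold, which is the regime the scaling result in the abstract describes.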
Tax Rate and Tax Base Competition for Foreign Direct Investment
Peter Egger; Horst Raff
2011-01-01
This paper argues that the large reduction in corporate tax rates and only gradual widening of tax bases in many countries over the last decades are consistent with tougher international competition for foreign direct investment (FDI). To make this point we develop a model in which governments compete for FDI using corporate tax rates and tax bases. The model's predictions regarding the slope of policy reaction functions and the response of equilibrium tax parameters to trade costs and mark...
Choi, Gi Heung; Loh, Byoung Gook
2017-06-01
Despite the recent efforts to prevent industrial accidents in the Republic of Korea, the industrial accident rate has not improved much. Industrial safety policies and safety management are also known to be inefficient. This study focused on dynamic characteristics of industrial safety systems and their effects on safety performance in the Republic of Korea. Such dynamic characteristics are particularly important for restructuring of the industrial safety system. The effects of damping and elastic characteristics of the industrial safety system model on safety performance were examined and feedback control performance was explained in view of cost and benefit. The implications on safety policies of restructuring the industrial safety system were also explored. A strong correlation between the safety budget and the industrial accident rate enabled modeling of an industrial safety system with these variables as the input and the output, respectively. A more effective and efficient industrial safety system could be realized by having weaker elastic characteristics and stronger damping characteristics in it. A substantial decrease in total social cost is expected as the industrial safety system is restructured accordingly. A simple feedback control with proportional-integral action is effective in prevention of industrial accidents. Securing a lower level of elastic industrial accident-driving energy appears to have dominant effects on the control performance compared with the damping effort to dissipate such energy. More attention needs to be directed towards physical and social feedbacks that have prolonged cumulative effects. Suggestions for further improvement of the safety system including physical and social feedbacks are also made.
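The abstract's mass-spring-damper analogy with proportional-integral feedback can be sketched as follows (the model form, equation, and every parameter value are illustrative assumptions, since the paper's actual calibration to Korean safety-budget data is not given here):

```python
def simulate_safety_system(kp, ki, target, x_free=10.0, m=1.0, c=3.0, k=1.0,
                           dt=0.01, steps=20000):
    """PI feedback on a second-order safety-system sketch:
    m*x'' + c*x' + k*x = k*x_free - u, where x is the accident rate,
    x_free the uncontrolled accident level, and u the safety-budget effort.
    Forward-Euler integration; returns the final accident rate."""
    x, v, integral = x_free, 0.0, 0.0
    for _ in range(steps):
        error = x - target
        integral += error * dt
        u = kp * error + ki * integral            # PI control action
        a = (k * x_free - u - c * v - k * x) / m  # acceleration of x
        v += a * dt
        x += v * dt
    return x

final_rate = simulate_safety_system(kp=5.0, ki=2.0, target=2.0)
```

Integral action drives the steady-state accident rate to the target regardless of the uncontrolled level x_free, while stronger damping c suppresses transient oscillation, consistent with the abstract's observation that damping characteristics dominate control performance.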
Mobarhan, Milad Hobbi; Halnes, Geir; Martínez-Cañada, Pablo; Hafting, Torkel; Fyhn, Marianne; Einevoll, Gaute T
2018-05-01
Visually evoked signals in the retina pass through the dorsal lateral geniculate nucleus (dLGN) on the way to the visual cortex. This is, however, not a simple feedforward flow of information: there is significant feedback from cortical cells back to both relay cells and interneurons in the dLGN. Despite four decades of experimental and theoretical studies, the functional role of this feedback is still debated. Here we use a firing-rate model, the extended difference-of-Gaussians (eDOG) model, to explore cortical feedback effects on visual responses of dLGN relay cells. For this model the responses are found by direct evaluation of two- or three-dimensional integrals, allowing for fast and comprehensive studies of putative effects of different candidate organizations of the cortical feedback. Our analysis identifies a special mixed configuration of excitatory and inhibitory cortical feedback which seems to best account for available experimental data. This configuration consists of (i) a slow (long-delay) and spatially widespread inhibitory feedback, combined with (ii) a fast (short-delay) and spatially narrow excitatory feedback, where (iii) the excitatory/inhibitory ON-ON connections are accompanied respectively by inhibitory/excitatory OFF-ON connections, i.e. following a phase-reversed arrangement. The recent development of optogenetic and pharmacogenetic methods has provided new tools for more precise manipulation and investigation of the thalamocortical circuit, in particular for mice. Such data will expectedly allow the eDOG model to be better constrained by data from specific animal model systems than has been possible until now for cat. We have therefore made the Python tool pyLGN, which allows for easy adaptation of the eDOG model to new situations.
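The spatial kernel at the heart of the eDOG model is the classical difference-of-Gaussians; a sketch of its static form is below (parameter values are illustrative; the full eDOG model adds temporal kernels and feedback loops evaluated as integrals, and is implemented in the authors' pyLGN tool):

```python
import math

def dog_response(r, a_c=1.0, sigma_c=0.25, a_s=0.85, sigma_s=0.83):
    """Difference-of-Gaussians receptive field at distance r from the center:
    a narrow excitatory center Gaussian minus a broad inhibitory surround."""
    def gauss(r, a, s):
        return (a / (math.pi * s * s)) * math.exp(-(r * r) / (s * s))
    return gauss(r, a_c, sigma_c) - gauss(r, a_s, sigma_s)

center = dog_response(0.0)    # center dominates: positive response
surround = dog_response(1.0)  # surround dominates: negative response
```

The center-surround antagonism this kernel encodes is exactly what cortical feedback modulates in the eDOG model, e.g. widespread inhibitory feedback effectively broadens the suppressive surround.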
Modelling of rate effects at multiple scales
DEFF Research Database (Denmark)
Pedersen, R.R.; Simone, A.; Sluys, L. J.
2008-01-01
At the macro- and meso-scales a rate-dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone, the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.
Death Rates in the Calorie Model
Directory of Open Access Journals (Sweden)
Martin Machay
2016-01-01
The Calorie model unifies the Classical demand and supply in the food market and hence solves the major problem of the Classical stationary state. It is, hence, a formalization of the Classical theory of population. The model does not reflect the imperfections of reality mentioned by Malthus himself. The aim of this brief paper is to relax some of the strong assumptions of the Calorie model to make it more realistic. As the results show, the political economists were correct: death resulting from malnutrition can occur well before the stationary state itself. Moreover, progressive and retrograde movements can be easily described by the death rate derived in the paper. JEL Classification: J11, Q11, Q15, Q21, Y90.
Variable selection for mixture and promotion time cure rate models.
Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng
2016-11-16
Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.
Directory of Open Access Journals (Sweden)
Santosh Subedi
2015-08-01
Soil fertility is an important component of forest ecosystems, yet evaluating soil fertility remains one of the least understood aspects of forest science. We hypothesized that the fertility rating (FR) used in the model 3-PG could be predicted from site index (SI) for loblolly pine in the southeastern US, and we developed a method to predict FR from SI to test this hypothesis. Our results indicate that FR values derived from SI, when used in 3-PG, explain 89% of the variation in loblolly pine yield. The USDA SSURGO dataset contains SI values for loblolly pine for the major soil series in most of the counties in the southeastern US. The potential of using SI from SSURGO data to predict regional productivity of loblolly pine was assessed by comparing SI values from SSURGO with field inventory data in the study sites. When the 3-PG model was used with FR values derived from SSURGO SI values to predict loblolly pine productivity across broader regions, the model provided realistic outputs of loblolly pine productivity. The results of this study show that FR values can be estimated from SI and used in 3-PG to predict loblolly pine productivity in the southeastern US.
Wang, Jun; Yue, Yun; Wang, Yi; Ichoku, Charles; Ellison, Luke; Zeng, Jing
2018-01-01
Largely used in several independent estimates of fire emissions, fire products based on MODIS sensors aboard the Terra and Aqua polar-orbiting satellites have a number of inherent limitations, including (a) inability to detect fires below clouds, (b) significant decrease of detection sensitivity at the edge of scan, where pixel sizes are much larger than at nadir, and (c) gaps between adjacent swaths in tropical regions. To remedy these limitations, an empirical method is developed here and applied to correct fire emission estimates based on MODIS pixel-level fire radiative power measurements and emission coefficients from the Fire Energetics and Emissions Research (FEER) biomass burning emission inventory. The analysis was performed for January 2010 over the northern sub-Saharan African region. Simulations from the WRF-Chem model using original and adjusted emissions are compared with the aerosol optical depth (AOD) products from MODIS and AERONET as well as aerosol vertical profiles from CALIOP data. The comparison confirmed a 30-50% improvement in the model simulation performance (in terms of correlation, bias, and spatial pattern of AOD with respect to observations) by the adjusted emissions, which not only increase the original emission amount by a factor of two but also yield spatially continuous estimates of instantaneous fire emissions at daily time scales. Such improvement cannot be achieved by simply scaling the original emissions across the study domain. Even with this improvement, a factor-of-two underestimation still exists in the modeled AOD, which is within the current global fire emissions uncertainty envelope.
Annonaceae substitution rates: a codon model perspective
Directory of Open Access Journals (Sweden)
Lars Willem Chatrou
2014-01-01
The Annonaceae includes cultivated species of economic interest and represents an important source of information for better understanding the evolution of tropical rainforests. In phylogenetic analyses of DNA sequence data that are used to address evolutionary questions, it is imperative to use appropriate statistical models. Annonaceae are cases in point: two sister clades, the subfamilies Annonoideae and Malmeoideae, contain the majority of Annonaceae species diversity. The Annonoideae generally show a greater degree of sequence divergence compared to the Malmeoideae, resulting in stark differences in branch lengths in phylogenetic trees. Uncertainty in how to interpret and analyse these differences has led to inconsistent results when estimating the ages of clades in Annonaceae using molecular dating techniques. We ask whether these differences may be attributed to inappropriate modelling assumptions in the phylogenetic analyses. Specifically, we test for (clade-specific) differences in rates of non-synonymous and synonymous substitutions. A high ratio of non-synonymous to synonymous substitutions may lead to similarity of DNA sequences due to convergence instead of common ancestry, and as a result confound phylogenetic analyses. We use a dataset of three chloroplast genes (rbcL, matK, ndhF) for 129 species representative of the family. We find that differences in branch lengths between major clades are not attributable to different rates of non-synonymous and synonymous substitutions. The differences in evolutionary rate between the major clades of Annonaceae pose a challenge for current molecular dating techniques and should be seen as a warning for the interpretation of such results in other organisms.
Ohsumi, Akihiro; Hamasaki, Akihiro; Nakagawa, Hiroshi; Yoshida, Hiroe; Shiraiwa, Tatsuhiko; Horie, Takeshi
2007-02-01
Identification of physiological traits associated with leaf photosynthetic rate (Pn) is important for improving the potential productivity of rice (Oryza sativa). The objectives of this study were to develop a model which can explain genotypic variation and ontogenetic change of Pn in rice under optimal conditions as a function of leaf nitrogen content per unit area (N) and stomatal conductance (g(s)), and to quantify the effects of interaction between N and g(s) on the variation of Pn. Pn, N and g(s) were measured at different developmental stages for the topmost fully expanded leaves in ten rice genotypes with diverse backgrounds grown in pots (2002) and in the field (2001 and 2002). A model of Pn that accounts for carboxylation and CO2 diffusion processes, and assumes that the ratio of internal conductance to g(s) is constant, was constructed, and its goodness of fit was examined. Considerable genotypic differences in Pn were evident for rice throughout development in both the pot and field experiments. The genotypic variation of Pn was correlated with that of g(s) at a given stage, and the change of Pn with plant development was closely related to the change of N. The variation of g(s) among genotypes was independent of that of N. The model explained well the variation in Pn of the ten genotypes grown under different conditions at different developmental stages. Conclusions: The response of Pn to increased N differs with g(s), and the increase in Pn of genotypes with low g(s) is smaller than that of genotypes with high g(s). Therefore, simultaneous improvement of these two traits is essential for effective breeding of rice genotypes with increased Pn.
A model of clearance rate regulation in mussels
Fréchette, Marcel
2012-10-01
Clearance rate regulation has been modelled as an instantaneous response to food availability, independent of the internal state of the animals. This view is incompatible with latent effects during ontogeny and phenotypic flexibility in clearance rate. Internal-state regulation of clearance rate is required to account for these patterns. Here I develop a model of internal-state-based regulation of clearance rate. External factors such as suspended sediments are included in the model. To assess the relative merits of instantaneous regulation and internal-state regulation, I modelled blue mussel clearance rate and growth using a DEB model. In the standard feeding module, feeding is governed by a Holling Type II response to food concentration. In the internal-state feeding module, gill ciliary activity and thus clearance rate are driven by the internal reserve level. Factors such as suspended sediments were not included in the simulations. The two feeding modules were compared on the basis of their ability to capture the impact of latent effects, of environmental heterogeneity in food abundance and of physiological flexibility on clearance rate and individual growth. The Holling feeding module was unable to capture the effect of any of these sources of variability. In contrast, the internal-state feeding module did so without any modification or ad hoc calibration. Latent effects, however, appeared transient. With simple annual variability in temperature and food concentration, the relationship between clearance rate and food availability predicted by the internal-state feeding module was quite similar to that observed in Norwegian fjords. I conclude that, in contrast with the usual Holling feeding module, internal-state regulation of clearance rate is consistent with well-documented growth and clearance rate patterns.
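A minimal sketch of the contrast the abstract draws, under hypothetical parameter values (half-saturation constant, maintenance rate, food step): a Holling Type II response reacts only to instantaneous food, while a reserve-driven response carries a memory of past feeding conditions through the internal state:

```python
import numpy as np

def simulate(feeding, X, dt=0.1, T=100.0):
    """Euler integration of a reserve density e driven by food X(t);
    returns the feeding (clearance-proportional) response over time."""
    e, out = 0.5, []
    for t in np.arange(0.0, T, dt):
        f = feeding(X(t), e)
        e += dt * (f - 0.1 * e)   # intake minus a maintenance drain
        out.append(f)
    return np.array(out)

X = lambda t: 1.0 if t < 50 else 0.2               # step drop in food at t = 50
holling = lambda x, e: x / (x + 0.5)               # instantaneous Type II response
internal = lambda x, e: (1.0 - e) * x / (x + 0.5)  # response throttled by reserve level

h = simulate(holling, X)
i = simulate(internal, X)
```

Under constant food the Holling response is flat, whereas the reserve-driven response keeps changing as the internal state equilibrates — the kind of transient "latent effect" the abstract describes.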
ECONOMETRIC APPROACH TO DIFFERENCE EQUATIONS MODELING OF EXCHANGE RATES CHANGES
Directory of Open Access Journals (Sweden)
Josip Arnerić
2010-12-01
Time series models that are commonly used in econometric modeling are autoregressive stochastic linear models (AR) and moving average models (MA). By their structure, these models are stochastic difference equations. Therefore, the objective of this paper is to estimate difference equations containing a stochastic (random) component. The estimated time series models will be used to forecast the observed data into the future. Namely, solutions of difference equations are closely related to the stationarity conditions of time series models. Based on the fact that volatility is time-varying in high-frequency data and that periods of high volatility tend to cluster, the most successful and popular models for modeling time-varying volatility are GARCH-type models and their variants. However, GARCH models will not be analyzed here, because the purpose of this research is to predict the value of the exchange rate in levels within a conditional mean equation and to determine whether the observed variable has a stable or explosive time path. Based on the estimated difference equation, it will be examined whether Croatia is implementing a stable exchange rate policy.
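The core idea — estimate the stochastic difference equation and read stability off the autoregressive coefficient — can be sketched as follows; the series here is synthetic (not the Croatian exchange-rate data), with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate a stable AR(1) difference equation: y_t = 0.2 + 0.8 * y_{t-1} + e_t
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.2 + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

# OLS estimate of y_t = c + phi * y_{t-1} + e_t
Y = y[1:]
X = np.column_stack([np.ones(499), y[:-1]])
c_hat, phi_hat = np.linalg.lstsq(X, Y, rcond=None)[0]

# The time path is stable iff |phi| < 1; the long-run level is c / (1 - phi)
stable = abs(phi_hat) < 1
long_run = c_hat / (1 - phi_hat)
```

An estimated |phi| < 1 implies a stable (mean-reverting) path toward c/(1−phi); |phi| ≥ 1 would indicate an explosive path — exactly the diagnostic the paper applies to the exchange rate in levels.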
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates caused by changes in rain statistics due to 1) evolution of the official algorithms used to process the data, and 2) differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
Prediction of interest rate using CKLS model with stochastic parameters
International Nuclear Information System (INIS)
Ying, Khor Chia; Hin, Pooi Ah
2014-01-01
The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set of observed interest rates at the j′-th time points where j≤j′≤j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the same four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
Prediction of interest rate using CKLS model with stochastic parameters
Energy Technology Data Exchange (ETDEWEB)
Ying, Khor Chia [Faculty of Computing and Informatics, Multimedia University, Jalan Multimedia, 63100 Cyberjaya, Selangor (Malaysia); Hin, Pooi Ah [Sunway University Business School, No. 5, Jalan Universiti, Bandar Sunway, 47500 Subang Jaya, Selangor (Malaysia)
2014-06-19
The Chan, Karolyi, Longstaff and Sanders (CKLS) model is a popular one-factor model for describing spot interest rates. In this paper, the four parameters in the CKLS model are regarded as stochastic. The parameter vector φ^(j) of four parameters at the (j+n)-th time point is estimated from the j-th window, defined as the set of observed interest rates at the j′-th time points where j≤j′≤j+n. To model the variation of φ^(j), we assume that φ^(j) depends on φ^(j−m), φ^(j−m+1), …, φ^(j−1) and the interest rate r_(j+n) at the (j+n)-th time point via a four-dimensional conditional distribution derived from a [4(m+1)+1]-dimensional power-normal distribution. Treating the (j+n)-th time point as the present time point, we find a prediction interval for the future value r_(j+n+1) of the interest rate at the next time point when the value r_(j+n) of the interest rate is given. From the same four-dimensional conditional distribution, we also find a prediction interval for the future interest rate r_(j+n+d) at the next d-th (d≥2) time point. The prediction intervals based on the CKLS model with stochastic parameters are found to cover the observed future interest rates better than those based on the model with fixed parameters.
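For reference, the fixed-parameter CKLS diffusion dr = (α + βr)dt + σ·r^γ dW can be simulated with a simple Euler-Maruyama scheme; the parameter values below are hypothetical, and the paper's stochastic-parameter windows and power-normal prediction intervals are not reproduced here:

```python
import numpy as np

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt=1 / 252, n=2520, seed=0):
    """Euler-Maruyama path of the CKLS short-rate model
    dr = (alpha + beta * r) dt + sigma * r**gamma dW."""
    rng = np.random.default_rng(seed)
    r = np.empty(n + 1)
    r[0] = r0
    for t in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        r[t + 1] = r[t] + (alpha + beta * r[t]) * dt + sigma * r[t] ** gamma * dW
        r[t + 1] = max(r[t + 1], 1e-8)  # crude floor to keep the rate positive
    return r

# Mean-reverting toward alpha / (-beta) = 0.05 under these hypothetical parameters
path = simulate_ckls(r0=0.10, alpha=0.01, beta=-0.2, sigma=0.1, gamma=0.5)
```

With γ = 0.5 this reduces to a CIR-type diffusion; γ is one of the four parameters the paper lets vary stochastically over time.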
Constitutive law for seismicity rate based on rate and state friction: Dieterich 1994 revisited.
Heimisson, E. R.; Segall, P.
2017-12-01
Dieterich [1994] derived a constitutive law for seismicity rate based on rate and state friction, which has been applied widely to aftershocks, earthquake triggering, and induced seismicity in various geological settings. Here, this influential work is revisited and re-derived in a more straightforward manner. By virtue of this new derivation the model is generalized to include changes in effective normal stress associated with background seismicity. Furthermore, the general case when seismicity rate is not constant under constant stressing rate is formulated. The new derivation directly provides practical integral expressions for the cumulative number of events and the rate of seismicity for arbitrary stressing history. Arguably, the most prominent limitation of Dieterich's 1994 theory is the assumption that seismic sources do not interact. Here we derive a constitutive relationship that considers source interactions between sub-volumes of the crust, where the stress in each sub-volume is assumed constant. Interactions are considered both under constant stressing rate conditions and for arbitrary stressing history. This theory can be used to model seismicity rate due to stress changes, or to estimate stress changes using observed seismicity from triggered earthquake swarms, where earthquake interactions and magnitudes are taken into account. We identify special conditions under which the influence of interactions cancels and the predictions reduce to those of Dieterich 1994. This remarkable result may explain the apparent success of the model when applied to observations of triggered seismicity. This approach has application to understanding and modeling induced and triggered seismicity, and the quantitative interpretation of geodetic and seismic data. It enables simultaneous modeling of geodetic and seismic data in a self-consistent framework. To date, physics-based modeling of seismicity with or without geodetic data has been found to give insight into various processes
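Dieterich's (1994) closed-form solution for the seismicity rate after a sudden stress step Δτ, under constant background stressing rate, is R/r = [1 + (exp(−Δτ/aσ) − 1)·exp(−t/t_a)]^(−1) with aftershock timescale t_a = aσ/τ̇. A sketch with hypothetical parameter values (this is the original non-interacting formulation, not the paper's generalization with source interactions):

```python
import numpy as np

def dieterich_rate(t, dtau, a_sigma, tau_dot, r=1.0):
    """Seismicity rate after a stress step dtau (Dieterich 1994),
    relative to background rate r, under constant stressing rate tau_dot."""
    t_a = a_sigma / tau_dot  # characteristic aftershock decay time
    return r / ((np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a) + 1.0)

# Hypothetical values: step of 0.5 MPa, a*sigma = 0.1 MPa, tau_dot = 0.01 MPa/yr
t = np.linspace(0.0, 10.0, 1000)  # years
R = dieterich_rate(t, dtau=0.5, a_sigma=0.1, tau_dot=0.01)
```

A positive step produces an immediate rate jump of exp(Δτ/aσ) over background, followed by an Omori-like decay back toward r over the timescale t_a.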
Modeling Equity for Alternative Water Rate Structures
Griffin, R.; Mjelde, J.
2011-12-01
The rising popularity of increasing block rates for urban water runs counter to mainstream economic recommendations, yet decision makers in rate design forums are attracted to the notion of higher prices for larger users. Among economists, it is widely appreciated that uniform rates have stronger efficiency properties than increasing block rates, especially when volumetric prices incorporate intrinsic water value. Yet, except for regions where water market purchases have forced urban authorities to include water value in water rates, economic arguments have weakly penetrated policy. In this presentation, recent evidence will be reviewed regarding long-term trends in urban rate structures while observing economic principles pertaining to these choices. The main objective is to investigate the equity of increasing block rates as contrasted to uniform rates for a representative city. Using data from four Texas cities, household water demand is established as a function of marginal price, income, weather, number of residents, and property characteristics. Two alternative rate proposals are designed on the basis of recent experiences for both water and wastewater rates. After specifying a reasonable number (~200) of diverse households populating the city and parameterizing each household's characteristics, every household's consumption selections are simulated for twelve months. This procedure is repeated for both rate systems. Monthly water and wastewater bills are also computed for each household. Most importantly, while balancing the budget of the city utility, we compute the effect of switching rate structures on the welfare of households of differing types. Some of the empirical findings are as follows. Under conditions of absent water scarcity, households of opposing characters such as low versus high income do not have strong preferences regarding rate structure selection. This changes as water scarcity rises and as water's opportunity costs are allowed to
Pipe fracture evaluations for leak-rate detection: Probabilistic models
International Nuclear Information System (INIS)
Rahman, S.; Wilkowski, G.; Ghadiali, N.
1993-01-01
This is the second in a series of three papers generated from studies on nuclear pipe fracture evaluations for leak-rate detection. This paper focuses on the development of novel probabilistic models for stochastic performance evaluation of degraded nuclear piping systems. This was accomplished in three distinct stages. First, a statistical analysis was conducted to characterize various input variables for thermo-hydraulic analysis and elastic-plastic fracture mechanics, such as material properties of pipe, crack morphology variables, and location of cracks found in nuclear piping. Second, a new stochastic model was developed to evaluate the performance of degraded piping systems. It is based on accurate deterministic models for thermo-hydraulic and fracture mechanics analyses described in the first paper, statistical characterization of various input variables, and state-of-the-art methods of modern structural reliability theory. From this model, the conditional probability of failure as a function of leak-rate detection capability of the piping systems can be predicted. Third, a numerical example was presented to illustrate the proposed model for piping reliability analyses. Results clearly showed that the model provides satisfactory estimates of conditional failure probability with much less computational effort than Monte Carlo simulation. The probabilistic model developed in this paper will be applied to various piping in boiling water reactor and pressurized water reactor plants for leak-rate detection applications.
Base Station Performance Model
Walsh, Barbara; Farrell, Ronan
2005-01-01
At present the testing of power amplifiers within base station transmitters is limited to testing at component level as opposed to testing at the system level. While the detection of catastrophic failure is possible, that of performance degradation is not. This paper proposes a base station model with respect to transmitter output power with the aim of introducing system level monitoring of the power amplifier behaviour within the base station. Our model reflects the expe...
Neural Networks Modelling of Municipal Real Estate Market Rent Rates
Directory of Open Access Journals (Sweden)
Muczyński Andrzej
2016-12-01
This paper presents the results of research on the application of neural network modelling to municipal real estate market rent rates. The test procedure was based on selected networks trained on local real estate market data and transformation of the detected dependencies – through the established models – to estimate the potential market rent rates of municipal premises. On this basis, the adequacy of the actual market rent rates of municipal properties was assessed. Empirical research was conducted on the local real estate market of the city of Olsztyn in Poland. In order to describe the formation of market rent rates, a unidirectional (feed-forward) three-layer network and a radial basis function network were selected. Analyses showed a relatively low degree of convergence between the actual municipal rents and potential market rent rates. This degree varied strongly depending on the type of business run on the property and its social and economic impact. The applied research methodology and the obtained results can be used to rationalize municipal property management, including the activation of rental policy.
Model Uncertainty and Exchange Rate Forecasting
R.R.P. Kouwenberg (Roy); A. Markiewicz (Agnieszka); R. Verhoeks (Ralph); R.C.J. Zwinkels (Remco)
2013-01-01
textabstractWe propose a theoretical framework of exchange rate behavior where investors focus on a subset of economic fundamentals. We find that any adjustment in the set of predictors used by investors leads to changes in the relation between the exchange rate and fundamentals. We test the
Kiss, S.; Sarfraz, M.
2004-01-01
Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling
Strain Rate Dependant Material Model for Orthotropic Metals
International Nuclear Information System (INIS)
Vignjevic, Rade
2016-01-01
In manufacturing processes anisotropic metals are often exposed to loading at high strain rates in the range from 10^2 s^-1 to 10^6 s^-1 (e.g. stamping, cold spraying and explosive forming). These types of loading often involve generation and propagation of shock waves within the material. The material behaviour under such complex loading needs to be accurately modelled in order to optimise the manufacturing process and achieve appropriate properties of the manufactured component. The presented research relates to the development and validation of a thermodynamically consistent, physically based constitutive model for metals under high-rate loading. The model is capable of modelling damage, failure, and the formation and propagation of shock waves in anisotropic metals. The model has two main parts: the strength part, which defines the material response to shear deformation, and an equation of state (EOS), which defines the material response to isotropic volumetric deformation [1]. The constitutive model was implemented into the transient nonlinear finite element code DYNA3D [2] and our in-house SPH code. Limited model validation was performed by simulating a number of high-velocity material characterisation and validation impact tests. The new damage model was developed in the framework of configurational continuum mechanics and irreversible thermodynamics with internal state variables. The use of the multiplicative decomposition of the deformation gradient makes the model applicable to arbitrary plastic and damage deformations. To account for the physical mechanisms of failure, the concept of thermally activated damage initially proposed by Tuller and Bucher [3] and Klepaczko [4] was adopted as the basis for the new damage evolution model. This makes the proposed damage/failure model compatible with the Mechanical Threshold Strength (MTS) model of Follansbee and Kocks [5] and Chen and Gray [6], which was used to control evolution of flow stress during plastic
Lightning-produced nitrogen oxides (NOX=NO+NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past dec...
Further Results on Dynamic Additive Hazard Rate Model
Directory of Open Access Journals (Sweden)
Zhengcheng Zhang
2014-01-01
In the past, proportional and additive hazard rate models have been investigated in the literature. Nanda and Das (2011) introduced and studied the dynamic proportional (reversed) hazard rate model. In this paper we study the dynamic additive hazard rate model and investigate its aging properties for different aging classes. The closure of the model under some stochastic orders has also been investigated. Some examples are given to illustrate the different aging properties and stochastic comparisons of the model.
On a Corporate Bond Pricing Model with Credit Rating Migration Risks and Stochastic Interest Rate
Directory of Open Access Journals (Sweden)
Jin Liang
2017-10-01
In this paper we study a corporate bond-pricing model with credit rating migration and a stochastic interest rate. The volatility of the bond price in the model depends strongly on potential credit rating migration and stochastic changes in the interest rate. This new model improves previous models, in which the interest rate is considered to be a constant. The existence, uniqueness and regularity of the solution for the model are established. Moreover, some properties including the smoothness of the free boundary are obtained. Furthermore, some numerical computations are presented to illustrate the theoretical results.
Modeling Electric Discharges with Entropy Production Rate Principles
Directory of Open Access Journals (Sweden)
Thomas Christen
2009-12-01
Under which circumstances are variational principles based on entropy production rate useful tools for modeling steady states of electric (gas) discharge systems far from equilibrium? It is first shown how various different approaches, such as Steenbeck's minimum voltage and Prigogine's minimum entropy production rate principles, are related to the maximum entropy production rate principle (MEPP). Secondly, three typical examples are discussed, which provide a certain insight into the structure of the models that are candidates for MEPP application. It is then thirdly argued that MEPP, although not being an exact physical law, may provide reasonable model parameter estimates, provided the constraints contain the relevant (nonlinear) physical effects and the parameters to be determined are related to disregarded weak constraints that affect mainly global entropy production. Finally, it is additionally conjectured that a further reason for the success of MEPP in certain far-from-equilibrium systems might be based on a hidden linearity of the underlying kinetic equation(s).
Modeling and predicting historical volatility in exchange rate markets
Lahmiri, Salim
2017-04-01
Volatility modeling and forecasting of currency exchange rates is an important task in several areas of business risk management, including treasury risk management, derivatives pricing, and portfolio risk evaluation. The purpose of this study is to present a simple and effective approach for predicting the historical volatility of currency exchange rates. The approach is based on a limited set of technical indicators used as inputs to artificial neural networks (ANN). To show the effectiveness of the proposed approach, it was applied to forecast US/Canada and US/Euro exchange rate volatilities. The forecasting results show that our simple approach outperformed the conventional GARCH and EGARCH with different distribution assumptions, and also the hybrid GARCH and EGARCH with ANN, in terms of mean absolute error, mean squared error, and Theil's inequality coefficient. Because of its simplicity and effectiveness, the approach is promising for US currency volatility prediction tasks.
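The GARCH(1,1) benchmark the study compares against reduces to a one-line conditional-variance recursion. A minimal sketch with hypothetical parameters on simulated returns (not the paper's US/Canada or US/Euro data, and without the maximum-likelihood estimation step):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(2)
# Simulate returns from a GARCH(1,1), then filter the variance back out
omega, alpha, beta = 1e-6, 0.05, 0.90
n = 2000
r = np.empty(n)
s2 = omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2
sigma2 = garch11_variance(r, omega, alpha, beta)
```

The filtered sigma2 series is the volatility forecast the ANN approach is benchmarked against; alpha + beta < 1 guarantees a finite unconditional variance omega/(1 − alpha − beta).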
Latency-Rate servers & Dataflow models
Wiggers, M.H.; Bekooij, Marco; Bekooij, Marco Jan Gerrit
2006-01-01
In the signal processing domain, dataflow graphs [2] [10] and their associated analysis techniques are a well-accepted modeling paradigm. The vertices of a dataflow graph represent functionality and are called actors, while the edges model which actors communicate with each other. Traditionally,
Sensitivity of tropospheric heating rates to aerosols: A modeling study
International Nuclear Information System (INIS)
Hanna, A.F.; Shankar, U.; Mathur, R.
1994-01-01
The effect of aerosols on the radiation balance is critical to the energetics of the atmosphere. Because of the relatively long residence of specific types of aerosols in the atmosphere and their complex thermal and chemical interactions, understanding their behavior is crucial for understanding global climate change. The authors used the Regional Particulate Model (RPM) to simulate aerosols in the eastern United States in order to identify the aerosol characteristics of specific rural and urban areas; these characteristics include size, concentration, and vertical profile. A radiative transfer model based on an improved δ-Eddington approximation with 26 spectral intervals spanning the solar spectrum was then used to analyze the tropospheric heating rates associated with these different aerosol distributions. The authors compared heating rates forced by differences in surface albedo associated with different land-use characteristics, and found that tropospheric heating and surface cooling are sensitive to surface properties such as albedo.
Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate
Directory of Open Access Journals (Sweden)
Minh Vu Trieu
2017-03-01
This paper presents statistical analyses of rock engineering properties and the measured penetration rate of a tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties, including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness, on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of the TBM. Finally, a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple-variable nonlinear regression model.
Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate
Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno
2017-03-01
This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four (4) statistical regression models (two linear and two nonlinear) are built to predict the ROP of TBM. Finally a fuzzy logic model is developed as an alternative method and compared to the four statistical regression models. Results show that the fuzzy logic model provides better estimations and can be applied to predict the TBM performance. The R-squared value (R2) of the fuzzy logic model scores the highest value of 0.714 over the second runner-up of 0.667 from the multiple variables nonlinear regression model.
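The multiple-variable linear regression baseline can be sketched in a few lines; the data below are synthetic stand-ins for the project's UCS/BTS/BI/DPW/Alpha measurements, with a hypothetical linear ground truth, and the fuzzy logic model is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
# Synthetic stand-ins for the five predictors: UCS, BTS, BI, DPW, Alpha
X = rng.uniform([50, 5, 20, 0.1, 10], [200, 15, 60, 2.0, 80], size=(n, 5))
# Hypothetical relation: ROP falls with rock strength, rises with the alpha angle
rop = (4.0 - 0.01 * X[:, 0] - 0.05 * X[:, 1] + 0.02 * X[:, 4]
       + 0.1 * rng.standard_normal(n))

A = np.column_stack([np.ones(n), X])          # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, rop, rcond=None)
pred = A @ coef
# Coefficient of determination, the R2 the paper uses to rank models
r2 = 1.0 - np.sum((rop - pred) ** 2) / np.sum((rop - np.mean(rop)) ** 2)
```

The paper's comparison amounts to computing this R2 for each candidate model (linear, nonlinear, fuzzy) on the same project data and ranking them.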
Modeling emission rates and exposures from outdoor cooking
Edwards, Rufus; Princevac, Marko; Weltman, Robert; Ghasemian, Masoud; Arora, Narendra K.; Bond, Tami
2017-09-01
Approximately 3 billion individuals rely on solid fuels for cooking globally. For a large portion of these - an estimated 533 million - cooking is outdoors, where emissions from cookstoves pose a health risk to both cooks and other household and village members. Models that estimate emission rates from stoves in indoor environments that would meet WHO air quality guidelines (AQG) explicitly do not account for outdoor cooking. The objective of this paper is to link health-based exposure guidelines with emissions from outdoor cookstoves, using a Monte Carlo simulation of cooking times from Haryana, India, coupled with inverse Gaussian dispersion models. Mean emission rates for outdoor cooking that would result in incremental increases in personal exposure equivalent to the WHO AQG during a 24-h period were 126 ± 13 mg/min for cooking while squatting and 99 ± 10 mg/min while standing. Emission rates modeled for outdoor cooking are substantially higher than emission rates for indoor cooking to meet the AQG, because the models estimate the impact of emissions on personal exposure concentrations rather than microenvironment concentrations, and because the smoke disperses more readily outdoors than in indoor environments. As a result, many more stoves, including the best-performing solid-fuel biomass stoves, would meet the AQG when cooking outdoors, but may also cause substantial localized neighborhood pollution depending on housing density. The neighborhood impact of pollution should be addressed more formally, both in guidelines on emission rates from stoves that would be protective of health and in wider health impact evaluation efforts and burden-of-disease estimates. Emissions guidelines should better represent the different contexts in which stoves are being used, especially because in these contexts the best-performing solid-fuel stoves have the potential to provide significant benefits.
Tantalum strength model incorporating temperature, strain rate and pressure
Lim, Hojun; Battaile, Corbett; Brown, Justin; Lane, Matt
Tantalum is a body-centered-cubic (BCC) refractory metal that is widely used in many applications in high-temperature, high-strain-rate and high-pressure environments. In this work, we propose a physically based strength model for tantalum that incorporates the effects of temperature, strain rate and pressure. A constitutive model for single-crystal tantalum is developed based on dislocation kink-pair theory and calibrated to measurements on single-crystal specimens. The model is then used to predict deformations of single- and polycrystalline tantalum. In addition, the proposed strength model is implemented into Sandia's ALEGRA solid dynamics code to predict plastic deformations of tantalum in engineering-scale applications at extreme conditions, e.g. Taylor impact tests and the Z machine's high-pressure ramp compression tests, and the results are compared with available experimental data. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Model Based Temporal Reasoning
Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.
1988-03-01
Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model-based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this; if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with the models indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model-based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.
Sideridis, Georgios; Padeliadu, Susana
2013-01-01
The purpose of the present studies was to provide the means to create brief versions of instruments that can aid the diagnosis and classification of students with learning disabilities and comorbid disorders (e.g., attention-deficit/hyperactivity disorder). A sample of 1,108 students with and without a diagnosis of learning disabilities took part in study 1. Using information from modern theory methods (i.e., the Rasch model), a scale was created that included fewer than one third of the original battery items designed to assess reading skills. This best item synthesis was then evaluated for its predictive and criterion validity with a valid external reading battery (study 2). Using a sample of 232 students with and without learning disabilities, results indicated that the brief version of the scale was as effective as the original scale in predicting reading achievement. Analysis of the content of the brief scale indicated that the best item synthesis involved items from cognition, motivation, strategy use, and advanced reading skills. It is suggested that multiple psychometric criteria be employed in evaluating the psychometric adequacy of scales used for the assessment and identification of learning disabilities and comorbid disorders.
Bayes estimation of the general hazard rate model
International Nuclear Information System (INIS)
Sarhan, A.
1999-01-01
In reliability theory and life testing models, lifetime distributions are often specified by choosing a relevant hazard rate function. Here a general hazard rate function h(t) = a + bt^(c-1), where a, b, c are constants greater than zero, is considered. The parameter c is assumed to be known. The Bayes estimators of (a,b) based on data from type II/item-censored testing without replacement are obtained. A large simulation study using the Monte Carlo method is done to compare the performance of the Bayes estimators with the regression estimators of (a,b). The criterion for comparison is based on the Bayes risk associated with the respective estimator. Also, the influence of the number of failed items on the accuracy of the estimators (Bayes and regression) is investigated. Estimates for the parameters (a,b) of the linearly increasing hazard rate model h(t) = a + bt, where a, b are greater than zero, can be obtained as the special case c = 2.
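As a quick illustration of the hazard model above (a sketch, not the authors' code), the hazard rate and the implied survival function S(t) = exp(-H(t)), with cumulative hazard H(t) = at + (b/c)t^c, can be computed directly; the parameter values below are made up:

```python
import math

def hazard(t, a, b, c):
    """General hazard rate h(t) = a + b * t**(c - 1), with a, b, c > 0."""
    return a + b * t ** (c - 1)

def survival(t, a, b, c):
    """Survival S(t) = exp(-H(t)), with cumulative hazard H(t) = a*t + (b/c)*t**c."""
    return math.exp(-(a * t + (b / c) * t ** c))

# The special case c = 2 recovers the linearly increasing hazard h(t) = a + b*t
print(hazard(2.0, 0.1, 0.05, 2))  # 0.1 + 0.05*2 = 0.2
```

Estimating (a, b) from censored data (Bayes vs. regression, the paper's actual subject) is beyond this sketch.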
75 FR 72581 - Assessments, Assessment Base and Rates
2010-11-24
... Part III Federal Deposit Insurance Corporation 12 CFR Part 327 Assessments, Assessment Base and... Assessments, Assessment Base and Rates AGENCY: Federal Deposit Insurance Corporation. ACTION: Notice of... Consumer Protection Act regarding the definition of an institution's deposit insurance assessment base...
2017-08-01
3.3 Universal Kinetic Rate Platform Development. Kinetic rate models range from pure chemical reactions to mass transfer... The rate model that best fits the experimental data is a first-order or homogeneous catalytic reaction... candidate forms include the Avrami (7) and intraparticle diffusion (6) rate equations, to name a few. A single fitting algorithm (kinetic rate model) for a reaction does not...
Risk management under a two-factor model of the term structure of interest rates
Manuel Moreno
1997-01-01
This paper presents several applications to interest rate risk management based on a two-factor continuous-time model of the term structure of interest rates previously presented in Moreno (1996). This model assumes that default free discount bond prices are determined by the time to maturity and two factors, the long-term interest rate and the spread (difference between the long-term rate and the short-term (instantaneous) riskless rate). Several new measures of ``generalized duration" are p...
Radionuclide release rates from spent fuel for performance assessment modeling
International Nuclear Information System (INIS)
Curtis, D.B.
1994-01-01
In a scenario of aqueous transport from a high-level radioactive waste repository, the concentration of radionuclides in water in contact with the waste constitutes the source term for transport models, and as such represents a fundamental component of all performance assessment models. Many laboratory experiments have been done to characterize release rates and understand processes influencing radionuclide release rates from irradiated nuclear fuel. Natural analogues of these waste forms have been studied to obtain information regarding the long-term stability of potential waste forms in complex natural systems. This information from diverse sources must be brought together to develop and defend methods used to define source terms for performance assessment models. In this manuscript, examples of measures of radionuclide release rates from spent nuclear fuel or analogues of nuclear fuel are presented. Each example represents a very different approach to obtaining a numerical measure, and each has its limitations. There is no way to obtain an unambiguous measure of this or any parameter used in performance assessment codes for evaluating the effects of processes operative over many millennia. The examples are intended to suggest that, in the absence of the ability to evaluate accuracy and precision, the consistency of a broadly based set of data can be used as circumstantial evidence to defend the choice of parameters used in performance assessments.
2017-11-07
This final rule updates the home health prospective payment system (HH PPS) payment rates, including the national, standardized 60-day episode payment rates, the national per-visit rates, and the non-routine medical supply (NRS) conversion factor, effective for home health episodes of care ending on or after January 1, 2018. This rule also: Updates the HH PPS case-mix weights using the most current, complete data available at the time of rulemaking; implements the third year of a 3-year phase-in of a reduction to the national, standardized 60-day episode payment to account for estimated case-mix growth unrelated to increases in patient acuity (that is, nominal case-mix growth) between calendar year (CY) 2012 and CY 2014; and discusses our efforts to monitor the potential impacts of the rebasing adjustments that were implemented in CY 2014 through CY 2017. In addition, this rule finalizes changes to the Home Health Value-Based Purchasing (HHVBP) Model and to the Home Health Quality Reporting Program (HH QRP). We are not finalizing the implementation of the Home Health Groupings Model (HHGM) in this final rule.
Exchange-Rate-Based Stabilization under Imperfect Credibility
Guillermo Calvo; Carlos A. Végh Gramont
1991-01-01
This paper analyzes stabilization policy under predetermined exchange rates in a cash-in-advance, staggered-prices model. Under full credibility, a reduction in the rate of devaluation results in an immediate and permanent reduction in the inflation rate, with no effect on output or consumption. In contrast, a non-credible stabilization results in an initial expansion of output, followed by a later recession. The inflation rate of home goods remains above the rate of devaluation throughout th...
PSA-based evaluation and rating of operational events
International Nuclear Information System (INIS)
Gomez Cobo, A.
1997-01-01
The presentation discusses the PSA-based evaluation and rating of operational events, including the following: historical background, procedures for event evaluation using PSA, use of PSA for event rating, current activities
Monetary models and exchange rate determination: The Nigerian ...
African Journals Online (AJOL)
Monetary models and exchange rate determination: The Nigerian evidence. ... income levels and real interest rate differentials provide better forecasts of the ... partner can expect to suffer depreciation in the external value of her currency.
Mechanistic Modeling of Water Replenishment Rate of Zeer Refrigerator
Directory of Open Access Journals (Sweden)
B. N. Nwankwojike
2017-06-01
Full Text Available A model for predicting the water replenishment rate of the zeer pot refrigerator was developed in this study using a mechanistic modeling approach and evaluated at Obowo, Imo State, Nigeria using six fruits: tomatoes, guava, okra, banana, orange and avocado pear. The developed model confirmed the zeer pot water replenishment rate to be a function of ambient temperature, relative humidity, wind speed, thermal conductivity of the pot materials and sand, density of air and water vapor, permeability coefficient of clay, heat transfer coefficient of water into air, circumferential length, height of the pot, geometrical profile of the pot, heat load of the food preserved, heat flow into the device, and the gradient at which the pot is placed above ground level. Compared to the conventional approach of water replenishment, performance analysis results revealed 44% to 58% water economy when the zeer pot's water was replenished based on the model's prediction, while there was no significant difference in the shelf life of the fruits preserved with both replenishment methods. Application of the developed water replenishment model facilitates optimal water usage in this system, thereby reducing the operational cost of the zeer pot refrigerator.
Empirical rate equation model and rate calculations of hydrogen generation for Hanford tank waste
International Nuclear Information System (INIS)
HU, T.A.
1999-01-01
Empirical rate equations are derived to estimate hydrogen generation based on chemical reactions, radiolysis of water and organic compounds, and corrosion processes. A comparison of the generation rates observed in the field with the rates calculated for twenty-eight tanks shows agreement within a factor of two to three.
Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama
2010-11-01
Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible
Aspect-Aware Latent Factor Model: Rating Prediction with Ratings and Reviews
Cheng, Zhiyong; Ding, Ying; Zhu, Lei; Kankanhalli, Mohan
2018-01-01
Although latent factor models (e.g., matrix factorization) achieve good accuracy in rating prediction, they suffer from several problems including cold-start, non-transparency, and suboptimal recommendation for local users or items. In this paper, we employ textual review information with ratings to tackle these limitations. Firstly, we apply a proposed aspect-aware topic model (ATM) on the review text to model user preferences and item features from different aspects, and estimate the aspect...
Spallation model for the high strain rates range
Dekel, E.; Eliezer, S.; Henis, Z.; Moshe, E.; Ludmirsky, A.; Goldberg, I. B.
1998-11-01
Measurements of the dynamic spall strength in aluminum and copper shocked by a high power laser to pressures of hundreds of kbars show a rapid increase in the spall strength with the strain rate at values of about 10^7 s^-1. We suggest that this behavior is a result of a change in the spall mechanism. At low strain rates the spall is caused by the motion and coalescence of the material's initial flaws. At high strain rates there is not enough time for the flaws to move, and the spall is produced by the formation and coalescence of additional cavities where the interatomic forces become dominant. Material under tensile stress is in a metastable condition, and cavities of a critical radius are formed in it due to thermal fluctuations. These cavities grow due to the tension. The total volume of the voids grows until the material disintegrates at the spall plane. Simplified calculations based on this model, describing the metal as a viscous liquid, give results in fairly good agreement with the experimental data and predict the increase in spall strength at high strain rates.
Polynomial Chaos Expansion Approach to Interest Rate Models
Directory of Open Access Journals (Sweden)
Luca Di Persio
2015-01-01
Full Text Available The Polynomial Chaos Expansion (PCE technique allows us to recover a finite second-order random variable exploiting suitable linear combinations of orthogonal polynomials which are functions of a given stochastic quantity ξ, hence acting as a kind of random basis. The PCE methodology has been developed as a mathematically rigorous Uncertainty Quantification (UQ method which aims at providing reliable numerical estimates for some uncertain physical quantities defining the dynamic of certain engineering models and their related simulations. In the present paper, we use the PCE approach in order to analyze some equity and interest rate models. In particular, we take into consideration those models which are based on, for example, the Geometric Brownian Motion, the Vasicek model, and the CIR model. We present theoretical as well as related concrete numerical approximation results considering, without loss of generality, the one-dimensional case. We also provide both an efficiency study and an accuracy study of our approach by comparing its outputs with the ones obtained adopting the Monte Carlo approach, both in its standard and its enhanced version.
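As a minimal illustration of the PCE idea (not the paper's implementation), the lognormal quantity exp(ξ) with ξ ~ N(0,1), which drives Geometric-Brownian-type models, has a known expansion in probabilists' Hermite polynomials, exp(ξ) = e^(1/2) Σ_n He_n(ξ)/n!; truncating it gives a finite "random basis" surrogate. The truncation order N is an assumption for illustration:

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

# PCE of Y = exp(xi), xi ~ N(0, 1), in probabilists' Hermite polynomials He_n.
# Closed-form coefficients: exp(xi) = e^{1/2} * sum_n He_n(xi) / n!
N = 12  # truncation order (assumed)
coeffs = [np.exp(0.5) / factorial(n) for n in range(N)]

xi = np.linspace(-2.0, 2.0, 101)       # evaluation points of the stochastic germ
approx = hermeval(xi, coeffs)          # truncated PCE surrogate
exact = np.exp(xi)
print(np.max(np.abs(approx - exact)))  # truncation error, small for N = 12
```

For general models the coefficients have no closed form and are obtained by projection or regression, typically checked against Monte Carlo as the paper does.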
The fitting parameters extraction of conversion model of the low dose rate effect in bipolar devices
International Nuclear Information System (INIS)
Bakerenkov, Alexander
2011-01-01
The Enhanced Low Dose Rate Sensitivity (ELDRS) effect in bipolar devices consists of an increase in the base current degradation of NPN and PNP transistors as the dose rate is decreased. As a result of almost 20 years of study, several physical models of the effect have been developed and described in detail. Accelerated test methods based on these models are used in standards. A conversion model of the effect, which describes the inverse S-shaped dependence of excess base current on dose rate, was proposed. This paper presents the problem of extracting the fitting parameters of the conversion model.
A model for C-14 tracer evaporative rate analysis (ERA)
International Nuclear Information System (INIS)
Gardner, R.P.; Verghese, K.
1993-01-01
A simple model has been derived and tested for the C-14 tracer evaporative rate analysis (ERA) method. It allows the accurate determination of the evaporative rate coefficient of the C-14 tracer detector in the presence of variable evaporation rates of the detector solvent and variable background counting rates. The evaporation rate coefficient should be the most fundamental parameter available in this analysis method and, therefore, its measurement with the proposed model should allow the most direct correlations to be made with the system properties of interest, such as surface cleanliness. (author)
A critique of recent models for human error rate assessment
International Nuclear Information System (INIS)
Apostolakis, G.E.
1988-01-01
This paper critically reviews two groups of models for assessing human error rates under accident conditions. The first group, which includes the US Nuclear Regulatory Commission (NRC) handbook model and the human cognitive reliability (HCR) model, considers as fundamental the time that is available to the operators to act. The second group, which is represented by the success likelihood index methodology multiattribute utility decomposition (SLIM-MAUD) model, relies on ratings of the human actions with respect to certain qualitative factors and the subsequent derivation of error rates. These models are evaluated with respect to two criteria: the treatment of uncertainties and the internal coherence of the models. In other words, this evaluation focuses primarily on normative aspects of these models. The principal findings are as follows: (1) Both of the time-related models provide human error rates as a function of the available time for action and the prevailing conditions. However, the HCR model ignores the important issue of state-of-knowledge uncertainties, dealing exclusively with stochastic uncertainty, whereas the model presented in the NRC handbook handles both types of uncertainty. (2) SLIM-MAUD provides a highly structured approach for the derivation of human error rates under given conditions. However, the treatment of the weights and ratings in this model is internally inconsistent. (author)
Data analysis using the Binomial Failure Rate common cause model
International Nuclear Information System (INIS)
Atwood, C.L.
1983-09-01
This report explains how to use the Binomial Failure Rate (BFR) method to estimate common cause failure rates. The entire method is described, beginning with the conceptual model, and covering practical issues of data preparation, treatment of variation in the failure rates, Bayesian estimation of the quantities of interest, checking the model assumptions for lack of fit to the data, and the ultimate application of the answers
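A minimal sketch of the BFR rate structure as it is commonly stated (nonlethal shocks only; the report's treatment of lethal shocks and Bayesian estimation is omitted): shocks arrive at rate mu, each shock independently fails each of m components with probability p, so events failing exactly k components occur at rate lambda_k = mu * C(m,k) * p^k * (1-p)^(m-k). The numbers below are hypothetical:

```python
from math import comb

def bfr_rate(k, m, mu, p):
    """BFR common cause rate for events failing exactly k of m components:
    lambda_k = mu * C(m, k) * p**k * (1 - p)**(m - k)  (nonlethal shocks)."""
    return mu * comb(m, k) * p ** k * (1 - p) ** (m - k)

# Hypothetical example: 4 redundant components, shock rate 0.01/yr, p = 0.3
rates = {k: bfr_rate(k, 4, 0.01, 0.3) for k in range(5)}

# Shocks that fail at least one component occur at rate mu * (1 - (1-p)**m)
visible = sum(rates[k] for k in range(1, 5))
print(visible)
```

Summing over all k (including k = 0, the unobservable shocks) recovers the full shock rate mu, which is the consistency check used below.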
Satellite altimetry based rating curves throughout the entire Amazon basin
Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.
2013-05-01
The Amazonian basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, over both time and space scales, due to factors like the basin's size and difficulty of access. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a nonlinear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation throughout the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset is made of ~800 altimetry series at ENVISAT and JASON-2 virtual stations. Altimetry series span between 2002 and 2010. In the present work we present the benefits of using stochastic methods instead of probabilistic ones to determine a dataset of rating curve parameters which are consistent throughout the entire Amazon basin. The rating curve parameters have been computed using a parameter optimization technique based on a Markov Chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, but also their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. Also included in the rating curve determination is the error over discharge estimates from the MGB-IPH model. These MGB-IPH errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present
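For context, the deterministic baseline that the MCMC approach generalizes is the standard power-law rating curve Q = a(h - h0)^b fitted by log-log regression. A sketch on synthetic, noise-free data with h0 held fixed (all numbers are assumed for illustration, not from the study):

```python
import numpy as np

# Synthetic "truth": Q = a * (h - h0)^b, with h0 treated as known for the sketch
a_true, b_true, h0 = 20.0, 1.8, 1.5
h = np.linspace(2.0, 10.0, 50)           # stage (m)
q = a_true * (h - h0) ** b_true          # discharge (m^3/s)

# Log-log linearization: log q = log a + b * log(h - h0)
b_fit, log_a_fit = np.polyfit(np.log(h - h0), np.log(q), 1)
print(round(float(np.exp(log_a_fit)), 3), round(float(b_fit), 3))  # recovers (20.0, 1.8)
```

The MCMC/Bayesian approach of the abstract replaces this point estimate with a posterior distribution over (a, b, h0), from which credibility intervals follow.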
Real Exchange Rate and Productivity in an OLG Model
Thi Hong Thinh DOAN; Karine GENTE
2013-01-01
This article develops an overlapping generations model to show how demography and savings affect the relationship between the real exchange rate (RER) and productivity. In high-saving (low-saving) countries and/or low-population-growth-rate countries, a rise in productivity leads to a real depreciation (appreciation), whereas the RER may appreciate or depreciate in high-population-growth-rate countries. Using panel data, we conclude that a rise in productivity generally causes a real exchange rate appreciati...
SEE rate estimation based on diffusion approximation of charge collection
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on diffusion approximation of the charge collection by an IC element and a geometrical interpretation of the SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal-incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and thus greatly simplifies the calculation procedure and increases the robustness of the forecast.
Mechanism of Strain Rate Effect Based on Dislocation Theory
International Nuclear Information System (INIS)
Kun, Qin; Shi-Sheng, Hu; Li-Ming, Yang
2009-01-01
Based on dislocation theory, we investigate the mechanism of the strain rate effect. The strain rate effect and dislocation motion are bridged by Orowan's relationship, and the stress dependence of dislocation velocity is considered as the dynamic relationship of dislocation motion. The mechanism of the strain rate effect is then investigated qualitatively by using these two relationships, although the kinematic relationship of dislocation motion is absent due to the complicated styles of dislocation motion. The process of the strain rate effect is interpreted and some details of the strain rate effect are adequately discussed. The present analyses agree with the existing experimental results. Based on the analyses, we propose that strain rate criteria rather than stress criteria should be satisfied when a metal is fully yielded at a given strain rate. (condensed matter: structure, mechanical and thermal properties)
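The two relationships named above can be sketched numerically (illustrative values only, not from the paper): Orowan's relation links the plastic shear strain rate to the mobile dislocation density, Burgers vector and mean dislocation velocity, while an empirical power law v = v0 * (tau/tau0)^m supplies the stress dependence of the velocity:

```python
def orowan_shear_rate(rho, b, v):
    """Orowan's relation: plastic shear strain rate = rho * b * v
    (rho: mobile dislocation density [m^-2], b: Burgers vector [m], v: velocity [m/s])."""
    return rho * b * v

def dislocation_velocity(tau, tau0, v0, m):
    """Empirical stress dependence of dislocation velocity: v = v0 * (tau / tau0)**m."""
    return v0 * (tau / tau0) ** m

# Illustrative numbers (assumed, not from the paper)
rho, b = 1e12, 2.5e-10                                        # m^-2, m
v = dislocation_velocity(tau=50e6, tau0=10e6, v0=1e-3, m=2)   # 5^2 * 1e-3 = 0.025 m/s
print(orowan_shear_rate(rho, b, v))                           # 1e12 * 2.5e-10 * 0.025 = 6.25 /s
```

A higher applied stress raises v and hence the achievable strain rate, which is the qualitative mechanism the abstract argues for.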
Application of a mechanism-based rate equation to black liquor gasification rate data
Energy Technology Data Exchange (ETDEWEB)
Overacker, N.L.; Waag, K.J.; Frederick, W.J. [Oregon State University, OR (United States). Dept. of Chemical Engineering; Whitty, K.J.
1995-09-01
There is growing interest worldwide to develop alternate chemical recovery processes for paper mills which are cheaper, safer, more efficient and more environmentally sound than traditional technology. Pressurized gasification of black liquor is the basis for many proposed schemes and offers the possibility to double the amount of electricity generated per unit of dry black liquor solids. Such technology also has capital, safety and environmental advantages. One of the most important considerations regarding this emerging technology is the kinetics of the gasification reaction. This has been studied empirically at Aabo Akademi University for the pressurized gasification with carbon dioxide and steam. For the purposes of reactor modeling and scale-up, however, a thorough understanding of the mechanism behind the reaction is desirable. This report discusses the applicability of a mechanism-based rate equation to gasification of black liquor. The mechanism considered was developed for alkali-catalyzed gasification of carbon and is tested using black liquor gasification data obtained during simultaneous reaction with H{sub 2}O and CO. Equilibrium considerations and the influence of the water-gas shift reaction are also discussed. The work presented here is a cooperative effort between Aabo Akademi University and Oregon State University. The experimental work and some of the data analysis was performed at Aabo Akademi University. Development of the models and consideration of their applicability was performed primarily at Oregon State University
EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS
Directory of Open Access Journals (Sweden)
Dezsi Eva
2011-07-01
Full Text Available Exchange rate forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average model.
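As a sketch of the simplest of the techniques compared above, simple exponential smoothing recursively updates a level s_t = alpha*y_t + (1 - alpha)*s_{t-1} and uses the last level as the one-step-ahead forecast. The series values and alpha below are hypothetical, not from the paper:

```python
def simple_exponential_smoothing(series, alpha):
    """One-step-ahead SES forecast: s_t = alpha*y_t + (1-alpha)*s_{t-1};
    initialized at the first observation, returns the final level."""
    s = series[0]
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
    return s

rates = [4.10, 4.15, 4.12, 4.20, 4.18]   # hypothetical daily EUR/RON closes
print(round(simple_exponential_smoothing(rates, alpha=0.3), 4))  # 4.1531
```

Double exponential smoothing and Holt-Winters add trend (and seasonal) components to the same recursion; ARIMA instead models the differenced series.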
Kimura, Wataru; Miyata, Hiroaki; Gotoh, Mitsukazu; Hirai, Ichiro; Kenjo, Akira; Kitagawa, Yuko; Shimada, Mitsuo; Baba, Hideo; Tomita, Naohiro; Nakagoe, Tohru; Sugihara, Kenichi; Mori, Masaki
2014-04-01
To create a mortality risk model after pancreaticoduodenectomy (PD) using a Web-based national database system. PD is a major gastroenterological surgery with relatively high mortality. Many studies have reported factors to analyze short-term outcomes. After initiation of National Clinical Database, approximately 1.2 million surgical cases from more than 3500 Japanese hospitals were collected through a Web-based data entry system. After data cleanup, 8575 PD patients (mean age, 68.2 years) recorded in 2011 from 1167 hospitals were analyzed using variables and definitions almost identical to those of American College of Surgeons-National Surgical Quality Improvement Program. The 30-day postoperative and in-hospital mortality rates were 1.2% and 2.8% (103 and 239 patients), respectively. Thirteen significant risk factors for in-hospital mortality were identified: age, respiratory distress, activities of daily living within 30 days before surgery, angina, weight loss of more than 10%, American Society of Anesthesiologists class of greater than 3, Brinkman index of more than 400, body mass index of more than 25 kg/m, white blood cell count of more than 11,000 cells per microliter, platelet count of less than 120,000 per microliter, prothrombin time/international normalized ratio of more than 1.1, activated partial thromboplastin time of more than 40 seconds, and serum creatinine levels of more than 3.0 mg/dL. Five variables, including male sex, emergency surgery, chronic obstructive pulmonary disease, bleeding disorders, and serum urea nitrogen levels of less than 8.0 mg/dL, were independent variables in the 30-day mortality group. The overall PD complication rate was 40.0%. Grade B and C pancreatic fistulas in the International Study Group on Pancreatic Fistula occurred in 13.2% cases. The 30-day and in-hospital mortality rates for pancreatic cancer were significantly lower than those for nonpancreatic cancer. We conducted the reported risk stratification study for PD
Bannach, Andreas; Hauer, Rene; Martin, Streibel; Stienstra, Gerard; Kühn, Michael
2015-04-01
The IPCC Report 2014 strengthens the need for CO2 storage as part of CCS or BECCS to reach ambitious climate goals despite growing energy demand in the future. The further expansion of renewable energy sources is a second major pillar. As it is today in Germany, the weather becomes the controlling factor for electricity production by fossil-fuelled power plants, which leads to significant fluctuations of CO2 emissions that would be traced in injection rates if the CO2 were captured and stored. To analyse the impact of such changing injection rates on a CO2 storage reservoir, two reservoir simulation models are applied: a. A smaller reservoir model, proven by gas storage activities for decades, to investigate the dynamic effects in the early stage of storage filling (initial aquifer displacement). b. An anticline structure big enough to accommodate a total amount of ≥ 100 megatons of CO2, to investigate the dynamic effects for the entire operational lifetime of the storage under particular consideration of very high filling levels (highest aquifer compression). For this purpose a reservoir model was generated. The defined yearly injection rate schedule is based on a study performed on behalf of IZ Klima (DNV GL, 2014). According to this study, the exclusive consideration of a pool of coal-fired power plants causes the most intensive dynamically changing CO2 emissions and hence accounts for the variations of a system which includes industry-driven CO2 production. Besides short-term changes (daily and weekly cycles), seasonal influences are also taken into account. Simulation runs cover a variation of injection points (well locations at the top vs. locations at the flank of the structure) and some other largely unknown reservoir parameters such as aquifer size and aquifer mobility. Simulation of a 20-year storage operation is followed by a post-operational shut-in phase which covers approximately 500 years to assess possible effects of changing injection rates on the long-term reservoir
International Nuclear Information System (INIS)
Nishimura, M.
1998-04-01
To predict thermal-hydraulic phenomena in an actual plant under various conditions accurately, adequate simulation of the laminar-turbulent flow transition is of importance. A low Reynolds number turbulence model is commonly used for numerical simulation of the laminar-turbulent transition. The existing low Reynolds number turbulence models generally demand a very thin mesh width between a wall and the first computational node from the wall to keep the accuracy and stability of numerical analyses. There is a criterion for the distance between the wall and the first computational node in which the non-dimensional distance y+ must be less than 0.5. Due to this criterion the suitable distance depends on the Reynolds number. Liquid sodium is used as a coolant in fast reactors; therefore, the Reynolds number is usually one or two orders higher than that of usual plants in which air and water are used as the working fluid. This makes the load of thermal-hydraulic numerical simulation of liquid sodium relatively heavier. From the above context, a new method is proposed for providing the wall boundary condition of the turbulent kinetic energy dissipation rate ε. The present method enables a wall-first node distance 10 times larger compared to the existing models. A function for the ε wall boundary condition has been constructed aided by a direct numerical simulation (DNS) database. The method was validated through calculations of a turbulent Couette flow and a fully developed pipe flow and its laminar-turbulent transition. Thus the present method and modeling are capable of predicting the laminar-turbulent transition with fewer mesh numbers, i.e. lighter computational loads. (J.P.N.)
Heart rate-based lactate minimum test: a reproducible method.
Strupler, M.; Muller, G.; Perret, C.
2009-01-01
OBJECTIVE: To find the individual intensity for aerobic endurance training, the lactate minimum test (LMT) seems to be a promising method. LMTs described in the literature consist of speed or work rate-based protocols, but for training prescription in daily practice mostly heart rate is used. The
Rate adaptation in ad hoc networks based on pricing
CSIR Research Space (South Africa)
Awuor, F
2011-09-01
Full Text Available that incorporates penalty (pricing) obtruded to users’ choices of transmission parameters to curb the self-interest behaviour. Therefore users determine their data rates and transmit power based on the perceived coupled interference at the intended receiver...
Not that neglected! Base rates influence related and unrelated judgments.
Białek, Michał
2017-06-01
It is claimed that people are unable (or unwilling) to incorporate prior probabilities into posterior assessments, such as their estimation of the likelihood of a person with characteristics typical of an engineer actually being an engineer given that they are drawn from a sample including a very small number of engineers. This paper shows that base rates are incorporated in classifications (Experiment 1) and, moreover, that base rates also affect unrelated judgments, such as how well a provided description of a person fits a stereotypical engineer (Experiment 2). Finally, Experiment 3 shows that individuals who make both types of assessments - though using base rates to the same extent in the former judgments - are able to decrease the extent to which they incorporate base rates in the latter judgments. Copyright © 2017 Elsevier B.V. All rights reserved.
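The classic engineer/lawyer task behind these experiments has a simple normative benchmark: Bayes' rule, with the base rate entering as the prior. A minimal numeric illustration (the diagnosticity values for the description are made up for the example):

```python
def posterior_engineer(prior_engineer, p_desc_given_eng, p_desc_given_law):
    """Bayes' rule: probability the described person is an engineer,
    given a description and the base rate of engineers in the sample."""
    num = prior_engineer * p_desc_given_eng
    den = num + (1 - prior_engineer) * p_desc_given_law
    return num / den

# Same diagnostic description, two different base rates
low = posterior_engineer(0.30, 0.9, 0.1)   # 30 engineers in a sample of 100
high = posterior_engineer(0.70, 0.9, 0.1)  # 70 engineers in a sample of 100
```

A normative judge's posterior should move with the base rate even when the description is held fixed, which is the benchmark against which "base rate neglect" is measured.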
Can producer currency pricing models generate volatile real exchange rates?
Povoledo, L.
2012-01-01
If the elasticities of substitution between traded and nontraded and between Home and Foreign traded goods are sufficiently low, then the real exchange rate generated by a model with full producer currency pricing is as volatile as in the data.
Modelling Exchange Rate Volatility by Macroeconomic Fundamentals in Pakistan
Munazza Jabeen; Saud Ahmad Khan
2014-01-01
What drives volatility in the foreign exchange market in Pakistan? This paper undertakes an analysis of modelling exchange rate volatility in Pakistan by potential macroeconomic fundamentals well-known in the economic literature. For this, monthly data on Pak Rupee exchange rates in terms of major currencies (US Dollar, British Pound, Canadian Dollar and Japanese Yen) and macroeconomic fundamentals is taken from April, 1982 to November, 2011. The results show that the PKR-USD exchange rate vo...
Growth rate in the dynamical dark energy models
International Nuclear Information System (INIS)
Avsajanishvili, Olga; Arkhipova, Natalia A.; Samushia, Lado; Kahniashvili, Tina
2014-01-01
Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter α that describes the steepness of the scalar field potential. (orig.)
Growth rate in the dynamical dark energy models.
Avsajanishvili, Olga; Arkhipova, Natalia A; Samushia, Lado; Kahniashvili, Tina
Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter α that describes the steepness of the scalar field potential.
Jiang, GJ
1998-01-01
This paper develops a nonparametric model of interest rate term structure dynamics based an a spot rate process that permits only positive interest rates and a market price of interest rate risk that precludes arbitrage opportunities. Both the spot rate process and the market price of interest rate
Exchange rate based stabilization : tales from Europe and Latin America
Ades, Alberto F.; Kiguel, Miguel; Liviatan, Nissan
1993-01-01
There is convincing empirical evidence that the cycle for exchange-rate-based disinflation in high-inflation Latin American economies typically begins with expansion and ends in recession - a surprising pattern. The authors explore whether a similar cycle can be observed in exchange-rate-based disinflation in low-inflation economies. They draw on empirical evidence from stabilization programs in three European countries in the early 1980s: in Denmark (1982), Ireland (1982), and France (1983). ...
Analysis of sensory ratings data with cumulative link models
DEFF Research Database (Denmark)
Christensen, Rune Haubo Bojesen; Brockhoff, Per B.
2013-01-01
Examples of categorical rating scales include discrete preference, liking and hedonic rating scales. Data obtained on these scales are often analyzed with normal linear regression methods or with omnibus Pearson chi2 tests. In this paper we propose to use cumulative link models that allow for reg...
Rate-control algorithms testing by using video source model
DEFF Research Database (Denmark)
Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna
2008-01-01
In this paper a method for testing rate-control algorithms by use of a video source model is suggested. The proposed method allows algorithm testing to be significantly improved over a large test set.
A nonparametric mixture model for cure rate estimation.
Peng, Y; Dear, K B
2000-03-01
Nonparametric methods have attracted less attention than their parametric counterparts for cure rate analysis. In this paper, we study a general nonparametric mixture model. The proportional hazards assumption is employed in modeling the effect of covariates on the failure time of patients who are not cured. The EM algorithm, the marginal likelihood approach, and multiple imputations are employed to estimate parameters of interest in the model. This model extends models and improves estimation methods proposed by other researchers. It also extends Cox's proportional hazards regression model by allowing a proportion of event-free patients and investigating covariate effects on that proportion. The model and its estimation method are investigated by simulations. An application to breast cancer data, including comparisons with previous analyses using a parametric model and an existing nonparametric model by other researchers, confirms the conclusions from the parametric model but not those from the existing nonparametric model.
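The mixture cure model underlying this line of work writes population survival as S_pop(t) = π + (1 − π)S_u(t), where π is the cured proportion and S_u the survival of the uncured. A sketch using a parametric Weibull S_u for concreteness (the paper itself treats the uncured survival nonparametrically; all parameter values here are illustrative):

```python
import math

def population_survival(t, cure_frac, shape=1.5, scale=2.0):
    """Mixture cure model: S_pop(t) = pi + (1 - pi) * S_u(t),
    with an illustrative Weibull survival S_u for the uncured."""
    s_uncured = math.exp(-((t / scale) ** shape))
    return cure_frac + (1 - cure_frac) * s_uncured

# Population survival starts at 1 and plateaus at the cure fraction
s_start = population_survival(0.0, 0.3)
s_late = population_survival(50.0, 0.3)
```

The long-run plateau at π is what distinguishes cure models from Cox regression, which forces survival toward zero for every covariate profile.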
Equivalence of interest rate models and lattice gases.
Pirjol, Dan
2012-04-01
We consider the class of short rate interest rate models for which the short rate is proportional to the exponential of a Gaussian Markov process x(t) in the terminal measure, r(t) = a(t)exp[x(t)]. These models include the Black-Derman-Toy and Black-Karasinski models in the terminal measure. We show that such interest rate models are equivalent to lattice gases with attractive two-body interaction, V(t1,t2) = -Cov[x(t1),x(t2)]. We consider in some detail the Black-Karasinski model with x(t) as an Ornstein-Uhlenbeck process, and show that it is similar to a lattice gas model considered by Kac and Helfand, with attractive long-range two-body interactions, V(x,y) = -α(e^(-γ|x-y|) - e^(-γ(x+y))). An explicit solution for the model is given as a sum over the states of the lattice gas, which is used to show that the model has a phase transition similar to that found previously in the Black-Derman-Toy model in the terminal measure.
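For a stationary Ornstein-Uhlenbeck driver, Cov[x(t1), x(t2)] = (σ²/2γ)e^(-γ|t1-t2|), so the induced lattice-gas potential V = -Cov is attractive and decays with temporal separation. A small numeric sketch of that mapping (parameter values are illustrative, not taken from the paper):

```python
import math

def ou_cov(t1, t2, sigma=0.2, gamma=0.5):
    """Covariance of a stationary Ornstein-Uhlenbeck process x(t):
    (sigma^2 / 2*gamma) * exp(-gamma * |t1 - t2|)."""
    return (sigma ** 2) / (2 * gamma) * math.exp(-gamma * abs(t1 - t2))

def interaction(t1, t2, sigma=0.2, gamma=0.5):
    """Lattice-gas two-body potential V = -Cov[x(t1), x(t2)]:
    negative (attractive), decaying with temporal separation."""
    return -ou_cov(t1, t2, sigma, gamma)

v_near = interaction(1.0, 2.0)
v_far = interaction(1.0, 10.0)
```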
On rate-state and Coulomb failure models
Gomberg, J.; Beeler, N.; Blanpied, M.
2000-01-01
We examine the predictions of Coulomb failure stress and rate-state frictional models. We study the change in failure time (clock advance) Δt due to stress step perturbations (i.e., coseismic static stress increases) added to "background" stressing at a constant rate (i.e., tectonic loading) at time t0. The predictability of Δt implies a predictable change in seismicity rate r(t)/r0, testable using earthquake catalogs, where r0 is the constant rate resulting from tectonic stressing. Models of r(t)/r0, consistent with general properties of aftershock sequences, must predict an Omori law seismicity decay rate, a sequence duration that is less than a few percent of the mainshock cycle time and a return directly to the background rate. A Coulomb model requires that a fault remains locked during loading, that failure occur instantaneously, and that Δt is independent of t0. These characteristics imply an instantaneous infinite seismicity rate increase of zero duration. Numerical calculations of r(t)/r0 for different state evolution laws show that aftershocks occur on faults extremely close to failure at the mainshock origin time, that these faults must be "Coulomb-like," and that the slip evolution law can be precluded. Real aftershock population characteristics also may constrain rate-state constitutive parameters; a may be lower than laboratory values, the stiffness may be high, and/or normal stress may be lower than lithostatic. We also compare Coulomb and rate-state models theoretically. Rate-state model fault behavior becomes more Coulomb-like as constitutive parameter a decreases relative to parameter b. This is because the slip initially decelerates, representing an initial healing of fault contacts. The deceleration is more pronounced for smaller a, more closely simulating a locked fault. Even when the rate-state Δt has Coulomb characteristics, its magnitude may differ by some constant dependent on b. In this case, a rate-state model behaves like a modified
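One widely used closed form with exactly the aftershock properties listed above (Omori-type decay, finite duration, return to the background rate) is Dieterich's (1994) rate-state seismicity response to a stress step; the sketch below assumes that formula as a stand-in rather than anything specific to this paper, and all parameter values are illustrative:

```python
import math

def seismicity_ratio(t, dtau, a_sigma, t_a):
    """Dieterich-type rate-state seismicity rate ratio r(t)/r0 after a
    coseismic stress step dtau, with constitutive product a*sigma and
    aftershock duration t_a (assumed form; illustrative parameters)."""
    return 1.0 / (1.0 + (math.exp(-dtau / a_sigma) - 1.0) * math.exp(-t / t_a))

# Immediately after a positive step the rate jumps by exp(dtau / a*sigma) ...
r0 = seismicity_ratio(0.0, dtau=1.0, a_sigma=0.2, t_a=100.0)
# ... and decays back to the background rate for t >> t_a
r_late = seismicity_ratio(1000.0, dtau=1.0, a_sigma=0.2, t_a=100.0)
```

In the Coulomb limit discussed in the abstract the jump becomes an instantaneous spike of zero duration; the rate-state form smears it over the duration t_a.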
Staff background paper on performance-based rate making
International Nuclear Information System (INIS)
Fraser, J.; Brownell, B.
1998-10-01
An alternative to the traditional cost of service (COS) regulation for electric utilities in British Columbia has been proposed. The alternative to pure COS regulation is performance-based rate making (PBR). PBR partially decouples a utility's rates from its costs and ties utility profits to performance relative to specific benchmarks. The motivation underlying PBR is that ideally, it provides incentives for utilities to cost-effectively achieve pre-defined goals. This report describes the design of PBR mechanisms, base rate PBR formulas, base rate PBR in other jurisdictions including New York, California, Maine and New Jersey. The report also describes gas procurement PBR in other jurisdictions, as well as British Columbia Utilities' Commission's own experience with PBR. In general, PBR has the potential to provide resource efficiency, allocative efficiency, support for introduction of new services, and reduced regulatory administrative costs. 15 refs., 4 tabs
Inflation, Exchange Rates and Interest Rates in Ghana: an Autoregressive Distributed Lag Model
Directory of Open Access Journals (Sweden)
Dennis Nchor
2015-01-01
Full Text Available This paper investigates the impact of exchange rate movement and the nominal interest rate on inflation in Ghana. It also looks at the presence of the Fisher Effect and the International Fisher Effect scenarios. It makes use of an autoregressive distributed lag model and an unrestricted error correction model. Ordinary Least Squares regression methods were also employed to determine the presence of the Fisher Effect and the International Fisher Effect. The results from the study show that in the short run a percentage point increase in the level of depreciation of the Ghana cedi leads to an increase in the rate of inflation by 0.20%. A percentage point increase in the level of nominal interest rates however results in a decrease in inflation by 0.98%. Inflation increases by 1.33% for every percentage point increase in the nominal interest rate in the long run. An increase in inflation on the other hand increases the nominal interest rate by 0.51%, which demonstrates the partial Fisher effect. A 1% increase in the interest rate differential leads to a depreciation of the Ghana cedi by approximately 1%, which indicates the full International Fisher effect.
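The short-run pass-through coefficients reported above come from regression estimates; as a toy illustration of how one such coefficient is recovered, here is a bare OLS slope computed on synthetic data fabricated with a 0.20 pass-through (the data and the single-regressor setup are for the example only, not the paper's ARDL specification):

```python
def ols_slope(x, y):
    """Simple one-regressor OLS slope: cov(x, y) / var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Synthetic data: inflation responds to cedi depreciation with slope 0.20
depreciation = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
inflation = [0.5 + 0.20 * d for d in depreciation]
slope = ols_slope(depreciation, inflation)
```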
Modeling baroreflex regulation of heart rate during orthostatic stress
DEFF Research Database (Denmark)
Olufsen, Mette; Tran, Hien T.; Ottesen, Johnny T.
2006-01-01
During orthostatic stress, arterial and cardiopulmonary baroreflexes play a key role in maintaining arterial pressure by regulating heart rate. This study presents a mathematical model that can predict the dynamics of heart rate regulation in response to postural change from sitting to standing. The model uses blood pressure measured in the finger as an input to model heart rate dynamics in response to changes in baroreceptor nerve firing rate, sympathetic and parasympathetic responses, vestibulo-sympathetic reflex, and concentrations of norepinephrine and acetylcholine. We formulate an inverse ... In healthy and hypertensive elderly people the hysteresis loop shifts to higher blood pressure values and its area is diminished. Finally, for hypertensive elderly people the hysteresis loop is generally not closed, indicating that during postural change from sitting to standing the blood pressure resettles ...
Validated analytical modeling of diesel engine regulated exhaust CO emission rate
Directory of Open Access Journals (Sweden)
Waleed F Faris
2016-06-01
Full Text Available Although vehicle analytical models are often favorable for their explainable mathematical trends, no analytical model of the regulated diesel exhaust CO emission rate has yet been developed for trucks. This research develops and validates for trucks a model of the steady-speed regulated diesel exhaust CO emission rate analytically. It has been found that the steady-speed CO exhaust emission rate is based on (1) CO2 dissociation, (2) the water-gas shift reaction, and (3) the incomplete combustion of hydrocarbon. It has been found as well that the steady-speed CO exhaust emission rate based on CO2 dissociation is considerably less than the rate that is based on the water-gas shift reaction. It has also been found that the steady-speed CO exhaust emission rate based on the water-gas shift reaction is the dominant source of CO exhaust emission. The study shows that the average percentage of deviation of the steady-speed simulated results from the corresponding field data is 1.7% for all freeway cycles with 99% coefficient of determination at the confidence level of 95%. This deviation of the simulated results from field data outperforms its counterpart of widely recognized models such as the comprehensive modal emissions model and VT-Micro for all freeway cycles.
An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation
Energy Technology Data Exchange (ETDEWEB)
Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2013-10-15
Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, when evaluating the economic efficiency, any existing uncertainty needs to be removed when possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared to the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced because the generation quantity in the denominator of the cost equation has been discounted. Namely, if the discount rate is reduced in the back-end process of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole.
An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation
International Nuclear Information System (INIS)
Kim, S. K.; Kang, G. B.; Ko, W. I.
2013-01-01
Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycles in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle cost may vary in each country, and the estimated cost usually prevails over the real cost, when evaluating the economic efficiency, any existing uncertainty needs to be removed when possible to produce reliable cost information. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising out of a nuclear fuel cycle cost evaluation from the viewpoint of a cost estimation model. Compared to the same discount rate model, the nuclear fuel cycle cost of the different discount rate model is reduced because the generation quantity in the denominator of the cost equation has been discounted. Namely, if the discount rate is reduced in the back-end process of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that the cost of the same discount rate model is overestimated compared with the different discount rate model as a whole
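The same- vs. different-discount-rate comparison in the two records above turns on a levelized unit cost: discounted costs divided by discounted generation. A minimal sketch with flat, illustrative cash flows (not the paper's data), showing that the result depends on which rate discounts the generation denominator:

```python
def levelized_unit_cost(costs, generation, r_cost, r_gen):
    """Levelized fuel cycle unit cost: discounted costs divided by
    discounted generation. Setting r_gen = 0 leaves the generation
    denominator undiscounted (illustrative comparison)."""
    num = sum(c / (1 + r_cost) ** t for t, c in enumerate(costs))
    den = sum(g / (1 + r_gen) ** t for t, g in enumerate(generation))
    return num / den

costs = [100.0] * 10      # annual costs, arbitrary units
gen = [50.0] * 10         # annual generation, arbitrary units
both_discounted = levelized_unit_cost(costs, gen, 0.05, 0.05)
gen_undiscounted = levelized_unit_cost(costs, gen, 0.05, 0.0)
```

With flat cash flows and a common rate on both numerator and denominator, the discount factors cancel and the unit cost is exactly the annual ratio (here 2.0); changing only the denominator's rate moves the unit cost.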
Dose rate modelled for the outdoors of a gamma irradiation plant
International Nuclear Information System (INIS)
Mangussi, J
2012-01-01
A model for the absorbed dose rate calculation in the surroundings of a gamma irradiation plant is developed. In such plants, a part of the radiation emitted upwards reaches the outdoors. The Compton scatterings on the walls of the exhaust pipes through the plant roof and on the outdoor air are modelled. The absorbed dose rate generated by the scattered radiation as far as 200 m away is calculated. The results of the models, to be used for irradiation plant design and for environmental studies, are shown in graphs (author)
CONTINUOUS MODELING OF FOREIGN EXCHANGE RATE OF USD VERSUS TRY
Directory of Open Access Journals (Sweden)
Yakup Arı
2011-01-01
Full Text Available This study aims to construct a continuous-time autoregressive (CAR) model and a continuous-time GARCH (COGARCH) model from discrete-time data of the foreign exchange rate of the United States Dollar (USD) versus the Turkish Lira (TRY). These processes are solutions to Lévy-driven stochastic differential equations. We have shown that CAR(1) and COGARCH(1,1) processes are proper models to represent the foreign exchange rate of USD and TRY for different periods of time within February 2002-June 2010.
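A CAR(1) process with a Gaussian (Brownian) driver is the Ornstein-Uhlenbeck process and admits an exact discrete-time simulation; a sketch is below. The paper's models allow more general Lévy drivers, and all parameter values here are illustrative:

```python
import math
import random

def simulate_car1(n, dt=1.0, kappa=0.3, sigma=0.1, x0=0.0, seed=42):
    """Exact discretization of a Gaussian CAR(1)/Ornstein-Uhlenbeck
    process dX = -kappa * X dt + sigma dW. Sampled at step dt, this is
    an AR(1) with coefficient exp(-kappa * dt)."""
    rng = random.Random(seed)
    phi = math.exp(-kappa * dt)
    sd = sigma * math.sqrt((1 - phi ** 2) / (2 * kappa))
    x, path = x0, [x0]
    for _ in range(n):
        x = phi * x + sd * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_car1(1000)
```

The exact AR(1) mapping is why discrete exchange-rate observations can be used to fit a continuous-time model: the continuous parameters are recovered from the discrete autoregressive coefficient.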
Evaluating the Impact of Prescription Fill Rates on Risk Stratification Model Performance.
Chang, Hsien-Yen; Richards, Thomas M; Shermock, Kenneth M; Elder Dalpoas, Stacy; J Kan, Hong; Alexander, G Caleb; Weiner, Jonathan P; Kharrazi, Hadi
2017-12-01
Risk adjustment models are traditionally derived from administrative claims. Prescription fill rates - extracted by comparing electronic health record prescriptions and pharmacy claims fills - represent a novel measure of medication adherence and may improve the performance of risk adjustment models. We evaluated the impact of prescription fill rates on claims-based risk adjustment models in predicting both concurrent and prospective costs and utilization. We conducted a retrospective cohort study of 43,097 primary care patients from the HealthPartners network between 2011 and 2012. Diagnosis and/or pharmacy claims of 2011 were used to build 3 base models using the Johns Hopkins ACG system, in addition to demographics. Model performances were compared before and after adding 3 types of prescription fill rates: primary 0-7 days, primary 0-30 days, and overall. Overall fill rates utilized all ordered prescriptions from the electronic health record while primary fill rates excluded refill orders. The overall, primary 0-7, and 0-30 days fill rates were 72.30%, 59.82%, and 67.33%. The fill rates were similar between sexes but varied across different medication classifications, whereas the youngest had the highest rate. Adding fill rates modestly improved the performance of all models in explaining medical costs (improving concurrent R² by 1.15% to 2.07%), followed by total costs (0.58% to 1.43%), and pharmacy costs (0.07% to 0.65%). The impact was greater for concurrent costs compared with prospective costs. Base models without diagnosis information showed the highest improvement using prescription fill rates. Prescription fill rates can modestly enhance claims-based risk prediction models; however, population-level improvements in predicting utilization are limited.
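The 0-7 and 0-30 day fill rates are computed by matching each EHR prescription order to a pharmacy claim within a time window; a schematic version is below (the record layout is invented for the example, not the study's schema):

```python
from datetime import date

def fill_rate(prescriptions, fills, window_days):
    """Share of ordered prescriptions with a matching pharmacy claim
    filled within `window_days` of the order date."""
    filled = 0
    for rx_id, order_date in prescriptions:
        fill_date = fills.get(rx_id)
        if fill_date is not None and 0 <= (fill_date - order_date).days <= window_days:
            filled += 1
    return filled / len(prescriptions)

# Three orders; one filled on day 3, one on day 19, one never filled
rx = [(1, date(2011, 3, 1)), (2, date(2011, 3, 1)), (3, date(2011, 3, 1))]
claims = {1: date(2011, 3, 4), 2: date(2011, 3, 20)}
rate_7 = fill_rate(rx, claims, 7)    # 1/3 within 0-7 days
rate_30 = fill_rate(rx, claims, 30)  # 2/3 within 0-30 days
```

Widening the window can only keep or raise the rate, which matches the ordering of the reported 0-7 and 0-30 day figures.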
Permanence for a Delayed Nonautonomous SIR Epidemic Model with Density-Dependent Birth Rate
Directory of Open Access Journals (Sweden)
Li Yingke
2011-01-01
Full Text Available Based on some well-known SIR models, a revised nonautonomous SIR epidemic model with distributed delay and density-dependent birth rate was considered. Applying some classical analysis techniques for ordinary differential equations and the method proposed by Wang (2002), the threshold value for the permanence and extinction of the model was obtained.
The Multi-state Latent Factor Intensity Model for Credit Rating Transitions
Koopman, S.J.; Lucas, A.; Monteiro, A.
2008-01-01
A new empirical reduced-form model for credit rating transitions is introduced. It is a parametric intensity-based duration model with multiple states and driven by exogenous covariates and latent dynamic factors. The model has a generalized semi-Markov structure designed to accommodate many of the
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.
Improving Rice Modeling Success Rate with Ternary Non-structural Fertilizer Response Model.
Li, Juan; Zhang, Mingqing; Chen, Fang; Yao, Baoquan
2018-06-13
Fertilizer response modelling is an important technical approach to realize metrological fertilization on rice. With the goal of solving the problems of a low success rate of the ternary quadratic polynomial model (TPFM) and to expand the model's applicability, this paper established a ternary non-structural fertilizer response model (TNFM) based on the experimental results from N, P and K fertilized rice fields. Our research results showed that the TNFM significantly improved the modelling success rate by addressing problems arising from setting the bias and multicollinearity in a TPFM. The results from 88 rice field trials in China indicated that the proportion of typical TNFMs that satisfy the general fertilizer response law of plant nutrition was 40.9%, while the analogous proportion of TPFMs was only 26.1%. The recommended fertilization showed a significant positive linear correlation between the two models, and the parameters N0, P0 and K0 that estimate the value of soil-supplied nutrient equivalents can be used as better indicators of yield potential in plots where no N, P or K fertilizer was applied. The theoretical analysis showed that the new model has a higher fitting accuracy and a wider application range.
Akihiko Takahashi; Kohta Takehara
2007-01-01
This paper proposes an asymptotic expansion scheme of currency options with a libor market model of interest rates and stochastic volatility models of spot exchange rates. In particular, we derive closed-form approximation formulas for the density functions of the underlying assets and for pricing currency options based on the third order asymptotic expansion scheme; we do not model a foreign exchange rate's variance such as in Heston[1993], but its volatility that follows a general time-inho...
Visual Perception Based Rate Control Algorithm for HEVC
Feng, Zeqi; Liu, PengYu; Jia, Kebin
2018-01-01
For HEVC, rate control is an indispensable video coding technology to alleviate the contradiction between video quality and the limited encoding resources during video communication. However, the rate control benchmark algorithm of HEVC ignores subjective visual perception. For key focus regions, bit allocation at the LCU level is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the bit allocation weight at the LCU level is optimized based on the visual perception of luminance and motion to ameliorate video subjective quality. Then, λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09% at no cost in bitrate accuracy compared with HEVC (HM15.0). The proposed algorithm is devoted to improving video subjective quality under various video applications.
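At its core, perception-weighted bit allocation reduces to proportional sharing: each largest coding unit (LCU) receives bits in proportion to its perceptual weight. A minimal sketch (the weights are invented; the paper's weighting of luminance and motion, and its λ/QP adjustment, are not reproduced here):

```python
def allocate_bits(total_bits, weights):
    """Proportional LCU-level bit allocation from perceptual weights:
    bits_i = total * w_i / sum(w)."""
    s = sum(weights)
    return [total_bits * w / s for w in weights]

# Four LCUs; the salient (face/high-motion) region gets a larger weight
bits = allocate_bits(1200, [3.0, 1.0, 1.0, 1.0])
```

The salient LCU here gets half the budget while the total budget is preserved, which is the mechanism by which subjective quality improves without changing the overall bitrate.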
Arduino-based noise robust online heart-rate detection.
Das, Sangita; Pal, Saurabh; Mitra, Madhuchhanda
2017-04-01
This paper introduces a noise robust real time heart rate detection system from electrocardiogram (ECG) data. An online data acquisition system is developed to collect ECG signals from human subjects. Heart rate is detected using window-based autocorrelation peak localisation technique. A low-cost Arduino UNO board is used to implement the complete automated process. The performance of the system is compared with PC-based heart rate detection technique. Accuracy of the system is validated through simulated noisy ECG data with various levels of signal to noise ratio (SNR). The mean percentage error of detected heart rate is found to be 0.72% for the noisy database with five different noise levels.
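Window-based autocorrelation peak localisation can be sketched in a few lines: within a window of samples, the lag of the autocorrelation maximum inside the physiological range gives the beat period. The sketch below uses a synthetic periodic signal rather than real ECG, and the parameter choices are illustrative:

```python
import math

def heart_rate_bpm(signal, fs, min_bpm=40, max_bpm=200):
    """Window-based autocorrelation heart rate: the lag of the
    autocorrelation peak within the physiological lag range gives
    the beat period, converted to beats per minute."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    lo = int(fs * 60 / max_bpm)          # smallest plausible lag
    hi = int(fs * 60 / min_bpm)          # largest plausible lag
    best_lag, best_val = lo, float("-inf")
    for lag in range(lo, min(hi, n - 1) + 1):
        acf = sum(x[i] * x[i + lag] for i in range(n - lag))
        if acf > best_val:
            best_val, best_lag = acf, lag
    return 60.0 * fs / best_lag

# Synthetic periodic 'ECG-like' signal at 1.2 Hz, i.e. 72 beats/min
fs = 250
sig = [math.sin(2 * math.pi * 1.2 * t / fs) for t in range(5 * fs)]
bpm = heart_rate_bpm(sig, fs)  # ~72
```

Restricting the lag search to the physiological range is what gives the method its noise robustness: broadband noise contributes little coherent energy at any single beat-period lag.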
A model for reaction rates in turbulent reacting flows
Chinitz, W.; Evans, J. S.
1984-01-01
To account for turbulent temperature and species-concentration fluctuations, a model of their effects on chemical reaction rates in computer analyses of turbulent reacting flows is presented. The model results in two parameters which multiply the terms in the reaction-rate equations. For these two parameters, graphs are presented as functions of the mean values and intensity of the turbulent fluctuations of the temperature and species concentrations. These graphs will facilitate incorporation of the model into existing computer programs which describe turbulent reacting flows. When the model was used in a two-dimensional parabolic-flow computer code to predict the behavior of an experimental, supersonic hydrogen jet burning in air, some improvement in agreement with the experimental data was obtained in the far field in the region near the jet centerline. Recommendations are included for further improvement of the model and for additional comparisons with experimental data.
Modeling decay rates of dead wood in a neotropical forest.
Hérault, Bruno; Beauchêne, Jacques; Muller, Félix; Wagner, Fabien; Baraloto, Christopher; Blanc, Lilian; Martin, Jean-Michel
2010-09-01
Variation of dead wood decay rates among tropical trees remains one source of uncertainty in global models of the carbon cycle. Taking advantage of a broad forest plot network surveyed for tree mortality over a 23-year period, we measured the remaining fraction of boles from 367 dead trees from 26 neotropical species widely varying in wood density (0.23-1.24 g cm^-3) and tree circumference at time of death (31.5-272.0 cm). We modeled decay rates within a Bayesian framework assuming a first-order differential equation to model the decomposition process and tested for the effects of forest management (selective logging vs. unexploited), of mode of death (standing vs. downed) and of topographical levels (bottomlands vs. hillsides vs. hilltops) on wood decay rates. The general decay model predicts the observed remaining fraction of dead wood (R^2 = 60%) with only two biological predictors: tree circumference at time of death and wood specific density. Neither selective logging nor local topography had a differential effect on wood decay rates. Including the mode of death into the model revealed that standing dead trees decomposed faster than downed dead trees, but the gain in model accuracy remains rather marginal. Overall, these results suggest that the release of carbon from tropical dead trees to the atmosphere can be simply estimated using tree circumference at time of death and wood density.
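The first-order decay assumption corresponds to dM/dt = -kM, so the remaining fraction after t years is exp(-kt). In the sketch below, the dependence of k on wood density and circumference is a made-up illustrative form (denser, larger boles decay more slowly), not the paper's fitted Bayesian model.

```python
import math

def decay_constant(wood_density, circumference_cm, k0=0.3, a=1.0, b=0.5):
    # Hypothetical form only: slower decay for denser wood and larger circumference.
    return k0 / (a * wood_density * (circumference_cm / 100.0) ** b)

def remaining_fraction(t_years, k):
    # Closed-form solution of the first-order model dM/dt = -k M.
    return math.exp(-k * t_years)
```

With the two abstract-named predictors in hand, carbon release over a period is simply the initial bole mass times 1 - remaining_fraction(t, k).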
Modeling the intracellular pathogen-immune interaction with cure rate
Dubey, Balram; Dubey, Preeti; Dubey, Uma S.
2016-09-01
Many common and emergent infectious diseases like influenza, SARS, hepatitis and Ebola are caused by viral pathogens. These infections can be controlled or prevented by understanding the dynamics of pathogen-immune interaction in vivo. In this paper, the interaction of pathogens with uninfected and infected cells, in the presence or absence of an immune response, is considered in four different cases. In the first case, the model considers a saturated nonlinear infection rate and linear cure rate, without absorption of pathogens into uninfected cells and without immune response. The next model considers the effect of absorption of pathogens into uninfected cells, while all other terms are the same as in the first case. The third model incorporates the innate immune response, humoral immune response and cytotoxic T lymphocyte (CTL) mediated immune response, with cure rate and without absorption of pathogens into uninfected cells. The last model is an extension of the third model in which the effect of absorption of pathogens into uninfected cells has been considered. Positivity and boundedness of solutions are established to ensure the well-posedness of the problem. All four models have two equilibria: a pathogen-free equilibrium point and a pathogen-present equilibrium point. In each case, the stability of each equilibrium point is investigated. The pathogen-free equilibrium is globally asymptotically stable when the basic reproduction number is less than or equal to unity. This implies that control or prevention of infection is independent of the initial concentrations of uninfected cells, infected cells, pathogens and immune responses in the body. The proposed models show that the introduction of immune response and cure rate strongly affects the stability behavior of the system. Further, on computing the basic reproduction number, it has been found to be minimal for the fourth model vis-à-vis the other models. The analytical findings of each model have been exemplified by
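The four models share an ODE skeleton: uninfected cells x, infected cells y, pathogens p, with a saturated infection term b·x·p/(1+a·p) and a linear cure term g·y returning infected cells to the uninfected pool. An Euler-stepped sketch of that structure is given below; all parameter values are illustrative and the immune compartments are omitted.

```python
def simulate(x, y, p, steps=1000, dt=0.01,
             s=10.0, d=0.1, b=0.01, a=0.1, g=0.05, e=0.2, c=1.0, k=5.0):
    """x: uninfected cells, y: infected cells, p: pathogens (illustrative parameters)."""
    for _ in range(steps):
        infection = b * x * p / (1.0 + a * p)   # saturated nonlinear infection rate
        dx = s - d * x - infection + g * y      # source, natural death, infection, cure
        dy = infection - e * y - g * y          # gain by infection, death, cure
        dp = k * y - c * p                      # production by infected cells, clearance
        x, y, p = x + dt * dx, y + dt * dy, p + dt * dp
    return x, y, p
```

At the pathogen-free state x = s/d, y = p = 0 the system is stationary, mirroring the pathogen-free equilibrium point discussed in the abstract.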
Modelling of behaviour of metals at high strain rates
Panov, Vili
2006-01-01
The aim of the work presented in this thesis was to produce the improvement of the existing simulation tools used for the analysis of materials and structures, which are dynamically loaded and subjected to the different levels of temperatures and strain rates. The main objective of this work was development of tools for modelling of strain rate and temperature dependant behaviour of aluminium alloys, typical for aerospace structures with pronounced orthotropic properties, and their implementa...
Do expert ratings or economic models explain champagne prices?
DEFF Research Database (Denmark)
Bentzen, Jan Børsen; Smith, Valdemar
2008-01-01
Champagne is bought with low frequency and many consumers most likely do not have or seek full information on the quality of champagne. Some consumers may rely on the reputation of particular brands, e.g. "Les Grandes Marques", some consumers choose to gain information from sensory ratings...... of champagne. The aim of this paper is to analyse the champagne prices on the Scandinavian markets by applying a hedonic price function in a comparative framework with minimal models using sensory ratings....
Ifenthaler, Dirk; Seel, Norbert M.
2013-01-01
In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…
Release rate of diazinon from microcapsule based on melamine formaldehyde
Noviana Utami C., S.; Rochmadi
2018-04-01
Microcapsules containing diazinon as the core material and melamine formaldehyde (MF) as the membrane material were synthesized by in situ polymerization. This research studies the effect of pH and temperature on the release rate of diazinon from the melamine formaldehyde-based microcapsules in an aqueous medium. The results showed that pH and temperature have little effect on the release rate of diazinon, because diffusion through the microcapsule membrane is not influenced by the pH and temperature of the solution outside the microcapsule.
Forecast model of safety economy contribution rate of China
Institute of Scientific and Technical Information of China (English)
LIU Li-jun; SHI Shi-liang
2005-01-01
Rational and accurate computation of the safety economy contribution rate has far-reaching practical significance for improving society's awareness of safety, for investment in national safety, and for national macro-safety decision-making. An accurate function between safety inputs and outputs was obtained through an econometric model. The forecasted safety economy contribution rate for China in 2005 is 3.01%, and the forecasted ratio between safety inputs and outputs is 1:1.81. The model accords with the practice of China and the results are satisfactory.
Modeling gallic acid production rate by empirical and statistical analysis
Directory of Open Access Journals (Sweden)
Bratati Kar
2000-01-01
For predicting the rate of the enzymatic reaction, empirical correlations based on experimental results obtained under various operating conditions have been developed. The models represent both the activation and deactivation conditions of enzymatic hydrolysis, and the results have been analyzed by analysis of variance (ANOVA). Tannase activity was found to be maximum at an incubation time of 5 min, reaction temperature 40°C, pH 4.0, initial enzyme concentration 0.12 v/v, initial substrate concentration 0.42 mg/ml, and ionic strength 0.2 M; under these optimal conditions, the maximum rate of gallic acid production was 33.49 µmoles/ml/min.
A global reference for caesarean section rates (C-Model): a multicountry cross-sectional study.
Souza, J P; Betran, A P; Dumont, A; de Mucio, B; Gibbs Pickens, C M; Deneux-Tharaux, C; Ortiz-Panozo, E; Sullivan, E; Ota, E; Togoobaatar, G; Carroli, G; Knight, H; Zhang, J; Cecatti, J G; Vogel, J P; Jayaratne, K; Leal, M C; Gissler, M; Morisaki, N; Lack, N; Oladapo, O T; Tunçalp, Ö; Lumbiganon, P; Mori, R; Quintana, S; Costa Passos, A D; Marcolin, A C; Zongo, A; Blondel, B; Hernández, B; Hogue, C J; Prunet, C; Landman, C; Ochir, C; Cuesta, C; Pileggi-Castro, C; Walker, D; Alves, D; Abalos, E; Moises, Ecd; Vieira, E M; Duarte, G; Perdona, G; Gurol-Urganci, I; Takahiko, K; Moscovici, L; Campodonico, L; Oliveira-Ciabati, L; Laopaiboon, M; Danansuriya, M; Nakamura-Pereira, M; Costa, M L; Torloni, M R; Kramer, M R; Borges, P; Olkhanud, P B; Pérez-Cuevas, R; Agampodi, S B; Mittal, S; Serruya, S; Bataglia, V; Li, Z; Temmerman, M; Gülmezoglu, A M
2016-02-01
Objective: To generate a global reference for caesarean section (CS) rates at health facilities. Design: Cross-sectional study. Setting: Health facilities from 43 countries. Population: 38,324 women giving birth in 22 countries for model building, and 10,045,875 women giving birth in 43 countries for model testing. Methods: We hypothesised that mathematical models could determine the relationship between clinical-obstetric characteristics and CS. These models generated probabilities of CS that could be compared with the observed CS rates. We devised a three-step approach to generate the global benchmark of CS rates at health facilities: creation of a multi-country reference population, building mathematical models, and testing these models. Main outcome measures: Area under the ROC curves, diagnostic odds ratio, expected CS rate, observed CS rate. Results: According to the different versions of the model, areas under the ROC curves suggested a good discriminatory capacity of C-Model, with summary estimates ranging from 0.832 to 0.844. The C-Model was able to generate expected CS rates adjusted for the case-mix of the obstetric population. We have also prepared an e-calculator to facilitate use of C-Model (www.who.int/reproductivehealth/publications/maternal_perinatal_health/c-model/en/). Conclusions: This article describes the development of a global reference for CS rates. Based on maternal characteristics, this tool was able to generate an individualised expected CS rate for health facilities or groups of health facilities. With C-Model, obstetric teams, health system managers, health facilities, health insurance companies, and governments can produce a customised reference CS rate for assessing use (and overuse) of CS. The C-Model provides a customized benchmark for caesarean section rates in health facilities and systems. © 2015 World Health Organization; licensed by John Wiley & Sons Ltd on behalf of Royal College of Obstetricians and Gynaecologists.
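The core mechanics — a model turning each woman's clinical-obstetric characteristics into a CS probability, averaged over a facility's population to give an expected, case-mix-adjusted CS rate — can be sketched with a logistic form. The coefficients below are placeholders, not C-Model's published ones.

```python
import math

def cs_probability(features, coefs, intercept):
    # Logistic model: P(CS) = 1 / (1 + exp(-(intercept + sum(coef * feature)))).
    z = intercept + sum(c * f for c, f in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

def expected_cs_rate(population, coefs, intercept):
    # Case-mix-adjusted benchmark: mean predicted probability over the facility's births.
    probs = [cs_probability(f, coefs, intercept) for f in population]
    return sum(probs) / len(probs)
```

A facility's observed CS rate can then be compared against its own expected_cs_rate to flag possible overuse, which is the comparison the abstract describes.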
A new constitutive model for prediction of impact rates response of polypropylene
Directory of Open Access Journals (Sweden)
Buckley C.P.
2012-08-01
This paper proposes a new constitutive model for predicting the impact rates response of polypropylene. Impact rates, as used here, refer to strain rates greater than 1000 1/s. The model is a physically based, three-dimensional constitutive model which incorporates the contributions of the amorphous, crystalline, pseudo-amorphous and entanglement networks to the constitutive response of polypropylene. The model mathematics is based on the well-known Glass-Rubber model originally developed for glassy polymers but the arguments have herein been extended to semi-crystalline polymers. In order to predict the impact rates behaviour of polypropylene, the model exploits the well-known framework of multiple processes yielding of polymers. This work argues that two dominant viscoelastic relaxation processes – the alpha- and beta-processes – can be associated with the yield responses of polypropylene observed at low-rate-dominant and impact-rates-dominant loading regimes. Compression test data on polypropylene have been used to validate the model. The study has found that the model predicts quite well the experimentally observed nonlinear rate-dependent impact response of polypropylene.
Model-based Software Engineering
DEFF Research Database (Denmark)
Kindler, Ekkart
2010-01-01
The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...
Credit Rating via Dynamic Slack-Based Measure and Its Optimal Investment Strategy
A. Delavarkhalafi; A. Poursherafatan
2015-01-01
In this paper we assess the credit rating of firms that have applied for a loan. We introduce a model, named Dynamic Slack-Based Measure (DSBM), for measuring the credit rating of applicant companies. Selecting financial ratios that represent the financial state of a company in the best possible way is one of the most challenging parts of any credit rating analysis. Ranking first requires identifying the appropriate variables; therefore we introduce five financial variables to provide a ...
Principles of models based engineering
Energy Technology Data Exchange (ETDEWEB)
Dolin, R.M.; Hefele, J.
1996-11-01
This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.
Planning for rate base treatment of large power plants
International Nuclear Information System (INIS)
Faruki, C.J.
1986-01-01
This paper addresses two related areas of planning for inclusion in rate base of large generating stations. First, the paper discusses the range of options available as to how the plant is to go into rate base, e.g., phase-in plans. In this connection the process of generating the entire range of options that may be available is described and examined. Second, the paper examines innovative ways of using procedures (e.g., accounting proceedings, settlement procedures, cost caps, and other ideas short of a full-blown rate case) and the resources available in the ratemaking arena, to obtain, in the least painful way possible, the necessary ratemaking orders. The thesis is that there must be better alternatives to the many proceedings that have either begun as, or seem to be leading to, endless retrospective examinations of multiple questions (from load forecasting to construction management to continuation-of-construction decisions) under the label of prudence inquiries
International Nuclear Information System (INIS)
Chapman, O.J.V.; Baker, A.E.
1993-01-01
Risk based analysis is a tool becoming available to both engineers and managers to aid decision making concerning plant matters such as In-Service Inspection (ISI). In order to develop a risk based method, some form of Structural Reliability Risk Assessment (SRRA) needs to be performed to provide a probability-of-failure ranking for all sites around the plant. A Probabilistic Risk Assessment (PRA) can then be carried out to combine these possible events with the capability of plant safety systems and procedures, to establish the consequences of failure for the sites. In this way the probabilities of failure are converted into a risk-based ranking which can be used to assist the process of deciding which sites should be included in an ISI programme. This paper reviews the technique and typical results of a risk-based ranking assessment carried out for nuclear power plant pipework. (author)
Evaporation rate-based selection of supramolecular chirality.
Hattori, Shingo; Vandendriessche, Stefaan; Koeckelberghs, Guy; Verbiest, Thierry; Ishii, Kazuyuki
2017-03-09
We demonstrate evaporation rate-based selection of supramolecular chirality for the first time. P-type aggregates prepared by fast evaporation and M-type aggregates prepared by slow evaporation are kinetic and thermodynamic products, respectively, under dynamic reaction conditions. These findings provide a novel solution reaction chemistry under dynamic reaction conditions.
International Nuclear Information System (INIS)
Huang Mingxin; Rivera-Diaz-del-Castillo, Pedro E J; Zwaag, Sybrand van der; Bouaziz, Olivier
2009-01-01
Based on the theory of irreversible thermodynamics, the present work proposes a dislocation-based model to describe the plastic deformation of FCC metals over wide ranges of strain rates. The stress-strain behaviour and the evolution of the average dislocation density are derived. It is found that there is a transitional strain rate (~10^4 s^-1) above which phonon drag effects appear, resulting in a significant increase in the flow stress and the average dislocation density. The model is applied to pure Cu deformed at room temperature and at strain rates ranging from 10^-5 to 10^6 s^-1, showing good agreement with experimental results.
Vapor generation rate model for dispersed drop flow
International Nuclear Information System (INIS)
Unal, C.; Tuzla, K.; Cokmez-Tuzla, A.F.; Chen, J.C.
1991-01-01
A comparison of predictions of existing nonequilibrium post-CHF heat transfer models with the recently obtained rod bundle data has been performed. The models used the experimental conditions and wall temperatures to predict the heat flux and vapor temperatures at the location of interest. No existing model was able to reasonably predict the vapor superheat and the wall heat flux simultaneously. Most of the models, except Chen-Sundaram-Ozkaynak, failed to predict the wall heat flux, while all of the models could not predict the vapor superheat data or trends. A recently developed two-region heat transfer model, the Webb-Chen two-region model, did not give a reasonable prediction of the vapor generation rate in the far field of the CHF point. A new correlation was formulated to predict the vapor generation rate in convective dispersed droplet flow in terms of thermal-hydraulic parameters and thermodynamic properties. A comparison of predictions of the two-region heat transfer model, with the use of a presently developed correlation, with all the existing post-CHF data, including single-tube and rod bundle, showed significant improvements in predicting the vapor superheat and tube wall heat flux trends. (orig.)
Risky forward interest rates and swaptions: Quantum finance model and empirical results
Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra
2018-02-01
Risk-free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk-free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk-free forward interest rates have been discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk-free forward interest rates is extended to the case of risky forward interest rates. The examples of the Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both (a) as a stand-alone case and (b) as driven by the US forward interest rates plus a spread, having its own term structure, above the US forward interest rates. Both the US forward interest rates and the term structure for the spread are modeled by a two-dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested using an empirical study of swaptions for the US Dollar, showing that the model is quite accurate. A prediction for the market price of the put option for the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case, and the Malaysian forward interest rates are shown to have anomalies absent in the US and Singapore cases. The model's prediction for a Malaysian interest rate swap is obtained.
Level-ARCH Short Rate Models with Regime Switching
DEFF Research Database (Denmark)
Christiansen, Charlotte
This paper introduces regime switching volatility into level- ARCH models for the short rates of the US, the UK, and Germany. Once regime switching and level effects are included there are no gains from including ARCH effects. It is of secondary importance exactly how the regime switching is spec...
Modeling sludge accumulation rates in lined pit latrines in slum ...
African Journals Online (AJOL)
Yvonne
… should include geo-physical characterization of soil and drainage of pit latrine sites … overall quality of the models had to be assessed by validation … Key words: faecal sludge, sludge accumulation rates, slum areas, lined pit latrines.
Modeling for Dose Rate Calculation of the External Exposure to Gamma Emitters in Soil
International Nuclear Information System (INIS)
Allam, K. A.; El-Mongy, S. A.; El-Tahawy, M. S.; Mohsen, M. A.
2004-01-01
Based on the model proposed and developed in the Ph.D. thesis of the first author of this work, dose rate conversion factors (absorbed dose rate in air per specific activity of soil, in nGy h^-1 per Bq kg^-1) are calculated 1 m above the ground for photon-emitting natural radionuclides uniformly distributed in the soil. This new, simple dose rate calculation software was used to calculate the dose rate in air 1 m above the ground, and the results were compared with those obtained by five different groups. Although the developed model is extremely simple, the results of calculations based on it show excellent agreement with those obtained by the above-mentioned models, especially the one adopted by UNSCEAR. (authors)
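The conversion-factor approach reduces to a weighted sum: each radionuclide's soil activity times its dose coefficient. The coefficients below are UNSCEAR-style values quoted from memory and used here only to illustrate the arithmetic; consult the UNSCEAR report for authoritative numbers.

```python
# Dose rate conversion factors in nGy/h per Bq/kg (illustrative, UNSCEAR-style values).
DCF = {"U-238 series": 0.462, "Th-232 series": 0.604, "K-40": 0.0417}

def air_dose_rate(specific_activity):
    """Absorbed dose rate in air 1 m above ground (nGy/h) for uniform soil activity (Bq/kg)."""
    return sum(specific_activity[n] * DCF[n] for n in specific_activity)
```

For typical world-average soil activities the sum lands in the tens of nGy/h, the range such models are normally compared over.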
Prediction of creamy mouthfeel based on texture attribute ratings of dairy desserts
Weenen, H.; Jellema, R.H.; Wijk, de R.A.
2006-01-01
A quantitative predictive model for creamy mouthfeel in dairy desserts was developed, using PLS multivariate analysis of texture attributes. Based on 40 experimental custard desserts, a good correlation was obtained between measured and predicted creamy mouthfeel ratings. The model was validated by
International Nuclear Information System (INIS)
Odette, G.R.; Donahue, E.; Lucas, G.E.; Sheckherd, J.W.
1996-01-01
The influence of loading rate and constraint on the effective fracture toughness as a function of temperature [K_e(T)] of the fusion program heat of V-4Cr-4Ti was measured using subsized, three-point bend specimens. The constitutive behavior was characterized as a function of temperature and strain rate using small tensile specimens. Data in the literature on this alloy were also analysed to determine the effect of irradiation on K_e(T) and the energy-temperature (E-T) curves measured in subsized Charpy V-notch tests. It was found that V-4Cr-4Ti undergoes "normal" stress-controlled cleavage fracture below a temperature marking a sharp ductile-to-brittle transition. The transition temperature is increased by higher loading rates, irradiation hardening and triaxial constraint. Shifts in a reference transition temperature due to higher loading rates and irradiation can be reasonably predicted by a simple equivalent yield stress model. These results also suggest that size and geometry effects, which mediate constraint, can be modeled by combining local critical stressed area σ*/A* fracture criteria with finite element method simulations of crack tip stress fields. The fundamental understanding reflected in these models will be needed to develop K_e(T) curves for a range of loading rates, irradiation conditions, structural size scales and geometries relying (in large part) on small specimen tests. Indeed, it may be possible to develop a master K_e(T) curve-shift method to account for these variables. Such reliable and flexible failure assessment methods are critical to the design and safe operation of defect-tolerant vanadium structures.
CEAI: CCM based Email Authorship Identification Model
DEFF Research Database (Denmark)
Nizamani, Sarwat; Memon, Nasrullah
2013-01-01
In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature-set to include some...... more interesting and effective features for email authorship identification (e.g. the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included Info Gain feature selection based...... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...
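Features like "the last punctuation mark used in an email" and "capitalization at the start" are simple string statistics. A hypothetical extractor (names and the exact feature set are ours, not the paper's) might look like:

```python
def email_style_features(text):
    """Toy stylometric features for authorship identification."""
    puncts = [c for c in text if c in ".!?,;:"]
    return {
        "last_punct": puncts[-1] if puncts else "",  # last punctuation mark used
        "starts_capital": text[:1].isupper(),        # capitalization at the start
        "exclaim_ratio": text.count("!") / max(len(text), 1),
    }
```

Feature dictionaries like this would then be vectorized and fed to the cluster-based classifier (or an SVM baseline) described in the abstract.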
Characterization of infiltration rates from landfills: supporting groundwater modeling efforts.
Moo-Young, Horace; Johnson, Barnes; Johnson, Ann; Carson, David; Lew, Christine; Liu, Salley; Hancocks, Katherine
2004-01-01
The purpose of this paper is to review the literature to characterize infiltration rates from landfill liners to support groundwater modeling efforts. The focus of this investigation was on collecting studies that describe the performance of liners 'as installed' or 'as operated'. This document reviews the state of the science and practice on the infiltration rate through compacted clay liners (CCL) for 149 sites and geosynthetic clay liners (GCL) for 1 site. In addition, it reviews the leakage rate through geomembrane (GM) liners and composite liners for 259 sites. For compacted clay liners there was limited information on infiltration rates (only 9 sites reported them), so it was difficult to develop a national distribution. The field hydraulic conductivities for natural clay liners range from 1 × 10^-9 cm s^-1 to 1 × 10^-4 cm s^-1, with an average of 6.5 × 10^-8 cm s^-1. There was limited information on geosynthetic clay liners. For composite-lined and geomembrane systems, the leak detection system flow rates were utilized. The average monthly flow rate for composite liners ranged from 0-32 lphd (litres per hectare per day) for geomembrane and GCL systems to 0-1410 lphd for geomembrane and CCL systems. The increased infiltration for the geomembrane and CCL system may be attributed to consolidation water from the clay.
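The two sets of units in this abstract connect through Darcy's law: under an assumed unit hydraulic gradient, a conductivity in cm/s maps directly to a flux in litres per hectare per day (lphd). A sketch of that conversion, applied to the review's average CCL conductivity:

```python
def cm_per_s_to_lphd(k_cm_s, gradient=1.0):
    """Darcy flux q = K * i, converted from cm/s to litres per hectare per day."""
    q_m_s = (k_cm_s / 100.0) * gradient          # flux in m/s
    return q_m_s * 10_000.0 * 86_400.0 * 1000.0  # m^2 per ha, s per day, L per m^3
```

Under a unit gradient, the average field conductivity of 6.5 × 10^-8 cm s^-1 works out to roughly 560 lphd, which is within the 0-1410 lphd span quoted for geomembrane-and-CCL leak detection flows.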
A quasi-independence model to estimate failure rates
International Nuclear Information System (INIS)
Colombo, A.G.
1988-01-01
The use of a quasi-independence model to estimate failure rates is investigated. Gate valves of nuclear plants are considered, and two qualitative covariates are taken into account: plant location and reactor system. Independence between the two covariates and an exponential failure model are assumed. The failure rate of the components of a given system and plant is assumed to be constant, but it may vary from one system to another and from one plant to another. This leads to the analysis of a contingency table. A particular feature of the model is the different operating times of the components in the various cells, which can also be equal to zero. The concept of independence of the covariates is then replaced by that of quasi-independence. The latter definition, however, is used in a broader sense than usual. Suitable statistical tests are discussed and a numerical example illustrates the use of the method. (author)
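Under independence of the covariates, the failure rate in cell (i, j) factorizes as λ_ij = a_i·b_j (plant effect times system effect), with failure counts Poisson over each cell's operating time; cells with zero operating time simply drop out of the fit, which is where quasi-independence enters. A minimal alternating maximum-likelihood sketch (our own implementation, not the paper's):

```python
def fit_rates(failures, exptime, iters=200):
    """Fit lambda_ij = a_i * b_j to Poisson failure counts over operating times."""
    I, J = len(failures), len(failures[0])
    a, b = [1.0] * I, [1.0] * J
    for _ in range(iters):
        for i in range(I):  # update plant effects holding system effects fixed
            denom = sum(b[j] * exptime[i][j] for j in range(J))
            if denom > 0:
                a[i] = sum(failures[i][j] for j in range(J)) / denom
        for j in range(J):  # update system effects holding plant effects fixed
            denom = sum(a[i] * exptime[i][j] for i in range(I))
            if denom > 0:
                b[j] = sum(failures[i][j] for i in range(I)) / denom
    return a, b
```

Cells where exptime is zero contribute nothing to either denominator, so the zero-exposure feature of the model is handled without special casing.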
Boumans, M.; Martini, C.; Boumans, M.
2014-01-01
The aim of the rational-consensus method is to produce "rational consensus", that is, "mathematical aggregation", by weighing the performance of each expert on the basis of his or her knowledge and ability to judge relevant uncertainties. The measurement of the performance of the experts is based on
Modeling of the interest rate policy of the central bank of Russia
Shelomentsev, A. G.; Berg, D. B.; Detkov, A. A.; Rylova, A. P.
2017-11-01
This paper investigates interactions among money supply, exchange rates, inflation, and nominal interest rates, which are regulating parameters of Central Bank of Russia policy. The study is based on Russian data for 2002-2016. The major findings are: 1) the interest rate demonstrates almost no relation with inflation; 2) the ties between money supply and the nominal interest rate are strong; 3) money supply and inflation show meaningful relations only when compared through their growth rates. We have developed a dynamic model which can be used in forecasting macroeconomic processes.
A novel multitemporal insar model for joint estimation of deformation rates and orbital errors
Zhang, Lei; Ding, Xiaoli; Lu, Zhong; Jung, Hyungsup; Hu, Jun; Feng, Guangcai
2014-01-01
be corrected efficiently and reliably. We propose a novel model that is able to jointly estimate deformation rates and orbital errors based on the different spatio-temporal characteristics of the two types of signals. The proposed model is able to isolate a long
Index Option Pricing Models with Stochastic Volatility and Stochastic Interest Rates
Jiang, G.J.; van der Sluis, P.J.
2000-01-01
This paper specifies a multivariate stochastic volatility (SV) model for the S&P500 index and spot interest rate processes. We first estimate the multivariate SV model via the efficient method of moments (EMM) technique based on observations of underlying state variables, and then investigate the
DEFF Research Database (Denmark)
Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram
2018-01-01
Use of model-driven approaches has been increasing to significantly benefit the process of building complex systems. Recently, an approach for specifying model behavior using UML activities has been devised to support the creation of DEVS models in a disciplined manner based on the model-driven architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number of the artifacts of the UML 2.5 activities and actions, from the vantage point of DEVS behavioral modeling, are covered in detail. Their semantics are discussed to the extent of time-accurate requirements for simulation. We characterize them in correspondence with the specification of the atomic model behavior. We…
The fusion rate in the transmission resonance model
International Nuclear Information System (INIS)
Jaendel, M.
1992-01-01
Resonant transmission of deuterons through a chain of target deuterons in a metal matrix has been suggested as an explanation for the cold fusion phenomena. In this paper the fusion rate in such transmission resonance models is estimated, and the basic physical constraints are discussed. The dominating contribution to the fusion yield is found to come from metastable states. The fusion rate is well described by the Wentzel-Kramers-Brillouin approximation and appears to be much too small to explain the experimental anomalies.
Division-Based, Growth Rate Diversity in Bacteria
Directory of Open Access Journals (Sweden)
Ghislain Y. Gangwe Nana
2018-05-01
Full Text Available To investigate the nature and origins of growth rate diversity in bacteria, we grew Escherichia coli and Bacillus subtilis in liquid minimal media and, after different periods of 15N-labeling, analyzed and imaged isotope distributions in individual cells with Secondary Ion Mass Spectrometry. We find a striking inter- and intra-cellular diversity, even in steady state growth. This is consistent with the strand-dependent, hyperstructure-based hypothesis that a major function of the cell cycle is to generate coherent, growth rate diversity via the semi-conservative pattern of inheritance of strands of DNA and associated macromolecular assemblies. We also propose quantitative, general, measures of growth rate diversity for studies of cell physiology that include antibiotic resistance.
A Matérn model of the spatial covariance structure of point rain rates
Sun, Ying
2014-07-15
It is challenging to model a precipitation field due to its intermittent and highly scale-dependent nature. Many models of point rain rates or areal rainfall observations have been proposed and studied for different time scales. Among them, the spectral model based on a stochastic dynamical equation for the instantaneous point rain rate field is attractive, since it naturally leads to a consistent space–time model. In this paper, we note that the spatial covariance structure of the spectral model is equivalent to the well-known Matérn covariance model. Using high-quality rain gauge data, we estimate the parameters of the Matérn model for different time scales and demonstrate that the Matérn model is superior to an exponential model, particularly at short time scales.
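The equivalence the abstract exploits, namely that the Matérn covariance with smoothness ν = 1/2 reduces exactly to the exponential model, can be sketched directly. A minimal SciPy-based illustration (parameter values are assumptions, not the paper's fitted estimates):

```python
import numpy as np
from scipy.special import gamma as gamma_fn, kv  # kv = modified Bessel K_nu

def matern_cov(h, sigma2=1.0, rho=1.0, nu=0.5):
    """Matern covariance C(h) = sigma2 * 2^(1-nu)/Gamma(nu) * (h/rho)^nu * K_nu(h/rho).

    With nu = 0.5 this collapses to the exponential model sigma2 * exp(-h/rho).
    """
    h = np.asarray(h, dtype=float)
    c = np.full_like(h, sigma2)          # C(0) = sigma2 (the variance)
    pos = h > 0
    hp = h[pos] / rho
    c[pos] = sigma2 * (2.0 ** (1.0 - nu) / gamma_fn(nu)) * hp ** nu * kv(nu, hp)
    return c
```

Fitting ν along with the range ρ is what lets the Matérn family out-perform the fixed-smoothness exponential model at short time scales, as the abstract reports.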
Predicting online ratings based on the opinion spreading process
He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo
2015-10-01
Predicting users' online ratings is a challenging problem that has drawn much attention. In this paper, we present a rating prediction method by combining the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method could produce a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both opinion sender and receiver. The numerical results for Movielens and Netflix data sets show that this algorithm has a better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation without increasing computational complexity. By tuning λ, our method could further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, on Movielens and Netflix data sets, the corresponding algorithmic accuracy (MAE and RMSE) are improved by 11.26% and 8.84%, and 13.49% and 10.52%, compared to the item average method, respectively.
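The degree-weighted similarity idea can be sketched as below. This is a deliberately simplified, hypothetical variant (binary co-rating counts with a λ-tunable degree normalisation), not the paper's exact opinion-diffusion formula:

```python
import numpy as np

def diffusion_similarity(R, lam=0.5):
    """User-user similarity from a user-item rating matrix R (0 = unrated).

    Co-rating counts are divided by k_i^lam * k_j^(1-lam), mimicking a
    degree-penalised diffusion; the normalisation here is an illustrative
    stand-in for the paper's opinion-spreading definition."""
    A = (R > 0).astype(float)           # who rated what
    k = A.sum(axis=1)                   # user degrees (number of rated items)
    W = A @ A.T                         # co-rated item counts
    denom = np.outer(k ** lam, k ** (1.0 - lam))
    denom[denom == 0] = 1.0
    W = W / denom
    np.fill_diagonal(W, 0.0)            # no self-similarity
    return W

def predict(R, W, user, item):
    """Similarity-weighted average of the other users' ratings of `item`."""
    rated = R[:, item] > 0
    w = W[user] * rated
    return float(w @ R[:, item] / w.sum()) if w.sum() > 0 else 0.0
```

Sweeping `lam` and scoring MAE/RMSE on held-out ratings would reproduce the kind of parameter tuning the abstract describes.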
[Performance based regulation: a strategy to increase breastfeeding rates].
Cobo-Armijo, Fernanda; Charvel, Sofía; Hernández-Ávila, Mauricio
2017-01-01
The decreasing breastfeeding rate in México is of public health concern. In this paper we discuss an innovative regulatory approach -Performance Based Regulation- and its application to improve breastfeeding rates. This approach forces industry to take responsibility for the lack of breastfeeding and its consequences. Failure to comply with these targets results in financial penalties. Applying performance based regulation as a strategy to improve breastfeeding is feasible because: the breastmilk substitutes market is an oligopoly, hence it is easy to identify the contribution of each market participant; the regulation's target population is clearly defined; it has a clear regulatory standard which can be easily evaluated, and sanctions for infringement can be defined under objective parameters. Recommendations: modify public policy, conclude concerted agreements with the industry, create persuasive sanctions, strengthen enforcement activities and coordinate every action with the International Code of Marketing of Breast-milk Substitutes.
Heart rate measurement based on face video sequence
Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian
2015-03-01
This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects, and detected remote PPG signals through video sequences. Remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, and CSPT is used for the first time in the study of remote PPG signals in this paper. Both of the methods can acquire heart rate, but compared with BSST, CSPT has clearer physical meaning, and the computational complexity of CSPT is lower than that of BSST. Our work shows that heart rates detected by CSPT method have good consistency with the heart rates measured by a finger clip oximeter. With good accuracy and low computational complexity, the CSPT method has a good prospect for the application in the field of home medical devices and mobile health devices.
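The spectral step shared by both analysis routes, locating the dominant pulse frequency in a remote PPG trace, can be sketched as follows. Face detection, channel separation and the BSST/CSPT specifics are omitted, and the physiological band limits are assumptions:

```python
import numpy as np

def heart_rate_from_signal(sig, fs):
    """Estimate pulse rate (bpm) as the dominant spectral peak of a detrended
    PPG-like trace, restricted to an assumed physiological band of
    0.7-3.0 Hz (42-180 bpm)."""
    sig = np.asarray(sig, dtype=float)
    sig = sig - sig.mean()                       # remove the DC component
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

In practice the input trace would be the mean pixel intensity of a facial region over the video frames, sampled at the camera frame rate `fs`.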
Zhou, Wei; Wen, Junhao; Qu, Qiang; Zeng, Jun; Cheng, Tian
2018-01-01
Recommender systems are vulnerable to shilling attacks. Forged user-generated content data, such as user ratings and reviews, are used by attackers to manipulate recommendation rankings. Shilling attack detection in recommender systems is of great significance to maintain the fairness and sustainability of recommender systems. The current studies have problems in terms of the poor universality of algorithms, difficulty in selection of user profile attributes, and lack of an optimization mechanism. In this paper, a shilling behaviour detection structure based on abnormal group user findings and rating time series analysis is proposed. This paper adds to the current understanding in the field by studying the credibility evaluation model in-depth based on the rating prediction model to derive proximity-based predictions. A method for detecting suspicious ratings based on suspicious time windows and target item analysis is proposed. Suspicious rating time segments are determined by constructing a time series, and data streams of the rating items are examined and suspicious rating segments are checked. To analyse features of shilling attacks by a group user's credibility, an abnormal group user discovery method based on time series and time window is proposed. Standard testing datasets are used to verify the effect of the proposed method.
Modeling of Bit Error Rate in Cascaded 2R Regenerators
DEFF Research Database (Denmark)
Öhman, Filip; Mørk, Jesper
2006-01-01
This paper presents a simple and efficient model for estimating the bit error rate in a cascade of optical 2R-regenerators. The model includes the influences of amplifier noise, finite extinction ratio and nonlinear reshaping. The interplay between the different signal impairments and the regenerating nonlinearity is investigated. It is shown that an increase in nonlinearity can compensate for an increase in noise figure or decrease in signal power. Furthermore, the influence of the improvement in signal extinction ratio along the cascade and the importance of choosing the proper threshold are discussed.
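The standard Gaussian-noise mapping from Q-factor to bit error rate underlies models of this kind. The toy cascade below adds amplifier noise at each stage and lets a reshaping "steepness" parameter compress it; the transfer function is an illustrative assumption, not the paper's model:

```python
import math

def ber_from_q(q):
    """Gaussian-noise bit error rate: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2))

def cascade_q(mu1=1.0, mu0=0.1, sigma=0.02, stages=10, gamma=2.0):
    """Toy 2R cascade: each stage adds noise of variance sigma^2 to both
    mark and space levels, then the reshaping nonlinearity (steepness gamma,
    an assumed parameter) compresses the accumulated variance."""
    var1 = var0 = 0.0
    for _ in range(stages):
        var1 += sigma ** 2
        var0 += sigma ** 2
        var1 /= gamma   # reshaping suppresses noise on the flat parts
        var0 /= gamma   # of the decision transfer function
    return (mu1 - mu0) / (math.sqrt(var1) + math.sqrt(var0))
```

Increasing `gamma` (stronger reshaping) raises the cascade Q-factor, which mirrors the abstract's point that more nonlinearity can compensate for a worse noise figure.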
An SIRS model with a nonlinear incidence rate
International Nuclear Information System (INIS)
Jin Yu; Wang, Wendi; Xiao Shiwu
2007-01-01
The global dynamics of an SIRS model with a nonlinear incidence rate is investigated. We establish a threshold for a disease to be extinct or endemic, analyze the existence and asymptotic stability of equilibria, and verify the existence of bistable states, i.e., a stable disease free equilibrium and a stable endemic equilibrium or a stable limit cycle. In particular, we find that the model admits stability switches as a parameter changes. We also investigate the backward bifurcation, the Hopf bifurcation and Bogdanov-Takens bifurcation and obtain the Hopf bifurcation criteria and Bogdanov-Takens bifurcation curves, which are important for making strategies for controlling a disease
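Nonlinear incidence rates of the kind studied here are easy to explore numerically. The sketch below Euler-integrates an SIRS system with one common saturating choice, beta*S*I/(1 + alpha*I^2); the functional form and all parameter values are illustrative, not the paper's:

```python
def sirs_step(S, I, R, dt, beta=1.8, alpha=1.0, mu=0.1, gamma_rec=0.5, delta=0.05):
    """One Euler step of an SIRS model with saturating nonlinear incidence
    beta*S*I/(1 + alpha*I**2). mu = birth/death rate, gamma_rec = recovery
    rate, delta = loss of immunity (all values assumed for illustration)."""
    new_inf = beta * S * I / (1 + alpha * I ** 2)
    dS = mu * (1 - S) - new_inf + delta * R
    dI = new_inf - (mu + gamma_rec) * I
    dR = gamma_rec * I - (mu + delta) * R
    return S + dt * dS, I + dt * dI, R + dt * dR

# integrate to (near) steady state from an interior initial condition
S, I, R = 0.9, 0.1, 0.0
for _ in range(20000):
    S, I, R = sirs_step(S, I, R, dt=0.01)
```

With these parameters the basic reproduction-style ratio beta/(mu + gamma_rec) is 3, so the trajectory settles at an endemic equilibrium rather than the disease-free state, the kind of threshold behaviour the abstract analyses.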
Malaria model with periodic mosquito birth and death rates.
Dembele, Bassidy; Friedman, Avner; Yakubu, Abdul-Aziz
2009-07-01
In this paper, we introduce a model of malaria, a disease that involves a complex life cycle of parasites, requiring both human and mosquito hosts. The novelty of the model is the introduction of periodic coefficients into the system of one-dimensional equations, which account for the seasonal variations (wet and dry seasons) in the mosquito birth and death rates. We define a basic reproduction number R(0) that depends on the periodic coefficients and prove that if R(0) < 1 then the disease dies out, whereas if R(0) > 1 then the disease is endemic and may even be periodic.
Mathematical model for predicting molecular-beam epitaxy growth rates for wafer production
International Nuclear Information System (INIS)
Shi, B.Q.
2003-01-01
An analytical mathematical model for predicting molecular-beam epitaxy (MBE) growth rates is reported. The mathematical model solves the mass-conservation equation for liquid sources in conical crucibles and predicts the growth rate by taking into account the effect of growth source depletion on the growth rate. Assumptions made for deducing the analytical model are discussed. The model derived contains only one unknown parameter, the value of which can be determined by using data readily available to MBE growers. Procedures are outlined for implementing the model in MBE production of III-V compound semiconductor device wafers. Results from use of the model to obtain targeted layer compositions and thickness of InP-based heterojunction bipolar transistor wafers are presented
Event-Based Conceptual Modeling
DEFF Research Database (Denmark)
Bækgaard, Lars
The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The general event concept can be used to guide systems analysis and design and to improve modeling approaches.
Yagasaki, Y.; Shirato, Y.
2014-08-01
Future potentials of the sequestration of soil organic carbon (SOC) in agricultural lands in Japan were estimated using a simulation system we recently developed to simulate SOC stock change at country-scale under varying land-use change, climate, soil, and agricultural practices, in a spatially explicit manner. Simulation was run from 1970 to 2006 with historical inventories, and subsequently to 2020 with future scenarios of agricultural activity comprised of various agricultural policy targets advocated by the Japanese government. Furthermore, the simulation was run subsequently until 2100 while forcing no temporal changes in land-use and agricultural activity to investigate duration and course of SOC stock change at country scale. A scenario with an increased rate of organic carbon input to agricultural fields by intensified crop rotation in combination with the suppression of conversion of agricultural lands to other land-use types was found to have a greater reduction of CO2 emission by enhanced soil carbon sequestration, but only under a circumstance in which the converted agricultural lands will become settlements that were considered to have a relatively lower rate of organic carbon input. The size of relative reduction of CO2 emission in this scenario was comparable to that in another contrasting scenario (business-as-usual scenario of agricultural activity) in which a relatively lower rate of organic matter input to agricultural fields was assumed in combination with an increased rate of conversion of the agricultural fields to unmanaged grasslands through abandonment. Our simulation experiment clearly demonstrated that net-net-based accounting on SOC stock change, defined as the differences between the emissions and removals during the commitment period and the emissions and removals during a previous period (base year or base period of Kyoto Protocol), can be largely influenced by variations in future climate. Whereas baseline-based accounting, defined
Kalman Filter or VAR Models to Predict Unemployment Rate in Romania?
Directory of Open Access Journals (Sweden)
Simionescu Mihaela
2015-06-01
Full Text Available This paper brings to light an economic problem that frequently appears in practice: For the same variable, several alternative forecasts are proposed, yet the decision-making process requires the use of a single prediction. Therefore, a forecast assessment is necessary to select the best prediction. The aim of this research is to propose some strategies for improving the unemployment rate forecast in Romania by conducting a comparative accuracy analysis of unemployment rate forecasts based on two quantitative methods: Kalman filter and vector-auto-regressive (VAR) models. The first method considers the evolution of unemployment components, while the VAR model takes into account the interdependencies between the unemployment rate and the inflation rate. According to the Granger causality test, the inflation rate in the first difference is a cause of the unemployment rate in the first difference, these data sets being stationary. For the unemployment rate forecasts for 2010-2012 in Romania, the VAR models (in all variants of VAR simulations) produced more accurate predictions than the Kalman filter based on two state-space models for all accuracy measures. According to mean absolute scaled error, the dynamic-stochastic simulations used in predicting unemployment based on the VAR model are the most accurate. Another strategy for improving the initial forecasts based on the Kalman filter used the adjusted unemployment data transformed by the application of the Hodrick-Prescott filter. However, the use of VAR models rather than different variants of the Kalman filter methods remains the best strategy in improving the quality of the unemployment rate forecast in Romania. The explanation of these results is related to the fact that the interaction of unemployment with inflation provides useful information for predictions of the evolution of unemployment related to its components (i.e., natural unemployment and the cyclical component).
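The Kalman-filter side of such a comparison can be illustrated with the simplest state-space form, a local-level model that produces one-step-ahead forecasts. The noise variances below are assumptions; in an application like the paper's they would be estimated:

```python
def local_level_kalman(y, q=0.1, r=1.0):
    """One-step-ahead forecasts from a local-level state-space model: the
    level follows a random walk with variance q, and observations add noise
    with variance r (both values assumed here, not estimated)."""
    a, p = y[0], 1.0                 # initial level mean and variance
    forecasts = []
    for obs in y:
        forecasts.append(a)          # predicted next observation = current level
        f = p + r                    # prediction-error variance
        k = p / f                    # Kalman gain
        a = a + k * (obs - a)        # update the level with the new observation
        p = p * (1.0 - k) + q        # filtered variance, then add state noise
    return forecasts
```

Scoring these forecasts against a competing VAR's with MAE or mean absolute scaled error is exactly the kind of accuracy comparison the abstract performs.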
A Comparison of Moment Rates for the Eastern Mediterranean Region from Competitive Kinematic Models
Klein, E. C.; Ozeren, M. S.; Shen-Tu, B.; Galgana, G. A.
2017-12-01
Relatively continuous, complex, and long-lived episodes of tectonic deformation gradually shaped the lithosphere of the eastern Mediterranean region into its present state. This large geodynamically interconnected and seismically active region absorbs, accumulates and transmits strains arising from stresses associated with: (1) steady northward convergence of the Arabian and African plates; (2) differences in lithospheric gravitational potential energy; and (3) basal tractions exerted by subduction along the Hellenic and Cyprus Arcs. Over the last twenty years, numerous kinematic models have been built using a variety of assumptions to take advantage of the extensive and dense GPS observations made across the entire region resulting in a far better characterization of the neotectonic deformation field than ever previously achieved. In this study, three separate horizontal strain rate field solutions obtained from three, region-wide, GPS only based kinematic models (i.e., a regional block model, a regional continuum model, and global continuum model) are utilized to estimate the distribution and uncertainty of geodetic moment rates within the eastern Mediterranean region. The geodetic moment rates from each model are also compared with seismic moment release rates gleaned from historic earthquake data. Moreover, kinematic styles of deformation derived from each of the modeled horizontal strain rate fields are examined for their degree of correlation with earthquake rupture styles defined by proximal centroid moment tensor solutions. This study suggests that significant differences in geodetically obtained moment rates from competitive kinematic models may introduce unforeseen bias into regularly updated, geodetically constrained, regional seismic hazard assessments.
Estimation of rates-across-sites distributions in phylogenetic substitution models.
Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J
2003-10-01
Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
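The equal-probability discrete gamma referred to above is commonly implemented by representing each of k equiprobable quantile slices of a mean-1 gamma by the median of that slice, then rescaling to unit mean. A sketch (the median representative and rescaling are one common convention; category means are another):

```python
from scipy.stats import gamma as gamma_dist

def discrete_gamma_rates(alpha, k):
    """Equal-probability discrete gamma rate categories for a Gamma
    distribution with shape alpha and mean 1 (scale = 1/alpha).
    Each category is represented by the median of its quantile slice,
    and the rates are rescaled so their mean is exactly 1."""
    mids = [(2 * i + 1) / (2.0 * k) for i in range(k)]   # slice medians
    rates = gamma_dist.ppf(mids, a=alpha, scale=1.0 / alpha)
    return rates / rates.mean()
```

Raising k toward the large category counts used in the article makes the discrete approximation flexible enough to probe departures from the gamma shape itself.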
DEFF Research Database (Denmark)
ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro
2010-01-01
Probabilistic trust has been adopted as an approach to taking security sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume fixed behaviour for the principals or incorporate the notion of 'decay' as an ad hoc approach to cope with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed behaviour assumption which is considered the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well-known Beta trust model with the decay principle in terms of the estimation precision.
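For context, the Beta trust model with decay that the HMM approach is contrasted against fits in a few lines; the placement of the decay factor is one common convention, not necessarily the exact variant evaluated in the paper:

```python
def beta_trust(outcomes, decay=1.0):
    """Beta-model trust estimate E = (s + 1) / (s + f + 2) over a sequence
    of interaction outcomes (1 = cooperate, 0 = defect), with exponential
    decay applied to older observations. decay = 1.0 means no forgetting;
    decay < 1 weights recent behaviour more heavily."""
    s = f = 0.0
    for o in outcomes:
        s *= decay
        f *= decay
        if o:
            s += 1.0
        else:
            f += 1.0
    return (s + 1.0) / (s + f + 2.0)
```

The decay knob is exactly the ad hoc element the abstract criticises: it lets the estimate track behaviour changes, but the right value must be guessed rather than learned, which is what the HMM formulation replaces.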
Inverse modelling of radionuclide release rates using gamma dose rate observations
Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian
2015-04-01
Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase of regional radionuclide activity concentrations that results from nuclear accidents is subject to many uncertainties. One of the most significant is the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, among other factors, on the availability, reliability and spatio-temporal resolution of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The
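At its core, an inversion of this kind solves a regularised linear least-squares problem linking unknown release rates to observations through a source-receptor matrix supplied by the dispersion model. A minimal Tikhonov sketch (operational systems add temporal smoothness terms, observation weighting and positivity constraints, all omitted here):

```python
import numpy as np

def estimate_source_term(M, y, lam=1e-2):
    """Solve x = argmin ||M x - y||^2 + lam * ||x||^2, where M maps
    time-resolved release rates x to observed activity concentrations or
    gamma dose rates y, and lam is an assumed regularisation strength
    that keeps the ill-posed inversion stable."""
    n = M.shape[1]
    return np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ y)
```

Denser gamma dose rate networks effectively add rows to `M`, which is the abstract's argument for why they widen the basis of the inversion.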
Anselmino, Matteo; Scarsoglio, Stefania; Camporeale, Carlo; Saglietto, Andrea; Gaita, Fiorenzo; Ridolfi, Luca
2015-01-01
Despite the routine prescription of rate control therapy for atrial fibrillation (AF), clinical evidence demonstrating a heart rate target is lacking. Aim of the present study was to run a mathematical model simulating AF episodes with a different heart rate (HR) to predict hemodynamic parameters for each situation. The lumped model, representing the pumping heart together with systemic and pulmonary circuits, was run to simulate AF with HR of 50, 70, 90, 110 and 130 bpm, respectively. Left ventricular pressure increased by 57%, from 33.92±37.56 mmHg to 53.15±47.56 mmHg, and mean systemic arterial pressure increased by 27%, from 82.66±14.04 mmHg to 105.3±7.6 mmHg, at the 50 and 130 bpm simulations, respectively. Stroke volume (from 77.45±8.50 to 39.09±8.08 mL), ejection fraction (from 61.10±4.40 to 39.32±5.42%) and stroke work (SW, from 0.88±0.04 to 0.58±0.09 J) decreased by 50, 36 and 34%, at the 50 and 130 bpm simulations, respectively. In addition, oxygen consumption indexes (rate pressure product - RPP, tension time index per minute - TTI/min, and pressure volume area per minute - PVA/min) increased from the 50 to the 130 bpm simulation, respectively, by 186% (from 5598±1939 to 15995±3219 mmHg/min), 56% (from 2094±265 to 3257±301 mmHg s/min) and 102% (from 57.99±17.90 to 117.4±26.0 J/min). In fact, left ventricular efficiency (SW/PVA) decreased from 80.91±2.91% at 50 bpm to 66.43±3.72% at the 130 bpm HR simulation. Awaiting compulsory direct clinical evidences, the present mathematical model suggests that lower HRs during permanent AF relates to improved hemodynamic parameters, cardiac efficiency, and lower oxygen consumption.
A model for predicting wear rates in tooth enamel.
Borrero-Lopez, Oscar; Pajares, Antonia; Constantino, Paul J; Lawn, Brian R
2014-09-01
It is hypothesized that wear of enamel is sensitive to the presence of sharp particulates in oral fluids and masticated foods. To this end, a generic model for predicting wear rates in brittle materials is developed, with specific application to tooth enamel. Wear is assumed to result from an accumulation of elastic-plastic micro-asperity events. Integration over all such events leads to a wear rate relation analogous to Archard's law, but with allowance for variation in asperity angle and compliance. The coefficient K in this relation quantifies the wear severity, with an arbitrary distinction between 'mild' wear (low K) and 'severe' wear (high K). Data from the literature and in-house wear-test experiments on enamel specimens in lubricant media (water, oil) with and without sharp third-body particulates (silica, diamond) are used to validate the model. Measured wear rates can vary over several orders of magnitude, depending on contact asperity conditions, accounting for the occurrence of severe enamel removal in some human patients (bruxing). Expressions for the depth removal rate and number of cycles to wear down occlusal enamel in the low-crowned tooth forms of some mammals are derived, with tooth size and enamel thickness as key variables. The role of 'hard' versus 'soft' food diets in determining evolutionary paths in different hominin species is briefly considered. A feature of the model is that it does not require recourse to specific material removal mechanisms, although processes involving microplastic extrusion and microcrack coalescence are indicated. Published by Elsevier Ltd.
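The Archard-style relation the model generalises, and the derived cycles-to-wear-through quantity, can be sketched directly. All parameter names and values below are illustrative, not the paper's calibrated ones:

```python
def archard_wear_depth(K, pressure, hardness, sliding_per_cycle, cycles):
    """Archard-type wear depth h = K * p * s / H accumulated over `cycles`
    chewing cycles. p (contact pressure) and H (hardness) must share units;
    s is sliding distance per cycle, so h comes out in the units of s.
    K is the dimensionless wear severity coefficient from the abstract."""
    return K * pressure * sliding_per_cycle * cycles / hardness

def cycles_to_wear_through(K, pressure, hardness, sliding_per_cycle, enamel_thickness):
    """Number of cycles needed to wear down an enamel cap of given thickness."""
    per_cycle = archard_wear_depth(K, pressure, hardness, sliding_per_cycle, 1)
    return enamel_thickness / per_cycle
```

Because the wear depth is linear in K, the orders-of-magnitude spread in measured K between 'mild' and 'severe' regimes translates directly into an orders-of-magnitude spread in tooth lifetime.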
Modeling Guru: Knowledge Base for NASA Modelers
Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.
2009-05-01
Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the
Characteristics of quantum dash laser under the rate equation model framework
Khan, Mohammed Zahed Mustafa
2010-09-01
The authors present a numerical model to study the carrier dynamics of InAs/InP quantum dash (QDash) lasers. The model is based on single-state rate equations, which incorporate both the homogeneous and the inhomogeneous broadening of the lasing spectra. The numerical technique also considers the unique features of the QDash gain medium. This model has been applied successfully to analyze the laser spectra of QDash lasers. ©2010 IEEE.
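The single-state rate-equation framework referred to in this abstract can be illustrated with a generic two-variable laser rate-equation model (carrier and photon densities). The sketch below uses invented, illustrative parameter values and omits the QDash-specific features the paper adds (homogeneous and inhomogeneous broadening); it is not the authors' model.

```python
# Minimal sketch of single-state laser rate equations with made-up
# parameters; NOT the paper's QDash-specific formulation.
q = 1.602e-19  # electron charge (C)

def simulate(I, steps=200000, dt=1e-13):
    """Euler-integrate carrier density N and photon density S (per m^3)
    at drive current I (A) until an approximate steady state."""
    V, Gamma = 1e-16, 0.1          # active volume (m^3), confinement factor
    tau_n, tau_p = 1e-9, 2e-12     # carrier / photon lifetimes (s)
    g0, N_tr = 1e-12, 1e24         # gain coefficient (m^3/s), transparency density
    beta = 1e-4                    # spontaneous-emission coupling
    N, S = 0.0, 0.0
    for _ in range(steps):
        g = g0 * (N - N_tr)        # linear gain (1/s)
        dN = I / (q * V) - N / tau_n - g * S
        dS = Gamma * g * S - S / tau_p + beta * N / tau_n
        N += dN * dt
        S += dS * dt
    return N, S
```

Sweeping the drive current across threshold (roughly 0.1 A for these illustrative parameters) reproduces the qualitative lasing turn-on: below threshold the photon density stays at the weak spontaneous-emission level, above it the carrier density clamps and the photon density jumps by orders of magnitude.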
Structure-Based Turbulence Model
National Research Council Canada - National Science Library
Reynolds, W
2000-01-01
.... Maire carried out this work as part of his PhD research. During the award period we began to explore ways to simplify the structure-based modeling so that it could be used in repetitive engineering calculations...
International Nuclear Information System (INIS)
Zhang, Xinliang; Tan, Yonghong; Su, Miyong; Xie, Yangqiu
2010-01-01
This paper presents a method of the identification for the rate-dependent hysteresis in the piezoelectric actuator (PEA) by use of neural networks. In this method, a special hysteretic operator is constructed from the Prandtl-Ishlinskii (PI) model to extract the changing tendency of the static hysteresis. Then, an expanded input space is constructed by introducing the proposed hysteretic operator to transform the multi-valued mapping of the hysteresis into a one-to-one mapping. Thus, a feedforward neural network is applied to the approximation of the rate-independent hysteresis on the constructed expanded input space. Moreover, in order to describe the rate-dependent performance of the hysteresis, a special hybrid model, which is constructed by a linear auto-regressive exogenous input (ARX) sub-model preceded with the previously obtained neural network based rate-independent hysteresis sub-model, is proposed. For the compensation of the effect of the hysteresis in PEA, the PID feedback controller with a feedforward hysteresis compensator is developed for the tracking control of the PEA. Thus, a corresponding inverse model based on the proposed modeling method is developed for the feedforward hysteresis compensator. Finally, both simulations and experimental results on piezoelectric actuator are presented to verify the effectiveness of the proposed approach for the rate-dependent hysteresis.
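The Prandtl-Ishlinskii construction referred to above builds hysteresis from elementary play (backlash) operators. A minimal discrete-time sketch follows; the thresholds and weights are illustrative placeholders, not values identified from a real piezoelectric actuator, and the neural-network and ARX extensions of the paper are not modeled.

```python
def play_operator(u, r, y0=0.0):
    """Backlash (play) operator of threshold r, the building block of the
    Prandtl-Ishlinskii hysteresis model, applied to an input sequence u."""
    y, out = y0, []
    for uk in u:
        # output follows the input only once the input has moved by more than r
        y = max(uk - r, min(uk + r, y))
        out.append(y)
    return out

def pi_model(u, thresholds, weights, y0=0.0):
    """Discrete PI hysteresis model: weighted superposition of play operators."""
    branches = [play_operator(u, r, y0) for r in thresholds]
    return [sum(w * b[k] for w, b in zip(weights, branches))
            for k in range(len(u))]
```

For input 0 → 2 → 0 with threshold 1, the play output is 0 → 1 → 1: it does not return to zero with the input, which is exactly the memory effect (multi-valued input-output map) that the paper's hysteretic operator is designed to disentangle.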
Event-Based Conceptual Modeling
DEFF Research Database (Denmark)
Bækgaard, Lars
2009-01-01
The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...
Modelling the bioconversion of cellulose into microbial products: rate limitations
Energy Technology Data Exchange (ETDEWEB)
Asenjo, J A
1984-12-01
The direct bioconversion of cellulose into microbial products, carried out as a simultaneous saccharification and fermentation, has a strong effect on the rates of cellulose degradation because cellobiose and glucose inhibition of the reaction are circumvented. A general mathematical model of the kinetics of this bioconversion has been developed. Its use in representing aerobic systems and in the analysis of the kinetic limitations has been investigated. Simulations have been carried out to find the rate-limiting steps in slow fermentations and in rapid ones, as determined by the specific rate of product formation. The requirements for solubilising and depolymerising enzyme activities (cellulase and cellobiase) in these systems have been determined. The activities that have been obtained for fungal cellulases are adequate for the kinetic requirements of the fastest fermentative strains. The results also show that for simultaneous bioconversions where strong cellobiose and glucose inhibition is overcome, no additional cellobiase is necessary to increase the rate of product formation. These results are useful for the selection of cellulolytic micro-organisms and in the determination of enzymes to be cloned in recombinant strains. 17 references.
Stochastic model of microcredit interest rate in Morocco
Directory of Open Access Journals (Sweden)
Ghita Bennouna
2016-11-01
Access to microcredit can have a beneficial effect on the well-being of low-income households excluded from the traditional banking system. It allows this population to receive affordable financial services that help them meet their needs and improve their living conditions. However, to provide access to credit, microfinance institutions must fulfill not only their social mission but also a commercial and financial mission that enables the institution to sustain itself and become self-sufficient. To this end, MFIs (microfinance institutions) must apply an interest rate that covers their costs and risk while generating profits; microentrepreneurs, in turn, need to ensure the profitability of their activities. This paper presents the microfinance sector in Morocco. It then focuses on the interest rates applied by Moroccan microfinance institutions and provides a comparative study between Morocco and other comparable countries in terms of the interest rates charged to borrowers. Finally, the article presents a stochastic model of the microcredit interest rate built on random loan repayment periods and on a real example of the loan program of a microfinance institution in Morocco.
Crash rates analysis in China using a spatial panel model
Directory of Open Access Journals (Sweden)
Wonmongo Lacina Soro
2017-10-01
The consideration of spatial externalities in traffic safety analysis is of paramount importance for the success of road safety policies. Yet nearly all spatial dependence studies of crash rates are performed within the framework of single-equation spatial cross-sectional models. The present study extends the spatial cross-sectional scheme to a spatial fixed-effects panel model estimated using the maximum likelihood method. The spatial units are the 31 administrative regions of mainland China over the period 2004–2013. The presence of neighborhood effects is evidenced through the Moran's I statistic. Consistent with previous studies, the analysis reveals that omitting spatial effects in traffic safety analysis is likely to bias the estimation results. The spatial and error lags are all positive and statistically significant, suggesting similar crash rate patterns in neighboring regions. Some other explanatory variables, such as freight traffic, the length of paved roads, and the population aged 65 and above, are associated with higher crash rates, while the opposite trend is observed for the Gross Regional Product, the urban unemployment rate, and passenger traffic.
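The global Moran's I statistic used in studies like this one to detect neighborhood effects is straightforward to compute. A minimal sketch given a value vector and a symmetric binary spatial weight matrix (row standardization and the inference/permutation machinery of a full analysis are omitted):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x and spatial weight matrix W (zero diagonal):
    I = (n / sum(W)) * (z' W z) / (z' z), with z the mean-centered values."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return len(x) / W.sum() * (z @ W @ z) / (z @ z)
```

On a chain of six regions, a clustered pattern such as [1, 1, 1, 5, 5, 5] gives a positive I (neighbors resemble each other), while an alternating pattern gives a negative I, matching the interpretation of positive spatial lags in the abstract.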
High Data Rate Optical Wireless Communications Based on Ultraviolet Band
Sun, Xiaobin
2017-10-01
Optical wireless communication systems based on the ultraviolet (UV) band have many inherent advantages, such as low background solar radiation and low device dark noise. They also impose only modest pointing, acquisition, and tracking (PAT) requirements, because of strong atmospheric scattering by molecules and aerosols. These advantages are driving efforts to exploit the UV band for high-data-rate communication links with relaxed PAT constraints, such as diffuse line-of-sight (diffuse-LOS) and non-line-of-sight (NLOS) links. However, the responsivity of photodetectors in the UV range is far lower than in the visible range, and high-power UV transmitters that can be easily modulated are still under investigation; these factors make it hard to realize a high-data-rate diffuse-LOS or NLOS UV communication link. To achieve such a link with current devices, this thesis presents efficient modulation schemes and currently available devices. In addition, an ultraviolet-B (UVB) communication link is demonstrated using quadrature amplitude modulation (QAM) orthogonal frequency-division multiplexing (OFDM). The demonstration is based on a 294-nm UVB light-emitting diode (UVB-LED) with a full-width at half-maximum (FWHM) of 9 nm. Based on the measured L-I-V curve, the bias voltage was set to 7 V to maximize the AC amplitude and thus obtain a high signal-to-noise-ratio (SNR) channel; the light output power at this bias is 190 μW. A silica-gel lens on top of the LED concentrates the beam. A -3-dB bandwidth of 29 MHz was measured, and a high-speed near-solar-blind communication link with a data rate of 71 Mbit/s was achieved using 8-QAM-OFDM at perfect alignment, and 23.6 Mbit/s using 2-QAM-OFDM when the angle subtended by the pointing directions of the UVB-LED and the photodetector (PD) is 12 degrees, thus establishing a diffuse-LOS link.
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Computer Based Modelling and Simulation - Modelling Deterministic Systems. N K Srinivasan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 3, March 2001, pp. 46-54.
Fair premium rate of the deposit insurance system based on banks' creditworthiness
Yoshino, Naoyuki; Taghizadeh-Hesary, Farhad; Nili, Farhad
2017-01-01
Purpose: Deposit insurance is a key element in modern banking, as it guarantees the financial safety of deposits at depository financial institutions. It is necessary to have at least a dual fair premium rate system based on the creditworthiness of financial institutions, as a single premium rate for all banks creates a moral hazard. In this paper, we develop a theoretical as well as an empirical model for calculating dual fair premium rates. Design/methodology/approach: Our...
Forecasting the mortality rates using Lee-Carter model and Heligman-Pollard model
Ibrahim, R. I.; Ngataman, N.; Abrisam, W. N. A. Wan Mohd
2017-09-01
Improvement in life expectancies has driven further declines in mortality. The sustained reduction in mortality rates and its systematic underestimation have been attracting significant interest from researchers in recent years because of their potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Among all forecasting methods, the Lee-Carter model has been widely accepted by the actuarial community, and the Heligman-Pollard model has been widely used by researchers in modelling and forecasting future mortality. This paper therefore focuses on the Lee-Carter and Heligman-Pollard models. The main objective is to investigate how accurately these two models perform on Malaysian data. Since the models involve nonlinear equations that are difficult to solve explicitly, Matrix Laboratory Version 8.0 (MATLAB 8.0) is used to estimate their parameters. An Autoregressive Integrated Moving Average (ARIMA) procedure is applied to forecast the parameters of both models, and the forecasted mortality rates are obtained from these forecasted parameter values. To assess the accuracy of the estimation, the forecasted results are compared against actual mortality data. The results indicate that both models perform better for the male population. For the elderly female population, however, the Heligman-Pollard model seems to underestimate the mortality rates while the Lee-Carter model seems to overestimate them.
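The Lee-Carter model decomposes log mortality rates as ln m(x,t) = a_x + b_x k_t and is classically fitted by a singular value decomposition of the centered log-rate matrix. A minimal fitting sketch follows (the ARIMA forecasting step described in the abstract, and the usual second-stage re-estimation of k_t, are omitted):

```python
import numpy as np

def fit_lee_carter(log_m):
    """Fit ln m(x,t) = a_x + b_x * k_t by SVD.

    log_m: (ages x years) matrix of log mortality rates.
    Returns (a, b, k) normalized so that sum(b) = 1."""
    a = log_m.mean(axis=1)                     # a_x: average age pattern
    Z = log_m - a[:, None]                     # center each age row
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]               # leading rank-1 component
    scale = b.sum()                            # fixes the sign and the scale
    return a, b / scale, k * scale
```

On synthetic data generated exactly from known (a, b, k) with sum(b) = 1 and sum(k) = 0, the fit recovers the parameters up to numerical precision, which is a useful sanity check before applying it to real mortality tables.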
Modelling Counterparty Credit Risk in Czech Interest Rate Swaps
Directory of Open Access Journals (Sweden)
Lenka Křivánková
2017-01-01
According to the Basel Committee's estimate, three quarters of counterparty credit risk losses during the financial crisis in 2008 originated from credit valuation adjustment losses and not from actual defaults. Therefore, from 2015, the Third Basel Accord (EU, 2013a and EU, 2013b) instructed banks to calculate the capital requirement for the risk of credit valuation adjustment (CVA). Banks try to model CVA so as to meet the prescribed standards while minimizing the impact on their profit. In this paper, we model CVA with methods that comply with those standards and keep the impact on the bank's earnings as small as possible. To do so, a data set of interest rate swaps from 2015 is used. The interest rate term structure is simulated using the Hull-White one-factor model and Monte Carlo methods. Then, the probability of default for each counterparty is constructed. A safe level of CVA is reached even though the calculated CVA is lower than the CVA previously used by the bank. This allows a reduction of capital requirements for banks.
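The Hull-White one-factor short-rate model simulated in such studies follows dr = (theta(t) - a*r) dt + sigma dW. A minimal Euler Monte Carlo sketch with a constant theta (calibrating theta(t) to the initial term structure, and the CVA aggregation over exposure profiles and default probabilities, are omitted; all parameter values below are illustrative):

```python
import numpy as np

def hull_white_paths(r0, a, sigma, theta, T, n_steps, n_paths, seed=0):
    """Euler simulation of dr = (theta - a*r) dt + sigma dW.
    Returns the terminal short rate on each path."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        r += (theta - a * r) * dt + sigma * dW  # mean reversion toward theta/a
        # (discounting / exposure valuation would be accumulated here)
    return r
```

With constant theta the process mean-reverts to theta/a, so the sample mean of the terminal rates should sit near that level for long horizons; this is a quick correctness check before layering swap valuation on top of the simulated curve.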
Zhu, Mingshan; Zeng, Bixin
2015-03-01
In this paper, we designed an oxygen saturation, heart rate, and respiration rate monitoring system based on an Android smartphone; the physiological signals are acquired by an MSP430 microcontroller and transmitted over a Bluetooth module.
Directory of Open Access Journals (Sweden)
O. M. Pshinko
2016-12-01
Purpose. The paper aims to develop rating models and related information technologies for strategic planning of the development of administrative and territorial units, as well as for multi-criteria control of the operation of inhomogeneous multiparameter objects. Methodology. To solve the problems of strategic planning of administrative and territorial development and of managing heterogeneous classes of controlled objects, a set of coordinated methods is used: multi-criteria analysis of the properties of the objects of planning and management, diagnostics of state parameters, and forecasting and management of complex systems of different classes, whose states are estimated by sets of heterogeneous quality indicators and represented by individual models of the operation process. A new information technology is proposed and created to implement the strategic planning and management tasks; it uses procedures for solving typical tasks that are implemented in MS SQL Server. Findings. A new approach to developing models for the analysis and management of classes of complex systems based on ratings has been proposed. Rating models for the analysis of multicriteria, multiparameter systems have been obtained; these systems are managed on the basis of current and predicted state parameters under a non-uniform distribution of resources. A procedure for analyzing the sensitivity of the rating model to changes in the parameters of the inhomogeneous distribution of resources has been developed. An information technology for strategic planning and management of heterogeneous classes of objects based on the rating model has been created. Originality. The article proposes a new approach that uses rating indicators as a general model for strategic planning of the development and management of heterogeneous objects characterized by sets of parameters measured on different scales.
Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.
Directory of Open Access Journals (Sweden)
Kezi Yu
In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
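The Chinese restaurant process underlying HDP-style mixture models can be sampled in a few lines. This sketch draws a random partition from a plain CRP prior with concentration alpha; the finite-capacity variant (CRFC) and the mixture likelihoods used for actual FHR classification are omitted, so this only illustrates the prior's "rich get richer" clustering behavior:

```python
import random

def crp_partition(n, alpha, seed=0):
    """Sample a table assignment for n customers from a Chinese restaurant
    process with concentration alpha. Returns (assignment, counts)."""
    rng = random.Random(seed)
    counts = []        # customers per table
    assignment = []    # table index per customer
    for i in range(n):
        # new table with probability alpha / (i + alpha),
        # otherwise an occupied table with probability proportional to its count
        r = rng.uniform(0, i + alpha)
        if r < alpha or not counts:
            counts.append(1)
            assignment.append(len(counts) - 1)
        else:
            r -= alpha
            for t, c in enumerate(counts):
                if r < c:
                    counts[t] += 1
                    assignment.append(t)
                    break
                r -= c
    return assignment, counts
```

The expected number of occupied tables grows only logarithmically with n, which is what lets DP-based mixtures infer the number of clusters from data instead of fixing it in advance.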
Video rate morphological processor based on a redundant number representation
Kuczborski, Wojciech; Attikiouzel, Yianni; Crebbin, Gregory A.
1992-03-01
This paper presents a video rate morphological processor for automated visual inspection of printed circuit boards, integrated circuit masks, and other complex objects. Inspection algorithms are based on gray-scale mathematical morphology. The hardware complexity of the known methods for real-time implementation of gray-scale morphology, the umbra transform and threshold decomposition, has prompted us to propose a novel technique that applies an arithmetic system without carry propagation. After considering several arithmetic systems, a redundant number representation has been selected for implementation. Two options are analyzed here. The first is a pure signed digit number representation (SDNR) with a base of 4. The second option is a combination of the base-2 SDNR (to represent gray levels of images) and the conventional two's complement code (to represent gray levels of structuring elements). The operating principle of the morphological processor is based on the concept of the digit-level systolic array. Individual processing units and small memory elements create a pipeline; the memory elements store the current image windows (kernels). All operation primitives of the processing units apply a unified direction of digit processing: most significant digit first (MSDF). The implementation technology is based on field programmable gate arrays by Xilinx. The paper justifies a new approach to logic design: decomposition of Boolean functions instead of Boolean minimization.
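Gray-scale morphology with a flat structuring element reduces to local maxima (dilation) and minima (erosion) over a sliding window, which is the operation the digit-serial pipeline above evaluates in hardware. A conventional-arithmetic reference sketch (the SDNR/MSDF arithmetic itself is not modeled; edge handling by replication is an arbitrary choice here):

```python
import numpy as np

def gray_dilate(img, se):
    """Flat gray-scale dilation of a 2-D image by a boolean structuring
    element se: each output pixel is the max of img over the se-shaped
    neighborhood. Erosion is the same with max replaced by min."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,), (w // 2,)), mode="edge")
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            window = pad[i:i + h, j:j + w]
            out[i, j] = window[se].max()   # max over the active se positions
    return out
```

Dilating a single bright pixel by a full 3x3 element spreads its value to all neighbors, the textbook behavior used to close small gaps before defect detection on PCB images.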
Dose rate effect models for biological reaction to ionizing radiation in human cell lines
International Nuclear Information System (INIS)
Magae, Junji; Ogata, Hiromitsu
2008-01-01
, suggesting that the dose rate effect predicted by the MOE model depends on the DNA repair system. The dose rate effect in a resting normal fibroblast cultured in serum-depleted medium also followed the MOE model. In contrast, a dose rate effect was observed in cell lines deficient in DNA repair when they were cultured for more than several months. This dose rate effect did not fit the MOE model and instead followed a model based on the elimination of damaged cells. In conclusion, the dose rate effect on growth inhibition and micronucleus formation in cultured cell lines depends on dose rate and irradiation time: in the higher range of dose rates and for short irradiation times, the biological effect is determined by dose rather than dose rate, and no dose rate effect is observed. In the middle range of dose rates and irradiation times, the dose rate effect depends on the DNA repair system and follows the MOE model. In the low range of dose rates and for irradiation times longer than several months, the dose rate effect mainly depends on the elimination of damaged cells, and the biological effect is determined by dose rate rather than total dose. Our results suggest that dose rate and irradiation time should be included in the estimation of long-term radiation risk at low dose rates. (author)
BIM-Based Decision Support System for Material Selection Based on Supplier Rating
Directory of Open Access Journals (Sweden)
Abiola Akanmu
2015-12-01
Material selection is a delicate process, typically hinging on a number of factors, which can be either cost- or environment-related. The process becomes more complicated when designers are faced with several material options for building elements, and each option can be supplied by different suppliers whose selection criteria may affect the budgetary and environmental requirements of the project. This paper presents the development of a decision support system based on the integration of building information models, a modified harmony search algorithm, and supplier performance ratings. The system is capable of producing the cost and environmental implications of different material combinations or building designs. A case study is presented to illustrate the functionality of the developed system.
Development of a generic data base for failure rate
International Nuclear Information System (INIS)
Mosleh, A.; Apostolakis, G.
1985-01-01
The data analysis task in a probabilistic risk assessment (PRA) involves the assessment of data needs, the collection of information, and, finally, the analysis of the data to generate estimates for various parameters. This paper describes a framework for developing a data base for component failure rates and presents mathematical methods for the analysis of various types of information. The discussion is focused on the development of generic data bases used in PRAs. For plants without an operating history, the generic distributions are used directly to calculate component unavailability. In the case of plants that have operated for some time, the generic distributions can be used as priors in Bayesian analysis and, thus, specialized by plant-specific experience
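The specialization of a generic failure-rate distribution by plant-specific experience, as described above, can be sketched with the standard gamma-Poisson conjugate pair for a constant failure rate. The prior parameters below are illustrative placeholders, not values from any published generic data base:

```python
def update_failure_rate(alpha, beta, n_failures, hours):
    """Bayesian update of a Gamma(alpha, beta) prior on a constant failure
    rate (beta in component-hours) with plant-specific evidence:
    n_failures observed in `hours` of operation.
    Returns the posterior (alpha, beta) and posterior mean rate per hour."""
    a_post = alpha + n_failures          # shape absorbs the failure count
    b_post = beta + hours                # rate absorbs the exposure time
    return a_post, b_post, a_post / b_post
```

For example, a hypothetical generic prior Gamma(2, 1e5) (mean 2e-5 per hour) combined with 3 failures in 2e5 component-hours yields a posterior mean of 5/3e5, pulled between the generic estimate and the plant-specific rate 1.5e-5, exactly the blending of generic and plant data the paper formalizes.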
An interval-valued reliability model with bounded failure rates
DEFF Research Database (Denmark)
Kozine, Igor; Krymsky, Victor
2012-01-01
The approach to deriving interval-valued reliability measures described in this paper is distinctive from other imprecise reliability models in that it overcomes the issue of having to impose an upper bound on time to failure. It rests on the presupposition that a constant interval-valued failure rate is known, possibly along with other reliability measures, precise or imprecise. The Lagrange method is used to solve the constrained optimization problem to derive new reliability measures of interest. The obtained results call for an exponential-wise approximation of failure probability density...
An Optimal Commitment Model of Exchange Rate Stabilization
Kyung-Soo Kim
2006-01-01
Recently, East Asian countries that have amassed large US dollar reserves face a growing threat of big losses from a sudden decline in the dollar. This threat evokes the issue of the optimal commitment to exchange rate stabilization once raised by Isard (1995), who interpreted the cost of breaking the parity as the capital gain awarded to speculators in the event the domestic currency is devalued. The only difference in this paper is revaluation. This paper models the central bank's optima...
Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E
2009-07-01
The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 rather than 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models, with respective mean absolute errors relative to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f), and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates, and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by 10, consistently underestimated them; this comparison suggested that setting the divisor for waste in the Scholl Canyon model between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f) the modified model had the lowest absolute mean error with divisors of 1.5 and 2.3, yielding 63 +/- 45% and 57 +/- 47%, respectively. These modified models reduced error and variability substantially, and both have a strong correlation of r = 0.92.
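The Scholl Canyon and LandGEM models are both first-order-decay models of the form Q_n = sum_i k * L0 * (M_i / d) * exp(-k * (n - i)), differing (among other things) in the waste divisor d discussed above. A minimal sketch with illustrative default values for the decay constant k and methane generation potential L0 (actual studies calibrate these per landfill):

```python
import math

def first_order_methane(waste_by_year, k=0.05, L0=170.0, divisor=1.0):
    """Scholl Canyon-style first-order decay: methane generated each year
    (m^3/yr) from annual waste acceptance M_i (Mg/yr).

    divisor=1 is the plain Scholl Canyon form; divisor=10 mimics the
    LandGEM v2.01 variant; intermediate values (e.g. 1.5 or 2.3) correspond
    to the modification explored in the paper."""
    out = []
    for year in range(len(waste_by_year)):
        q = sum(k * L0 * (waste_by_year[i] / divisor) * math.exp(-k * (year - i))
                for i in range(year + 1))
        out.append(q)
    return out
```

A single 1000 Mg deposit with k=0.05 and L0=170 generates k*L0*M = 8500 m^3 in year zero and decays exponentially thereafter; dividing by 10 scales the whole curve down tenfold, which is why LandGEM v2.01 systematically undershoots where Scholl Canyon overshoots.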
Total dose and dose rate models for bipolar transistors in circuit simulation.
Energy Technology Data Exchange (ETDEWEB)
Campbell, Phillip Montgomery; Wix, Steven D.
2013-05-01
The objective of this work is to develop a model for total dose effects in bipolar junction transistors for use in circuit simulation. The components of the model are an electrical model of device performance that includes the effects of trapped charge on device behavior, and a model that calculates the trapped charge densities in a specific device structure as a function of radiation dose and dose rate. Simulations based on this model are found to agree well with measurements on a number of devices for which data are available.
Bishop, Christopher M
2013-02-13
Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
Interest rate modeling post-crisis challenges and approaches
Grbac, Zorana
2015-01-01
Filling a gap in the literature caused by the recent financial crisis, this book provides a treatment of the techniques needed to model and evaluate interest rate derivatives according to the new paradigm for fixed income markets. Concerning this new development, there presently exist only research articles and two books, one of them an edited volume, both being written by researchers working mainly in practice. The aim of this book is to concentrate primarily on the methodological side, thereby providing an overview of the state-of-the-art and also clarifying the link between the new models and the classical literature. The book is intended to serve as a guide for graduate students and researchers as well as practitioners interested in the paradigm change for fixed income markets. A basic knowledge of fixed income markets and related stochastic methodology is assumed as a prerequisite.
Directory of Open Access Journals (Sweden)
Uri Barenholz
Most proteins show changes in level across growth conditions. Many of these changes seem to be coordinated with the specific growth rate rather than the growth environment or the protein function. Although cellular growth rates, gene expression levels and gene regulation have been at the center of biological research for decades, there are only a few models giving a baseline prediction of how the proteome fraction occupied by a gene depends on the specific growth rate. We present a simple model that predicts a widely coordinated increase in the fraction of many proteins out of the proteome, proportionally with the growth rate. The model reveals how passive redistribution of resources, due to active regulation of only a few proteins, can have proteome-wide effects that are quantitatively predictable. Our model provides a potential explanation for why and how such a coordinated response of a large fraction of the proteome to the specific growth rate arises under different environmental conditions. The simplicity of our model can also be useful by serving as a baseline null hypothesis in the search for active regulation. We exemplify the usage of the model by analyzing the relationship between growth rate and proteome composition for the model microorganism E. coli as reflected in recent proteomics data sets spanning various growth conditions. We find that the fraction out of the proteome of a large number of proteins, and from different cellular processes, increases proportionally with the growth rate. Notably, ribosomal proteins, which have been previously reported to increase in fraction with growth rate, are only a small part of this group of proteins. We suggest that, although the fractions of many proteins change with the growth rate, such changes may be partially driven by a global effect, not necessarily requiring specific cellular control mechanisms.
A microscopic model of rate and state friction evolution
Li, Tianyi; Rubin, Allan M.
2017-08-01
Whether rate- and state-dependent friction evolution is primarily slip dependent or time dependent is not well resolved. Although slide-hold-slide experiments are traditionally interpreted as supporting the aging law, implying time-dependent evolution, recent studies show that this evidence is equivocal. In contrast, the slip law yields extremely good fits to velocity step experiments, although a clear physical picture for slip-dependent friction evolution is lacking. We propose a new microscopic model for rate and state friction evolution in which each asperity has a heterogeneous strength, with individual portions recording the velocity at which they became part of the contact. Assuming an exponential distribution of asperity sizes on the surface, the model produces results essentially similar to the slip law, yielding very good fits to velocity step experiments but not improving much the fits to slide-hold-slide experiments. A numerical kernel for the model is developed, and an analytical expression is obtained for perfect velocity steps, which differs from the slip law expression by a slow-decaying factor. By changing the quantity that determines the intrinsic strength, we use the same model structure to investigate aging-law-like time-dependent evolution. Assuming strength to increase logarithmically with contact age, for two different definitions of age we obtain results for velocity step increases significantly different from the aging law. Interestingly, a solution very close to the aging law is obtained if we apply a third definition of age that we consider to be nonphysical. This suggests that under the current aging law, the state variable is not synonymous with contact age.
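The two competing state evolution laws discussed above can be compared directly by integrating them at constant slip speed v: the aging law dθ/dt = 1 - vθ/Dc and the slip law dθ/dt = -(vθ/Dc) ln(vθ/Dc). Both share the steady state θ = Dc/v but differ in their transients (time-dependent versus slip-dependent healing). A minimal Euler sketch with illustrative values:

```python
import math

def evolve_state(v, Dc, theta0, t_end, law="aging", n=100000):
    """Euler-integrate the rate-and-state evolution ODE at constant slip
    speed v (m/s), characteristic distance Dc (m), initial state theta0 (s)."""
    dt = t_end / n
    theta = theta0
    for _ in range(n):
        x = v * theta / Dc            # dimensionless state v*theta/Dc
        if law == "aging":
            dtheta = 1.0 - x          # heals even at v -> 0 (time dependent)
        else:
            dtheta = -x * math.log(x) # slip law: no evolution without slip
        theta += dtheta * dt
    return theta
```

At v = 1e-6 m/s and Dc = 1e-5 m both laws converge to θ = Dc/v = 10 s, but only the aging law predicts θ growing as elapsed time during a true hold (v = 0), which is the crux of the aging-versus-slip debate in the abstract.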
Petrov, Alexander A.
2011-01-01
Context effects in category rating on a 7-point scale are shown to reverse direction depending on feedback. Context (skewed stimulus frequencies) was manipulated between and feedback within subjects in two experiments. The diverging predictions of prototype- and exemplar-based scaling theories were tested using two representative models: ANCHOR…
Sensitivity of the polypropylene to the strain rate: experiments and modeling
International Nuclear Information System (INIS)
Abdul-Latif, A.; Aboura, Z.; Mosleh, L.
2002-01-01
Full text. The main goal of this work is first to evaluate experimentally the strain-rate-dependent deformation of polypropylene under tensile load, and second to propose a model capable of appropriately describing the mechanical behavior of this material, especially its sensitivity to the strain rate. Several tensile tests are performed at different quasi-static strain rates in the range of 10⁻⁵ s⁻¹ to 10⁻¹ s⁻¹. In addition, some relaxation tests are conducted that introduce strain rate jumps during testing. Within the framework of elastoviscoplasticity, a phenomenological model is developed to describe the non-linear mechanical behavior of the material under uniaxial loading paths. Under the small-strain assumption, the sensitivity of polypropylene to the strain rate, being of particular interest in this work, is accordingly taken into account. Since this model is based on internal state variables, we assume that the material's sensitivity to the strain rate is governed by the kinematic hardening variable, notably its modulus, and by the accumulated viscoplastic strain. As far as the elastic behavior is concerned, it is only slightly influenced by the employed strain rate range; for this reason, the elastic behavior is determined classically, i.e. without coupling to the rate-dependent deformation. The inelastic behavior of the material, by contrast, is thoroughly dictated by the applied strain rate. Hence, the model parameters are calibrated using several experimental databases at different strain rates (10⁻⁵ s⁻¹ to 10⁻¹ s⁻¹). Among these experimental results, some relaxation experiments and strain rate jumps during testing (increasing or decreasing) are also used to refine the identification of the model parameters. To validate the calibrated model parameters, simulation tests are achieved
Modeling oil production based on symbolic regression
International Nuclear Information System (INIS)
Yang, Guangfei; Li, Xianneng; Wang, Jianliang; Lian, Lian; Ma, Tieju
2015-01-01
Numerous models have been proposed to forecast the future trends of oil production and almost all of them are based on some predefined assumptions with various uncertainties. In this study, we propose a novel data-driven approach that uses symbolic regression to model oil production. We validate our approach on both synthetic and real data, and the results prove that symbolic regression could effectively identify the true models beneath the oil production data and also make reliable predictions. Symbolic regression indicates that world oil production will peak in 2021, which broadly agrees with other techniques used by researchers. Our results also show that the rate of decline after the peak is almost half the rate of increase before the peak, and it takes nearly 12 years to drop 4% from the peak. These predictions are more optimistic than those in several other reports, and the smoother decline will provide the world, especially the developing countries, with more time to orchestrate mitigation plans. -- Highlights: •A data-driven approach has been shown to be effective at modeling the oil production. •The Hubbert model could be discovered automatically from data. •The peak of world oil production is predicted to appear in 2021. •The decline rate after peak is half of the increase rate before peak. •Oil production projected to decline 4% post-peak
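The study reports that symbolic regression recovers the Hubbert model from production data. The underlying fitting idea can be illustrated with ordinary nonlinear least squares on a Hubbert (logistic-derivative) curve; all numbers below are synthetic placeholders, not the paper's data or its symbolic-regression method.

```python
import numpy as np
from scipy.optimize import curve_fit

def hubbert(t, q_peak, t_peak, width):
    """Hubbert curve: derivative of a logistic, peaking at t_peak with height q_peak."""
    x = np.exp(-(t - t_peak) / width)
    return 4.0 * q_peak * x / (1.0 + x) ** 2

years = np.arange(1950, 2016, dtype=float)
rng = np.random.default_rng(0)
true_curve = hubbert(years, 30.0, 2021.0, 15.0)        # synthetic "production"
observed = true_curve * (1.0 + 0.02 * rng.standard_normal(years.size))

# Fit on pre-peak data only, then read off the implied peak year
(q_fit, t_fit, w_fit), _ = curve_fit(hubbert, years, observed,
                                     p0=(25.0, 2010.0, 10.0))
```

With data ending before the peak, the fitted peak year is an extrapolation, which is exactly the situation the abstract describes for the 2021 projection.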
DEPENDENCE OF X-RAY BURST MODELS ON NUCLEAR REACTION RATES
Energy Technology Data Exchange (ETDEWEB)
Cyburt, R. H.; Keek, L.; Schatz, H. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Amthor, A. M. [Department of Physics and Astronomy, Bucknell University, Lewisburg, PA 17837 (United States); Heger, A.; Meisel, Z.; Smith, K. [Joint Institute for Nuclear Astrophysics (JINA), Michigan State University, East Lansing, MI 48824 (United States); Johnson, E. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States)
2016-10-20
X-ray bursts are thermonuclear flashes on the surface of accreting neutron stars, and reliable burst models are needed to interpret observations in terms of properties of the neutron star and the binary system. We investigate the dependence of X-ray burst models on uncertainties in (p, γ ), ( α , γ ), and ( α , p) nuclear reaction rates using fully self-consistent burst models that account for the feedbacks between changes in nuclear energy generation and changes in astrophysical conditions. A two-step approach first identified sensitive nuclear reaction rates in a single-zone model with ignition conditions chosen to match calculations with a state-of-the-art 1D multi-zone model based on the Kepler stellar evolution code. All relevant reaction rates on neutron-deficient isotopes up to mass 106 were individually varied by a factor of 100 up and down. Calculations of the 84 changes in reaction rate with the highest impact were then repeated in the 1D multi-zone model. We find a number of uncertain reaction rates that affect predictions of light curves and burst ashes significantly. The results provide insights into the nuclear processes that shape observables from X-ray bursts, and guidance for future nuclear physics work to reduce nuclear uncertainties in X-ray burst models.
Developing models for the prediction of hospital healthcare waste generation rate.
Tesfahun, Esubalew; Kumie, Abera; Beyene, Abebe
2016-01-01
An increase in the number of health institutions, along with frequent use of disposable medical products, has contributed to the increase of healthcare waste generation rate. For proper handling of healthcare waste, it is crucial to predict the amount of waste generation beforehand. Predictive models can help to optimise healthcare waste management systems, set guidelines and evaluate the prevailing strategies for healthcare waste handling and disposal. However, there is no mathematical model developed for Ethiopian hospitals to predict healthcare waste generation rate. Therefore, the objective of this research was to develop models for the prediction of a healthcare waste generation rate. A longitudinal study design was used to generate long-term data on solid healthcare waste composition, generation rate and develop predictive models. The results revealed that the healthcare waste generation rate has a strong linear correlation with the number of inpatients (R² = 0.965), and a weak one with the number of outpatients (R² = 0.424). Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching and private). In these models, the number of inpatients and outpatients were revealed to be significant factors on the quantity of waste generated. The influence of the number of inpatients and outpatients treated varies at different hospitals. Therefore, different models were developed based on the types of hospitals. © The Author(s) 2015.
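The strong linear correlation with inpatient numbers reported above amounts to an ordinary least-squares fit. A minimal sketch with invented daily records (hypothetical values, not the study's data):

```python
import numpy as np

# Hypothetical daily records: inpatient count vs. kg of healthcare waste generated
inpatients = np.array([120, 150, 180, 210, 260, 300, 340, 400], dtype=float)
waste_kg   = np.array([265, 330, 390, 465, 560, 655, 740, 870], dtype=float)

# Ordinary least squares: waste = a * inpatients + b
a, b = np.polyfit(inpatients, waste_kg, 1)

# Coefficient of determination R^2, as reported in the abstract
pred = a * inpatients + b
ss_res = np.sum((waste_kg - pred) ** 2)
ss_tot = np.sum((waste_kg - waste_kg.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

In the study, separate fits of this form (with both inpatient and outpatient counts) are built per hospital type.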
Modeling the Interest Rate Term Structure: Derivatives Contracts Dynamics and Evaluation
Directory of Open Access Journals (Sweden)
Pedro L. Valls Pereira
2005-06-01
Full Text Available This article deals with a model for the term structure of interest rates and the valuation of derivative contracts directly dependent on it. The work is of a theoretical nature and deals, exclusively, with continuous time models, making ample use of stochastic calculus results and presents original contributions that we consider relevant to the development of the fixed income market modeling. We develop a new multifactorial model of the term structure of interest rates. The model is based on the decomposition of the yield curve into the factors level, slope, curvature, and the treatment of their collective dynamics. We show that this model may be applied to serve various objectives: analysis of bond price dynamics, valuation of derivative contracts and also market risk management and formulation of operational strategies which is presented in another article.
Cluster Based Text Classification Model
DEFF Research Database (Denmark)
Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock
2011-01-01
We propose a cluster-based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created, the classifier is trained on each cluster, having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
Modelling hourly rates of evaporation from small lakes
Directory of Open Access Journals (Sweden)
R. J. Granger
2011-01-01
Full Text Available The paper presents the results of a field study of open water evaporation carried out on three small lakes in Western and Northern Canada. In this case small lakes are defined as those for which the temperature above the water surface is governed by the upwind land surface conditions; that is, a continuous boundary layer exists over the lake, and large-scale atmospheric effects such as entrainment do not come into play. Lake evaporation was measured directly using eddy covariance equipment; profiles of wind speed, air temperature and humidity were also obtained over the water surfaces. Observations were made as well over the upwind land surface.
The major factors controlling open water evaporation were examined. The study showed that for time periods shorter than daily, open water evaporation bears no relationship to the net radiation; wind speed is the most significant factor governing evaporation rates, followed by the land-water temperature contrast and the land-water vapour pressure contrast. The effect of stability on the wind field was demonstrated, and relationships were developed relating the land-water wind speed contrast to the land-water temperature contrast. The open water period can be separated into two distinct evaporative regimes: the warming period in the spring, when the land is warmer than the water and the turbulent fluxes over water are suppressed; and the cooling period, when the water is warmer than the land and the turbulent fluxes over water are enhanced.
Relationships were developed between the hourly rates of lake evaporation and the significant variables and parameters (wind speed, land-lake temperature and humidity contrasts, and the downwind distance from shore). The result is a relatively simple, versatile model for estimating hourly lake evaporation rates. The model was tested using two independent data sets. Results show that the modelled evaporation follows the observed values
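The kind of relationship described above, hourly evaporation driven mainly by wind speed plus land-water contrasts, can be sketched as a multiple linear regression. The data and coefficients below are invented for illustration and are not the study's values:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
wind = rng.uniform(1.0, 8.0, n)        # wind speed, m/s
d_temp = rng.uniform(-5.0, 5.0, n)     # land-water temperature contrast, degC
d_vap = rng.uniform(-0.5, 1.5, n)      # land-water vapour pressure contrast, kPa

# Synthetic hourly evaporation (mm/h) with wind as the dominant driver
evap = 0.02 * wind + 0.01 * d_temp + 0.015 * d_vap \
       + 0.005 * rng.standard_normal(n)

# Multiple linear regression via least squares (last column = intercept)
X = np.column_stack([wind, d_temp, d_vap, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, evap, rcond=None)
```

A fit of this shape makes the relative importance of the drivers directly readable from the coefficients, mirroring the ranking (wind first, then temperature and vapour pressure contrasts) found in the study.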
Laser Rate Equation Based Filtering for Carrier Recovery in Characterization and Communication
DEFF Research Database (Denmark)
Piels, Molly; Iglesias Olmedo, Miguel; Xue, Weiqi
2015-01-01
We formulate a semiconductor laser rate-equation-based approach to carrier recovery in a Bayesian filtering framework. Filter stability and the effect of model inaccuracies (unknown or un-useable rate equation coefficients) are discussed. Two potential application areas are explored: laser characterization and carrier recovery in coherent communication. Two rate-equation-based Bayesian filters, the particle filter and extended Kalman filter, are used in conjunction with a coherent receiver to measure the frequency noise spectrum of a photonic crystal cavity laser with less than 20 nW of fiber...
USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS
A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase min-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...
Kandasamy, Palani; Moitra, Ranabir; Mukherjee, Souti
2015-01-01
Experiments were conducted to determine the respiration rate of tomato at 10, 20 and 30 °C using a closed respiration system. Oxygen depletion and carbon dioxide accumulation in the system containing tomato were monitored. The respiration rate was found to decrease with increasing CO2 and decreasing O2 concentration. A Michaelis-Menten type model based on enzyme kinetics was evaluated using the experimental data generated for predicting the respiration rate. The model parameters obtained from the respiration rate at different O2 and CO2 concentration levels were used to fit the model against the storage temperatures. The fitting was fair (R² = 0.923 to 0.970) when the respiration rate was expressed in terms of O2 concentration. Since the inhibition constant for CO2 concentration tended towards negative values, the model was modified as a function of O2 concentration only. The modified model was fitted to the experimental data and showed good agreement (R² = 0.998) with the experimentally estimated respiration rate.
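The O2-only Michaelis-Menten form the study settles on can be fitted with standard nonlinear least squares. The respiration data below are hypothetical placeholders, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def respiration_rate(o2, v_max, k_m):
    """Michaelis-Menten respiration rate as a function of O2 concentration (%)."""
    return v_max * o2 / (k_m + o2)

o2_pct = np.array([2.0, 5.0, 8.0, 12.0, 16.0, 21.0])
rate = np.array([6.9, 12.0, 14.8, 16.9, 18.3, 19.4])   # hypothetical mL CO2 kg^-1 h^-1

(v_max, k_m), _ = curve_fit(respiration_rate, o2_pct, rate, p0=(20.0, 5.0))
```

In the study this fit is repeated at each storage temperature, and the resulting parameters are then related to temperature.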
Relationship between soil erodibility and modeled infiltration rate in different soils
Wang, Guoqiang; Fang, Qingqing; Wu, Binbin; Yang, Huicai; Xu, Zongxue
2015-09-01
The relationship between soil erodibility, which is hard to measure, and modeled infiltration rate has rarely been researched. Here, the soil erodibility factors (K and Ke in the USLE, Ki and K1 in the WEPP) were calculated and the infiltration rates were modeled based on designed laboratory simulation experiments and a proposed infiltration model, in order to establish their relationship. The impacts of compost amendment on the soil erosion characteristics and this relationship were also studied. Two contrasting agricultural soils (bare and cultivated fluvo-aquic soils) were used, and different poultry compost contents (control, low and high) were applied to both soils. The results indicated that the runoff rate, sediment yield rate and soil erodibility of the bare soil treatments were generally higher than those of the corresponding cultivated soil treatments. The application of composts generally decreased sediment yield and soil erodibility but did not always decrease runoff. The comparison of measured and modeled infiltration rates indicated that the model represented the infiltration processes well, with an N-S coefficient of 0.84 for overall treatments. Significant negative logarithmic correlations were found between final infiltration rate (FIR) and the four soil erodibility factors, and the relationship between USLE-K and FIR demonstrated the best correlation. The application of poultry composts did not influence the logarithmic relationship between FIR and soil erodibility. Our study provides a useful tool to estimate soil erodibility.
Stationarity test with a direct test for heteroskedasticity in exchange rate forecasting models
Khin, Aye Aye; Chau, Wong Hong; Seong, Lim Chee; Bin, Raymond Ling Leh; Teng, Kevin Low Lock
2017-05-01
Global economic activity has been slowing in recent years, manifested in greater exchange rate volatility on the international commodity market. This study attempts to analyze some prominent exchange rate forecasting models for Malaysian commodity trading: univariate ARIMA, ARCH and GARCH models, in conjunction with a stationarity test on residual diagnosis and direct testing of heteroskedasticity. All forecasting models utilized monthly data from 1990 to 2015, a total of 312 observations, used to forecast both short-term and long-term exchange rates. The forecasting power statistics suggested that the forecasting performance of the ARIMA (1, 1, 1) model is more efficient than that of the ARCH (1) and GARCH (1, 1) models. For the ex-post forecast, the exchange rate increased from RM 3.50 per USD in January 2015 to RM 4.47 per USD in December 2015 based on the baseline data. For the short-term ex-ante forecast, the analysis results indicate a decrease in the exchange rate in June 2016 (RM 4.27 per USD) as compared with December 2015. A more appropriate forecasting method for the exchange rate is vital to aid decision-making and planning on sustainable commodity production in the world economy.
An empirical model to predict infield thin layer drying rate of cut switchgrass
International Nuclear Information System (INIS)
Khanchi, A.; Jones, C.L.; Sharma, B.; Huhnke, R.L.; Weckler, P.; Maness, N.O.
2013-01-01
A series of 62 thin layer drying experiments were conducted to evaluate the effect of solar radiation, vapor pressure deficit and wind speed on drying rate of switchgrass. An environmental chamber was fabricated that can simulate field drying conditions. An empirical drying model based on maturity stage of switchgrass was also developed during the study. It was observed that solar radiation was the most significant factor in improving the drying rate of switchgrass at seed shattering and seed shattered maturity stage. Therefore, drying switchgrass in wide swath to intercept the maximum amount of radiation at these stages of maturity is recommended. Moreover, it was observed that under low radiation intensity conditions, wind speed helps to improve the drying rate of switchgrass. Field operations such as raking or turning of the windrows are recommended to improve air circulation within a swath on cloudy days. Additionally, it was found that the effect of individual weather parameters on the drying rate of switchgrass was dependent on maturity stage. Vapor pressure deficit was strongly correlated with the drying rate during seed development stage whereas, vapor pressure deficit was weakly correlated during seed shattering and seed shattered stage. These findings suggest the importance of using separate drying rate models for each maturity stage of switchgrass. The empirical models developed in this study can predict the drying time of switchgrass based on the forecasted weather conditions so that the appropriate decisions can be made. -- Highlights: • An environmental chamber was developed in the present study to simulate field drying conditions. • An empirical model was developed that can estimate drying rate of switchgrass based on forecasted weather conditions. • Separate equations were developed based on maturity stage of switchgrass. • Designed environmental chamber can be used to evaluate the effect of other parameters that affect drying of crops
Smartphone-based photoplethysmographic imaging for heart rate monitoring.
Alafeef, Maha
2017-07-01
The purpose of this study is to make use of visible light reflected mode photoplethysmographic (PPG) imaging for heart rate (HR) monitoring via smartphones. The system uses the built-in camera feature in mobile phones to capture video from the subject's index fingertip. The video is processed, and then the PPG signal resulting from the video stream processing is used to calculate the subject's heart rate. Records from 19 subjects were used to evaluate the system's performance. The HR values obtained by the proposed method were compared with the actual HR. The obtained results show an accuracy of 99.7% and a maximum absolute error of 0.4 beats/min where most of the absolute errors lay in the range of 0.04-0.3 beats/min. Given the encouraging results, this type of HR measurement can be adopted with great benefit, especially in the conditions of personal use or home-based care. The proposed method represents an efficient portable solution for HR accurate detection and recording.
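The HR-extraction step can be illustrated by locating the dominant spectral peak of a fingertip PPG trace within the physiological band. The signal below is synthetic and the processing is a generic sketch, not the paper's pipeline:

```python
import numpy as np

fs = 30.0                             # typical smartphone camera frame rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)    # 20 s video clip
hr_true = 72.0                        # beats per minute (synthetic ground truth)

# Synthetic PPG: cardiac component + baseline drift + sensor noise
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * (hr_true / 60.0) * t)
          + 0.3 * np.sin(2 * np.pi * 0.2 * t)
          + 0.05 * rng.standard_normal(t.size))

# Estimate HR from the dominant spectral peak in 0.7-3.0 Hz (42-180 bpm)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
band = (freqs >= 0.7) & (freqs <= 3.0)
hr_est = 60.0 * freqs[band][np.argmax(spectrum[band])]
```

Restricting the search to the physiological band discards the baseline drift that fingertip pressure and ambient light typically introduce.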
Mobile Phone-Based Mood Ratings Prospectively Predict Psychotherapy Attendance.
Bruehlman-Senecal, Emma; Aguilera, Adrian; Schueller, Stephen M
2017-09-01
Psychotherapy nonattendance is a costly and pervasive problem. While prior research has identified stable patient-level predictors of attendance, far less is known about dynamic (i.e., time-varying) factors. Identifying dynamic predictors can clarify how clinical states relate to psychotherapy attendance and inform effective "just-in-time" interventions to promote attendance. The present study examines whether daily mood, as measured by responses to automated mobile phone-based text messages, prospectively predicts attendance in group cognitive-behavioral therapy (CBT) for depression. Fifty-six Spanish-speaking Latino patients with elevated depressive symptoms (46 women, mean age=50.92years, SD=10.90years), enrolled in a manualized program of group CBT, received daily automated mood-monitoring text messages. Patients' daily mood ratings, message response rate, and delay in responding were recorded. Patients' self-reported mood the day prior to a scheduled psychotherapy session significantly predicted attendance, even after controlling for patients' prior attendance history and age (OR=1.33, 95% CI [1.04, 1.70], p=.02). Positive mood corresponded to a greater likelihood of attendance. Our results demonstrate the clinical utility of automated mood-monitoring text messages in predicting attendance. These results underscore the value of text messaging, and other mobile technologies, as adjuncts to psychotherapy. Future work should explore the use of such monitoring to guide interventions to increase attendance, and ultimately the efficacy of psychotherapy. Copyright © 2017. Published by Elsevier Ltd.
Adaptive Rate Sampling and Filtering Based on Level Crossing Sampling
Directory of Open Access Journals (Sweden)
Saeed Mian Qaisar
2009-01-01
Full Text Available The recent sophistication in the areas of mobile systems and sensor networks demands more and more processing resources. In order to maintain system autonomy, energy saving is becoming one of the most difficult industrial challenges in mobile computing. Most efforts to achieve this goal focus on improving embedded systems design and battery technology, but very few studies attempt to exploit the time-varying nature of the input signal. This paper aims to achieve power efficiency by intelligently adapting the processing activity to the input signal's local characteristics. It is done by completely rethinking the processing chain, adopting a non-conventional sampling scheme and adaptive rate filtering. The proposed approach, based on the LCSS (Level Crossing Sampling Scheme), presents two filtering techniques able to adapt their sampling rate and filter order by online analysis of the input signal variations. Indeed, the principle is to intelligently exploit the signal's local characteristics (which are usually never considered) to filter only the relevant signal parts, employing filters of the relevant order. This idea leads to a drastic gain in computational efficiency, and hence in processing power, when compared to classical techniques.
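A minimal level-crossing sampler (a sketch of the LCSS idea, not the paper's implementation) records a sample only when the signal crosses one of a fixed set of quantization levels, so quiet stretches of the signal generate almost no samples:

```python
import numpy as np

def level_crossing_sample(x, t, levels):
    """Keep a sample whenever the signal crosses a quantization level."""
    samples = [(t[0], x[0])]
    for i in range(1, len(x)):
        lo, hi = sorted((x[i - 1], x[i]))
        if np.any((levels > lo) & (levels <= hi)):
            samples.append((t[i], x[i]))
    return samples

t = np.linspace(0.0, 2.0, 2000)
# Bursty signal: active 5 Hz tone in the first half, nearly flat afterwards
x = np.where(t < 1.0, np.sin(2 * np.pi * 5.0 * t), 0.01)
levels = np.linspace(-1.0, 1.0, 17)    # 17 uniform quantization levels

samples = level_crossing_sample(x, t, levels)
```

The active half produces a dense stream of samples while the flat half produces essentially none, which is what allows the downstream filter rate and order to be relaxed when the signal is idle.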
Directory of Open Access Journals (Sweden)
Shim S.M.
2012-01-01
Full Text Available The performance of a CO2 absorber column using mono-ethanolamine (MEA) solution as a chemical solvent is predicted by a one-dimensional (1-D) rate-based model in the present study. 1-D mass and heat balance equations for the vapor and liquid phases are coupled with an interfacial mass transfer model and a vapor-liquid equilibrium model. The two-film theory is used to estimate the mass transfer between the vapor and liquid films. Chemical reactions in the MEA-CO2-H2O system are considered to predict the equilibrium pressure of CO2 in the MEA solution. The mathematical and reaction kinetics models used in this work are calculated using an in-house code. The numerical results are validated by comparing the simulation results with experimental and simulation data given in the literature. The performance of the CO2 absorber column is evaluated by the 1-D rate-based model using various reaction rate coefficients suggested by various researchers. When the liquid-to-gas mass flow rate ratio is about 8.3, 6.6, 4.5 and 3.1, the error in CO2 loading and CO2 removal efficiency using the reaction rate coefficients of Aboudheir et al. is within about 4.9 % and 5.2 %, respectively. Therefore, the reaction rate coefficient suggested by Aboudheir et al., among the various reaction rate coefficients used in this study, is appropriate for predicting the performance of a CO2 absorber column using MEA solution. [Acknowledgement. This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2011-0017220).]
Improved model for the angular dependence of excimer laser ablation rates in polymer materials
Pedder, J. E. A.; Holmes, A. S.; Dyer, P. E.
2009-10-01
Measurements of the angle-dependent ablation rates of polymers that have applications in microdevice fabrication are reported. A simple model based on Beer's law, including plume absorption, is shown to give good agreement with the experimental findings for polycarbonate and SU8, ablated using the 193 and 248 nm excimer lasers, respectively. The modeling forms a useful tool for designing masks needed to fabricate complex surface relief by ablation.
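The Beer's-law model with the incident fluence projected by cos(θ) can be written in a few lines. The threshold fluence and effective absorption coefficient below are placeholder values for illustration, not the measured parameters for polycarbonate or SU8:

```python
import numpy as np

def etch_depth_per_pulse(theta_deg, fluence, f_threshold=50.0, alpha_eff=2.0):
    """Beer's-law ablation: d = (1/alpha) * ln(F_eff / F_th) when the effective
    fluence F_eff = F * cos(theta) exceeds the ablation threshold, else zero
    (illustrative units: mJ/cm^2 for fluence, 1/um for alpha)."""
    f_eff = fluence * np.cos(np.radians(theta_deg))
    return np.where(f_eff > f_threshold,
                    np.log(f_eff / f_threshold) / alpha_eff,
                    0.0)

angles = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
depths = etch_depth_per_pulse(angles, fluence=200.0)
```

The cos(θ) projection alone reproduces the qualitative angular falloff; plume absorption, which the paper adds, further reduces the rate at steep angles.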
Directory of Open Access Journals (Sweden)
Boris V Schmid
Full Text Available BACKGROUND: A large trial to investigate the effectiveness of population based screening for chlamydia infections was conducted in the Netherlands in 2008-2012. The trial was register based and consisted of four rounds of screening of women and men in the age groups 16-29 years in three regions in the Netherlands. Data were collected on participation rates and positivity rates per round. A modeling study was conducted to project screening effects for various screening strategies into the future. METHODS AND FINDINGS: We used a stochastic network simulation model incorporating partnership formation and dissolution, aging and a sexual life course perspective. Trends in baseline rates of chlamydia testing and treatment were used to describe the epidemiological situation before the start of the screening program. Data on participation rates was used to describe screening uptake in rural and urban areas. Simulations were used to project the effectiveness of screening on chlamydia prevalence for a time period of 10 years. In addition, we tested alternative screening strategies, such as including only women, targeting different age groups, and biennial screening. Screening reduced prevalence by about 1% in the first two screening rounds and leveled off after that. Extrapolating observed participation rates into the future indicated very low participation in the long run. Alternative strategies only marginally changed the effectiveness of screening. Higher participation rates as originally foreseen in the program would have succeeded in reducing chlamydia prevalence to very low levels in the long run. CONCLUSIONS: Decreasing participation rates over time profoundly impact the effectiveness of population based screening for chlamydia infections. Using data from several consecutive rounds of screening in a simulation model enabled us to assess the future effectiveness of screening on prevalence. If participation rates cannot be kept at a sufficient level
Modeling and Model Predictive Power and Rate Control of Wireless Communication Networks
Directory of Open Access Journals (Sweden)
Cunwu Han
2014-01-01
Full Text Available A novel power and rate control system model for wireless communication networks is presented, which includes uncertainties, input constraints, and time-varying delays in both state and control input. A robust delay-dependent model predictive power and rate control method is proposed, and the state feedback control law is obtained by solving an optimization problem that is derived by using linear matrix inequality (LMI techniques. Simulation results are given to illustrate the effectiveness of the proposed method.
International Nuclear Information System (INIS)
Chatterjee, Bishu; Sharp, Peter A.
2006-01-01
Electric transmission and other rate cases use a form of the discounted cash flow model with a single long-term growth rate to estimate rates of return on equity. It cannot incorporate information about the appropriate time horizon for which analysts' estimates of earnings growth have predictive powers. Only a non-constant growth model can explicitly recognize the importance of the time horizon in an ROE calculation. (author)
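A non-constant growth model of the kind advocated here can be sketched as a two-stage dividend discount model, with the implied return on equity found by bisection on the discount rate. All inputs below are hypothetical:

```python
def two_stage_price(r, d0=2.0, g1=0.08, g2=0.03, n=5):
    """Two-stage DDM: dividends grow at g1 for n years (the analyst-forecast
    horizon), then at the long-term rate g2 forever (Gordon terminal value)."""
    price, d = 0.0, d0
    for year in range(1, n + 1):
        d *= (1.0 + g1)
        price += d / (1.0 + r) ** year
    terminal = d * (1.0 + g2) / (r - g2)
    return price + terminal / (1.0 + r) ** n

def implied_roe(market_price, lo=0.04, hi=0.25, tol=1e-8):
    """Bisection for the discount rate that reproduces the observed price
    (two_stage_price is monotonically decreasing in r)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if two_stage_price(mid) > market_price:
            lo = mid    # model price too high -> rate must rise
        else:
            hi = mid
    return 0.5 * (lo + hi)

roe = implied_roe(40.0)
```

The horizon n makes explicit the period over which analysts' growth estimates are assumed to have predictive power, which a single-growth-rate DCF cannot express.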
Volatility modeling for IDR exchange rate through APARCH model with student-t distribution
Nugroho, Didit Budi; Susanto, Bambang
2017-08-01
The aim of this study is to empirically investigate the performance of the APARCH(1,1) volatility model with the Student-t error distribution on five foreign currency selling rates against the Indonesian rupiah (IDR), including the Swiss franc (CHF), the euro (EUR), the British pound (GBP), the Japanese yen (JPY), and the US dollar (USD). Six years of daily closing rates over the period January 2010 to December 2016, a total of 1722 observations, were analysed. Bayesian inference using the efficient independence chain Metropolis-Hastings and adaptive random walk Metropolis methods in a Markov chain Monte Carlo (MCMC) scheme was applied to estimate the parameters of the model. According to the DIC criterion, this study found that the APARCH(1,1) model under the Student-t distribution is a better fit than the model under the normal distribution for every observed rate return series. The 95% highest posterior density interval supported the APARCH model for the IDR/JPY and IDR/USD volatilities. In particular, the IDR/JPY and IDR/USD data have significant negative and positive leverage effects, respectively, in the rate returns. Meanwhile, the optimal power coefficient of volatility was found to be statistically different from 2 for all rate return series except the IDR/EUR series.
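The APARCH(1,1) recursion itself is compact. A numpy sketch with arbitrary coefficients (placeholders, not the study's Bayesian estimates) shows the leverage and flexible-power features the model exploits:

```python
import numpy as np

def aparch_filter(returns, omega=0.02, alpha=0.08, beta=0.90,
                  gamma=0.3, delta=1.5):
    """APARCH(1,1) volatility filter:
    sigma_t^delta = omega + alpha*(|e_{t-1}| - gamma*e_{t-1})^delta
                    + beta*sigma_{t-1}^delta.
    gamma > 0 gives negative shocks a larger volatility impact (leverage);
    delta is the power coefficient the study tests against 2."""
    s_delta = np.empty_like(returns)
    s_delta[0] = omega / (1.0 - beta)      # crude initialization
    for i in range(1, len(returns)):
        e = returns[i - 1]
        s_delta[i] = (omega + alpha * (abs(e) - gamma * e) ** delta
                      + beta * s_delta[i - 1])
    return s_delta ** (1.0 / delta)

rng = np.random.default_rng(2)
sig = aparch_filter(0.5 * rng.standard_normal(500))

# Leverage check: a -1 shock raises sigma^delta more than a +1 shock
neg_impact = 0.02 + 0.08 * (abs(-1.0) - 0.3 * (-1.0)) ** 1.5
pos_impact = 0.02 + 0.08 * (abs(1.0) - 0.3 * 1.0) ** 1.5
```

Setting gamma = 0 and delta = 2 recovers plain GARCH(1,1), which is why a power coefficient statistically different from 2 is evidence in favor of the APARCH form.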
White, H; Racine, J
2001-01-01
We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
Unprecedented rates of land-use transformation in modeled climate change mitigation pathways
Turner, P. A.; Field, C. B.; Lobell, D. B.; Sanchez, D.; Mach, K. J.
2017-12-01
Integrated assessment models (IAMs) generate climate change mitigation scenarios consistent with global temperature targets. To limit warming to 2°, stylized cost-effective mitigation pathways rely on extensive deployments of carbon dioxide (CO2) removal (CDR) technologies, including multi-gigatonne yearly carbon removal from the atmosphere through bioenergy with carbon capture and storage (BECCS) and afforestation/reforestation. These assumed CDR deployments keep ambitious temperature limits in reach, but associated rates of land-use transformation have not been evaluated. For IAM scenarios from the IPCC Fifth Assessment Report, we compare rates of modeled land-use conversion to recent observed commodity crop expansions. In scenarios with a likely chance of limiting warming to 2° in 2100, the rate of energy cropland expansion supporting BECCS exceeds past commodity crop rates by several fold. In some cases, mitigation scenarios include abrupt reversal of deforestation, paired with massive afforestation/reforestation. Specifically, energy cropland in crop. If energy cropland instead increases at rates equal to recent soybean and oil palm expansions, the scale of CO2 removal possible with BECCS is 2.6 to 10-times lower, respectively, than the deployments <2° IAM scenarios rely upon in 2100. IAM mitigation pathways may favor multi-gigatonne biomass-based CDR given undervalued sociopolitical and techno-economic deployment barriers. Heroic modeled rates for land-use transformation imply that large-scale biomass-based CDR is not an easy solution to the climate challenge.
The importance of the strain rate and creep on the stress corrosion cracking mechanisms and models
International Nuclear Information System (INIS)
Aly, Omar F.; Mattar Neto, Miguel; Schvartzman, Monica M.A.M.
2011-01-01
Stress corrosion cracking is a life-degradation mode, involving brittle fracture, of equipment and components (such as pressure vessels, nozzles, tubes, and accessories) in the nuclear power, petrochemical, and other industries. Stress corrosion cracking failures can produce serious accidents and incidents that put at risk the safety, reliability, and efficiency of many plants, and these failures are very difficult to predict. Stress corrosion cracking mechanisms depend on three kinds of factors: microstructural, mechanical, and environmental. Concerning the mechanical factors, various authors prefer to consider the crack-tip strain rate, rather than stress, as the decisive factor contributing to the process; this parameter is directly influenced by the creep strain rate of the material. Based on two KAPL (Knolls Atomic Power Laboratory) experimental studies using SSRT (slow strain rate tests) and CL (constant load) tests for the prediction of primary water stress corrosion cracking (PWSCC) in nickel-based alloys, a compilation of film-rupture-mechanism parameters was made for modeling PWSCC of Alloy 600, and the importance of strain rate and creep in stress corrosion cracking mechanisms and models is discussed. From this study, a simple theoretical model is proposed, and it is shown that the crack growth rates estimated from Brazilian SSRT results with Alloy 600 agree with the KAPL results and other published literature. (author)
Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model
International Nuclear Information System (INIS)
Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.
2012-01-01
The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.
International Nuclear Information System (INIS)
Han, Jaeyoung; Jung, Mooncheong; Yu, Sangseok; Yi, Sun
2016-01-01
In this study, a model reference adaptive controller is developed to regulate the outlet air flow rate of a centrifugal compressor for an automotive supercharger. The centrifugal compressor model is developed using an analytical method to predict transient operating behavior, and the model is validated against experimental data to confirm its accuracy. The model reference adaptive control structure consists of a compressor model and an MRAC (model reference adaptive control) mechanism. Feedback control alone is not robust to variations in system parameters, whereas the adaptive control remains robust even when system parameters change. As a result, the MRAC regulated the air flow rate to the reference value, and it was found to be more robust than feedback control under parameter changes.
Energy Technology Data Exchange (ETDEWEB)
Han, Jaeyoung; Jung, Mooncheong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Yi, Sun [North Carolina A and T State Univ., Raleigh (United States)
2016-08-15
In this study, a model reference adaptive controller is developed to regulate the outlet air flow rate of a centrifugal compressor for an automotive supercharger. The centrifugal compressor model is developed using an analytical method to predict transient operating behavior, and the model is validated against experimental data to confirm its accuracy. The model reference adaptive control structure consists of a compressor model and an MRAC (model reference adaptive control) mechanism. Feedback control alone is not robust to variations in system parameters, whereas the adaptive control remains robust even when system parameters change. As a result, the MRAC regulated the air flow rate to the reference value, and it was found to be more robust than feedback control under parameter changes.
A mesoscopic reaction rate model for shock initiation of multi-component PBX explosives.
Liu, Y R; Duan, Z P; Zhang, Z Y; Ou, Z C; Huang, F L
2016-11-05
The primary goal of this research is to develop a three-term mesoscopic reaction rate model, consisting of hot-spot ignition, low-pressure slow burning, and high-pressure fast reaction terms, for shock initiation of multi-component Plastic Bonded Explosives (PBX). Based on the DZK hot-spot model for a single-component PBX explosive, the hot-spot ignition term and its reaction rate are obtained through a "mixing rule" over the explosive components; new expressions for the low-pressure slow burning term and the high-pressure fast reaction term are likewise obtained by relating the reaction rate of the multi-component PBX explosive to those of its components, based on the corresponding terms of a mesoscopic reaction rate model. For verification, the new reaction rate model is incorporated into the DYNA2D code to numerically simulate the shock initiation process of the PBXC03 and PBXC10 multi-component PBX explosives, and the simulated pressure histories at different Lagrange locations in the explosive are in good agreement with previous experimental data. Copyright © 2016 Elsevier B.V. All rights reserved.
Graph Model Based Indoor Tracking
DEFF Research Database (Denmark)
Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin
2009-01-01
The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...
Computer Based Modelling and Simulation
Indian Academy of Sciences (India)
Computer Based ... universities, and later did system analysis, ... personal computers (PC) and low-cost software packages and tools. They can serve as useful learning experience through student projects. Models are ... Let us consider a numerical example: to calculate the velocity of a trainer aircraft ...
International Nuclear Information System (INIS)
Milgram, J.; Dormoy, J.L.
1994-09-01
Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always be checked if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis, as the information on which it is based is suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors on the plant model. (authors). 8 refs., 4 figs
Dependence and risk assessment for oil prices and exchange rate portfolios: A wavelet based approach
Aloui, Chaker; Jammazi, Rania
2015-10-01
In this article, we propose a wavelet-based approach to accommodate the stylized facts and complex structure of financial data, caused by frequent and abrupt changes of markets and noises. Specifically, we show how the combination of both continuous and discrete wavelet transforms with traditional financial models helps improve portfolio's market risk assessment. In the empirical stage, three wavelet-based models (wavelet-EGARCH with dynamic conditional correlations, wavelet-copula, and wavelet-extreme value) are considered and applied to crude oil price and US dollar exchange rate data. Our findings show that the wavelet-based approach provides an effective and powerful tool for detecting extreme moments and improving the accuracy of VaR and Expected Shortfall estimates of oil-exchange rate portfolios after noise is removed from the original data.
Model Predictive Control based on Finite Impulse Response Models
DEFF Research Database (Denmark)
Prasath, Guru; Jørgensen, John Bagterp
2008-01-01
We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations ... and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR based robust MPC as well as to estimate the maximum performance improvements by robust MPC. ...
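The two building blocks named in the abstract, an FIR output model and a constant output-disturbance feedback filter, can be sketched as follows; the coefficient values and function names are illustrative assumptions, not the paper's implementation.

```python
def fir_output(h, u_past, d=0.0):
    """FIR model output: y[k] = sum_i h[i] * u[k-1-i] + d, where u_past
    lists past inputs newest first and d is an output-disturbance estimate."""
    return sum(hi * ui for hi, ui in zip(h, u_past)) + d

def update_disturbance(y_measured, h, u_past):
    """Constant output disturbance filter: the feedback signal is simply
    the gap between the measured output and the disturbance-free model."""
    return y_measured - fir_output(h, u_past)
```

Once `d` is estimated from the latest measurement, all future predictions in the MPC horizon are offset by it, which is what gives the controller offset-free tracking under a constant plant-model mismatch.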
A simulation model for the determination of tabarru' rate in a family takaful
Ismail, Hamizun bin
2014-06-01
The concept of tabarru' incorporated in family takaful serves to eliminate the element of uncertainty in the contract, as a participant agrees to relinquish a certain portion of his contribution as a donation. The most important feature of family takaful is that it does not guarantee a definite return on a participant's contribution, unlike its conventional counterpart, where a premium is paid in return for a guaranteed amount of insurance benefit. In other words, the investment return on the funds contributed by the participants is based on actual investment experience. The objective of this study is to set up a framework for determining the tabarru' rate by simulation. The model is based on a binomial death process. Specifically, a linear tabarru' rate and a flat tabarru' rate are introduced. The simulation trials show that the linear assumption on the tabarru' rate has an advantage over the flat counterpart as far as the risk of the investment accumulation at maturity is concerned.
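A binomial death process of the kind the abstract describes is easy to simulate; the sketch below compares cumulative tabarru' collected under a flat rate and under a linearly increasing schedule. All parameter values and the linear schedule itself are illustrative assumptions, not figures from the paper.

```python
import random

def simulate_tabarru(n0=1000, years=10, q=0.01, flat_rate=0.02,
                     contribution=100.0, seed=1):
    """Binomial death process: each surviving participant dies in a given
    year with probability q. Compares cumulative tabarru' collected under
    a flat rate and under a hypothetical linear schedule chosen so that
    its average over the term equals the flat rate."""
    rng = random.Random(seed)
    alive = n0
    fund_flat = fund_linear = 0.0
    for t in range(years):
        deaths = sum(1 for _ in range(alive) if rng.random() < q)
        alive -= deaths
        fund_flat += alive * contribution * flat_rate
        linear_rate = flat_rate * (2 * t + 1) / years  # averages to flat_rate
        fund_linear += alive * contribution * linear_rate
    return alive, fund_flat, fund_linear
```

Repeating such runs over many seeds yields the distribution of accumulated funds at maturity, which is the quantity whose risk the two tabarru' schedules are compared on.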
Directory of Open Access Journals (Sweden)
Yan-jie Ni
2016-04-01
Full Text Available A 30 mm electrothermal-chemical (ETC) gun experimental system is employed to research the burning rate characteristics of 4/7 high-nitrogen solid propellant. Enhanced gas generation rates (EGGR) of propellants during and after electrical discharges are verified in the experiments. A modified 0D internal ballistic model is established to simulate the ETC launch. According to the measured pressure and electrical parameters, a transient burning rate law including the influence of the EGGR coefficient by electric power and the pressure gradient (dp/dt) is added to the model. The EGGR coefficient of 4/7 high-nitrogen solid propellant is equal to 0.005 MW−1. Both the simulated breech pressure and the projectile muzzle velocity accord well with the experimental results. Compared with Woodley's modified burning rate law, the breech pressure curves acquired by the transient burning rate law are more consistent with test results. Based on the parameters calculated in the model, the relationship among propellant burning rate, pressure gradient (dp/dt) and electric power is analyzed. With the transient burning rate law and experimental data, the burning of solid propellant under plasma conditions is described more accurately.
A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.
An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene
2016-04-30
Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given that no previous positive test has been obtained prior to the start of that interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by the year of HIV infection, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate that takes into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection Metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.
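The generative side of this two-level hierarchy can be sketched as a forward simulation: Poisson infections per year at level one, then a per-year testing probability that determines the year of diagnosis at level two. This is a simplified illustration of the data-generating structure (it omits AIDS progression and the priors), with all parameter values assumed.

```python
import math
import random

def poisson_draw(lam, rng):
    """Poisson sample via Knuth's multiplication method (stdlib only)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_diagnoses(incidence, testing_rate, years, seed=0):
    """Level 1: latent infections in year t ~ Poisson(incidence[t]).
    Level 2: each infected person is diagnosed in a later year s >= t
    with per-year probability testing_rate (a geometric testing delay),
    or remains undiagnosed by the end of the horizon."""
    rng = random.Random(seed)
    diagnoses = [0] * years
    for t in range(years):
        for _ in range(poisson_draw(incidence[t], rng)):
            s = t
            while s < years:
                if rng.random() < testing_rate:
                    diagnoses[s] += 1
                    break
                s += 1
    return diagnoses
```

Inference then runs in the opposite direction: given observed yearly diagnosis counts, the posterior over `incidence` and `testing_rate` is explored by MCMC, which in the paper is adaptive rejection Metropolis sampling.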
Directory of Open Access Journals (Sweden)
Javed Akram
2018-04-01
Full Text Available A microstructural simulation method is adopted to predict the location-specific strain rates, temperatures, grain evolution, and accumulated strains in Inconel 718 friction welds. A cellular automata based 2D microstructure model was developed for Inconel 718 alloy using theoretical aspects of dynamic recrystallization. Flow curves were simulated and compared with experimental results using hot deformation parameters obtained from the literature. Using the validated model, simulations were performed for friction welds of Inconel 718 alloy generated at three rotational speeds, i.e., 1200, 1500, and 1500 RPM. Results showed an increase in strain rates with increasing rotational speed. These simulated strain rates were found to match the analytical results. A temperature difference of 150 K was observed from the center to the edge of the weld. At all rotational speeds the temperature was identical, implying attainment of a steady-state temperature (0.89Tm). Keywords: Microstructure modeling, Dynamic recrystallization, Friction welding, Inconel 718, EBSD, Hot deformation, Strain map
Influence of satellite-derived photolysis rates and NOx emissions on Texas ozone modeling
Tang, W.; Cohan, D. S.; Pour-Biazar, A.; Lamsal, L. N.; White, A. T.; Xiao, X.; Zhou, W.; Henderson, B. H.; Lash, B. F.
2015-02-01
Uncertain photolysis rates and emission inventories impair the accuracy of state-level ozone (O3) regulatory modeling. Past studies have separately used satellite-observed clouds to correct the model-predicted photolysis rates, or satellite-constrained top-down NOx emissions to identify and reduce uncertainties in bottom-up NOx emissions. However, the joint application of multiple satellite-derived model inputs to improve O3 state implementation plan (SIP) modeling has rarely been explored. In this study, Geostationary Operational Environmental Satellite (GOES) observations of clouds are applied to derive the photolysis rates, replacing those used in Texas SIP modeling. This changes modeled O3 concentrations by up to 80 ppb and improves O3 simulations by reducing modeled normalized mean bias (NMB) and normalized mean error (NME) by up to 0.1. A sector-based discrete Kalman filter (DKF) inversion approach is incorporated with the Comprehensive Air Quality Model with extensions (CAMx)-decoupled direct method (DDM) model to adjust Texas NOx emissions using a high-resolution Ozone Monitoring Instrument (OMI) NO2 product. The discrepancy between OMI and CAMx NO2 vertical column densities (VCDs) is further reduced by increasing modeled NOx lifetime and adding an artificial amount of NO2 in the upper troposphere. The region-based DKF inversion suggests increasing NOx emissions by 10-50% in most regions, deteriorating the model performance in predicting ground NO2 and O3, while the sector-based DKF inversion tends to scale down area and nonroad NOx emissions by 50%, leading to a 2-5 ppb decrease in ground 8 h O3 predictions. Model performance in simulating ground NO2 and O3 is improved using sector-based inversion-constrained NOx emissions, with 0.25 and 0.04 reductions in NMBs and 0.13 and 0.04 reductions in NMEs, respectively. Using both GOES-derived photolysis rates and OMI-constrained NOx emissions together reduces modeled NMB and NME by 0.05, increases the model
Estimating the Per-Base-Pair Mutation Rate in the Yeast Saccharomyces cerevisiae
Lang, Gregory I.; Murray, Andrew W.
2008-01-01
Although mutation rates are a key determinant of the rate of evolution, they are difficult to measure precisely, and global mutation rates (mutations per genome per generation) are often extrapolated from the per-base-pair mutation rate assuming that the mutation rate is uniform across the genome. Using budding yeast, we describe an improved method for the accurate calculation of mutation rates based on the fluctuation assay. Our analysis suggests that the per-base-pair mutation rates at two genes...
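The abstract does not detail the improved analysis, but the classical baseline for fluctuation-assay rate estimates, the Luria-Delbruck p0 method, is a useful point of reference. The culture counts and population size in the usage note are hypothetical.

```python
import math

def mutation_rate_p0(cultures_without_mutants, total_cultures, final_population):
    """Classical Luria-Delbruck p0 method: the number of mutational events
    per culture is Poisson distributed, so the fraction p0 of cultures with
    no mutants gives m = -ln(p0) expected mutations per culture. Dividing m
    by the final population size yields the per-cell mutation rate."""
    p0 = cultures_without_mutants / total_cultures
    m = -math.log(p0)
    return m / final_population
```

For instance, if 36 of 72 parallel cultures of 1e8 cells showed no mutants, the estimate would be ln(2)/1e8, about 6.9e-9 per base pair per generation at the assayed locus (a made-up example, not the paper's data).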
Simple mass transport model for metal uptake by marine macroalgae growing at different rates
Energy Technology Data Exchange (ETDEWEB)
Rice, D.L.
1984-01-01
Although algae growing at different rates may exhibit different concentrations of a given metal, such differences in algal chemistry may or may not reflect actual effects of environmental growth factors on the kinetics of metal uptake. Published data on uptake of rubidium, cadmium, and manganese by the green seaweed Ulva fasciata Delile grown at different rates in open-system sea water were interpreted using the model. Differences in exposure time to sea water between relatively old and relatively young thalli were responsible for significant decreases in algal rubidium and cadmium concentrations with increases in specific growth rate. The biomass-specific rates of uptake of these two metals did not vary with growth rate. Both algal concentrations and specific rates of uptake of manganese increased significantly with increasing growth rate, indicating a distinct link between the kinetics of manganese uptake and metabolic rate. Under some circumstances, seaweed bioassay coupled with an interpretive model may provide the only reasonable approach to the study of chemical uptake-growth phenomena. In practice, if the residence time of sea water in culture chambers is sufficiently low to preclude pseudo-closed-system artifacts, differences in trace metal concentrations between input and output sea water may be difficult to detect. In the field, in situ experiments based on time-series monitoring of changes in the water chemistry would be technically difficult or perhaps impossible to perform. 13 references, 1 figure.
Sutiani, Ani; Silitonga, Mei Y.
2017-08-01
This research focused on the effect of learning models and emotional intelligence on students' chemistry learning outcomes for the reaction rate teaching topic. To achieve the objectives of the research, a 2x2 factorial research design was used. Two factors were tested, namely the learning model (factor A) and emotional intelligence (factor B). Two learning models were used: problem-based learning/PBL (A1) and project-based learning/PjBL (A2), while emotional intelligence was divided into higher and lower types. The population comprised six classes containing 243 grade X students of SMAN 10 Medan, Indonesia. Fifteen students from each class were chosen as the sample by applying a purposive sampling technique. The data were analyzed by applying two-way analysis of variance (2x2) at the significance level α = 0.05. Based on hypothesis testing, there was an interaction between learning models and emotional intelligence in students' chemistry learning outcomes. The findings showed that learning outcomes in reaction rate for students with higher emotional intelligence taught using PBL were higher than for those taught using PjBL. There was no significant difference between students with lower emotional intelligence taught using PBL and PjBL on the reaction rate topic. Based on the findings, students with lower emotional intelligence found it quite hard to engage with other students in group discussion.
Modeling the exchange rate of the euro against the dollar using the ARCH/GARCH models
Directory of Open Access Journals (Sweden)
Kovačević Radovan
2016-01-01
Full Text Available The analysis of time series with conditional heteroskedasticity (changeable time variability, conditional variance instability, the phenomenon called volatility) is the main task of ARCH and GARCH models. The aim of these models is to calculate some of the volatility indicators needed for financial decisions. This paper examines the performance of the generalized autoregressive conditional heteroskedasticity (GARCH) model in modeling the daily changes of the log exchange rate of the euro against the dollar. Several GARCH models with different numbers of parameters have been applied to the daily log exchange rate returns of the euro. A characteristic of the estimated GARCH models is that the obtained coefficients of lagged squared residuals and the conditional variance parameters in the conditional variance equation are strongly statistically significant. The sum of these two coefficient estimates is close to unity, which is typical for GARCH models applied to financial asset returns. This means that shocks in the conditional variance equation will be long lasting: a large sum of these two coefficients implies that high positive or negative returns lead to a large forecast variance over a prolonged period. The asymmetric EGARCH(1,1) model showed the best results in modeling the euro exchange rate returns. The asymmetry term in the conditional variance equation of this model is negative and statistically significant, suggesting that positive shocks have less impact on the conditional variance than negative shocks. The asymmetric EGARCH(1,1) model thus provides evidence of a leverage effect.
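The persistence property described above comes directly from the GARCH(1,1) conditional variance recursion, which can be sketched in a few lines; the parameter values in the test of the idea are illustrative, not the paper's estimates.

```python
def garch11_variances(returns, omega, alpha, beta):
    """GARCH(1,1) conditional variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta).
    When alpha + beta is close to 1, a volatility shock decays slowly,
    which is the long-lasting effect the abstract describes."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2
```

In practice the parameters are estimated by maximum likelihood (e.g., with a volatility-modeling library) rather than set by hand; the recursion above is only the variance-filtering step that such estimation repeats at each trial parameter vector.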
Estimating glomerular filtration rate in a population-based study
Directory of Open Access Journals (Sweden)
Anoop Shankar
2010-07-01
Full Text Available Anoop Shankar1, Kristine E Lee2, Barbara EK Klein2, Paul Muntner3, Peter C Brazy4, Karen J Cruickshanks2,5, F Javier Nieto5, Lorraine G Danforth2, Carla R Schubert2,5, Michael Y Tsai6, Ronald Klein21Department of Community Medicine, West Virginia University School of Medicine, Morgantown, WV, USA; 2Department of Ophthalmology and Visual Sciences, 4Department of Medicine, 5Department of Population Health Sciences, University of Wisconsin, School of Medicine and Public Health, Madison, WI, USA; 3Department of Community Medicine, Mount Sinai School of Medicine, NY, USA; 6Department of Laboratory Medicine and Pathology, University of Minnesota, Minneapolis, MN, USABackground: Glomerular filtration rate (GFR-estimating equations are used to determine the prevalence of chronic kidney disease (CKD in population-based studies. However, it has been suggested that since the commonly used GFR equations were originally developed from samples of patients with CKD, they underestimate GFR in healthy populations. Few studies have made side-by-side comparisons of the effect of various estimating equations on the prevalence estimates of CKD in a general population sample.Patients and methods: We examined a population-based sample comprising adults from Wisconsin (age, 43–86 years; 56% women. We compared the prevalence of CKD, defined as a GFR of <60 mL/min per 1.73 m2 estimated from serum creatinine, by applying various commonly used equations including the modification of diet in renal disease (MDRD equation, Cockcroft–Gault (CG equation, and the Mayo equation. We compared the performance of these equations against the CKD definition of cystatin C >1.23 mg/L.Results: We found that the prevalence of CKD varied widely among different GFR equations. Although the prevalence of CKD was 17.2% with the MDRD equation and 16.5% with the CG equation, it was only 4.8% with the Mayo equation. Only 24% of those identified to have GFR in the range of 50–59 mL/min per 1
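Two of the estimating equations compared above have simple closed forms. The sketch below uses the IDMS-traceable 4-variable MDRD re-expression (coefficient 175) and the Cockcroft-Gault formula as commonly published; note that Cockcroft-Gault estimates creatinine clearance in mL/min, not body-surface-area-normalized GFR, and the patient values in the test are hypothetical.

```python
def gfr_mdrd(scr_mg_dl, age, female, black=False):
    """4-variable MDRD study equation, IDMS-traceable re-expression
    (result in mL/min per 1.73 m^2)."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

def creatinine_clearance_cg(scr_mg_dl, age, weight_kg, female):
    """Cockcroft-Gault creatinine clearance (mL/min, not BSA-adjusted)."""
    ccr = (140.0 - age) * weight_kg / (72.0 * scr_mg_dl)
    return ccr * 0.85 if female else ccr
```

Because the equations scale creatinine differently, the same serum creatinine can land on opposite sides of the 60 mL/min per 1.73 m^2 CKD threshold, which is one mechanism behind the widely varying prevalence estimates the study reports.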
Rate-base determination through real-time efficiency assessment
International Nuclear Information System (INIS)
Eckhardt, J.H.; Bishop, T.W.
1990-01-01
One of the main problems with nuclear power is the extremely high construction costs and long schedules for plant construction and start-up. It is unlikely that utility executives will risk their companies' financial health by committing the necessary capital resources given the prevailing uncertainties. For new nuclear plants to play a major role in preventing future electric supply shortages, the financial uncertainties associated with high construction costs must be minimized. To contain costs and maintain reasonable schedules for future plants, the utilities, vendors, the US Nuclear Regulatory Commission (NRC), and the state regulatory commissions can make specific changes. One of the key factors to reduce uncertainty and improve cost and schedule performance is for the state regulatory commissions to change the method of determining reasonable plant costs and placing those costs in the rate base. Currently, most state regulatory commissions assess the reasonableness of costs only after completion of construction, resulting in years of financial uncertainty and untimely conclusions as to what should have been done better
The RBANS Effort Index: base rates in geriatric samples.
Duff, Kevin; Spering, Cynthia C; O'Bryant, Sid E; Beglinger, Leigh J; Moser, David J; Bayless, John D; Culp, Kennith R; Mold, James W; Adams, Russell L; Scott, James G
2011-01-01
The Effort Index (EI) of the RBANS was developed to assist clinicians in discriminating patients who demonstrate good effort from those with poor effort. However, there are concerns that older adults might be unfairly penalized by this index, which uses uncorrected raw scores. Using five independent samples of geriatric patients with a broad range of cognitive functioning (e.g., cognitively intact, nursing home residents, probable Alzheimer's disease), base rates of failure on the EI were calculated. In cognitively intact and mildly impaired samples, few older individuals were classified as demonstrating poor effort (e.g., 3% in cognitively intact). However, in the more severely impaired geriatric patients, over one third had EI scores that fell above suggested cutoff scores (e.g., 37% in nursing home residents, 33% in probable Alzheimer's disease). In the cognitively intact sample, older and less educated patients were more likely to have scores suggestive of poor effort. Education effects were observed in three of the four clinical samples. Overall cognitive functioning was significantly correlated with EI scores, with poorer cognition being associated with greater suspicion of low effort. The current results suggest that age, education, and level of cognitive functioning should be taken into consideration when interpreting EI results and that significant caution is warranted when examining EI scores in elders suspected of having dementia.
Model-Based Power Plant Master Control
Energy Technology Data Exchange (ETDEWEB)
Boman, Katarina; Thomas, Jean; Funkquist, Jonas
2010-08-15
The main goal of the project has been to evaluate the potential of a coordinated master control for a solid fuel power plant in terms of tracking capability, stability and robustness. The control strategy has been model-based predictive control (MPC) and the plant used in the case study has been the Vattenfall power plant Idbaecken in Nykoeping. A dynamic plant model based on nonlinear physical models was used to imitate the true plant in MATLAB/SIMULINK simulations. The basis for this model was already developed in previous Vattenfall internal projects, along with a simulation model of the existing control implementation with traditional PID controllers. The existing PID control is used as a reference performance, and it has been thoroughly studied and tuned in these previous Vattenfall internal projects. A turbine model was developed with characteristics based on the results of steady-state simulations of the plant using the software EBSILON. Using the derived model as a representative for the actual process, an MPC control strategy was developed using linearization and gain-scheduling. The control signal constraints (rate of change) and constraints on outputs were implemented to comply with plant constraints. After tuning the MPC control parameters, a number of simulation scenarios were performed to compare the MPC strategy with the existing PID control structure. The simulation scenarios also included cases highlighting the robustness properties of the MPC strategy. From the study, the main conclusions are: - The proposed Master MPC controller shows excellent set-point tracking performance even though the plant has strong interactions and non-linearity, and the controls and their rate of change are bounded. - The proposed Master MPC controller is robust, stable in the presence of disturbances and parameter variations. Even though the current study only considered a very small number of the possible disturbances and modelling errors, the considered cases are
Schieferdecker, Ina; Großmann, Jürgen; Schneider, Martin
2012-01-01
Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, for enabling guidance on test identification and specification, or for automated test generation. Model-based security...
Modeling the time--varying subjective quality of HTTP video streams with rate adaptations.
Chen, Chao; Choi, Lark Kwon; de Veciana, Gustavo; Caramanis, Constantine; Heath, Robert W; Bovik, Alan C
2014-05-01
Newly developed hypertext transfer protocol (HTTP)-based video streaming technologies enable flexible rate-adaptation under varying channel conditions. Accurately predicting the users' quality of experience (QoE) for rate-adaptive HTTP video streams is thus critical to achieve efficiency. An important aspect of understanding and modeling QoE is predicting the up-to-the-moment subjective quality of a video as it is played, which is difficult due to hysteresis effects and nonlinearities in human behavioral responses. This paper presents a Hammerstein-Wiener model for predicting the time-varying subjective quality (TVSQ) of rate-adaptive videos. To collect data for model parameterization and validation, a database of longer duration videos with time-varying distortions was built and the TVSQs of the videos were measured in a large-scale subjective study. The proposed method is able to reliably predict the TVSQ of rate adaptive videos. Since the Hammerstein-Wiener model has a very simple structure, the proposed method is suitable for online TVSQ prediction in HTTP-based streaming.
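The Hammerstein-Wiener structure named above is a static input nonlinearity, a linear dynamic block, and a static output nonlinearity in series. The sketch below shows that structure generically with an FIR linear block; the specific nonlinearities and coefficients the paper fits for TVSQ prediction are not reproduced here.

```python
def hammerstein_wiener(u_seq, f_in, h, f_out):
    """Hammerstein-Wiener cascade: apply static input nonlinearity f_in,
    filter through a linear FIR block h, then apply static output
    nonlinearity f_out. The two static maps capture nonlinear human
    response; the linear block captures temporal dynamics (hysteresis)."""
    x = [f_in(u) for u in u_seq]
    y = []
    for k in range(len(x)):
        acc = sum(h[i] * x[k - i] for i in range(len(h)) if k - i >= 0)
        y.append(f_out(acc))
    return y
```

Its simplicity is the point made in the abstract: once the three components are fitted, each new sample costs one nonlinear map, one FIR convolution step, and one more nonlinear map, so the TVSQ can be predicted online as the stream plays.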
The economic production lot size model with several production rates
DEFF Research Database (Denmark)
Larsen, Christian
should be chosen in the interval between the demand rate and the production rate, which minimize unit production costs, and should be used in an increasing order. Then, given the production rates, we derive closed form solutions for the optimal runtimes as well as the minimum average cost. Finally we...
Directory of Open Access Journals (Sweden)
Ina Schieferdecker
2012-02-01
Full Text Available Security testing aims at validating software system requirements related to security properties such as confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for the specification of test cases at a higher level of abstraction, enable guidance on test identification and specification, and support automated test generation. Model-based security testing (MBST) is a relatively new field, dedicated especially to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.
MATHEMATICAL MODELING OF HEATING RATE PRODUCT AT HIGH HEAT TREATMENT
Directory of Open Access Journals (Sweden)
M. M. Akhmedova
2014-01-01
Full Text Available Methods of computing and mathematical modeling are widely used in the study of various heat exchange processes, providing the ability to study the dynamics of the processes as well as to conduct a reasonable search for the optimal technological parameters of heat treatment. This work is devoted to identifying correlations among the factors that have the greatest effect on the rate of heating of the product during high-temperature heat sterilization in a stream of hot air, chosen here as the temperature difference (between the most and least warmed-up points) and the speed of the cans during heat sterilization. From experimental data on the warming of the central and peripheral layers of apple compote in a 3-liter pot during high-temperature heat treatment in a stream of hot air, a regression equation in the form of a second-degree polynomial was obtained, taking into account the effects of pairwise interaction of these parameters.
Analysis of Factors that Influence Infiltration Rates using the HELP Model
International Nuclear Information System (INIS)
Dyer, J.; Shipmon, J.
2017-01-01
The Hydrologic Evaluation of Landfill Performance (HELP) model is used by Savannah River National Laboratory (SRNL) in conjunction with PORFLOW groundwater flow simulation software to make long-term predictions of the fate and transport of radionuclides in the environment at radiological waste sites. The work summarized in this report supports preparation of the planned 2018 Performance Assessment for the E-Area Low-Level Waste Facility (LLWF) at the Savannah River Site (SRS). More specifically, this project focused on conducting a sensitivity analysis of infiltration (i.e., the rate at which water travels vertically in soil) through the proposed E-Area LLWF closure cap. A sensitivity analysis was completed using HELP v3.95D to identify the cap design and material property parameters that most impact infiltration rates through the proposed closure cap for a 10,000-year simulation period. The results of the sensitivity analysis indicate that saturated hydraulic conductivity (Ksat) for select cap layers, precipitation rate, surface vegetation type, and geomembrane layer defect density are the dominant factors limiting infiltration rate. Notably, calculated infiltration rates were substantially influenced by changes in the saturated hydraulic conductivity of the Upper Foundation and Lateral Drainage layers. For example, an order-of-magnitude decrease in Ksat for the Upper Foundation layer lowered the maximum infiltration rate from a base-case 11 inches per year to only two inches per year. Conversely, an order-of-magnitude increase in Ksat led to an increase in infiltration rate from 11 to 15 inches per year. This work and its results provide a framework for quantifying uncertainty in the radionuclide transport and dose models for the planned 2018 E-Area Performance Assessment. Future work will focus on the development of a nonlinear regression model for infiltration rate using Minitab 17® to facilitate execution of probabilistic simulations in the GoldSim® overall
Analysis of Factors that Influence Infiltration Rates using the HELP Model
Energy Technology Data Exchange (ETDEWEB)
Dyer, J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Shipmon, J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-09-28
The Hydrologic Evaluation of Landfill Performance (HELP) model is used by Savannah River National Laboratory (SRNL) in conjunction with PORFLOW groundwater flow simulation software to make long-term predictions of the fate and transport of radionuclides in the environment at radiological waste sites. The work summarized in this report supports preparation of the planned 2018 Performance Assessment for the E-Area Low-Level Waste Facility (LLWF) at the Savannah River Site (SRS). More specifically, this project focused on conducting a sensitivity analysis of infiltration (i.e., the rate at which water travels vertically in soil) through the proposed E-Area LLWF closure cap. A sensitivity analysis was completed using HELP v3.95D to identify the cap design and material property parameters that most impact infiltration rates through the proposed closure cap for a 10,000-year simulation period. The results of the sensitivity analysis indicate that saturated hydraulic conductivity (Ksat) for select cap layers, precipitation rate, surface vegetation type, and geomembrane layer defect density are the dominant factors limiting infiltration rate. Notably, calculated infiltration rates were substantially influenced by changes in the saturated hydraulic conductivity of the Upper Foundation and Lateral Drainage layers. For example, an order-of-magnitude decrease in Ksat for the Upper Foundation layer lowered the maximum infiltration rate from a base-case 11 inches per year to only two inches per year. Conversely, an order-of-magnitude increase in Ksat led to an increase in infiltration rate from 11 to 15 inches per year. This work and its results provide a framework for quantifying uncertainty in the radionuclide transport and dose models for the planned 2018 E-Area Performance Assessment. Future work will focus on the development of a nonlinear regression model for infiltration rate using Minitab 17® to facilitate execution of probabilistic simulations in the GoldSim® overall
Skrepnek, Grant H
2004-01-01
Accounting-based profits have indicated that pharmaceutical firms have achieved greater returns relative to other sectors. However, partially due to the theoretically inappropriate reporting of research and development (R&D) expenditures under generally accepted accounting principles, evidence suggests that a substantial upward bias is present in accounting-based rates of return for corporations with high levels of intangible assets. Given the intensity of R&D in pharmaceutical firms, accounting-based profit metrics in the drug sector may be affected to a greater extent than in other industries. The aim of this work was to address measurement issues associated with corporate performance and the factors that contribute to the bias within accounting-based rates of return. Seminal and broadly cited works on the subject of accounting- versus economic-based rates of return were reviewed from the economics and finance literature, with an emphasis placed on issues and scientific evidence directly related to the drug development process and pharmaceutical industry. With international convergence and harmonization of accounting standards being imminent, stricter adherence to theoretically sound economic principles is advocated, particularly those based on discounted cash-flow methods. Researchers, financial analysts, and policy makers must be cognizant of the biases and limitations present within numerous corporate performance measures. Furthermore, the development of more robust and valid economic models of the pharmaceutical industry is required to capture the unique dimensions of risk and return of the drug development process. Empirical work has illustrated that estimates of economic-based rates of return range from approximately 2 to approximately 11 percentage points below various accounting-based rates of return for drug companies. Because differences in the nature of risk and uncertainty borne by drug manufacturers versus other sectors make comparative assessments
International Nuclear Information System (INIS)
Browning, R.V.; Scammon, R.J.
1998-01-01
Modeling impact events on systems containing plastic bonded explosive materials requires accurate models for stress evolution at high strain rates out to large strains. For example, in the Steven test geometry reactions occur after strains of 0.5 or more are reached for PBX-9501. The morphology of this class of materials and properties of the constituents are briefly described. We then review the viscoelastic behavior observed at small strains for this class of material, and evaluate large strain models used for granular materials such as cap models. Dilatation under shearing deformations of the PBX is experimentally observed and is one of the key features modeled in cap style plasticity theories, together with bulk plastic flow at high pressures. We propose a model that combines viscoelastic behavior at small strains but adds intergranular stresses at larger strains. A procedure using numerical simulations and comparisons with results from flyer plate tests and low rate uniaxial stress tests is used to develop a rough set of constants for PBX-9501. Comparisons with the high rate flyer plate tests demonstrate that the observed characteristic behavior is captured by this viscoelastic based model. copyright 1998 American Institute of Physics
Lexa, Frank James; Berlin, Jonathan W
2005-03-01
In this article, the authors cover tools for financial modeling. Commonly used time lines and cash flow diagrams are discussed. Commonly used but limited terms such as payback and breakeven are introduced. The important topics of the time value of money and discount rates are introduced to lay the foundation for their use in modeling and in more advanced metrics such as the internal rate of return. Finally, the authors broach the more sophisticated topic of net present value.
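The time-value-of-money and discount-rate concepts introduced above can be made concrete with a short sketch. The cash flows, the 10% discount rate, and the function names below are illustrative assumptions, not figures from the article:

```python
# Illustrative sketch (invented numbers): discounting a cash-flow time line
# to net present value, plus the simpler (and more limited) payback metric.

def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """First year in which cumulative undiscounted cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # the investment never pays back

# Example: -100 invested today, 40 per year for four years.
flows = [-100.0, 40.0, 40.0, 40.0, 40.0]
print(round(npv(0.10, flows), 2))   # NPV at a 10% discount rate -> 26.79
print(payback_period(flows))        # undiscounted payback -> year 3
```

Payback ignores discounting entirely, which is exactly the limitation the article flags before introducing NPV and the internal rate of return.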
Low Base-Substitution Mutation Rate in the Germline Genome of the Ciliate Tetrahymena thermophila
2016-09-15
Hongan Long; David J. Winter; Allan Y.-C…
McKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.
2005-01-01
Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables; 30-minute forward averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield best performance and avoid model discontinuity over day/night data boundaries.
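The model form described above (an intercept plus a single fixed-power term per regression variable, fit by least squares) can be sketched on synthetic data. The regressors, powers, and coefficients below are invented stand-ins, not the DFW/Memphis values:

```python
import numpy as np

# Hypothetical sketch of the stated model form: EDR ~ b0 + b1*x1^p1 + b2*x2^p2,
# fit by ordinary least squares. All numbers are synthetic for illustration.
rng = np.random.default_rng(0)
n = 200
wind_mean = rng.uniform(1, 10, n)     # 30-min mean wind speed (stand-in)
temp_var = rng.uniform(0.1, 2.0, n)   # temperature variance (stand-in)
powers = (0.5, 1.0)                   # assumed "fixed optimal" powers

# Synthetic truth used to generate the training data
true_beta = np.array([0.02, 0.015, 0.01])
X = np.column_stack([np.ones(n), wind_mean ** powers[0], temp_var ** powers[1]])
edr = X @ true_beta + rng.normal(0, 1e-4, n)

# Least-squares fit of intercept and the two power-term coefficients
beta, *_ = np.linalg.lstsq(X, edr, rcond=None)
print(np.round(beta, 3))
```

A night-time variant, as in the study, would simply regress on `np.log(edr)` instead of `edr`, keeping the same design matrix.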
Energy Technology Data Exchange (ETDEWEB)
Hernandez-Mangas, J.M. [Dpto. de Electricidad y Electronica, Universidad de Valladolid, ETSI Telecomunicaciones, Campus Miguel Delibes, Valladolid E-47011 (Spain)]. E-mail: jesus.hernandez.mangas@tel.uva.es; Arias, J. [Dpto. de Electricidad y Electronica, Universidad de Valladolid, ETSI Telecomunicaciones, Campus Miguel Delibes, Valladolid E-47011 (Spain); Marques, L.A. [Dpto. de Electricidad y Electronica, Universidad de Valladolid, ETSI Telecomunicaciones, Campus Miguel Delibes, Valladolid E-47011 (Spain); Ruiz-Bueno, A. [Dpto. de Electricidad y Electronica, Universidad de Valladolid, ETSI Telecomunicaciones, Campus Miguel Delibes, Valladolid E-47011 (Spain); Bailon, L. [Dpto. de Electricidad y Electronica, Universidad de Valladolid, ETSI Telecomunicaciones, Campus Miguel Delibes, Valladolid E-47011 (Spain)
2005-01-01
Currently there are extensive atomistic studies that model some characteristics of the damage buildup due to ion irradiation (e.g. L. Pelaz et al., Appl. Phys. Lett. 82 (2003) 2038-2040). Our interest is to develop a novel statistical damage buildup model for our BCA ion implant simulator (IIS) code in order to extend its ranges of applicability. The model takes into account the abrupt regime of the crystal-amorphous transition. It works with different temperatures and dose-rates and also models the transition temperature. We have tested it with some projectiles (Ge, P) implanted into silicon. In this work we describe the new statistical damage accumulation model based on the modified Kinchin-Pease model. The results obtained have been compared with existing experimental results.
International Nuclear Information System (INIS)
Hernandez-Mangas, J.M.; Arias, J.; Marques, L.A.; Ruiz-Bueno, A.; Bailon, L.
2005-01-01
Currently there are extensive atomistic studies that model some characteristics of the damage buildup due to ion irradiation (e.g. L. Pelaz et al., Appl. Phys. Lett. 82 (2003) 2038-2040). Our interest is to develop a novel statistical damage buildup model for our BCA ion implant simulator (IIS) code in order to extend its ranges of applicability. The model takes into account the abrupt regime of the crystal-amorphous transition. It works with different temperatures and dose-rates and also models the transition temperature. We have tested it with some projectiles (Ge, P) implanted into silicon. In this work we describe the new statistical damage accumulation model based on the modified Kinchin-Pease model. The results obtained have been compared with existing experimental results
Classification rates: non‐parametric versus parametric models using ...
African Journals Online (AJOL)
This research sought to establish whether non-parametric modeling achieves a higher correct classification ratio than a parametric model. The local likelihood technique was used to fit the data sets. The same data sets were modeled using a parametric logit model, and the abilities of the two models to correctly predict the binary ...
An Assessment of the Internal Rating Based Approach in Basel II
Simone Varotto
2008-01-01
The new bank capital regulation commonly known as Basel II includes an internal rating based approach (IRB) to measuring credit risk in bank portfolios. The IRB relies on the assumptions that the portfolio is fully diversified and that systematic risk is driven by one common factor. In this work we empirically investigate the impact of these assumptions by comparing the risk measures produced by the IRB with those of a more general credit risk model that allows for multiple systematic risk fac...
Modeling study on the effects of pulse rise rate in atmospheric pulsed discharges
Zhang, Yuan-Tao; Wang, Yan-Hui
2018-02-01
In this paper, we present a modeling study of the characteristics of discharges driven by short pulsed voltages, focusing on the effects of pulse rise rate based on a fluid description of atmospheric plasmas. The numerical results show that the breakdown voltage of a short pulsed discharge depends almost linearly on the pulse rise rate, which is also confirmed by equations derived from the fluid model. In other words, if the pulse rise rate is fixed, the simulation results clearly suggest that the breakdown voltage is almost unchanged even though the amplitude of the pulsed voltage increases significantly. The spatial distributions of the electric field and electron density are given to reveal the underpinning physics. Additionally, the computational data and the analytical expression also indicate that an increased repetition frequency can effectively decrease the breakdown voltage and current density, which is consistent with experimental observation.
Estimating time-based instantaneous total mortality rate based on the age-structured abundance index
Wang, Yingbin; Jiao, Yan
2015-05-01
The instantaneous total mortality rate ( Z) of a fish population is one of the important parameters in fisheries stock assessment. The estimation of Z is crucial to fish population dynamics analysis, abundance and catch forecast, and fisheries management. A catch curve-based method for estimating time-based Z and its change trend from catch per unit effort (CPUE) data of multiple cohorts is developed. Unlike the traditional catch-curve method, the method developed here does not need the assumption of constant Z throughout the time, but the Z values in n continuous years are assumed constant, and then the Z values in different n continuous years are estimated using the age-based CPUE data within these years. The results of the simulation analyses show that the trends of the estimated time-based Z are consistent with the trends of the true Z, and the estimated rates of change from this approach are close to the true change rates (the relative differences between the change rates of the estimated Z and the true Z are smaller than 10%). Variations of both Z and recruitment can affect the estimates of Z value and the trend of Z. The most appropriate value of n can be different given the effects of different factors. Therefore, the appropriate value of n for different fisheries should be determined through a simulation analysis as we demonstrated in this study. Further analyses suggested that selectivity and age estimation are also two factors that can affect the estimated Z values if there is error in either of them, but the estimated change rates of Z are still close to the true change rates. We also applied this approach to the Atlantic cod ( Gadus morhua) fishery of eastern Newfoundland and Labrador from 1983 to 1997, and obtained reasonable estimates of time-based Z.
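The catch-curve idea behind the method can be illustrated on synthetic data: under constant Z, a cohort's CPUE at age a is proportional to exp(-Z*a), so regressing ln(CPUE) on age recovers Z from the slope. All numbers below are invented, and the paper's further step of segmenting time into n-year windows is omitted:

```python
import numpy as np

# Toy catch-curve sketch (invented values): CPUE_a = q * N0 * exp(-Z * a),
# so ln(CPUE) declines linearly with age at slope -Z.
Z_true, q, N0 = 0.4, 0.001, 1e6
ages = np.arange(1, 9)                       # fully selected ages only
rng = np.random.default_rng(1)
cpue = q * N0 * np.exp(-Z_true * ages) * np.exp(rng.normal(0, 0.05, ages.size))

# Linear regression of log-CPUE on age; Z is minus the slope
slope, intercept = np.polyfit(ages, np.log(cpue), 1)
Z_hat = -slope
print(round(Z_hat, 2))   # close to the true Z of 0.4
```

As the abstract notes, this simple version assumes constant Z and full selectivity over the ages used; errors in either bias the level of Z more than its trend.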
On a sparse pressure-flow rate condensation of rigid circulation models
Schiavazzi, D. E.; Hsia, T. Y.; Marsden, A. L.
2015-01-01
Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlets flow rates are also quantified through a Sobol’ decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed loop boundary conditions and the abdominal aorta with open loop boundary conditions. PMID:26671219
A GPS Satellite Clock Offset Prediction Method Based on Fitting Clock Offset Rates Data
Directory of Open Access Journals (Sweden)
WANG Fuhong
2016-12-01
Full Text Available A satellite atomic clock offset prediction method based on fitting and modeling clock offset rate data is proposed. This method builds a quadratic model, or a linear model combined with periodic terms, to fit the time series of clock offset rates, and computes the trend coefficients of the model with the best estimation. The clock offset precisely estimated at the initial prediction epoch is directly adopted as the constant coefficient of the model. The clock offsets in the rapid ephemeris (IGR) provided by IGS are used as modeling data sets to perform experiments for different types of GPS satellite clocks. The results show that the clock prediction accuracies of the proposed method for 3, 6, 12 and 24 h reach 0.43, 0.58, 0.90 and 1.47 ns respectively, outperforming the traditional method based on fitting the original clock offsets by 69.3%, 61.8%, 50.5% and 37.2%. Compared with the IGU real-time clock products provided by IGS, the prediction accuracies of the new method improve by about 15.7%, 23.7%, 27.4% and 34.4% respectively.
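The core idea can be sketched on synthetic data: fit a trend model to the clock offset *rates*, then integrate it, anchoring the constant term at the precisely estimated offset at the prediction start epoch. The polynomial coefficients and noise level below are invented, not real GPS clock values, and the periodic terms are omitted:

```python
import numpy as np

# Synthetic truth: offset(t) = a0 + a1*t + a2*t^2 (seconds, t in hours)
a0, a1, a2 = 5.0e-9, 2.0e-9, 1.0e-12
rng = np.random.default_rng(2)
t = np.arange(0.0, 24.0, 0.25)               # 24 h of modeling data
rates = (a1 + 2 * a2 * t) + rng.normal(0, 1e-13, t.size)  # noisy d(offset)/dt

# Fit a linear trend to the rates: rate(t) ~ c0 + c1*t
c1, c0 = np.polyfit(t, rates, 1)

# Integrate the fitted rate, anchored at the known offset at the start epoch
t0 = 24.0
offset_t0 = a0 + a1 * t0 + a2 * t0 ** 2      # "precisely estimated" anchor

def predict(tp):
    return offset_t0 + c0 * (tp - t0) + 0.5 * c1 * (tp ** 2 - t0 ** 2)

err = abs(predict(30.0) - (a0 + a1 * 30 + a2 * 30 ** 2))
print(err < 1e-10)   # sub-0.1 ns error on this synthetic example
```

Anchoring at the known initial offset is what distinguishes this from fitting the offsets themselves: errors in the constant term no longer accumulate into the prediction.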
Crowdsourcing Based 3d Modeling
Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.
2016-06-01
Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.
CROWDSOURCING BASED 3D MODELING
Directory of Open Access Journals (Sweden)
A. Somogyi
2016-06-01
Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.
Testing and Modeling Fuel Regression Rate in a Miniature Hybrid Burner
Directory of Open Access Journals (Sweden)
Luciano Fanton
2012-01-01
Full Text Available Ballistic characterization of an extended group of innovative HTPB-based solid fuel formulations for hybrid rocket propulsion was performed in a lab-scale burner. An optical time-resolved technique was used to assess the quasi-steady regression history of single-perforation, cylindrical samples. The effects of metalized additives and radiant heat transfer on the regression rate of such formulations were assessed. Under the investigated operating conditions and based on phenomenological models from the literature, analyses of the collected experimental data show an appreciable influence of the radiant heat flux from burnt gases and soot for both unloaded and loaded fuel formulations. Pure HTPB regression rate data are satisfactorily reproduced, while the impressive initial regression rates of metalized formulations require further assessment.
Study on Rail Profile Optimization Based on the Nonlinear Relationship between Profile and Wear Rate
Directory of Open Access Journals (Sweden)
Jianxi Wang
2017-01-01
Full Text Available This paper proposes a rail profile optimization method that takes account of the wear rate within the design cycle, so as to minimize rail wear at curves in heavy haul railways and extend the service life of the rail. Taking the rail wear rate as the objective function, the vertical coordinates of the rail profile within the optimization range as the independent variables, and the geometric characteristics and grinding depth of the rail profile as constraint conditions, support vector machine regression was used to fit the nonlinear relationship between the rail profile and its wear rate. Then, the profile optimization model was built. Based on the optimization principle of the genetic algorithm, the profile optimization model was solved to obtain the optimal rail profile. A multibody dynamics model was used to check the dynamic performance of a carriage running on the optimized rail profile. The results showed that the average relative error of the support vector machine regression model remained less than 10% after a number of training processes. The dynamic performance of a carriage running on the optimized rail profile met the requirements on safety and stability indices. The wear rate of the optimized profile was lower than that of the standard profile by 5.8%, and the allowable carrying gross weight increased by 12.7%.
Distributed Fair Auto Rate Medium Access Control for IEEE 802.11 Based WLANs
Zhu, Yanfeng; Niu, Zhisheng
Much research has shown that a carefully designed auto rate medium access control can utilize the underlying physical multi-rate capability to exploit the time variation of the channel. In this paper, we develop a simple analytical model to elucidate the rule that maximizes the throughput of RTS/CTS based multi-rate wireless local area networks. Based on the discovered rule, we propose two distributed fair auto rate medium access control schemes, called FARM and FARM+, from the viewpoints of throughput fairness and time-share fairness, respectively. With the proposed schemes, after receiving an RTS frame, the receiver selectively returns the CTS frame to inform the transmitter of the maximum feasible rate probed by the signal-to-noise ratio of the received RTS frame. The key feature of the proposed schemes is that they are capable of maintaining throughput/time-share fairness in asymmetric situations where the distribution of SNR varies across stations.
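The receiver-side rate probing that such schemes rely on can be sketched as an SNR-threshold lookup: the receiver maps the SNR measured on the RTS frame to the highest PHY rate whose threshold is met, and returns that rate in the CTS. The thresholds and rates below are illustrative placeholders, not values from the paper:

```python
# Minimal sketch (thresholds invented): map RTS-frame SNR to the highest
# feasible PHY rate. Entries are (snr_threshold_dB, rate_Mbps), sorted by
# decreasing rate.
RATE_TABLE = [(25.0, 54), (18.0, 36), (12.0, 18), (6.0, 6), (0.0, 1)]

def select_rate(snr_db):
    """Return the highest rate whose SNR threshold the measured SNR meets."""
    for threshold, rate in RATE_TABLE:
        if snr_db >= threshold:
            return rate
    return RATE_TABLE[-1][1]  # fall back to the base rate

print(select_rate(20.0))  # 36
print(select_rate(3.0))   # 1
```

Because the decision is made per frame at the receiver, the feedback naturally tracks a time-varying channel, which is the property the analytical model above exploits.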
SDOF models for reinforced concrete beams under impulsive loads accounting for strain rate effects
Energy Technology Data Exchange (ETDEWEB)
Stochino, F., E-mail: fstochino@unica.it [Department of Civil and Environmental Engineering and Architecture, University of Cagliari, Via Marengo 2, 09123 Cagliari (Italy); Carta, G., E-mail: giorgio_carta@unica.it [Department of Mechanical, Chemical and Materials Engineering, University of Cagliari, Via Marengo 2, 09123 Cagliari (Italy)
2014-09-15
Highlights: • Flexural failure of reinforced concrete beams under blast and impact loads is studied. • Two single degree of freedom models are formulated to predict the beam response. • Strain rate effects are taken into account for both models. • The theoretical response obtained from each model is compared with experimental data. • The two models give a good estimation of the maximum deflection at collapse. - Abstract: In this paper, reinforced concrete beams subjected to blast and impact loads are examined. Two single degree of freedom models are proposed to predict the response of the beam. The first model (denoted as “energy model”) is developed from the law of energy balance and assumes that the deformed shape of the beam is represented by its first vibration mode. In the second model (named “dynamic model”), the dynamic behavior of the beam is simulated by a spring-mass oscillator. In both formulations, the strain rate dependencies of the constitutive properties of the beams are considered by varying the parameters of the models at each time step of the computation according to the values of the strain rates of the materials (i.e. concrete and reinforcing steels). The efficiency of each model is evaluated by comparing the theoretical results with experimental data found in the literature. The comparison shows that the energy model gives a good estimation of the maximum deflection of the beam at collapse, defined as the attainment of the ultimate strain in concrete. On the other hand, the dynamic model generally provides a smaller value of the maximum displacement. However, both approaches yield reliable results, even though they are based on some approximations. Being also very simple to implement, they may serve as a useful tool in practical applications.
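The "dynamic model" idea, a spring-mass oscillator driven by an impulsive load, can be sketched with a few lines of explicit time stepping. All parameters below are invented, and the strain-rate-dependent property updates described in the paper are omitted for brevity:

```python
# Hypothetical SDOF sketch: undamped spring-mass system under a short
# triangular force pulse, integrated with semi-implicit Euler.
m, k = 500.0, 2.0e6           # equivalent mass (kg) and stiffness (N/m), invented
dt, T = 1.0e-5, 0.05          # time step and total simulated time (s)
td, P0 = 2.0e-3, 2.0e5        # pulse duration (s) and peak force (N), invented

def force(t):
    """Triangular impulse: peak P0 at t=0, decaying linearly to 0 at t=td."""
    return P0 * (1.0 - t / td) if t < td else 0.0

u, v, t, u_max = 0.0, 0.0, 0.0, 0.0
while t < T:
    a = (force(t) - k * u) / m    # equation of motion: m*u'' + k*u = F(t)
    v += a * dt                    # semi-implicit Euler: velocity first,
    u += v * dt                    # then displacement
    u_max = max(u_max, abs(u))
    t += dt

print(round(u_max * 1000, 1))     # peak deflection in mm
```

Since the pulse is much shorter than the natural period (about 0.1 s here), the peak is close to the impulse estimate I/(m*omega), around 6 mm for these numbers; the paper's full model would additionally update k and the resisting force at each step from the instantaneous strain rates.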
Rate-dependent extensions of the parametric magneto-dynamic model with magnetic hysteresis
Directory of Open Access Journals (Sweden)
S. Steentjes
2017-05-01
Full Text Available This paper extends the parametric magneto-dynamic model of soft magnetic steel sheets to account for the phase shift between the local magnetic flux density and the magnetic field strength. This phase shift originates from the damped motion of domain walls and is strongly dependent on the microstructure of the material. In this regard, two different approaches to including the rate-dependent effects are investigated: a purely phenomenological, mathematical approach and a physics-based one.
Verification of Sulfate Attack Penetration Rates for Saltstone Disposal Unit Modeling
Energy Technology Data Exchange (ETDEWEB)
Flach, G. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-05-12
Recent Special Analysis modeling of Saltstone Disposal Units considers sulfate attack on concrete and utilizes degradation rates estimated from Cementitious Barriers Partnership software simulations. This study provides an independent verification of those simulation results using an alternative analysis method and an independent characterization data source. The sulfate penetration depths estimated herein are similar to the best-estimate values in SRNL-STI-2013-00118 Rev. 2 and well below the nominal values subsequently used to define Saltstone Special Analysis base cases.
A Constitutive Model for Superelastic Shape Memory Alloys Considering the Influence of Strain Rate
Directory of Open Access Journals (Sweden)
Hui Qian
2013-01-01
Full Text Available Shape memory alloys (SMAs are a relatively new class of functional materials, exhibiting special thermomechanical behaviors, such as shape memory effect and superelasticity, which enable their applications in seismic engineering as energy dissipation devices. This paper investigates the properties of superelastic NiTi shape memory alloys, emphasizing the influence of strain rate on superelastic behavior under various strain amplitudes by cyclic tensile tests. A novel constitutive equation based on Graesser and Cozzarelli’s model is proposed to describe the strain-rate-dependent hysteretic behavior of superelastic SMAs at different strain levels. A stress variable including the influence of strain rate is introduced into Graesser and Cozzarelli’s model. To verify the effectiveness of the proposed constitutive equation, experiments on superelastic NiTi wires with different strain rates and strain levels are conducted. Numerical simulation results based on the proposed constitutive equation and experimental results are in good agreement. The findings in this paper will assist the future design of superelastic SMA-based energy dissipation devices for seismic protection of structures.
SLS Navigation Model-Based Design Approach
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and
Interest rate models for pension and insurance regulation
Broeders, Dirk; de Jong, Frank; Schotman, Peter
2016-01-01
Liabilities of pension funds and life insurers typically have very long times to maturity. The valuation of such liabilities introduces particular challenges as it relies on long term interest rates. As the market for long term interest rates is less liquid, financial institutions and the regulator
Interest Rate Models for Pension and Insurance Regulation
Broeders, D.W.G.A.; de Jong, Frank; Schotman, Peter
Liabilities of pension funds and life insurers typically have very long times to maturity. The valuation of such liabilities introduces particular challenges as it relies on long term interest rates. As the market for long term interest rates is less liquid, financial institutions and the regulator
Modelling the filling rate of pit latrines | Brouckaert | Water SA
African Journals Online (AJOL)
Excreta (faeces and urine) that are deposited into a pit latrine are subject to biodegradation, which substantially reduces the volume that remains. On the other hand, other matter that is not biodegradable usually finds its way into pit latrines. The net filling rate is thus dependent on both the rate of addition of material and its ...
Oracle posterior rates in the White Noise Model
Babenko, A.
2010-01-01
All the results about posterior rates obtained until now are related to the optimal (minimax) rates for the estimation problem over the corresponding nonparametric smoothness classes, i.e. of a global nature. In the meantime, a new local approach to optimality has been developed within the
Male sexual strategies modify ratings of female models with specific waist-to-hip ratios.
Brase, Gary L; Walker, Gary
2004-06-01
Female waist-to-hip ratio (WHR) has generally been an important general predictor of ratings of physical attractiveness and related characteristics. Individual differences in ratings do exist, however, and may be related to differences in the reproductive tactics of the male raters such as pursuit of short-term or long-term relationships and adjustments based on perceptions of one's own quality as a mate. Forty males, categorized according to sociosexual orientation and physical qualities (WHR, Body Mass Index, and self-rated desirability), rated female models on both attractiveness and likelihood they would approach them. Sociosexually restricted males were less likely to approach females rated as most attractive (with 0.68-0.72 WHR), as compared with unrestricted males. Males with lower scores in terms of physical qualities gave ratings indicating more favorable evaluations of female models with lower WHR. The results indicate that attractiveness and willingness to approach are overlapping but distinguishable constructs, both of which are influenced by variations in characteristics of the raters.
Process-Based Modeling of Constructed Wetlands
Baechler, S.; Brovelli, A.; Rossi, L.; Barry, D. A.
2007-12-01
Constructed wetlands (CWs) are widespread facilities for wastewater treatment. In subsurface flow wetlands, contaminated wastewater flows through a porous matrix, where oxidation and detoxification phenomena occur. Despite the large number of working CWs, system design and optimization are still mainly based upon empirical equations or simplified first-order kinetics. This results from an incomplete understanding of the system functioning, and may in turn hinder the performance and effectiveness of the treatment process. As a result, CWs are often considered not suitable to meet high water-quality standards, or to treat water contaminated with recalcitrant anthropogenic contaminants. To date, only a limited number of detailed numerical models have been developed and successfully applied to simulate constructed wetland behavior. Among these, one of the most complete and powerful is CW2D, which is based on Hydrus2D. The aim of this work is to develop a comprehensive simulator tailored to model the functioning of horizontal flow constructed wetlands and in turn provide a reliable design and optimization tool. The model is based upon PHWAT, a general reactive transport code for saturated flow. PHWAT couples MODFLOW, MT3DMS and PHREEQC-2 using an operator-splitting approach. The use of PHREEQC to simulate reactions allows great flexibility in simulating biogeochemical processes. The biogeochemical reaction network is similar to that of CW2D, and is based on the Activated Sludge Model (ASM). Kinetic oxidation of carbon sources and nutrient transformations (nitrogen and phosphorus primarily) are modeled via Monod-type kinetic equations. Oxygen dissolution is accounted for via a first-order mass-transfer equation. While the ASM model only includes a limited number of kinetic equations, the new simulator permits incorporation of an unlimited number of both kinetic and equilibrium reactions. Changes in pH, redox potential and surface reactions can be easily incorporated
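The Monod-type kinetics mentioned in the abstract have a simple saturating form; a minimal sketch (parameter values are illustrative, not from the model):

```python
def monod_rate(mu_max, Ks, S):
    """Monod-type specific rate: mu = mu_max * S / (Ks + S).
    Saturates at mu_max for S >> Ks; half-maximal at S == Ks."""
    return mu_max * S / (Ks + S)

# at S == Ks the rate is exactly half of mu_max
half = monod_rate(mu_max=6.0, Ks=20.0, S=20.0)  # -> 3.0
```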
Dynamic Optimization Design of Cranes Based on Human–Crane–Rail System Dynamics and Annoyance Rate
Directory of Open Access Journals (Sweden)
Yunsheng Xin
2017-01-01
Full Text Available The operators of overhead traveling cranes experience discomfort as a result of the vibrations of crane structures. These vibrations are produced by defects in the rails on which the cranes move. To improve the comfort of operators, a nine-degree-of-freedom (nine-DOF) mathematical model of a “human–crane–rail” system was constructed. Based on the theoretical guidance provided in ISO 2631-1, an annoyance rate model was established, and quantization results were determined. A dynamic optimization design method for overhead traveling cranes is proposed. A particle swarm optimization (PSO) algorithm was used to optimize the crane structural design, with the structure parameters as the basic variables, the annoyance rate model as the objective function, and the acceleration amplitude and displacement amplitude of the crane as the constraint conditions. The proposed model and method were used to optimize the design of a double-girder 100 t–28.5 m casting crane, and the optimal parameters are obtained. The results show that optimization decreases the human annoyance rate from 28.3% to 9.8% and the root mean square of the weighted acceleration of human vibration from 0.59 m/s² to 0.38 m/s². These results demonstrate the effectiveness and practical applicability of the models and method proposed in this paper.
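The PSO step can be illustrated with a minimal swarm minimizing a toy objective standing in for the annoyance-rate model. This is a generic textbook PSO sketch, not the paper's exact setup; the inertia and attraction coefficients are common default assumptions:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal PSO: each particle keeps a personal best, the swarm keeps
    a global best; velocities combine inertia with cognitive (personal)
    and social (global) pulls. Positions are clipped to the bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], bounds[d][0]), bounds[d][1])
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return G, gbest

# toy quadratic objective with minimum at (1, -2)
sol, val = pso_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```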
Credit Rating via Dynamic Slack-Based Measure and Its Optimal Investment Strategy
Directory of Open Access Journals (Sweden)
A. Delavarkhalafi
2015-01-01
Full Text Available In this paper we assess the credit rating of firms that have applied for a loan. To this end we introduce a model, named Dynamic Slack-Based Measure (DSBM), for measuring the credit rating of applicant companies. Selecting financial ratios that best represent the financial state of a company is one of the most challenging parts of any credit rating analysis. Since ranking first requires identifying the appropriate variables, we introduce five financial variables on which to base the ranking, and use them to assess the performance of these firms. We then introduce the dynamic SBM model and its theorems, and discuss the overall structure of DSBM, followed by the implementation and simulation of the model. After that, we propose a stochastic controlled dynamic system model to express the optimal strategy. Banks expect that companies selected with the DSBM model act in accordance with this strategy. This stochastic dynamic system is derived from the balance sheets of the firms applying for a loan. Finally, we evaluate the performance of the system and the strategy problem.
Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison
Energy Technology Data Exchange (ETDEWEB)
Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no [Department of Chemistry, Norwegian University of Science and Technology (NTNU), Høgskoleringen 5, Realfagbygget D3-117 7491 Trondheim (Norway)
2015-06-21
Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.
A review of air exchange rate models for air pollution exposure assessments.
Breen, Michael S; Schultz, Bradley D; Sohn, Michael D; Long, Thomas; Langstaff, John; Williams, Ronald; Isaacs, Kristin; Meng, Qing Yu; Stallings, Casson; Smith, Luther
2014-11-01
A critical aspect of air pollution exposure assessments is estimation of the air exchange rate (AER) for various buildings where people spend their time. The AER, which is the rate of exchange of indoor air with outdoor air, is an important determinant for entry of outdoor air pollutants and for removal of indoor-emitted air pollutants. This paper presents an overview and critical analysis of the scientific literature on empirical and physically based AER models for residential and commercial buildings; the models highlighted here are feasible for exposure assessments as extensive inputs are not required. Models are included for the three types of airflows that can occur across building envelopes: leakage, natural ventilation, and mechanical ventilation. Guidance is provided to select the preferable AER model based on available data, desired temporal resolution, types of airflows, and types of buildings included in the exposure assessment. For exposure assessments with some limited building leakage or AER measurements, strategies are described to reduce AER model uncertainty. This review will facilitate the selection of AER models in support of air pollution exposure assessments.
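One of the simplest leakage models covered by such reviews is an LBL-style infiltration model, in which airflow through the building envelope combines a stack (temperature-difference) term and a wind term. The sketch below is an assumption-laden illustration of that form; the coefficient values and function name are placeholders, not taken from the review:

```python
import math

def infiltration_aer(leakage_area_m2, volume_m3, dT_K, wind_ms,
                     Cs=0.000145, Cw=0.000104):
    """LBL-style leakage sketch: airflow Q = A_leak * sqrt(Cs*|dT| + Cw*U^2)
    in m^3/s, normalized by building volume to give an air exchange rate
    in air changes per hour. Cs and Cw are illustrative stack and wind
    coefficients (assumed values)."""
    q = leakage_area_m2 * math.sqrt(Cs * abs(dT_K) + Cw * wind_ms ** 2)
    return q / volume_m3 * 3600.0

# example: 0.05 m^2 effective leakage area, 300 m^3 house, 20 K indoor-outdoor
# temperature difference, 4 m/s wind
ach = infiltration_aer(leakage_area_m2=0.05, volume_m3=300.0,
                       dT_K=20.0, wind_ms=4.0)
```

As expected from the model form, the predicted AER grows with both the temperature difference and the wind speed.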
Availability analysis of subsea blowout preventer using Markov model considering demand rate
Directory of Open Access Journals (Sweden)
Sunghee Kim
2014-12-01
Full Text Available Availabilities of subsea Blowout Preventers (BOP in the Gulf of Mexico Outer Continental Shelf (GoM OCS is investigated using a Markov method. An updated β factor model by SINTEF is used for common-cause failures in multiple redundant systems. Coefficient values of failure rates for the Markov model are derived using the β factor model of the PDS (reliability of computer-based safety systems, Norwegian acronym method. The blind shear ram preventer system of the subsea BOP components considers a demand rate to reflect reality more. Markov models considering the demand rate for one or two components are introduced. Two data sets are compared at the GoM OCS. The results show that three or four pipe ram preventers give similar availabilities, but redundant blind shear ram preventers or annular preventers enhance the availability of the subsea BOP. Also control systems (PODs and connectors are contributable components to improve the availability of the subsea BOPs based on sensitivity analysis.
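As a toy illustration of the Markov availability idea (much simpler than the paper's model: a single repairable up/down component plus naive redundancy, with the common-cause beta-factor adjustment deliberately omitted):

```python
def steady_state_availability(failure_rate, repair_rate):
    """Two-state Markov model of a repairable component:
    steady-state availability A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def parallel_availability(avails):
    """Availability of a 1-out-of-n redundant set, assuming independent
    components (no common-cause failures, unlike the beta-factor model
    in the abstract)."""
    unavail = 1.0
    for a in avails:
        unavail *= (1.0 - a)
    return 1.0 - unavail

a_single = steady_state_availability(failure_rate=1e-4, repair_rate=1e-2)
a_dual = parallel_availability([a_single, a_single])  # redundancy helps
```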
SLS Model Based Design: A Navigation Perspective
Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin
2018-01-01
The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.
Particle-based model for skiing traffic.
Holleczek, Thomas; Tröster, Gerhard
2012-05-01
We develop and investigate a particle-based model for ski slope traffic. Skiers are modeled as particles with a mass that are exposed to social and physical forces, which define the riding behavior of skiers during their descents on ski slopes. We also report position and speed data of 21 skiers recorded with GPS-equipped cell phones on two ski slopes. A comparison of these data with the trajectories resulting from computer simulations of our model shows a good correspondence. A study of the relationship among the density, speed, and flow of skiers reveals that congestion does not occur even with arrival rates of skiers exceeding the maximum ski lift capacity. In a sensitivity analysis, we identify the kinetic friction coefficient of skis on snow, the skier mass, the range of repelling social forces, and the arrival rate of skiers as the crucial parameters influencing the simulation results. Our model allows for the prediction of speed zones and skier densities on ski slopes, which is important in the prevention of skiing accidents.
Comparison of various models on cancer rate and forecasting ...
African Journals Online (AJOL)
ADOWIE PERE
model and the quadratic trend model and the results of the work compared. Data collected ... Keywords: Cancer, Tumor, Leukemia, Linear Regression, Mean Percentage Error. Cancer is a .... by a simple mathematical method. The quadratic ...
Radiocarbon Based Ages and Growth Rates: Hawaiian Deep Sea Corals
Energy Technology Data Exchange (ETDEWEB)
Roark, E B; Guilderson, T P; Dunbar, R B; Ingram, B L
2006-01-13
The radial growth rates and ages of three different groups of Hawaiian deep-sea 'corals' were determined using radiocarbon measurements. Specimens of Corallium secundum, Gerardia sp., and Leiopathes glaberrima were collected from 450 ± 40 m at the Makapuu deep-sea coral bed using a submersible (PISCES V). Specimens of Antipathes dichotoma were collected at 50 m off Lahaina, Maui. The primary source of carbon to the calcitic C. secundum skeleton is in situ dissolved inorganic carbon (DIC). Using bomb ¹⁴C time markers we calculate radial growth rates of ~170 µm y⁻¹ and ages of 68-75 years on specimens as tall as 28 cm of C. secundum. Gerardia sp., A. dichotoma, and L. glaberrima have proteinaceous skeletons, and labile particulate organic carbon (POC) is their primary source of architectural carbon. Using ¹⁴C we calculate a radial growth rate of 15 µm y⁻¹ and an age of 807 ± 30 years for a live-collected Gerardia sp., showing that these organisms are extremely long lived. Inner and outer ¹⁴C measurements on four sub-fossil Gerardia spp. samples produce similar growth rate estimates (range 14-45 µm y⁻¹) and ages (range 450-2742 years) as observed for the live-collected sample. Similarly, with a growth rate of < 10 µm y⁻¹ and an age of ~2377 years, L. glaberrima at the Makapuu coral bed is also extremely long lived. In contrast, the shallow-collected A. dichotoma samples yield growth rates ranging from 130 to 1,140 µm y⁻¹. These results show that Hawaiian deep-sea corals grow more slowly and are older than previously thought.
empirical model for predicting rate of biogas production
African Journals Online (AJOL)
users
Rate of biogas production using cow manure as substrate was monitored in two laboratory scale ... Biogas is a Gas obtained by anaerobic ... A. A. Adamu, Petroleum and Natural Gas Processing Department, Petroleum Training Institute, P.M.B..
A simplified 137Cs transport model for estimating erosion rates in undisturbed soil
International Nuclear Information System (INIS)
Zhang Xinbao; Long Yi; He Xiubin; Fu Jiexiong; Zhang Yunqi
2008-01-01
137Cs is an artificial radionuclide with a half-life of 30.12 years, which was released into the environment as a result of atmospheric testing of thermonuclear weapons, primarily during the 1950s-1970s, with the maximum rate of 137Cs fallout from the atmosphere in 1963. 137Cs fallout is strongly and rapidly adsorbed by fine particles in the surface horizons of the soil when it falls on the ground, mostly with precipitation. Its subsequent redistribution is associated with movements of the soil or sediment particles. The 137Cs nuclide tracing technique has been used for assessment of soil losses for both undisturbed and cultivated soils. For undisturbed soils, a simple profile-shape model was developed in 1990 to describe the 137Cs depth distribution in the profile, where the maximum 137Cs occurs in the surface horizon and it exponentially decreases with depth. The model implied that the total 137Cs fallout amount was deposited on the earth surface in 1963 and that the 137Cs profile shape has not changed with time. The model has been widely used for assessment of soil losses on undisturbed land. However, temporal variations of the 137Cs depth distribution in undisturbed soils after its deposition on the ground, due to downward transport processes, are not considered in the simple profile-shape model. Thus, soil losses are overestimated by the model. On the basis of the erosion assessment model developed by Walling, D.E., He, Q. [1999. Improved models for estimating soil erosion rates from cesium-137 measurements. Journal of Environmental Quality 28, 611-622], we discuss the 137Cs transport process in the eroded soil profile and make some simplifications to the model, developing a method to estimate the soil erosion rate more expediently. To compare the soil erosion rates calculated by the simple profile-shape model and the simple transport model, the soil losses related to different 137Cs loss proportions of the reference inventory at the Kaixian site of the
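The simple profile-shape idea can be made concrete: with an exponential 137Cs depth profile C(x) ∝ exp(-x/h0), the inventory remaining after erosion of depth h is A = A_ref · exp(-h/h0), so h = h0 · ln(A_ref/A). A minimal sketch under that assumption (values are illustrative, not from the paper):

```python
import math

def erosion_depth_from_cs137(A_ref, A_meas, h0):
    """Simple profile-shape model sketch: invert the exponential-profile
    relation A = A_ref * exp(-h / h0) for the eroded depth h.
    A_ref: reference 137Cs inventory; A_meas: measured inventory at the
    eroded site; h0: profile shape (relaxation) depth, same length unit
    as the returned depth."""
    return h0 * math.log(A_ref / A_meas)

# a 20% inventory loss with h0 = 4 cm implies roughly 0.9 cm of erosion
h = erosion_depth_from_cs137(A_ref=2500.0, A_meas=2000.0, h0=4.0)
```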
Directory of Open Access Journals (Sweden)
B. Verheggen
2006-01-01
Full Text Available Classical nucleation theory is unable to explain the ubiquity of nucleation events observed in the atmosphere. This shows a need for an empirical determination of the nucleation rate. Here we present a novel inverse modeling procedure to determine particle nucleation and growth rates based on consecutive measurements of the aerosol size distribution. The particle growth rate is determined by regression analysis of the measured change in the aerosol size distribution over time, taking into account the effects of processes such as coagulation, deposition and/or dilution. This allows the growth rate to be determined with a higher time-resolution than can be deduced from inspecting contour plots ("banana plots"). Knowing the growth rate as a function of time enables the evaluation of the time of nucleation of measured particles of a certain size. The nucleation rate is then obtained by integrating the particle losses from time of measurement to time of nucleation. The regression analysis can also be used to determine or verify the optimum value of other parameters of interest, such as the wall loss or coagulation rate constants. As an example, the method is applied to smog chamber measurements. This program offers a powerful interpretive tool to study empirical aerosol population dynamics in general, and nucleation and growth in particular.
Matching of experimental and statistical-model thermonuclear reaction rates at high temperatures
International Nuclear Information System (INIS)
Newton, J. R.; Longland, R.; Iliadis, C.
2008-01-01
We address the problem of extrapolating experimental thermonuclear reaction rates toward high stellar temperatures (T>1 GK) by using statistical model (Hauser-Feshbach) results. Reliable reaction rates at such temperatures are required for studies of advanced stellar burning stages, supernovae, and x-ray bursts. Generally accepted methods are based on the concept of a Gamow peak. We follow recent ideas that emphasized the fundamental shortcomings of the Gamow peak concept for narrow resonances at high stellar temperatures. Our new method defines the effective thermonuclear energy range (ETER) by using the 8th, 50th, and 92nd percentiles of the cumulative distribution of fractional resonant reaction rate contributions. This definition is unambiguous and has a straightforward probability interpretation. The ETER is used to define a temperature at which Hauser-Feshbach rates can be matched to experimental rates. This matching temperature is usually much higher compared to previous estimates that employed the Gamow peak concept. We suggest that an increased matching temperature provides more reliable extrapolated reaction rates since Hauser-Feshbach results are more trustworthy the higher the temperature. Our ideas are applied to 21 (p,γ), (p,α), and (α,γ) reactions on A=20-40 target nuclei. For many of the cases studied here, our extrapolated reaction rates at high temperatures differ significantly from those obtained using the Gamow peak concept
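The ETER percentile definition can be sketched directly: given fractional resonant contributions at each resonance energy, find the energies where the cumulative contribution first crosses the 8th, 50th, and 92nd percentiles. The numbers below are made up for illustration:

```python
def eter_bounds(energies, contributions, targets=(0.08, 0.50, 0.92)):
    """Locate the resonance energies at which the cumulative fractional
    reaction-rate contribution first reaches each target percentile.
    Returns [E_8%, E_50%, E_92%] for the default targets."""
    pairs = sorted(zip(energies, contributions))
    total = sum(c for _, c in pairs)
    cum, result = 0.0, []
    t = iter(targets)
    want = next(t, None)
    for e, c in pairs:
        cum += c / total
        while want is not None and cum >= want:
            result.append(e)
            want = next(t, None)
    return result

# illustrative resonance energies (MeV) and fractional contributions
bounds = eter_bounds([0.1, 0.3, 0.5, 0.8, 1.2],
                     [0.05, 0.25, 0.40, 0.25, 0.05])
```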
Improved picture rate conversion using classification based LMS-filters.
An, L.; Heinrich, A.; Cordes, C.N.; Haan, de G.; Rabbani, Majid
2009-01-01
Due to the recent explosion of multimedia formats and the need to convert between them, more attention is drawn to picture rate conversion. Moreover, growing demands on video motion portrayal without judder or blur requires improved format conversion. The simplest conversion repeats the latest
Thermodynamically based constraints for rate coefficients of large biochemical networks.
Vlad, Marcel O; Ross, John
2009-01-01
Wegscheider cyclicity conditions are relationships among the rate coefficients of a complex reaction network, which ensure the compatibility of kinetic equations with the conditions for thermodynamic equilibrium. The detailed balance at equilibrium, that is the equilibration of forward and backward rates for each elementary reaction, leads to compatibility between the conditions of kinetic and thermodynamic equilibrium. Therefore, Wegscheider cyclicity conditions can be derived by eliminating the equilibrium concentrations from the conditions of detailed balance. We develop matrix algebra tools needed to carry out this elimination, reexamine an old derivation of the general form of Wegscheider cyclicity condition, and develop new derivations which lead to more compact and easier-to-use formulas. We derive scaling laws for the nonequilibrium rates of a complex reaction network, which include Wegscheider conditions as a particular case. The scaling laws for the rates are used for clarifying the kinetic and thermodynamic meaning of Wegscheider cyclicity conditions. Finally, we discuss different ways of using Wegscheider cyclicity conditions for kinetic computations in systems biology.
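For a single closed reaction cycle, the Wegscheider condition reduces to a simple product rule: the product of forward rate coefficients around the cycle must equal the product of backward ones, otherwise detailed balance at equilibrium is impossible. A minimal check of this special case (the general network conditions in the abstract are more involved):

```python
import math

def satisfies_wegscheider(k_forward, k_backward, tol=1e-9):
    """Check prod(k+_i) == prod(k-_i) for one reaction cycle, within a
    relative tolerance."""
    pf = math.prod(k_forward)
    pb = math.prod(k_backward)
    return abs(pf - pb) <= tol * max(pf, pb)

ok = satisfies_wegscheider([2.0, 3.0, 0.5], [1.0, 1.5, 2.0])   # 3.0 == 3.0
bad = satisfies_wegscheider([2.0, 3.0, 0.5], [1.0, 1.5, 1.0])  # 3.0 != 1.5
```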
Biasing transition rate method based on direct MC simulation for probabilistic safety assessment
Institute of Scientific and Technical Information of China (English)
Xiao-Lei Pan; Jia-Qun Wang; Run Yuan; Fang Wang; Han-Qing Lin; Li-Qin Hu; Jin Wang
2017-01-01
Direct Monte Carlo (MC) simulation is a powerful probabilistic safety assessment method for accounting for the dynamics of a system, but it is not efficient at simulating rare events. A biasing transition rate method based on direct MC simulation is proposed in this paper to solve this problem. The method biases the transition rates of the components by adding virtual components to them in series, increasing the occurrence probability of the rare event and hence decreasing the variance of the MC estimator. Several cases are used to benchmark this method. The results show that the method is effective at modeling system failure and is more efficient at collecting evidence of rare events than direct MC simulation; performance is greatly improved by the biasing transition rate method.
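The underlying idea, sampling transitions from an inflated rate and reweighting with the likelihood ratio so the estimator stays unbiased, can be shown on the simplest possible rare event: failure of one exponential component within a mission time. This is a generic importance-sampling sketch, not the paper's virtual-component construction:

```python
import math
import random

def rare_failure_prob(rate, t_mission, bias_rate, n=100_000, seed=1):
    """Estimate P(T <= t_mission) for T ~ Exp(rate) by sampling from the
    biased distribution Exp(bias_rate) and reweighting each hit with the
    likelihood ratio of the true to the biased density."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        t = rng.expovariate(bias_rate)          # biased (more frequent) failure
        if t <= t_mission:                      # the rare event of interest
            w = (rate * math.exp(-rate * t)) / (bias_rate * math.exp(-bias_rate * t))
            acc += w
    return acc / n

# true rate 1e-5 /h over a 100 h mission; biased rate 1e-2 /h
p_hat = rare_failure_prob(rate=1e-5, t_mission=100.0, bias_rate=1e-2)
# exact value for comparison: 1 - exp(-1e-5 * 100) ≈ 9.995e-4
```

With the unbiased rate, only about 1 in 1000 samples would hit the event; with the biased rate most samples contribute, which is exactly the variance reduction the abstract describes.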
Ads' click-through rates predicting based on gated recurrent unit neural networks
Chen, Qiaohong; Guo, Zixuan; Dong, Wen; Jin, Lingzi
2018-05-01
In order to improve the effectiveness of online advertising and increase advertising revenue, a gated recurrent unit (GRU) neural network model is used to predict ads' click-through rates (CTR). Exploiting the characteristics of the gated unit structure and the time-sequential nature of the data, the model is trained with the BPTT algorithm. Furthermore, by optimizing the step-length algorithm of the gated recurrent unit network, the model reaches the optimum better and faster, in fewer iterations. The experimental results show that the model based on gated recurrent unit neural networks, with its optimized step-length algorithm, performs better at CTR prediction, helping advertisers, media and audience achieve a win-win, mutually beneficial situation in the three-sided game.
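A single GRU step uses the standard update-gate/reset-gate equations; a minimal forward pass (the standard textbook cell, not the paper's exact network, with random weights purely for illustration):

```python
import numpy as np

def gru_cell(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state
    h_tilde; the new state interpolates between h and h_tilde."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h + bz)
    r = sig(Wr @ x + Ur @ h + br)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
# nine parameter tensors: (W, U, b) for each of the z, r, h_tilde branches
params = [rng.standard_normal(s) * 0.1 for s in
          [(d_h, d_in), (d_h, d_h), (d_h,)] * 3]
h = np.zeros(d_h)
for _ in range(5):        # run a short feature sequence through the cell
    h = gru_cell(rng.standard_normal(d_in), h, params)
```

Because the candidate state is tanh-bounded and the new state is a convex combination, the hidden state stays in (-1, 1).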
Analytical Modeling of the High Strain Rate Deformation of Polymer Matrix Composites
Goldberg, Robert K.; Roberts, Gary D.; Gilat, Amos
2003-01-01
The results presented here are part of an ongoing research program to develop strain rate dependent deformation and failure models for the analysis of polymer matrix composites subject to high strain rate impact loads. State variable constitutive equations originally developed for metals have been modified in order to model the nonlinear, strain rate dependent deformation of polymeric matrix materials. To account for the effects of hydrostatic stresses, which are significant in polymers, the classical J2 plasticity theory definitions of effective stress and effective plastic strain are modified by applying variations of the Drucker-Prager yield criterion. To verify the revised formulation, the shear and tensile deformation of a representative toughened epoxy is analyzed across a wide range of strain rates (from quasi-static to high strain rates) and the results are compared to experimentally obtained values. For the analyzed polymers, both the tensile and shear stress-strain curves computed using the analytical model correlate well with values obtained through experimental tests. The polymer constitutive equations are implemented within a strength of materials based micromechanics method to predict the nonlinear, strain rate dependent deformation of polymer matrix composites. In the micromechanics, the unit cell is divided up into a number of independently analyzed slices, and laminate theory is then applied to obtain the effective deformation of the unit cell. The composite mechanics are verified by analyzing the deformation of a representative polymer matrix composite (composed using the representative polymer analyzed for the correlation of the polymer constitutive equations) for several fiber orientation angles across a variety of strain rates. The computed values compare favorably to experimentally obtained results.
Issues in practical model-based diagnosis
Bakker, R.R.; van den Bempt, P.C.A.; Mars, Nicolaas; Out, D.-J.; van Soest, D.C.
1993-01-01
The model-based diagnosis project at the University of Twente has been directed at improving the practical usefulness of model-based diagnosis. In cooperation with industrial partners, the research addressed the modeling problem and the efficiency problem in model-based reasoning. Main results of
Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.
Habershon, Scott
2016-04-12
In a recent article [J. Chem. Phys. 2015, 143, 094106], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enable determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need for either presupposing a mechanism or making steady-state approximations in kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles.
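The pipeline sketched in this abstract — free energies converted to transition-state-theory rate constants, which then drive a kinetic network simulated in time — can be hedged as follows. The Eyring form of the rate constant and a first-order explicit-Euler propagation are textbook-standard; the species names, barrier value, and step sizes below are illustrative and are not taken from the hydroformylation study.

```python
import math

def eyring_rate(dg_act, temperature=298.15):
    """Transition-state-theory rate constant k = (kB*T/h) * exp(-dG‡/(R*T)).
    dg_act is the activation free energy in J/mol; transmission coefficient = 1."""
    kB, h, R = 1.380649e-23, 6.62607015e-34, 8.314462618
    return (kB * temperature / h) * math.exp(-dg_act / (R * temperature))

def propagate(conc, reactions, dt, steps):
    """Explicit-Euler integration of a first-order kinetic network.
    `reactions` is a list of (reactant, product, rate_constant) tuples;
    `conc` maps species names to initial concentrations."""
    c = dict(conc)
    for _ in range(steps):
        dc = {s: 0.0 for s in c}
        for reactant, product, k in reactions:
            flux = k * c[reactant]
            dc[reactant] -= flux
            dc[product] += flux
        for s in c:
            c[s] += dt * dc[s]
    return c
```

A production kinetic model would use a stiff ODE integrator rather than explicit Euler, but the structure — rate constants from free energies feeding a species balance — is the same.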
Fuzzy portfolio model with fuzzy-input return rates and fuzzy-output proportions
Tsaur, Ruey-Chyn
2015-02-01
In the finance market, a short-term investment strategy is usually applied in portfolio selection in order to reduce investment risk; however, the economy is uncertain and the investment period is short. Further, an investor has incomplete information for selecting a portfolio with crisp proportions for each chosen security. In this paper we present a new method of constructing a fuzzy portfolio model for the parameters of fuzzy-input return rates and fuzzy-output proportions, based on possibilistic mean-standard deviation models. Furthermore, we consider both excess and shortage of investment in different economic periods by using a fuzzy constraint for the sum of the fuzzy proportions, and we also account for the risks of securities investment and the vagueness of incomplete information during periods of economic depression in the portfolio selection. Finally, we present a numerical example of a portfolio selection problem to illustrate the proposed model, and a sensitivity analysis is performed based on the results.
General extrapolation model for an important chemical dose-rate effect
International Nuclear Information System (INIS)
Gillen, K.T.; Clough, R.L.
1984-12-01
In order to extrapolate material accelerated aging data, methodologies must be developed based on sufficient understanding of the processes leading to material degradation. One of the most important mechanisms leading to chemical dose-rate effects in polymers involves the breakdown of intermediate hydroperoxide species. A general model for this mechanism is derived based on the underlying chemical steps. The results lead to a general formalism for understanding dose rate and sequential aging effects when hydroperoxide breakdown is important. We apply the model to combined radiation/temperature aging data for a PVC material and show that this data is consistent with the model and that model extrapolations are in excellent agreement with 12-year real-time aging results from an actual nuclear plant. This model and other techniques discussed in this report can aid in the selection of appropriate accelerated aging methods and can also be used to compare and select materials for use in safety-related components. This will result in increased assurance that equipment qualification procedures are adequate
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets.
Chen, Jie-Hao; Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong
2017-01-01
In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear model or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationship between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines together the powerful data representation and feature extraction capability of Deep Belief Nets, with the advantage of simplicity of traditional Logistic Regression models. Based on the training dataset with the information of over 40 million mobile advertisements during a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and Support Vector Regression (SVR) model by 5.80%.
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets
Directory of Open Access Journals (Sweden)
Jie-Hao Chen
2017-01-01
Full Text Available In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear model or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationship between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines together the powerful data representation and feature extraction capability of Deep Belief Nets, with the advantage of simplicity of traditional Logistic Regression models. Based on the training dataset with the information of over 40 million mobile advertisements during a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and Support Vector Regression (SVR) model by 5.80%.
A New Approach for Mobile Advertising Click-Through Rate Estimation Based on Deep Belief Nets
Zhao, Zi-Qian; Shi, Ji-Yun; Zhao, Chong
2017-01-01
In recent years, with the rapid development of mobile Internet and its business applications, mobile advertising Click-Through Rate (CTR) estimation has become a hot research direction in the field of computational advertising, which is used to achieve accurate advertisement delivery for the best benefits in the three-side game between media, advertisers, and audiences. Current research on the estimation of CTR mainly uses the methods and models of machine learning, such as linear model or recommendation algorithms. However, most of these methods are insufficient to extract the data features and cannot reflect the nonlinear relationship between different features. In order to solve these problems, we propose a new model based on Deep Belief Nets to predict the CTR of mobile advertising, which combines together the powerful data representation and feature extraction capability of Deep Belief Nets, with the advantage of simplicity of traditional Logistic Regression models. Based on the training dataset with the information of over 40 million mobile advertisements during a period of 10 days, our experiments show that our new model has better estimation accuracy than the classic Logistic Regression (LR) model by 5.57% and Support Vector Regression (SVR) model by 5.80%. PMID:29209363
Directory of Open Access Journals (Sweden)
K. A. Halim
2011-01-01
Full Text Available In this article, we consider a single-unit unreliable production system which produces a single item. During a production run, the production process may shift from the in-control state to the out-of-control state at any random time when it produces some defective items. The defective item production rate is assumed to be imprecise and is characterized by a trapezoidal fuzzy number. The production rate is proportional to the demand rate where the proportionality constant is taken to be a fuzzy number. Two production planning models are developed on the basis of fuzzy and stochastic demand patterns. The expected cost per unit time in the fuzzy sense is derived in each model and defuzzified by using the graded mean integration representation method. Numerical examples are provided to illustrate the optimal results of the proposed fuzzy models.
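The graded mean integration representation used above for defuzzification has a standard closed form for a trapezoidal fuzzy number (a, b, c, d); the numeric values in the test are purely illustrative.

```python
def gmir_trapezoidal(a, b, c, d):
    """Graded mean integration representation of a trapezoidal fuzzy number
    (a, b, c, d) with a <= b <= c <= d:  P = (a + 2b + 2c + d) / 6.
    A triangular fuzzy number is the special case b == c."""
    return (a + 2.0 * b + 2.0 * c + d) / 6.0
```

This crisp value is what a cost expression containing fuzzy parameters would be reduced to before numerical optimization.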
Directory of Open Access Journals (Sweden)
Singh Chaman
2011-01-01
Full Text Available In the changing market scenario, supply chain management is gaining phenomenal importance amongst researchers. Studies on supply chain management have emphasized the importance of a long-term strategic relationship between the manufacturer, distributor and retailer. In the present paper, a model has been developed by assuming the demand rate and production rate to be triangular fuzzy numbers and that items deteriorate at a constant rate. The expressions for the average inventory cost are obtained both in the crisp and fuzzy sense. The fuzzy model is defuzzified using the fuzzy extension principle, and its optimization with respect to the decision variable is also carried out. Finally, an example is given to illustrate the model and a sensitivity analysis is performed to study the effect of the parameters.
Modarres, Reza; Ouarda, Taha B. M. J.; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre
2014-07-01
Changes in extreme meteorological variables and the demographic shift towards an older population have made it important to investigate the association of climate variables and hip fracture by advanced methods in order to determine the climate variables that most affect hip fracture incidence. The nonlinear autoregressive moving average with exogenous variable-generalized autoregressive conditional heteroscedasticity (ARMAX-GARCH) and multivariate GARCH (MGARCH) time series approaches were applied to investigate the nonlinear association between hip fracture rate in female and male patients aged 40-74 and 75+ years and climate variables in the period of 1993-2004, in Montreal, Canada. The models describe 50-56 % of daily variation in hip fracture rate and identify snow depth, air temperature, day length and air pressure as the influencing variables on the time-varying mean and variance of the hip fracture rate. The conditional covariance between climate variables and hip fracture rate is increasing exponentially, showing that the effect of climate variables on hip fracture rate is most acute when rates are high and climate conditions are at their worst. In Montreal, climate variables, particularly snow depth and air temperature, appear to be important predictors of hip fracture incidence. The association of climate variables and hip fracture does not seem to change linearly with time, but increases exponentially under harsh climate conditions. The results of this study can be used to provide an adaptive climate-related public health program and to guide allocation of services for avoiding hip fracture risk.
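The conditional-variance machinery underlying these models can be sketched for the simplest univariate GARCH(1,1) case; the ARMAX-GARCH and MGARCH specifications in the study add exogenous climate regressors and cross-covariances that are omitted here, and the parameter values below are illustrative.

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path of a GARCH(1,1) model:
        sigma2[t] = omega + alpha * returns[t-1]**2 + beta * sigma2[t-1],
    initialized at the unconditional variance omega / (1 - alpha - beta).
    Requires alpha + beta < 1 for stationarity."""
    sigma2 = [omega / (1.0 - alpha - beta)]
    for t in range(1, len(returns) + 1):
        sigma2.append(omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1])
    return sigma2
```

In the hip-fracture application, the same recursion lets the variance of the daily rate respond to recent shocks, which is how "effects are most acute when conditions are worst" shows up in the model.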
Constraints based analysis of extended cybernetic models.
Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M
2015-11-01
The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
INDIVIDUAL BASED MODELLING APPROACH TO THERMAL ...
Diadromous fish populations in the Pacific Northwest face challenges along their migratory routes from declining habitat quality, harvest, and barriers to longitudinal connectivity. Changes in river temperature regimes are producing an additional challenge for upstream migrating adult salmon and steelhead, species that are sensitive to absolute and cumulative thermal exposure. Adult salmon populations have been shown to utilize cold water patches along migration routes when mainstem river temperatures exceed thermal optimums. We are employing an individual based model (IBM) to explore the costs and benefits of spatially-distributed cold water refugia for adult migrating salmon. Our model, developed in the HexSim platform, is built around a mechanistic behavioral decision tree that drives individual interactions with their spatially explicit simulated environment. Population-scale responses to dynamic thermal regimes, coupled with other stressors such as disease and harvest, become emergent properties of the spatial IBM. Other model outputs include arrival times, species-specific survival rates, body energetic content, and reproductive fitness levels. Here, we discuss the challenges associated with parameterizing an individual based model of salmon and steelhead in a section of the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Molecular model for annihilation rates in positron complexes
Energy Technology Data Exchange (ETDEWEB)
Assafrao, Denise [Laboratorio de Atomos e Moleculas Especiais, Departamento de Fisica, ICEx, Universidade Federal de Minas Gerais, P.O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom); Walters, H.R. James [Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom); Mohallem, Jose R. [Laboratorio de Atomos e Moleculas Especiais, Departamento de Fisica, ICEx, Universidade Federal de Minas Gerais, P.O. Box 702, 30123-970 Belo Horizonte, MG (Brazil); Department of Applied Mathematics and Theoretical Physics, Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland (United Kingdom)], E-mail: rachid@fisica.ufmg.br
2008-02-15
The molecular approach for positron interaction with atoms is developed further. Potential energy curves for positron motion are obtained. Two procedures accounting for the nonadiabatic effective positron mass are introduced for calculating annihilation rate constants. The first one takes the bound-state energy eigenvalue as an input parameter. The second is a self-contained and self-consistent procedure. The methods are tested with quite different states of the small complexes HPs, e+He (electronic triplet) and e+Be (electronic singlet and triplet). For states yielding the positronium cluster, the annihilation rates are quite stable, irrespective of the accuracy in binding energies. For the e+Be states, annihilation rates are larger and more consistent with qualitative predictions than previously reported ones.
Molecular model for annihilation rates in positron complexes
International Nuclear Information System (INIS)
Assafrao, Denise; Walters, H.R. James; Mohallem, Jose R.
2008-01-01
The molecular approach for positron interaction with atoms is developed further. Potential energy curves for positron motion are obtained. Two procedures accounting for the nonadiabatic effective positron mass are introduced for calculating annihilation rate constants. The first one takes the bound-state energy eigenvalue as an input parameter. The second is a self-contained and self-consistent procedure. The methods are tested with quite different states of the small complexes HPs, e + He (electronic triplet) and e + Be (electronic singlet and triplet). For states yielding the positronium cluster, the annihilation rates are quite stable, irrespective of the accuracy in binding energies. For the e + Be states, annihilation rates are larger and more consistent with qualitative predictions than previously reported ones
Evaluation of Stress Parameters Based on Heart Rate Variability Measurements
Uysal, Fatma; Tokmakçı, Mahmut
2018-01-01
In this study, heart rate variability measurements and analysis were carried out with the help of ECG recordings to show how autonomic nervous system activity changes. To evaluate the stress-related parameters of the study, a relaxation condition, the Stroop color/word test, a mental test, and a stress-inducing auditory stimulus were applied to six volunteer participants in a laboratory environment. A total of seven minutes of ECG recording was taken and analyses were made in the time and frequency d...
Enhancement of leak rate estimation model for corroded cracked thin tubes
International Nuclear Information System (INIS)
Chang, Y.S.; Jeong, J.U.; Kim, Y.J.; Hwang, S.S.; Kim, H.P.
2010-01-01
During the last couple of decades, lots of researches on structural integrity assessment and leak rate estimation have been carried out to prevent unanticipated catastrophic failures of pressure retaining nuclear components. However, from the standpoint of leakage integrity, there are still some arguments for predicting the leak rate of cracked components due primarily to uncertainties attached to various parameters in flow models. The purpose of the present work is to suggest a leak rate estimation method for thin tubes with artificial cracks. In this context, 23 leak rate tests are carried out for laboratory generated stress corrosion cracked tube specimens subjected to internal pressure. Engineering equations to calculate crack opening displacements are developed from detailed three-dimensional elastic-plastic finite element analyses, and then a simplified practical model is proposed based on the equations as well as the test data. Verification of the proposed method is done by comparing leak rates, which will enable more reliable design and/or operation of thin tubes.
Can Low-Resolution Airborne Laser Scanning Data Be Used to Model Stream Rating Curves?
Directory of Open Access Journals (Sweden)
Steve W. Lyon
2015-03-01
Full Text Available This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge and are thus integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
Can low-resolution airborne laser scanning data be used to model stream rating curves?
Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar
2015-01-01
This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge and are thus integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
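A minimal physics-based rating curve can be built from channel geometry with Manning's equation. The rectangular cross-section, roughness coefficient `n`, and slope below are illustrative assumptions; the study derives geometry from ALS scans rather than assuming a cross-section.

```python
def manning_discharge(depth, width, slope, n):
    """Discharge (m^3/s) for a rectangular channel via Manning's equation:
        Q = (1/n) * A * R^(2/3) * sqrt(S),
    where A = width*depth is the flow area and R = A / (width + 2*depth)
    is the hydraulic radius (wetted perimeter excludes the free surface)."""
    area = width * depth
    radius = area / (width + 2.0 * depth)
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * slope ** 0.5

def rating_curve(depths, width, slope, n):
    """Synthetic stage-discharge pairs: the rating curve for a given geometry."""
    return [(d, manning_discharge(d, width, slope, n)) for d in depths]
```

Evaluating this over a range of stages yields the monotone stage-discharge relation that a gauging station would otherwise establish empirically.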
Infant breathing rate counter based on variable resistor for pneumonia
Sakti, Novi Angga; Hardiyanto, Ardy Dwi; La Febry Andira R., C.; Camelya, Kesa; Widiyanti, Prihartini
2016-03-01
Pneumonia is one of the leading causes of death in newborn babies in Indonesia. According to WHO in 2002, breathing rate is a very important index among the symptoms of pneumonia. In Community Health Centers, nurses count breaths with a stopwatch for exactly one minute. Miscalculations occur in Community Health Centers because of the long concentration time and the need to focus on two objects at once. These calculation errors can cause a baby who should be admitted to the hospital to be attended only at home. Therefore, an accurate breathing rate counter at the Community Health Center level is necessary. In this work, the resistance change of a variable resistor is used as a breathing rate counter. A resistance change in a voltage divider produces a voltage change; if the variable resistance moves periodically, the voltage changes periodically too. The voltage change is counted by software in the microcontroller. Every millimeter of shift at the variable resistor produces an average voltage change of 0.96. The software can count the number of waves generated by the shifting resistor.
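The counting step described above — turning a periodically varying divider voltage into a breathing rate — can be sketched as a threshold-crossing counter. The sampling rate and the synthetic signal in the test are assumptions for illustration; the paper's actual microcontroller firmware is not reproduced here.

```python
def breaths_per_minute(samples, sample_rate_hz, threshold=None):
    """Estimate breathing rate from a sampled voltage signal by counting
    rising crossings of a threshold (default: the signal mean).
    One rising crossing corresponds to one breath cycle."""
    if threshold is None:
        threshold = sum(samples) / len(samples)
    crossings = sum(
        1 for prev, cur in zip(samples, samples[1:])
        if prev < threshold <= cur
    )
    duration_min = len(samples) / sample_rate_hz / 60.0
    return crossings / duration_min
```

A real sensor signal would need smoothing (or hysteresis around the threshold) to reject noise-induced double counts, but the core idea is this single pass over the samples.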
Bounding quantum gate error rate based on reported average fidelity
International Nuclear Information System (INIS)
Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C
2016-01-01
Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
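As this abstract stresses, a reported average fidelity does not translate directly into a worst-case error rate. The simplest commonly used conversion assumes purely depolarizing noise, for which the average fidelity satisfies F_avg = 1 - p(d-1)/d; this is a modeling assumption and is deliberately weaker than the general noise bound derived in the paper.

```python
def depolarizing_error_rate(avg_fidelity, dim):
    """Depolarizing parameter p consistent with a reported average gate
    fidelity, under the assumption that the noise is the depolarizing
    channel E(rho) = (1 - p)*rho + p*I/dim, whose average fidelity is
    F_avg = 1 - p*(dim - 1)/dim.  For general (non-depolarizing) noise
    the true worst-case error rate can be much larger."""
    return dim * (1.0 - avg_fidelity) / (dim - 1.0)
```

For a single qubit (dim = 2), a headline 99.9% average fidelity corresponds to p = 0.002 under this assumption; the gap between such optimistic estimates and rigorous worst-case bounds is precisely the paper's subject.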
What explains usage of mobile physician-rating apps? Results from a web-based questionnaire.
Bidmon, Sonja; Terlutter, Ralf; Röttl, Johanna
2014-06-11
Consumers are increasingly accessing health-related information via mobile devices. Recently, several apps to rate and locate physicians have been released in the United States and Germany. However, knowledge about what kinds of variables explain usage of mobile physician-rating apps is still lacking. This study analyzes factors influencing the adoption of and willingness to pay for mobile physician-rating apps. A structural equation model was developed based on the Technology Acceptance Model and the literature on health-related information searches and usage of mobile apps. Relationships in the model were analyzed for moderating effects of physician-rating website (PRW) usage. A total of 1006 German patients who had visited a general practitioner at least once in the 3 months before the beginning of the survey were randomly selected and surveyed. A total of 958 usable questionnaires were analyzed by partial least squares path modeling and moderator analyses. The suggested model yielded a high model fit. We found that perceived ease of use (PEOU) of the Internet to gain health-related information, the sociodemographic variables age and gender, and the psychographic variables digital literacy, feelings about the Internet and other Web-based applications in general, patients' value of health-related knowledgeability, as well as the information-seeking behavior variables regarding the amount of daily private Internet use for health-related information, frequency of using apps for health-related information in the past, and attitude toward PRWs significantly affected the adoption of mobile physician-rating apps. The sociodemographic variable age, but not gender, and the psychographic variables feelings about the Internet and other Web-based applications in general and patients' value of health-related knowledgeability, but not digital literacy, were significant predictors of willingness to pay. Frequency of using apps for health-related information
What Explains Usage of Mobile Physician-Rating Apps? Results From a Web-Based Questionnaire
Terlutter, Ralf; Röttl, Johanna
2014-01-01
Background Consumers are increasingly accessing health-related information via mobile devices. Recently, several apps to rate and locate physicians have been released in the United States and Germany. However, knowledge about what kinds of variables explain usage of mobile physician-rating apps is still lacking. Objective This study analyzes factors influencing the adoption of and willingness to pay for mobile physician-rating apps. A structural equation model was developed based on the Technology Acceptance Model and the literature on health-related information searches and usage of mobile apps. Relationships in the model were analyzed for moderating effects of physician-rating website (PRW) usage. Methods A total of 1006 randomly selected German patients who had visited a general practitioner at least once in the 3 months before the beginning of the survey were randomly selected and surveyed. A total of 958 usable questionnaires were analyzed by partial least squares path modeling and moderator analyses. Results The suggested model yielded a high model fit. We found that perceived ease of use (PEOU) of the Internet to gain health-related information, the sociodemographic variables age and gender, and the psychographic variables digital literacy, feelings about the Internet and other Web-based applications in general, patients’ value of health-related knowledgeability, as well as the information-seeking behavior variables regarding the amount of daily private Internet use for health-related information, frequency of using apps for health-related information in the past, and attitude toward PRWs significantly affected the adoption of mobile physician-rating apps. The sociodemographic variable age, but not gender, and the psychographic variables feelings about the Internet and other Web-based applications in general and patients’ value of health-related knowledgeability, but not digital literacy, were significant predictors of willingness to pay. Frequency of
Sensor-based interior modeling
International Nuclear Information System (INIS)
Herbert, M.; Hoffman, R.; Johnson, A.; Osborn, J.
1995-01-01
Robots and remote systems will play crucial roles in future decontamination and decommissioning (D&D) of nuclear facilities. Many of these facilities, such as uranium enrichment plants, weapons assembly plants, research and production reactors, and fuel recycling facilities, are dormant; there is also an increasing number of commercial reactors whose useful lifetime is nearly over. To reduce worker exposure to radiation, occupational and other hazards associated with D&D tasks, robots will execute much of the work agenda. Traditional teleoperated systems rely on human understanding (based on information gathered by remote viewing cameras) of the work environment to safely control the remote equipment. However, removing the operator from the work site substantially reduces his efficiency and effectiveness. To approach the productivity of a human worker, tasks will be performed telerobotically, in which many aspects of task execution are delegated to robot controllers and other software. This paper describes a system that semi-automatically builds a virtual world for remote D&D operations by constructing 3-D models of a robot's work environment. Planar and quadric surface representations of objects typically found in nuclear facilities are generated from laser rangefinder data with a minimum of human interaction. The surface representations are then incorporated into a task space model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to instruct the robot at a level much higher than teleoperation
Directory of Open Access Journals (Sweden)
Bang Liu
2018-01-01
Full Text Available In the mHealth field, accurate breathing rate monitoring techniques have benefited a broad array of healthcare-related applications. Many approaches try to use a smartphone or wearable device with a fine-grained monitoring algorithm to accomplish a task that could previously only be done by professional medical equipment. However, such schemes usually perform poorly in comparison to professional medical equipment. In this paper, we propose DeepFilter, a deep learning-based fine-grained breathing rate monitoring algorithm that works on smartphones and achieves professional-level accuracy. DeepFilter is a bidirectional recurrent neural network (RNN) stacked with convolutional layers and speeded up by batch normalization. Moreover, we collect 16.17 GB of breathing sound recordings (248 hours) from 109 volunteers to train our model and from another 10 volunteers to test it. The results show a reasonably good accuracy of breathing rate monitoring.
Modeling Populations of Thermostatic Loads with Switching Rate Actuation
DEFF Research Database (Denmark)
Totu, Luminita Cristiana; Wisniewski, Rafal; Leth, John-Josef
2015-01-01
We model thermostatic devices using a stochastic hybrid description, and introduce an external actuation mechanism that creates random switch events in the discrete dynamics. We then conjecture the form of the Fokker-Planck equation and successfully verify it numerically using Monte Carlo simulations. The actuation mechanism and subsequent modeling result are relevant for power system operation.
Estimation of Sand Production Rate Using Geomechanical and Hydromechanical Models
Directory of Open Access Journals (Sweden)
Son Tung Pham
2017-01-01
Full Text Available This paper aims to develop a numerical model that can be used in sand control during the production phase of an oil and gas well. The model is able to predict not only the onset of sand production, using the critical bottom hole pressure inferred from geomechanical modelling, but also the mass of sand produced versus time and the change of porosity versus space and time using hydromechanical modelling. A detailed workflow of the modelling was presented with each step of the calculations. The empirical parameters were calibrated using laboratory data. The modelling was then applied in a case study of an oilfield in the Cuu Long basin. In addition, a sensitivity study of the effect of drawdown pressure was presented in this paper. Moreover, a comparison between the results of different hydromechanical models was also addressed. The outcome of this paper demonstrated the feasibility of modelling the sand production mass in real cases, opening a new approach to sand control in the petroleum industry.
A 1DVAR-based snowfall rate retrieval algorithm for passive microwave radiometers
Meng, Huan; Dong, Jun; Ferraro, Ralph; Yan, Banghua; Zhao, Limin; Kongoli, Cezar; Wang, Nai-Yu; Zavodsky, Bradley
2017-06-01
Snowfall rate retrieval from spaceborne passive microwave (PMW) radiometers has gained momentum in recent years. PMW can be so utilized because of its ability to sense in-cloud precipitation. A physically based, overland snowfall rate (SFR) algorithm has been developed using measurements from the Advanced Microwave Sounding Unit-A/Microwave Humidity Sounder sensor pair and the Advanced Technology Microwave Sounder. Currently, these instruments are aboard five polar-orbiting satellites, namely, NOAA-18, NOAA-19, Metop-A, Metop-B, and Suomi-NPP. The SFR algorithm relies on a separate snowfall detection algorithm that is composed of a satellite-based statistical model and a set of numerical weather prediction model-based filters. There are four components in the SFR algorithm itself: cloud properties retrieval, computation of ice particle terminal velocity, ice water content adjustment, and the determination of snowfall rate. The retrieval of cloud properties is the foundation of the algorithm and is accomplished using a one-dimensional variational (1DVAR) model. An existing model is adopted to derive ice particle terminal velocity. Since no measurement of cloud ice distribution is available when SFR is retrieved in near real time, such distribution is implicitly assumed by deriving an empirical function that adjusts retrieved SFR toward radar snowfall estimates. Finally, SFR is determined numerically from a complex integral. The algorithm has been validated against both radar and ground observations of snowfall events from the contiguous United States with satisfactory results. Currently, the SFR product is operationally generated at the National Oceanic and Atmospheric Administration and can be obtained from that organization.
Directory of Open Access Journals (Sweden)
Riané de Bruyn
2013-03-01
Full Text Available Evidence in favor of the monetary model of exchange rate determination for the South African Rand is, at best, mixed. A co-integrating relationship between the nominal exchange rate and monetary fundamentals forms the basis of the monetary model. With the econometric literature suggesting that the span of the data, not the frequency, determines the power of the co-integration tests and the studies on South Africa primarily using short-span data from the post-Bretton Woods era, we decided to test the long-run monetary model of exchange rate determination for the South African Rand relative to the US Dollar using annual data from 1910 – 2010. The results provide some support for the monetary model in that long-run co-integration is found between the nominal exchange rate and the output and money supply deviations. However, the theoretical restrictions required by the monetary model are rejected. A vector error-correction model identifies both the nominal exchange rate and the monetary fundamentals as the channel for the adjustment process of deviations from the long-run equilibrium exchange rate. A subsequent comparison of nominal exchange rate forecasts based on the monetary model with those of the random walk model suggests that the forecasting performance of the monetary model is superior.
International Nuclear Information System (INIS)
Valdés, José R.; Rodríguez, José M.; Saumell, Javier; Pütz, Thomas
2014-01-01
Highlights:
• We develop a methodology for the parametric modelling of flow in hydraulic valves.
• We characterize the flow coefficients with a generic function with two parameters.
• The parameters are derived from CFD simulations of the generic geometry.
• We apply the methodology to two cases from the automotive brake industry.
• We validate by comparing with CFD results varying the original dimensions.
Abstract: The main objective of this work is to develop a methodology for the parametric modelling of the flow rate in hydraulic valve systems. This methodology is based on the derivation, from CFD simulations, of the flow coefficient of the critical restrictions as a function of the Reynolds number, using a generalized square root function with two parameters. The methodology is then demonstrated by applying it to two completely different hydraulic systems: a brake master cylinder and an ABS valve. This type of parametric valve model facilitates implementation in dynamic simulation models of complex hydraulic systems.
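The abstract does not give the exact two-parameter square-root law, so the sketch below uses one plausible form, Cq(Re) = Cq∞·sqrt(Re/(Re + Re_t)), which saturates at Cq∞ for large Reynolds numbers, combined with the standard orifice equation. All numerical values are illustrative, not taken from the paper.

```python
import math

def flow_coefficient(re, cq_inf, re_t):
    """Hypothetical two-parameter square-root law for the flow coefficient:
    tends to cq_inf (turbulent limit) for large Re, decays like sqrt(Re)
    in the laminar limit. The paper's exact form may differ."""
    return cq_inf * math.sqrt(re / (re + re_t))

def orifice_flow(dp, area, rho, cq):
    """Standard orifice equation Q = Cq * A * sqrt(2*dp/rho) [m^3/s]."""
    return cq * area * math.sqrt(2.0 * dp / rho)

# illustrative brake-circuit-like numbers
cq = flow_coefficient(re=5000.0, cq_inf=0.7, re_t=100.0)
q = orifice_flow(dp=2.0e5, area=1.0e-5, rho=850.0, cq=cq)
print(round(cq, 3), q)
```

Fitting cq_inf and re_t to a handful of CFD runs is what makes such a model cheap to embed in a full hydraulic-system simulation.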
High Data Rate Optical Wireless Communications Based on Ultraviolet Band
Sun, Xiaobin
2017-01-01
Optical wireless communication systems based on the ultraviolet (UV) band have many inherent advantages, such as low background solar radiation and low device dark noise. They also impose relatively relaxed requirements for PAT (pointing, acquisition
Predictive Finite Rate Model for Oxygen-Carbon Interactions at High Temperature
Poovathingal, Savio
An oxidation model for carbon surfaces is developed to predict ablation rates for carbon heat shields used in hypersonic vehicles. Unlike existing empirical models, the approach used here was to probe gas-surface interactions individually and then based on an understanding of the relevant fundamental processes, build a predictive model that would be accurate over a wide range of pressures and temperatures, and even microstructures. Initially, molecular dynamics was used to understand the oxidation processes on the surface. The molecular dynamics simulations were compared to molecular beam experiments and good qualitative agreement was observed. The simulations reproduced cylindrical pitting observed in the experiments where oxidation was rapid and primarily occurred around a defect. However, the studies were limited to small systems at low temperatures and could simulate time scales only of the order of nanoseconds. Molecular beam experiments at high surface temperature indicated that a majority of surface reaction products were produced through thermal mechanisms. Since the reactions were thermal, they occurred over long time scales which were computationally prohibitive for molecular dynamics to simulate. The experiments provided detailed dynamical data on the scattering of O, O2, CO, and CO2 and it was found that the data from molecular beam experiments could be used directly to build a model. The data was initially used to deduce surface reaction probabilities at 800 K. The reaction probabilities were then incorporated into the direct simulation Monte Carlo (DSMC) method. Simulations were performed where the microstructure was resolved and dissociated oxygen convected and diffused towards it. For a gas-surface temperature of 800 K, it was found that despite CO being the dominant surface reaction product, a gas-phase reaction forms significant CO2 within the microstructure region. It was also found that surface area did not play any role in concentration of
Use of Physiologically Based Pharmacokinetic (PBPK) Models ...
EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. This report describes and demonstrates techniques necessary to extrapolate and incorporate in vitro derived metabolic rate constants in PBPK models. It also includes two case study examples designed to demonstrate the applicability of such data for health risk assessment and addresses the quantification, extrapolation and interpretation of advanced biochemical information on human interindividual variability of chemical metabolism for risk assessment application. It comprises five chapters; topics and results covered in the first four chapters have been published in the peer-reviewed scientific literature. Topics covered include: data quality objectives; the experimental framework; required data; and two example case studies that develop and incorporate in vitro metabolic rate constants in PBPK models designed to quantify human interindividual variability to better direct the choice of uncertainty factors for health risk assessment. This report is intended to serve as a reference document for risk assessors to use when quantifying, extrapolating, and interpreting advanced biochemical information about human interindividual variability of chemical metabolism.
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift with economic prosperity, rather than remaining fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further focuses on macroeconomic factors and applies the rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared with the test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than that without them. This suggests that accuracy is not improved for one-stage models which pool the firm-specific and macroeconomic factors together. Regarding the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward based on that negative credit cycle index. When the two-stage models employ this adjusted cut-off point to discriminate the distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper have incremental usefulness in predicting financial distress.
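The first-stage machinery described above is a discrete-time hazard model: each firm-period contributes a conditional probability of entering distress, typically with a logistic link. A minimal sketch, with entirely hypothetical covariates and coefficients (not the paper's estimates):

```python
import math

def discrete_hazard(x, beta):
    """Per-period probability of distress given survival so far,
    using a logistic link on the covariate vector x."""
    z = sum(b * v for b, v in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

def cumulative_distress_prob(covariate_path, beta):
    """One minus the product of per-period survival probabilities."""
    surv = 1.0
    for x in covariate_path:
        surv *= 1.0 - discrete_hazard(x, beta)
    return 1.0 - surv

# hypothetical firm-quarters: [intercept, leverage ratio, excess market return]
beta = [-4.0, 3.0, -1.5]
path = [[1.0, 0.4, 0.05], [1.0, 0.6, -0.10], [1.0, 0.8, -0.20]]
print(round(cumulative_distress_prob(path, beta), 3))
```

The paper's second stage would then compare this probability against a cut-off that moves with a credit cycle index rather than against a fixed constant.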
On Optimizing H. 264/AVC Rate Control by Improving R-D Model and Incorporating HVS Characteristics
Directory of Open Access Journals (Sweden)
Jiang Gangyi
2010-01-01
Full Text Available The state-of-the-art JVT-G012 rate control algorithm of H.264 is improved from two aspects. First, the quadratic rate-distortion (R-D) model is modified based on both empirical observations and theoretical analysis. Second, based on existing physiological and psychological research findings on human vision, the rate control algorithm is optimized by incorporating the main characteristics of the human visual system (HVS) such as contrast sensitivity, multichannel theory, and masking effect. Experiments are conducted, and experimental results show that the improved algorithm can simultaneously enhance the overall subjective visual quality and improve the rate control precision effectively.
International Nuclear Information System (INIS)
Zhu Bangfen.
1985-10-01
A numerical calculation of the non-radiative multiphonon transition probability based on the adiabatic approximation (AA) and the static approximation (SA) has been carried out in a model of two electronic levels coupled to one phonon mode. The numerical results indicate that the spectra based on the different approximations are generally different, apart from those vibrational levels far below the classical crossing point. For a large electron-phonon coupling constant, the calculated transition rates based on the AA are more reliable; on the other hand, for small coupling, the transition rates near or beyond the crossing region differ considerably between the two approximations. In addition to the diagonal non-adiabatic potential, the mixing and splitting of the original static potential sheets are responsible for the deviation of the transition rates based on the different approximations. The relationship between the transition matrix element and the vibrational level shift, the Huang-Rhys factor, the separation of the electronic levels and the electron-phonon coupling is analysed and discussed. (author)
A Latent-Variable Causal Model of Faculty Reputational Ratings.
King, Suzanne; Wolfle, Lee M.
A reanalysis was conducted of Saunier's research (1985) on sources of variation in the National Research Council (NRC) reputational ratings of university faculty. Saunier conducted a stepwise regression analysis using 12 predictor variables. Due to problems with multicollinearity and because of the atheoretical nature of stepwise regression,…
A model for turbulent dissipation rate in a constant pressure ...
Indian Academy of Sciences (India)
J Dey
the logarithmic region. However, measurement of the Taylor microscale remains a difficult task, as it involves the correlation function [1]. Consequently, an appreciation of the Taylor microscale, dissipation rate, etc., is lacking in practice, due to the complexity involved in estimating these quantities. Segalini et al [2] have proposed a ...
Cross sectional efficient estimation of stochastic volatility short rate models
Danilov, Dmitri; Mandal, Pranab K.
2001-01-01
We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, as indicated by de Jong (2000), the EKF in this
Cross sectional efficient estimation of stochastic volatility short rate models
Danilov, Dmitri; Mandal, Pranab K.
2002-01-01
We consider the problem of estimation of term structure of interest rates. Filtering theory approach is very natural here with the underlying setup being non-linear and non-Gaussian. Earlier works make use of Extended Kalman Filter (EKF). However, the EKF in this situation leads to inconsistent
A packet-based dual-rate PID control strategy for a slow-rate sensing Networked Control System.
Cuenca, A; Alcaina, J; Salt, J; Casanova, V; Pizá, R
2018-05-01
This paper introduces a packet-based dual-rate control strategy to face time-varying network-induced delays, packet dropouts and packet disorder in a Networked Control System. Slow-rate sensing enables to achieve energy saving and to avoid packet disorder. Fast-rate actuation makes reaching the desired control performance possible. The dual-rate PID controller is split into two parts: a slow-rate PI controller located at the remote side (with no permanent communication to the plant) and a fast-rate PD controller located at the local side. The remote side also includes a prediction stage in order to generate the packet of future, estimated slow-rate control actions. These actions are sent to the local side and converted to fast-rate ones to be used when a packet does not arrive at this side due to the network-induced delay or due to occurring dropouts. The proposed control solution is able to approximately reach the nominal (no-delay, no-dropout) performance despite the existence of time-varying delays and packet dropouts. Control system stability is ensured in terms of probabilistic Linear Matrix Inequalities (LMIs). Via real-time control for a Cartesian robot, results clearly reveal the superiority of the control solution compared to a previous proposal by authors. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
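The split PID structure described above can be sketched as a slow-rate PI and a fast-rate PD acting on the held PI output, here closed around a toy first-order plant. The gains, the N = 5 rate multiplicity, the zero-order hold between slow samples, and the plant are all illustrative assumptions; the paper's prediction stage and dropout handling are omitted.

```python
class SlowPI:
    """Remote-side PI controller, updated at the slow sensing rate."""
    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.integ = 0.0

    def update(self, error):
        self.integ += self.ki * self.ts * error
        return self.kp * error + self.integ


class FastPD:
    """Local-side PD controller, updated at the fast actuation rate."""
    def __init__(self, kd, ts):
        self.kd, self.ts = kd, ts
        self.prev = 0.0

    def update(self, u_pi):
        d = (u_pi - self.prev) / self.ts
        self.prev = u_pi
        return u_pi + self.kd * d


# First-order plant x' = -x + u, sensed only every N fast steps.
N, ts_fast = 5, 0.01
pi_ctrl = SlowPI(kp=2.0, ki=1.0, ts=N * ts_fast)
pd_ctrl = FastPD(kd=0.02, ts=ts_fast)
x, ref, u_pi = 0.0, 1.0, 0.0
for k in range(2000):
    if k % N == 0:               # slow-rate sensing: PI acts on sampled error
        u_pi = pi_ctrl.update(ref - x)
    u = pd_ctrl.update(u_pi)     # fast-rate PD action on the held PI output
    x += ts_fast * (-x + u)      # Euler step of the plant
print(round(x, 3))
```

In the paper's scheme the remote side would additionally send a packet of predicted future PI actions, so the local side can keep actuating at the fast rate when slow-rate packets are delayed or dropped.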
International Nuclear Information System (INIS)
Murray, I.; Mather, S.J.
2015-01-01
Full text of publication follows. The aim of this work was to test the hypothesis that the Linear-Quadratic (LQ) model of cell survival, developed for external beam radiotherapy (EBRT), could be extended to targeted radionuclide therapy (TRT) in order to predict dose-response relationships in a cell line exhibiting low dose hypersensitivity (LDH). Methods: aliquots of the PC-3 cancer cell line were treated with either EBRT or an in-vitro model of TRT (irradiation of cell culture with Y-90 EDTA over 24, 48, 72 or 96 hours). Dosimetry for the TRT was calculated using radiation transport simulations with the Monte Carlo PENELOPE code. Clonogenic as well as functional biological assays were used to assess cell response. An extension of the LQ model was developed which incorporated a dose-rate threshold for activation of repair mechanisms. Results: accurate dosimetry for in-vitro exposures of cell cultures to radioactivity was established. LQ parameters of cell survival were established for the PC-3 cell line in response to EBRT. The standard LQ model did not predict survival in PC-3 cells exposed to Y-90 irradiation over periods of up to 96 hours. In fact, cells were more sensitive to the same dose when irradiation was carried out over 96 hours than over 24 hours, i.e. at a lower dose rate. Deviations from the LQ predictions were most pronounced below a threshold dose rate of 0.5 Gy/hr. These results led to an extension of the LQ model based upon a dose-rate dependent sigmoid model of single strand DNA repair. This extension to the model resulted in predicted cell survival curves that closely matched the experimental data. Conclusion: the LQ model of cell survival to radiation has been shown to be largely predictive of response to low dose-rate irradiation. However, in cells displaying LDH, further adaptation of the model was required. (authors)
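The extension described in the results can be sketched as a sigmoid repair term gating the quadratic (repairable) damage component of the LQ model. The 0.5 Gy/h threshold is taken from the abstract; the sigmoid steepness, the radiosensitivity parameters, and the exact way repair enters the survival formula are illustrative assumptions, not the authors' fitted model.

```python
import math

def lq_survival(dose, alpha, beta):
    """Standard linear-quadratic (LQ) cell-survival model."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

def repair_activation(dose_rate, threshold=0.5, steepness=10.0):
    """Sigmoid activation of single-strand repair above a dose-rate
    threshold (Gy/h); steepness is an assumed illustrative value."""
    return 1.0 / (1.0 + math.exp(-steepness * (dose_rate - threshold)))

def extended_lq_survival(dose, dose_rate, alpha, beta):
    """Active repair scales down the repairable quadratic damage term,
    so low dose rates (little repair) give lower survival: LDH."""
    repair = repair_activation(dose_rate)
    return math.exp(-(alpha * dose + (1.0 - repair) * beta * dose ** 2))

dose = 4.0  # Gy
low_rate = extended_lq_survival(dose, dose_rate=0.2, alpha=0.2, beta=0.05)
high_rate = extended_lq_survival(dose, dose_rate=1.0, alpha=0.2, beta=0.05)
print(low_rate < high_rate)  # True: hypersensitivity at the lower dose rate
```

This reproduces the qualitative finding that the same absorbed dose delivered over 96 hours (lower dose rate) kills more cells than the same dose over 24 hours.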
Differential Geometry Based Multiscale Models
Wei, Guo-Wei
2010-01-01
Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier–Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson–Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson–Nernst–Planck equations that
Differential geometry based multiscale models.
Wei, Guo-Wei
2010-08-01
Large chemical and biological systems such as fuel cells, ion channels, molecular motors, and viruses are of great importance to the scientific community and public health. Typically, these complex systems in conjunction with their aquatic environment pose a fabulous challenge to theoretical description, simulation, and prediction. In this work, we propose a differential geometry based multiscale paradigm to model complex macromolecular systems, and to put macroscopic and microscopic descriptions on an equal footing. In our approach, the differential geometry theory of surfaces and geometric measure theory are employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomistic description of the macromolecule. Multiscale free energy functionals, or multiscale action functionals are constructed as a unified framework to derive the governing equations for the dynamics of different scales and different descriptions. Two types of aqueous macromolecular complexes, ones that are near equilibrium and others that are far from equilibrium, are considered in our formulations. We show that generalized Navier-Stokes equations for the fluid dynamics, generalized Poisson equations or generalized Poisson-Boltzmann equations for electrostatic interactions, and Newton's equation for the molecular dynamics can be derived by the least action principle. These equations are coupled through the continuum-discrete interface whose dynamics is governed by potential driven geometric flows. Comparison is given to classical descriptions of the fluid and electrostatic interactions without geometric flow based micro-macro interfaces. The detailed balance of forces is emphasized in the present work. We further extend the proposed multiscale paradigm to micro-macro analysis of electrohydrodynamics, electrophoresis, fuel cells, and ion channels. We derive generalized Poisson-Nernst-Planck equations that are
Kamminga, Tjerko; Slagman, Simen-Jan; Bijlsma, Jetta J E; Martins Dos Santos, Vitor A P; Suarez-Diez, Maria; Schaap, Peter J
2017-10-01
Mycoplasma hyopneumoniae is cultured on large-scale to produce antigen for inactivated whole-cell vaccines against respiratory disease in pigs. However, the fastidious nutrient requirements of this minimal bacterium and the low growth rate make it challenging to reach sufficient biomass yield for antigen production. In this study, we sequenced the genome of M. hyopneumoniae strain 11 and constructed a high quality constraint-based genome-scale metabolic model of 284 chemical reactions and 298 metabolites. We validated the model with time-series data of duplicate fermentation cultures to aim for an integrated model describing the dynamic profiles measured in fermentations. The model predicted that 84% of cellular energy in a standard M. hyopneumoniae cultivation was used for non-growth associated maintenance and only 16% of cellular energy was used for growth and growth associated maintenance. Following a cycle of model-driven experimentation in dedicated fermentation experiments, we were able to increase the fraction of cellular energy used for growth through pyruvate addition to the medium. This increase in turn led to an increase in growth rate and a 2.3 times increase in the total biomass concentration reached after 3-4 days of fermentation, enhancing the productivity of the overall process. The model presented provides a solid basis to understand and further improve M. hyopneumoniae fermentation processes. Biotechnol. Bioeng. 2017;114: 2339-2347. © 2017 Wiley Periodicals, Inc.
Forecasting the mortality rates of Malaysian population using Heligman-Pollard model
Ibrahim, Rose Irnawaty; Mohd, Razak; Ngataman, Nuraini; Abrisam, Wan Nur Azifah Wan Mohd
2017-08-01
Actuaries, demographers and other professionals have always been aware of the critical importance of mortality forecasting, given the declining trend of mortality and continuous increases in life expectancy. The Heligman-Pollard model was introduced in 1980 and has been widely used by researchers in modelling and forecasting future mortality. This paper aims to estimate an eight-parameter model based on Heligman and Pollard's law of mortality. Since the model involves nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 7.0 (MATLAB 7.0) software will be used to estimate the parameters. The Statistical Package for the Social Sciences (SPSS) will be applied to forecast all the parameters with an Autoregressive Integrated Moving Average (ARIMA) model. The empirical data sets of the Malaysian population for the period 1981 to 2015 for both genders will be considered, with the period 1981 to 2010 used as the "training set" and the period 2011 to 2015 as the "testing set". In order to investigate the accuracy of the estimation, the forecast results will be compared against actual mortality rates. The results show that the Heligman-Pollard model fits the male population well at all ages, while it seems to underestimate the mortality rates of the female population at older ages.
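The eight-parameter law referred to above expresses the odds of death at age x as the sum of a childhood term, an accident hump, and a senescence term: q_x/(1-q_x) = A^((x+B)^C) + D·exp(-E·(ln x - ln F)²) + G·H^x. A direct transcription, with parameter values of plausible magnitude chosen purely for illustration (not fitted to Malaysian data):

```python
import math

def heligman_pollard_odds(x, A, B, C, D, E, F, G, H):
    """Heligman-Pollard law for x > 0:
    q_x/(1-q_x) = A^((x+B)^C)                    (childhood mortality)
                + D*exp(-E*(ln x - ln F)^2)      (accident hump, centred at F)
                + G*H^x                          (senescent/Gompertz term)."""
    child = A ** ((x + B) ** C)
    hump = D * math.exp(-E * (math.log(x) - math.log(F)) ** 2)
    senesc = G * H ** x
    return child + hump + senesc

def mortality_rate(x, params):
    """Convert odds back to the probability of death q_x."""
    odds = heligman_pollard_odds(x, *params)
    return odds / (1.0 + odds)

# illustrative parameters (A, B, C, D, E, F, G, H), not fitted values
params = (0.0005, 0.01, 0.10, 0.0008, 10.0, 20.0, 0.00003, 1.10)
q20, q70 = mortality_rate(20, params), mortality_rate(70, params)
print(q20 < q70)  # True: mortality rises into old age
```

Forecasting then amounts to fitting these eight parameters year by year and projecting each parameter series forward with ARIMA, as the paper does.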
Jesús Crespo Cuaresma; Anna Orthofer
2010-01-01
Reliable medium-term forecasts are essential for forward-looking monetary policy decisionmaking. Traditionally, predictions of the exchange rate tend to be linked to the equilibrium concept implied by the purchasing power parity (PPP) theory. In particular, the traditional benchmark for exchange rate models is based on a linear adjustment of the exchange rate to the level implied by PPP. In the presence of aggregation effects, transaction costs or uncertainty, however, economic theory predict...
Simplification of an MCNP model designed for dose rate estimation
Laptev, Alexander; Perry, Robert
2017-09-01
A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
Simplification of an MCNP model designed for dose rate estimation
Directory of Open Access Journals (Sweden)
Laptev Alexander
2017-01-01
Full Text Available A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
Basarab, B.; Fuchs, B.; Rutledge, S. A.
2013-12-01
Predicting lightning activity in thunderstorms is important in order to accurately quantify the production of nitrogen oxides (NOx = NO + NO2) by lightning (LNOx). Lightning is an important global source of NOx, and since NOx is a chemical precursor to ozone, the climatological impacts of LNOx could be significant. Many cloud-resolving models rely on parameterizations to predict lightning and LNOx since the processes leading to charge separation and lightning discharge are not yet fully understood. This study evaluates predicted flash rates based on existing lightning parameterizations against flash rates observed for Colorado storms during the Deep Convective Clouds and Chemistry Experiment (DC3). Evaluating lightning parameterizations against storm observations is a useful way to possibly improve the prediction of flash rates and LNOx in models. Additionally, since convective storms that form in the eastern plains of Colorado can be different thermodynamically and electrically from storms in other regions, it is useful to test existing parameterizations against observations from these storms. We present an analysis of the dynamics, microphysics, and lightning characteristics of two case studies, severe storms that developed on 6 and 7 June 2012. This analysis includes dual-Doppler derived horizontal and vertical velocities, a hydrometeor identification based on polarimetric radar variables using the CSU-CHILL radar, and insight into the charge structure using observations from the northern Colorado Lightning Mapping Array (LMA). Flash rates were inferred from the LMA data using a flash counting algorithm. We have calculated various microphysical and dynamical parameters for these storms that have been used in empirical flash rate parameterizations. In particular, maximum vertical velocity has been used to predict flash rates in some cloud-resolving chemistry simulations. We diagnose flash rates for the 6 and 7 June storms using this parameterization and compare
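Updraft-based flash-rate parameterizations of the kind evaluated above typically take a power-law form in the maximum vertical velocity. The sketch below uses the commonly cited continental Price-and-Rind-type fit F = 5×10⁻⁶·w_max^4.54 (flashes per minute, w_max in m/s); treat the coefficients as illustrative rather than as the values used in this study.

```python
def flash_rate_from_wmax(w_max):
    """Empirical flash-rate parameterization of the Price-and-Rind type:
    total flash rate (flashes/min) as a power law in the storm's maximum
    updraft speed (m/s). Coefficients are the commonly cited continental
    values, assumed here for illustration."""
    return 5e-6 * w_max ** 4.54

for w in (10.0, 20.0, 30.0):
    print(w, round(flash_rate_from_wmax(w), 2))
```

The steep exponent is why modest errors in simulated updraft speed translate into large errors in predicted flash rate, and hence in LNOx.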
Effects of sample size on estimates of population growth rates calculated with matrix models.
Directory of Open Access Journals (Sweden)
Ian J Fiske
Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high
Effects of sample size on estimates of population growth rates calculated with matrix models.
Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M
2008-08-28
Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda: Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated whether sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more realistic inverse J-shaped population structure exacerbated this bias. However, our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
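The Jensen's-inequality mechanism described in this abstract can be illustrated with a toy simulation. The two-stage projection matrix, vital rates, and sample sizes below are invented for illustration and are not taken from the study; the point is only that unbiased survival estimates still yield a biased lambda at small sample sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-stage model: juvenile survival s1, adult survival s2, fecundity f.
s1_true, s2_true, f_true = 0.5, 0.5, 1.2

def lam(s1, s2, f):
    """Dominant eigenvalue (lambda) of the 2x2 projection matrix."""
    A = np.array([[0.0, f], [s1, s2]])
    return max(abs(np.linalg.eigvals(A)))

lam_true = lam(s1_true, s2_true, f_true)

def mean_lambda(n, trials=20000):
    """Mean estimated lambda when each survival rate is estimated from n individuals."""
    est = [lam(rng.binomial(n, s1_true) / n,   # unbiased estimate of s1
               rng.binomial(n, s2_true) / n,   # unbiased estimate of s2
               f_true)
           for _ in range(trials)]
    return float(np.mean(est))

# Even with unbiased vital-rate estimates, lambda is biased at small n
# (Jensen's inequality acting on the nonlinear eigenvalue map), and the
# bias shrinks as the sample size grows.
bias_small = mean_lambda(10) - lam_true
bias_large = mean_lambda(500) - lam_true
```

Because lambda is a nonlinear function of the vital rates, the sampling variance at n = 10 induces a systematic offset that largely disappears by n = 500.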
Does childhood cancer affect parental divorce rates? A population-based study.
Syse, Astri; Loge, Jon H; Lyngstad, Torkild H
2010-02-10
PURPOSE Cancer in children may profoundly affect parents' personal relationships in terms of psychological stress and an increased care burden. This could hypothetically elevate divorce rates. Few studies on divorce occurrence exist, so the effect of childhood cancers on parental divorce rates was explored. PATIENTS AND METHODS Data on the entire Norwegian married population, age 17 to 69 years, with children age 0 to 20 years in 1974 to 2001 (N = 977,928 couples) were retrieved from the Cancer Registry, the Central Population Register, the Directorate of Taxes, and population censuses. Divorce rates for 4,590 couples who were parenting a child with cancer were compared with those of otherwise similar couples by discrete-time hazard regression models. RESULTS Cancer in a child was not associated with an increased risk of parental divorce overall. An increased divorce rate was observed with Wilms tumor (odds ratio [OR], 1.52) but not with any of the other common childhood cancers. The child's age at diagnosis, time elapsed from diagnosis, and death from cancer did not influence divorce rates significantly. Increased divorce rates were observed for couples in whom the mothers had an education greater than high school level (OR, 1.16); the risk was particularly high shortly after diagnosis, for CNS cancers and Wilms tumors, for couples with children 0 to 9 years of age at diagnosis, and after a child's death. CONCLUSION This large, registry-based study shows that cancer in children is not associated with an increased parental divorce rate, except with Wilms tumors. Couples in whom the wife is highly educated appear to face increased divorce rates after a child's cancer, and this may warrant additional study.
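The discrete-time hazard approach used in this study can be sketched with simulated person-period data. The cohort size, yearly hazards, and "exposure" effect below are invented for illustration and are unrelated to the registry results; the sketch only shows the estimator's structure (events per couple-year at risk, compared between groups).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated cohort: each couple is followed year by year until divorce or
# censoring -- the person-period structure behind discrete-time hazard models.
n_couples = 20_000
exposed = rng.random(n_couples) < 0.2          # some hypothetical exposure
hazard = np.where(exposed, 0.03, 0.02)         # yearly divorce hazard per couple
follow_up = 10                                 # years of observation

# Year of divorce is geometric in the yearly hazard; censor at follow_up.
t_divorce = rng.geometric(hazard)
event = t_divorce <= follow_up
years_at_risk = np.minimum(t_divorce, follow_up)

def rate(mask):
    """Events per couple-year at risk within a subgroup."""
    return event[mask].sum() / years_at_risk[mask].sum()

# Discrete-time hazard ratio for the exposure (true value ~1.5 by construction).
hazard_ratio = rate(exposed) / rate(~exposed)
```

In the actual study this comparison is done by regression, which additionally adjusts for covariates such as parental education and the child's age at diagnosis.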
A High Performance Impedance-based Platform for Evaporation Rate Detection.
Chou, Wei-Lung; Lee, Pee-Yew; Chen, Cheng-You; Lin, Yu-Hsin; Lin, Yung-Sheng
2016-10-17
This paper describes a novel impedance-based platform for detecting the evaporation rate. The model compound hyaluronic acid was employed here for demonstration purposes. Multiple evaporation tests on the model compound as a humectant at various concentrations in solution were conducted for comparison purposes. The conventional weight-loss approach is the most straightforward, but time-consuming, measurement technique for evaporation rate detection. A clear disadvantage is that a large volume of sample is required and multiple sample tests cannot be conducted at the same time. For the first time in the literature, an electrical impedance sensing chip is successfully applied to real-time evaporation investigation in a time-sharing, continuous and automatic manner. Moreover, as little as 0.5 ml of test sample is required in this impedance-based apparatus, and a large impedance variation is demonstrated among various dilute solutions. The proposed high-sensitivity and fast-response impedance sensing system is found to outperform the conventional weight-loss approach in terms of evaporation rate detection.
Statistically Based Morphodynamic Modeling of Tracer Slowdown
Borhani, S.; Ghasemi, A.; Hill, K. M.; Viparelli, E.
2017-12-01
Tracer particles are used to study bedload transport in gravel-bed rivers. One of the advantages of using tracer particles is that they allow direct measurement of entrainment rates and their size distributions. The main issue in large-scale studies with tracer particles is the difference between the short-term and long-term behavior of tracer stones. This difference arises because particles undergo vertical mixing or move to less active locations such as bars or even floodplains. For these reasons the average virtual velocity of tracer particles decreases in time, i.e. the tracers slow down. Tracer slowdown can therefore have a significant impact on the estimation of bedload transport rates or of the long-term dispersal of contaminated sediment. The vast majority of morphodynamic models that account for the non-uniformity of the bed material (tracer and non-tracer, in this case) are based on a discrete description of the alluvial deposit. The deposit is divided into two regions: the active layer and the substrate. The active layer is a thin layer in the topmost part of the deposit whose particles can interact with the bed material transport. The substrate is the part of the deposit below the active layer. Due to this discrete representation of the alluvial deposit, active layer models are not able to reproduce tracer slowdown. In this study we model the slowdown of tracer particles with the continuous Parker-Paola-Leclair morphodynamic framework. This continuous, i.e. not layer-based, framework rests on a stochastic description of the temporal variation of bed surface elevation, and of the elevation-specific particle entrainment and deposition. Particle entrainment rates are computed as a function of the flow and sediment characteristics, while particle deposition is estimated with a step length formulation. Here we present one of the first implementations of the continuum framework at laboratory scale, and its validation against
International Nuclear Information System (INIS)
Guilani, Pedram Pourkarim; Azimi, Parham; Niaki, S.T.A.; Niaki, Seyed Armin Akhavan
2016-01-01
The redundancy allocation problem (RAP) is a useful method to enhance system reliability. In most works involving RAP, failure rates of the system components are assumed to follow either exponential or k-Erlang distributions. In real-world problems, however, many systems have components with increasing failure rates. This means that as time passes, the failure rates of the system components increase relative to their initial failure rates. In this paper, the redundancy allocation problem of a series–parallel system with components having an increasing failure rate based on the Weibull distribution is investigated. An optimization method via simulation is proposed for modeling and a genetic algorithm is developed to solve the problem. - Highlights: • The redundancy allocation problem of a series–parallel system is addressed. • Components possess an increasing failure rate based on the Weibull distribution. • An optimization method via simulation is proposed for modeling. • A genetic algorithm is developed to solve the problem.
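The simulation side of this setup can be sketched in a few lines: a series–parallel system whose components have Weibull lifetimes with shape parameter greater than 1 (i.e. increasing failure rate), with system reliability estimated by Monte Carlo. The layout and Weibull parameters below are invented for illustration; a genetic algorithm, as in the paper, would search over such layouts using this kind of evaluation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical series-parallel layout: each inner list is one parallel
# subsystem, giving (shape k, scale) Weibull parameters for its redundant
# components.  Shape k > 1 means an increasing failure rate.
system = [
    [(1.5, 120.0), (1.5, 120.0)],                 # subsystem 1: two units
    [(2.0, 100.0), (2.0, 100.0), (2.0, 100.0)],   # subsystem 2: three units
]

def simulate_reliability(system, t, n_sim=100_000):
    """Monte Carlo estimate of P(system survives past t).

    A parallel subsystem fails when ALL its components fail (max lifetime);
    the series system fails when ANY subsystem fails (min over subsystems)."""
    subsystem_life = []
    for comps in system:
        lives = np.column_stack([
            scale * rng.weibull(k, n_sim) for (k, scale) in comps
        ])
        subsystem_life.append(lives.max(axis=1))   # parallel: last survivor
    system_life = np.min(np.column_stack(subsystem_life), axis=1)  # series
    return float(np.mean(system_life > t))

r50 = simulate_reliability(system, t=50.0)
r100 = simulate_reliability(system, t=100.0)
```

Reliability decreases with mission time, and adding redundant units to a subsystem raises its survival probability, which is the trade-off the RAP optimizes.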
Directory of Open Access Journals (Sweden)
Dario Cuevas Rivera
2015-10-01
The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is still unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an 'intelligent coincidence detector', which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena.
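The Lotka-Volterra building block behind such sequential firing rate patterns can be sketched directly. The connection matrix, growth rates, and initial state below are illustrative, not from the paper: with asymmetric inhibition the units take turns dominating the activity (so-called winnerless competition), which is the sequence-like dynamic the abstract refers to.

```python
import numpy as np

# Generalized Lotka-Volterra rate dynamics with asymmetric inhibition:
# each unit suppresses its successor more strongly than its predecessor,
# which produces cyclic, sequence-like switching of dominance.
N = 3
rho = np.ones(N)                      # intrinsic growth rates (illustrative)
W = np.array([[1.0, 1.7, 0.5],
              [0.5, 1.0, 1.7],
              [1.7, 0.5, 1.0]])       # asymmetric competition matrix

def simulate(x0, dt=0.01, steps=20000):
    """Euler integration of dx_i/dt = x_i * (rho_i - sum_j W_ij x_j)."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, N))
    for t in range(steps):
        x = x + dt * x * (rho - W @ x)
        traj[t] = x
    return traj

traj = simulate([0.5, 0.3, 0.2])
winners = traj.argmax(axis=1)         # which unit dominates at each time step
```

Inspecting `winners` shows each of the three units dominating in turn, a simple deterministic analogue of a sequential firing rate pattern.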
Observation-Based Modeling for Model-Based Testing
Kanstrén, T.; Piel, E.; Gross, H.G.
2009-01-01
One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of building the models to the level of detail and quality required for their automated processing. Models unleash their full potential only through
Directory of Open Access Journals (Sweden)
O. Möhler
2006-01-01
Activation energies ΔG_act for the nucleation of nitric acid dihydrate (NAD) in supercooled binary HNO3/H2O solution droplets were calculated from volume-based nucleation rate measurements using the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) aerosol chamber of Forschungszentrum Karlsruhe. The experimental conditions covered temperatures T between 192 and 197 K, NAD saturation ratios S_NAD between 7 and 10, and nitric acid molar fractions of the nucleating sub-micron sized droplets between 0.26 and 0.28. Based on classical nucleation theory, a new parameterisation ΔG_act = A×(T ln S_NAD)^−2 + B is fitted to the experimental data, with A = 2.5×10^6 kcal K^2 mol^−1 and B = 11.2 − 0.1(T − 192) kcal mol^−1. A and B were chosen to also achieve good agreement with literature data of ΔG_act. The parameter A implies, for the temperature and composition range of our analysis, a mean interface tension σ_sl = 51 cal mol^−1 cm^−2 between the growing NAD germ and the supercooled solution. A slight temperature dependence of the diffusion activation energy is represented by the parameter B. Investigations with a detailed microphysical process model showed that literature formulations of volume-based (Salcedo et al., 2001) and surface-based (Tabazadeh et al., 2002) nucleation rates significantly overestimate NAD formation rates when applied to the conditions of our experiments.
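The fitted parameterisation can be evaluated directly. The sketch below simply implements the quoted formula (result in kcal/mol); the sample inputs are chosen from inside the reported experimental range (T = 192-197 K, S_NAD = 7-10) and are not specific data points from the paper.

```python
import math

# Parameterisation from the abstract:
#   dG_act = A * (T * ln(S_NAD))**-2 + B
# with A = 2.5e6 kcal K^2 mol^-1 and B = 11.2 - 0.1*(T - 192) kcal mol^-1.
A = 2.5e6  # kcal K^2 / mol

def delta_g_act(T, s_nad):
    """Activation energy (kcal/mol) for NAD nucleation at temperature T (K)
    and NAD saturation ratio s_nad."""
    B = 11.2 - 0.1 * (T - 192.0)
    return A * (T * math.log(s_nad)) ** -2 + B

# Example point inside the experimental range:
g = delta_g_act(194.0, 8.0)
```

As expected from the functional form, the barrier drops as the saturation ratio rises at fixed temperature.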
Rate-based structural health monitoring using permanently installed sensors
Corcoran, Joseph
2017-09-01
Permanently installed sensors are becoming increasingly ubiquitous, facilitating very frequent in situ measurements and consequently improved monitoring of 'trends' in the observed system behaviour. It is proposed that this newly available data may be used to provide prior warning and forecasting of critical events, particularly system failure. Numerous damage mechanisms are examples of positive feedback; they are 'self-accelerating' with an increasing rate of damage towards failure. The positive feedback leads to a common time-response behaviour which may be described by an empirical relation allowing prediction of the time to criticality. This study focuses on Structural Health Monitoring of engineering components; failure times are projected well in advance of failure for fatigue, creep crack growth and volumetric creep damage experiments. The proposed methodology provides a widely applicable framework for using newly available near-continuous data from permanently installed sensors to predict time until failure in a range of application areas including engineering, geophysics and medicine.
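A common empirical relation of the kind the abstract describes is the inverse-rate (Fukuzono-style) forecast: for many self-accelerating damage mechanisms the rate grows roughly as rate(t) = C / (t_f − t), so the inverse rate falls linearly to zero at the failure time t_f. The constants, monitoring window, and noise level below are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic self-accelerating damage rate, observed before failure at t_f.
t_f_true = 100.0
t = np.linspace(10.0, 80.0, 200)                  # monitoring period
rate = 5.0 / (t_f_true - t)                       # positive-feedback growth law
inv_rate = (1.0 / rate) * (1 + 0.01 * rng.standard_normal(t.size))  # noisy sensor

# Linear fit of inverse rate vs time; the x-intercept forecasts failure,
# well before failure actually occurs.
slope, intercept = np.polyfit(t, inv_rate, 1)
t_f_forecast = -intercept / slope
```

The forecast here uses data only up to t = 80 yet recovers the failure time near t = 100, which is the sense in which near-continuous monitoring data enables prior warning.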
Nicholl, Jon; Jacques, Richard M; Campbell, Michael J
2013-10-29
Comparison of outcomes between populations or centres may be confounded by any casemix differences and standardisation is carried out to avoid this. However, when the casemix adjustment models are large and complex, direct standardisation has been described as "practically impossible", and indirect standardisation may lead to unfair comparisons. We propose a new method of directly standardising for risk rather than standardising for casemix which overcomes these problems. Using a casemix model which is the same model as would be used in indirect standardisation, the risk in individuals is estimated. Risk categories are defined, and event rates in each category for each centre to be compared are calculated. A weighted sum of the risk category specific event rates is then calculated. We have illustrated this method using data on 6 million admissions to 146 hospitals in England in 2007/8 and an existing model with over 5000 casemix combinations, and a second dataset of 18,668 adult emergency admissions to 9 centres in the UK and overseas and a published model with over 20,000 casemix combinations and a continuous covariate. Substantial differences between conventional directly casemix standardised rates and rates from direct risk standardisation (DRS) were found. Results based on DRS were very similar to Standardised Mortality Ratios (SMRs) obtained from indirect standardisation, with similar standard errors. Direct risk standardisation using our proposed method is as straightforward as using conventional direct or indirect standardisation, always enables fair comparisons of performance to be made, can use continuous casemix covariates, and was found in our examples to have similar standard errors to the SMR. It should be preferred when there is a risk that conventional direct or indirect standardisation will lead to unfair comparisons.
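The direct risk standardisation (DRS) idea - bin patients by model-predicted risk rather than by casemix category, then take a weighted sum of per-bin event rates for each centre - can be sketched on simulated data. The risk model, centre effect, and quartile binning below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated admissions: a model-predicted risk per admission, two centres,
# and centre 1 performing slightly worse than its predicted risk implies.
n = 50_000
risk = rng.beta(2, 8, size=n)                 # predicted risk of the event
centre = rng.integers(0, 2, size=n)
p_event = np.clip(risk * np.where(centre == 1, 1.2, 1.0), 0, 1)
event = rng.random(n) < p_event

# Risk categories (quartiles of predicted risk) and reference weights.
bins = np.quantile(risk, [0.0, 0.25, 0.5, 0.75, 1.0])
cat = np.digitize(risk, bins[1:-1])           # category index 0..3
weights = np.bincount(cat, minlength=4) / n   # reference distribution over risk

def drs_rate(centre_id):
    """Directly risk-standardised event rate for one centre:
    weighted sum of its risk-category-specific event rates."""
    rates = np.array([
        event[(centre == centre_id) & (cat == k)].mean() for k in range(4)
    ])
    return float(weights @ rates)

r0, r1 = drs_rate(0), drs_rate(1)
```

Because both centres are standardised to the same reference distribution over risk, the comparison of `r0` and `r1` isolates the performance difference rather than casemix.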
Multi-Agent Market Modeling of Foreign Exchange Rates
Zimmermann, Georg; Neuneier, Ralph; Grothmann, Ralph
A market mechanism is basically driven by a superposition of decisions of many agents optimizing their profit. The economic price dynamic is a consequence of the cumulated excess demand/supply created on this micro level. The behavior of a small number of agents is well understood through game theory. In the case of a large number of agents one may use the limiting assumption that an individual agent has no influence on the market, which allows the aggregation of agents by statistical methods. In contrast to this restriction, we can omit the assumption of an atomic market structure if we model the market through a multi-agent approach. The contribution of the mathematical theory of neural networks to market price formation is mostly seen on the econometric side: neural networks allow the fitting of high-dimensional nonlinear dynamic models. Furthermore, in our opinion, there is a close relationship between economics and the modeling ability of neural networks, because a neuron can be interpreted as a simple model of decision making. With this in mind, a neural network models the interaction of many decisions and, hence, can be interpreted as the price formation mechanism of a market.
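The neuron-as-agent idea can be sketched minimally: each agent is a single "neuron" whose buy/sell decision is a squashed function of the last return, and the price moves with the aggregated excess demand. All parameters and the decision rule below are illustrative, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Each agent reacts to the most recent return through its own weight and bias,
# producing a decision in [-1, 1] (sell ... buy) -- a one-neuron agent.
n_agents = 200
w = rng.standard_normal(n_agents)
b = 0.1 * rng.standard_normal(n_agents)

def step(log_price, last_return, eta=0.01):
    """One round of decisions and the resulting price update."""
    demand = np.tanh(w * last_return + b)     # individual agent decisions
    excess = demand.mean()                    # cumulated excess demand/supply
    new_log_price = log_price + eta * excess  # macro-level price formation
    return new_log_price, new_log_price - log_price

log_p, r = 0.0, 0.0
prices = []
for _ in range(500):
    log_p, r = step(log_p, r)
    prices.append(log_p)
```

The aggregation step is where the multi-agent view differs from the atomic-market limit: the price path is driven entirely by the superposition of individual decisions rather than by an exogenous demand curve.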